Introduction

Users of instruments need information about the performance characteristics of instruments and how these characteristics are determined and documented if they are to make reliable measurements and report measurement characteristics. Users of measurements need this knowledge if they are to reach valid conclusions based on those measurements.

The lesson begins by exploring static performance characteristics, which leads naturally to a discussion of calibration as the means of determining the values of the performance parameters of a sensor or instrument. This is followed by a thorough description of dynamic performance characteristics and the responses of first- and second-order systems to three classic inputs: step or impulse, ramp or time-varying, and oscillatory inputs.

The instruments used by atmospheric scientists sample the characteristics of a medium that is constantly changing and is subject to modification by upstream influences as well as the local setting. These attributes are discussed in a section on the general characteristics of measurements, focusing on representativeness and homogeneity of the region affecting the measurement.

The last section is devoted to the quantitative estimate of uncertainty, adhering to the standard evaluation methodologies and expressions employed by the National Institute of Standards and Technology (NIST).

photo of instrumentation set up for observing variations in the atmospheric boundary layer

An array of 3D sonic anemometers is deployed for a surface-layer turbulence study. Image from NCAR/EOL.

Introduction » Training Objectives

Topics introduced in this lesson include the general characteristics of measured data and the importance of homogeneity when assessing how well the measurement represents the phenomenon; i.e., its “representativeness.” Measurement errors and uncertainty play a central role in discussions of instrument performance. In this lesson, we make an effort to quell the use of the term “accuracy,” which is often misused. We focus instead on the National Institute of Standards and Technology guidelines for evaluating and expressing uncertainty. The lesson also describes the types or components of uncertainty that arise from systematic and random effects.

Upon completion of this lesson, you’ll be able to:

  1. Define and explain the static performance characteristics of an instrument.
  2. Describe the process of static calibration.
  3. Identify the difference between static and dynamic performance characteristics.
  4. Describe how the principle of superposition relates to signal analysis.
  5. Describe the dynamic performance characteristics for a first-order system.
  6. Distinguish between first- and higher-order measurement systems and their associated energy reservoirs.
  7. Describe representativeness and homogeneity by identifying features within an application that could generate internal boundary layers and affect these characteristics.
  8. Distinguish between Type A and Type B uncertainty.
  9. Apply the Law of Propagation of Uncertainty to a system of multiple sensors.

Instrument Performance Characteristics

Instrument performance characteristics can be described as static (conditions where the measurand is constant or slowly varying) or dynamic (conditions where the input to a sensor changes rapidly).

Static performance characteristics are generally determined during instrument calibration, which is a comparison of measurement values with a calibration standard of known uncertainty. Static performance characteristics include the range, span, resolution, static sensitivity, linearity, and stability, plus peculiarities such as hysteresis and sensor threshold.

Every instrument comes with technical specification information (see the example Technical Data for the Vaisala RS41-SGP radiosonde) that describes the static performance characteristics. Technical specifications also include the time constant, which is the time that it takes for an instrument, after a sudden change in measurand value, to reach 63.2 percent (1 − e⁻¹) of the new value. The time constant is a dynamic performance characteristic that will be discussed later in greater detail.

Technical Data for the Vaisala RS41-SGP radiosonde

Technical Data for the Vaisala RS41-SGP radiosonde. Image from Vaisala.

Instrument Performance Characteristics » Definitions and Background

Some terminology will be important to know as we proceed through the lesson. Based on what you know about instrumentation, complete the following set of questions.

Question

Select the term that best matches each definition.

a) The range of the measurand for which the sensor produces valid output. (Answer: range)
b) The difference between the highest and lowest values in the measurement range. (Answer: span)
c) The smallest difference between two measurands that results in distinguishable outputs from the sensor. (Answer: resolution)
d) The change in sensor output for a unit change in the input. (Answer: static sensitivity)
e) The degree to which the static sensitivity is constant over the range of measurements. (Answer: linearity)
f) The degree to which the output remains the same when the same measurand is sensed at different times. (Answer: stability)
g) Given by 𝜏, the time it takes for an instrument to reach 63.2% (that is, 100(1 − e⁻¹) percent) of the response to a first-order step input. (Answer: time constant)
h) A condition affecting the output that depends on whether the input is increasing or decreasing. (Answer: hysteresis)
i) The smallest measurand that produces an output distinct from that produced by a measurand of zero. (Answer: threshold)

This lesson presents the science behind many of the characteristics that appear in data sheets such as the previously shown example. Such specifications are usually the starting point for understanding the performance of a sensor.

A scientist tests the functionality of sensors on an Integrated Surface Flux tower during a field campaign in Portugal.

A scientist tests the functionality of sensors on an Integrated Surface Flux tower during a field campaign in Portugal. Image from NCAR/EOL.

Instrument Performance Characteristics » Transfer Curve

One of the first descriptions of instrument performance characteristics is provided by a transfer curve. A transfer curve shows the output from a sensor as a function of the measurand. The slope of this curve is the static sensitivity, or the change in sensor output based on the change in input. If the response is linear, the transfer curve will be a straight line. In this example, the x-axis shows the measurand and the y-axis indicates the output of the sensor.

Example Transfer Curve exhibiting only random error: The fit to a linear transfer curve is Output = Intercept + Slope x Measurand.

Transfer curve showing relationship between measurand and output. Image from R. Clark.

A linear relationship between the output and the measurand is given by:

Output = Intercept + (Static Sensitivity x Measurand)

A transfer curve often is determined by calibration and then used inversely to determine an estimate of the measurand based on the output of the sensor. Deviations from a linear transfer equation can be caused by a non-linear response from the sensor, in which case the relationship between input and output might be a transfer curve represented by a higher degree (>1) polynomial, or due to systematic measurement error caused by drift, lag or hysteresis, or failure to calibrate.
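As a small sketch of this inverse use of a transfer curve, the code below fits a linear transfer equation to calibration data and then inverts it to estimate the measurand from a sensor output. All the numbers here are invented for illustration; they do not come from any real sensor.

```python
import numpy as np

# Hypothetical calibration data: measurand values supplied by a calibrator
# and the corresponding sensor outputs (volts). All numbers are invented.
measurand = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
output = np.array([0.12, 1.10, 2.08, 3.11, 4.09])

# Fit Output = Intercept + Static_Sensitivity * Measurand by least squares
static_sensitivity, intercept = np.polyfit(measurand, output, 1)

def estimate_measurand(sensor_output):
    # Use the transfer curve inversely to estimate the measurand
    return (sensor_output - intercept) / static_sensitivity
```

For these invented values the fitted static sensitivity is about 0.199 V per unit of the measurand, and an output of 2.5 V maps back to a measurand of roughly 12.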

Instrument Performance Characteristics » Precision and Resolution

The random error is related to the precision of the instrument and is also used to determine instrument resolution. The precision of the instrument determines the resolution, and various multiples of the precision are often quoted as the resolution, so it is necessary to consider which convention is used. If the precision of the measurement is given by a value p, a separation of p between two measurements supports only about 50% confidence that the measurands are different. A separation of √2p corresponds to about 2/3 confidence [the one-standard-deviation confidence level], and for 95% confidence, a separation of about 3p is needed.

The term “resolution” has many definitions, some conflicting. This lesson seeks to clarify why characterizing the resolution is difficult, and it will explain some of the varying usages of this term.

Instrument Performance Characteristics » Precision and Resolution » Definitions of “Resolution”

There are several components to a description of resolution, and in practice the term is often applied to these individual components as well as to their net result. The components include:

  1. The smallest difference between measurands that will lead to corresponding measurements indicating that the measurands are different. The desired indication will have an associated confidence level, often 68.3% or that corresponding to one standard deviation. A primary factor influencing this aspect of resolution is the precision of the instrument.
  2. The smallest change in the indicator (e.g., meter or digital display) that can be perceived. For example, the instrument resolution may be limited by the ability of an operator to read an analog display, by hysteresis in that display, or by the smallest incremental change reported on a digital display. In the case of a digitized result, the resolution is limited by the change corresponding to one unit in the digitized output.
  3. For a spectrometer, the smallest difference in frequency that can be resolved.
  4. For a time series, the temporal resolution is the increment between consecutive measurements, although if a particular sensor responds more slowly than this, the temporal resolution of that sensor may be lower.

Let’s consider definitions #1 and #2 in more detail. (Definitions #3 and #4 are specific to instrument types or data series and are largely self-explanatory.)

Instrument Performance Characteristics » Precision and Resolution » Resolution Definition #1

Our first definition of resolution, and the one that has found general acceptance for instrument performance, is “the smallest difference between measurements that indicates that the corresponding measurands are different”, which normally uses single measurements of each measurand. Resolution defined this way will be limited primarily by the precision of the measurement. Consider first the case where two measurands x1 and x2 are the same. If measurements of each (M1 and M2) have a random error characterized by 𝜎M and the random errors in repeated measurements are independent, then the expected standard deviation arising from random error in the difference y between the two measurements is 𝜎y = √2𝜎M. The two measurements will differ by less than 𝜎y in about 68.3% of cases, as is appropriate for a confidence level corresponding to one standard deviation, but they will differ by more than 𝜎M, the instrument precision, in about 48% of cases. A measured difference of 𝜎M is therefore an unreliable indicator that the measurands are different; a measured difference of at least √2𝜎M is needed to indicate that the measurands are different. Figure 1 shows the probability distribution functions for the two measurements (as the blue line) and that for the difference (as the red line).

Probability distributions for two measurements, both with measurand 0 (blue line), and the probability distribution for the measured difference (red line).

Probability distributions for two measurements, both with measurand 0 (blue line), and the probability distribution for the measured difference (red line). The assumed standard deviation corresponding to random error is 1. Image from NCAR/EOL.
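The quoted probabilities are easy to confirm numerically. The sketch below (assuming unit precision and Gaussian random errors, as in the figure) simulates pairs of measurements of the same measurand and checks the 68.3% and 48% figures:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_m = 1.0          # instrument precision (one-standard-deviation random error)
n = 1_000_000

# Two independent measurements of the SAME measurand (taken as 0)
m1 = rng.normal(0.0, sigma_m, n)
m2 = rng.normal(0.0, sigma_m, n)
diff = m1 - m2

sigma_y = diff.std()                                         # ~ sqrt(2) * sigma_m
frac_within = np.mean(np.abs(diff) < np.sqrt(2) * sigma_m)   # ~ 0.683
frac_beyond_precision = np.mean(np.abs(diff) > sigma_m)      # ~ 0.48
```

The simulated standard deviation of the difference is close to √2𝜎M, about 68.3% of the pairs differ by less than √2𝜎M, and about 48% differ by more than the precision itself, matching the discussion above.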

To find the resolution, a different limit is needed. In this case we want to find the minimum separation between measurands y' = x1' - x2' that leads to a statistically significant likelihood that measurements will show the two measurands to be different. (Here the primed quantities refer to the measurands and unprimed quantities to the measurements.) We need measurements separated by at least √2𝜎x to be confident, on the basis of measurements with precision 𝜎x (the quantity denoted 𝜎M above), that the measurands are different. We therefore need to find the minimum separation between measurands required to give measurements separated by at least √2𝜎x in at least 68.3% of cases. Then, with one-standard-deviation confidence, we can use the measurements to conclude that the measurands are different.

We seek the separation at which the probability “P2” of finding a measured difference outside √2𝜎x is about 68.3%. For a separation between measurands of d = 𝜎x, this probability is only about 43%, so when the measurands are separated by just the precision of the sensor, the probability is still far below the desired limit.

The Probability Density Function for a separation d between two measurands equal to the precision, which yields a probability of only 43 percent (P2) that the measurements will be different by 2 times the precision.

The Probability Density Function for a separation d between two measurands equal to the precision 𝜎x, which yields a probability of only 43 percent (P2) that the measurements will be different by √2𝜎x. Image from NCAR/EOL.

In order to achieve a probability P2 = 68.3% that the measurements will show the two measurands to be different, with a one-standard-deviation confidence limit, the separation between the measurands must be about twice the precision, or d = 2.06 𝜎x.

The Probability Density Function for a separation d between two measurands equal to about twice the precision, 2.06 times sigma, which yields a probability of 68.3 percent (P2) that the measurements will show the measurands to be different.

The Probability Density Function for a separation d between two measurands equal to about twice the precision, 2.06 𝜎x, which yields a probability of 68.3 percent (P2) that the measurements will show the measurands to be different. Image from NCAR/EOL.
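The value d ≈ 2.06𝜎 can be recovered numerically. The sketch below uses the simplified probability for a fixed x1 (the “P” of the later figures, which for these parameters agrees with the quoted result) and bisects for the measurand separation at which measurements exceed the √2𝜎 threshold in 68.3% of cases:

```python
from math import erf, sqrt

def norm_cdf(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_exceeds(d, threshold, sigma=1.0):
    # The difference between measurements of measurands separated by d is
    # Normal(d, sqrt(2)*sigma); return P(|difference| > threshold).
    s = sqrt(2.0) * sigma
    return (1.0 - norm_cdf((threshold - d) / s)) + norm_cdf((-threshold - d) / s)

def required_separation(threshold, target):
    # bisection: prob_exceeds grows monotonically with d for d >= 0
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if prob_exceeds(mid, threshold) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d_68 = required_separation(sqrt(2.0), 0.683)   # ~ 2.06 sigma
```

The same helper also reproduces the 43% figure quoted above for a measurand separation equal to the precision: prob_exceeds(1.0, sqrt(2.0)) is about 0.43.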

Sometimes it is desirable to quote a resolution that gives confidence at the two-standard-deviation limit (the 95% confidence limit) that the measurands are different. In this case, the separation d between the measurands must be about five times the precision.

The Probability Density Function for a separation d between two measurands equal to about five times the precision, 5.2 times sigma, which yields a probability of 95 percent (P2) that the measurands are different.

The Probability Density Function for a separation d between two measurands equal to about five times the precision, 5.2 𝜎x, which yields a probability of 95 percent (P2) that the measurands are different. Image from NCAR/EOL.

Instrument Performance Characteristics » Precision and Resolution » Resolution Definition #2

The smallest change in an indicator that can be perceived is difficult to characterize for an analog display, but it is straightforward for a digital display. For example, consider a transducer that produces voltages in the range from 0–5 V, connected to an analog-to-digital converter that produces a 12-bit signal covering this full range. Because 2¹² = 4096, there are 4096 increments available, so the smallest voltage difference that can be indicated is 5 / 4096 ≈ 0.00122 V. This limits the resolution to no better than 1.22 mV, which is often quoted as the resolution of the instrument. However, if the instrumental precision is larger than this, the smallest increment that can be distinguished will also be larger, as discussed in the preceding section. In that case the digitizer resolution is a poor measure of instrumental resolution, yet many instrument specifications report only this value.

The effect of digital resolution on measurements of a smoothly varying measurand is a function of the increments available over the full range. At 4-bit resolution the increments are obvious, but at 6-bit resolution the 64 levels are harder to distinguish. It is common for A-to-D devices to produce at least 12-bit resolution to avoid significant distortion of measurements. If there are no comparable or larger contributions to the precision, the digitizer resolution contributes to the precision of the instrument as 𝜎 = ∆ / √12, where ∆ is the difference in measurand corresponding to a 1-bit change in digitized output (e.g., 1.22 mV in the example in the previous paragraph). This relationship applies to encoders using any number of bits because the factor √12 arises from the standard deviation of values distributed uniformly over one quantization interval of width ∆.

An illustration of the effect of 4-bit resolution on measurements of a smoothly varying measurand.

The effect of digital resolution from a 4-bit digitized signal on measurements of a smoothly varying measurand is a function of the increments available over the full range. Image by NCAR/EOL.
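The ∆/√12 contribution can be checked by simulation. A minimal sketch, assuming the 12-bit, 5 V example above and an input that is spread uniformly over the range:

```python
import numpy as np

rng = np.random.default_rng(0)
full_scale = 5.0                 # volts, as in the 12-bit example above
bits = 12
delta = full_scale / 2**bits     # one-bit step, about 1.22 mV

# Digitize values spread uniformly over the range and examine the error
signal = rng.uniform(0.0, full_scale, 1_000_000)
digitized = np.round(signal / delta) * delta
quantization_sigma = (digitized - signal).std()   # ~ delta / sqrt(12)
```

The simulated standard deviation of the quantization error comes out very close to ∆/√12, the relationship given in the paragraph above.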

Instrument Performance Characteristics » Precision-Resolution Relationship

Resolution Interaction 1

Two Gaussian probability distributions with unit standard deviation

Two Gaussian probability distributions with unit standard deviation, centered on x1 and x2, which differ by d. The red curve is a Gaussian distribution with standard deviation of √2 and “P” listed in the plot is the fraction of measurements of x2 that differ more than √2 from x1. The measurements of x2 that contribute to this fraction are shown as the thicker green line. The plot shows the fraction that differ from a fixed value of x1 at the center of the distribution, but this result should be integrated over possible values of x1. “P2” is the result of that integration, so this is the value to consider when assessing confidence estimates. Image from NCAR/EOL.

Precision is the component of uncertainty arising from random error in a measurement. An estimate of the precision is obtained from the root-mean-square (RMS) error, determined from the unbiased standard deviation.

Use the images to explore the relationships between precision and resolution, then answer the following questions.

Probability Distribution 1
Two Gaussian probability distributions with unit standard deviation, centered on x1 and x2, which differ by d=2.06.

Two Gaussian probability distributions with unit standard deviation, centered on x1 and x2, which differ by d = 2.06. The red curve is a Gaussian distribution with standard deviation of √2 and “P” listed in the plot is the fraction of measurements of x2 that differ more than 0.95𝜎 from x1, or the limits on the red curve that include 50% of the distribution. The measurements of x2 that contribute to this fraction are shown as the thicker green line. The plot shows the fraction that differ from a fixed value of x1 at the center of the distribution, but this result should be integrated over possible values of x1. “P2” is the result of that integration, so this is the value to consider when assessing confidence estimates. Image from NCAR/EOL.

Probability Distribution 2
As in the previous figure but with measurand separation 2.60 and integration limits that correspond to 68.3% of the area under the red line, or ± √2𝜎.

As in the previous figure but with measurand separation 2.60 and integration limits that correspond to 68.3% of the area under the red line, or ± √2𝜎. Image from NCAR/EOL.

Probability Distribution 3
As in the previous figures but with measurand separation 4.02 and integration limits that correspond to 95.4% of the area under the red line, or ± 2√2𝜎.

As in the previous figures but with measurand separation 4.02 and integration limits that correspond to 95.4% of the area under the red line, or ± 2√2𝜎. Image from NCAR/EOL.

Probability Distribution 4
As in the previous figures but with measurand separation 5.20 and integration limits that correspond to 95.4% of the area under the red line, or ± 2√2𝜎.

As in the previous figures but with measurand separation 5.20 and integration limits that correspond to 95.4% of the area under the red line, or ± 2√2𝜎. Image from NCAR/EOL.

Question

If 𝜎 is the instrument precision (at the one-standard deviation limit), what separation between measurands is required for the difference between measurements to be at least √2𝜎 for at least 80% of the pairs of measurements?


The correct answer is b.

With a separation of 2.6𝜎x, or 2.6 times the precision, the measurements will differ by √2𝜎 in 80% of the cases. In this case, the integrations are done outside the √2𝜎 limits. Probability Distributions 1 and 3 use integration limits determined by 0.95𝜎 or 2√2𝜎, respectively, and Probability Distribution 4 pairs a larger separation with a too-large integration limit.

Resolution Interaction 2

Probability Distribution 1
Two Gaussian probability distributions with unit standard deviation, centered on x1 and x2, which differ by d=3.74.

Two Gaussian probability distributions with unit standard deviation, centered on x1 and x2, which differ by d = 3.74. The red curve is a Gaussian distribution with standard deviation of √2 and “P” listed in the plot is the fraction of measurements of x2 that differ more than √2𝜎 from x1, or the limits on the red curve that include 68.3% of the distribution. The measurements of x2 that contribute to this fraction are shown as the thicker green line. The plot shows the fraction that differ from a fixed value of x1 at the center of the distribution, but this result should be integrated over possible values of x1. “P2” is the result of that integration, so this is the value to consider when assessing confidence estimates. Image from NCAR/EOL.

Probability Distribution Function 2
As in the previous tab figure except that the measurand separation is 5.20 and the integration limits are ± 2√2𝜎 as needed to include 95.4% of the area under the red curve.

As in the previous tab figure except that the measurand separation is 5.20 and the integration limits are ± 2√2𝜎 as needed to include 95.4% of the area under the red curve. Image from NCAR/EOL.

Probability Distribution Function 3
As in the preceding figure except that the measurand separation is 3.28 and the integration limits are ± 0.95𝜎 as needed to include 50% of the area under the red curve.

As in the preceding figure except that the measurand separation is 3.28 and the integration limits are ± 0.95𝜎 as needed to include 50% of the area under the red curve. Image from NCAR/EOL.

Question

For measurement pairs to differ by at least 2√2𝜎 in 99% of cases, as would be needed for 95% confidence that the measurands are different, what separation between measurands would be required?

The correct answer is b, 5.2 𝜎.

Measurands must be separated by this distance for 99% of measurement pairs to differ by at least 2√2𝜎, which allows findings to be reported at the 95% confidence limit. For measurand separations of 3.74 and 3.28, the corresponding confidence limits are 68.3% and 50%, respectively.

As in the preceding figures but for a separation between measurands of d=1 and integration limits for the red curve that cover 50% of the distribution.

As in the preceding figures but for a separation between measurands of d = 1 and integration limits for the red curve that cover 50% of the distribution. Image by NCAR/EOL.

Question

For measurands separated by the instrument precision, what is the likelihood (in %; i.e. enter a number like 50 for 50%) that pairs of measurements will differ by less than the true separation? Report your answer to the nearest whole number.


The correct answer is 40-42%. An approximate answer can be found using the figure above, which shows the probability distributions for a separation of d = 1 and integration limits covering 50% of the red distribution. Notice that the red vertical lines are very close to the ±d limits from the center of x1. The integration of the regions outside ±d then gives about 0.60, so the fraction inside is 0.4. This is only approximate because the limits are not exactly ±d, but it will give an answer within a few percent of the exact one.
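The exact value follows from the normal distribution of the measured difference, which has mean d and standard deviation √2𝜎. A short check, assuming unit precision:

```python
from math import erf, sqrt

def norm_cdf(z):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

sigma = 1.0              # instrument precision
d = 1.0                  # measurand separation equal to the precision
s = sqrt(2.0) * sigma    # standard deviation of the measured difference

# probability that the measured difference is smaller than the true separation
p_inside = norm_cdf((d - d) / s) - norm_cdf((-d - d) / s)
percent = round(100.0 * p_inside)   # 42
```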

Answer the following three questions using this statement:

A transducer produces a voltage in the range from 0-10 V and is connected to an analog-to-digital converter covering that full range that produces a 10-bit output.

Question 1 of 3

What is the number of available digital increments covering the full range?

The correct answer is c.

Computers count in base 2. The smallest increment of data on a computer is a bit. A bit is a binary digit; it can hold only one of two values, 0 or 1, corresponding to the electrical states off and on, respectively. A 10-bit output will, therefore, have 2¹⁰ = 1024 available digital increments over the full range of 10 V. Answers a and b are incorrect: answer a is the number of bits in a byte, and answer b is for an 8-bit output.


Question 2 of 3

What is the finest resolution that this instrument can have?

The correct answer is b.

The finest resolution is equal to the range divided by the number of increments over that range, or 10 V / 1024 = 0.00977 V = 9.77 mV. Answer a is incorrect because it is the square root of 10 V / 1024, and answer c is 10 V / 2⁹ in mV, which has no relevance to a 10-bit output.


Question 3 of 3

What is the instrument precision in the absence of other (larger) random-error contributions to the precision?

The correct answer is a.

The precision is given by 𝜎 = (difference in measurand corresponding to a one-bit change in digital output) / √12 = (9.77 mV) / √12 = 2.82 mV.
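The three answers above follow from a few lines of arithmetic:

```python
from math import sqrt

full_scale = 10.0    # volts
bits = 10

increments = 2**bits                          # 1024 digital levels
step_mV = full_scale / increments * 1000.0    # finest resolution, ~9.77 mV
precision_mV = step_mV / sqrt(12.0)           # digitizer-limited precision, ~2.82 mV
```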


For more information about resolution, see Appendix 1.

Instrument Performance Characteristics » Static Calibration

Instrument calibration is one of the primary processes used to determine performance characteristics and quantify measurement uncertainty. A calibration, as described here, involves both the sensor and instrument and tests the transfer curve as well as the conversion of sensor output to an estimate of the measurand. The calibration process generally requires a traceable standard input to test the output signal of the instrument.

The National Institute of Standards and Technology (NIST) description for producing a traceable standard.

Calibration involves using a traceable standard input. Image by NIST.

Calibration results are used to establish a relationship between the measurand (input signal) and the output of the instrument, and if necessary, make adjustments to bring the instrument in line with the calibrator.

The calibration process typically steps the input up or down through the measurement range of the instrument. At each step, both the calibrator and the instrument are allowed to reach steady-state values and are held there until a statistically robust mean and standard deviation can be determined for each value over the range between the lowest and highest inputs.

Question

Which of the following are part of the process of static calibration?

The correct answers are a, c, and d.

Calibration requires a traceable standard input to provide information for stepping the instrument's measurement range up or down. A calibration curve can reveal systematic errors, including zero error or span error.


Instrument Performance Characteristics » Static Calibration » Calibration Errors

A calibration curve showing the measured value (output) versus the input value can be constructed. Differences from the identity line indicate systematic error, which can have many forms. An example of systematic error is a consistent offset, indicating the need to adjust the zero offset of the instrument. This type of zero error is shown in the figure below. A span error, where the error depends on the value of the measurement, is also shown.

example of a zero error.

Graph showing example of a zero error. Image from R. Clark.

Generalized illustration of the characteristics of a span error. Span error is corrected by calibration of the static sensitivity.

Graph showing example of span errors. Image from R. Clark.

Instrument Performance Characteristics » Static Calibration » Linearization Error

If the systematic error increases or decreases linearly as the input changes, then the instrument suffers from a span error, and the static sensitivity [ ∂(Output) / ∂(Input) ] must be adjusted. With the span error and zero error removed during calibration, any remaining systematic error could be due to nonlinearity, which could either be the instrument’s true response to an input, as in the case of a thermistor (a temperature-sensitive semiconductor whose resistance changes strongly and nonlinearly with temperature), or the result of other error sources such as drift, hysteresis, or nonlinear span errors.

Generalized illustration of a calibration plot showing an example of a nonlinear sensor response, leading to a smaller nonlinear output (bottom curve) relative to the desired value.
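A minimal sketch of separating these error types, with an entirely invented sensor response: fit a straight line to the calibration data, interpret the intercept as the zero error and the slope as the span (static-sensitivity) error, remove both, and examine what remains for nonlinearity.

```python
import numpy as np

# Invented calibration run: inputs from the calibrator and outputs from a
# sensor with a zero error (+0.5), a span error (gain 1.08), and a small
# nonlinear term (0.01 * input^2). These values are for illustration only.
inputs = np.linspace(0.0, 10.0, 11)
outputs = 0.5 + 1.08 * inputs + 0.01 * inputs**2

# A linear fit captures the zero error (intercept) and span error (slope != 1)
slope, intercept = np.polyfit(inputs, outputs, 1)
corrected = (outputs - intercept) / slope     # zero and span errors removed

# Whatever systematic deviation remains is nonlinearity
nonlinearity = corrected - inputs
max_nonlinearity = np.abs(nonlinearity).max()
```

After the linear correction, the residual for this invented sensor is a small curved deviation (about 0.13 units at its largest), which is the nonlinearity that a higher-degree calibration equation would address.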

Instrument Performance Characteristics » Static Calibration » Calibration Exercise

Use the interactive ShinyApp tool to complete the following questions. If you are viewing a downloaded version of this lesson, please see the "Printable Lesson" for the ShinyApp exercise information.

Our objective is to calibrate a sensor so that we can use the calibration to estimate the value x of a measurand from the output of the sensor M. We have a calibrator that produces values of a measurand with very small random error and no systematic error, so its contribution to error in the calibration can be neglected. Our sensor, on the other hand, makes individual measurements with a rather large random error, characterized by the precision σM = 1.5. The goal of calibration is to minimize systematic error in measurements using this sensor. We decide to make N = 20 measurements equally spaced over a measurand interval (x1 – x2), and as a result we get N measurements {Mi} that correspond to {xi}. Results are as listed in the following table.

x        M           x        M           x        M           x        M
1        8.9783      6       23.8731     11       49.9256     16       79.8253
2       12.4642      7       31.8451     12       58.1260     17       83.5705
3       14.7360      8       34.9857     13       61.4559     18       90.9026
4       17.9198      9       39.6738     14       67.3803     19       99.7811
5       23.4088     10       43.6210     15       71.3742     20      105.4500

Sensor Output (M) Versus Measurand (X)

Using these data, find a representation of this calibration in the form x = f(M), where f(M) can be a polynomial or other function, so that the output M from this sensor can be used to estimate the measurand x, under static conditions.

Procedure: Download the data from the table to an application or program (e.g., ShinyApp, MS Excel, MatLab, Python, etc.) that you can use to plot x = f(M) and generate a set of calibration curves.
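If you prefer a script to a spreadsheet, the fits can be sketched in Python (assuming numpy is available; the ShinyApp tool referenced above is the lesson's intended route):

```python
import numpy as np

# Calibration data from the table above: measurand x and sensor output M
x = np.arange(1.0, 21.0)
M = np.array([8.9783, 12.4642, 14.7360, 17.9198, 23.4088,
              23.8731, 31.8451, 34.9857, 39.6738, 43.6210,
              49.9256, 58.1260, 61.4559, 67.3803, 71.3742,
              79.8253, 83.5705, 90.9026, 99.7811, 105.4500])

# Fit x = f(M) for polynomial degrees 1-3 and estimate x at M = 55
estimates = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(M, x, degree)
    estimates[degree] = float(np.polyval(coeffs, 55.0))
```

For these data the linear fit gives an estimate near 11.3 at M = 55, while the quadratic and cubic fits both give values near 12.0.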

Question

If the sensor output is 55 (M = 55), what would your estimate of the measurand x be using a linear model fit?

The correct answer is b.

To estimate the measurand x for a sensor output of M = 55, we create a plot of x = f(M) and determine the linear fit from least-squares regression. The resulting linear model is x = 0.1929M + 0.6689 (R² = 0.9869), which yields x = 11.28 for M = 55.

Sensor Output (M) versus Measurand (X)

The residual standard deviation of the calibration values for x, given M, from that regression line is 0.6967. That is not very good, and the deviations are systematic, so it is justified to try a higher-order fit, using a linear-model fit to determine coefficients for a relationship x = b₁ + b₂M + b₃M².


Question

What is your estimate of x as you increase the degree of the polynomial used to fit the measurand data to a second degree polynomial? What is your estimate if fitting to a third degree polynomial?

The correct answer is b.

Estimates of the measurand for M = 55 were obtained for 2nd and 3rd degree polynomials. The calibration equations are shown in the figure, along with a plot of trend lines for each of the three polynomial fits. For M = 55, a 2nd degree polynomial yields x = 11.965, with R² = 0.9973, leaving only 1 − 0.9973 = 0.27% of the variation unaccounted for by the regression. Using a 3rd degree polynomial improves the fit slightly, with x = 11.974 and R² = 0.998, leaving 1 − 0.998 = 0.2% of the variation unaccounted for by the 3rd degree polynomial regression.

Calibration Equations: x = f(M) for 1st, 2nd, and 3rd degree polynomials

Calibration curves and equations for polynomials of first, second, and third degree for the data given in the Table above. There is justification for using a second degree polynomial as the calibration equation since the improvement over the first degree fit is significant, whereas the additional variance accounted for by the third degree polynomial is a negligible improvement over the second degree polynomial. Image from R. Clark.


The residual standard deviation for the new (2nd degree) fit is 0.3266, so the improvement is significant. Studies of this sensor by other means indicate that its precision is about 1.5 units in the measurement M, so with a representative slope of about b2, the expected precision in estimates of the measurand x is about 0.4155; this result is therefore reasonably consistent with the expected precision in measurements and indicates that higher-order polynomial fits are probably not needed or justified. If a third-order polynomial is used in the fit, the residual standard deviation is reduced only to 0.2867, so inclusion of that term does not give any significant improvement in the fit. (The decision regarding which terms are justified is often made on the basis of an analysis of variance, which in this case indicates that the addition of a cubic term to the equation does not lead to a statistically significant improvement.)
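The comparison of residual standard deviations across polynomial degrees can be sketched as follows. The data here are synthetic and illustrative only (a mild quadratic, not the lesson's table); the point is the diagnostic itself: the residual standard deviation drops sharply when the fitted degree first captures the real curvature, and changes little thereafter.

```python
import numpy as np

# Illustrative calibration data with mild curvature (assumed values, not the
# lesson's table): x depends quadratically on M.
M = np.linspace(5.0, 60.0, 12)
x = 0.7 + 0.15 * M + 0.002 * M**2

def residual_std(M, x, deg):
    """Residual standard deviation of a degree-`deg` polynomial fit,
    using n - (deg + 1) degrees of freedom."""
    coeffs = np.polyfit(M, x, deg)
    r = x - np.polyval(coeffs, M)
    return np.sqrt(np.sum(r**2) / (len(M) - (deg + 1)))

s1 = residual_std(M, x, 1)  # first-degree fit: systematic residuals remain
s2 = residual_std(M, x, 2)  # second-degree fit captures the curvature
```

For these data s1 is substantial while s2 is essentially zero; with noisy real data one would instead compare s2 against the expected measurement precision, as done in the text, or use an analysis of variance.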

For a detailed discussion of the answers above and a review of the lessons learned from the Calibration Exercise, view Appendix 2. Additional notes about the software package and code are also included.

Instrument Performance Characteristics » Additional Performance Characteristics

Drift, hysteresis, or nonlinear span errors can all influence an instrument’s true response.

Drift is a form of instrument instability that produces a time-dependent systematic error, often associated with a physical change or deterioration of components.

Hysteresis occurs when the sensor input for a given output depends on whether the input is increasing or decreasing. As an example, hygrometers exhibit hysteresis because the rates of sorption and desorption of water molecules on the surface of the sensor are different. Another example of hysteresis is illustrated by the blue curves in the figure below.

The figure provides an example of hysteresis caused by sensor time lag. The plot shows total temperature measured during a level aircraft speed run, where the airspeed is first increased and then decreased again.

The figure provides an example of hysteresis caused by temperature sensor time lag, which occurs when an aircraft changes airspeed. Dynamic heating causes the total temperature to exceed the ambient temperature of -18°C by V² / (2Cp), where V is the airspeed and Cp is the specific heat of air at constant pressure. The blue hysteresis curves look slightly different depending on whether the airspeed is increasing or decreasing. The temperature lags behind the correct value, so it is too low as the aircraft speeds up and too high while the aircraft slows down. Image from NCAR/EOL.

Dynamic Response

Where the measurand (input) changes with time, we observe performance characteristics that result from the instrument’s inability to respond instantly. In these situations, the instrument output lags the instrument input. The time constant, or response time, has been adopted as this dynamic performance characteristic. The inverse of the time constant provides the instrument’s temporal resolution. In the example shown, a time-constant correction is applied to produce the green curve, which corresponds to output values unaffected by hysteresis.

The figure provides an example of hysteresis caused by sensor time lag. The plot shows total temperature measured during a level aircraft speed run, where the airspeed is first increased and then decreased again.

The figure provides an example of hysteresis caused by sensor time lag. The plot shows total temperature measured during a level aircraft speed run, where the airspeed is first increased and then decreased again. The temperature lags behind the correct value so it is too low as the aircraft speeds up and too high while the aircraft slows down. Advancing all the measurements of total temperature by 2.32 sec results in the green line that shows no hysteresis. This is how we know that the time constant of this particular sensor is 2.32 s. Image from NCAR/EOL.

Dynamic Response » Transfer Functions and the Transfer-Function Diagram

We use differential equations to model dynamic performance. If the instrument or sensor responds linearly, its response can be characterized by a linear differential equation. Consider an instrument that uses a sensor to produce a signal M when exposed to a measurand x. Calibration of a sensor consists of determining the static relationship between M and x, often plotted as a transfer curve showing the relationship as discussed in the section on static response. A sensor can be depicted generically using a diagram like that below, where H(x) will be called the transfer function.

Generic description of a sensor where the measurand x can be steady-state, as during static calibration, or time-varying.

Generic description of a sensor where the measurand x can be steady-state, as during static calibration, or time-varying, as during field operations. Image from NCAR/EOL.

The transfer curve characterizes this relationship under static conditions (M given x), but we are often interested in applications where the measurand x varies in time. The simple box-description of the sensor still applies, but now the output M will depend not only on the present value of x but also on its past history.

Dynamic Response » Transfer Functions and the Transfer-Function Diagram » Example: Temperature Response

As an example, consider a liquid-in-glass thermometer immersed at time t0 = 0 into a bath of fluid having temperature TF (the final bath temperature). If the thermometer reads T0 before immersion, its displayed temperature T(t) will move toward TF over some period of time after immersion. The rate at which heat is transferred to the thermometer depends on the temperature difference T(t) - TF, often in a linear relationship, so the displayed temperature will change rapidly at first and then more and more slowly as the temperature difference becomes small.

A linear relationship between the time rate of change in temperature and the temperature difference between the sensor and the bath can be represented by a first-order linear differential equation like dT(t) / dt = (TF - T(t)) / 𝜏, where 𝜏 is a constant that determines the rate at which the sensor responds to its environment and TF is the final temperature. This equation has an exponential solution, such that T(t) approaches TF but never reaches that value, as shown in the figure. This first-order response is characteristic behavior for many sensors and electronic components of sensors.

The response of a first-order sensor with time constant 1

The response of a first-order sensor with time constant 1. The temperature to which the sensor is exposed (x) is shown as the green line and the measured temperature (M) as the blue line. At t = 0, x abruptly changes from 0 to Tf = 1. Image from NCAR/EOL.
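The first-order equation above can also be integrated numerically; this is a minimal sketch (forward Euler, with parameters matching the figure: 𝜏 = 1, a step from T0 = 0 to TF = 1) that reproduces the exponential approach shown by the blue line.

```python
import numpy as np

tau = 1.0           # time constant (s), as in the figure
T_F, T0 = 1.0, 0.0  # final (bath) and initial temperatures
dt = 1e-3           # integration step (s)

t = np.arange(0.0, 5.0 + dt, dt)
T = np.empty_like(t)
T[0] = T0
for i in range(1, len(t)):
    # dT/dt = (T_F - T) / tau, advanced with a forward-Euler step
    T[i] = T[i-1] + dt * (T_F - T[i-1]) / tau

# Analytic solution for comparison: exponential approach to T_F
T_exact = T_F + (T0 - T_F) * np.exp(-t / tau)
```

After one time constant (t = 𝜏) the sensor has covered about 63.2% of the step, the standard benchmark used to define 𝜏.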

Question

Select whether the statement refers to a static performance characteristic or a dynamic performance characteristic.

a) Includes zero error and span error:
Please make a selection.
b) First-order transfer function is a low-pass filter:
Please make a selection.
c) Includes sensor time lag and hysteresis:
Please make a selection.
d) Output depends directly on the measurand:
Please make a selection.
e) Output depends on the measurand and on its history:
Please make a selection.

The correct answers are shown above.


Dynamic Response » Transfer Functions and the Transfer-Function Diagram » First Order vs Second Order Systems

Sensors differ in their response characteristics, but many follow general patterns. A first-order system is one that can be described by a first-order differential equation. A second-order system requires a second-order differential equation for its description. The thermometer is an example of a first-order system in temperature that responds directly to the heat flow induced by the difference in temperature between the sensor and the bath. A wind-measuring system consisting of a wind vane and a cup anemometer, as shown in the next illustration, involves both first-order and second-order responses. The rotor in the cup anemometer has angular momentum that resists change when the wind changes, so it is basically a first-order system in velocity with no preferred orientation or position. An exponential change between the initial and final states describes the solution to a first-order system. The wind vane is a second-order system in position that will tend to oscillate about the equilibrium position when the wind direction changes. Thus, second-order solutions include sinusoidally-varying components that describe the oscillatory nature of the system, one which often includes damping, which is represented by a damping coefficient. Many other commonly used systems fall into one of these two categories.

A wind measuring system consisting of a cup anemometer and a wind vane.

A wind measuring system consisting of a cup anemometer, an example of a first-order system that responds through changes in its angular momentum, and a wind vane, which represents a second-order system in which kinetic energy is stored in the angular motion of the vane and potential energy is stored in the position of the vane relative to the wind vector. Image from R. Clark.

Question

Drag each of the following to the type of system (first order or higher order) it represents.

Derivative of the signal
with respect to time
Exhibits no energy loss
Wind vane
Thermocouple,
or rotors on a cup anemometer
Second derivative of the
signal with respect to time
Exhibits energy loss

The correct answers are shown below.

First Order System

Derivative of the signal
with respect to time
Thermocouple,
or rotors on a cup anemometer
Exhibits energy loss

Second Order System

Exhibits no energy loss
Wind vane
Second derivative of the
signal with respect to time

Question

Which of the following is(are) true of dynamic performance characteristics for a first-order system?

The correct answers are a and b.

In a first-order system, only slowly varying input fluctuations will be passed through to the output, while rapidly varying input fluctuations will be dampened or attenuated.


Dynamic Response » Transfer Functions and the Transfer-Function Diagram » Example: RC Circuit

Another example of a first-order system is an “RC” circuit, consisting of a resistor R and a capacitor C as shown in the next figure. The same equations apply to this circuit, with time constant = RC. Functionally, a capacitor can be regarded as an integrator because the voltage across it equals the time integral of the current (I(t)) that passes through it divided by the capacitance. The same current passes through the resistor and produces a voltage across it equalling I(t)R, so equating the current passing through the resistor and through the capacitor gives dVout / dt = ( Vin - Vout ) / (RC). This equation has the same form as the equation for the derivative of temperature, dT(t) / dt in the preceding example, if the time constant 𝜏 is equal to RC.

All electronic components have some resistance and some capacitance associated with their output impedance, so this RC behavior is a common factor influencing dynamic response.

General schematic of a first-order dynamics system.

General schematic of a first-order dynamic system. It is common for electronic systems to have some resistance R and some capacitance C associated with their output impedance, which is defined as the total opposition to alternating current by an electric circuit. Image from NCAR/EOL.

Sensors exhibiting first-order dynamic response can be more complicated, but the simplest (like the preceding examples) have exponential response and are characterized by

dM(t) / dt = ( H*(x(t)) - M(t) ) / 𝜏

where H*(x) represents the static transfer function or the transfer curve, giving the output value that would be produced by a steady measurand with the value x(t).

General schematic of a first-order dynamic system having a time constant.

General schematic of a first-order dynamic system having a time constant 𝜏. The measurand is x, the output is M, and the symbol Ṁ is the time derivative of the output. Image from NCAR/EOL.

The above diagram, based on the equation, is a functional depiction of a first-order dynamic system having time constant 𝜏. It shows how the output M from a sensor having first-order dynamic response depends on the measurand x. The red box is the complete instrument transfer function H. However, x and M both generally vary with time. The triangular symbols represent amplifiers producing an output that is the difference between the two inputs multiplied by the indicated gain (1 or 1 / 𝜏 in these two examples). The box with an integration symbol integrates the input (in this case, the time derivative dM(t) / dt of the measured quantity as given by the preceding equation). Integrating the input produces the measurement, M. However, that measurement is also fed back to the left amplifier, which produces an output determined by the difference between H*(x) and M. The diagram is a representation equivalent to the differential equation shown above, the RC circuit diagram, or the diagram with the generic transfer function H.

The transfer function depicted can be considered a low-pass filter because only slowly varying input fluctuations will be passed through to the instrument output. Rapidly varying input fluctuations will be dampened (i.e., attenuated) and will not be passed to the output. In general, input variations at frequencies less than 1 / 𝜏 (where 𝜏 is the time constant) will be passed through to the instrument output, whereas input variations at frequencies higher than the instrument’s temporal resolution (1 / 𝜏) will be attenuated.
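The low-pass behavior can be checked numerically. This sketch drives a simple Euler integration of the first-order equation with a sine wave at the corner frequency ω = 1/𝜏 and compares the steady-state output amplitude to the standard first-order filter gain 1/√(1 + (ω𝜏)²); the specific frequencies and integration settings are choices made for this illustration.

```python
import numpy as np

tau = 1.0
dt = 1e-3
t = np.arange(0.0, 30.0, dt)

def first_order_response(x, tau, dt):
    """Integrate dM/dt = (x - M)/tau with forward Euler, starting from M(0) = 0."""
    M = np.empty_like(x)
    M[0] = 0.0
    for i in range(1, len(x)):
        M[i] = M[i-1] + dt * (x[i-1] - M[i-1]) / tau
    return M

omega = 1.0 / tau  # drive the system at the corner frequency 1/tau
M = first_order_response(np.sin(omega * t), tau, dt)

# Steady-state output amplitude, measured after transients have decayed
amp = np.max(np.abs(M[len(M) // 2:]))
# Predicted gain of a first-order low-pass filter at this frequency
gain = 1.0 / np.sqrt(1.0 + (omega * tau)**2)  # about 0.707 at omega = 1/tau
```

At ω = 1/𝜏 the output amplitude is reduced to about 71% of the input; at ω = 10/𝜏 the same formula gives roughly 10%, consistent with the attenuation of rapid fluctuations described above.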

Dynamic Response » Transfer Functions and the Transfer-Function Diagram » Dynamic Responses

The transfer function diagram can be used for dynamic systems of higher order. An example is the simplified response of a wind vane to the direction of the wind. [Note that a wind vane is actually a much more complex system and the following simplification is not realistic except as an example of a second-order dynamic system. For a real wind vane, the restoring torque becomes nonlinear for any but very small deflection angles, varying wind speed changes the restoring torque, and there are aerodynamic contributions to the drag term and other complicating factors.]

In this simplification, two factors control how the vane will respond to the direction of the wind when the wind speed remains constant. First, a difference between the orientations of the vane and the wind will produce a torque proportional to that difference, with proportionality constant k. However, such a system will tend to oscillate about the correct orientation, like a weight suspended on a spring. Therefore a wind vane will have some resistance to motion built into its mounting structure, and this resistance provides damping of the oscillation.

A schematic representation of the transfer function of this system is shown in the figure, where the amplifier with gain D provides the damping effect of this feedback.

General schematic of a transfer function for a damped, forced harmonic oscillation. Dots over M represent the order of the time derivatives

General schematic of a transfer function for a damped, forced harmonic oscillation. Dots over M represent the order of the time derivatives. Image from NCAR/EOL.

This schematic shows a damped forced harmonic oscillator, where k is analogous to a spring constant but in this case is the proportionality constant between the restoring torque and the angle between the orientation of the wind and of the wind vane. The symbol m in the amplifier with gain 1 / m is analogous to mass for the loaded spring but here represents the moment of inertia of the wind vane. The response to changes in wind direction is determined by the relative magnitudes of the parameters {k, the spring constant; m, the mass; and D, the damping constant}. If the damping constant is very small, the vane tends to oscillate too much, while a large value of D causes the vane to respond too slowly. “Critical damping,” for which the vane just moves to a new equilibrium position without overshooting, occurs when D = 2√(mk), and the natural oscillation without damping has an angular frequency 𝜔0 equal to √(k/m).

In contrast to the first-order transfer function, which serves as a low-pass filter, the response of a second-order transfer function can lead to erroneous large-amplitude fluctuations near the natural frequency if the damping is insufficient. In situations with large damping, the transfer function again serves as a low-pass filter.

The governing differential equation can be determined from this and other similar transfer-function diagrams, as follows:

Start at the point in the diagram labeled M̈ (or whichever term is the highest-order derivative).

Working backward, incorporate each contribution into the differential equation to obtain, in the case of the diagram,

M̈(t) = ( k ( H*(x(t)) - M(t) ) - D Ṁ(t) ) / m

It is then possible to solve the resulting differential equations for specified values of the measurand as a function of time. Another benefit of characterizing the sensor with a transfer-function diagram is that it provides a structure easily incorporated, with initial values, into a numerical solution that gives the response to any input.

To demonstrate how solutions to these equations or the transfer-function diagrams relate to characteristics of instruments, the responses to some specific types of input will be discussed in the next pages.

Dynamic Response » Response to Specific Input Functions

The first-order response is characterized uniquely by the time constant 𝜏, so a single representative response to each input function characterizes all such sensors. In the plots that follow, the time constant will be 1 s. However, for a sensor having second-order response, the response is determined by the three terms shown in the equation or as the gain factors in the transfer-function diagram, k, 1/m, and D.

schematic of a transfer function with damping included

In the transfer function diagram shown, k, D, and 1/m are gain factors affecting the instrument response.

Only the relative sizes of the gain factors are important, so the controlling factors can be reduced to two: the undamped natural angular frequency of the system, given by ω0 = √(k/m), and the ratio of the damping term to the critical-damping value, given by 𝛄 = D / (2√(mk)).

Dynamic Response » Response to Specific Input Functions » Step Function

An abrupt change in the measurand to a new value, or step-function change, is shown as the red line in the next figure. The first-order response is an exponential (M1, shown by the blue line), but the second-order response (M2, in green) is slower in this case. However, the shape of the second-order response curve, while preserving the exponential behavior, does exhibit differences from the first-order response. These response curves provide good guidance regarding how long it is necessary to wait before errors caused by time response become negligible in a particular application. Response like that shown is commonly seen; for example, temperature sensors on research aircraft often have first-order time constants of a few seconds and this imposes important limitations on the detection of spatial structures using such measurements.

First- and second-order response to an abrupt change in the measurand leading to a step response.

First- and second-order response to an abrupt change in the measurand leading to a step response. Parameters: 𝛕=1, 𝛚=1, 𝛄=1. Image from NCAR/EOL.

Dynamic Response » Response to Specific Input Functions » Ramp Function

The “ramp” variation in the measurand occurs when the measurand increases at a constant rate, as shown by the red line in the figure.

First and second order response to a measurand changing at a constant rate (ramp).

First and second order response to a measurand changing at a constant rate (ramp). Image from NCAR/EOL.

The sensor output in both cases, after a transient period, follows the slope of the measurand but with a delay. For the first-order case, the effective delay, also known as the dynamic lag, is equal to the time constant so this is often a useful way to determine that time constant. In the case of aircraft-borne temperature sensors, this response will result in a difference between aircraft ascent and descent profiles, and the time constant of the sensors can be determined from that difference.

For a second-order sensor, the delay is 2𝛄 / ω and, for the parameters shown, is larger than the first-order delay 𝜏 .

A sounding with a lag of 2 s with a climb rate of 20 m/s and a temperature gradient of 10 deg C per km

A sounding with a lag of 2 s with a climb rate of 20 m/s and a temperature gradient of 10°C per km should produce a difference of about 0.8°C between climb (blue curve) and descent (red curve). While this diagram shows the expected difference, there is almost always enough real variability in the atmosphere to mask an effect. Image from NCAR/EOL.
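The stated second-order dynamic lag of 2𝛄/ω can be checked numerically. This sketch (assumed parameters ω0 = 1, 𝛄 = 1, and a ramp of unit slope, matching the plots) integrates the same second-order equation and measures the offset between input and output after the transient has decayed.

```python
import numpy as np

omega0, gamma = 1.0, 1.0
dt = 1e-3
t = np.arange(0.0, 30.0, dt)
x = t  # ramp input with unit slope

# Integrate M'' = omega0**2 * (x - M) - 2*gamma*omega0*M' from rest
M, Mdot = 0.0, 0.0
for i in range(len(t)):
    Mddot = omega0**2 * (x[i] - M) - 2.0 * gamma * omega0 * Mdot
    Mdot += dt * Mddot
    M += dt * Mdot

# After transients, the output tracks the ramp with a constant delay
dynamic_lag = x[-1] - M        # expected: 2*gamma/omega0 = 2 s
dynamic_error = 1.0 * dynamic_lag  # ramp slope (1 unit/s) times the lag
```

The measured lag is 2 s, and the corresponding dynamic error is the lag times the ramp slope, which is how the climb/descent temperature difference in the sounding example arises.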

Dynamic Response » Response to Specific Input Functions » Low-frequency Sine Wave

A measurand that varies as a sine wave with frequency arbitrarily selected to be 0.29 Hz (i.e., a period of 3.4 s) is shown in the next figure. This period is long in comparison to the assumed first-order time constant (𝜏 = 1 s) but still below the natural frequency of the second-order system.

Depiction of a first and second order response to a sinusoidal input exhibiting a phase lag and mild damping.

Depiction of a first (M1) and second order (M2) response to a sinusoidal input x, for which the responses show a phase lag and mild damping. Image from NCAR/EOL.

In this example, both response curves (M1 and M2) show attenuated response and a significant time lag between the input and response signals. The time lag leads to a phase lag; i.e., the peaks of the output signal (M1 and M2) waves occur after the peak of the input signal (x). For lower frequencies, the attenuation and phase lag are both reduced, but even at 0.1 Hz, important attenuation and phase lag remain evident in the response curves. There is some transient response at the start of this example because the calculation is started with the measurement and its derivative both set to zero, but the initial transient response no longer has much effect after a few cycles of the sine wave.

Dynamic Response » Response to Specific Input Functions » High-frequency Sine Wave

The next figure shows the response to a 2.5 Hz sine wave. Here the attenuation of both response signals is strong, and the phase lag approaches 90°; i.e., the peak of the output wave occurs one quarter of a period after the peak of the input wave.

Depiction of a high-frequency sine wave with strong attenuation for both the first and second order response

Depiction of a high-frequency sine wave with strong attenuation for both the first and second order response. Image from NCAR/EOL.

In this case (as with the previous example), the transfer function acts as a low-pass filter by attenuating fluctuations that are fast compared to the characteristic response times. It is worth noting, though, that this transfer function isn’t a very good filter because the attenuation changes very slowly over a large frequency range and the transfer function introduces substantial lag over a similar range.
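The phase behavior described in the two sine-wave examples follows from the standard first-order result that the phase lag is arctan(ω𝜏). This small sketch (with 𝜏 = 1 s, as assumed throughout these plots) evaluates that formula at the two example frequencies.

```python
import numpy as np

tau = 1.0  # first-order time constant (s), as in the examples

def phase_lag_deg(f_hz, tau):
    """Phase lag (degrees) of a first-order system driven at frequency f_hz."""
    return np.degrees(np.arctan(2.0 * np.pi * f_hz * tau))

lag_low = phase_lag_deg(0.29, tau)  # low-frequency example: roughly 61 degrees
lag_high = phase_lag_deg(2.5, tau)  # high-frequency example: approaching 90 degrees
```

This confirms the qualitative picture: the lag grows with frequency and saturates near 90°, while never reaching it, which is part of why this transfer function makes a poor filter.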

Using the observed response to controlled signals like step, ramp and sine functions often makes it possible to learn the response characteristics of a sensor. For example, a simple exponential response to a step function can be studied to determine the time constant of that system, as can the offset in response to a ramp function.

Dynamic Response » Response to Specific Input Functions » Consequences for Making Measurements at High Frequency

If good time response is needed for a particular measurement, much attention must be devoted to minimizing the effects of time lags and phase shifts. Time lags introduced by delays in signal lines and instrument components are often difficult to minimize. Another concern when sampling time-series measurements is choosing an appropriate sample rate. If a system samples at a frequency f, it is not possible to detect sine-wave components with a frequency faster than f/2, called the Nyquist frequency.

A graph showing aliasing of an f=0.9 sine wave by an f=0.1 sine wave by sampling at a period of T=1.0.

The black circles indicate the sample points, taken at intervals of T = 1.0 (sampling frequency f). The signal (red line) is varying at a faster rate and can’t be fully captured. Image from Creative Commons.

Furthermore, higher-frequency components can be “aliased” to appear as contaminating contributions at lower resolved frequencies. To avoid this contamination, it is best to remove components above the Nyquist frequency by filtering (using filters with better cutoff characteristics than the dynamic systems illustrated in this lesson). General guidance is to sample fast enough to give a Nyquist frequency significantly above the highest frequency of interest and then filter at or below the sample frequency to eliminate higher-frequency components that might influence the resolved frequency range. Study of the frequency content of signals often is done using spectral analysis, either with appropriate equipment or numerically. This topic is beyond the scope of this lesson, but in some studies it can be crucial to understanding the spectral content of measurements.
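The aliasing in the figure can be verified with a few lines of arithmetic: sampled at 1 Hz (Nyquist frequency 0.5 Hz), a 0.9 Hz sine produces exactly the same sample values as an inverted 0.1 Hz sine, so the two are indistinguishable after sampling.

```python
import numpy as np

n = np.arange(0, 20)  # sample indices at period T = 1.0 s (sample rate f = 1 Hz)
fast = np.sin(2.0 * np.pi * 0.9 * n)  # 0.9 Hz signal, above the 0.5 Hz Nyquist frequency
slow = np.sin(2.0 * np.pi * 0.1 * n)  # 0.1 Hz alias

# sin(2*pi*0.9*n) = sin(2*pi*n - 2*pi*0.1*n) = -sin(2*pi*0.1*n) for integer n,
# so the sampled fast signal equals the inverted slow alias exactly.
aliased_match = np.allclose(fast, -slow)
```

This is why energy at frequencies above the Nyquist frequency must be removed by filtering before sampling: once aliased, it cannot be distinguished from genuine low-frequency content.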

Dynamic Response » Response to Specific Input Functions » Transfer Function Exercise

Use the image below to answer the following question.

Step impulse with 𝜔 = 1 and 𝛄 = 1 response.

Step impulse with a 𝜔 = 1 and 𝛄 = 1 response. Image by NCAR/EOL.

Question

A temperature sensor on an aircraft has first-order dynamic response, and you need to make measurements that respond to abrupt (i.e., step-function) changes by reaching 90% of the final value in 100 m of flight path, while flying 200 m/s. What is the required time constant of the sensor? Answer in seconds (within 10%).

The correct answer is c.

The answer is c: 0.22 seconds, or 0.5 seconds / 2.3, where the factor 2.3 is obtained using the following procedure. Follow the blue line (first-order response) to find the time corresponding to M1 = 0.90; the desired time constant is 0.5 s (i.e., 100 m / 200 m s-1) divided by this value. The answer image key is shown below.

Answer image key corresponding to the figure above, with the solution for x = 0.90 shown to be 2.3.

Answer image key corresponding to figure above, showing the solution for x = 0.90 is 2.3.
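The graphical factor of 2.3 can also be obtained analytically: for a first-order step response, 1 − e^(−t/𝜏) = 0.90 gives t = 𝜏 ln 10 ≈ 2.3𝜏. A quick check:

```python
import numpy as np

distance = 100.0  # flight path available for the response (m)
speed = 200.0     # airspeed (m/s)
t_avail = distance / speed  # 0.5 s available

# 90% response of a first-order sensor requires t = tau * ln(10) ≈ 2.3 * tau,
# so the required time constant is the available time divided by ln(10).
tau = t_avail / np.log(10.0)  # approximately 0.22 s
```

This matches the value read from the response curve.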


Use the image below to answer the following question.

Ramp input with omega equal to 1 and gamma equal to 1. Use this figure to determine the natural frequency required to get a second-order time constant equal to the first-order time constant at critical damping at longer times.

Ramp input with 𝜔 = 1 and 𝛄 = 1 . Use this figure to determine the natural frequency 𝜔 required to get a second-order time constant equal to the first-order time constant at critical damping at longer times.

Question

For a ramp input with slope such that x = t/𝛕, at long times a first-order system will lag behind that input by 𝛕 (in these plots, 1 s), so this is a possible way to find the time constant and to provide corrections for such a system. For a second-order system, the response is more complicated and depends on both the characteristic angular frequency and the damping. However, for critical damping the system (at long time) again lags behind the input signal by a fixed amount, so the lag again can be used to find the characteristic angular frequency ω of this system. What value of 𝛚 is needed so that the lag of a second-order critically damped system is also 1?

The correct answer is b.

The answer is 2.06 s-1 (~ 2.1 s-1). To find the characteristic angular frequency of the system so that it is equal to the first-order system, the second-order response must overlap the first order response at a time after the transient effects have disappeared. For a second-order system with critical damping (𝛄 = 1) to have the same time constant as a first-order system, its characteristic frequency would have to be approximately twice that of a first order system. See the answer image key below.

This image shows that in order to overlap the responses of a second-order system and first-order system so that they would have the same time constant, the characteristic frequency of the second-order system would have to double.

This image shows that in order to overlap the responses of a second-order system and first-order system so that they would have the same time constant, the characteristic frequency of the second-order system would have to double.


Use the image below to answer the following questions.

System response to an impulse (step) function with damping.

System response to an impulse (step) function with damping.

To determine the type of response characteristic of an unknown system, it is often revealing to determine the system response to an impulse function. Suppose you have a measuring system with unknown characteristics, and you find it responds to an impulse as shown.

Question 1

Is this measuring system first-order or second-order? (Answer this question before continuing to the next two questions.)

The correct answer is b.

The response is not a simple exponential approach toward x; the overshoot and oscillation evident in the behavior over time indicate a second-order system.


Dynamic Response » Response to Specific Input Functions » Ramp Input Exercise

Using the image below, estimate the dynamic lag and the dynamic error for a second-order response to a ramp input. You will use the figure to estimate the lag and error between the input (red line) and the measurement (green line).

First-order (M1) and second-order (M2) response to a ramp input.

First-order (M1) and second-order (M2) response to a ramp input.

Question

For 𝛄 = 1, 𝜔n = 1, estimate the dynamic lag for a second-order ramp input.

The correct answer is c.

The dynamic lag is 2 seconds.


Question

For 𝛄 = 1, 𝜔n = 1, estimate the dynamic error in units of the measurand for a second-order ramp input.

The correct answer is b.

You can estimate the dynamic lag 𝚫t and dynamic error 𝚫x using the image below. Using a horizontal line, determine 𝚫t as shown; it is important that this determination is made after the solution has become steady, beyond the transient period. The correct answer is 2𝛕. We can verify this estimate quantitatively: for a second-order response to a ramp input, the dynamic lag is 𝚫t = 2𝛄/𝛚, and since 𝛄 = 1 and 𝜔n = 1 s-1, 𝚫t = 2 seconds. To obtain an estimate of the dynamic error, use a vertical line through the data, as shown, to determine the difference between the input and the measurement at a specific time after the transient solution has disappeared.

The values can be verified on this graph.

Answer to the questions on dynamic lag and dynamic error.

Answer to the questions on dynamic lag and dynamic error. Image from NCAR/EOL.


Dynamic Response » Response to Specific Input Functions » Principle of Superposition

First-order and second-order responses are described by ordinary differential equations (ODEs) that are linear in the dynamic sense, but that does not necessarily imply linearity in the static sense. The static sensitivity may not be constant over the range of the sensor; that is, the transfer curve may be nonlinear, without a constant slope, yet the sensor can still be described by a linear ODE. Because the ODE is linear, the Principle of Superposition applies: the response of a sensor to a sum of inputs is the sum of its responses to the individual inputs (Brock and Richardson, 2001). This additive property is important and enables the Principle of Superposition to be applied to the frequency decomposition of complex signals by superimposing the responses to the individual frequencies.

The Principle of Superposition can be illustrated by considering that the input signal produced by a time-varying measurand can be decomposed into individual sinusoidally-varying signals.

Similarly, individual discrete sources can each produce input signals that add linearly, resulting in a combined input signal that appears to be complex.
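Because a linear ODE underlies the response, the output produced by a sum of inputs equals the sum of the outputs produced by each input alone. The sketch below (a generic illustration, with a simple discrete first-order filter standing in for a sensor) confirms this additive property:

```python
import math

def first_order_response(signal, tau=0.5, dt=0.01):
    """Discrete first-order sensor model: dy/dt = (u - y)/tau, explicit Euler."""
    y, out = 0.0, []
    for u in signal:
        y += (dt / tau) * (u - y)
        out.append(y)
    return out

t = [i * 0.01 for i in range(1000)]
s1 = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]        # 1 Hz component
s2 = [0.5 * math.sin(2 * math.pi * 3.0 * ti) for ti in t]  # 3 Hz component
combined = [a + b for a, b in zip(s1, s2)]                 # superimposed input

resp_of_sum = first_order_response(combined)
sum_of_resps = [a + b for a, b in zip(first_order_response(s1),
                                      first_order_response(s2))]

max_diff = max(abs(a - b) for a, b in zip(resp_of_sum, sum_of_resps))
print(max_diff < 1e-12)  # True: response to the sum equals the sum of responses
```

The same check extends to any number of superimposed frequency components, which is what justifies analyzing a complex signal one frequency at a time.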

An illustration of the Principle of Superposition. The original measurand signal is shown in the bottom-right with the individual decomposed signals as sinusoidal waves and the spectral analysis (energy or energy density) along the frequency axis.

Dynamic Response » Response to Specific Input Functions » Superposition Illustration

Explore the following animations, which illustrate the Principle of Superposition. (For more details, see the Superposition of Waves page created by Daniel A. Russell at The Pennsylvania State University, http://www.acs.psu.edu/drussell/Demos/superposition/superposition.html.)

Opposite Pulses

Superposition of two opposite-direction wave pulses. The animation shows two Gaussian wave pulses travelling in the same medium but in opposite directions. The two waves pass through each other without being disturbed, and the net displacement is the sum of the two individual displacements.

Interference

Two waves (with the same amplitude, frequency, and wavelength) are travelling in the same direction. Using the principle of superposition, the resulting wave displacement is a travelling wave whose amplitude depends on the phase (ϕ). When the two waves are in-phase, they interfere constructively and the result has twice the amplitude of the individual waves. When the two waves have opposite-phase, they interfere destructively and cancel each other out.

Opposite Sine Waves

A travelling wave moves from one place to another, whereas a standing wave appears to stand still, vibrating in place. In this animation, two waves (with the same amplitude, frequency, and wavelength) are travelling in opposite directions. Their superposition is a standing wave: it is no longer a travelling wave because the position and time dependence have been separated.

Different Frequencies

Two waves of equal amplitude are travelling in the same direction. The two waves have different frequencies and wavelengths, but they both travel with the same wave speed. The resulting particle motion is the product of two travelling waves. One part is a sine wave that oscillates with the average frequency, which is the frequency perceived by a listener. The other part is a cosine wave that oscillates at half the difference frequency and forms the slowly varying beat envelope.

General Characteristics of Measurements

General Characteristics of Measurements » Representativeness

According to CIMO (2008), the representativeness of an observation is the degree to which it accurately describes the value of the variable needed for a specific purpose. It is not a fixed quality of any observation but results from joint appraisal of instrumentation, measurement interval, and exposure as these characteristics relate to the requirements of a particular application. Measurements taken at a site will always contain a degree of influence from some upstream area and will be affected not only by natural variables such as wind speed, turbulence, and diurnal heating and cooling, but also by instrument characteristics such as the volume of sample collected and analyzed. Site selection is paramount to achieving the expected representativeness of the spatial-temporal domain relevant to the study (see Foundations to Meteorological Instrumentation and Measurements for more on site selection).

This site was selected to meet the science objective of making spatial measurements of the velocity and scalar turbulence fields in a uniformly vegetated canopy.

General Characteristics of Measurements » Homogeneity

Homogeneity refers to similarity of a particular characteristic over some interval in space or time. There can be considerable variability in underlying properties of a measured quantity, as in a turbulent field, but the field itself may be homogeneous if the characteristics of the turbulence are similar over some interval. There is a relationship between homogeneity and representativeness. It is easier to make representative measurements of a homogeneous characteristic; for example, one sounding may represent conditions over a large area away from particular weather features but may miss important structure if there are weather systems causing significant variations between sounding stations.


As an example of horizontal homogeneity, weather conditions will have the same characteristics regardless of from which direction they are advected within this field of view. Image from R. Clark.

Over a uniform surface and with constant wind, an atmospheric boundary layer develops that has homogeneous properties. However, if the wind then carries the air over land having different properties, the properties of the boundary layer change as it moves toward a different structure that would form over the new surface. The result is the formation of an "internal boundary layer" that increases in depth with distance downwind from the land-surface change.


Depiction of the downwind propagation of internal boundary layers affecting a measurement tower. Image from NCAR/EOL.

General Characteristics of Measurements » Homogeneity » Considerations

Representativeness requires an appropriate match between the scale of the phenomenon being studied and the spatial and temporal coverage of the sensor. It also requires collecting a sufficient and appropriately distributed number of measurements covering that scale. For example, a regularly spaced set of measurements over one minute, where the wind speed is about 10 m/s, is representative of an upstream fetch of 600 meters (10 m/s × 60 s).

The number and spacing of sensors and the sampling frequency required to give representativeness depends on the scale of the phenomenon being studied. An example of a measurand having particularly high temporal and spatial variability is rainfall.


Screenshot of CoCoRaHS precipitation map showing variability in precipitation measurements. The level of rainfall uncertainty is highly dependent on the density and distribution of rain gauge stations over a region. Mishra, A. K. (2013) found that over Bangalore, India, as gauge spacing increases from 5 km to 40 km, correlation coefficients decrease from 82% to 21% while root mean square error (RMSE) increases from 8.29 mm to 51.27 mm. Rainfall measurements at a single gauge also require considering the type of precipitation (e.g., frontal precipitation versus isolated showers), its intensity and duration, the effect of topography, and gauge exposure (i.e., height, whether or not shielded, and surrounding obstructions). Satellite-based rainfall measurements are critical to filling in the inter-gauge gap, but their algorithms have uncertainties due to the use of gauges for ground-truthing. Image from Creative Commons CoCoRaHS.

Sensors mounted aboard vehicles, such as the PECAN mobile mesonet, allow collection of atmospheric data via mobile instrument platforms. Image from R. Clark.

In terms of representativeness, field research presents particular challenges. For example, consider the deployment of mobile instrument platforms needed to characterize the mesoscale features studied during the Plains Elevated Convection At Night (PECAN) campaign. In this study, an array of fixed and mobile sites, some with single instruments and others with an assortment of instruments, was used to establish a representative footprint (shown in the map) covering features of interest to the project.


The domain from the Plains Elevated Convection At Night (PECAN) project (Geerts et al.) Image from NCAR.


A panorama of the observing platforms used in PECAN. Photo courtesy of R. Clark.

Question

What factors need to be considered to determine representativeness?

All of the above must be considered to determine site representativeness.


General Characteristics of Measurements » Uncertainty

The result of a measurement is an estimate of the measurand and is incomplete unless it is accompanied by a quantitative estimate of its uncertainty.

In accordance with the Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (Taylor and Kuyatt, 1994), estimates of uncertainty are classified into two categories, “Type-A” (evaluated by statistical methods) and “Type-B” (evaluated by other means). Uncertainty classification differs from the classification of error as either random or systematic because either type of uncertainty can apply to either type of error. For example, a component in an analysis of uncertainty may be the uncertainty in calibration, for which one component is the estimate that arises from the standard deviation of the calibration data about the calibration curve. This is a Type-A evaluation of an assumed random error in the calibration process, but the resulting uncertainty applies to the systematic error resulting from use of the calibration.

Standard uncertainty is the recommended way of reporting uncertainty, which for Type-A estimates and uncorrelated measurements corresponds to one standard deviation, as given by

u = √[ Σi (xi − x̄)² / (N − 1) ]

where x̄ is the mean of the N measurements xi. The measurement would be reported as x̄ ± u, and the degrees of freedom would be reported as 𝜈 = N − 1.
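As a sketch (with hypothetical repeated measurements, not data from the lesson), the Type-A standard uncertainty and its degrees of freedom can be computed directly:

```python
import statistics

# Hypothetical repeated measurements of the same measurand (e.g., temperature, deg C)
x = [20.12, 20.15, 20.09, 20.14, 20.11, 20.13]

N = len(x)
xbar = statistics.mean(x)
u = statistics.stdev(x)  # one standard deviation: Type-A standard uncertainty
nu = N - 1               # degrees of freedom reported alongside u

print(f"{xbar:.3f} ± {u:.3f} (ν = {nu})")
```

The result would be reported as x̄ ± u together with 𝜈 = N − 1.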

General Characteristics of Measurements » Uncertainty » Evaluating Uncertainty

Type A evaluation of standard uncertainty may be based on any valid statistical method for analyzing data. The analysis of variance (ANOVA) is particularly useful since it considers both random and systematic effects between groups relative to the random effects within groups.

For Type-B evaluations, the aim is for the quoted uncertainty to correspond to a coverage probability similar to that of a Type-A estimate. Type-B estimates are difficult to defend as quantitative measures and cannot be assigned rigorous confidence limits.

A Type B evaluation of standard uncertainty is usually based on scientific judgment using all the relevant information available about the measurand and the instrument, such as:

  • Previous measurement data
  • Experience with the properties and behavior of instruments
  • Manufacturer’s specifications
  • Calibration data
  • Uncertainties associated with reference data
Source: Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results (Taylor and Kuyatt, 1994).

The combined standard uncertainty of a measurement result uC is obtained by combining the individual standard uncertainties whether arising from Type A or Type B evaluations.
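When the individual components are uncorrelated, they combine as a root-sum-of-squares. A minimal sketch with hypothetical component values:

```python
import math

# Hypothetical uncorrelated uncertainty components for one measurement result;
# each may come from either a Type A or a Type B evaluation.
u_calibration = 0.10  # e.g., Type A: scatter of the data about the calibration curve
u_resolution = 0.03   # e.g., Type B: instrument resolution from the data sheet
u_reference = 0.05    # e.g., Type B: uncertainty of the reference standard

u_c = math.sqrt(u_calibration**2 + u_resolution**2 + u_reference**2)
print(round(u_c, 3))  # -> 0.116
```

Correlated components require the covariance terms of the law of propagation of uncertainty, described in the next section.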

Question

Select the type of uncertainty (Type A or Type B) referred to by the statement.

a) Affected by previous measurement data or reference data uncertainties
b) Uncertainty in calibration
c) Evaluation based on statistical methods
d) Evaluation based on scientific judgement using relevant information



General Characteristics of Measurements » Law of Propagation of Uncertainty

So how can the joint effects of many sources of uncertainty be understood? First, apply all corrections (e.g., from calibration) to compensate for the known systematic errors. The relevant uncertainty to associate with each correction is the standard uncertainty of the applied correction.

Note: The uncertainty of a correction applied to a measurement result in order to compensate for the systematic effect is NOT the systematic error in the result for that effect. Rather, it is a measure of the uncertainty of the result due to incomplete knowledge of the required value of the correction. The difference between error and uncertainty should always be kept in mind. For example, the result of a measurement after correction can unknowably be very close to the unknown value of the measurand, and thus have a negligible error, even though it may have a large uncertainty.
(From NIST Guidelines).

After compensating for the systematic errors, the combined standard uncertainty can be estimated using the law of propagation of uncertainty, since in many cases the measurand Y (using NIST notation) is not measured directly, but is determined from N other quantities X1, X2, ..., XN such that

Y = f(X1, X2, ..., XN)

Included among the quantities needed to determine Y are the corrections for systematic errors and the correlations among the basic measurements used to determine Y.

In keeping with the NIST notation, the estimate of the measurand (output quantity) Y, denoted by y, is obtained from input quantity estimates x1, x2, ..., xN using the same functional relationship as above: y = f(x1, x2, ..., xN). The combined standard uncertainty uc(y) of the measurement result y is taken as the positive square root of the estimated variance uc²(y), obtained from

uc²(y) = Σi (∂f/∂xi)² u²(xi) + 2 Σi Σj>i (∂f/∂xi)(∂f/∂xj) u(xi, xj)

where u(xi) is the standard uncertainty associated with the estimate xi and u(xi, xj) is the covariance associated with xi and xj. The partial derivatives ∂f/∂xi are the sensitivity coefficients (static sensitivities) of the relationship between y and xi. The equation above is based on a first-order Taylor series approximation of Y = f(X1, X2, ..., XN) and is referred to as the Law of Propagation of Uncertainty.

Note: It is recommended that the number of degrees of freedom associated with an uncertainty estimate (e.g., N − 1, where N is the number of measurements entering a standard deviation) be reported where applicable. This is often omitted when large, but when small it can affect how the uncertainty estimate is used. Some recommendations on determining the degrees of freedom for uncertainty estimates arising from many components are contained in the NIST document referenced earlier; see the Welch-Satterthwaite formula.

General Characteristics of Measurements » Law of Propagation of Uncertainty » Example: Ohm’s Law

Let’s consider the example of Ohm’s Law, P = I²R, where P is power, I is current, and R is resistance. The combined uncertainty uc in the power measurement is given by the following equation:

uc²(P) = (∂P/∂I)² u²(I) + (∂P/∂R)² u²(R) + 2 (∂P/∂I)(∂P/∂R) u(I, R)

where the sensitivity coefficients are

∂P/∂I = 2IR

and

∂P/∂R = I²

and u²(I) and u²(R) are the variances of the current and resistance and u(I, R) is their covariance.

A set of measurements produces the following estimates of the measurands (current and resistance) during a calibration test.

I = 15.000 Amps

R = 1.0500 Ω

Variance(I) = 8.28E-06 A2

Variance(R) = 4.6888E-09 Ω2

Covariance(I, R) = -5.2E-09 A Ω

The uncertainty in the power measurement, P, can be computed from the following equation:

uc²(P) = (2IR)² u²(I) + (I²)² u²(R) + 2 (2IR)(I²) u(I, R)

Use the values above to solve for uc²(P).

Question

The correct value for uc²(P) is:

The correct answer is d.

uc²(P) = (31.5)²(8.28 × 10⁻⁶) + (225)²(4.6888 × 10⁻⁹) + 2(31.5)(225)(−5.2 × 10⁻⁹) ≈ 8.38 × 10⁻³ W²

Taking the square root, the standard uncertainty (uc) on the power, P, = ±0.0915 Watts.
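The arithmetic can be verified with a short script (a sketch that simply substitutes the values above into the propagation formula):

```python
import math

I, R = 15.000, 1.0500  # measured current (A) and resistance (ohm)
var_I = 8.28e-06       # variance of I (A^2)
var_R = 4.6888e-09     # variance of R (ohm^2)
cov_IR = -5.2e-09      # covariance of I and R (A*ohm)

dP_dI = 2 * I * R      # sensitivity coefficients for P = I**2 * R
dP_dR = I**2

var_P = dP_dI**2 * var_I + dP_dR**2 * var_R + 2 * dP_dI * dP_dR * cov_IR
u_c = math.sqrt(var_P)
print(round(var_P, 5), round(u_c, 4))
```

With these inputs, uc²(P) ≈ 8.38 × 10⁻³ W², so uc(P) ≈ 0.0915 W.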


General Characteristics of Measurements » Reporting Uncertainty

It is good practice to report random and systematic effects separately, as random effects influence the measurements differently while systematic effects influence each measurement in the same way. For instance, a zero offset (a systematic effect) will affect two measurements in the same way so that a property derived from taking the difference between the measurements will not be affected. On the other hand, that same property will be affected by the random effects on each measurement. For this reason, quantitative random uncertainty must be reported with every measurement. The convention is to use a multiple of the standard deviation, ±1𝜎, ±2𝜎, ... etc., which would be interpreted as a probability of 32% (±1𝜎) or 5% (±2𝜎) that the true (unknowable) value lies outside these uncertainties.


Illustration of zero offset error. Image by NCAR/EOL.

Errors can be reported as absolute values or as percentages; the choice is often a matter of preference. If everything remains the same during the measurement period, it makes sense to report the uncertainty in absolute units. Conversely, if instruments, measurement techniques, or other factors change during the measurement period, making it necessary to determine the combined uncertainty from the law of propagation of uncertainty, it is advisable to use percentages. The uncertainty can be reported in terms of standard error as

  1. ± X% of full range
  2. ± X% ( 100 𝜎/x̄ ) or fraction ( 𝜎/x̄ ) of the mean
  3. ± 𝜎

General Characteristics of Measurements » Uncertainty Associated with Representativeness

Uncertainty can also arise from issues with measurement representativity. Errors associated with representativeness more often appear as random errors except in cases where the correlation between a specific feature (e.g., large upwind body of water) and the measurement (wind direction, for example) introduces a systematic effect in the measurement. In this example, the directional persistence of the wind moving over the body of water will bring cooler and more moist conditions which would reveal a directional bias in the data.

Off-shore winds moving over the river adjacent to a city may cause directional bias in measurements.

Off-shore winds moving over the river adjacent to a city may cause directional bias in measurements. Image from NCAR/EOL

Representativeness requires an interpretation of the natural spatial-temporal variability of the measurand. For example, where was an air parcel earlier, what changes occurred during its advection to the observation site, and how is the value of the measurand correlated with its values at other times and locations? Such correlations must be considered when estimating uncertainty because measurements cannot be considered independent in the presence of such correlations.

Exploratory statistical correlations for paired data can be very useful in determining correlations arising from representativity. The Ordinary or Pearson Correlation coefficient is given by

r = Cov(x, y) / (sx sy)

where Cov(x,y) is the covariance of x and y and sx and sy are the sample standard deviations. For testing temporal representativity, the autocorrelation function can be used to compute correlations for various lags. The autocorrelation for a time lag of 𝜏 samples is given by

r(𝛕) = Cov(xt, xt+𝛕) / [s(xt) s(xt+𝛕)]

where Cov(x, y) = E[(x − x̄)(y − ȳ)] and E denotes the expected value.
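Both statistics are straightforward to compute. The sketch below (illustrative code with a synthetic periodic signal, not lesson data) implements the Pearson correlation and uses it to form the lagged autocorrelation:

```python
import math
import statistics

def pearson_r(x, y):
    """Ordinary (Pearson) correlation coefficient: Cov(x, y) / (s_x * s_y)."""
    n = len(x)
    xbar, ybar = statistics.mean(x), statistics.mean(y)
    cov = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / (n - 1)
    return cov / (statistics.stdev(x) * statistics.stdev(y))

def autocorr(x, lag):
    """Autocorrelation of a series with itself shifted by `lag` samples."""
    return pearson_r(x[:-lag], x[lag:])

# A signal with a 20-sample period is perfectly correlated with itself at lag 20.
x = [math.sin(2 * math.pi * i / 20) for i in range(200)]
print(round(autocorr(x, 20), 6))  # -> 1.0
```

Computing the autocorrelation over a range of lags and locating the lag at which it falls to 1/e gives the correlation time used below.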

Let’s consider an example in which we measure component winds (u, v, w) for an hour with a sonic anemometer at 20 Hz. This sampling frequency yields 72,000 measurements. The autocorrelation shows that the measurements have a correlation time of 0.5 minutes; that is, the e-folding time for the autocorrelation coefficient to decrease to 1/e ≈ 0.37 is 0.5 minutes.

Question

In this situation, how many independent measurements do we have? Select the best answer.

The correct answer is b.

In this situation, we get an independent measurement every 0.5 minutes, so we will have 120 independent measurements instead of 72,000.
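The arithmetic behind this answer can be sketched as:

```python
duration_s = 3600.0  # one hour of sampling
rate_hz = 20.0       # sonic anemometer sampling frequency
corr_time_s = 30.0   # correlation (e-folding) time of 0.5 minutes

n_raw = int(duration_s * rate_hz)              # total measurements collected
n_independent = int(duration_s / corr_time_s)  # one independent value per correlation time

print(n_raw, n_independent)  # -> 72000 120
```

Uncertainty estimates based on counting samples should use the 120 independent values, not the 72,000 raw ones.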


For measurements to be representative, atmospheric processes in general must be relatively homogeneous, or vary linearly, across the spatial and temporal domains of an instrument network. Representativeness cannot be defined by an observation or a specific parameter, but results from an assessment of the instrumentation, sampling frequency, exposure, and the application.

Summary

This lesson summarized the key performance characteristics of instrumentation used for meteorological measurements. Measurements are necessary to obtain quantitative information about the atmosphere. A good measurement system returns a robust, stable, and reliable output at a scale and resolution useful for quantifying the phenomenon under investigation. Meteorological measurement techniques are a field of steady change and progress, driven largely but not solely by advances in electronics, microelectronics, and computing (Emeis, 2010). Students and practitioners of instrumentation, measurement, and observing systems need to keep abreast of these changes to maintain competency. The usefulness of a measurement result is largely determined by the quality of the statement of uncertainty that accompanies it.


A research scientist on board the NSF/NCAR C-130 aircraft closely monitors her chemistry instrument to ensure its performance characteristics are optimal for quality data. Image from NCAR/EOL.

Static performance characteristics of an instrument include range, span, resolution, static sensitivity, linearity, stability, and sensor threshold. Dynamic performance characteristics include the time constant, sensor time lag, hysteresis, and first-order transfer functions. The selection of a site in terms of representativity and homogeneity can affect these performance characteristics.

The importance of calibration was also emphasized as the key step in determining the relationship between a measurand and the output of an instrument. The best calibration process uses a traceable standard that provides values of the measurand covering the measurement range.

Differences between first order dynamic systems and higher order dynamic systems were described. The lesson also covered the principle of superposition and its usefulness in signal analysis, the characterization of uncertainty, the differences between Type A and Type B estimates of uncertainty, and the propagation of uncertainty.

Thank you for completing this lesson. Test your knowledge of the main concepts presented in this training by completing the lesson quiz and share your feedback with us via the user survey.

References

Brock, F. V., & Richardson, S. J. (2001). Meteorological measurement systems. Oxford: Oxford University Press.

Cable, M. (1989). Calibration principles: A technician’s guide (Vol. 1, ISA Technician Series). Raleigh: International Society of Automation.

DeFelice, T. P. (2000). An introduction to meteorological instrumentation and measurement. Upper Saddle River, NJ: Prentice-Hall.

Emeis, S. (2010). Measurement methods in atmospheric sciences: in situ and remote; with 28 tables. Stuttgart: Borntraeger.

Mishra, A. K. (2013). Effect of rain gauge density over the accuracy of rainfall: a case study over Bangalore, India. SpringerPlus, 2, 311. http://doi.org/10.1186/2193-1801-2-311

Nappo, C.J., J.Y. Caneill, R.W. Furman, F.A. Gifford, J.C. Kaimal, M.L. Kramer, T.J. Lockhart, M.M. Pendergast, R.A. Pielke, D. Randerson, J.H. Shreffler, and J.C. Wyngaard, 1982: The workshop on the representativeness of meteorological observations, June 1981, Boulder, Colorado, Bull. Amer. Meteor. Soc., 63, 761-764.


World Meteorological Organization, Guide to Meteorological Instruments and Methods of Observation (CIMO guide), WMO-No. 8, 2014, 1139 pp.

Contributors

COMET Sponsors

MetEd and the COMET® Program are a part of the University Corporation for Atmospheric Research's (UCAR's) Community Programs (UCP) and are sponsored by NOAA's National Weather Service (NWS), with additional funding by:

To learn more about us, please visit the COMET website.

The Instrumentation and Measurement of Atmospheric Parameters training series has been made possible by support from the National Science Foundation (NCAR Award #1642735 and Millersville University Award #1642643).

Project Contributors

Project Manager
  • Alison Rockwell, NCAR/EOL
Principal Science Contributors
  • Dr. Richard Clark, Millersville University
  • Dr. William Cooper, NCAR/EOL
Science Advisors
  • Dr. Teresa Campos, NCAR/EOL
  • Dr. Julie Haggerty, NCAR/EOL
  • Dr. Jorgen Jensen, NCAR/EOL
  • Dr. Steven Oncley, NCAR/EOL
  • Dr. Holger Vömel, NCAR/EOL
  • Cory Wolff, NCAR/EOL
COMET Project Leads
  • Amy Stevermer, UCAR/COMET
  • Dr. Alan Bol, UCAR/COMET
Instructional Design
  • Amy Stevermer, UCAR/COMET
  • Dr. Alan Bol, UCAR/COMET
Graphics/Animations
  • Steve Deyo, UCAR/COMET
Multimedia Authoring/Interface Design
  • Gary Pacheco, UCAR/COMET
  • Sylvia Quesada, UCAR/COMET
Technical Contributors
  • Lindsay Johnson, UCAR/COMET
  • Marianne Weingroff, UCAR/COMET

COMET Staff, November 2017

Director's Office
  • Dr. Elizabeth Mulvihill Page, Director
  • Tim Alberta, Assistant Director Operations and IT
  • Paul Kucera, Assistant Director International Programs
Business Administration
  • Lorrie Alberta, Administrator
  • Tara Torres, Program Coordinator
IT Services
  • Bob Bubon, Systems Administrator
  • Joshua Hepp, Student Assistant
  • Joey Rener, Software Engineer
  • Malte Winkler, Software Engineer
Instructional Services
  • Dr. Alan Bol, Scientist/Instructional Designer
  • Tsvetomir Ross-Lazarov, Instructional Designer
International Programs
  • Rosario Alfaro Ocampo, Translator/Meteorologist
  • Bruce Muller, Project Manager
  • David Russi, Translations Coordinator
  • Martin Steinson, Project Manager
Production and Media Services
  • Steve Deyo, Graphic and 3D Designer
  • Dolores Kiessling, Software Engineer
  • Gary Pacheco, Web Designer and Developer
  • Sylvia Quesada, Production Assistant
Science Group
  • Dr. William Bua, Meteorologist
  • Patrick Dills, Meteorologist
  • Bryan Guarente, Instructional Designer/Meteorologist
  • Matthew Kelsch, Hydrometeorologist
  • Erin Regan, Student Assistant
  • Andrea Smith, Meteorologist
  • Amy Stevermer, Meteorologist
  • Vanessa Vincente, Meteorologist
