The smallest change in an indicator that can be perceived is difficult to characterize for an analog display, but it is straightforward for a digital display. For example, consider a transducer that produces voltages in the range 0–5 V, connected to an analog-to-digital converter that produces a 12-bit signal covering this full range. Because 2¹² = 4096, there are 4096 increments available, so the smallest voltage difference that can be indicated is 5 / 4096 ≈ 0.00122 V. This limits the resolution to no better than 1.22 mV, and that value is often quoted as the resolution of the instrument. However, if the instrumental precision is larger than this, the smallest increment that can actually be distinguished will also be larger, as discussed in the preceding section. In that case the digitizer resolution is a poor measure of instrumental resolution, yet many instrument specifications report only the digitizer resolution.
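The arithmetic above can be sketched in a few lines of Python; the function name `lsb_size` is an illustrative choice, not part of any standard library:

```python
# Minimal sketch: voltage corresponding to a 1-bit change (the LSB)
# for an ideal analog-to-digital converter.
def lsb_size(full_scale_volts: float, n_bits: int) -> float:
    """Smallest voltage difference an n-bit ADC can indicate."""
    return full_scale_volts / (2 ** n_bits)

delta = lsb_size(5.0, 12)          # 5 / 4096 ≈ 0.00122 V
print(f"{delta * 1e3:.2f} mV")     # → 1.22 mV
```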
The effect of digital resolution on measurements of a smoothly varying measurand depends on the number of increments available over the full range. At 4-bit resolution the 16 increments are obvious, but at 6-bit resolution the 64 levels are harder to distinguish. A-to-D devices commonly provide at least 12-bit resolution to avoid significant distortion of measurements. If there are no comparable or larger contributions to the precision, the digitizer resolution contributes to the precision of the instrument as σ = Δ / √12, where Δ is the change in measurand corresponding to a 1-bit change in digitized output (e.g., 1.22 mV in the example in the previous paragraph). This relationship applies to encoders using any number of bits, because the factor √12 arises from the standard deviation of values distributed uniformly at random over an interval of one bit.