It is good practice to report random and systematic effects separately, because they act differently: random effects influence each measurement independently, whereas systematic effects influence every measurement in the same way. For instance, a zero offset (a systematic effect) shifts two measurements equally, so a property derived from the difference between the measurements is unaffected; that same property is, however, still affected by the random effects on each measurement. For this reason, a quantitative random uncertainty must be reported with every measurement. The convention is to quote a multiple of the standard deviation, ±1𝜎, ±2𝜎, etc., interpreted as a probability of roughly 32% (±1𝜎) or 5% (±2𝜎) that the true (unknowable) value lies outside these bounds.
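The coverage probabilities quoted above follow from the Gaussian error model. As a minimal sketch, assuming normally distributed measurement errors, the probability of the true value lying outside ±k𝜎 can be computed from the error function in the Python standard library:

```python
from math import erf, sqrt

def fraction_outside(k: float) -> float:
    """Probability that a normally distributed measurement falls
    more than k standard deviations from the true value."""
    # P(|Z| <= k) = erf(k / sqrt(2)) for a standard normal Z
    return 1.0 - erf(k / sqrt(2.0))

for k in (1, 2, 3):
    print(f"+/-{k} sigma: {fraction_outside(k):.1%} outside")
# +/-1 sigma: 31.7% outside
# +/-2 sigma: 4.6% outside
# +/-3 sigma: 0.3% outside
```

The exact values (31.7% and 4.6%) are commonly rounded to 32% and 5% when reporting ±1𝜎 and ±2𝜎 uncertainties.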
Uncertainties can be reported as absolute values or as percentages, and the choice is often a matter of preference. If conditions remain the same throughout the measurement period, it makes sense to report the uncertainty in absolute units. Conversely, if changes of instruments, measurement techniques, or other factors during the measurement period make it necessary to determine the combined uncertainty from the law of propagation of uncertainty, percentages are advisable. The uncertainty can be reported in terms of the standard error as