Point 1: It is important to distinguish between error and uncertainty. Error is defined as the difference between an individual measurement result and the true value; it is therefore a single value. In principle, the value of a known error can be applied as a correction to the result. Note, however, that error is an idealized concept: since the true value is never known exactly, neither is the error.
Point 2: Uncertainty, by contrast, is expressed as an interval and, when estimated for an analytical procedure and a defined sample type, applies to all results so described. In general, the value of the uncertainty cannot be used to correct a measurement result.
Point 3: The distinction is also reflected in the following: the result of an analysis after correction may happen to be very close to the true value, so that its error is negligible, yet the uncertainty may still be large, simply because the analyst is unsure how close the result actually is to that value.
Point 4: The uncertainty of a measurement result should never be interpreted as representing the error itself, or the error remaining after correction.
Point 5: Error is generally regarded as having two components, a random component and a systematic component.
Point 6: Random error typically arises from unpredictable variations of influence quantities. These random effects cause the results of repeated observations of the measurand to vary. The random error of an analytical result cannot be eliminated, but it can usually be reduced by increasing the number of observations.
Note that the experimental standard deviation of the arithmetic mean of a series of observations is not the random error of the mean, although it is so designated in some publications on uncertainty. It is instead a measure of the uncertainty of the mean due to random effects; the exact value of the error in the mean arising from these effects cannot be known.
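As a minimal sketch of the two points above (the replicate values below are invented for illustration), the experimental standard deviation of the mean is computed as s/√n and read as a standard uncertainty of the mean, not as its random error:

```python
import math
import statistics

# Hypothetical replicate results for one sample (units arbitrary).
replicates = [10.12, 10.05, 9.98, 10.20, 10.07, 10.01]

n = len(replicates)
mean = statistics.fmean(replicates)
s = statistics.stdev(replicates)   # experimental standard deviation
u_mean = s / math.sqrt(n)          # standard deviation of the mean:
                                   # an uncertainty, not the random error

print(f"mean = {mean:.3f}, s = {s:.3f}, u(mean) = {u_mean:.3f}")
```

Because u(mean) shrinks as √n grows, taking more observations reduces the uncertainty of the mean due to random effects, consistent with Point 6.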
Point 7: Systematic error is defined as the component of error which, over a large number of analyses of the same measurand, remains constant or varies in a predictable way. It is independent of the number of measurements made and therefore cannot be reduced by increasing the number of analyses under constant measurement conditions.
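A small simulation (all numbers invented for illustration) makes the contrast with random error concrete: averaging more observations shrinks the random scatter of the mean, but a constant bias survives averaging untouched:

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 10.0
BIAS = 0.3     # constant systematic error (illustrative)
SIGMA = 0.5    # spread of the random error (illustrative)

def mean_of_n(n):
    """Mean of n simulated results carrying both error components."""
    return statistics.fmean(
        TRUE_VALUE + BIAS + random.gauss(0.0, SIGMA) for _ in range(n)
    )

# With many observations the mean settles near TRUE_VALUE + BIAS,
# not near TRUE_VALUE: the systematic component is not averaged away.
print(mean_of_n(10) - TRUE_VALUE)
print(mean_of_n(100_000) - TRUE_VALUE)
```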
Point 8: Constant systematic errors, such as failing to allow for the reagent blank in a quantitative analysis, or inaccuracies in a multi-point instrument calibration, are constant at a given measurement level but may vary with the measurement level.
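A sketch of the first case, with invented absorbance numbers: the blank contributes a fixed offset, so omitting it produces a constant absolute error at every level, while the relative effect shrinks at higher levels:

```python
BLANK = 0.031  # hypothetical reagent-blank signal

def blank_corrected(reading, blank=BLANK):
    """Subtract the constant reagent-blank contribution."""
    return reading - blank

# Same absolute offset at every level; different relative impact.
for reading in (0.10, 0.50, 1.50):
    relative_error = BLANK / reading
    print(reading, blank_corrected(reading), round(relative_error, 3))
```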
Point 9: Influence factors that change systematically in magnitude over a series of analyses, caused for example by inadequate control of experimental conditions, give rise to systematic errors that are not constant.
Examples:
1. A gradual increase in the temperature of a set of samples during a chemical analysis can lead to progressive changes in the results.
2. Sensors and probes that age over the course of an experiment can introduce non-constant systematic errors.
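The first example can be sketched numerically (all values invented): a linear drift superimposed on a measurement series makes the error grow systematically through the series instead of scattering randomly around zero:

```python
TRUE_VALUE = 10.0
DRIFT_PER_RUN = 0.02  # per-run shift, e.g. from rising temperature

# Results of ten consecutive runs affected only by the drift.
results = [TRUE_VALUE + DRIFT_PER_RUN * i for i in range(10)]
errors = [r - TRUE_VALUE for r in results]

# The error is zero at the start and increases steadily: a
# non-constant systematic error, not random scatter.
print(errors)
```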
Point 10: All recognized significant systematic effects on a measurement result should be corrected. Measuring instruments and systems often require adjustment or calibration against measurement standards or reference materials to correct for systematic effects. The uncertainties associated with those standards or reference materials, and the uncertainty of the correction itself, must be taken into account.
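As a minimal sketch of the last sentence (all values invented), a bias correction derived from a certified reference material carries an uncertainty that combines, in quadrature for uncorrelated inputs, the uncertainty of the measured mean and that of the certified value:

```python
import math

measured_ref = 9.85   # mean result obtained for the reference material
certified = 10.00     # certified value of the reference material
u_measured = 0.04     # standard uncertainty of the measured mean
u_certified = 0.03    # standard uncertainty of the certified value

correction = certified - measured_ref

# Uncorrelated contributions combine in quadrature.
u_correction = math.sqrt(u_measured**2 + u_certified**2)
print(correction, u_correction)
```

The combined value u_correction is then carried into the uncertainty budget of every corrected result, which is why calibrating against a standard never removes uncertainty entirely.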
Point 11: A further type of error is a spurious, or gross, error. Errors of this kind invalidate a measurement and typically arise from human failings or instrument malfunction. Transposing digits while recording data, an air bubble lodged in a spectrometer flow-through cell, and accidental cross-contamination between samples are common examples.
Point 12: Measurements subject to spurious errors are invalid and should not be incorporated into any statistical analysis. However, errors arising from digit transposition can sometimes be corrected exactly, particularly when they occur in the leading digit.
Point 13: Spurious errors are not always obvious. Where a sufficient number of replicate measurements is available, an outlier test should normally be applied to check for suspect members of the data set. Any positive result from an outlier test should be treated with caution and, where possible, checked with the originator of the data. In general, a value should not be rejected on statistical grounds alone.
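One widely used screen for small replicate sets is Dixon's Q test; the sketch below (data invented; critical values are the commonly tabulated 95 % two-sided ones) flags the most extreme value for investigation rather than automatic rejection:

```python
# 95 % two-sided critical values of Dixon's Q for n = 3..7.
Q_CRIT_95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568}

def q_test(data):
    """Return (Q, suspect value, flagged?) for the most extreme point."""
    x = sorted(data)
    spread = x[-1] - x[0]
    q_low = (x[1] - x[0]) / spread      # gap below, relative to range
    q_high = (x[-1] - x[-2]) / spread   # gap above, relative to range
    if q_low >= q_high:
        q, suspect = q_low, x[0]
    else:
        q, suspect = q_high, x[-1]
    return q, suspect, q > Q_CRIT_95[len(x)]

# Replicate results with one suspiciously high value (illustrative).
q, suspect, flagged = q_test([10.02, 10.05, 9.98, 10.01, 10.45])
print(q, suspect, flagged)
```

A True flag here means "investigate this point", for instance by consulting the analyst's records, not "delete it", in line with the caution above.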
Point 14: Uncertainty estimates obtained in the usual way make no allowance for the possibility of spurious errors.