There are several ways to calibrate an instrument depending on the type of instrument and the chosen calibration scheme. There are two general calibration schemes:
Calibration by comparison with a source of known value. An example of a source calibration scheme is calibrating an ohmmeter against a calibrated reference standard resistor. The reference resistor provides (sources) a known value of the ohm, the desired calibration parameter. A more sophisticated calibration source than a single resistor is a multifunction calibrator, which can source known values of resistance, voltage, current, and possibly other electrical parameters. A variant of the source-based scheme is calibrating the DUT against a source of known natural value, such as the chemical melt or freeze temperature of a material like pure water.
Calibration by comparison of the DUT measurement with the measurement from a calibrated reference standard. For example, a resistance calibration can be performed by measuring a resistor of unknown (uncalibrated) value with both the DUT instrument and a reference ohmmeter; the two measurements are compared to determine the error of the DUT.
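Whichever scheme is used, the underlying arithmetic is the same: the DUT reading is compared against a known value to obtain the DUT error. A minimal Python sketch, with illustrative numbers and a hypothetical function name:

```python
def dut_error(dut_reading: float, reference_value: float) -> float:
    """Error of the device under test (DUT) relative to a known value.

    In the source scheme, `reference_value` is the calibrated value of the
    source (e.g. a reference standard resistor). In the comparison scheme,
    it is the reading taken by the calibrated reference instrument on the
    same uncalibrated artifact.
    """
    return dut_reading - reference_value


# Example: a DUT ohmmeter reads 100.37 ohm on a 100.02 ohm reference resistor.
print(f"DUT error: {dut_error(100.37, 100.02):+.2f} ohm")   # DUT error: +0.35 ohm
```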
From this basic set of calibration schemes, the calibration options expand with each measurement discipline.
A calibration starts with the basic step of comparing a known with an unknown to determine the error or value of the unknown quantity. In practice, however, a complete calibration process may consist of "as found" verification, adjustment, and "as left" verification.
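A toy sketch of that sequence, assuming a simulated instrument interface (the SimulatedDUT class, its read/adjust methods, and the single pass/fail tolerance are all invented for illustration):

```python
class SimulatedDUT:
    """Toy stand-in for a real instrument; a constant offset plays the role of its error."""
    def __init__(self, offset=0.03):
        self.offset = offset

    def read(self, applied_value):
        return applied_value + self.offset

    def adjust(self, errors):
        # Null out the mean observed error (stands in for a physical,
        # electrical, or firmware adjustment).
        self.offset -= sum(errors.values()) / len(errors)


def calibrate(dut, reference_points, tolerance):
    # "As found" verification: record the DUT error at each known point.
    as_found = {ref: dut.read(ref) - ref for ref in reference_points}

    # Adjust only if some point is out of tolerance.
    adjusted = any(abs(err) > tolerance for err in as_found.values())
    if adjusted:
        dut.adjust(as_found)

    # "As left" verification: repeat the measurements after an adjustment.
    as_left = {ref: dut.read(ref) - ref for ref in reference_points} if adjusted else as_found
    return as_found, as_left


as_found, as_left = calibrate(SimulatedDUT(), reference_points=[1.0, 5.0, 10.0], tolerance=0.01)
print(as_found)   # errors of about +0.03 at each point
print(as_left)    # errors of about 0.0 after adjustment
```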
Many measurement devices are adjusted physically (by turning an adjustment screw on a pressure gauge), electrically (by turning a potentiometer in a voltmeter), or digitally (by changing internal firmware settings in a digital instrument).
For some devices, the data obtained during calibration is maintained with the device as correction factors, which the user may apply to compensate for the device's known error. RF attenuators are an example: their attenuation values are measured across a frequency range, and the data is kept with the instrument in the form of correction factors that the end user applies to improve the quality of their measurements. This practice assumes that the device in question will not drift significantly, so that the corrections remain within the measurement uncertainty stated at calibration for the length of the calibration interval. It is a common mistake to assume that all calibration data can be used as correction factors; for many devices, the short-term and long-term variation over the calibration interval may be greater than the measurement uncertainty achieved during the calibration.
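A sketch of how an end user might apply such correction factors, assuming linear interpolation between calibration points (the frequencies and attenuation values are invented):

```python
import numpy as np

# Calibration data for a hypothetical nominal 20 dB attenuator: measured
# attenuation at a handful of frequencies, as reported on the calibration certificate.
cal_frequencies_hz = np.array([1e6, 10e6, 100e6, 1e9])
measured_attenuation_db = np.array([20.02, 20.03, 20.07, 20.15])


def corrected_attenuation(frequency_hz: float) -> float:
    """Calibrated attenuation at the frequency of interest, by linear
    interpolation between the certificate's calibration points."""
    return float(np.interp(frequency_hz, cal_frequencies_hz, measured_attenuation_db))


# The user substitutes the corrected value for the 20 dB nominal value
# when working up their own measurement results.
print(corrected_attenuation(500e6))
```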
Non-adjustable instruments, sometimes referred to as "artifacts", such as resistance temperature detectors (RTDs), resistors, and Zener diodes, are often calibrated by characterization. Calibration by characterization usually produces a mathematical relationship that allows the user to convert the instrument's readings into calibrated values. These relationships range from simple error offsets calculated at different levels of the required measurement, such as different temperature points for a thermocouple thermometer, to a slope-and-intercept correction algorithm for a digital voltmeter, to the complex polynomials used to characterize reference standard radiation thermometers.
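For instance, a slope-and-intercept characterization reduces to a least-squares fit that is then inverted to turn raw readings into calibrated values; the reference values and readings below are invented, and higher-order polynomial characterizations follow the same pattern with more coefficients:

```python
import numpy as np

# Known reference values applied to the instrument during characterization,
# and the raw readings it returned (illustrative numbers only, in volts).
reference = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
readings = np.array([0.003, 2.508, 5.012, 7.517, 10.021])

# Fit reading = slope * reference + intercept ...
slope, intercept = np.polyfit(reference, readings, 1)


# ... then invert the fit so any raw reading can be corrected.
def calibrated_value(raw_reading: float) -> float:
    return (raw_reading - intercept) / slope


print(calibrated_value(5.012))   # approximately 5.000
```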
The "as left" verification step is required any time an instrument is adjusted, to confirm that the adjustment worked correctly. Artifact instruments cannot be adjusted and are simply measured "as-is", so the separate "as found" and "as left" steps do not apply.
A calibration professional performs calibration by using a calibrated reference standard of known uncertainty (established through the calibration traceability pyramid) to compare with the device under test. The professional records the readings from the device under test, compares them with the readings from the reference standard, and may then adjust the device under test to correct any error found.
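In tabular form, the comparison and the resulting adjust-or-not decision might look like the following (readings and tolerances invented for illustration):

```python
# (reference reading, DUT reading, tolerance) at each test point
test_points = [
    (10.000, 10.004, 0.010),
    (50.000, 50.013, 0.025),
    (100.000, 100.062, 0.050),
]

for reference, dut, tolerance in test_points:
    error = dut - reference
    status = "PASS" if abs(error) <= tolerance else "FAIL - adjust"
    print(f"ref={reference:8.3f}  dut={dut:8.3f}  error={error:+.3f}  {status}")
```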