A metrological traceability chain [1] is of enormous value: it shows where a measurement result comes from. One could say that it shows the “trace” which analysts have chosen and along which a measurement result comes to them, i.e. the ‘reference’ [2] to which the result is metrologically “trace”-able. In the simplest case, the chain leads to the (definition of the) measurement unit. A measurement unit [3] has zero measurement uncertainty because it is not itself measured; it therefore, of necessity, ends the metrological traceability chain.

The inverse of this metrological traceability chain is a ‘calibration hierarchy’ [4], constituted by one or more sequential calibration steps. It runs down from the definition of the measurement unit to the end-user’s measuring system [5] and terminates at the calibration of the analyst’s measuring system, which is the very purpose of its existence. It then becomes natural and easy to evaluate the cumulative measurement uncertainty of the end-user’s measurement result by walking along the calibration hierarchy of that result, from the definition of the measurement unit used down to the analyst’s measuring system. When analysts choose a unit, at the top of the calibration hierarchy, from an internationally agreed system of units, whether the SI (“Le Système international d’unités”, “The International System of Units”) or another unit system such as the cgs (centimetre, gram, second) system, the mks (metre, kilogram, second) system, or the WHO (World Health Organization) international unit system, they connect their measurement result to an agreed international reference system. Their results are thus traceable to this commonly agreed ‘reference’. Any such reference ensures ‘metrological comparability of measurement results’ [6] with other measurement results for the same quantity, embodied in any material and traceable to the same reference. This comparability is a basic need we want to see fulfilled: the ability to compare our results in a metrologically meaningful way.

In a chemical measurement, as in any other measurement, an output quantity value of the ‘measurement function’ [7] (i.e. a measured value for the measurand) is obtained as a function of measured input quantity values (e.g. of mass, amount of substance, or electric current [8]) and, usually, of influencing quantities [9] such as pressure or temperature, which yield small corrections for systematic effects in the measuring system.

The measurement uncertainties in these small influencing quantity values do not usually contribute significantly to the uncertainty of the final measurement result, since they are (much) smaller than those of the input quantity values. These quantities have their own metrological traceability chains, which can be seen as “grafted” onto the main chain, just as branches are grafted onto the stem of a tree. Consideration of their measurement uncertainties is therefore usually not critical. For the purpose of this discussion, it is sufficient to note that their metrological traceability chains are also “unidirectional”, i.e. they run from a result to a reference.
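The two points above, that a measured value is a function of input quantity values and that influencing quantities typically contribute little to the combined uncertainty, can be illustrated with a small numerical sketch. The measurement function, quantity values, and uncertainties below are entirely hypothetical (a mass fraction from two weighings with a small temperature correction, with an assumed correction coefficient), and the first-order propagation follows the usual GUM-style formula for uncorrelated inputs.

```python
import math

def combined_uncertainty(f, values, uncerts, h=1e-6):
    """First-order (GUM-style) propagation for uncorrelated inputs:
    u_c^2 = sum_i (df/dx_i * u_i)^2, with derivatives estimated by
    central finite differences."""
    u2 = 0.0
    for i, (x, u) in enumerate(zip(values, uncerts)):
        step = h * max(abs(x), 1.0)
        plus = list(values); plus[i] = x + step
        minus = list(values); minus[i] = x - step
        dfdx = (f(*plus) - f(*minus)) / (2 * step)
        u2 += (dfdx * u) ** 2
    return math.sqrt(u2)

# Hypothetical measurement function: mass fraction from two weighings,
# with a small temperature correction k(T) = 1 + alpha*(T - 20).
alpha = 1e-4  # assumed (illustrative) correction coefficient per degC

def mass_fraction(m_analyte, m_sample, T):
    return (m_analyte / m_sample) * (1 + alpha * (T - 20.0))

values = [0.1234, 1.0005, 21.0]   # g, g, degC (illustrative)
uncerts = [0.0002, 0.0002, 0.5]   # standard uncertainties of the inputs

u_c = combined_uncertainty(mass_fraction, values, uncerts)
# The temperature (influencing quantity) term contributes far less to
# u_c than the two weighings, as the text argues is typically the case.
```

Here the combined uncertainty is dominated by the weighing of the analyte; setting the temperature uncertainty to zero changes `u_c` only marginally, which is the sense in which influencing quantities are "not very critical".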

In the above discussion, we have chosen a metrological traceability chain and its associated calibration hierarchy (its inverse), going up to (the definition of) a chosen measurement unit. Shorter or longer chains are possible, depending on whether the analyst decided on another reference when planning the measurement, such as a value for the quantity measured as embodied in a specified ‘calibrator’ [10], or the value for the quantity measured as obtained by a chosen ‘reference measurement procedure’ [11]. The analyst always has a choice of possible references for his/her metrological traceability chains. Different chains may indeed lead to different measurement uncertainties of the final measurement result.

This picture of a metrological traceability chain must be drafted before the measurement so that it can assist in the choice of the (working) calibrator for the analyst’s measuring system. Consequently, after the measurement, this picture is at hand at the time of evaluating the measurement uncertainties in the form of Type A and Type B evaluations [12, 13].

Thus, logically, the existence of a metrological traceability chain becomes a prerequisite, or precondition, for the full and correct evaluation of measurement uncertainty.

An appropriate formulation of basic concepts in the VIM (in this case, the formulation of the definition of ‘metrological traceability chain’) leads to the clear and important consequence:

$$ \text{Metrological traceability is a prerequisite for the evaluation of measurement uncertainty.} $$

A metrological traceability chain is very useful for that purpose because the somewhat abstract concept of metrological traceability is thereby visualized.

It is amazing that the opposite is sometimes found in the chemical literature: metrological traceability is considered to be a consequence of measurement uncertainty! This view is logically erroneous.

Unambiguous definitions of basic metrological concepts are clarifying.

As a definition of any concept in “Metrology in Chemistry” should be….


Paul De Bièvre

Editor-in-Chief

By the way, looking for justification of Metrology in Chemistry?

Brazil imports about 690 000 000 m³ of natural gas monthly from Bolivia at a cost of about USD 100 000 000. The measurement uncertainty on the amount of gas is evaluated at 0.37%, which is equivalent to about USD 370 000 per month, or USD 4 440 000 per year.

[PETROBRAS, Eng CE Geraidine da Costa via J Jornada and HS Brandi, INMETRO Brasil]
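The quoted figures can be checked with back-of-the-envelope arithmetic, assuming the 0.37% relative uncertainty applies directly to the monthly invoice value. A minimal sketch (variable names are illustrative):

```python
# Monthly invoice value and relative measurement uncertainty from the text.
monthly_cost_usd = 100_000_000
relative_uncertainty = 0.0037   # 0.37 %

# Value of the gas delivery that is "in doubt" due to measurement uncertainty.
monthly_value_at_risk = monthly_cost_usd * relative_uncertainty  # about USD 370 000
annual_value_at_risk = 12 * monthly_value_at_risk                # about USD 4 440 000
```

The annual figure is simply twelve times the monthly one, matching the amounts quoted above.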