Measurement uncertainty evaluation for a non-negative measurand: an alternative to limit of detection

  • General Paper
  • Published in Accreditation and Quality Assurance

Abstract

The interpretation and reporting of the results of measurements on materials where the concentration of the analyte is close to zero, or may even be zero, has been the subject of much discussion, with the use of such concepts as limit of detection (LOD) and limit of quantification (LOQ). While these concepts take account of the measurement uncertainty, they do not make use of the fact that the value of the measurand, i.e. the concentration, is constrained to be zero or greater. Taking this into account, the distribution of values attributable to the measurand can be derived from the probability density function (PDF) that determines the distribution of the observed values. When this PDF is normal, the distribution of the values attributable to the measurand is a t distribution truncated at the lower limit \( t_L = -x_m/(s/\sqrt{n}) \) and renormalised so that the total probability is one, where x_m is the mean of the n observed values and s their standard deviation. When x_m is much greater than \( s/\sqrt{n} \), the distribution reverts to the unmodified t distribution. The probability that the value of the measurand is above or below a limit can be calculated directly from this truncated t distribution, so the interpretation of the result does not require concepts such as LOD and LOQ. This approach also deals with the problem of negative observations.
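The calculation described above can be sketched numerically. The following is a minimal illustration, assuming scipy is available; the function name `prob_above` and the example numbers are this sketch's own, not from the paper:

```python
# Sketch (not the paper's code): probability that a non-negative measurand
# exceeds a given limit, using the truncated t distribution described above.
from scipy import stats

def prob_above(limit, x_mean, s, n):
    """P(measurand > limit) from the t distribution with n - 1 degrees of
    freedom, centred at x_mean with scale s/sqrt(n), truncated at zero and
    renormalised so that the total probability is one."""
    dist = stats.t(df=n - 1, loc=x_mean, scale=s / n ** 0.5)
    mass_above_zero = dist.sf(0.0)          # renormalisation constant
    return dist.sf(max(limit, 0.0)) / mass_above_zero

# Example: five observations with mean 0.4 and standard deviation 1.0.
p = prob_above(1.0, x_mean=0.4, s=1.0, n=5)
```

The renormalisation divides by the probability mass above zero, so the result is a genuine probability even when the observed mean is negative.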

Fig. 1
Fig. 2
Fig. 3
Fig. 4
Fig. 5

References

  1. Cowen S, Ellison SLR (2006) Analyst 131:710–717
  2. Van der Veen AMH (2004) Accred Qual Assur 9:232–236
  3. de Jongh WK (1986) International Laboratory, pp 62–65
  4. ISO 11843
  5. Currie LA, IAEA-TECDOC-1401, ISBN 92-0-108404-8, pp 9–33
  6. Thomas J, Danish Atomic Energy Commission Research Establishment Risø, Report No. 70
  7. Jeffreys H (1957) Scientific inference. Cambridge University Press, London
  8. Lee PM (2004) Bayesian statistics, 3rd edn. Hodder Arnold, London, pp 63–64

Additional information

This Report was written by Alex Williams (e-mail: aw@camberley.demon.co.uk) for the Statistical Subcommittee and approved by the Analytical Methods Committee.

Appendix

The PDF (that is, the uncertainty) of the values attributable to a measurand, taking into account the prior information available, can be calculated using Bayes' theorem, Eq. (1):

$$ \mathrm{d}H(\boldsymbol{\theta}|\mathbf{x})\,(\mathrm{d}\theta_1 \ldots \mathrm{d}\theta_m) = \frac{P(\boldsymbol{\theta})\,G(\mathbf{x}|\boldsymbol{\theta})\,(\mathrm{d}\theta_1 \ldots \mathrm{d}\theta_m)}{\int \cdots \int P(\boldsymbol{\theta})\,G(\mathbf{x}|\boldsymbol{\theta})\,\mathrm{d}\theta_1 \ldots \mathrm{d}\theta_m}; $$
(1)

where \( G(\mathbf{x}|\boldsymbol{\theta}) \) gives the probability of obtaining the set of observed values x = [x_1, ..., x_n] in terms of the parameters θ = [θ_1, ..., θ_m] of the probability distribution G; \( \mathrm{d}H(\boldsymbol{\theta}|\mathbf{x})\,(\mathrm{d}\theta_1 \ldots \mathrm{d}\theta_m) \) is the probability, given the observed values, that the parameters lie in the interval θ to θ + dθ; and P(θ)(dθ_1...dθ_m) is the prior probability that the parameters θ lie in that interval.

In principle Eq. (1) can be used whatever the form of G, provided that the prior probabilities are known. Parameters of no interest are normally integrated over their possible values, giving dH for those parameters that are of interest. When G is normal there are only two parameters, the mean μ and the variance σ².

First, taking G as normal and using an approach similar to Jeffreys [7], the PDF dH(μ) will be calculated without the constraint on the values of the measurand to show that this leads to the t distribution. Then it will be shown that including the prior information that μ ≥ 0, the PDF becomes a truncated t distribution [8].

Assume that there are n independent random observations x_i drawn from a normal distribution with mean μ and variance σ², and that these values are used to determine dH(μ), the probability that the value of the measurand lies between μ and μ + dμ.

Using Eq. (1)

$$ \mathrm{d}H(\mu) = \frac{\int_0^\infty P(\mu)P(\sigma)\prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(\frac{-(x_i-\mu)^2}{2\sigma^2}\right)\mathrm{d}\mu\,\mathrm{d}\sigma}{\int_{-\infty}^\infty \int_0^\infty P(\mu)P(\sigma)\prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(\frac{-(x_i-\mu)^2}{2\sigma^2}\right)\mathrm{d}\mu\,\mathrm{d}\sigma}. $$
(2)

But

$$ \prod_{i=1}^n \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(\frac{-(x_i-\mu)^2}{2\sigma^2}\right) = \left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{\!n}\exp\!\left(-\sum_{i=1}^n \frac{(x_i-\mu)^2}{2\sigma^2}\right), $$

and

$$ \sum_{i=1}^n (x_i-\mu)^2 = n(\mu-x_m)^2 + (n-1)s^2 = (n-1)s^2\left(1+\frac{t^2}{n-1}\right) $$

where x_m is the mean of the observations, s is their standard deviation and \( t = (\mu - x_m)/(s/\sqrt{n}) \).
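This identity is easy to verify numerically. A small sketch with illustrative values (all names and numbers are this sketch's own):

```python
# Numerical check of the sum-of-squares identity above, using
# simulated observations (illustrative values only).
import random

random.seed(1)
x = [random.gauss(0.5, 1.0) for _ in range(8)]     # simulated observations
n = len(x)
x_m = sum(x) / n                                   # sample mean
s2 = sum((xi - x_m) ** 2 for xi in x) / (n - 1)    # sample variance s**2

mu = 0.3                                           # arbitrary trial value of mu
t = (mu - x_m) / ((s2 ** 0.5) / n ** 0.5)          # t = (mu - x_m)/(s/sqrt(n))

lhs = sum((xi - mu) ** 2 for xi in x)              # sum of (x_i - mu)**2
mid = n * (mu - x_m) ** 2 + (n - 1) * s2           # first form of the identity
rhs = (n - 1) * s2 * (1 + t ** 2 / (n - 1))        # second form of the identity
```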

Hence, putting \( k = (n-1)s^2\left(1+\frac{t^2}{n-1}\right) \), Eq. (2) becomes

$$ \mathrm{d}H(\mu) = \frac{\int_0^\infty P(\sigma)P(\mu)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{\!n}\exp\!\left(\frac{-k}{2\sigma^2}\right)\mathrm{d}\sigma\,\mathrm{d}\mu}{\int_{-\infty}^\infty \int_0^\infty P(\sigma)P(\mu)\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{\!n}\exp\!\left(\frac{-k}{2\sigma^2}\right)\mathrm{d}\sigma\,\mathrm{d}\mu}. $$
(3)

Taking \( P(\sigma) = \begin{cases} 1/\sigma, & \sigma \ge 0 \\ 0, & \sigma < 0 \end{cases} \) (see, for example, Jeffreys [7]) and integrating Eq. (3) over σ gives
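The σ-integration step can be checked numerically: with the prior 1/σ, the σ-dependent part of Eq. (3) integrates to a factor proportional to \( k^{-n/2} \). A sketch assuming scipy is available (names are this sketch's own):

```python
# Numerical check that, with the prior P(sigma) = 1/sigma, integrating
# sigma out of Eq. (3) leaves a factor proportional to k**(-n/2).
import math
from scipy.integrate import quad

def integral_over_sigma(k, n):
    # integrand: (1/sigma) * sigma**(-n) * exp(-k / (2 * sigma**2))
    f = lambda sigma: sigma ** (-(n + 1)) * math.exp(-k / (2 * sigma ** 2))
    # finite limits: the integrand is negligible near zero and beyond 50
    value, _ = quad(f, 1e-6, 50.0)
    return value

n = 5
ratio = integral_over_sigma(2.0, n) / integral_over_sigma(1.0, n)
# If the integral scales as k**(-n/2), this ratio equals 2**(-n/2).
```

Since k is proportional to \( 1+t^2/(n-1) \), this is exactly where the \( \left(1+t^2/(n-1)\right)^{-n/2} \) kernel of the t distribution comes from.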

$$ \mathrm{d}H(\mu) = \frac{P(\mu)\left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t}{\int_{-\infty}^\infty P(\mu)\left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t}. $$

Taking P(μ) as constant for −∞ < μ < ∞ and integrating over μ in the denominator gives the t distribution for μ, namely

$$ \mathrm{d}H(\mu) = \frac{\left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t}{\int_{-\infty}^\infty \left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t}. $$

When μ is constrained to be zero or greater, that is, when \( t \ge -x_m/(s/\sqrt{n}) \), we have

$$ \mathrm{d}H(\mu) = \frac{\left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t}{\int_{-x_m/(s/\sqrt{n})}^\infty \left(1+\frac{t^2}{n-1}\right)^{-n/2}\mathrm{d}t} $$

which is the t distribution truncated at \( t_L = -x_m/(s/\sqrt{n}) \).
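Two properties of this result can be checked numerically, assuming scipy is available (the names and numbers below are illustrative): the renormalised truncated density integrates to one, and when x_m is much greater than s/√n the renormalisation factor is negligible, recovering the unmodified t distribution.

```python
# Sketch: checks on the truncated t distribution derived above.
import math
from scipy import stats
from scipy.integrate import quad

n = 5
dist = stats.t(df=n - 1)

def truncated_pdf(t, t_L):
    # t density truncated at t_L and renormalised to total probability one
    return 0.0 if t < t_L else dist.pdf(t) / dist.sf(t_L)

# Total probability is one whatever the truncation point.
t_L = -0.8                         # e.g. x_m = 0.8 * s/sqrt(n)
total, _ = quad(truncated_pdf, t_L, math.inf, args=(t_L,))

# For x_m >> s/sqrt(n), sf(t_L) is essentially 1: no renormalisation needed.
correction = 1.0 / dist.sf(-20.0)  # e.g. x_m = 20 * s/sqrt(n)
```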


About this article

Cite this article

Analytical Methods Committee, The Royal Society of Chemistry. Measurement uncertainty evaluation for a non-negative measurand: an alternative to limit of detection. Accred Qual Assur 13, 29–32 (2008). https://doi.org/10.1007/s00769-007-0339-5
