Performance evaluation for proficiency testing with a limited number of participants

General Paper


Proficiency testing (PT) is an essential tool for laboratories to assess their competence. Participation in PT has also become a mandatory requirement for laboratories seeking accreditation to ISO/IEC 17025. For these reasons, the effectiveness of performance evaluation by a PT scheme is of great concern to participants and accreditation bodies alike. In practice, owing to the unavailability of appropriate alternatives, PT scheme providers may have to use consensus values to evaluate the performance of participants. However, the consensus value approach is not recommended by the relevant international guidelines for PT schemes with a limited number of participants. Using the Monte Carlo simulation technique, this study investigated the effectiveness of using consensus values for performance evaluation in PT schemes with a limited number of participants. The simulation was designed according to the statistical model for laboratory measurement results given in ISO 5725-1, which covers components such as method bias, laboratory bias, and measurement precision. The effectiveness of the consensus value approach was expressed as the percentage of participants in a simulation run who obtained the same evaluation result, either satisfactory or unsatisfactory, as they would against the "true value." The findings indicated that the number of participants, the choice of consensus value, the mass fraction of the analyte, method bias, laboratory bias, and the measurement repeatability of the participating laboratories all affect the effectiveness of the consensus value approach, although to different extents. Under certain circumstances, however, the use of consensus values can still be considered an acceptable approach for performance evaluation even when the number of participants is limited. Some of the findings were further verified using real data from PT schemes for which appropriate certified reference materials or reliable reference values were available.
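The simulation approach described above can be sketched as follows. This is a minimal, hypothetical illustration only: it assumes z-scores with a cutoff of 2.0, the participant median as the consensus value, and arbitrary illustrative values for the bias and precision components; none of these specific parameter values, function names, or choices come from the paper itself.

```python
import random
import statistics

def simulate_round(n_labs, true_value, method_bias, lab_bias_sd,
                   repeat_sd, sigma_pt, seed=0):
    """Simulate one PT round under the ISO 5725-1 model
    y = m + delta + B + e (true value, method bias, laboratory bias,
    repeatability error), then compare each participant's z-score
    verdict against the consensus value with the verdict against the
    true value.  Returns the consistency rate: the fraction of
    participants receiving the same satisfactory/unsatisfactory
    classification under both assigned values."""
    rng = random.Random(seed)
    results = [true_value + method_bias
               + rng.gauss(0.0, lab_bias_sd)   # laboratory bias B
               + rng.gauss(0.0, repeat_sd)     # repeatability error e
               for _ in range(n_labs)]
    consensus = statistics.median(results)     # consensus assigned value
    same = 0
    for y in results:
        z_consensus = abs(y - consensus) / sigma_pt
        z_true = abs(y - true_value) / sigma_pt
        # "satisfactory" taken here as |z| <= 2 under either assigned value
        if (z_consensus <= 2.0) == (z_true <= 2.0):
            same += 1
    return same / n_labs

# Example: 8 participants, small method bias relative to sigma_pt
rate = simulate_round(n_labs=8, true_value=100.0, method_bias=1.0,
                      lab_bias_sd=2.0, repeat_sd=1.0, sigma_pt=5.0, seed=42)
print(round(rate, 2))
```

Averaging this rate over many simulated rounds, while varying the number of participants and the bias and precision parameters, mirrors the kind of sensitivity study the abstract describes.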


Keywords: Proficiency testing; Monte Carlo simulation; Consensus values; Consistency rate


References

  1. Thompson M, Ellison SLR, Wood R (2006) Pure Appl Chem 78(1):145–196
  2. ISO 13528 (2005) Statistical methods for use in proficiency testing by interlaboratory comparison. International Organization for Standardization, Geneva
  3. ILAC G13 (2007) Guidelines for the requirements for the competence of providers of proficiency testing schemes. International Laboratory Accreditation Cooperation, Sydney
  4. Belli M, Ellison SLR, Fajgelj A, Kuselman I, Sansone U, Wegscheider W (2007) Accred Qual Assur 12:391–398
  5. Kuselman I, Fajgelj A (2010) Pure Appl Chem 82:1099–1135
  6. ISO/IEC 17043 (2010) Conformity assessment—General requirements for proficiency testing. International Organization for Standardization, Geneva
  7. ISO 5725-1 (1994) Accuracy (trueness and precision) of measurement methods and results—Part 1: General principles and definitions. International Organization for Standardization, Geneva
  8. Wong SK (2005) Accred Qual Assur 10:409–414
  9. Guell OA, Holcombe JA (1990) Anal Chem 62:529A–542A
  10. Thompson M, Lowthian PJ (1995) Analyst 120:271–272
  11. Lawn RE, Thompson M, Walker RF (1997) Proficiency testing in analytical chemistry. Royal Society of Chemistry, Cambridge
  12. Thompson M (2004) AMC Technical Brief No. 17. Royal Society of Chemistry, Cambridge
  13. Thompson M (1999) Analyst 124:991
  14. Gonzalez AG, Herrador MA, Asuero AG (2005) Accred Qual Assur 10:149–151

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. Government Laboratory, Hong Kong SAR, China