Making the most of PT participation
Participation in proficiency testing should be considered an educational activity. Laboratories should aim to learn from their participation and use the results they obtain to optimise their procedures. To give a true reflection of laboratory performance, the proficiency testing samples should not be given special treatment – they should be treated in exactly the same way as routine test samples. Examples of ‘poor practice’ include:
- always getting the PT samples analysed by the most experienced analyst – over time, all the analysts in the laboratory should participate;
- carrying out replicate analyses of PT samples and reporting the average of the results, or only what the laboratory considers to be the ‘best’ result.
The initial assessment of z-scores was described in the section Proficiency testing performance scores. A laboratory should investigate the cause of any unsatisfactory score (i.e. |z| > 3) and put in place the corrective actions needed to prevent the problem recurring. This is a requirement for laboratories accredited to ISO/IEC 17025.
Laboratories are also advised to take action if they obtain, for the same measurement:
- two consecutive questionable scores (2 < |z| ≤ 3)
- nine consecutive scores of the same sign.
Both of the above patterns are unlikely to arise by chance, which suggests there may be a problem with the measurement system. Performance over time is best monitored by plotting z-scores sequentially.
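As a minimal sketch (not part of any scheme's official guidance), the thresholds and the two action triggers described above can be expressed in a few lines of Python. The function names and example scores are illustrative only.

```python
def classify(z):
    """Category for a single z-score, using the thresholds above."""
    if abs(z) > 3:
        return "unsatisfactory"
    if abs(z) > 2:
        return "questionable"
    return "satisfactory"

def action_needed(zscores):
    """Flag the two warning patterns in a chronological series of
    z-scores for the same measurement."""
    # Trigger 1: two consecutive questionable scores (2 < |z| <= 3)
    for a, b in zip(zscores, zscores[1:]):
        if 2 < abs(a) <= 3 and 2 < abs(b) <= 3:
            return True
    # Trigger 2: nine consecutive scores of the same sign
    run, prev_sign = 0, 0
    for z in zscores:
        sign = (z > 0) - (z < 0)
        run = run + 1 if sign != 0 and sign == prev_sign else (1 if sign != 0 else 0)
        prev_sign = sign
        if run >= 9:
            return True
    return False
```

Plotting the same series of z-scores sequentially (e.g. on a control chart) makes these patterns visible at a glance.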
There are a number of possible causes of unsatisfactory performance scores. Analytical errors include:
- selecting an unsuitable method for the sample type (i.e. method is ‘out of scope’)
- incorrectly calibrated equipment
- instrument operating conditions not optimised
- problems with sample extraction/clean-up (e.g. the analyte may not be quantitatively extracted from the sample matrix)
- dilution errors (e.g. in the preparation of calibration standards or sample solutions)
- presence of interferences.
There are other sources of error which can be classified as non-analytical:
- calculation errors (e.g. forgetting to take account of sample dilution factors)
- transcription errors
- results reported in the wrong units.
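Dilution errors appear in both lists above, because a missed dilution factor corrupts an otherwise correct measurement. The hypothetical worked example below (names and numbers are illustrative, not from the text) shows how the result scales with the dilution factor:

```python
def sample_concentration(c_solution_ug_per_ml, extract_volume_ml,
                         dilution_factor, sample_mass_g):
    """Analyte content of the sample (ug/g) calculated from the
    measured concentration of the diluted extract solution."""
    return c_solution_ug_per_ml * dilution_factor * extract_volume_ml / sample_mass_g

# A 2 g sample extracted into 25 mL, diluted 10-fold before measurement:
correct = sample_concentration(0.5, 25.0, 10.0, 2.0)  # 62.5 ug/g
# Forgetting the 10-fold dilution understates the result tenfold:
wrong = sample_concentration(0.5, 25.0, 1.0, 2.0)     # 6.25 ug/g
```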
There may be factors relating to the scheme itself which could account for an individual laboratory’s unsatisfactory performance score. It can therefore be helpful to review performance in the context of all of the results from a particular round. Issues to consider include:
- How many participants were there? Small data sets can make it difficult for the organiser to obtain a reliable assigned value if the consensus approach is used.
- Compared to other rounds, did a large number of other participants also receive unsatisfactory scores?
- Which test methods did the participants use and how was the standard deviation for proficiency assessment established? You may not be comparing ‘like with like’ if the method you used is significantly different to those used by other participants. If the standard deviation for proficiency assessment is based on the expectation that laboratories will use a particular test method, this can cause problems for laboratories using methods that are less precise. Some scheme organisers group results according to the test methods used by the participants.
- Was there anything particularly challenging about the PT sample for the round in which the unsatisfactory result was obtained? For example, was the analyte concentration much lower than you would normally encounter in routine test samples?
On occasion there may be issues with the operation of a round of a scheme which cause laboratories to receive unsatisfactory scores. These include:
- Problems with setting the assigned value and/or the standard deviation for proficiency testing. The values used directly affect the calculation of the performance scores.
- Issues with the homogeneity or stability of samples.
- Errors associated with data entry or the production of reports.
In some PT schemes laboratories may be able to report results for a range of analytes in a single sample, all determined using the same test method (e.g. toxic metals in a soil sample or a class of pesticides in a foodstuff). Evaluating the z-scores as a group can help to identify general problems with the analysis (i.e. all the z-scores are unsatisfactory) or problems with a particular analyte (i.e. only one of the z-scores is unsatisfactory).
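This group-level check can be sketched as follows; the analyte names, scores, and function name are hypothetical, and real schemes may define the unsatisfactory threshold differently:

```python
def interpret_group(zscores):
    """Distinguish a general analytical problem from a single-analyte
    problem, given a dict of {analyte: z-score} for one sample."""
    unsatisfactory = [name for name, z in zscores.items() if abs(z) > 3]
    if len(unsatisfactory) == len(zscores):
        return "general problem with the analysis"
    if len(unsatisfactory) == 1:
        return "problem specific to " + unsatisfactory[0]
    return "no single pattern"

# e.g. toxic metals in a soil sample (illustrative scores):
metals = {"Cd": 0.8, "Pb": 1.2, "As": 3.6, "Hg": -0.4}
print(interpret_group(metals))  # problem specific to As
```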
Last modified on 28 January 2009.