
European Journal of Plant Pathology, Volume 152, Issue 3, pp 657–676

Performance of diagnostic tests for the detection and identification of Pseudomonas syringae pv. actinidiae (Psa) from woody samples

  • Stefania Loreti
  • Amandine Cunty
  • Nicoletta Pucci
  • Aude Chabirand
  • Emilio Stefani
  • Adela Abelleira
  • Giorgio M. Balestra
  • Deirdre A. Cornish
  • Francesca Gaffuri
  • Davide Giovanardi
  • Richard A. Gottsberger
  • Maria Holeva
  • Aynur Karahan
  • Charikleia D. Karafla
  • Angelo Mazzaglia
  • Robert Taylor
  • Leonor Cruz
  • Maria M. Lopez
  • Joel L. Vanneste
  • Françoise Poliakoff

Abstract

The aim of this study was to characterise the performance of new molecular methods for the detection and identification of Pseudomonas syringae pv. actinidiae (Psa) and to provide validation data in comparison with the assays mentioned in official diagnostic protocols and currently in use. Eleven molecular tests for Psa detection were compared in an inter-laboratory comparison in which each laboratory analysed the same panel of samples, consisting of thirteen Psa-spiked kiwifruit wood extracts. Laboratories also had to perform isolation from the wood extracts. Data from this inter-laboratory test performance study (TPS) were statistically analysed to assess the performance of each method. In order to provide complete validation data, both for detection and identification, the TPS was supplemented by a further study of identification from pure cultures of phylogenetically closely related Pseudomonas spp., Psa, and bacterial strains associated with kiwifruit. The results of both studies showed that simplex-PCRs gave good results, whereas duplex-PCR and real-time PCR were the most reliable tools for the detection and identification of Psa. Nested- and multiplex-PCR gave false-positive results. The use of the most reliable detection test is suggested for routine analyses, but when Psa-free status needs to be accurately assessed, it is recommended that at least two detection tests are used. This work provides a wide comparison of the available diagnostic methods, giving new information for a possible revision of the official diagnostic protocols (e.g. the European and Mediterranean Plant Protection Organization (EPPO) protocol PM 7/120 for the detection of Psa).

Keywords

Bacterial canker of kiwifruit · Actinidia spp. · Diagnosis · Validation · Inter-laboratory comparison

Introduction

Bacterial canker of kiwifruit caused by Pseudomonas syringae pv. actinidiae (Psa) was first described in Japan (Takikawa et al. 1989) and subsequently in Italy and in Korea (Koh et al. 1984; Scortichini 1994). Whereas the disease caused severe economic losses in Japan and in Korea, in Italy it remained sporadic, with a low incidence, for 20 years. However, in 2007/2008 economic losses started to be observed in Italy, and in 2010/2012 also in all the main areas of kiwifruit cultivation in the world (https://gd.eppo.int/taxon/PSDMAK/reporting).

Four different Psa populations (named biovars) have been described previously and are characterised by different levels of virulence (Chapman et al. 2012). Biovar 1 (also named Psa1) includes strains associated with the first epidemics of bacterial canker in Japan and in Italy. Biovar 2 strains (Psa2) are only reported from Korea. Biovars 1 and 2 have not been detected since 1998. Biovar 3 is currently reported from Chile, Argentina, China, Italy and other European countries, Japan, Korea and New Zealand. Biovar 3 is also referred to as PsaV or Psa3, and is the population responsible for the global pandemic first reported in Italy in 2008. Biovar 4 includes low-virulence strains (Chapman et al. 2012; Vanneste et al. 2013). This population, also referred to as Psa LV, was reported by Ferrante and Scortichini (2014) to be distinct from the pathovar actinidiae and was subsequently classified as a new pathovar, P. syringae pv. actinidifoliorum (Pfm) (Cunty et al. 2014).

In consideration of the high impact of Psa on the kiwifruit industries around the world, the control of this pandemic became an urgent issue. However, the application of control strategies required reliable detection of the causal agent (i.e. in propagative material, or in a new outbreak area) using harmonised diagnostic protocols based on high-performance methods. Guidance on the validation process is reported in the EPPO Standard PM 7/98 (2) (European Plant Protection Organization 2014a), which states: “A test is considered fully validated when it provides data for the following performance criteria: analytical sensitivity, analytical specificity, reproducibility and repeatability”. Concerning Pseudomonas syringae pv. actinidiae (Psa), the causal agent of bacterial canker of Actinidia spp., an inter-laboratory comparative study on the detection methods was performed among Italian laboratories in 2011 (Loreti et al. 2014). This study showed that, among the media tested for isolation, the modified King’s B medium (KBC) (Mohan and Schaad 1987) was better for Psa isolation than the modified Nutrient Sucrose Agar (mNSA) (Mohan and Schaad 1987), the King’s B medium (KB) or the Nutrient Sucrose Agar (NSA) medium. In addition, the PCR-based assays (the simplex-PCR of Rees-George et al. (2010) and the duplex-PCR of Gallelli et al. (2011)) used directly on infected matrices (wood, leaf, pollen) were more reliable than isolation. Finally, the simplex-PCR of Rees-George et al. (2010) and the duplex-PCR of Gallelli et al. (2011) were the most inclusive and the most exclusive identification methods, respectively (Loreti et al. 2014). Recently, an EPPO standard (PM 7/120) (European Plant Protection Organization 2014b) has been published as a formal guide on procedures for Psa diagnosis. This standard includes isolation on agar plates and two PCRs: the simplex-PCR of Rees-George et al. (2010) and the duplex-PCR of Gallelli et al. (2011). This standard can be applied to different matrices (budwood, shoots, twigs, pollen, in vitro micropropagated plants).

New molecular methods have been developed by several research groups for the detection and identification of all the biovars of Psa (nested-PCR, Biondi et al. 2013; multiplex-PCR, Balestra et al. 2014) and also specifically for Psa biovar 3 (simplex-PCR-C and real-time PCR, Gallelli et al. 2014), but these methods had not been validated as Psa detection and/or identification methods. The purpose of this study was to provide validation data for these new molecular tests in comparison with the tests previously assessed.

Inter-laboratory test performance studies (TPS) are an essential part of the validation process of analytical methods: they are used to determine the performance of different tests among laboratories, to establish their comparability and, consequently, to provide objective evidence that the tests are suitable for a specific intended use (https://ec.europa.eu/jrc/en/interlaboratory-comparisons). Therefore, an inter-laboratory test performance study was conducted to select the most efficient detection methods for Psa.

This paper presents the results of this inter-laboratory comparison, which included nine laboratories from Europe, two from New Zealand and one from Turkey (Table S1). Because plants are one of the main pathways for the introduction and spread of the bacterium (European Plant Protection Organization 2012), and bark canker is a typical symptom of Psa on woody plant tissue, woody extracts were used as the matrix to compare the detection tests.

Isolation of Psa on mNSA and KBC (Mohan and Schaad 1987) and simplex, duplex, nested, multiplex and real-time PCR-based methods (Rees-George et al. 2010; Gallelli et al. 2011; Biondi et al. 2013; Balestra et al. 2014; Gallelli et al. 2014) were tested on thirteen woody extracts of Actinidia deliciosa cv. ‘Hayward’ spiked with bacterial suspensions of different concentrations. These methods were evaluated using the performance criteria defined in the EPPO standards PM 7/98 (2) and PM 7/122 (1) (European Plant Protection Organization 2014a, c). Since each laboratory processed an identical set of samples under different conditions, the study aimed to evaluate the benefits and disadvantages of each method. This inter-laboratory test was complemented by a study performed by a subgroup of four laboratories on pure bacterial suspensions. The purpose of this further study was to provide validation data on the capacity of the tests to identify Psa.

According to the results of this work (the TPS and the further study), different methods are proposed for the screening or identification of Psa. Similarly, one or two detection tests are recommended depending on the level of Psa acceptable for the situation (e.g. certification of propagation material, or low or high disease prevalence). The performance criteria obtained for each method should be taken into account for the revision of the existing EPPO Standard on the Psa detection and identification scheme.

Material and methods

Study design

The study comprised two parts, designed to determine, on the one hand, the performance of the screening or detection methods and, on the other hand, that of strain identification methods on pure bacterial cultures. The first part consisted of the evaluation of the Psa detection methods for the screening of woody plant material in an inter-laboratory study that involved thirteen laboratories. As the number of data collected allowed statistical analyses, the results were analysed to compare the accuracy, diagnostic specificity, diagnostic sensitivity, repeatability and reproducibility of the methods. The results were also compared using a Bayesian approach. The second part, conducted outside the inter-laboratory study, aimed at evaluating the capacity of the methods to identify Psa-like colonies by assessing the analytical specificity (inclusivity, exclusivity) on pure cultures from collections. The aggregated results of four laboratories are presented. This second part was not intended to be an inter-laboratory study per se, since each laboratory prepared its own bacterial suspensions; therefore, the results obtained give rise only to descriptive statistics.

Participant laboratories

The following 13 laboratories were candidates for the TPS: Instituto Valenciano de Investigaciones Agrarias (IVIA), Centro de Proteccion Vegetal y Biotechnologia-Spain; Deputación de Pontevedra, Estación Fitopatolóxica Areeiro-Spain; The French Agency for Food, Environmental and Occupational Health & Safety, Plant Health Laboratory (ANSES-LSV)-France; Università degli Studi di Modena e Reggio Emilia (UniMoRe)-Italy; Consiglio per la ricerca in agricoltura e l’analisi dell’economia agraria, Centro di Ricerca Difesa e Certificazione, Sede di Roma (CREA-DC)-Italy; Università della Tuscia, Department of Agriculture and Forest Sciences (DAFNE), Viterbo-Italy; Laboratorio Fitopatologico Regione Lombardia, Servizio Fitosanitario/Fondazione Minoprio-Italy; Benaki Phytopathological Institute (BPI), Department of Phytopathology, Laboratory of Bacteriology-Greece; Instituto Nacional de Investigação Agrária e Veterinária (INIAV), UEIS-SAFSV Laboratório de Fitobacteriologia-Portugal; Austrian Agency for Health and Food Safety, Institute for Sustainable Plant Production (AT-AGES)-Austria; Ministry of Agriculture and Forestry, Plant Health and Environment Laboratory, Diagnostic and Surveillance Services, Biosecurity New Zealand (MPI-PHEL)-New Zealand; Plant and Food Research (PFR)-New Zealand; Plant Protection Central Research Institute (PPCRI)-Turkey. From these candidates, one laboratory decided not to participate; as a consequence, the final number of participating laboratories was 12.

Part 1: inter-laboratory study

The samples used for the test performance study

Twenty-three identical sets, each including thirteen samples, were prepared by ANSES-LSV. Details on the sample composition are provided in Table 1. The samples consisted of Actinidia deliciosa cv. Hayward extracts from homogenised woody tissues (canes), prepared by crushing twig pieces in PBS-Tween as recommended in the EPPO protocol PM 7/120 (2014b), spiked (or not) with suspensions of the bacterial strain Psa ISF 8.43 (biovar 3) containing 10^7 CFU ml^−1 (D7), 10^5 CFU ml^−1 (D5), 10^4 CFU ml^−1 (D4), 10^3 CFU ml^−1 (D3) or 0 CFU ml^−1 (D0). Bacterial suspensions were prepared from a loopful of a 24–48 h bacterial culture in a 0.5 ml volume of sterile distilled water, and bacterial concentrations were determined spectrophotometrically (A660 = 0.1 OD corresponding to 5 × 10^7 CFU per ml). The sample with the highest Psa concentration (D7) and the sample with no Psa (D0) were prepared in duplicate; the other samples (D5, D4 and D3) were prepared in triplicate. Samples were randomised within each set and the sets were randomly assigned to the participants. Although the order of the samples was subject to randomisation, the preparation and constitution of the samples within each set were identical, thus maximising sample homogeneity. After the randomisation process, each sample was labelled with a code. Each laboratory checked whether samples and materials were in appropriate condition upon arrival. A protocol with the details of the detection procedures was sent to each laboratory.
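The spiking levels above follow from simple dilution arithmetic based on the stated stock concentration. The following is an illustrative sketch only (not part of the published protocol; the constant and function names are ours), showing the fold-dilution of the stock needed for each target level.

```python
# Illustrative dilution arithmetic for the spiking levels described above.
# Assumes a stock at ~5 x 10^7 CFU/ml, as estimated from A660 = 0.1 in the text.

STOCK_CFU_PER_ML = 5e7  # hypothetical constant name; value taken from the text

def fold_dilution(target_cfu_per_ml: float, stock: float = STOCK_CFU_PER_ML) -> float:
    """Fold-dilution of the stock required to reach the target concentration."""
    return stock / target_cfu_per_ml

for label, target in [("D7", 1e7), ("D5", 1e5), ("D4", 1e4), ("D3", 1e3)]:
    print(f"{label}: dilute the stock {fold_dilution(target):g}-fold for {target:.0e} CFU/ml")
```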
Table 1

Samples used to evaluate the different performance criteria in the TPS

| Sample type | Host | Sample characteristics^a | Number of replicates |
| --- | --- | --- | --- |
| Target | Actinidia deliciosa cv. Hayward | Artificially contaminated with 10^7 CFU ml^−1 (D7) | 2 |
|  | Actinidia deliciosa cv. Hayward | Artificially contaminated with 10^5 CFU ml^−1 (D5) | 3 |
|  | Actinidia deliciosa cv. Hayward | Artificially contaminated with 10^4 CFU ml^−1 (D4) | 3 |
|  | Actinidia deliciosa cv. Hayward | Artificially contaminated with 10^3 CFU ml^−1 (D3) | 3 |
| Non target | Actinidia deliciosa cv. Hayward | Healthy (D0) | 2 |

^a Artificial contamination was performed using the bacterial strain Psa biovar 3 ISF 8.43

Isolation on mNSA/King’s B medium

Fifty μl of each wood extract sample, and its 10-fold and 100-fold dilutions, were plated onto KBC (King et al. 1954) or mNSA (Oxoid nutrient agar supplemented with 5% w/v sucrose), as described by Mohan and Schaad (1987) (semi-selective media), and incubated at 25–27 °C for 72 h. Psa strain CRA-FRU 8.43 was used as a reference to assist the selection and purification of putative Psa colonies (i.e. colonies with a morphology similar to Psa) on each medium.

DNA extraction

Bacterial cells were concentrated from the wood extract samples by centrifuging 500 μl of each sample at 12,000×g for 10 min and resuspending the pellet in 400 μl of the AP1 buffer of the DNeasy Plant Mini Kit (Qiagen, Germany). DNA extraction was performed according to the manufacturer’s instructions, with the following modification: after washing with buffer AW, the samples were air-dried for 10 min and the DNA was eluted in 100 μl of AE buffer. The extracted DNA was then analysed by PCR.

PCR based methods

The molecular methods, referenced as M1 to M11, are detailed in Table 2. Participating laboratories strictly followed the methods as reported in Table S2. When negative results were obtained with undiluted samples, the decimal dilutions were tested. The molecular methods were performed by all laboratories following the procedures described in the original papers (Rees-George et al. 2010; Gallelli et al. 2011; Biondi et al. 2013; Balestra et al. 2013; Gallelli et al. 2014). The choice of reagents (e.g. the enzyme) was left to the discretion of the laboratories, following the suggestions of the original papers.
Table 2

Methods evaluated during the interlaboratory test performance study

| N° Method | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9 | M10 | M11 | M12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Method | Simplex-PCR | Duplex-PCR | Multiplex-PCR | Simplex-PCR-C | Real-time PCR | B1-B2 | B1-B2 AluI | B1-B2 BclI | B1-B2 BfmI | Nested-PCR KNF/R | Nested-PCR KNF/R BclI | Plating |
| Reference | Rees-George et al. 2010 | Gallelli et al. 2011 | Balestra et al. 2014 | Gallelli et al. 2014 | Gallelli et al. 2014 | Biondi et al. 2013 | Biondi et al. 2013 | Biondi et al. 2013 | Biondi et al. 2013 | Biondi et al. 2013 | Biondi et al. 2013 |  |
| Number of laboratories^a | 8 | 11 | 3 | 9 | 5 (6) | 5 | 3 (4) | 4 | 4 | 5 | 3 | 11 |
| Number of results^b: overall | 104 | 143 | 39 | 117 | 65 | 65 | 39 | 52 | 52 | 65 | 39 | 143 |
| from positive samples | 88 | 121 | 33 | 99 | 55 | 55 | 33 | 44 | 44 | 55 | 33 | 121 |
| from negative samples | 16 | 22 | 6 | 18 | 10 | 10 | 6 | 8 | 8 | 10 | 6 | 22 |
| Number of indeterminate results^b: overall | 0 | 3 | 2 | 4 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
| from positive samples | 0 | 3 | 2 | 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 2 |
| from negative samples | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Rate of indeterminate results (%)^b: overall | 0 | 2.1 | 5.1 | 3.4 | 1.5 | 1.5 | 0 | 0 | 0 | 0 | 0 | 1.4 |
| from positive samples | 0 | 2.5 | 6.1 | 3 | 0 | 1.8 | 0 | 0 | 0 | 0 | 0 | 1.7 |
| from negative samples | 0 | 0 | 0 | 5.6 | 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

^a The number in brackets indicates the value without exclusion of data

^b Exclusion of the results of L03 for method M7 and of the results of L15 for method M5

Evaluation of performance criteria

Performance criteria and validation procedure were established following PM 7/76 (4) and PM7/98 (2) EPPO standards (European Plant Protection Organization 2014a, 2017) and International Organization for Standardization ISO 16140:2003 (2003). In particular, accuracy (AC) with diagnostic specificity (DSP) and diagnostic sensitivity (DSE), analytical sensitivity (ASE), reproducibility (CO), repeatability (DA) and concordance odds ratio (COR) were assessed.

The definitions and the calculations of these performance criteria (except accuracy) and all statistical tests used are detailed in Chabirand et al. (2017).

Likelihood ratios were also calculated to compare the methods using the Bayesian approach, as explained in Chabirand et al. (2017).

Evaluation of accuracy

With reference to the ISO 5725-1 standard (International Organization for Standardization), accuracy (AC) was defined as the closeness of agreement between a test result (obtained with a method) and the accepted reference value (i.e., for a qualitative method, the sample’s real status).

Accuracy was evaluated for all results by calculating the ratio of the sum of the numbers of positive and negative agreements between a method and the sample’s real status to the number of tested samples. However, as the numbers of positive and negative samples were not equivalent (11 positive samples vs. two negative samples per panel), this ratio was weighted for each observation so that positive samples and negative samples made an equal contribution to the assessment of accuracy. Confidence intervals (95%) were calculated for the AC criterion using the likelihood method (Rao and Scott 1984). Tests of the equality of AC (weighted data) between methods and with the sample’s real status were performed using the adjusted Wald test based on the differences between observed cell counts and those expected under independence (“survey” package in the R statistical software) (Koch et al. 1975; Thomas and Rao 1990).
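As a minimal sketch of the weighting described above (our illustration, not the authors’ R code): giving positive and negative samples equal weight makes the weighted accuracy equal to the mean of diagnostic sensitivity and diagnostic specificity. The counts below are hypothetical, chosen to be consistent with the duplex-PCR (M2) values reported in Tables 2 and 4.

```python
# Sketch of the weighted accuracy: positive and negative samples contribute equally.

def weighted_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    """Weighted accuracy = mean of DSE (= TP/(TP+FN)) and DSP (= TN/(TN+FP))."""
    dse = tp / (tp + fn)
    dsp = tn / (tn + fp)
    return 0.5 * (dse + dsp)

# Hypothetical counts consistent with method M2 (121 positive and 22 negative results):
print(round(100 * weighted_accuracy(tp=110, fn=11, tn=21, fp=1), 1))  # ~93.2, cf. Table 4
```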

Indeterminate results

Indeterminate results obtained by some laboratories were processed using two hypotheses (H1 and H2) as reported in Chabirand et al. (2017), in order to use, for the calculations, only binary results (positive or negative). In particular, (H1) the laboratory hypothetically made the right decision for the indeterminate results in relation to the samples’ real status (i.e. the indeterminate results were counted as positive for positive samples and negative for negative samples) and (H2) the opposite.
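For illustration, the two recoding hypotheses can be written as a small helper (a sketch; the function and argument names are ours):

```python
# Recoding of indeterminate results under hypotheses H1 and H2 (see text above).

def recode(result: str, sample_is_positive: bool, hypothesis: str = "H1") -> str:
    """Map 'positive'/'negative'/'indeterminate' to a binary outcome."""
    if result != "indeterminate":
        return result
    if hypothesis == "H1":   # the laboratory is assumed to have made the right call
        return "positive" if sample_is_positive else "negative"
    if hypothesis == "H2":   # the opposite: the indeterminate result counts as an error
        return "negative" if sample_is_positive else "positive"
    raise ValueError("hypothesis must be 'H1' or 'H2'")

print(recode("indeterminate", sample_is_positive=True, hypothesis="H2"))  # -> negative
```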

Outlier results

The ISO 16140 standard (International Organization for Standardization) stipulates that the organising laboratory shall determine which results are suitable for use in calculations and which are outliers. Consequently, the results of a laboratory were excluded (considered as outliers) for a given method when (i) the statistical analysis showed a significant difference in the number of indeterminate results obtained by this laboratory compared with the others, (ii) the number of indeterminate results obtained by this laboratory represented more than 50% of the indeterminate results obtained for the method, and (iii) the number of indeterminate results obtained by this laboratory represented more than 50% of the results obtained for the panel of samples (i.e. number of indeterminate results ≥ 7).

Results of a laboratory were also excluded for a given method (i) when the expected result for at least one control was not obtained or (ii) when the number of false results (false positives (FP) + false negatives (FN)) obtained by this laboratory represented more than 50% of false results obtained for the method and when ≥50% of false results were recorded from the panel of samples (i.e. FP + FN ≥ 7).
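The false-result exclusion rule can be summarised in a short sketch (our encoding, with hypothetical names; the threshold of seven false results corresponds to more than half of the 13-sample panel):

```python
# Sketch of the outlier rule for false results: a laboratory's dataset for a method
# is excluded when (i) a control fails, or (ii) its false results (FP + FN) exceed
# 50% of the false results obtained for that method AND amount to >= 7 of 13 samples.

def exclude_dataset(controls_ok: bool, lab_false: int, method_false: int) -> bool:
    if not controls_ok:
        return True
    over_half_of_method = method_false > 0 and lab_false > 0.5 * method_false
    over_half_of_panel = lab_false >= 7
    return over_half_of_method and over_half_of_panel

print(exclude_dataset(controls_ok=True, lab_false=3, method_false=4))  # -> False
```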

Data analysis

Statistical tests were performed using the R statistical software package (version 3.3.1; R Development Core Team, Vienna, Austria). Statistical tests were considered significant for a calculated p-value lower than 5%.

Not all the methods were implemented by all the participants. Table S2 summarises which methods were implemented by which participant. Thus, depending on the method, the performance assessment of each one was based on the results of three to eleven laboratories. This creates differences in the precision with which the methods could be assessed. All the data were processed with this caveat in mind, bearing in mind that the non-significance of a statistical test does not mean the absence of differences, but only that no differences were identified.

Part 2: analytical specificity

Bacterial strains and cell lysis

To determine the analytical specificity of the different molecular methods, a loopful of a 24 to 48 h old culture grown on KB or NSA of Psa strains (NCPPB 3739 (bv. 1); ISPAVE 019 (bv. 1); KN2 (bv. 2); ISPAVE 020 (bv. 1); OMP-BO 1875,1 (bv. 3); OMP-BO 8581,1 (bv. 3); OMP-VE 4136 (bv. 3); CRA-PAV 1625 (bv. 3); CRA-PAV 1530 (bv. 3); CRA-PAV 1699 (bv. 3); CFBP 8025 (bv. 3); CFBP8047 (bv. 3); SFR-TO 242a (bv. 3); CFBP8036; CRA-FRU 8.43; CFBP8053; CFBP8062; CFBP8065; CFBP8066; CFBP8092; CFBP8097; CFBP8108; BPI A1; BPI B1; BPI D1–1; BPI E3; BPI G1; BPI 10; BPI 17a; BPI 22), of bacterial strains phylogenetically closely related to Psa or other Pseudomonas (P. syringae pv. morsprunorum NCCPB 2995; P. syringae pv. tomato NCPPB 1106, NCPPB 2563, IVIA 2650–1; P. syringae pv. theae NCPPB 2598, CFBB 4097; P. avellanae NCPPB3487 (GR), NCPPB 3873 (IT), ISPaVe 1267 (IT); P. syringae pv. syringae CFBP4702, IVIA 3840), and of bacterial strains associated with kiwifruit, such as P. syringae pv. actinidifoliorum (Pfm) (CFBP 8038, CFBP 8051, CFBP 7812, CFBP 7951), Pseudomonas spp. (LSV 28.72) and P. syringae (LSV 37.27, LSV 37.28, LSV 40.35, LSV 43.31) (Table 3), was resuspended in 0.5 ml of sterile distilled water to a density of approximately 5 × 10^7 CFU ml^−1 and checked by CRA-PAV, ANSES-LSV, IVIA and BPI using the different molecular methods. Each participant denatured a 100 μl aliquot of each bacterial suspension at 95 °C for 10 min, cooled it on ice and, after centrifugation at 6000×g for 1 min, used the lysate (2–5 μl) as template in the PCR assays. The lysate could be stored at −20 °C for subsequent analyses. The Psa strain CRA-FRU 8.43 was used as the positive control template.
Table 3

Bacterial strains used to evaluate analytical specificity: inclusivity (tested on pure culture of several P. syringae pv. actinidiae strains) and exclusivity (tested on pure culture of non-target strains of several Pseudomonas spp.)

| Species | Bacterial strain^a | M1 | M2 | M3 | M4 | M5 | M6 | M7, M8, M9 | M10 | M11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pseudomonas syringae pv. actinidiae | NCPPB 3739 (bv. 1) | 1 | 1 | 1 (311–254) Psa J/K^b | NT* | NT* | 1 | Old Psa^e | 1 | 1 |
|  | ISPAVE 019 (bv. 1) | 1 | 1 | 1 (311–254) Psa J/K^b | NT* | NT* | 1 | Old Psa^e | 1 | 1 |
|  | KN2 (bv. 2) | 1 | 1 | 1 (311–254) Psa J/K^b | NT* | NT* | 1 | NT | 1 | NT |
|  | ISPAVE 020 (bv. 1) | 1 | 1 | 1 (311–254) Psa J/K^b | NT* | NT* | 1 | Old Psa^e | 1 | 1 |
|  | OMP-BO 1875,1 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | OMP-BO 8581,1 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | OMP-VE 4136 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | CRA-PAV 1625 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | CRA-PAV 1530 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | CRA-PAV 1699 (bv. 3) | 1 | 1 | NT | 1 | 1* | 1 | NT | 1 | NT |
|  | CFBP 8025 (bv. 3) | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8047 (bv. 3) | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | SFR-TO 242a (bv. 3) | 1 | 1 | NT | 1 | 1 | 1 | NT | 1 | NT |
|  | CFBP8036 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CRA-FRU 8.43 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8053 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8062 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8065 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8066 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8092 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8097 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | CFBP8108 | 1 | 1 | (311–733) Psa Eur^c | 1 | NT | 1 | 1 | 1 | 1 |
|  | BPI A1 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI B1 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI D1–1 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI E3 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI G1 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI 10 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI 17a | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
|  | BPI 22 | 1 | 1 | (311–733) Psa Eur^c | NT | NT | 1 | NT | 1 | 1 |
| P. syringae pv. morsprunorum | NCCPB 2995 | 0 | 0 | 1 (311 + 609) (Psa China^d) | 0 | NT | 0 | NT | 1 | 1 (420 + 83) |
| P. syringae pv. tomato | NCPPB 1106 | NT | 0 | 0 | 0 | NT | 0 | NT | 1 | 1 (420 + 83) |
|  | NCPPB 2563 | 1 | 0 | NT | 0 | 0 | NT | NT | NT | NT |
|  | IVIA 2650–1 | NT | 0 | 2 (609) | NT | NT | NT | NT | NT | NT |
| Pseudomonas spp. (syn P. viridiflava) | LSV 28.72 | 0 | 0 | 1 (311 + 609) (Psa China^d) | 0 | NT | 0 | NT | 0 | NT |
| P. syringae pv. syringae | CFBP 4702 | NT | 0 | 2 (311 + 609 + 254) | 0 | NT | 0 | NT | 1 | 1 (420 + 83) |
|  | IVIA 3840 | NT | 0 | 2 (609) | NT | NT | NT | NT | NT | NT |
| P. syringae pv. theae | NCPPB 2598 | 1 | 0 | 2 (311 + 609 + 254) | 1/0 | NT | 1 | Old Psa^e | 1 | 0 (undigested^f) |
|  | CFBB 4097 | 1 | 0 | NT | 0 | 0 | NT | NT | NT | NT |
| P. avellanae | NCPPB3487 (GR) | NT | 1 | 1 (311 + 254) Psa J/K^b | 1 | NT | 1 | Old Psa^e | 1 | 1 (420 + 83) |
|  | NCPPB 3873 (IT) | 1 | 0 | NT | 0 | 0 | NT | NT | NT | NT |
|  | ISPaVe 1267 (IT) | 1 | 0 | NT | 0 | 0 | NT | NT | NT | NT |
| P. syringae pv. actinidifoliorum | CFBP 8038 | 1 | 0 | 2 (311 + 609 + 254) | 0 | NT | 1 | Old Psa^e | 1 | 1 (420 + 83) |
|  | CFBP 8051 | 1 | 0 | 2 (609) | 0 | NT | 1 | Old Psa^e | 1 | 1 (420 + 83) |
|  | CFBP 7812 | 1 | 0 | 2 (311 + 609 + 254) | 0 | 0 | 1 | Old Psa^e | 1 | 0 (undigested^f) |
|  | CFBP 7951 | 1 | 0 | 2 (311 + 733 + 254) | 1 | NT | 1 | Old Psa^e | 1 | 0 (undigested^f) |
| P. syringae strain from kiwifruit | LSV 38.27 | NT | 0 | 2 (311) | 0 | NT | 1 | Old Psa^e | 1 | 0 (undigested^f) |
|  | LSV 40.35 | NT | 0 | 1 (311 + 733) (Psa Eur^c) | 0 | NT | 0 | NT | NT | NT |
|  | LSV 38.28 | NT | 0 | 1 (311 + 609) (Psa China^d) | 0 | NT | 0 | NT | NT | NT |
|  | LSV 43.31 | NT | 0 | 1 (311 + 254) (Psa J/K^b) | 1 | NT | 1 | Old Psa^e | 1 | 1 (420 + 83) |
| False positive or inconsistent results |  | 9/11 (82%) | 1/20 (5%) | 15/18 (83%) (33% false positive; 50% inconsistent results) | 4/18 (22%) | 0/5 | 8/14 (57%) | 8/14 (57%) | 11/12 (92%) | 7/11 (60%) |
| Exclusivity |  | 18% | 95% | 17% | 78% | 100% | 43% | 43% | 8% | 40% |

For M3, the haplotypes obtained, with the respective amplicon sizes, are reported. Results for all methods are expressed as follows: 0: Psa not detected; 1: Psa detected; 2: indeterminate result; NT: not tested in this study

^a NCPPB, National Collection of Plant Pathogenic Bacteria, FERA, United Kingdom; CIRM-CFBP, Collection Française de Bactéries associées aux Plantes, Station de Pathologie Végétale, France; IVIA, Instituto Valenciano de Investigaciones Agrarias - Plant Protection and Agro-Engineering, Spain; ISPaVe = CRA-PAV = CREA-DC, Centro di ricerca Difesa e Certificazione, Italy; LSV, internal collection of Anses-Plant Health Laboratory, France; BPI, internal collection of the Laboratory of Bacteriology at Benaki Phytopathological Institute, Greece

^b Psa J/K = Psa Japanese or Korean haplotype following Balestra et al. (2014)

^c Psa Eur = Psa European haplotype following Balestra et al. (2014)

^d Psa China = Psa Chinese haplotype following Balestra et al. (2014)

^e Old Psa = enzyme restriction pattern related to strains of Psa isolated in the 1990s following Biondi et al. (2013)

^f undigested = amplicon not digested by the restriction enzyme

* result reported in Gallelli et al. (2014)

Evaluation of analytical specificity

This performance criterion was assessed in the second part of the study in order to evaluate the methods for strain identification and, in particular, their ability to identify all the target strains (inclusivity) and their capacity not to give false positives with non-target strains (exclusivity). A set of 30 target and 20 non-target bacterial strains, either phylogenetically related to Psa (Gardan et al. 1999; Sarkar and Guttman 2004) or associated with the host material, was tested (Table 3).

Results

Part 1: inter-laboratory study

Indeterminate results

Depending on the method, the rate of indeterminate results (Table 2) ranged from 0% (methods M1, M7, M8, M9, M10 and M11) to 5.1% (method M3). Using Fisher’s exact test, no significant differences in the rate of indeterminate results were identified between methods for the overall results or when considering only positive or negative results (p-values of 0.427, 0.444 and 0.595 for overall, positive and negative results, respectively).

By contrast, significant differences in the rates of indeterminate results were identified between laboratories for the overall results and also when considering only positive results (p-values of 0.011 and 7.36 × 10^−4, respectively). When laboratory L23 is excluded from the analysis, there are no longer significant differences between laboratories. The number of indeterminate results obtained by L23 represented more than 50% of the indeterminate results obtained for method M2 (3/3) and for M4 (3/4); however, it represented less than 50% of the results obtained from the panel of samples (23% for both methods). So even if there were differences in the indeterminate rates between laboratories, the results of L23 for methods M2 and M4 were used for the performance assessment of the methods.

Due to the small number of indeterminate results, no significant differences were identified in the performance assessment of the methods between the two scenarios H1 and H2. Therefore, only the first scenario, which better reflects reality, is presented.

Outlier results

The results obtained by some laboratories were not validated by the controls and were excluded from the analysis: the results of laboratory L03 for method M7 and the results of laboratory L15 for method M5. Regardless of how the indeterminate results were counted (scenario H1 or H2), no laboratory presented, for a given method, more than 50% of the false results obtained from the panel of samples together with a number of false results greater than 50% of the false results obtained for that method. Thus, no other outlier results were identified, and no other datasets were excluded from the analysis.
Fig. 2

Relationship between pre- and post-test probabilities of Pseudomonas syringae pv. actinidiae (Psa) infection, according to the results obtained during the inter-laboratory test performance study, for each evaluated method and for the combination of methods M2 and M5. Pre-test probability (prevalence) was defined as the proportion of plants infected by Psa in a particular population at a specific time. Post-test probability was calculated as post-test odds/(1 + post-test odds), where post-test odds = pre-test probability/(1 − pre-test probability) × likelihood ratio. For each method, the solid line represents the post-test probabilities of Psa infection after a positive test result for different prevalence rates. The broken line represents the post-test probabilities of Psa infection after a negative test result for different prevalence rates. For a given method, the closer the solid (respectively the broken) curve is to the vertical and horizontal axes, the higher the overall method performance
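The post-test probability formula given in the legend can be made concrete with a short sketch (our implementation of the stated formula, not the authors’ code):

```python
# Post-test probability of Psa infection from prevalence and a likelihood ratio.

def post_test_probability(prevalence: float, likelihood_ratio: float) -> float:
    """prevalence: pre-test probability (0-1); use LR+ for a positive result, LR- for a negative one."""
    pre_test_odds = prevalence / (1.0 - prevalence)
    post_test_odds = pre_test_odds * likelihood_ratio
    return post_test_odds / (1.0 + post_test_odds)

# e.g. a negative real-time PCR result (M5, LR- = 0.04 in Table 6) at 50% prevalence:
print(round(100 * post_test_probability(0.5, 0.04), 1))  # ~3.8%, close to the 3.9% quoted in the Results
```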

Accuracy, diagnostic sensitivity and diagnostic specificity

The performance criteria of the different methods evaluated in the TPS are summarised in Table 4 and in Fig. 1. Detailed results obtained by each laboratory for each sample are available in Table S3. The best overall performance was obtained with methods M5 and M2 with an AC of 93.2% for each method. Using an adjusted Wald test, the AC results for M5 and M2 were not significantly different from the results with methods M4 and M1, but were significantly better than results obtained with methods M3 (significant only for M5, not significant for M2), M12, M6, M8, M9, M7, M10 and M11.
Table 4

Comparison of the performance criteria accuracy (AC), diagnostic sensitivity (DSE) and diagnostic specificity (DSP) obtained during the collaborative study for the different methods

| Methods/criteria | Accuracy AC (%)^a,d | Diagnostic sensitivity DSE (%)^b,d | Diagnostic specificity DSP (%)^c,d | Significant variation between results produced by the method and theoretically expected results |
| --- | --- | --- | --- | --- |
| M1 | 87.5 AB (76.8–94.6) | 87.5 D (79.0–92.9) | 87.5 AB (64.0–96.5) | No for DSP (p = 0.484); yes for AC (p = 0.003) and DSE (p < 0.001) |
| M2 | 93.2 AB (86.8–97.2) | 90.9 BCD (84.5–94.8) | 95.5 A (78.2–99.2) | No for DSP (p = 1.000); yes for AC (p = 0.005) and DSE (p < 0.001) |
| M3 | 86.4 BC (74.2–94.4) | 72.7 E (55.8–84.9) | 100.0 A (61.0–100.0) | No for DSP (p = 1.000); yes for AC (p = 0.004) and DSE (p = 0.002) |
| M4 | 91.7 AB (84.0–96.6) | 88.9 CD (81.2–93.7) | 94.4 A (74.2–99.0) | No for DSP (p = 1.000); yes for AC (p = 0.005) and DSE (p < 0.001) |
| M5 | 93.2 A (78.8–99.1) | 96.4 ABC (87.7–99.0) | 90.0 AB (59.6–98.2) | No for all criteria (p = 0.381 for AC, p = 0.495 for DSE and p = 1.000 for DSP) |
| M6 | 71.4 D (51.8–86.6) | 92.7 ABC (82.7–97.1) | 50.0 BC (23.7–76.3) | No for DSE (p = 0.118); yes for AC (p = 0.011) and DSP (p = 0.032) |
| M7 | 58.3 D (33.6–80.4) | 100.0 AB (89.6–100.0) | 16.7 C (3.0–56.4) | No for DSE (p = 1.000); yes for AC (p = 0.011) and DSP (p = 0.015) |
| M8 | 68.7 D (45.9–86.6) | 100.0 A (92.0–100.0) | 37.5 C (13.7–69.4) | No for DSE (p = 1.000); yes for AC (p = 0.025) and DSP (p = 0.026) |
| M9 | 68.7 D (45.9–86.6) | 100.0 A (92.0–100.0) | 37.5 C (13.7–69.4) | No for DSE (p = 1.000); yes for AC (p = 0.025) and DSP (p = 0.026) |
| M10 | 56.4 D (38.0–73.6) | 92.7 ABCD (82.7–97.1) | 20.0 C (5.7–51.0) | No for DSE (p = 0.118); yes for AC and DSP (p < 0.001) |
| M11 | 49.2 D (27.6–71.1) | 81.8 DE (65.6–91.4) | 16.7 C (3.0–56.4) | Yes for all criteria (p < 0.001 for AC, p = 0.024 for DSE and p = 0.015 for DSP) |
| M12 | 82.0 C (74.6–88.2) | 68.6 E (59.9–76.2) | 95.5 A (78.2–99.2) | No for DSP (p = 1.000); yes for AC and DSE (p < 0.001) |

^a Accuracy (95% confidence interval): ability of the method to detect the target when it is present in the sample and to fail to detect the target when it is not present in the sample. Values followed by the same letter in a column are not significantly different (p = 0.05) according to the adjusted Wald test

^b Diagnostic sensitivity (95% confidence interval): ability of the method to detect the target when it is present in the sample. Values followed by the same letter in a column are not significantly different (p = 0.05) according to Fisher’s exact test

^c Diagnostic specificity (95% confidence interval): ability of the method to fail to detect the target when it is not present in the sample. Values followed by the same letter in a column are not significantly different (p = 0.05) according to Fisher’s exact test

^d For each criterion, we present data derived from scenario H1 described in the Materials and methods section for the interpretation of indeterminate results

Fig. 1

Diagram summarising the performance of the different methods evaluated in the inter-laboratory test performance study. The figure gives an overview of method performance (for a detailed comparison of percentages, see the tables): the larger the area of the polygon, the better the method performance. The figure also allows one to identify, for a given method, which performance criterion is deficient

Diagnostic sensitivity varied from 68.6% for M12 to 100% for M7, M8 and M9. Isolation gave the lowest value (68.6%) due to the high number of false negatives (38/121) (Table 4). Using Fisher’s exact test, the DSE results for methods M7, M8 and M9 were not significantly different from the results obtained with M5, M6, M10 and M2, but were significantly better than the results obtained with methods M4, M1, M11, M3 and M12.

Diagnostic specificity ranged from 16.7% for M7 and M11 to 100% for M3. Low values of diagnostic specificity were caused by false-positive results: for the methods based on Biondi et al. (2013), this criterion ranged from 16.7% (M11) and 20% (M10) to 50% (M6), and remained low after restriction analysis (16.7% with AluI (M7), 37.5% with BclI (M8) and with BfmI (M9)). The diagnostic specificity of the other molecular tests ranged from 87.5% (M1) to 100% (M3) (Table 4).

Using Fisher’s exact test, the DSP results for M3 were not significantly different from the results for methods M2, M12, M4, M5 and M1 but were significantly better than results obtained with methods M6, M7, M8, M9, M10, and M11.

Only method M5 presented no significant variation from the theoretically expected results for all criteria (AC, DSE and DSP). Methods M1, M2, M3, M4 and M12 presented no significant variation from the theoretically expected results for DSP, whereas methods M6, M7, M8, M9 and M10 presented no significant variation from the theoretically expected results for DSE. Method M11 presented significant variation from the theoretically expected results for all criteria. It is worth noting that, as DSP was assessed from fewer samples than DSE, the power of the statistical test (i.e. the probability that the test rejects a false null hypothesis) for DSP is much lower than for DSE; consequently, there was less chance of identifying differences (if differences existed) in the DSP assessment than in the DSE assessment.

Analytical sensitivity

The analytical sensitivity results for the different methods are summarised in Table 5.
Table 5

Results for analytical sensitivity (ASE), repeatability (DA), reproducibility (CO) and concordance odds ratio (COR) obtained during the collaborative study for the different methods

| Sample code | Criterion | M1 | M2 | M3 | M4 | M5 | M6 | M7 | M8 | M9 | M10 | M11 | M12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Number of laboratories | 8 | 11 | 3 | 9 | 5^d | 5 | 3^d | 4 | 4 | 5 | 3 | 11 |
| D7 | ASE(nb)^a | 15/16 | 22/22 | 6/6 | 18/18 | 10/10 | 10/10 | 6/6 | 8/8 | 8/8 | 10/10 | 6/6 | 22/22 |
|  | ASE(prop) | 0.94 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
|  | ASE(p)^b | 0.937 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS |
|  | DA | 0.94 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
|  | CO | 0.88 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
|  | COR | 2.14 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
|  | p^b,c | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS |
| D5 | ASE(nb) | 21/24 | 33/33 | 9/9 | 27/27 | 15/15 | 15/15 | 9/9 | 12/12 | 12/12 | 15/15 | 3/9 | 23/33 |
|  | ASE(prop) | 0.88 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.33 | 0.70 |
|  | ASE(p) | 0.116 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | <0.001 S*** | <0.001 S*** |
|  | DA | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.88 |
|  | CO | 0.75 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.33 | 0.52 |
|  | COR | inf^e | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | inf | 6.77 |
|  | p | 0.004 S** | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 0.036 S* | 0.002 S** |
| D4 | ASE(nb) | 22/24 | 30/33 | 8/9 | 24/27 | 14/15 | 14/15 | 9/9 | 12/12 | 12/12 | 14/15 | 9/9 | 18/33 |
|  | ASE(prop) | 0.92 | 0.91 | 0.89 | 0.89 | 0.93 | 0.93 | 1.00 | 1.00 | 1.00 | 0.93 | 1.00 | 0.55 |
|  | ASE(p) | 0.339 NS | 0.227 NS | 0.369 NS | 0.150 NS | 0.537 NS | 0.537 NS | 1.000 NS | 1.000 NS | 1.000 NS | 0.537 NS | 1.000 NS | <0.001 S*** |
|  | DA | 0.89 | 0.88 | 0.85 | 0.85 | 0.91 | 0.91 | 1.00 | 1.00 | 1.00 | 0.91 | 1.00 | 0.88 |
|  | CO | 0.84 | 0.83 | 0.78 | 0.80 | 0.87 | 0.87 | 1.00 | 1.00 | 1.00 | 0.87 | 1.00 | 0.49 |
|  | COR | 1.54 | 1.50 | 1.60 | 1.42 | 1.51 | 1.51 | 1.00 | 1.00 | 1.00 | 1.51 | 1.00 | 7.63 |
|  | p | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 0.001 S** |
| D3 | ASE(nb) | 19/24 | 25/33 | 1/9 | 19/27 | 14/15 | 12/15 | 9/9 | 12/12 | 12/12 | 12/15 | 9/9 | 20/33 |
|  | ASE(prop) | 0.79 | 0.76 | 0.11 | 0.70 | 0.93 | 0.80 | 1.00 | 1.00 | 1.00 | 0.80 | 1.00 | 0.61 |
|  | ASE(p) | 0.006 S** | <0.001 S*** | <0.001 S*** | <0.001 S*** | 0.537 NS | 0.036 S* | 1.000 NS | 1.000 NS | 1.000 NS | 0.036 S* | 1.000 NS | <0.001 S*** |
|  | DA | 0.94 | 0.84 | 0.85 | 0.90 | 0.91 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.76 |
|  | CO | 0.69 | 0.58 | 0.78 | 0.54 | 0.87 | 0.60 | 1.00 | 1.00 | 1.00 | 0.60 | 1.00 | 0.50 |
|  | COR | 7.04 | 3.80 | 1.60 | 7.67 | 1.51 | inf | 1.00 | 1.00 | 1.00 | inf | 1.00 | 3.17 |
|  | p | 0.016 S* | 0.034 S* | 1.000 NS | 0.003 S** | 1.000 NS | 0.011 S* | 1.000 NS | 1.000 NS | 1.000 NS | 0.011 S* | 1.000 NS | 0.117 NS |
| D0 | ASE(nb) | 2/16 | 1/22 | 0/6 | 1/18 | 1/10 | 5/10 | 5/6 | 5/8 | 5/8 | 8/10 | 5/6 | 1/22 |
|  | ASE(prop) | 0.13 | 0.05 | 0.00 | 0.06 | 0.10 | 0.50 | 0.83 | 0.63 | 0.63 | 0.80 | 0.83 | 0.05 |
|  | ASE(p) | 0.189 NS | 0.676 NS | 1.000 NS | 0.603 NS | 0.401 NS | <0.001 S*** | <0.001 S*** | <0.001 S*** | <0.001 S*** | <0.001 S*** | <0.001 S*** | 0.676 NS |
|  | DA | 0.88 | 0.95 | 1.00 | 0.94 | 0.90 | 0.90 | 0.83 | 0.88 | 0.88 | 1.00 | 0.83 | 0.95 |
|  | CO | 0.77 | 0.91 | 1.00 | 0.89 | 0.80 | 0.40 | 0.67 | 0.42 | 0.42 | 0.60 | 0.67 | 0.91 |
|  | COR | 2.19 | 1.88 | 1.00 | 1.94 | 2.25 | 13.50 | 2.40 | 10.13 | 10.13 | inf | 2.40 | 1.88 |
|  | p | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 1.000 NS | 0.238 NS | 1.000 NS | 0.429 NS | 0.429 NS | 0.111 NS | 1.000 NS | 1.000 NS |
|  | Overall DA | 0.93 | 0.93 | 0.94 | 0.94 | 0.94 | 0.96 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.89 |
|  | Overall CO | 0.79 | 0.86 | 0.91 | 0.85 | 0.91 | 0.77 | 0.93 | 0.88 | 0.88 | 0.81 | 0.80 | 0.68 |

^a ASE(nb): number of positive results/number of results, per dilution level (D) and per method (M); ASE(prop): proportion of positive results per dilution level and per method; ASE(p): p-value (exact binomial test) for the significance against the theoretical detection level of 95% (theoretical non-detection level of 95% for level D0). In case of a significant p-value, we can conclude that the probability of detection is significantly different from 95%

^b The italicized cells indicate statistical significance. NS: not significant (p ≥ 0.05); S*: 0.01 ≤ p < 0.05; S**: 0.001 ≤ p < 0.01; S***: p < 0.001

^c p-value of Fisher’s exact test used to evaluate the statistical significance of the variation between laboratories. In case of a significant p-value, we can conclude that the variation between laboratories is significant

^d After exclusion of outlier results (results of L15 for M5 and of L03 for M7)

^e inf: infinite (very high value)

Although some results appear inconsistent with the serial dilution (method M1 at the D5 dilution, method M11 at D5 and method M12 at D4 or D3), no evidence of outliers could be identified; thus, all data were included in the statistical analysis.

The best analytical sensitivity was obtained with methods M8, M9, M7 and M5, for which the target could be reliably detected down to the D3 level. For methods M1, M2, M3, M4, M6 and M10, this level corresponded to the D4 dilution. For methods M11 and M12, this level corresponded to the D7 dilution.

Repeatability, reproducibility and odds ratio

The overall repeatability (DA) of the PCR protocols (Table 5) was above 90%, and the overall reproducibility (CO) varied from 77% (M6) to 93% (M7). The CO was above 90% only for methods M3, M5 and M7. For isolation on semi-selective media, DA was 89% and CO was 68%.

While repeatability remained good for all methods (greater than 80%), reproducibility was poor for some methods (68% for M12, 77% for M6 and 79% for M1).

The concordance odds ratio was not significantly different from 1.00 at all dilutions for methods M3, M5, M7, M8 and M9 (Fisher’s exact test), meaning that no significant differences between laboratories were obtained with these methods. Significant variations between laboratories were identified for methods M2, M4, M6 and M10 only at the lowest dilution. Significant variations between laboratories were also identified at the D5 dilution for method M11, at the D5 and D3 dilutions for method M1 and at the D5 and D4 dilutions for M12.

Method comparison by Bayesian approach

Likelihood ratios are shown in Table 6. The LR+ values for methods M2, M3, M4 and M12 are high, indicating that these methods generate a large change from pre- to post-test probability. The reliability of a positive test result is, therefore, higher for these methods than for M1 and M5 (moderate change) and, more particularly, than for methods M6 to M11 (small change). The LR− of M2, M5, M7, M8 and M9 is very close or equal to zero, indicating that these methods generate a large change from pre- to post-test probability. The reliability of a negative test result is, therefore, much higher for these methods than for methods M1, M4 and M6 (moderate change) and, more particularly, than for methods M3, M10, M11 and M12 (small change).
Table 6

Comparison of likelihood ratios obtained during the collaborative study for the different methods

| Methods/criteria | LR+ value (95% CI)^a,c,d | Change from pre- to post-test probability | LR− value (95% CI)^b,c,d | Change from pre- to post-test probability |
| --- | --- | --- | --- | --- |
| M1 | 7.70 (1.91–25.65) | Moderate | 0.14 (0.08–0.26) | Moderate |
| M2^e | 20.00 (2.94–135.84) | Large | 0.10 (0.05–0.17) | Large |
| M3 | Inf (−) | Large | 0.27 (0.16–0.48) | Small |
| M4^e | 16.00 (2.38–107.62) | Large | 0.12 (0.07–0.21) | Moderate |
| M5^e | 9.64 (1.50–61.91) | Moderate | 0.04 (0.01–0.16) | Large |
| M6 | 1.85 (0.99–3.46) | Small | 0.15 (0.05–0.45) | Moderate |
| M7 | 1.20 (0.84–1.20) | Small | 0.00 (−) | Large |
| M8 | 1.60 (0.94–1.60) | Small | 0.00 (−) | Large |
| M9 | 1.60 (0.94–1.60) | Small | 0.00 (−) | Large |
| M10 | 1.16 (0.84–1.59) | Small | 0.36 (0.08–1.73) | Small |
| M11 | 0.98 (0.66–1.45) | Small | 1.09 (0.16–7.52) | Small |
| M12 | 15.09 (2.22–104.80) | Large | 0.33 (0.25–0.43) | Small |

The likelihood ratio (LR) is a useful tool for assessing the effectiveness of a diagnostic test, because it determines how much more probable it is to find a positive result in an infected sample than in a healthy one; in particular, the LR indicates how much a given diagnostic test result will raise or lower the pre-test probability of the disease in question

^a The positive likelihood ratio LR+ (95% confidence interval) was defined as the ratio DSE/(1 − DSP), where DSE refers to diagnostic sensitivity and DSP refers to diagnostic specificity

^b The negative likelihood ratio LR− was defined as the ratio (1 − DSE)/DSP, where DSE refers to diagnostic sensitivity and DSP refers to diagnostic specificity

^c Value of the likelihood ratio (95% confidence interval)

^d We present data derived from scenario H1 described in the Materials and methods section for the interpretation of indeterminate results

^e The bolded and the italicized cells identify the methods with the most efficient LR (bold more efficient than italic): only method M2 generates a large change from pre- to post-test probability both in case of positive and negative results. Methods M4 and M5 generate either a large or a moderate change from pre- to post-test probability according to the nature of the results

Only method M2 combines both a high LR+ and a high LR- (large change from pre- to post-test probability for both positive and negative results). Method M5 combines a high LR- and a moderate LR+ whereas method M4 combines a high LR+ and a moderate LR-.

The post-test probability of Psa infection (i.e. the probability of Psa infection established after a test result) can be displayed graphically (Fig. 2) as a function of the pre-test probability (i.e. Psa prevalence) and the likelihood ratio for each evaluated method, and also for the combination of the two most reliable methods (M5 and M2). Let us examine the case where the population presents a prevalence of 50%. First we can consider the solid curves (i.e. the post-test probabilities of Psa infection after a positive test result): the probability of a tested individual really being infected after a positive result is higher than 90% for methods M2, M3, M4, M5 and M12; it is between 80 and 90% for method M1 and lower than 65% for methods M6, M7, M8, M9, M10 and M11. Then we can consider the broken curves (i.e. the post-test probabilities of Psa infection after a negative test result): there is a 0.0% probability that the plant is infected by Psa when tested with methods M7, M8 and M9. This probability is only 3.9% for method M5. It increases to 8.7%, 10.5% and 12.5% for methods M2, M4 and M6, respectively. Conversely, relatively high probabilities of infection are found for samples testing negative with M3, M12, M10 and particularly M11 (52.2%).
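As a worked check of the quoted figures (a sketch using the DSE/DSP values of Table 4; small discrepancies with the text come from rounding):

```python
# Post-test probability of infection after a NEGATIVE result at 50% prevalence,
# recomputed from the diagnostic sensitivity (DSE) and specificity (DSP) of Table 4.

def post_prob_after_negative(dse: float, dsp: float, prevalence: float = 0.5) -> float:
    lr_minus = (1.0 - dse) / dsp                      # negative likelihood ratio
    post_odds = prevalence / (1.0 - prevalence) * lr_minus
    return post_odds / (1.0 + post_odds)

for name, dse, dsp in [("M5", 0.964, 0.900), ("M2", 0.909, 0.955), ("M4", 0.889, 0.944)]:
    print(name, f"{100 * post_prob_after_negative(dse, dsp):.1f}%")
# -> roughly 3.8%, 8.7% and 10.5%, in line with the values quoted above
```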

Part 2: analytical specificity

Analytical specificity was assessed through inclusivity and exclusivity. No false negatives were obtained when the Psa strains were assessed by the PCR-based methods (inclusivity of 100% for all molecular methods). Data on the analytical specificity of the real-time PCR were previously reported by Gallelli et al. (2014); it should be noted that M4 (simplex-PCR-C) and M5 (real-time PCR) are not able to detect strains of Psa bv. 1 and 2 because, as reported in Gallelli et al. (2014), these methods are specific for the diagnosis of Psa biovar 3 (the virulent population that has caused several bacterial canker outbreaks worldwide since 2008) (Table 3).

A high risk of false-positive results was observed when testing bacterial cultures of phylogenetically closely related Pseudomonas spp. or kiwifruit-associated bacteria (Table 3). The highest numbers of false-positive or inconclusive results were observed with the methods presenting the lowest rates of exclusivity: M10 (8%), M3 (17%), M1 (18%) and M6 (43%). The highest exclusivity was confirmed for M5 (100%) and M2 (19/20; 95%), followed by M4 (14/18; 78%). The bacterial species giving false positives for each method are reported in Table 3. Among the four atypical bacterial strains isolated from kiwifruit, three gave false positives and one gave an indeterminate result when using M3, whereas M4 and M7, M8, M9 produced only one false positive each. All strains of Pfm gave indeterminate results using M3, and all were false positives using M1, M6, M7, M8, M9 and M10. Two out of four false positives were obtained with M11 and one false positive with M4 (Table 3).

Discussion

In recent years, the high economic impact of bacterial canker on kiwifruit production has prompted the scientific community to study the epidemiology, control, plant-pathogen interactions and diagnostic methods for Psa in order to manage this destructive pathogen. Because no foolproof control strategy has been developed for Psa, special attention needs to be paid to disease monitoring and to the certification of the sanitary status of propagation material and other kiwifruit plant material. The availability of reliable and highly sensitive diagnostic methods is therefore of great importance. The study presented in this paper reports the results of 12 international laboratories and aimed to gather comparative data for several diagnostic methods in order to provide an objective measure of their performance, and to provide input for the improvement of the EPPO diagnostic protocol by including some of the newly validated methods.

Despite the several advantages of PCR-based methods, a potential limitation of these assays is the occurrence of false-positive results. The suitability of such tests to accurately assess the phytosanitary status of plant material is measured by diagnostic sensitivity (DSE) and diagnostic specificity (DSP) (Jacobson 1997). The simplex-PCR, duplex-PCR, simplex-PCR-C and real-time PCR (M1, M2, M4 and M5, respectively) showed acceptable values of DSE and DSP (88 to 96%), making them suitable as preliminary screening methods.

Conversely, M12 and M3 gave the highest values of DSP (95.5 and 100%, respectively) but a low DSE (68.6 and 72.7%). This latter result was predictable for isolation, which notoriously has a low sensitivity, as shown by the high number of false-negative results (38/121). In the case of multiplex-PCR, it could be influenced by the low number of participating laboratories (3), which were able to detect the pathogen in only one out of nine samples spiked with 10^3 CFU ml^−1 and in eight out of nine samples spiked with 10^4 CFU ml^−1.

Despite high diagnostic sensitivity (82 to 100%), the methods based on the nested-PCR of Biondi et al. (2013) showed very low diagnostic specificity (50% for M6, 16.7 to 37.5% for M7 to M9, and 16.7 to 20% for M10 and M11). These results are due to the false-positive responses obtained by amplification of contaminants, endophytes or epiphytes associated with the infected kiwifruit woody tissues. The presence of contaminants was observed on semi-selective mNSA (Mohan and Schaad 1987), on which the number of colonies with a morphology similar to that of Psa (i.e. levan positive on mNSA medium) was higher than expected from the spiked concentration of Psa. Therefore, additional identification tests on presumptive Psa colonies need to be performed. The high risk of obtaining false-positive results using the methods described by Biondi et al. (2013) can be explained by the fact that they were developed for the testing of bleeding sap samples, although extracts from kiwifruit cuttings artificially contaminated with Psa were tested as well (Biondi et al. 2013). So, despite being very sensitive, the lack of specificity makes this method inapplicable as a rapid screening method for assaying kiwifruit woody tissues. Nested-PCR has been reported to increase detection sensitivity and reduce the effect of PCR inhibitors (Kuchta et al. 2008; Zimmermann et al. 2004). However, the risk of false positives due to cross-contamination of reaction mixtures in routine analysis is increased by the introduction of a second PCR step and the simultaneous manipulation of previously amplified products (Roberts et al. 1996). A realistic alternative to avoid the manipulation of the PCR tubes between the first and second rounds of amplification is one-tube nested-PCR followed by the identification of the amplified fragment by restriction analysis (Llop et al. 2000; Bertolini et al. 2003).

Only the real-time PCR method M5 showed no significant variation with the theoretically expected results for all criteria (AC, DSE and DSP). Methods M1, M2, M3, M4 and M12 showed no significant variation with the theoretically expected results for DSP whereas methods M6, M7, M8, M9 and M10 showed no significant variation with the theoretically expected results for DSE. Conversely, nested-PCR (Biondi et al. 2013), M11, presented significant variation with the theoretically expected results for all criteria considered.

For analytical sensitivity, the best results were obtained for methods M7, M8, M9 and M5, for which the target could be reliably detected down to the D3 dilution (10^3 CFU ml^−1) (no significant difference from the theoretical detection level of 95%). For methods M1, M2, M3, M4, M6 and M10, this level corresponded to the D4 dilution (10^4 CFU ml^−1). For methods M11 and M12, this level corresponded to the D7 dilution (10^7 CFU ml^−1).

The first outcome of this work was a confirmation that isolation on semi-selective media (KBC or mNSA) gave a lower performance than the majority of the molecular methods. This was not surprising, since in the previous inter-laboratory testing it was already noted that direct PCR analysis of latently infected plant material was superior for the detection of Psa (Loreti et al. 2014). This was also confirmed when analysing the repeatability and reproducibility of the methods. Repeatability and reproducibility were lower for isolation on agar plates (89 and 68%) than for the PCR-based methods, for which repeatability was higher than 90% and reproducibility was above 90% for M3, M5 and M7. These results highlight that direct isolation requires highly skilled personnel, able to recognise and select putative Psa colonies on agar plates, where the growth of saprophytes may be quite intense and fast.

The significant variations identified between laboratories at the lowest dilutions (i.e. D5 for method M11, D5 and D3 for M1, and D5 and D4 for M12) show that such variation is laboratory dependent, thus confirming that skill and experience are needed for identification on agar plates.

The comparison of methods according to the Bayesian approach shows that methods M2 (duplex-PCR, Gallelli et al. 2011), M5 (real-time PCR, Gallelli et al. 2014) and M4 (simplex-PCR-C, Gallelli et al. 2014) combine good reliability of the test results for both positive and negative responses.

The Bayesian approach provides an overview of method performance, supplementing the traditional statistical approach and helping to choose the most appropriate detection scheme (i.e. combination of methods) according to the epidemiological context (Chabirand et al. 2017). The more data that are available per method, and the more balanced they are (a large number of participants and, if possible, the same number of participants per method), the more precise and reliable the performance assessment. However, these requirements can be difficult to reconcile with practical constraints, and the compromises that are often made can limit the generalisation of the results (e.g. only three laboratories implemented methods M3 and M11).
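
For readers less familiar with this approach, the figures discussed below can be related to the standard predictive-value form of Bayes' theorem. The expressions that follow are only a minimal sketch, written in terms of the pre-test prevalence p and a test's diagnostic sensitivity (DSE) and specificity (DSP); they are not the exact computation reported by Chabirand et al. (2017).

P(\text{infected} \mid +) = \frac{p \cdot \mathrm{DSE}}{p \cdot \mathrm{DSE} + (1 - p)(1 - \mathrm{DSP})}
\qquad
P(\text{infected} \mid -) = \frac{p\,(1 - \mathrm{DSE})}{p\,(1 - \mathrm{DSE}) + (1 - p)\,\mathrm{DSP}}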

For Psa detection, the disease prevalence is usually low. In this context, routine analyses should be performed using one of the best PCR-based methods (M5, M2 or M4). In the context of certification of healthy material (involving an accurate determination of the Psa-free status), the use of two detection methods (e.g. methods M2 and M5) should be favoured. Indeed, the accuracy of a negative (or positive) result is higher when both detection tests are used instead of only one. For instance, the post-test probability of infection is lower than 1% if a negative result is obtained with both methods M2 and M5 from a plant sampled in a population with up to 72% prevalence of infection (vs. 19% if method M5 is used alone). The risk of releasing infected material is minimised when both test results are negative, which is essential for the certification of Actinidia spp. plants, to ensure that the plant material will not present a risk of introducing or spreading Psa. Similarly, confirmation of a positive result by two detection tests can be relevant when the presence of Psa might lead to an official decision to uproot and destroy material suspected of being infected by Psa. Indeed, the post-test probability of infection is higher than 90% when a positive result is obtained with both method M2 and method M5 from a plant sampled in a population with at least 5% prevalence (vs. at least 32% prevalence if method M2 is used alone). The methods proposed by Biondi et al. (2013) (in particular M7, M8 and M9) can be reliable in the case of a negative result (0.0% probability that the plant is infected by Psa), but they must be used in combination with another method, particularly when the disease prevalence is low (10–25%), because the probability that an individual is truly infected after a positive result is lower than 15–35%, so the risk of a false positive is very high.
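
The gain obtained by combining two tests can be made concrete with a short numerical sketch. The snippet below applies the predictive-value formulas given earlier, under the additional assumption that the two tests are conditionally independent; the prevalence and the DSE/DSP values are illustrative placeholders and do not reproduce the estimates obtained in this study for M2 and M5.

# Minimal sketch of post-test probability calculations for one test and for two
# tests combined, assuming conditional independence of the tests. All numerical
# values below are illustrative placeholders, not estimates from this study.

def post_test_prob_positive(prevalence, dse, dsp):
    # Probability of infection given a positive result (positive predictive value).
    true_pos = prevalence * dse
    false_pos = (1 - prevalence) * (1 - dsp)
    return true_pos / (true_pos + false_pos)

def post_test_prob_negative(prevalence, dse, dsp):
    # Probability of infection given a negative result (residual risk after a negative test).
    false_neg = prevalence * (1 - dse)
    true_neg = (1 - prevalence) * dsp
    return false_neg / (false_neg + true_neg)

def combine_both_positive(dse1, dsp1, dse2, dsp2):
    # Combined DSE/DSP when a sample is called positive only if BOTH tests are positive.
    return dse1 * dse2, 1 - (1 - dsp1) * (1 - dsp2)

def combine_both_negative(dse1, dsp1, dse2, dsp2):
    # Combined DSE/DSP when a sample is called negative only if BOTH tests are negative.
    return 1 - (1 - dse1) * (1 - dse2), dsp1 * dsp2

if __name__ == "__main__":
    prevalence = 0.10                 # hypothetical 10% disease prevalence
    dse_a, dsp_a = 0.95, 0.98         # illustrative values for test A (e.g. a duplex-PCR-like test)
    dse_b, dsp_b = 0.97, 0.99         # illustrative values for test B (e.g. a real-time-PCR-like test)

    print("Test A alone, positive result:",
          round(post_test_prob_positive(prevalence, dse_a, dsp_a), 4))
    print("Test A alone, negative result:",
          round(post_test_prob_negative(prevalence, dse_a, dsp_a), 4))

    dse_p, dsp_p = combine_both_positive(dse_a, dsp_a, dse_b, dsp_b)
    print("A and B both positive:",
          round(post_test_prob_positive(prevalence, dse_p, dsp_p), 4))

    dse_n, dsp_n = combine_both_negative(dse_a, dsp_a, dse_b, dsp_b)
    print("A and B both negative:",
          round(post_test_prob_negative(prevalence, dse_n, dsp_n), 4))

With such hypothetical inputs, the residual risk of infection after two negative results is substantially lower than after a single negative result, and a positive call confirmed by both tests retains a high post-test probability even at low prevalence, which is the rationale for combining tests when certifying Psa-free material or confirming an outbreak.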

The high specificity of the duplex-PCR (M2) (Gallelli et al. 2011) is ensured by the simultaneous amplification of two targets. The false-positive results obtained by the simplex-PCR of Rees-George et al. (2010) with strains of Pfm make this latter method (M1) less reliable than simplex-PCR-C (M4). The occurrence of false-positive or indeterminate results with Pfm (but also with atypical strains from kiwifruit) is also a crucial aspect for the methods based on Biondi et al. (2013) (M6-M11) and for the multiplex-PCR (Balestra et al. 2013) (M3). This aspect should be taken into consideration for the identification of putative Psa colonies and for the preliminary screening of infected plant material. Pfm is a pathogen of kiwifruit which induces symptoms on leaves similar to those induced by Psa, but it does not cause canker, has a low economic impact and is not regulated; by relying on method M1, plants could therefore be unnecessarily destroyed. Real-time PCR (M5) could be an alternative method that offers the advantages of high sensitivity, specificity and rapidity, since, in contrast with conventional PCR, it does not require gel electrophoresis. As previously mentioned, this method is specific for the detection of Psa biovar 3, considered to cause the most serious disease on account of its aggressiveness and rapid spread (Scortichini et al. 2012; Young 2012). This method can be used for routine analysis, either as a first-screening assay to exclude the presence of this dangerous population or as an identification test to confirm the identity of suspect colonies. Moreover, its use in combination with another test is suggested for the diagnosis of critical or symptomless samples.

Finally, the experience reported in this paper provides new information for the revision and implementation of the official diagnostic protocols (i.e. EPPO protocol PM7/120; European Plant Protection Organization 2014b).


Acknowledgements

This study was performed in the framework of ERA-NET 266505 EUPHRESCO II PSADID (Pseudomonas syringae pv. actinidiae (PSA): diagnosis, detection, identification and study of epidemiological aspects - PSADID). The authors acknowledge the staff of the participating laboratories involved in this collaborative study, in particular: V. Modesti (CREA-DC), C. Audusseau, C. François, S. Paillard, C. Rivoal (ANSES-LSV), J. Schaffer (AGES), P.E. Glynos (BPI), J. Yu.

Compliance with ethical standards

Ethical approval

This article is original and not published elsewhere. The authors discussed the results, read and approved the final manuscript. The authors confirm that there are no ethical issues in publication of the manuscript.

Conflict of interest

The authors declare no conflict of interest.

Supplementary material

10658_2018_1509_MOESM1_ESM.doc (52 kb)
ESM 1 (DOC 52 kb)
10658_2018_1509_MOESM2_ESM.doc (80 kb)
ESM 2 (DOC 79 kb)
10658_2018_1509_MOESM3_ESM.doc (403 kb)
ESM 3 (DOC 403 kb)

References

  1. Balestra, G. M., Taratufolo, M. C., Vinatzer, B. A., & Mazzaglia, A. (2013). A multiplex PCR assay for detection of Pseudomonas syringae pv. actinidiae and differentiation of populations with different geographic origin. Plant Disease, 97, 472–478.
  2. Bertolini, E., Penyalver, R., Garcia, A., Olmos, A., Quesada, J. M., Cambra, M., et al. (2003). Highly sensitive detection of Pseudomonas savastanoi pv. savastanoi in asymptomatic olive plants by nested-PCR in a single closed tube. Journal of Microbiological Methods, 52(2), 261–266.
  3. Biondi, E., Galeone, A., Kuzmanovic, N., Ardizzi, S., Lucchese, C., & Bertaccini, A. (2013). Pseudomonas syringae pv. actinidiae detection in kiwifruit plant tissue and bleeding sap. Annals of Applied Biology, 162, 60–70.
  4. Chabirand, A., Loiseau, M., Renaudin, I., & Poliakoff, F. (2017). Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology. PLoS ONE, 12(4), e0175247. https://doi.org/10.1371/journal.pone.0175247.
  5. Chapman, J. R., Taylor, R. K., Weir, B. S., Romberg, M. K., Vanneste, J. L., Luck, J., & Alexander, B. J. R. (2012). Phylogenetic relationships among global populations of Pseudomonas syringae pv. actinidiae. Phytopathology, 102, 1034–1044.
  6. Cunty, A., Poliakoff, F., Rivoal, C., Cesbron, S., Fischer-Le Saux, M., Lemaire, C., Jacques, M. A., Manceau, C., & Vanneste, J. L. (2015). Characterization of Pseudomonas syringae pv. actinidiae (Psa) isolated from France and assignment of Psa biovar 4 to a de novo pathovar: Pseudomonas syringae pv. actinidifoliorum pv. nov. Plant Pathology, 64, 582–596.
  7. European Plant Protection Organization. (2012). Final pest risk analysis for Pseudomonas syringae pv. actinidiae. Paris: EPPO.
  8. European Plant Protection Organization. (2014a). PM7/98(2) Specific requirements for laboratories preparing accreditation for a plant pest diagnostic activity. EPPO Bulletin/Bulletin OEPP, 44, 117–147.
  9. European Plant Protection Organization. (2014b). PM7/120(1) Pseudomonas syringae pv. actinidiae. EPPO Bulletin/Bulletin OEPP, 44(3), 360–375.
  10. European Plant Protection Organization. (2014c). PM7/122(1) Guidelines for the organization of interlaboratory comparisons by plant pest diagnostic laboratories. EPPO Bulletin/Bulletin OEPP, 44(3), 390–399.
  11. European Plant Protection Organization. (2017). PM7/76(4) Use of EPPO diagnostic protocols. EPPO Bulletin/Bulletin OEPP, 47, 7–9.
  12. Ferrante, P., & Scortichini, M. (2014). Redefining the global populations of Pseudomonas syringae pv. actinidiae based on pathogenic, molecular and phenotypic characteristics. Plant Pathology, 64, 51–62.
  13. Gallelli, A., L'Aurora, A., & Loreti, S. (2011). Gene sequence analysis for the molecular detection of Pseudomonas syringae pv. actinidiae: developing diagnostic protocols. Journal of Plant Pathology, 93, 425–435.
  14. Gallelli, A., Talocci, S., Pilotti, M., & Loreti, S. (2014). Real-time and qualitative PCR for detecting Pseudomonas syringae pv. actinidiae isolates causing recent outbreaks of kiwifruit bacterial canker. Plant Pathology, 63, 264–272.
  15. Gardan, L., Shafik, H., Belouin, S., Broch, R., Grimont, F., & Grimont, P. A. D. (1999). DNA relatedness among the pathovars of Pseudomonas syringae and description of Pseudomonas tremae sp. nov. and Pseudomonas cannabina sp. nov. (ex Sutic and Dowson 1959). International Journal of Systematic Bacteriology, 49, 469–478.
  16. International Organization for Standardization. (2003). ISO 16140:2003 Microbiology of food and animal feeding stuffs - Protocol for the validation of alternative methods.
  17. Jacobson, R. (1997). Principles of validation of diagnostic assays for infectious diseases. In R. Reichar (Ed.), Manual of standards for diagnostic tests and vaccines (Vol. 64, 3rd ed.). Paris: Office International des Epizooties.
  18. King, E. O., Ward, M. K., & Raney, D. E. (1954). Two simple media for the demonstration of pyocyanin and fluorescin. Journal of Laboratory and Clinical Medicine, 44, 301–307.
  19. Koch, G. G., Freeman, D. H., & Freeman, J. L. (1975). Strategies in the multivariate analysis of data from complex surveys. International Statistical Review, 43, 59–78.
  20. Koh, Y. J., Cha, B. J., Chung, H. J., & Lee, D. H. (1994). Outbreak and spread of bacterial canker in kiwifruit. Korean Journal of Plant Pathology, 10, 68–72.
  21. Kuchta, P., Jecz, T., & Korbin, M. (2008). The suitability of PCR-based techniques for detecting Verticillium dahliae in strawberry plants and soil. Journal of Fruit and Ornamental Plant Research, 16, 295–304.
  22. Llop, P., Bonaterra, A., Peñalver, J., & López, M. M. (2000). Development of a highly sensitive nested-PCR procedure using a single closed tube for detection of Erwinia amylovora in asymptomatic plant material. Applied and Environmental Microbiology, 66(5), 2071–2078.
  23. Loreti, S., Pucci, N., Gallelli, A., Minardi, P., Ardizzi, S., Balestra, G. M., et al. (2014). Experience from the Italian inter-laboratory study on the detection of Pseudomonas syringae pv. actinidiae. Phytopathologia Mediterranea, 53, 159–167.
  24. Mohan, K. S., & Schaad, N. W. (1987). An improved agar plating assay for detecting Pseudomonas syringae pv. syringae and P. s. pv. phaseolicola in contaminated bean seeds. Phytopathology, 77, 1390–1395.
  25. Rao, J., & Scott, A. (1984). On chi-squared tests for multiway contingency tables with proportions estimated from survey data. Annals of Statistics, 12, 46–60.
  26. Rees-George, J., Vanneste, J. L., Cornish, D. A., Pushparajah, I. P. S., Yu, J., Templeton, M. D., et al. (2010). Detection of Pseudomonas syringae pv. actinidiae using polymerase chain reaction (PCR) primers based on the 16S-23S rDNA intertranscribed spacer region and comparison with PCR primers based on other gene regions. Plant Pathology, 59, 453–464.
  27. Roberts, P. D., Jones, J. B., Chandler, C. K., Stall, R. E., & Berger, R. D. (1996). Survival of Xanthomonas fragariae on strawberry in summer nurseries in Florida detected by specific primers and nested PCR. Plant Disease, 80, 1283–1288.
  28. Sarkar, S. F., & Guttman, D. S. (2004). Evolution of the core genome of Pseudomonas syringae, a highly clonal, endemic plant pathogen. Applied and Environmental Microbiology, 70, 1999–2012.
  29. Scortichini, M. (1994). Occurrence of Pseudomonas syringae pv. actinidiae on kiwifruit in Italy. Plant Pathology, 43, 1035–1038.
  30. Scortichini, M., Marcelletti, S., Ferrante, P., Petriccione, M., & Firrao, G. (2012). Pseudomonas syringae pv. actinidiae: a re-emerging, multi-faceted, pandemic pathogen. Molecular Plant Pathology, 13, 631–640.
  31. Takikawa, Y., Serizawa, S., Ichikawa, T., Tsuyumu, S., & Goto, M. (1989). Pseudomonas syringae pv. actinidiae pv. nov.: the causal bacterium of canker of kiwifruit in Japan. Japanese Journal of Phytopathology, 55, 437–444.
  32. Thomas, D. R., & Rao, J. N. K. (1990). Small-sample comparison of level and power for simple goodness-of-fit statistics under cluster sampling. Journal of the American Statistical Association, 82, 630–636.
  33. Vanneste, J. L., Yu, J., Cornish, D. A., Tanner, D. J., Windner, R., Chapman, J. R., Taylor, R. K., & Mackay, J. F. (2013). Identification, virulence, and distribution of two biovars of Pseudomonas syringae pv. actinidiae in New Zealand. Plant Disease, 97, 708–719.
  34. Young, J. M. (2012). Pseudomonas syringae pv. actinidiae in New Zealand. Journal of Plant Pathology, 94, S1.5–S1.10.
  35. Zimmermann, C., Hinrichs-Berger, J., Moltmann, E., & Buchenauer, H. (2004). Nested PCR (polymerase chain reaction) for detection of Xanthomonas fragariae in symptomless strawberry plants. Journal of Plant Diseases and Protection, 111, 39–51.

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Stefania Loreti (1)
  • Amandine Cunty (2)
  • Nicoletta Pucci (1)
  • Aude Chabirand (3)
  • Emilio Stefani (4)
  • Adela Abelleira (5)
  • Giorgio M. Balestra (6)
  • Deirdre A. Cornish (7)
  • Francesca Gaffuri (8)
  • Davide Giovanardi (4)
  • Richard A. Gottsberger (9)
  • Maria Holeva (10)
  • Aynur Karahan (11)
  • Charikleia D. Karafla (10)
  • Angelo Mazzaglia (6)
  • Robert Taylor (12)
  • Leonor Cruz (13)
  • Maria M. Lopez (14)
  • Joel L. Vanneste (7)
  • Françoise Poliakoff (2)
  1. Consiglio per la Ricerca in Agricoltura e l'Analisi dell'Economia Agraria, Centro di Difesa e Certificazione (CREA-DC), Roma, Italy
  2. Plant Health Laboratory (LSV), French National Agency for Food, Environmental and Occupational Health & Safety (ANSES), Angers, France
  3. Unit for Tropical Pests and Diseases, Plant Health Laboratory (LSV), French National Agency for Food, Environmental and Occupational Health & Safety (ANSES), Saint-Pierre, France
  4. Department of Life Sciences, Università degli Studi di Modena e Reggio Emilia (UNIMORE), Reggio Emilia, Italy
  5. Deputación de Pontevedra, Estación Fitopatolóxica Areeiro, Pontevedra, Spain
  6. Department of Science and Technology for Agriculture, Forestry, Nature and Energy (DAFNE), Università 'La Tuscia' di Viterbo, Viterbo, Italy
  7. The New Zealand Institute for Plant & Food Research, Private Bag 3230, Waikato Mail Centre, Hamilton 3240, New Zealand
  8. Laboratorio Fitopatologico, Regione Lombardia, Vertemate con Minoprio (CO), Italy
  9. Institute for Sustainable Plant Production, Department for Molecular Diagnostics of Plant Diseases, Austrian Agency for Health and Food Safety (AGES), Spargelfeldstr., Wien, Austria
  10. Department of Phytopathology, Laboratory of Bacteriology, Benaki Phytopathological Institute (BPI), Attica, Greece
  11. Plant Protection Central Research Institute (PPCRI), Gayret Mah., 06172 Yenimahalle, Ankara, Turkey
  12. Plant Health & Environment Laboratory, Diagnostic and Surveillance Services, Biosecurity New Zealand, Ministry for Primary Industries (MPI-PHEL), Auckland, New Zealand
  13. Instituto Nacional de Investigação Agrária e Veterinária (INIAV), UEIS-SAFSV, Laboratório de Fitobacteriologia, Oeiras, Portugal
  14. Instituto Valenciano de Investigaciones Agrarias (IVIA), Centro de Proteccion Vegetal y Biotechnologia, Carretera Moncada, Moncada (Valencia), Spain
