The ethics committee at the University of Western Sydney approved the study proposal (Protocol No. H9824).
A two-round Delphi study was conducted to inform the topics to be addressed by the HHALTER project. It consisted of two online surveys hosted by SurveyMonkey™ with each survey constituting a ‘round’ of data collection. This study was limited to two rounds because we wished to understand priority areas for the project as raised by participants and the differences between the stakeholder groups in terms of their research priorities, but were not seeking consensus on the research topics to be included in the project.
Stakeholders were identified through a number of sources, including the professional contact network of the HHALTER project steering committee, relevant conference proceedings, and stakeholder websites. Individuals contacted included: policy developers and implementers in key government agencies in all states and territories; known experts engaged in a range of Hendra virus-related activities; research leaders in charge of National Hendra Virus Research Program funded projects; members of the Intergovernmental Hendra Virus Taskforce; and public health leaders in Hendra virus-affected states. Those initially contacted via email were asked to identify stakeholders who may have been missed, enabling the research team to identify additional stakeholders.
Round one questionnaire
An overview of the HHALTER project and a link to the round one questionnaire were distributed to all of the stakeholders using an email collector created in SurveyMonkey™. On page two of the questionnaire, participants were advised that “completion of the following questionnaire indicates that you have understood to your satisfaction the information regarding participation in the research project and agree to participate”. Stakeholders were asked to specify the jurisdictional level at which their organization operated; response options included local, state, or federal/national. Participants whose organization operated at the state level were then asked to specify the state in which it operated. Finally, participants were given the following instruction: ‘please list topic areas relating to horse owners and Hendra virus that you think should be priority areas for questions posed to horse owners in the surveys conducted by the HHALTER project’. Ten separate spaces for listing topic areas were provided. An additional documentation file contains this questionnaire in its entirety (see Additional file 1).
Development of the round two questionnaire
Responses from round one were analysed and used to construct a round two questionnaire. Working independently, two members of the research team (KS and MRT) read and re-read the responses to the final question of the round one questionnaire and coded them using codes that emerged from the data. The aim was to identify concepts, the basic units of analysis in thematic analysis. During the identification of concepts, the central meaning of each suggested topic area was described in a short statement, referred to here as a code. Concepts were grouped into categories, groups of suggested topics that shared common features. Similarly, categories were organized around themes. Thematic analysis allowed KS and MRT to establish categories and themes that recurred throughout the data and to organize the data around those themes. They then compared their findings and discussed them at length in order to arrive at an agreed-upon list of 18 themes. A desired number of themes was not established at the beginning of data analysis, although it was agreed that, if possible, the list should not exceed 20, as a greater number of themes would translate into a round two questionnaire taking longer than 15 minutes to complete. This analysis was carried out using Microsoft Excel (Microsoft Office Excel 2011).
Implementation of the round two questionnaire
The proposed themes and associated categories generated through analysis of the round one data were then presented, in a second questionnaire, to those who completed the round one questionnaire. In the round two questionnaire, themes and categories were referred to as topics and subtopics. Participants were shown each topic and its associated subtopics on a separate page of the questionnaire and asked to rate the importance of each topic to their role/professional position on a 5-point unipolar scale. The scale was fully labeled: ‘not very important’, ‘somewhat important’, ‘moderately important’, ‘important’, and ‘extremely important’. The order in which the topics were presented was randomized for each participant. On the final page of the questionnaire, the complete list of topics was shown and participants were asked to select their top five priority topic areas for the HHALTER project. The ordering of this list was also randomized for each participant. An additional documentation file contains this questionnaire in its entirety (see Additional file 2).
Statistical data analysis
All statistical analyses were performed using SAS statistical software (release 9.3 © 2002–2008, SAS Institute Inc., Cary, NC, USA) and checked using R (version 2.15.2 © 2012, The R Foundation for Statistical Computing). All figures were produced using Microsoft Excel (Microsoft Office Excel 2011).
Contingency tables were used to assess the response rates to the round one and round two questionnaires, as well as the overall response rates, for the different stakeholder groups. Contingency tables were also used to examine the relationship between topic ratings and stakeholder group. Nominal variables, including jurisdictional level and state, were summarized with frequency distributions.
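As a minimal illustration of this tabulation step (not the authors' SAS code; group names and response outcomes below are invented for the example), a contingency table of stakeholder group by response status yields the per-group response rates:

```python
# Illustrative sketch: tabulating survey response by stakeholder group
# and deriving per-group response rates from the contingency table.
# Group names and outcomes are hypothetical.
from collections import Counter

# (stakeholder group, responded?) for each invited stakeholder
invitations = [
    ("policy", True), ("policy", False), ("policy", True),
    ("research", True), ("research", True),
]

table = Counter(invitations)  # contingency table: (group, responded) -> count

rates = {}
for group in {g for g, _ in invitations}:
    responded = table[(group, True)]
    total = responded + table[(group, False)]
    rates[group] = responded / total

print(rates)
```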
Exploration of the contingency tables for topic ratings revealed that, as expected, responses were skewed toward the upper end of the scale and a large number of cells contained counts of less than five. Therefore, the decision was made to create two outcome variables based on the individual topic ratings. To create the first outcome variable, considerably important, the rating categories ‘important’ and ‘extremely important’ were collapsed into a single rating category (considerably important, coded 1) and the rating categories ‘not very important’, ‘somewhat important’, and ‘moderately important’ were collapsed into a single rating category (not considerably important, coded 0). To create the second outcome variable, extremely important, the rating category ‘extremely important’ was coded 1 and the rating categories ‘not very important’, ‘somewhat important’, ‘moderately important’, and ‘important’ were collapsed into a single rating category (not extremely important, coded 0).
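The two dichotomizations above can be sketched as follows (an illustrative Python sketch, not the authors' SAS/R code, assuming ratings are coded 1–5 from ‘not very important’ to ‘extremely important’):

```python
# Collapsing the 5-point importance ratings into the two binary
# outcome variables described in the text. Ratings: 1 = 'not very
# important' ... 5 = 'extremely important'.

def considerably_important(rating: int) -> int:
    """1 if the rating is 'important' (4) or 'extremely important' (5)."""
    return 1 if rating >= 4 else 0

def extremely_important(rating: int) -> int:
    """1 only if the rating is 'extremely important' (5)."""
    return 1 if rating == 5 else 0

ratings = [2, 5, 4, 3, 5, 1]  # hypothetical ratings from one participant
print([considerably_important(r) for r in ratings])  # → [0, 1, 1, 0, 1, 0]
print([extremely_important(r) for r in ratings])     # → [0, 1, 0, 0, 1, 0]
```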
Exploration of the overall rating and ranking of the topic areas
All of the questions that involved rating topics were stacked to create a single variable representing the overall rating for each of the 18 topic areas. This variable was used as the outcome in logistic regression analyses to assess the probability of each topic being rated ‘considerably important’ or ‘extremely important’. An individual stakeholder identification number was included as a random effect to account for similarity in responses from the same stakeholder. The top five priority areas selected on the final page of the round two questionnaire were coded initially as a binary variable (yes/no) and then all the priority areas were stacked to create a single variable representing priorities for all stakeholders across all questions. Analyses similar to those described above were conducted to evaluate stakeholders’ ranking of priorities.
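The ‘stacking’ step amounts to reshaping one record per stakeholder (a rating per topic) into a long format with one row per stakeholder-topic pair, with the stakeholder identification number retained as the random-effect grouping variable. A hypothetical sketch (topic names and ratings are invented; the paper's analyses used SAS and R):

```python
# Reshape wide records (one per stakeholder) into long format, one row
# per stakeholder-topic pair, carrying the binary 'considerably
# important' outcome. Topic names and ratings are hypothetical.

wide = [
    {"id": 1, "Biosecurity": 5, "Vaccination": 4, "Communication": 2},
    {"id": 2, "Biosecurity": 3, "Vaccination": 5, "Communication": 4},
]

long_rows = []
for record in wide:
    for topic, rating in record.items():
        if topic == "id":
            continue
        long_rows.append({
            "id": record["id"],  # random-effect grouping variable
            "topic": topic,
            "considerably_important": 1 if rating >= 4 else 0,
        })

print(len(long_rows))  # 2 stakeholders x 3 topics = 6 rows
```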
Exploration of the response rates and topic area ratings by stakeholder group
The associations of stakeholder group with the response rates to the round one and round two questionnaires, as well as the overall response rates, were assessed in logistic regression analyses.
The association of stakeholder group with each of the binary level of importance outcome variables for each of the proposed topics was assessed in logistic regression analyses. However, even when the rating categories were collapsed to create the two binary outcome variables, some cells of the contingency tables contained small counts, including zeros, and as a result the parameter estimates in logistic regression were biased. To deal with this separation of the data, each of the binary outcome variables was assessed with the logistf package in R, which uses the penalized maximum likelihood approach developed by Firth [9, 10].