Introduction

Clinical Practice Guidelines (CPGs) are designed both to guide treatment decision making through synthesis of the best available evidence and to reduce unwarranted clinical variation [1]. Cancer treatment CPG-adherent care is associated with improved patient outcomes [2, 3]; however, non-adherence persists [4,5,6,7,8,9,10,11,12,13,14,15]. To explore perceived barriers and facilitators to cancer treatment CPG adherence in Australia, a mixed methods study [16] was conducted. Previously published results include: a review characterising perceived determinants of cancer treatment CPG adherence [14]; a review presenting cancer treatment CPG-adherence rates and associated factors in Australia [15]; and a qualitative study identifying perceived barriers and facilitators to cancer CPG adherence in New South Wales (NSW), Australia, mapped across five themes [17]. This manuscript reports the second empirical phase of the sequential study [16], which aimed to quantify and generalise the previously identified qualitative findings [17] in a broader sample of oncologists across Australia. Survey results are presented and integrated with previous qualitative findings.

Main text

Survey methods

A purpose-designed survey was developed, informed by interview findings from a previous study [17] and the literature [18,19,20,21]. The 13-question, self-administered survey assessed clinician attitudes and demographic details. One question comprised the previously validated ‘Attitudes towards clinical practice guidelines’ tool, which required a 6-point Likert scale response (strongly agree to strongly disagree) to each of 18 sub-questions [22, 23]. Approximately n = 200 completed surveys were anticipated [16].

Three approaches to recruitment were used:

1) In February 2020, senior oncologists from seven hospitals (across four geographical catchments, representing half of the population of NSW, Australia [24]) invited, via email, an unspecified number of hospital-based oncologists to complete the survey, sending a reminder in March 2020. The invitation included a survey pack (an online survey link, a participant information and consent form, and a gift-card draw entry form). Purposive sampling by the senior oncologists ensured clinicians from a range of seniority levels and disciplines were invited [25]. Recruitment was paused because of concerns regarding coronavirus disease (COVID-19) pandemic-related clinician burnout [26] and recommenced in March 2021.

2) In May 2021, the Clinical Oncology Society of Australia (COSA) emailed invitations to participate, with a survey pack, to 257 oncology specialist Society members; reminders were sent in June 2021.

3) In June 2022, 290 hard copy invitations and surveys (along with a survey pack and reply-paid envelope) were posted to clinicians across the seven hospitals who were listed on a NSW Government website of oncology Multidisciplinary Team (MDT) members (“CANREFER” [27]). This excluded clinicians who had previously completed the survey. All participants were encouraged to forward the survey link to colleagues.

Analysis

Descriptive statistics of the characteristics of the clinician sample were calculated using the Statistical Package for the Social Sciences, version 21 (SPSS, Chicago, USA), and presented as counts and proportions. An attitude score was calculated using the Attitudes Regarding Practice Guidelines tool [22, 23]. An analysis of variance was conducted to assess the statistical significance of differences in mean scores across clinician subgroups (Supplementary File 1). The associations between frequency of referring to CPGs, clinician demographics and practice patterns were explored, with statistical significance assessed by Fisher’s exact test [28]. Thematic analysis was conducted to examine open-text responses [29].
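
To illustrate the kinds of analyses described above outside SPSS, the sketch below reproduces comparable descriptive statistics, a one-way analysis of variance and a Fisher’s exact test in Python. It is illustrative only: the subgroup attitude scores are invented, and the 2 × 2 table is reconstructed from proportions reported later in the results (routine versus less frequent CPG referral by number of cancer streams treated), not taken from the study dataset.

```python
# Illustrative sketch only: the study used SPSS v21; this reproduces the same
# kinds of analyses in Python. Attitude scores below are invented; the 2x2 table
# is reconstructed from proportions reported in the results, not the raw data.
import pandas as pd
from scipy import stats

# Hypothetical summed attitude scores (18 items, 6-point Likert) by specialty
scores = pd.DataFrame({
    "specialty": ["MO"] * 5 + ["RO"] * 5 + ["Surgeon"] * 5,
    "attitude_score": [44, 47, 41, 50, 39, 43, 45, 38, 46, 42, 40, 48, 37, 45, 44],
})

# Descriptive statistics (counts, means, SDs) per subgroup
print(scores.groupby("specialty")["attitude_score"].describe())

# One-way ANOVA: do mean attitude scores differ across specialties?
groups = [g["attitude_score"].to_numpy() for _, g in scores.groupby("specialty")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Fisher's exact test on a 2x2 table: routine vs. less frequent CPG referral
# by number of cancer streams treated (two or more vs. one)
table = [[13, 4],    # routine referrers: >=2 streams, 1 stream
         [12, 17]]   # less frequent referrers: >=2 streams, 1 stream
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
```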

Mixed method data integration of survey findings with previously reported interview findings was conducted at the methods level, through building, where interview findings informed the survey development [30]. The triangulation of the two data sets and data collection methods aimed to enhance trustworthiness [31, 32] and corroborate the interview findings through a larger sample [30]. The thematically coded interview data were previously quantitised [33] and integrated with the survey data at the data interpretation and presentation level [30] through a visual display of a thematic conceptual matrix [30, 34,35,36]. Each comparable finding from the interview and survey studies was identified and assigned an alphanumeric code, using the Methods for Aggregating The Reporting of Interventions in Complex Studies (MATRICS) method [35] (interview findings represented by “A”, survey findings by “B”, and each interview subtheme represented by a number). Survey findings were assessed as: (1) convergent with interview findings (in agreement); (2) offering complementary information on the same issue; or (3) contradictory (discordant) [35, 37]. Interview subthemes that were not assessed in the survey were labelled silent (expected) [37]. Findings were labelled discordant, neutral [38], or in agreement if less than, exactly, or more than 50% of survey respondents reported the finding, respectively. Findings were merged into summary statements [35] and presented within the qualitative thematic framework [17].
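
As a minimal sketch of the labelling rule described above, the snippet below classifies a survey finding as discordant, neutral or in agreement according to the proportion of respondents reporting it. The finding codes and counts are hypothetical and the code is not part of the study’s analysis.

```python
# Minimal sketch of the integration labelling rule: a survey finding is labelled
# according to the proportion of respondents who reported it. Codes and counts
# below are hypothetical, not the study's data.
def label_finding(n_reporting: int, n_respondents: int) -> str:
    """Label a survey finding relative to the interview finding it maps to."""
    proportion = n_reporting / n_respondents
    if proportion > 0.5:
        return "agreement"    # more than 50% of respondents reported the finding
    if proportion < 0.5:
        return "discordant"   # fewer than 50% reported it
    return "neutral"          # exactly 50%

# Example: hypothetical survey finding codes ("B" = survey, per MATRICS coding)
findings = {"1B": (33, 46), "6B": (20, 46), "10B": (23, 46)}
for code, (reported, asked) in findings.items():
    print(code, label_finding(reported, asked))
```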

Survey results

In total, 48 surveys were completed (19, 15 and 14 returned in each wave, respectively), yielding estimated response rates of 5.8% (15/257) and 4.8% (14/290) in the second and third waves (with six postal surveys returned to sender). An overall response rate was not calculated owing to snowball sampling and the unknown number of clinicians approached in wave one.

Most clinicians were aged 40–69 years and practising in NSW. Most were medical oncologists (MOs; n = 15), radiation oncologists (ROs; n = 10) or surgeons (n = 15), practising as staff specialists or Visiting Medical Officers, who graduated from medicine between 1980 and 2009 and completed their medical and oncology training in Australia. Clinicians worked across a mix of public and private practice and typically practised only in metropolitan hospitals. Most clinicians were members of an MDT, attending 1–4 MDTs more than once per week. The most common cancers treated were breast, colorectal, upper gastrointestinal, lung, skin and haematological cancers. Just under half of the clinicians treated one cancer stream, while almost a third treated four or more cancer streams (Table 1). Most surgeons (11/15) and haematologists (3/3) reported treating one cancer stream, while most MOs (12/15) and ROs (8/10) reported treating multiple cancer streams.

Table 1 Participant demographic characteristics

Clinicians reported staying up-to-date by attending conferences (n = 39), reading journals (n = 36), attending MDTs (n = 23), discussing cases with colleagues (n = 15) and attending journal clubs or educational meetings (n = 12). Many clinicians routinely or occasionally referred to CPGs when making treatment decisions (33/46) and estimated that their practice was routinely adherent with CPG recommendations (32/46). Of clinicians who routinely referred to CPGs, most (13/17) treated two or more cancer streams, while those who referred to CPGs less frequently typically treated only one cancer stream (17/29). Clinicians mainly reported using CPGs developed by eviQ (n = 22), the National Comprehensive Cancer Network (n = 22), the American Society of Clinical Oncology (n = 15), Cancer Council Australia/the National Health and Medical Research Council (n = 13) and the European Society for Medical Oncology (n = 12). Clinicians referred to CPGs to ensure their practice was current and evidence based (n = 23), to support treatment plans for complex/rare/unfamiliar cases (n = 12), and/or to seek consensus opinion when evidence was lacking (n = 2). Clinicians reported not referring to CPGs when CPGs were out-of-date (n = 6), when they felt they were already well informed through MDTs and journal clubs (n = 5), and when CPGs were not locally relevant (n = 2) or too generic (n = 2).

The mean CPG attitude score was 42.6 (95% CI 40.4–44.8), ranging from 23 to 59, with 60 being the most positive score possible. No significant differences in mean scores were found across clinician subgroups (Supplementary File 1); average scores indicated a tendency towards positive attitudes to cancer CPGs. The only clinician characteristics significantly associated with frequency of referring to CPGs were clinician age (p = 0.007) and the number of MDTs attended (p = 0.03), with younger clinicians and those attending more MDTs referring to CPGs more frequently (Table 2). This may indicate that clinicians attending more MDTs (who treat more cancer sites) use CPGs to remain current with the evidence base across multiple cancer sites. Similarly, younger clinicians may engage with CPGs more frequently to support their professional development. However, neither higher attitude scores nor referring to CPGs necessarily results in CPG adherence, as demonstrated by the wide variation in adherence rates across Australia and across cancer streams detected previously [15].

Table 2 Frequency of referring to CPGs by clinician characteristic

Integration with previously published interview findings

Comparable findings from this survey and the previous interview study [17] were integrated and are presented below [35] (Table 3). Results that were discordant or complementary are labelled.

Table 3 Thematic conceptual matrix of integrated qualitative and quantitative findings

[Findings 1AB, 2AB, 3AB, 4AB, 5AB]

Clinicians considered CPGs to be helpful, educational tools that are reassuring frameworks for supporting treatment decisions. They were perceived to reduce clinical variation and improve patient care, while simultaneously being unable to cater for patient complexities. Facilitators included regular CPG updates, and inclusion of a summary of evidence that justifies a recommendation and highlights the level of underpinning evidence. Barriers included a lack of agreement with the CPG interpretation of evidence (discordant) and CPGs being difficult to navigate or too rigid (complementary).

[Findings 6AB, 7AB, 8AB, 9AB]

Patient preference was a barrier to CPG adherence and potential litigation a facilitator. Younger clinicians and those who attended more MDTs referred to CPGs more often. Barriers such as other clinicians’ hubris, equipoise and disciplinary preferences, as well as concern about CPG-recommended treatment side effects, access challenges for rural patients, and concerns that publishing CPGs increased liability, were discordant across the studies.

[Findings 10AB]

Easy access to CPGs was a facilitator, while other clinicians’ limited awareness of CPGs was perceived as a barrier; however, almost all surveyed clinicians reported being familiar with, and having access to, CPGs.

[Findings 11AB, 12AB]

Peer expectations to adhere to CPGs, multidisciplinary engagement and peer review of treatment decisions were facilitators, while lack of clinician time was a barrier. Surveyed clinicians were divided over whether limited access to CPG-recommended drugs was a barrier and whether there was sufficient support and resources to implement CPGs (complementary).

[Findings 14AB, 15AB, 16AB]

Clinical audits, CPGs based on an unbiased synthesis of evidence or expert opinion, and CPGs developed by trusted organisations were facilitators. Adapting or tailoring international CPGs to meet local Australian needs and developing living CPGs, managed by a centralised national group, were proposed improvements.

The majority of these findings are in agreement across the two studies, and many have been previously recognised as barriers and facilitators to cancer CPG adherence in the literature [14]. The discordant findings, however, warrant further exploration. Further research is needed to understand treatment access barriers for patients living rurally (who are less likely to receive CPG-adherent care than those in metropolitan areas [39, 40]) and their impact on adherence [41]. Similarly, limited access to drugs recommended in international CPGs that lack Pharmaceutical Benefits Scheme (PBS) subsidisation, and its impact on adherence, warrants investigation, given that universal PBS insurance is intended to ensure affordable cancer care by subsidising approved drugs [42]. Further understanding of the impact of organisational support on CPG adherence in Australia is also needed, given that organisational support is an established determinant of CPG adherence [21, 43, 44].

Limitations

The purpose of this study was to assess the frequency of previously published qualitative findings by surveying a broader population of oncology specialists [45]. The sample was dominated by clinicians in NSW, limiting the generalisability of findings. Results should be interpreted with caution, as associations may have been over- or under-estimated [46], the full survey was not validated, and participant self-selection [47] may have influenced findings. Response rates were smaller than anticipated, despite using recommended strategies (postal surveys, incentives, follow-up) to increase participation [48], potentially reflecting COVID-19-related clinician busyness or burnout [26]. The extended recruitment period may also have influenced findings, although no data discrepancies were noted across the recruitment waves. The small sample size limited the study’s power to statistically compare characteristics and CPG attitude scores across clinician subgroups. Low response rates are common in clinician surveys [48,49,50], reducing the power to meaningfully compare use of CPGs across respondent subgroups. Comparisons of means (e.g., mean CPG attitude scores) generally require substantially smaller sample sizes; therefore, validated scales should be incorporated into surveys wherever possible to assess differences between subgroups. In eTable 1 (Supplementary file) we provide means and standard deviations of CPG attitude scores to enable estimation of the sample sizes required to detect differences in attitude scores between subgroups.
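
As an illustration of how the eTable 1 summary statistics could be used, the sketch below estimates the per-group sample size needed to detect a difference in mean CPG attitude scores between two subgroups via a two-sample t-test power calculation. The means and pooled standard deviation shown are placeholders, not values from the supplementary file.

```python
# Illustrative sketch: estimate the per-group sample size needed to detect a
# difference in mean CPG attitude scores between two clinician subgroups.
# The means and SD below are placeholders, not the eTable 1 values.
from statsmodels.stats.power import TTestIndPower

mean_a, mean_b = 44.0, 40.0   # hypothetical subgroup mean attitude scores
pooled_sd = 7.5               # hypothetical pooled standard deviation

effect_size = abs(mean_a - mean_b) / pooled_sd  # Cohen's d

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,               # two-sided significance level
    power=0.80,               # 80% power
    alternative="two-sided",
)
print(f"Cohen's d = {effect_size:.2f}; ~{n_per_group:.0f} clinicians per group")
```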

The qualitative data that were quantitised [33] in the interview study [17] were not validated by a second reviewer [33], limiting their reliability and comparability and potentially weakening the integration of the data sets [51]. Given the discordant findings identified, a follow-on confirmatory study with a larger sample size and more nuanced questions is needed to establish a clearer and more in-depth understanding of the determinants of cancer treatment CPG adherence in Australia.

This study has characterised key determinants of cancer treatment CPG adherence in Australia. These findings are intended to inform the development of CPG implementation recommendations and strategies to mitigate barriers and leverage facilitators of adherence.