Background

The POUT multicentre randomised controlled trial (a phase III randomised trial of Peri-Operative chemotherapy versus sUrveillance in upper Tract urothelial cancer, CRUK/11/027) investigates the role of adjuvant systemic chemotherapy in patients with upper tract urothelial carcinoma (UTUC) [1]. UTUC, which affects the ureter and renal pelvis, is a rare disease [2], with an approximate annual incidence of 1–2 cases per 100,000 people [3]. Standard treatment comprises surgery to remove the affected kidney and ureter (nephroureterectomy), followed by surveillance for recurrence. Recurrence is common, with disease returning within 5 years of the initial surgery in 30–50% of patients with localised disease [4]. Few data are available regarding optimal adjuvant treatment strategies following surgery, and the value of chemotherapy remains uncertain, with no international consensus [5].

The POUT trial aims to establish whether UTUC is sensitive to platinum-based chemotherapy. The primary outcome measure is disease-free survival (time from randomisation to cancer recurrence or death), with secondary outcome measures including safety and quality of life. Following surgery, participants were randomly allocated to either adjuvant chemotherapy or surveillance, with treatment according to local practice if recurrence occurred (Fig. 1). Participants were identified by their urologist or oncologist, recruited from secondary care hospitals across the United Kingdom (UK), and could be approached about participation either before or after surgery.

Fig. 1 POUT trial schema

It is well established that recruitment to clinical trials can be challenging, with surveys of clinical trial coordinating centres (CTCs) in the UK identifying the improvement of recruitment rates as a top priority [6, 7]. It was anticipated that recruitment might be particularly difficult in the POUT trial because of the stark difference between the two treatment strategies being investigated [8, 9]; the investigators thought potential participants might be disinclined to join a trial in which they could be allocated to a group which did not receive immediate treatment, as has been observed in other studies [10, 11]. To raise awareness of the trial at an early stage in the patient pathway, a brief pre-surgery patient information sheet was prepared to introduce the trial to patients prior to nephroureterectomy. This was in addition to the main patient information sheet, given to all those approached following surgery as part of the written informed consent process. A qualitative recruitment study was initiated to investigate recruitment activity at sites in depth [12]. Sites were also provided with a screening log, with the intention of tracking potential participants and helping the CTC to centrally monitor recruitment activity at sites.

Screening logs were implemented on the basis of the investigators’ prior experience, including within the SPARE trial (a feasibility study comparing treatment modalities in muscle-invasive bladder cancer) [13], as a tool to centrally monitor and support recruitment activity at sites. Screening log data collected in the SPARE trial indicated that recruitment to a large-scale phase III trial was not feasible owing to a lack of eligible patients, and the trial therefore closed early [8]. The use of screening logs is recommended by the UK Medical Research Council’s Hubs for Trials Methodology Research Recruitment Working Group in their advice on optimising recruitment [14]. They suggest that screening logs raise awareness of the trial, ensure that all potential recruits are reviewed, and enable central review of eligibility criteria.

Previous studies reporting use of screening logs have used the information gathered to justify expansion to additional sites and revision of eligibility criteria, and to monitor recruitment rates [15,16,17]. One study found that sites submitting fewer than 50% of expected logs for two stroke trials achieved half the monthly recruitment rate of those submitting over 50% [18]. This was supported by similar findings in another study which demonstrated that sites with the highest recruitment screened the most patients per month and recruited a higher proportion of patients than lower-recruiting sites [19].

It has, however, been noted that there is no standard definition of a ‘screened patient’ [19]; data can therefore be difficult to generalise. This represents a challenge when trying to compare data between studies and raises questions about the validity of including screening data in randomised controlled trial publications, as recommended in the latest version of the CONSORT Statement [20]. The recent SEAR (Screened, Eligible, Approached, Randomised) publication [21] proposes a standardised framework for collecting screening data, similar to that used in the POUT trial; if adopted, this framework may facilitate generalisability between trials in the future.

The aim of this exploratory retrospective analysis is to investigate the utility of screening logs within the POUT trial, assess the impact of one of the changes to the trial documentation made midway through the screening period, investigate correlations between reported screening activity and actual recruitment, and identify potential topics for future prospective studies.

Methods

Research teams at participating NHS hospitals were provided with a screening log (Fig. 2) based on the CTC’s standard template, and screening criteria were defined in the trial protocol. Sites conducting surgery were asked to record all patients receiving nephroureterectomy for suspected UTUC. Sites to which patients were referred following surgery were asked to record all patients fulfilling the core eligibility criteria, i.e. diagnosis of locally advanced, non-metastatic UTUC. Data requested included basic eligibility information and details of the recruitment process. The aim was to capture information about all patients fulfilling the screening criteria, including date of surgery, histological details, the dates on which both patient information sheets were provided, and recruitment status (ineligible, not approached, declined, randomised, pending or awaiting patient decision). If patients were happy to disclose the reason they declined participation, this was also recorded. Each patient was allocated a sequential screening number, and the status was updated by the site as required until a final outcome was reached (ineligible, not approached, declined, randomised). No patient identifiers were collected centrally; therefore, patient consent was not required.

Fig. 2 Screening log template provided to participating sites
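For illustration, the structure of a single log entry might be sketched as follows. This is a minimal sketch in Python; the field and status names are assumptions inferred from the description above, not the actual column headings of the Excel template.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class Status(Enum):
    """Interim and final statuses described in the Methods; names are illustrative."""
    INELIGIBLE = "ineligible"
    NOT_APPROACHED = "not approached"
    DECLINED = "declined"
    RANDOMISED = "randomised"
    PENDING = "pending"
    AWAITING_DECISION = "awaiting patient decision"


@dataclass
class ScreeningRecord:
    """One row of the screening log; no patient identifiers are held centrally."""
    screening_number: int                        # sequential, allocated by the site
    date_of_surgery: Optional[date] = None
    histology: Optional[str] = None
    date_pre_surgery_pis: Optional[date] = None  # brief pre-surgery information sheet
    date_main_pis: Optional[date] = None         # main patient information sheet
    status: Status = Status.PENDING              # updated until a final outcome is reached
    reason_declined: Optional[str] = None        # recorded only if the patient discloses it
```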

The log was provided to centres as a Microsoft Excel spreadsheet, and it was recommended that it be used as a local tool to track potentially eligible patients from before surgery to the final outcome. Sites were asked to submit the logs to the CTC monthly. The CTC also sent regular reminder emails requesting logs from all sites which had been open for at least 1 month.

Upon receipt, logs were reviewed for any discrepancies in eligibility criteria. If any patients appeared to have been incorrectly deemed ineligible, this was raised with the site in real time, with the aim of ensuring that no eligible patients were overlooked. Sites were also regularly reminded to report all recruited patients on the logs if any had been omitted.

Each log received from sites was entered into a central MS Excel spreadsheet at the CTC, and data were cleaned to remove any patients reported in error, for example, patients who had not had surgery or those who did not have UTUC. Sites were notified of incorrectly reported patients for training purposes.

Data were summarised by the CTC and reviewed throughout the trial with the aim of identifying any issues with recruitment. Patients reported as recruited, declined or not approached were classified as eligible. The approach rate was calculated as the proportion of eligible patients who were approached (recruited + declined), and the acceptance rate as the proportion of approached patients who were recruited. Free-text responses were categorised according to the standardised trial operating procedures.
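As a worked example, the two rate definitions can be expressed as follows. This is a minimal sketch in Python with illustrative function names; the counts are those reported in the Results section (137 recruited, 207 declined, and 36 eligible patients not approached, derived as 380 − 344).

```python
# A sketch of the rate definitions above, checked against the aggregate
# counts reported in the Results section: 380 eligible patients, of whom
# 137 were recruited, 207 declined and 36 (380 - 344) were not approached.

def approach_rate(recruited: int, declined: int, not_approached: int) -> float:
    """Proportion of eligible patients (recruited + declined + not approached)
    who were approached (recruited + declined)."""
    eligible = recruited + declined + not_approached
    return (recruited + declined) / eligible

def acceptance_rate(recruited: int, declined: int) -> float:
    """Proportion of approached patients who were recruited."""
    return recruited / (recruited + declined)

print(f"approach rate:   {approach_rate(137, 207, 36):.0%}")  # -> 91%
print(f"acceptance rate: {acceptance_rate(137, 207):.0%}")    # -> 40%
```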

Aggregate data were reviewed by the Trial Management Group (TMG), which provided ongoing day-to-day oversight of the trial and comprised the chief investigator and key co-investigators from a cross-section of participating sites. TMG review took place every 6 months, and screening data were used to inform revisions to the trial documentation as deemed appropriate and to estimate the impact of any such changes. Screening data were also regularly reviewed by the qualitative recruitment researcher.

In this retrospective exploratory analysis conducted by the CTC, groups were compared using the Mann-Whitney U test and, for proportions, Fisher’s exact test. Correlations were assessed using Pearson’s correlation coefficient (r). Data were analysed using Microsoft Excel and Stata v15.
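For illustration, the following minimal sketch shows equivalent tests in Python using SciPy; the trial itself used Microsoft Excel and Stata v15. The sample arrays are placeholders, and the 2 × 2 table uses counts derived from the Results section.

```python
# A sketch of the named tests using SciPy rather than Stata/Excel. The arrays
# are placeholder values; the 2 x 2 table uses counts derived from the Results
# (pre-surgery PIS: 24 joined of 71 approached; no pre-surgery PIS: 113 of 273).
import numpy as np
from scipy.stats import mannwhitneyu, fisher_exact, pearsonr

# Mann-Whitney U test: e.g. monthly patients/centre in months with vs without
# a screening log request.
with_request = np.array([0.10, 0.14, 0.21, 0.08, 0.17])   # placeholders
without_request = np.array([0.04, 0.06, 0.09, 0.05])      # placeholders
_, p_mw = mannwhitneyu(with_request, without_request)

# Fisher's exact test on a 2 x 2 table of joined vs declined.
table = [[24, 47],     # received pre-surgery sheet: joined, declined
         [113, 160]]   # did not receive it:         joined, declined
_, p_fisher = fisher_exact(table)

# Pearson's r: e.g. mean monthly screened vs recruited per site.
screened = np.array([0.5, 0.2, 0.8, 0.3, 0.6])            # placeholders
recruited = np.array([0.05, 0.01, 0.09, 0.02, 0.06])      # placeholders
r, _ = pearsonr(screened, recruited)

print(f"Mann-Whitney p = {p_mw:.3f}; Fisher p = {p_fisher:.3f}; r = {r:.2f}")
```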

Results

The POUT trial opened to recruitment in May 2012. Detailed screening data were collected to August 2016. Seventy-one recruiting sites were open, representing a total of 2768 centre-months of recruitment activity, during which 1138 patients were reported on screening logs and 191 participants joined the trial (Fig. 3).

Fig. 3 Screening data summary

Emails requesting logs were sent in 43 of the 50 months between June 2012 and August 2016, a total of 2300 requests out of a possible 2762. The return rate following requests was 56% (1293 logs returned in response to 2300 requests); a total of 18 logs were submitted in the months when no request was sent (18/462 expected, a 4% return rate). Overall, 1311/2762 expected monthly logs were received (47%). Seven centres (open for 71 centre-months in total) did not return any logs.

Of the 191 patients actually recruited by August 2016, 54 (28%), recruited at 30 sites, were not reported on logs. Despite the CTC’s ongoing process of review and data cleaning in liaison with individual sites, including reminders to add all recruited patients to their logs, under-reporting of participants occurred throughout the trial, with rates increasing towards the end of the screening period (Table 1).

Table 1 Under-reporting of recruited participants

Of the eight sites reporting no patients fitting the screening criteria throughout the screening period, three had in fact recruited one participant each. Six of the 25 sites which reported some screening data but no recruited patients had actually recruited participants (seven patients in total).

Of the 1138 patients reported on the logs, 380 were categorised as eligible and 344 had been approached about participation (91% approach rate); 137 were reported as joining the trial (a 40% acceptance rate) and 207 declined participation.

The median monthly number of approached patients reported per centre was greater when a screening log request had been sent at the end of the month than when no request had been sent (median, 0.138 vs 0.062 patients/centre; Mann-Whitney p = 0.04) (Table 2). However, sending a request did not increase the median number of patients actually recruited the month after the request was sent, when we would expect such patients to join the trial according to protocol timelines (median, 0.062 vs 0.070 patients/centre; Mann-Whitney p = 0.39).

Table 2 Screening and recruitment numbers (all sites combined) by whether or not a screening log request was made

A moderate positive correlation (r = 0.47) was observed between the mean monthly number of patients reported as screened and the mean number of participants each site actually recruited per month (Fig. 4). Sites recruiting more than the overall median of 0.04 patients per screening month reported a median of 0.38 screened patients per month, vs 0.20 for lower-recruiting sites (Mann-Whitney p = 0.015). Higher-recruiting sites also reported a higher acceptance rate (p < 0.0001) (Table 3).

Fig. 4 Overall recruitment rate per site vs patients reported as screened

Table 3 Acceptance rates and screening activity by recruitment activity

Sites were also categorised by whether they returned more than the median proportion (47%) of expected logs to the CTC (high-returning sites) or not (low-returning sites). Whilst the numbers reported as screened and eligible per month were greater for sites with higher numbers of screening log returns (p = 0.001 and p = 0.04, respectively), actual monthly recruitment was similar (p = 0.66) (Table 4).

Table 4 Acceptance rates, screening and recruitment activity by compliance with returning screening logs

There was considerable variability in screening activity per site, with no clear relationship between screening or recruitment activity and overall acceptance rate (Fig. 5).

Fig. 5 Screening data reported and actual recruitment per site by overall acceptance rate

The pre-surgery short patient information sheet did not appear to have any effect on acceptance rates: of the 71 eligible patients who received it, 24 joined the trial (34%), whilst the acceptance rate amongst the 273 patients who did not receive a pre-surgery sheet was 41% (113/273) (p = 0.277). The principal reason reported for declining was not wanting to receive chemotherapy (99/207, 48%). Very few patients (4/207, 2%) declined due to a preference for chemotherapy. In light of this observation, seen from the outset of the trial and supported by findings from the parallel qualitative study [22], the patient information sheet was reviewed and revised in 2014, with the aim of ensuring the information regarding the potential benefits and drawbacks of both surveillance and chemotherapy was outlined more clearly. On retrospective review of the reported screening data, the revisions to the patient information sheet made little discernible difference to the proportion of decliners who preferred not to have chemotherapy (48% with both versions); however, the overall acceptance rate did increase marginally following implementation of the revised sheet (from 38% to 42%) (Table 5).

Table 5 Impact of revisions to patient information sheet (PIS)

Overall trends in recruitment rates remained relatively stable over time (Figs. 6 and 7).

Fig. 6 Screening patterns over time

Fig. 7 Decliners due to not wanting chemotherapy over time

Discussion

Screening logs were a useful tool within the POUT trial: they demonstrated that the most common reason for patients declining participation was a preference not to undergo further treatment following surgery, contrary to the investigators’ initial expectation that patients would be keen to receive chemotherapy [22]. Had screening logs not been used, and in the absence of the qualitative recruitment study, the investigators would have fundamentally misunderstood why the majority of patients declined. In addition, the investigators’ expectation that providing information earlier in the patient pathway would improve trial acceptance was not confirmed: there was no indication that patients approached prior to surgery demonstrated higher acceptance rates than those approached afterwards.

The proportion of eligible patients declining to participate was within the wide range of decline rates reported by other studies (15 to 80%) [15, 17, 23,24,25,26]. Whilst a preference against chemotherapy was not anticipated by the POUT investigators, declining due to treatment preference is consistent with the findings of a systematic review of participation in oncology trials [27]. Previous studies reporting screening and recruitment data do not provide details of the reasons for declining, instead presenting aggregated data or none at all [15, 18, 23, 24, 28]. Approach rates were high throughout the trial; however, centres may have been disinclined or unable to report patients who were missed, so the number not approached may have been under-reported.

We have demonstrated that screening logs were helpful in identifying a major reason for patients declining participation in the POUT trial; however, the attempt to redress this by amending the patient information sheet did not appear to have a major impact. Overall acceptance rates as reported on screening logs showed little variation throughout the trial. It is possible that prospective use of screening data to assess the impact of changes to essential documents or other recruitment interventions may have been more effective than the retrospective review conducted here [24].

The use of screening logs allowed near real-time central oversight of recruitment activity at those sites which complied with screening data reporting, enabling the CTC to identify and feed back any misinterpretations of eligibility criteria throughout the trial. Unfortunately, such interventions were not systematically recorded by the CTC; the impact of this feedback therefore cannot be reported.

Screening data can also be used to inform revisions to eligibility criteria if patients in the target population are inadvertently excluded; however, no such changes were made within the POUT trial, despite the large proportion of ineligible patients reported. We suggest that any substantial alteration of criteria be approached with caution, as it could make the interpretation of results challenging or invalidate them entirely.

Whilst months in which the CTC did not send a request for logs saw fewer patients reported, this did not appear to affect the number of patients recruited in the subsequent month, suggesting that keeping and submitting screening logs may not in itself be sufficient to improve or support recruitment at sites.

Despite consistent requests for screening logs and querying of submitted data, under-reporting continued to be observed. Although the CTC could confirm this only through the discrepancy between the number of patients reported as recruited and those actually randomised, it is likely that under-reporting occurred across all categories. Whilst CONSORT recommends inclusion of screening data in clinical trial reports [20], our experience suggests that collecting robust data from all participating sites is challenging and time-consuming, and we did not fully succeed despite our best efforts. Hence, data provided to adhere to CONSORT recommendations may not be fully representative of all screening activity. The potential for bias should also be considered: it is feasible that sites which routinely return screening data are unrepresentative of those which do not maintain or submit screening logs. Whilst we observed a correlation between the number of patients reported on screening and those recruited, as reported by previous investigators [19], this is likely to be confounded by the size of the patient population available for screening (and thus recruitment) at each site, and so may not be an appropriate metric by which to measure trial activity and engagement. No correlation was observed between the proportion of expected logs submitted to the CTC by individual sites and their recruitment activity, which is inconsistent with previous findings [18].

The resource required to obtain robust screening data should not be underestimated: in our study, over 900 emails requesting logs received no response. Under-reporting of recruited patients persisted throughout the screening period, with the lower rate at the beginning reflecting protracted data-cleaning efforts by the CTC and sites.

Maintenance and analysis of screening logs are time-consuming, and whilst they are intended to be a beneficial tool for sites, it may not be appropriate to implement them for all trials. Their utility is debated within the literature [18, 19], with little discussion of data quality and completeness in cases where they have been used [15, 17, 24]. If a trial has a large eligible patient population and recruitment is on target, the necessary workload may not be justified, either centrally or at site. In some studies which have reported screening data, the acceptance rate has been extremely low; for example, a prostate cancer trial reported 13,022 eligible men on screening, yet only 731 were randomised [23]. This pattern has been observed in other studies [18, 29]. There is a risk that the workload associated with maintaining screening logs may disincentivise clinicians from trial participation [27, 30], especially if large numbers of ineligible patients have to be reported, so screening criteria should be carefully considered if logs are used.

In the POUT trial, the anticipated eligible patient population was small, so the workload for centres was not expected to be onerous. The main time requirement was at the CTC, principally as a result of the relatively large number of centres and the iterative process required to obtain clean data. On reflection, given that the approach and acceptance rates and the reasons for declining remained relatively stable throughout data collection, logs could potentially have been collected for a shorter period.

The inconsistency in definitions and reporting of screening data across studies makes it challenging to generalise findings across trials. If the SEAR framework, which proposes a data collection format similar to that used here, is adopted, this should help standardise the reporting of screening data [21]. It is possible that the utility of screening data varies between disease areas; standardised reporting of such data would help elucidate this. Across all settings, the cost-benefit ratio of collecting screening information should be considered, as it remains unclear whether routine use of logs substantially contributes to trial oversight or improves recruitment rates. We intend to investigate this further by embedding a study within future randomised controlled trials to prospectively investigate the utility of screening logs, the associated resource requirements and any impact they have on trial oversight and recruitment rates.

Conclusions

Screening logs provided insight into reasons for non-participation within the POUT trial. The data reported remained consistent throughout the trial’s duration, and no evidence was found that central collection of screening logs improved recruitment rates. The use of screening logs may not be appropriate for all trials, or for the full duration of any given trial, and the resource requirements, both centrally and at site, should be carefully considered prior to implementation. Despite their relatively widespread use, there is a lack of evidence on the utility of screening logs in supporting or improving recruitment, and this warrants further investigation within prospective studies.