Background

Trainee research collaboratives (TRCs) have pioneered methods of rapidly delivering high-quality, prospective, cross-sectional ‘snapshots’ of surgical practice and outcomes [1, 2]. TRC studies are led by frontline clinicians and students, without the need for significant additional infrastructural resources or funding. They capture data over short time periods across multiple centres, collating large datasets [3,4,5] that can be used to generate hypotheses for future randomised trials and identify targets for national quality improvement [6,7,8,9,10].

TRC studies are delivered by students and postgraduate trainees who rotate between hospitals at least once every 12 months, which would create discontinuity within local teams if studies were run over protracted periods of time. Consequently, most surgical TRC studies follow patients up to the point of discharge or to postoperative day 30; no published observational studies from TRCs have undertaken outcome assessment beyond 6 months [6,7,8,9,10]. In planning longer-term follow-up, a particular challenge is ensuring safe local storage of patient identifiers so that patients can be followed up at one year even if the original study collaborators at that site have rotated to continue their training at another centre.

Outcomes After Kidney injury in Surgery (OAKS) was the first TRC cohort study to attempt to collect one-year follow-up data. The aim of this study was to evaluate one-year follow-up and data completion rates, and to identify factors associated with improved follow-up rates.

Methods

Student Audit and Research in Surgery (STARSurg)

Student Audit and Research in Surgery (STARSurg) is the UK’s national medical student research collaborative. It is coordinated by a team of medical students and postgraduate trainees. The collaborative model and the educational benefits to participating students have been described previously [11, 12]. STARSurg studies are delivered by ‘mini-teams’ at each centre consisting of consultant surgeons, junior doctors and medical students.

Outcomes After Kidney injury in Surgery

Outcomes After Kidney injury in Surgery (OAKS) [13] is a multicentre study that prospectively identified patients in the UK and Republic of Ireland undergoing major gastrointestinal or liver resection, or reversal of ileostomy or colostomy, between 23 September 2015 and 18 November 2015. In the United Kingdom, the South-East Scotland Research Ethics Service (reference: NR/1506AB4) confirmed that ethical review was not required, as this observational study collected only routine, non-patient-identifiable data. Individual participating UK centres were responsible for registering the study locally as either clinical audit or service evaluation. In the Republic of Ireland, participating centres were responsible for securing research ethics approval locally, as required by institutional regulations. The 30-day outcomes from the OAKS study have been reported previously [14, 15].

Data were collected using the Research Electronic Data Capture (REDCap) system, a secure online platform for web-based data collection developed in 2004 at Vanderbilt University that meets Health Insurance Portability and Accountability Act (HIPAA) compliance standards. Patients’ hospital or NHS identification numbers and their linked study-specific identification numbers were stored in accordance with local Caldicott Guardian approvals, either centrally on the REDCap system or within an encrypted spreadsheet held securely on the local hospital computer network by a member of the data collection team (a local investigator, supervising consultant, or audit officer).
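As a minimal sketch of this linkage approach (written in R, the software used for the study’s analysis), the example below shows how a locally held link sheet might map hospital numbers to study-specific identifiers so that only pseudonymised study IDs are held alongside clinical data; all column names and values are hypothetical and do not reflect the actual OAKS spreadsheets.

```r
# Hypothetical link sheet: stays on the local hospital network (or on REDCap where approved)
link_sheet <- data.frame(
  hospital_number = c("RX1234567", "RX7654321"),     # local patient identifier
  study_id        = c("OAKS-001-01", "OAKS-001-02"), # study-specific identifier
  stringsAsFactors = FALSE
)

# Clinical data held against the study ID only (non-identifiable)
central_data <- data.frame(
  study_id  = c("OAKS-001-01", "OAKS-001-02"),
  aki_30day = c(TRUE, FALSE),
  stringsAsFactors = FALSE
)

# At one-year follow-up, the local team re-identifies patients via the link sheet
follow_up_list <- merge(central_data, link_sheet, by = "study_id")
follow_up_list$hospital_number   # numbers used to retrieve local records
```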

Between November 2016 and May 2017, the STARSurg network collected one-year outcomes for the patients identified in the initial prospective enrolment phase of OAKS. Patients who had died within 30 days of index surgery were excluded from one-year follow-up, as no additional data remained to be collected beyond the 30-day follow-up already completed. At centres that had participated in initial OAKS data collection, new mini-teams were recruited to complete one-year follow-up. The clinical endpoints collected at one year were (1) mortality, (2) myocardial infarction or cerebrovascular accident, (3) total combined hospital length of stay up to one year postoperatively, (4) the most recent available serum creatinine value up to one year, (5) nephrology review, and (6) dialysis. These clinical endpoints were based on a review of the literature on postoperative AKI [16,17,18,19]. In this observational study, clinic follow-up visits and blood tests were arranged by clinical teams according to their normal practice; no additional visits or blood tests were arranged for this study. Follow-up was considered achieved if patients’ records had been successfully reviewed, even if no creatinine tests had been completed by the clinical team during the follow-up period.

Centres were considered to have registered for collection of one-year follow-up if a data collection mini-team was established at the site, institutional approval was granted for collection of follow-up data, and at least one patient was followed up at the site.

Outcome measures

The primary outcome for this report was the follow-up rate for mortality: the proportion of patients for whom the primary clinical endpoint (one-year mortality) was successfully followed up. The secondary outcome was the data completeness rate in centres that registered to collect one-year follow-up, defined as the proportion of patients with complete data for all six clinical endpoints.
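A minimal sketch of how these two rates could be computed is shown below; the data frame and field names are illustrative assumptions, not the actual OAKS data dictionary.

```r
# One row per eligible patient; NA marks an endpoint that could not be collected
followup <- data.frame(
  mortality_1yr   = c(FALSE, TRUE, NA, FALSE),
  mi_cva_1yr      = c(FALSE, FALSE, NA, NA),
  total_los       = c(12, 30, NA, 9),
  creatinine_last = c(85, 110, NA, NA),
  nephrology_1yr  = c(FALSE, TRUE, NA, FALSE),
  dialysis_1yr    = c(FALSE, FALSE, NA, FALSE)
)

# Primary outcome: proportion of eligible patients with the mortality endpoint recorded
mortality_followup_rate <- mean(!is.na(followup$mortality_1yr))   # 0.75 in this toy example

# Secondary outcome: proportion of patients with all six endpoints complete
data_completeness_rate <- mean(complete.cases(followup))          # 0.50 in this toy example
```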

Investigator feedback survey

Following locking of the OAKS database, an electronic survey was disseminated to all investigators who had participated in one-year follow-up (Additional file 1: Table S1). This assessed investigators’ experience of one-year follow-up data collection. Five-point Likert scales (from 1 = very difficult to 5 = very easy) were used to assess investigators’ experience of the following: identifying a supervising consultant; registering the audit; linking patient hospital identifiers to the study-specific identifiers; and collecting data using local hospital computer systems or paper records. For analysis, scores of 4 or 5 were categorised as “Positive” and scores of 1 to 3 as “Negative”, creating a dichotomous variable.
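As a minimal sketch, this dichotomisation could be expressed as follows; the variable name and scores are hypothetical.

```r
# Hypothetical Likert responses (1 = very difficult ... 5 = very easy)
likert_score <- c(5, 3, 4, 1, 2)

# Scores of 4-5 classed as "Positive", 1-3 as "Negative"
experience <- ifelse(likert_score >= 4, "Positive", "Negative")
table(experience)   # Negative: 3, Positive: 2
```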

Statistical analysis

The baseline characteristics of patients lost to follow-up were compared with those of patients who were successfully followed up. Continuous variables were expressed as mean with standard deviation, or median with interquartile range, and analysed using the t-test or Mann-Whitney U test, as appropriate. Categorical variables were expressed as percentages and analysed using the chi-squared test, or Fisher’s exact test if expected cell counts were less than five. For all analyses, a p-value of < 0.05 was considered statistically significant. Data analysis was undertaken using R Foundation Statistical Software (R 3.2.1, R Foundation for Statistical Computing, Vienna, Austria).
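The sketch below illustrates this analysis logic in R; the simulated data, variable names, and the use of a normality assumption to choose between tests are assumptions for illustration only, not the authors’ actual analysis code.

```r
# Illustrative comparison of patients with vs without one-year follow-up
set.seed(1)
dat <- data.frame(
  followed_up  = rep(c(TRUE, FALSE), each = 100),
  age          = c(rnorm(100, 65, 10), rnorm(100, 64, 10)),
  open_surgery = sample(c("Open", "Laparoscopic"), 200, replace = TRUE)
)

# Continuous variable: t-test if approximately normal, otherwise Mann-Whitney (Wilcoxon rank-sum)
t.test(age ~ followed_up, data = dat)
wilcox.test(age ~ followed_up, data = dat)

# Categorical variable: chi-squared test, or Fisher's exact test if any expected count is below five
tab <- table(dat$open_surgery, dat$followed_up)
if (any(chisq.test(tab)$expected < 5)) fisher.test(tab) else chisq.test(tab)
```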

Results

Centre registration

Of 173 centres that had collected baseline data in the initial phase of OAKS, 126 registered to participate in one-year follow-up. Of the 47 centres that did not register, 35 were unable to obtain patient ID link sheets, and 12 were not granted audit and/or Caldicott Guardian approval prior to the data collection deadline (Fig. 1). Centres in Scotland, Ireland, and England achieved similar registration rates for one-year follow-up (88.9%, 78.6%, and 72.5% respectively, Table 2), whereas significantly fewer centres in Wales registered (40.0%). Centres at which a junior doctor was engaged in the process were more likely to register to enter data (80.6% vs 46.2%, p < 0.001). Centres that had stored patient hospital identifiers on the central REDCap system during the initial data collection phase had a significantly higher participation rate in one-year follow-up (83.6% vs 67.0%, p = 0.019). However, prior centre participation in STARSurg projects preceding OAKS did not affect the likelihood of registering to collect one-year follow-up (74.8% vs 61.5%, p = 0.160).

Fig. 1

Flowchart of 1-year follow-up in the OAKS study

Follow-up rates

The initial data collection phase of OAKS captured 5745 patients, of whom 5585 remained alive at 30 days following their index procedure and were eligible for one-year follow-up (Fig. 1). Overall, 62.3% (3482/5585) of patients were followed up at one year. Of the 2103 patients lost to follow-up, 65.2% (1372) were from the 47 centres that did not register to participate in one-year follow-up. In registered centres, the follow-up rate was 82.6% (3482/4213).

Characteristics of patients followed-up at one-year

There were no significant differences in age, American Society of Anaesthesiologists (ASA) grade, Revised Cardiac Risk Index (RCRI), urgency of surgery, or contamination between patients with and without follow-up at one year (Table 1). However, patients who were followed up had a significantly higher rate of open surgery than patients who were not (61.5% vs 52.0%, p < 0.001). There was no significant difference in rates of AKI (12.7% vs 10.7%, p = 0.060) between patients with and without one-year follow-up.

Table 1 Characteristics of patients with one-year follow-up completed

Data completeness

Centre characteristics associated with ≥95% data completeness are presented in Table 2. In centres registered to collect one-year follow-up outcomes, overall data completeness was 83.1%. Of the 126 centres that participated, 57.9% (n = 73) achieved ≥95% data completeness. Scotland had a significantly higher proportion of centres with ≥95% data completeness (100.0%) than England, Ireland, and Wales (55.8%, 36.4%, and 0% respectively, p < 0.001). The more patients a centre had to follow up, the less likely it was to achieve ≥95% data completeness (< 15 patients: 77.4%; 15–29: 59.0%; 30–59: 51.4%; > 60: 36.8%). Centres storing patient identifiers on the central REDCap system had significantly higher rates of ≥95% data completeness than those storing identifiers locally (72.5% vs 48.0%, p < 0.001).

Table 2 OAKS centre characteristics, centre activity and data completeness at one-year postoperatively

Investigator feedback survey

Survey responses were received from 285 students and junior doctors, a 78% (285/365) response rate. At least one response was received from 86% (148/173) of centres that participated in initial data collection in 2015. Of the centres that returned the survey, 59 (40.0%) had 100% data completion for one-year follow-up, 72 (48.6%) had ≥95% data completion, and 23 (15.5%) did not register to submit one-year follow-up data. Table 3 summarises respondent characteristics and experience of the one-year follow-up phase (OAKS-2) by data completeness. Only collaborators reporting a positive experience of linking patient identifiers were more likely to achieve ≥95% data completeness (71.6% vs 37.6%, p < 0.001); there was no association between ≥95% data completeness and perceived difficulty with registering the audit, collecting data, or identifying a supervising consultant. Following this study, a summary of recommendations for future multicentre collaborative studies with longitudinal follow-up was developed and is presented in Table 4.

Table 3 OAKS collaborator survey responses, centre activity and data completeness at one-year postoperatively
Table 4 Summary of recommendations for future multi-centre collaborative studies with longitudinal follow-up

Discussion

OAKS was the first prospective TRC cohort study to attempt longitudinal one-year follow-up. This report demonstrates that most centres were able to collect one-year follow-up data with high levels of data completeness. Although the overall follow-up rate was only 62%, there was no evidence of systematic bias in the patients who were followed up. Factors associated with an increased likelihood of achieving ≥95% data completeness were a lower number of patients to be followed up and central storage of patient hospital identifiers. As TRCs have now been established across Europe [20, 21], validating this methodology will have broad international benefits.

Most studies that complete longitudinal follow-up of prospectively identified patients [22] require patient consent, ethical approval, and significant research infrastructure and funding. Even in well-resourced, funded trials, loss to follow-up of up to 15% is expected and built into sample size calculations [23]. In the UK, National Research Ethics Service regulations permitted collection of one-year outcomes to be completed as clinical audit, without the need for research ethics approval. Without ethical approval, it was not possible to collect identifiable data centrally. This report demonstrates that satisfactory follow-up is achievable within this regulatory framework and without dedicated funding.

The most commonly reported barrier to achieving one-year follow-up was an inability to identify linked patient records. Linkage between hospital identifiers and study-specific identifiers was maintained either by holding hospital identifiers directly on the REDCap system, or by audit offices or consultants holding a cross-reference of hospital and study-specific identifiers on hospital computer systems. Collaborating investigators found it easier to complete data collection when approved hospital identifiers were stored on the REDCap system. In Scotland, where national approval was gained for Community Health Index (CHI) identifiers to be stored on REDCap, data completion rates were higher. Future studies should therefore seek local or national Caldicott Guardian approval to store approved hospital identifiers on REDCap.

Loss to follow-up presents a major risk to the internal validity of a study, as it leaves a subgroup of patients whose outcomes remain unassessed and who may differ systematically from those who were followed up. In the OAKS study, there were no significant differences in patient-level demographics, operative indications, or ASA grades between the group that underwent one-year follow-up and the group that did not. The AKI and mortality rates at 30 days postoperatively were also not significantly different between the groups with and without one-year follow-up data.

A significant limitation of the follow-up method in this study was its restriction to the hospital where the index surgery was performed. A small number of patients may have moved their care to another centre, and some may have been readmitted to a different hospital. Consequently, when patients were followed up at the index hospital, there may have been no record of readmissions, treatments, or blood tests that took place elsewhere. In addition, since no specific clinic visits were arranged for this study, if clinical teams did not arrange postoperative clinic visits, or patients did not attend arranged visits, the hospital records reviewed as source data may not have fully captured patients' postoperative course.

The evaluation of this study's methodology was limited by the broad categories of barriers explored in the investigator survey. A qualitative approach, with detailed interviews of investigators, may have been more likely to identify the specific difficulties that prevented follow-up from being completed. Incorporating such a qualitative component into future studies may improve follow-up by identifying further solutions [24].

Conclusion

The OAKS study has demonstrated that prospective TRC cohort studies can successfully complete one-year longitudinal follow-up with acceptable data completeness rates. Future studies may maximise follow-up rates by optimising procedures for the storage of patient identifiers, embedding collaborators with previous experience of TRC studies within data collection teams, and tracking regional variation in performance throughout the study.