Background

Clinical trials provide evidence about the safety and effectiveness of interventions (Table 1). They underpin health policy and regulation, and they inform patient and provider healthcare decision-making. Because many trials are not published [1,2,3,4,5,6], and because many published reports do not include all of the information needed to understand trial methods [7,8,9,10] and results [11,12,13,14,15,16,17], decisions based on published evidence alone may not lead to the best balance of benefits and harms for patients [18,19,20,21].

Table 1 Glossary of terms

To help participants enroll in trials, improve access to information, and reduce bias, authors have long proposed registering all trials prospectively [22,23,24,25,26,27]. The Food and Drug Administration Modernization Act of 1997 led to the creation of ClinicalTrials.gov, a publicly accessible database maintained by the National Library of Medicine (NLM), which launched in 2000 [28]. In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that trials initiated from 2005 would have to be registered to be considered for publication [29, 30]. Title VIII of the Food and Drug Administration Amendments Act of 2007 (FDAAA) [31] required that certain trials of drugs, biologics, and medical devices be registered and that results for trials of approved products be posted on ClinicalTrials.gov. The FDAAA also authorized the Food and Drug Administration (FDA) to issue fines for non-compliance, currently up to $11,569 per trial per day [32]. “The Final Rule,” which took effect on January 18, 2017, clarified and expanded requirements for registration and reporting (Box 1) [33, 34]; organizations were expected to be in compliance by April 18, 2017. In a complementary policy, the National Institutes of Health (NIH) issued broader requirements that apply to all trials funded by the NIH, including early trials and trials of behavioral interventions [35, 36].

There is little evidence about how academic organizations support trial registration and reporting, but some evidence suggests that they are unprepared to meet these requirements. For example, academic organizations have performed worse than industry in registering trials prospectively [37, 38] and reporting results [39,40,41,42,43,44,45,46].

Methods

Between November 2016 and March 2017, we conducted an online survey of academic organizations in the USA. We surveyed administrators who are responsible for maintaining organization accounts on ClinicalTrials.gov. For each eligible ClinicalTrials.gov account, we asked one administrator to describe the policies, procedures, and resources available to support trial registration and reporting at their organization (Box 2).

Identifying eligible PRS accounts

The online system used to enter information in the ClinicalTrials.gov database is called the Protocol Registration and Results System (PRS). Each study registered on ClinicalTrials.gov is associated with a “record” of that study, and each record is assigned to one PRS organization account. A record may or may not include study results. A single organization, such as a university or health system, might register trials using one or many accounts. For example, “Yale University” is a single account, whereas “Harvard Medical School” and “Harvard School of Dental Medicine” are separate accounts.

We used the PRS account as the unit of analysis because accounts related to the same organization often represent schools or departments that have separate policies and procedures related to trial registration and reporting. Furthermore, we are not aware of a reliable method to associate individual accounts with organizations. For example, the “Johns Hopkins University” account mostly includes records from the Johns Hopkins University School of Medicine. Investigators at Johns Hopkins University also register trials using the accounts “Johns Hopkins Bloomberg School of Public Health,” “Johns Hopkins Children’s Hospital,” and “Sidney Kimmel Comprehensive Cancer Center.” Schools and hospitals related to Johns Hopkins University have distinct policies, faculties, administrative staff, and institutional review boards (IRBs).

We included all “active” accounts categorized by ClinicalTrials.gov as a “University/Organization” in the USA. We received a spreadsheet from the NLM with the number of records in each eligible account on August 4, 2016, and we received PRS administrator contact information from the NLM on September 28, 2016 and December 12, 2016.

Survey design

We developed a draft survey based on investigators’ content knowledge and evidence from studies that were known to us at the time. We organized questions into three domains: (1) organization characteristics, (2) registration and results policies and practices, and (3) staff and resources. We also invited participants to describe any compliance efforts that our questions did not cover. We then piloted the survey among 14 members of the National Clinical Trials Registration and Results Reporting Taskforce. The final survey used skip logic so that participants saw only those questions that were relevant based on their previous answers. Responses were saved automatically, and participants could return to the survey at any time; this allowed participants to discuss their answers with organizational colleagues before submitting. We conducted the survey using Qualtrics software (www.qualtrics.com/); a copy is available as a Word document (Additional file 1) and on the Qualtrics website (http://bit.ly/2tCSqyl).

Participant recruitment

One or more persons, called “PRS administrators” by ClinicalTrials.gov, may add or modify records in each account. Some PRS administrators are employed specifically to work on ClinicalTrials.gov, but many PRS administrators have little or no time budgeted by their organizations to work on ClinicalTrials.gov.

For each eligible account, we created a unique internet address (URL), which we emailed in an invitation letter to one administrator. For accounts with more than one administrator, we first contacted all administrators and asked them to nominate one administrator to complete the survey; we then sent the survey to the nominated administrator. If an administrator did not complete the survey, EMW sent at least two reminders from his university email account, after approximately 2 weeks and 4 weeks. For accounts with multiple administrators, we emailed all administrators if the nominated administrator did not respond after two reminders. We instructed administrators associated with multiple accounts to complete a separate survey for each account.

Participants indicated their consent by continuing past the opening page and by completing the survey.

Analyses

To analyze the results, we first excluded accounts that did not complete three required questions about whether they had: (1) a registration policy, (2) a results reporting policy, and (3) computer software to manage their records. We then calculated descriptive statistics using SPSS 24. For categorical data, we calculated the number and proportion of accounts that selected each response. For continuous data, we calculated the median and interquartile range (IQR) or the mean and standard deviation, depending on the distribution of responses.
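
To make these calculations concrete, the following is a minimal sketch in Python (pandas) of the descriptive summaries described above. The original analyses were performed in SPSS 24; the column names and toy data here are hypothetical.

```python
# Minimal sketch of the descriptive analyses described above; the original
# analyses were performed in SPSS 24, and these column names are hypothetical.
import pandas as pd

# One row per PRS account (toy data for illustration).
accounts = pd.DataFrame({
    "has_registration_policy": [True, False, True, True, False],
    "n_records": [7, 3, 150, 36, 12],
})

# Categorical data: number and proportion of accounts per response.
print(accounts["has_registration_policy"].value_counts())
print(accounts["has_registration_policy"].value_counts(normalize=True))

# Continuous data: median and interquartile range (IQR).
median = accounts["n_records"].median()
q1, q3 = accounts["n_records"].quantile([0.25, 0.75])
print(f"median = {median}, IQR = {q1}-{q3}")
```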

We conducted subgroup analyses to determine whether organization characteristics might be related to policies and resources. We compared:

1. Accounts affiliated with a Clinical and Translational Science Award (CTSA) versus other accounts
2. Accounts affiliated with a cancer center versus other accounts
3. Accounts with < 20 records, 20–99 records, and ≥ 100 records

We conducted a sensitivity analysis to determine whether the results might be sensitive to non-response bias by comparing accounts that responded before the effective date for The Final Rule (January 18, 2017) with accounts that responded on or after The Final Rule took effect.

Results

Characteristics of eligible accounts

We identified 783 eligible accounts (Additional file 2), which included 47,701 records by August 2016. The median number of records per account was 7 (IQR = 3–36; mean = 61; standard deviation (SD) = 155), and accounts held between 1 record (two accounts) and 1563 records. A minority of accounts were responsible for most records: 113/783 (14%) accounts had ≥ 100 records by August 2016, and these accounts were responsible for 38,311/47,701 (80%) records.

The median number of administrators per account was 1 (IQR = 1–3), and one organization had 182 registered administrators.

Survey participation

Of 783 eligible accounts, we found no contact details for 16 (2%) and attempted to contact 767 (98%). In four cases (< 1%), we were unable to identify a usable email address. Of eligible accounts, 10/783 (1%) emailed us to decline, 306/783 (39%) did not participate in the survey, and 81/783 (10%) did not provide sufficient information to be included in the analysis (Fig. 1). Two accounts reported that they had multiple policies related to the same account; we asked them to complete questions about their account characteristics but not to complete questions about their specific policies and resources.

Fig. 1 Flowchart: PRS accounts included in the survey

Included accounts were responsible for 40,351/47,701 (85%) records registered by eligible accounts. We received a partial (43) or complete (323) survey for 366/783 (47%) eligible accounts (Additional file 3).

The first account completed the survey on November 21, 2016, and the last account completed the survey on March 21, 2017; 31/366 (8%) accounts completed the survey after January 17, 2017. Because of skip logic, and because some accounts did not answer all possible questions, accounts answered between 6 and 42 questions (median 19, IQR 17–29).

Policies and practices

Of 366 accounts, 156 (43%) reported that they have a registration policy and 129 (35%) have a results reporting policy (Table 2). Policies came into effect between 2000 and 2016 (median = 2013, IQR 2010–2015; mode = 2016).

Table 2 Clinical trial registration and results reporting policies

Among accounts with policies, most policies require registration of trials applicable under FDAAA (118/140; 84%) and trials funded by the NIH (72/140; 51%) (Additional file 4). Policies include different requirements for the time of registration (Table 3); most require that trials be registered before IRB approval is granted (15/140; 11%), before enrollment begins (49/140; 35%), or within 21 days of beginning enrollment (31/140; 22%). A minority of policies address what happens to trials when investigators join (57/156; 37%) or leave (38/156; 24%) an organization.

Table 3 Resources to support clinical trial registration and results reporting

Responsibility for registering trials is most often assigned to principal investigators (72/129; 56%). Responsibility for monitoring whether results are reported on time is most often assigned to principal investigators (54/115, 47%) and administrators (68/115, 59%).

Some policies allow organizations to penalize investigators who fail to register trials (27/115; 23%) or fail to report results (21/114; 18%). One account (< 1%) reported that their organization had penalized an investigator for non-compliance.

Resources

Few accounts use computer software to manage their records (68/366; 19%). Of those that use computer software, two use the application programming interface (API) to connect with ClinicalTrials.gov (Table 3).
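
For readers unfamiliar with the API, the sketch below shows how monitoring software might retrieve a registered record programmatically. It uses the current (v2) ClinicalTrials.gov API, which postdates this survey, so the endpoint and JSON field names should be checked against the API documentation; the NCT number is a hypothetical placeholder.

```python
# Minimal sketch of retrieving one study record from the public
# ClinicalTrials.gov API. This uses the v2 API, which postdates this survey;
# check the endpoint and field names against the current API documentation.
import requests

nct_id = "NCT01234567"  # hypothetical placeholder
url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"

response = requests.get(url, timeout=30)
response.raise_for_status()
study = response.json()

# Print identifying information from the record's protocol section.
ident = study["protocolSection"]["identificationModule"]
print(ident["nctId"], "-", ident.get("briefTitle", "(no title)"))
```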

Among the 287/366 (78%) accounts that allocate staff to fulfill ClinicalTrials.gov registration and reporting requirements, the median number of full-time equivalent (FTE) staff is 0.08 (IQR = 0.02–0.25). Among the staff who support ClinicalTrials.gov registration and reporting requirements, the staff member with the highest level of education most often has a graduate degree (232/311; 75%), followed by a bachelor’s degree (68/311; 22%) or a high school diploma (11/311; 3%). At the time of this survey, 34/338 (10%) of accounts planned to hire more staff, 217/338 (64%) did not, and 87/338 (26%) did not know. Among accounts affiliated with a CTSA, 24/109 (22%) receive support for ClinicalTrials.gov compliance from the CTSA.

Staff perform various roles, including educating investigators individually (151/342; 44%) and in groups (61/342; 18%), entering data for principal investigators (174/342; 51%), maintaining educational websites (57/342; 17%), notifying investigators about problems (241/342; 70%), assisting with analysis (58/342; 17%), responding to questions (241/342; 70%), and reviewing problem records (262/342; 77%).

Subgroup analyses

Registration and reporting policies are more common among the following accounts: (1) those with many records, (2) those affiliated with CTSAs, and (3) those affiliated with cancer centers (Table 4). For example, most cancer centers have a registration policy (61/97; 63%) and a reporting policy (52/97; 54%); a minority of other accounts have a registration policy (94/267; 35%) or a reporting policy (77/267; 28%).

Table 4 Subgroup analysis

Non-response bias

We found direct and indirect evidence of non-response bias, which suggests that our results might overestimate the amount of support available at academic organizations. For example, one administrator who declined to participate replied that their organization “does not have any central staff managing clinicaltrials.gov and does not utilize an institutional account.”

Account size was related to survey participation, and many participating accounts were large (Table 5). Among invited accounts with < 20 records, 171/532 (32%) participated; by comparison, 98/113 (87%) of accounts with ≥ 100 records participated.

Table 5 Characteristics of participants

Participation might have been related to organization resources. Nearly all CTSAs (62/64; 97%) and most National Cancer Institute (NCI) cancer centers (55/69; 80%) participated in the survey (Table 5), including 48 accounts affiliated with both a cancer center and a CTSA. Furthermore, some included accounts were related; for example, 107 accounts were affiliated with one of the 62 CTSAs.

In a sensitivity analysis (Additional file 5), we found no clear differences in policies or computer software use when comparing early and late responders. Most participants completed the survey before the effective date, so late responders included only 31/366 (8%) accounts.

Discussion

Summary of findings

To our knowledge, this is the largest and most comprehensive survey of organizations that register and report clinical trials on ClinicalTrials.gov. We had a high participation rate, and accounts that completed the survey conduct the overwhelming majority of clinical trials registered by academic organizations in the USA. We found that some organizations were prepared to meet trial registration and reporting requirements before The Final Rule took effect, but there is wide variation in practice. Most organizations do not have policies for trial registration and reporting. Most existing policies are consistent with FDAAA; however, most are not consistent with the ICMJE registration policy. Nearly half of existing policies do not require registration of all NIH-funded trials, though organizations could adapt their policies in response to the new NIH requirements. Few policies include penalties for investigators who do not register or report their trials. Although some organizations use computer software to monitor trial registration and reporting, only two have systems that connect directly with ClinicalTrials.gov (i.e., using the API). Most staff who support trial registration and reporting have other responsibilities, and most organizations do not plan to hire more staff to support trial registration and reporting.

Implications

Our results suggest that most organizations assign responsibility for trial registration and reporting to individual investigators and provide little oversight. Previous studies indicate that senior investigators often delegate this responsibility to their junior colleagues [47].

To our knowledge, the FDA has never assessed a civil monetary penalty for failing to register or report a trial, and the NIH has never penalized an organization for failing to meet its requirements. The ICMJE policy is not applied uniformly [48], and many published trials are still not registered prospectively and completely [37, 49,50,51,52]. Organizations may be more likely to comply with these requirements if they are held accountable for doing so by journals, the FDA, and funders (see, e.g., http://www.who.int/ictrp/results/jointstatement/en).

Improving research transparency in the long term will require changes in norms and culture. Organizations could take four immediate steps to improve trial registration and reporting. First, organizations could offer education to help investigators understand these requirements. Second, organizations could implement policies and procedures to support trial registration and reporting. For example, organizations could require that investigators answer questions on IRB applications to identify clinical trials that require registration. Organizations could also require that investigators provide trial registration numbers before allowing trials to commence. Third, organizations could identify trials that do not meet trial registration and reporting requirements and help individual investigators bring those trials into compliance. Notably, software could provide automatic reminders when trial information needs to be updated [53] or when results will be due, and software could help organizations identify problems that require attention from leaders. Prospective reminders would allow administrators and investigators to update information before they become non-compliant with reporting requirements. Finally, organizations could ensure there are consequences for investigators who fail to meet trial registration and reporting requirements. For example, organizations could stop enrollment in ongoing trials or stop investigators from obtaining new grants [54].
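
As an illustration of such reminder logic, the sketch below flags records whose results are coming due, assuming the standard FDAAA deadline of 12 months after the primary completion date; the record fields, dates, and the 60-day warning window are hypothetical choices for illustration.

```python
# Minimal sketch of prospective reminder logic for results reporting.
# Assumes the standard FDAAA deadline (results due 12 months after the
# primary completion date); the record fields and the 60-day warning
# window are hypothetical choices for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TrialRecord:
    nct_id: str
    primary_completion_date: date
    results_submitted: bool

def results_due_date(record: TrialRecord) -> date:
    # Approximate "12 months" as 365 days for simplicity.
    return record.primary_completion_date + timedelta(days=365)

def needs_reminder(record: TrialRecord, today: date, warn_days: int = 60) -> bool:
    if record.results_submitted:
        return False
    return today >= results_due_date(record) - timedelta(days=warn_days)

# Hypothetical records and review date.
records = [
    TrialRecord("NCT01234567", date(2017, 3, 1), results_submitted=False),
    TrialRecord("NCT07654321", date(2018, 6, 1), results_submitted=False),
]
today = date(2018, 1, 15)
for record in records:
    if needs_reminder(record, today):
        print(f"{record.nct_id}: results due {results_due_date(record)}; notify the investigator")
```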

Limitations

Although we sent multiple reminders and gave participants months to respond, our results might be influenced by non-response and social desirability bias. However, such biases would lead us to overestimate support for trial registration and reporting: participating accounts conduct more trials than non-participating accounts, and they are probably the accounts most likely to have policies and resources that support transparency.

Because we analyzed results by account, our results are not directly comparable with studies that grouped trials using the data fields “funder” [39, 40, 43], “sponsor” [41, 44], “collaborator” [41], or “affiliation” [42]. We analyzed results by account because (1) the account should usually represent the “responsible party,” that is, the person or organization legally responsible for fulfilling trial registration and reporting requirements, and (2) we were not aware of another method to identify all trials, or even all accounts, associated with each organization.

We could not always determine which trials were associated with specific organizations, and organizations might not know which accounts their investigators use. Organizations could work with ClinicalTrials.gov to identify non-working email addresses, update administrators’ contact information, assign an administrator responsible for overseeing each account, and create a one-to-one relationship between accounts and organizations. For example, ClinicalTrials.gov could identify multiple accounts managed by administrators at the same organization and help organizations move information into a single account. Organizations would need to prepare before centralizing their records; centralized administration could reduce trial registration and reporting if administrators lack the time, training, and resources to manage these tasks effectively.
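
To make the account-linkage idea concrete, the sketch below groups accounts by the domain of their administrators' email addresses to flag clusters that probably belong to the same organization. The account names, addresses, and domain heuristic are all hypothetical; real linkage would require curation by the NLM and by organizations themselves.

```python
# Minimal sketch of one heuristic for linking PRS accounts to organizations:
# group accounts by the domain of their administrators' email addresses.
# Account names and addresses are hypothetical.
from collections import defaultdict

admin_emails = {
    "Example University School of Medicine": ["prs@example.edu"],
    "Example University Cancer Center": ["registry@cancer.example.edu"],
    "Unrelated Hospital": ["admin@unrelated.org"],
}

clusters = defaultdict(list)
for account, emails in admin_emails.items():
    for email in emails:
        domain = email.split("@")[1]
        key = ".".join(domain.split(".")[-2:])  # crude registrable-domain heuristic
        clusters[key].append(account)

for org_key, linked_accounts in clusters.items():
    if len(linked_accounts) > 1:
        print(f"{org_key}: accounts that may share an organization -> {linked_accounts}")
```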

We requested information from one administrator at each organization, and administrators might have been unaware of policies and practices that affect other parts of their organizations (e.g., IRBs, grant management). Finally, some organizations were misclassified on ClinicalTrials.gov (e.g., non-US organizations); we do not know how many organizations were inadvertently included or excluded because of misclassification.

Future research

Further research is needed to determine how to support trial registration and reporting at different types of organizations. Some large organizations register several trials each week, while other organizations register a few trials each year. For small organizations, hiring staff to support trial registration and reporting could be prohibitively expensive. Further qualitative research could explore how different types of organizations are responding to these requirements.

Future surveys could examine predictors of compliance with trial registration and reporting requirements. Although there are important variations in policy and practice, additional quantitative analyses would have little immediate value because most organizations have low compliance [37,38,39,40,41,42,43,44,45]. Instead, detailed case studies might be most useful for identifying best practices. For example, Duke Medicine developed a centralized approach [55], and the US Department of Veterans Affairs (VA) described multiple efforts to support transparency, including an “internal web-based portal system” [54]. The National Clinical Trials Registration and Results Reporting Taskforce is a network of administrators who meet monthly by teleconference, share resources (e.g., educational materials), and provide informal peer education. As industry appears to be doing better than academia [37, 39,40,41,42,43,44], it might be useful for academic organizations to understand the methods industry uses to monitor and report compliance (see, e.g., [56]).

We surveyed organizations after the publication of The Final Rule, and most accounts completed the survey before The Final Rule took effect, several months before the compliance date [34]. Our results should be considered a “baseline” for future studies investigating whether organizations adopt new policies and procedures, and whether they allocate new resources, to fulfill registration and reporting requirements. The federal government estimates compliance costs for organizations will be $70,287,277 per year [34]. This survey, and future updates, could be used to improve estimates of the costs of compliance.

Conclusions

To support clinical trial registration and results reporting, organizations should strongly consider adopting appropriate policies, allocating resources to implement those policies, and ensuring there are consequences for investigators who do not register and report the results of their research.