Background

Identification of relevant randomized controlled trials (RCTs) is an integral part of the conduct of systematic reviews of intervention efficacy and effectiveness. RCTs are usually identified by searching electronic databases (e.g., PubMed, EMBASE) and by handsearching (manually screening biomedical journals, conference proceedings, and other publications). Recently, the Institute of Medicine (IOM) published guidelines stating that results from conference abstracts should be included in systematic reviews because conference abstracts provide an important source of unpublished trials [1]. This is because only about 60% of controlled clinical trials presented as conference abstracts are subsequently published as journal articles, and trials with negative or null results are published less frequently than those with positive findings [2]. The end result is that inclusion of the grey literature in systematic reviews and meta-analyses leads to smaller treatment effect sizes overall [3]. This presents a problem for systematic reviewers, however, as reporting of study design in abstracts is poor [4] and results reported are often preliminary. Furthermore, there are concerns that abstracts rarely undergo peer review. Although adoption of the CONSORT reporting guidelines for conference abstracts may lead to some improvement over time, controversy remains about whether to include conference abstracts in systematic reviews [5–7].

One approach to learning more about critical design elements of trials reported only in conference abstracts is to seek information from other sources. For example, a trials register such as ClinicalTrials.gov includes key protocol items relevant to determining eligibility and performing critical appraisal of a study.

We hypothesized that information included in a trials register record could be used to supplement the sparse information on study design presented in a conference abstract. To test the hypothesis, we obtained reports of RCTs presented at the annual meeting of the Association for Research in Vision and Ophthalmology (ARVO) from 2007 to 2009; this is an international meeting of more than 10,000 attendees. The ARVO organizers require that any abstract submitted for presentation at the annual meeting that describes a concurrently controlled trial must be registered in an electronically searchable, publicly available trials register [8]. The online conference abstract submission form includes a box in which abstract authors reporting controlled clinical trials are asked to supply the name of the trials register and the trial registration number where the trial was registered. Accompanying instructions include a definition of a clinical trial and a hyperlink to a frequently asked questions page that describes trials registers, including a drop-down menu of acceptable registers. A previous study showed that when authors complied with this requirement, about 90% reported registering trials at ClinicalTrials.gov [9].

Methods

Identification of included randomized controlled clinical trials

We reviewed all abstracts presented at the ARVO meetings from 2007 through 2009 (submitted through December 2008). Abstracts were classified as RCTs using the definition provided in the Cochrane Collaboration’s Handsearching Training Manual: “a study in which individuals (or other units) followed in the trial were definitely assigned prospectively to one of two (or more) alternative forms of health care using random allocation” [10]. One person handsearched the ARVO annual meeting abstracts online at http://www.arvo.org. A second person reviewed all studies classified as RCTs by the handsearcher, and another person reviewed a sample of abstracts not classified as RCTs [9]. Discrepancies were resolved by consensus.

Two individuals independently extracted information about trial registration as reported for each abstract, including the name of the organization listed in the trial registration box and the information in the box designated for the registration identification number. Abstracts listing more than one registration number or not listing any number were excluded from further analysis. Because 88% (276/312) of RCTs reporting a trials register listed ClinicalTrials.gov as the trials register [9], we chose to include only ClinicalTrials.gov records in our study. We entered each reported registration number into the search box at http://www.clinicaltrials.gov and classified numbers as valid if a matching trial was identified in ClinicalTrials.gov and invalid if the search yielded no results. We did not attempt to identify a ClinicalTrials.gov registration record for trials whose investigators had not included a registration number. We retrieved all ClinicalTrials.gov records for trials as posted (i.e., including amendments incorporated in the record at the time of retrieval) in May and June 2009.
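For readers who want to automate this validity check, the short Python sketch below classifies a reported registration number as valid or invalid using the rule described above: a number is valid only if it matches an existing ClinicalTrials.gov record. We performed this step manually through the website search box; the sketch instead uses the current ClinicalTrials.gov API (version 2), which did not exist at the time of our study, and the function name is our own.

import re

import requests  # third-party HTTP client (pip install requests)

# Registration numbers issued by ClinicalTrials.gov are "NCT" + 8 digits.
NCT_PATTERN = re.compile(r"^NCT\d{8}$")

def classify_registration_number(number: str) -> str:
    """Return "valid" if the number matches a ClinicalTrials.gov record,
    "invalid" otherwise (the classification rule used in our study)."""
    nct = number.strip().upper()
    if not NCT_PATTERN.match(nct):
        return "invalid"  # a malformed identifier cannot match any record
    # ClinicalTrials.gov API v2: HTTP 200 means a matching study exists.
    response = requests.get(
        f"https://clinicaltrials.gov/api/v2/studies/{nct}", timeout=30
    )
    return "valid" if response.status_code == 200 else "invalid"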

Two persons independently reviewed the abstract-ClinicalTrials.gov pairs for inclusion. Because the objective of our study was to compare descriptions of the original RCT design, we excluded pairs that described secondary analyses of trial data (e.g., analyses of ancillary study data), nested case–control studies drawing on RCT data, or methodological studies associated with the RCT. We also excluded pairs in which the ClinicalTrials.gov record stated that assignment to treatment was not randomized.

Abstraction of study design characteristics

We extracted information from both the abstract and the ClinicalTrials.gov record, including type of randomized comparison (parallel, crossover, cluster), multi-center status (yes/no), number randomized, and inclusion criteria related to demographic characteristics of the study population (i.e., adults, children, included sexes, presence of a disease or condition, and/or healthy volunteers). Masking was characterized separately as yes/no for study participants, treatment administrators, and outcome assessors. The study intervention, primary outcome, secondary outcomes, and study funder(s) were extracted verbatim from the abstract and the ClinicalTrials.gov record. We classified an outcome as the “primary” outcome in the abstract only if it was explicitly stated as such, and in the ClinicalTrials.gov record only if it was included in the “Primary Outcome” field in the tabular view of the record. We classified all other reported outcomes as “non-primary” outcomes. The study funder was abstracted from the “Support” field of the abstract and either the “Sponsor” or “Collaborator” field of the ClinicalTrials.gov record. We classified funders as industry or non-industry; if the study funder was unclear, we conservatively classified it as non-industry. We also collected information about study status as reported in ClinicalTrials.gov (“not yet recruiting”, “recruiting”, “active, not recruiting”, or “completed”). If more than one abstract had the same registration number, we extracted information from all relevant abstracts and reported it on a single data abstraction form. We also recorded whether a contact name, telephone number, and e-mail address were present in the ClinicalTrials.gov record. All data were abstracted independently by two abstractors (RWS, LH, KD, AE, JT) on pre-tested paper data collection forms; we extracted all information from the abstract before extracting information from the ClinicalTrials.gov record. Discrepancies were resolved by consensus. All data from the paper forms were entered into an Access database.
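As an aid to readers who wish to replicate this abstraction step programmatically, the Python sketch below shows one possible data structure for the items we collected. The field names are our own shorthand, not ClinicalTrials.gov field names, and None marks an item that was not reported in a given source.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DesignRecord:
    """Study-design items abstracted from one source (conference abstract
    or ClinicalTrials.gov record); None means the item was not reported."""
    comparison: Optional[str] = None            # "parallel", "crossover", "cluster"
    multicenter: Optional[bool] = None
    number_randomized: Optional[int] = None
    includes_adults: Optional[bool] = None
    includes_children: Optional[bool] = None
    sexes_included: Optional[str] = None        # "men", "women", or "both"
    healthy_volunteers: Optional[bool] = None
    masked_participants: Optional[bool] = None
    masked_administrators: Optional[bool] = None
    masked_outcome_assessors: Optional[bool] = None
    interventions: List[str] = field(default_factory=list)
    primary_outcomes: List[str] = field(default_factory=list)
    non_primary_outcomes: List[str] = field(default_factory=list)
    funders: List[str] = field(default_factory=list)

@dataclass
class AbstractRegisterPair:
    """The unit of comparison: one abstract paired with one register record."""
    nct_id: str
    abstract: DesignRecord
    register: DesignRecord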

Data analyses

We assessed the frequency with which each variable was reported in the abstract and in the trials register. We assessed concordance between study design characteristics described in both the conference abstract and the trials register record by comparing reports of variables across abstract-ClinicalTrials.gov pairs. We determined whether a design characteristic that was not described in the abstract was present in the trials register. We also compared information reported in both sources and characterized the level of agreement as full, partial, or no agreement. If the abstract and the ClinicalTrials.gov record agreed exactly or nearly exactly, we classified this as full agreement. If one source (the abstract or the register) provided a more precise definition of an outcome, we classified this as partial agreement; a completely different outcome reported in the abstract was classified as a new outcome (no agreement). We further categorized partial agreement by type of disagreement and by source of the additional information (i.e., the abstract or the ClinicalTrials.gov record). We used SAS Version 9.2 (SAS Institute, Cary, NC) to perform all analyses.
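To make the comparison categories concrete, the minimal Python sketch below encodes the decision rules for a single design item. In our study these judgments were made by two human abstractors, not by string matching, so the substring test for partial agreement is only a crude surrogate for their assessment of whether one source was more precise than the other.

from typing import Optional

def classify_agreement(abstract_value: Optional[str],
                       register_value: Optional[str]) -> str:
    """Apply the study's agreement categories to one design item;
    a simplified stand-in for the abstractors' judgments."""
    if abstract_value is None and register_value is None:
        return "not reported in either source"
    if abstract_value is None:
        return "reported in register only"
    if register_value is None:
        return "reported in abstract only"
    a = abstract_value.strip().lower()
    r = register_value.strip().lower()
    if a == r:
        return "full agreement"     # exact or near-exact match
    if a in r or r in a:
        return "partial agreement"  # one source is more precise
    return "no agreement"           # a completely different item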

Results

Registration of clinical trials

The handsearching results have been reported previously [9]. Only 2.8% (496/17,953) of all abstracts presented at ARVO from 2007 to 2009 described results of an RCT; 276 of these reported registration in ClinicalTrials.gov. After excluding abstracts that did not meet our eligibility criteria, 158 abstracts remained. We also excluded 4 abstracts for which the ClinicalTrials.gov record classified the study as not randomized. We linked each of the remaining abstracts with a single ClinicalTrials.gov registration number, resulting in 154 abstract-ClinicalTrials.gov pairs for analysis (see Figure 1).

Figure 1

Flow chart of conference abstract-ClinicalTrials.gov register pairs used for comparison of randomized controlled trial characteristics. RCT = randomized controlled trial; CT.gov = ClinicalTrials.gov.

General study design

We observed generally good agreement between the randomized intervention comparison described in the abstract and that reported in ClinicalTrials.gov (80.5%, 124/154) (see Table 1). Whether a trial was single-center or multi-center was reported the same in 39 pairs, differently in 13, and in neither source for 12 pairs. For the remaining 90 pairs, no information on multi-center status was available in the abstract, although it was reported in ClinicalTrials.gov.

Table 1 Agreement between conference abstract and ClinicalTrials.gov register on design of randomized comparison (n = 154 abstract-ClinicalTrials.gov pairs)

Inclusion criteria

There was agreement between the abstract and the ClinicalTrials.gov record on inclusion criteria related to age, the presence of a disease or condition, and whether the trial participants were healthy volunteers (see Figure 2). The inclusion criterion of sex was rarely reported in the abstract, but when it was reported, the information usually agreed with the ClinicalTrials.gov record. In contrast, whether men and/or women were eligible was always specified in the ClinicalTrials.gov record.

Figure 2

Agreement between abstract and ClinicalTrials.gov register on eligibility criteria. Bars show the percent of abstract-ClinicalTrials.gov pairs that agree (black) or disagree (gray) on each eligibility criterion, or for which information on a criterion was provided in the ClinicalTrials.gov record but not the abstract (white). The eligibility criteria assessed were inclusion of adults, children, healthy volunteers, presence of a condition, men and/or boys, and women and/or girls; each was categorized as present or not present. CT.gov = ClinicalTrials.gov.

Intervention

At least one experimental and one control intervention were reported in both the abstract and the ClinicalTrials.gov record. We observed good agreement between the intervention reported in the abstract and that reported in ClinicalTrials.gov when looking at the broad categories of treatment, prevention, diagnostic tests, and other (Table 1). However, there was less agreement when we compared more precise descriptions of the interventions. The interventions described in the abstract and the ClinicalTrials.gov record agreed exactly for 57% (88/154) of pairs, agreed partly for 37% (57/154), and did not agree for 6% (9/154) (see Table 2 for examples of disagreements). In eight pairs, partial agreement involved small differences in the description of the intervention (e.g., 100 μl versus 110 μl of a drug), and in 49 pairs, partial agreement involved additional information about the intervention (e.g., dosage, time of administration, duration) in one source that was unavailable in the other. In 12 pairs, additional information was present only in the ClinicalTrials.gov record, and in 29 pairs it was present only in the abstract. For eight pairs, complementary additional information was present in both sources. Partial agreement also involved additional treatment arms: authors reported between one and four additional treatment arms in 11 ClinicalTrials.gov records and between one and six additional treatment arms in 25 abstracts.

Table 2 All interventions reported in ClinicalTrials.gov register and abstract that were classified as disagreements

Masking or blinding

We extracted information on masking separately for study participants, treatment administrators, and outcome assessors, finding that the ClinicalTrials.gov record frequently provided information on masking that was not available in the abstract (Figure 3).

Figure 3

Agreement between abstract and ClinicalTrials.gov register on masking by study role. For masking of study participants, persons administering the treatment, and persons measuring outcomes, bars show the percent of abstract-ClinicalTrials.gov pairs in which neither source provided information on masking or blinding (black), the sources agree (dark gray) or disagree (light gray), or information was provided in the ClinicalTrials.gov record but not the conference abstract (white). Masking was categorized as present or not present. CT.gov = ClinicalTrials.gov.

Sample size

The number of study participants or sample size was reported in both the abstract and ClinicalTrials.gov for the majority (136/154, 88%) of studies. There were eight studies in which a sample size was reported in the trials register, but not the abstract. We compared the reported sample sizes for pairs where the abstract author stated that the number of participants represented all randomized study participants and the status on the ClinicalTrials.gov record was either “completed” or “active, not recruiting” (see Table 3), and found good agreement.

Table 3 Comparison of number of randomized study participants reported in abstract with number reported in ClinicalTrials.gov

Outcomes

Authors reported 800 outcomes in 152 abstracts; no outcomes were reported in 2 abstracts. Thirty-four percent (52/154) of abstracts and 82% (126/154) of ClinicalTrials.gov records explicitly described a primary outcome (Table 4). Of the 80 primary outcomes reported among the 40 abstract-ClinicalTrials.gov pairs, 14 (18%) were classified as being in complete agreement and 39 (49%) as in partial agreement (see Table 5). Partial agreement typically involved a more explicit description of an outcome, as shown in Table 6. Of the remaining 27 primary outcomes reported in the abstract, 13 were reported elsewhere in the ClinicalTrials.gov record. There was poor agreement between the non-primary outcomes reported in the abstract and those reported in ClinicalTrials.gov (see Table 5).

Table 4 Agreement between conference abstract and ClinicalTrials.gov register on reporting of one or more primary outcomes (n = 152 abstract-ClinicalTrials.gov pairs)
Table 5 Agreement between conference abstract and ClinicalTrials.gov register on outcomes (n = 40 abstract-ClinicalTrials.gov pairs reporting ≥ 1 primary outcome and 152 abstract-ClinicalTrials.gov pairs reporting a non-primary outcome)
Table 6 Examples of primary outcomes reported in ClinicalTrials.gov register with primary outcome reported in abstract classified as “partial agreement”

Funders

One hundred forty-six unique funders were identified in either the abstract or the ClinicalTrials.gov record. At least one funder was reported in every ClinicalTrials.gov record, but in only 62% (95/154) of abstracts. More than one funder was reported in 41 ClinicalTrials.gov records and in 19 abstracts. The same funder(s) were reported in both sources for 10 studies. We observed partial agreement on funders for 37 pairs, in which an additional or different funder was reported in either the abstract or the ClinicalTrials.gov record.

Contact information

The name of a contact person was included in 83% (128/154) of ClinicalTrials.gov records, but a telephone number was found in only 25 records and an e-mail address in only 26. Apart from author names and affiliations, no contact information was provided in any conference abstract.

Discussion

A substantial amount of additional information on study design was available in the ClinicalTrials.gov record that was not presented in the corresponding conference abstract. Information on multi-center status, eligibility criteria with respect to sex, who was masked, and the primary outcome was present to varying degrees in the ClinicalTrials.gov register record. In addition, the name of a contact person was included in 83% of ClinicalTrials.gov records, so that if desired information about a trial was not available, or conflicted with what was reported in the abstract, a systematic reviewer could contact the study author directly; a telephone number or e-mail address, however, was provided in only a minority of records.

Thus, if a trial registration number is available for a study to be included in a systematic review, a systematic reviewer may be able to find additional information about study design in the trials register record. However, there are caveats to these findings. First, conference organizers generally may neither require nor collect information about trial registration. Second, we encountered numerous disagreements on a number of items between the information provided in the conference abstract and that contained in the ClinicalTrials.gov record. ClinicalTrials.gov was originally intended to provide a source of information about the existence of a trial for patients and clinicians, to diminish redundant research effort, and to alert researchers to the possibility of publication bias. Currently it provides limited protocol information, although it would be a natural repository for full protocols. A published protocol or design and methods paper would provide more information, but until public availability of all trial protocols is achieved, trials registers serve as surrogates for study protocols.

The findings from our study relate specifically to ARVO abstracts and ClinicalTrials.gov and may not be applicable to other conference abstracts or trials registers. Furthermore, our requirement that a valid ClinicalTrials.gov registration number be reported on the abstract submission form could mean that studies were actually registered but the number not recorded or incorrectly recorded.

Previous studies have identified discrepancies between trials register records and the associated full-length publications on study design characteristics, especially related to the description of primary and secondary outcomes [11–16]. In our study, we found discrepancies in the detailed descriptions of study interventions and outcomes reported in the conference abstract compared with those reported in the ClinicalTrials.gov record. Frequently there was more information available in the abstract than in the ClinicalTrials.gov record (e.g., dose or duration of treatment). Most disturbing was the appearance of additional treatment arms in the abstract that were not included in the register, suggesting that investigators are not regularly updating the trial register record as required, or that abstracts represent preliminary findings reported before an arm was dropped.

The discrepancies in outcomes may be an indication of error, of selective outcome reporting, or simply of investigators not updating the ClinicalTrials.gov record in a timely manner. We obtained register records that were available shortly after presentation of abstract results at ARVO, so any additional outcomes or changes in study design should already have been incorporated into the trial register as amendments to the protocol items. In addition, we compared outcomes in only one direction, i.e., from abstract to ClinicalTrials.gov record. We did not do the reverse analysis (i.e., comparing all outcomes reported in ClinicalTrials.gov with those reported in the abstract) because we would not expect an abstract to include all outcomes reported in ClinicalTrials.gov. In doing so, we did find that the outcome reported in the abstract as the “primary” outcome was sometimes a secondary outcome in the ClinicalTrials.gov record. In the future it will be of interest to compare outcomes reported as results with the outcomes specified in the ClinicalTrials.gov record. Results reporting in ClinicalTrials.gov opened in September 2008; when we retrieved the records in May and June 2009, none included any study results.

Other studies have also found discrepancies between the description of an RCT as reported in an abstract and that reported in a full-length publication [17–21]. It would be of interest to make a three-way comparison of a conference abstract, the trial register and results record, and the full-length publication to determine congruence between these sources of information. It would also be useful for authors to include conference abstracts in the ClinicalTrials.gov record as publications, in addition to full-length publications.

Whether the study results reported in an abstract should be included in a systematic review or used to make clinical decisions is unclear. The inclusion of abstract results in systematic reviews has been reported in a sample of meta-analyses indexed in Medline [7], in health technology assessments [5], and in drug formulary decision-making [22]. Including abstracts in systematic reviews is recommended by the IOM [1], the Cochrane Collaboration [23], and the Agency for Healthcare Research and Quality [24], although caution is urged due to the preliminary nature of abstract results [25, 26]. A Cochrane systematic review found an overall smaller treatment effect when systematic reviewers included abstract results in a review than when abstracts were excluded, although this finding was not statistically significant [3]. Other investigators have reported similar results, showing small reductions in the size of the treatment effect when abstract results are included in a systematic review [27, 28].

Overall, given our findings, we encourage systematic reviewers to take advantage of the information provided in the ClinicalTrials.gov record to supplement the information provided in an abstract. We did not record the amount of time it took to extract information for each record. Because the trials register number was available, matching the abstract to the register record was straightforward. Extracting all the information required for our study was somewhat time consuming, and we found it more efficient to obtain the information from the tabular view in ClinicalTrials.gov than from the full text view. Most likely, it would take much less time for a systematic reviewer looking for specific information. We caution systematic reviewers, however, to exercise care in using data from a trials register record, as with any unpublished study result. In many cases, systematic reviewers may not be able to resolve conflicting reports and will still find it necessary to contact the study investigators. For this situation, the ClinicalTrials.gov record frequently provides contact information that is almost always missing from a conference abstract.

Conclusions

Systematic reviewers may find additional information about an RCT in the ClinicalTrials.gov record to supplement the information provided in a conference abstract. However, it may still be necessary to contact study investigators for information that is not included in either the abstract or the ClinicalTrials.gov record, or that conflicts between the two sources.