Introduction

An abstract is a condensed version of a full scientific paper that describes the aim of a study, the methods employed, the results, and the conclusions, including implications for policy and practitioners [1]. The abstract of an article plays an important role in informing readers about the findings being communicated [2]. In particular, the abstract matters because readers often make their preliminary assessment of a study at this stage. Indeed, some readers, particularly clinicians, may use information from abstracts to inform their clinical decisions because they have limited time and resources [3].

Moreover, some researchers never publish their studies as full journal articles, so the only published record of a study may be the abstract in the conference proceedings. Presenting at a conference also tends to yield insights, questions, and interpretations that can improve the final manuscript, should the authors decide to publish the study in a peer-reviewed journal. In particular, effective abstracts convey the importance of the scientific research performed [1, 4]. Conference participants usually make their preliminary assessment of a study from the information presented in the conference abstract. However, abstracts presented at conferences have frequently been criticized as poorly reported [1, 2], particularly in disability research. Poor reporting in conference abstracts can have several implications, most notably the communication of incomplete information on findings and conclusions.

Recently, several studies have examined reporting in disability research [5,6,7,8,9]. These studies have largely focused on poor reporting of the methods employed, including sampling, sample size selection, design, and ethical considerations [7, 8, 10]. However, none of them assessed reporting in conference abstracts. A literature search identified only a few reviews and commentaries on abstracts; these focused on the reporting quality of randomized controlled trial abstracts in psychiatry [3] and on practical lessons for writing conference abstracts [1, 2, 4]. None of these studies assessed reporting in abstracts from a scientific conference on disability.

Against this background, the African Network for Evidence-to-Action in Disability (AfriNEAD), a disability stakeholder group that works to strengthen evidence-based interventions and policies, has organized a series of expert meetings and symposia in different settings in Africa. Building on previous symposia, the network upgraded the format to a scientific conference, so as to strengthen collaboration and transform evidence into action. The College of Health Sciences at Kwame Nkrumah University of Science and Technology collaborated with the University of Stellenbosch to host the fifth scientific AfriNEAD conference in Ghana in 2017.

This study aims to assess incomplete reporting in abstracts presented at the 5th AfriNEAD Conference in Ghana. In particular, the study assesses the content of the abstracts in relation to the information given on the methods used, the results, and the conclusions, as well as how well the abstracts meet the standards for reporting in abstracts. The assessment was guided by the following reporting standards: the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) Statement—Items to be included when reporting observational studies in a conference abstract [11, 12], as well as previous literature addressing methodological issues in abstracts [13,14,15].

Methods

Eligibility criteria

The study employed a descriptive design to assess the reporting in abstracts presented at the 5th AfriNEAD Conference, held on 7–9 August 2017 in Ghana. The content of the abstracts was assessed against the standards for reporting [11, 12]. Abstracts were included if they focused on one of the conference sub-themes, namely: children and youth with disability; education: early to tertiary; economic empowerment; development process in Africa: poverty, politics, and indigenous knowledge; health and HIV/AIDS; systems of community-based rehabilitation; holistic wellness, sport, recreation, sexuality, and spirituality; and research evidence and utilization, as well as abstracts of side events. Both structured and unstructured abstracts were eligible, provided they contained adequate information covering the background to the study, the methods used, the results, and the conclusions. Unstructured abstracts that merely gave a brief narrative of the study, without adequately capturing the background, methods, results, and conclusions, were excluded.

Selection of the included abstracts

Three reviewers independently reviewed the titles and the content of the printed conference proceedings, and then agreed on those that met the selection criteria. All the conference abstracts that were agreed on were included in the study. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart for systematic reviews [16] was used to illustrate the selection process (see Fig. 1).

Fig. 1

Flow chart of studies included in the review

Data extraction

A data extraction form was developed to extract information from all the included abstracts (see Additional file 1). The form was based on the following reporting standards: the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) Statement—Items to be included when reporting observational studies in a conference abstract [11, 12], together with variables of interest captured in previous literature [13,14,15]. The form was divided into subsections covering the background of the authors, the sub-themes, the objective of the study, methodological issues, and the results. Three reviewers were involved in extracting data from all the included abstracts.
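For illustration only, the sketch below shows how the kind of items captured by the extraction form could be represented as a single record per abstract; the field names (e.g. sub_theme, study_design, sample_size) are hypothetical labels chosen for this example and are not the actual variables of the form in Additional file 1.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AbstractRecord:
    """One row of a (hypothetical) data extraction form.

    Optional fields stay None when the abstract does not report the item,
    which is the condition of interest in this study.
    """
    abstract_id: int
    country: str                        # country the findings relate to
    sub_theme: str                      # conference sub-theme
    objective_reported: bool            # objective stated in the abstract?
    study_design: Optional[str] = None  # e.g. "descriptive"; None if unreported
    sample_size: Optional[int] = None   # None if no sample size is given
    sampling_technique: Optional[str] = None
    analysis_type: Optional[str] = None      # e.g. "thematic", "descriptive statistics"
    analysis_software: Optional[str] = None  # e.g. "SPSS"; None if unreported

# Example record for a single (hypothetical) abstract:
example = AbstractRecord(
    abstract_id=1,
    country="Ghana",
    sub_theme="education: early to tertiary",
    objective_reported=True,
    study_design=None,            # design not reported in the abstract
    sample_size=25,
    sampling_technique="purposive",
    analysis_type="thematic",
)
```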

Data synthesis

Descriptive statistics, including frequencies, means, standard deviations, and percentages, were used to summarize the findings, which are presented in tables and figures. The analysis was performed using Stata version 15.
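The tabulations in the study were produced in Stata 15; as a rough illustration of the same kind of summaries, the following sketch computes frequencies and percentages of unreported items from a hypothetical extraction dataset in Python/pandas. The column names carry over the assumed labels from the example above and are not the actual Stata variables.

```python
import pandas as pd

# Hypothetical extraction data; in the study this information was entered
# from the data extraction form and analysed in Stata version 15.
records = pd.DataFrame({
    "study_design":  ["descriptive", None, None, "cross-sectional", None],
    "sample_size":   [25, None, 40, 12, None],
    "analysis_type": ["thematic", None, "descriptive statistics", "thematic", None],
})

n = len(records)

# Frequency and percentage of abstracts NOT reporting each item,
# mirroring the "x/54 (y%)" style used in the Results section.
for item in records.columns:
    missing = records[item].isna().sum()
    print(f"{item}: {missing}/{n} ({missing / n:.1%}) not reported")

# Distribution of study designs among abstracts that did report one.
reported = records["study_design"].dropna()
print(reported.value_counts(normalize=True).mul(100).round(2))
```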

Results

Description of the abstracts reviewed

The study screened a total of 76 titles of conference abstracts. Of these, 59 met the inclusion criteria, while 17 were excluded. After a review of the full abstracts, a further five were excluded. Overall, 54 abstracts were included in the study (see Fig. 1).

Characteristics of the included abstracts

More than half of the included abstracts (32/54; 59.26%) reported findings from Ghana. Almost a third (16/54; 29.63%) focused on the sub-theme “education: early to tertiary,” while more than a tenth each focused on the sub-themes “holistic wellness, sport, recreation, sexuality, and spirituality” (8/54; 14.81%), “children and youth with disability” (7/54; 12.96%), and “health and HIV/AIDS” (7/54; 12.96%). More than two fifths of the abstracts (24/54; 44.44%) involved people with disabilities as participants, 17/54 (31.48%) involved professionals (nurses, doctors, teachers, and stakeholders, including education directors and coordinators), and 5/54 (9.26%) involved parents and caregivers (see Table 1).

Table 1 Characteristics of included abstracts

The reporting of methods in the conference abstracts

Two thirds of the included abstracts (36/54; 66.67%) reported the sample size, while 18/54 (33.33%) gave no information on sample size (see Fig. 2). Most of the included abstracts (37/54; 68.52%) did not report the study design. Of the 17 abstracts that reported the study design, almost half (8/17; 47.06%) used a descriptive design (see Table 2). Most of the abstracts (45/54; 83.33%) reported the methods employed, while 9/54 (16.67%) gave no information on the methods. Of the abstracts that reported the methods, 35/45 (77.78%) stated that qualitative methods were used (see Table 2).

Fig. 2

The reporting of methods in the conference abstracts

Table 2 Reporting of methods

Half of the included abstracts (27/54; 50%) did not report the sampling technique used. Of the abstracts that reported the sampling, 18/27 (66.67%) used purposive sampling (see Table 2). More than half of the abstracts (30/54; 55.56%) did not report the type of analysis performed. Of the abstracts that did report such information, 17/24 (70.83%) reported thematic analysis.

The majority of the included abstracts (50/54; 92.59%) did not report the analysis software used for the study. Only a few of the abstracts (4/54; 7.41%) reported SPSS as the statistical tool for the analysis. None of the included abstracts reported when the study was conducted.

The reporting of findings in the conference abstracts

Information about the results reported in the abstracts was extracted (see Table 3). None of the included abstracts reported the age distribution of the participants. Similarly, most of the included abstracts (53/54; 98.15%) did not report the gender of the participants. Most of the included abstracts (37/54; 68.52%) reported results thematically, while a few (7/54; 12.96%) used descriptive statistics (see Table 3).

Table 3 Reporting of findings

The majority of the included abstracts (48/54; 88.89%) did not report quantitative information that could be used to establish associations between the dependent and independent variables. Of the six abstracts that were eligible to report such information, only one reported such associations. Most of the included abstracts (43/54; 79.63%) were eligible to report on the primary outcome of the participants. Of these, 39/43 (90.70%) reported the primary outcome, while 4/43 (9.30%) did not (see Table 3).

Discussion

Strengths and limitations

Our study has some strengths and limitations, which should be acknowledged. In terms of strengths, a structured data extraction form was developed to extract the information. The authors also followed due process to ensure that adequate information was gathered and checked, so as to limit the risk of bias in the reporting of findings (see Table 4). Three reviewers independently reviewed the included abstracts. The reporting in the abstracts confirmed the findings of previous studies on methodological issues in disability research.

Table 4 Methods used in the included abstracts

Our study has several limitations, however, which are mostly associated with the scope and type of the included abstracts. The study was limited to abstracts from one AfriNEAD conference, so the sample is too small to make inferences about disability research in general. Restricting the review to a single conference also means that similar incomplete reporting in past AfriNEAD symposia could not be examined.

The reporting of methods and results in the conference abstracts

In the current study, 68.52% of the included abstracts lacked information on the study design, while 14.8% did not report the type of data. This implies poor reporting of methodological information, namely the study design and the type of data used. Such incomplete reporting means that readers may have difficulty understanding how the study was conceptualized, as well as the type of data that was used to produce the results. Reporting the study design and methods in conference abstracts is important to give readers the broader picture of the study, including the mix of data required to achieve the study objective [2]. Omitting such information at the abstract level may create uncertainty among readers, who cannot then draw firm conclusions about the subject. This finding can inform future conference organizers on effective ways to address methodological issues. In particular, future scientific abstracts should adequately highlight the relevant methodological issues, such as the study design and methods, in order to communicate the findings effectively [2].

The study showed that two thirds of the included abstracts reported the sample size, while a third did not. Reporting the sample size in the abstract gives readers basic evidence about the participants and enables them to better judge the representativeness and generalizability of the findings. Although most of the included abstracts reported the sample size, the 33.33% that lacked this information may leave readers inadequately informed about the findings presented, which demonstrates poor reporting. This confirms the findings of earlier studies on incomplete reporting [1, 2, 4]. Conference abstracts, particularly in disability research, should therefore adequately report the sample size, so as to inform readers, and scientific committees of conferences should ensure that the sample size of participants is captured in the abstracts to communicate the findings effectively.

In addition, reporting the sampling technique used is relevant to inform readers about the representativeness of the participants and to help them judge the risk of bias. However, about 50% of the included abstracts did not report the sampling technique. The lack of such information implies that readers may not be able to judge how far the findings reported in the abstract can be generalized. This is consistent with earlier reports of incomplete reporting in disability research [7, 8, 10], where poor reporting has mostly been associated with poorly described sampling. Our finding demonstrates that conference abstracts should report information on the sampling approach, in order to help readers understand how the participants were selected.

Furthermore, the current study highlighted that 55.56% of the included abstracts did not report the type of analysis performed (whether descriptive or inferential statistics or a qualitative analysis approach). Similarly, some background characteristics, namely the age distribution and gender of the participants, were not reported. This demonstrates incomplete reporting of results in the abstracts. The results section is arguably the most important part of a conference abstract, as it addresses the background characteristics of the participants and the primary and secondary outcomes [2]. Poor reporting of findings means that conference participants will not be adequately informed about the research question and will therefore be unable to explore outcomes, associations, or risk factors. Conference abstracts should ensure that the results section includes all relevant information, including the age and gender of participants. The poor reporting of results in conference abstracts confirms the findings of earlier studies in disability research [7, 8, 10], where poor reporting has largely pertained to incomplete reporting of findings, although in some instances incomplete reporting has been recorded mainly in full papers rather than in abstracts.

Conclusion

This study assessed the reporting in abstracts presented at the 5th African Network for Evidence-to-Action in Disability (AfriNEAD) Conference in Ghana. Our findings confirm that there is poor reporting of methods and findings in conference abstracts. The poor reporting is associated with a lack of information about the study design, the methods used, the sampling, the sample size, and the type of analysis performed. Our findings indicate that conference abstracts should adequately report all relevant methodological and results information. In particular, future conferences on disability research should aim to have abstracts address the study design, the type of data included, the sampling, the sample size, and the type of analysis employed.

Conference organizers should critically examine abstracts to ensure that these methodological issues are adequately addressed, so that findings are effectively communicated to participants. The call for abstracts should clearly state the expected reporting standards, particularly the required content in terms of objectives, methods, results, and conclusions, as well as practical implications for policy and practice. This can help to prevent incomplete reporting of information in conference abstracts.