Background

The aim of designing research agendas

Research agendas list research questions addressing knowledge gaps that require further investigation. They serve as a guide for scientists and funders to coordinate and focus research on areas deemed most relevant and impactful. In the past, they were designed primarily by researchers. Currently, the expertise of paediatric patients, their parents, or carers is increasingly recognised as a critical component of paediatric research [1]. In 1995, Chalmers stated that increased involvement of patients and the public in designing research agendas would likely result in a more open-minded approach to which research questions are worth addressing [2]. Later, he found a mismatch between the research questions considered important by patients and the public and those addressed by researchers, a mismatch that resulted in research waste [3]. Increasingly, funding agencies, researchers, journal editors, and policymakers demand that waste in research be reduced [3,4,5,6]. To address mismatches and reduce research waste, Chalmers and colleagues developed the James Lind Alliance (JLA) Research Priority Setting Partnerships (PSPs) [7]. The PSPs ensure that the research questions of patients, carers, and healthcare professionals are prioritised.

Involving children in designing paediatric research agendas

Unfortunately, children and young people (CYP) are still rarely involved in designing research agendas on paediatric topics, so-called paediatric research agendas (PRAs). A systematic review published in 2017 showed that CYP had been involved in only four PRAs [8]. Our recently published review showed that, since 2017, CYP have been involved in 22 additional PRAs [9]. One reason CYP are infrequently involved is a persistent tendency to underestimate the value of their voices, rooted in a belief that they are not competent to speak about health issues related to their bodies [10]. Partnering with CYP to design PRAs is crucial for understanding what is important to them. Research teams sometimes spend years developing agendas in partnership with patients, parents, and carers, yet little is known about whether the resulting research priorities are elaborated upon [9, 11]. The question of whether an agenda influences what research is subsequently undertaken remains open [12]; this is what we refer to as the academic impact.

Describing the academic impact of paediatric research agendas

Describing the academic impact of PRAs is important for several reasons. First, the academic impact should be described to evaluate whether the aim of reducing research waste is met. Second, describing the academic impact can help identify areas where progress has been made and where additional research efforts are needed. Third, evaluating the academic impact reveals whether funding agencies use the priorities. Describing the academic impact of PRAs in which CYP are involved is particularly crucial. Paediatric research agendas that involve CYP require a significant investment of both time and resources [13,14,15]. Researchers need to be flexible when scheduling research meetings with CYP, because CYP have multiple commitments, such as school and sports, during the typical working hours of researchers. This flexibility may involve conducting research activities during evenings, weekends, or school holidays, when CYP are more readily available. Furthermore, researchers must address ethical considerations, such as power dynamics and facilitating environments, which help CYP feel secure and confident enough to express themselves freely. Moreover, researchers should respect the authenticity of CYP’s voices [15]. Finally, one of the participants in the JLA PSP on Juvenile Idiopathic Arthritis considered the PSP a waste of time and money if the project ended with the publication of the top 10 research priorities and no attention was paid to whether the priorities were subsequently used [16]. Regrettably, little or no attention is given to reporting the academic impact of PRAs [9].

Example of a research team describing the academic impact of their paediatric research agenda

To the best of our knowledge, Geldof and colleagues stand alone in evaluating the academic impact of their PRA [12]. Staley and colleagues performed a qualitative evaluation of what happens after JLA PSPs; however, they did not conduct a systematic search or evaluate whether the research priorities included in research agendas are elaborated on [17]. Geldof and colleagues evaluated the impact of their agenda six years after its initiation and three years after publishing the PRA [12]. Most of the studies based on their PRA were pharmaceutically driven studies focused on the development and validation of new medical treatments (71%). The authors concluded that the extent to which the current research landscape adequately represents the viewpoint of patients is debatable [12].

Methods for identifying the research impact of general studies and describing the academic impact of paediatric research agendas

Identifying the academic impact of PRAs differs from identifying the research impact of other studies. Identifying the research impact of general research aims to demonstrate “the contribution that excellent research makes to society,” as defined by the Economic and Social Research Council [18]. Nevertheless, the impact of such research is difficult to identify, partly because impacts originating from consecutive activities may accumulate over the longer term. It therefore becomes difficult, sometimes even impossible, to ascertain which activity ultimately contributes to impact [19]. Several approaches, such as the Payback Framework developed by Donovan and Hanney [20] and the Research Impact Framework (RIF) developed by Kuruvilla and colleagues [21], have proven robust and useful for describing research impact. The RIF is divided into four broad areas: research-related impacts, policy impacts, service impacts, and societal impacts. The checklist was developed for academics interested in describing and monitoring the impact of their research. The Payback Framework was originally developed to examine the impact of health services research but has been adapted to assess the impact of research in other areas, such as the social sciences [14]. The Payback Framework consists of five categories: (1) knowledge, (2) benefits to future research and research use, (3) benefits from informing policy and product development, (4) health and health sector benefits, and (5) broader economic benefits. The two approaches consist of almost identical categories, and each can be used in the different circumstances in which researchers may seek to describe impact [19]. Their limitation is that they were not specifically developed to describe the impact of PRAs on what research is undertaken after an agenda is published.

The aim of our study was two-fold: first, to devise a reliable method for describing the academic impact of PRAs; second, to describe the academic impact of PRAs designed together with CYP. We chose to focus only on the academic impact of PRAs in which CYP had been involved because we seek to improve the quality of CYP involvement in designing PRAs and therefore believe that describing the academic impact of these agendas is of utmost importance.

Methods

Design

We developed a method to describe the academic impact of PRAs based on the research-related impacts area of the RIF (Table 1) and the first two categories of the Payback Framework: Knowledge, and Benefits to Future Research and Research Use (Table 2). The categories of the RIF and the definitions of the Payback Framework that we used are highlighted in both tables. We used these categories because they most closely correspond to the academic impact of the PRAs we aimed to evaluate. The other three areas of the RIF and the remaining three categories of the Payback Framework relate to impacts other than academic impact (policy, service, and societal impacts for the RIF; policy, health, and economic impacts for the Payback Framework). The authors of the RIF state that the themes can be adjusted, including removal, addition, grouping, or modification, to align with the research being described and the relevant assessment criteria [21]. In consultation with a medical information specialist and a methodologist, we named our method the Descriptive Academic Impact Analysis of Paediatric Research Agendas (DAIAPRA). The following section describes the development of the method.

Table 1 The broad areas and descriptive categories of the research impact framework
Table 2 Example of the multidimensional categorisation of paybacks of the Payback Framework

Developing the descriptive academic impact analysis of paediatric research agendas

In preparation for creating the impact tool, we defined the academic impact of PRAs using three identifiable factors: (1) the number of citations referencing the agenda, (2) the number of new studies based on the priorities, and (3) the variation in authorship between the original PRA and the subsequent studies. We opted for this approach because data on citations, new studies, and research teams were readily accessible. Furthermore, we added an evaluative factor to the impact tool, which considered the ease of determining whether a study was based on one of the PRAs. It is important to note that impact encompasses various elements, but our study concentrated solely on those that could be quantified.
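For illustration only, the quantifiable factors above could be recorded per agenda in a simple data structure. The Python sketch below is our own hypothetical bookkeeping format (the field names are not part of the published tool):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgendaImpactRecord:
    """Hypothetical record of the quantifiable impact factors for one PRA."""
    agenda_id: str                        # e.g. first author and publication year of the PRA
    citation_count: int = 0               # factor (1): citations referencing the agenda
    new_studies: List[str] = field(default_factory=list)      # factor (2): studies based on the priorities
    shared_authorship_count: int = 0      # factor (3): new studies sharing a first/second/last PRA author
    ease_of_tracing: List[str] = field(default_factory=list)  # evaluative factor: 'easy'/'medium'/'difficult' per study
```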

We defined the different steps of the impact tool, starting with Step 1: Identifying the PRAs. Next, we determined the data sources and metrics that would serve as the basis for describing impact. We based our PRA impact tool on three components of the section Research-Related Impacts: Publications and Papers, Type of Problem/Knowledge Addressed, and Research Networks and User Involvement (Table 1) [21]. Additionally, our tool drew upon two components of the Payback Framework: Journal Articles and Better Targeting of Future Research (Table 2).

In Step 2, we linked the Publications and Papers component of the RIF and the Journal Articles component of the Payback Framework to the number of citations generated by the PRA. In Step 3, we linked the RIF component Type of Problem/Knowledge Addressed and the Payback Framework component Better Targeting of Future Research to new, PRA-based studies. In Step 4, we linked the component Research Networks and User Involvement to the difference in authorship between the PRA and the new studies. To make the impact analysis tool readily accessible, we included only publicly available, easily assessable metrics. To assess how easily it can be determined whether a study addresses a priority of the PRA, we included Step 5. Steps 1, 3, and 5 rely on a more subjective evaluation; hence, these steps should be performed independently by at least two people. Steps 2 and 4 are based on objective variables that can only be interpreted in one way. To use the method efficiently, we recommend that Steps 3 and 5 be performed simultaneously (Fig. 1). The steps of the DAIAPRA are explained in more detail in the sections below.
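The mapping between steps, framework components, and metrics described above can be summarised compactly. The dictionary below is merely a schematic restatement of the text, not part of the published tool:

```python
# Schematic restatement of Steps 1-5 of the DAIAPRA as described in the text.
DAIAPRA_STEPS = {
    1: {'activity': 'Identification of PRAs',
        'framework_component': None,
        'metric': 'included agendas'},
    2: {'activity': 'Citation analysis',
        'framework_component': 'RIF: Publications and Papers; Payback: Journal Articles',
        'metric': 'citations per PRA'},
    3: {'activity': 'Impact analysis',
        'framework_component': 'RIF: Type of Problem/Knowledge Addressed; '
                               'Payback: Better Targeting of Future Research',
        'metric': 'new studies based on the priorities'},
    4: {'activity': 'Author assessment',
        'framework_component': 'RIF: Research Networks and User Involvement',
        'metric': 'difference in authorship between PRA and new studies'},
    5: {'activity': 'Classification of ease of tracing',
        'framework_component': None,
        'metric': "'easy' / 'medium' / 'difficult'"},
}
```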

Fig. 1 Descriptive academic impact analysis of paediatric research agendas

Step 1. Identification of paediatric research agendas

The research team, in partnership with an information librarian, developed the literature search strategy. The strategy used Medical Subject Headings and keywords for ‘children’, ‘priority setting partnerships’, and ‘paediatric research agenda’. The search terms within each category were combined using the “OR” operator, and the three categories were linked using the “AND” operator. The search was conducted in MEDLINE, EBSCOhost, Web of Science, and Google Scholar. We used both forward and backward citation chasing, by checking the reference lists of the included studies, to ensure that we did not miss any important PRAs. Furthermore, the James Lind Alliance webpage, which lists all JLA Priority Setting Partnerships, was reviewed to identify additional PRAs that were not captured by our search strategy. The resulting articles were then uploaded to the Rayyan screening tool, developed by the Qatar Computing Research Institute (Doha, Qatar), and duplicate entries were eliminated. Several inclusion criteria were applied when identifying the research agendas (Table 3). The search strategy described above was used in our recently published review [9]. To include more paediatric research agendas in this study, we added the PRAs identified by Odgers and colleagues in which CYP were involved, and we repeated the same search after the publication of our review.
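As a rough illustration of how the three concept blocks were combined, the sketch below assembles a boolean query string. The terms shown are placeholders of our own, not the full published search strategy:

```python
# Illustrative only: placeholder terms, not the full published search strategy.
children = ['child*', 'paediatric*', 'pediatric*', 'adolescen*']
priority_setting = ['"priority setting partnership*"', '"research prioriti*"']
agenda = ['"research agenda*"', '"paediatric research agenda*"']

def or_block(terms):
    """Join the synonyms of one concept with OR."""
    return '(' + ' OR '.join(terms) + ')'

# The three concepts are combined with AND, as described in Step 1.
query = ' AND '.join(or_block(block) for block in (children, priority_setting, agenda))
print(query)
```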

Table 3 Inclusion criteria for step 1

Step 2. Citation analysis

We uploaded the identified PRAs to Scopus, SciVal, and Altmetric. Scopus is an expertly curated abstract and citation database that provides access to reliable data, metrics, and analytical tools. We extracted data on citations from the database. A medical information specialist from the University of Groningen helped us download the desired information for all the studies into a Microsoft Excel file.
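A minimal sketch of how such an export could be summarised, assuming a Scopus-style Excel export with one row per citing article and a column identifying the cited PRA (the file name and column name are hypothetical):

```python
import pandas as pd

# Hypothetical export: one row per citing article, 'cited_pra' names the cited agenda.
citations = pd.read_excel('scopus_export.xlsx')

per_pra = citations.groupby('cited_pra').size().rename('citation_count')
print(per_pra.sort_values(ascending=False))      # citations per PRA, highest first
print(per_pra.sum(), round(per_pra.mean(), 1))   # total citations and mean per PRA
```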

Step 3. Impact analysis

Next, we screened the studies that cited one of the PRAs and examined the context in which the PRA was cited. Two researchers (LP and SB) independently screened the citations and examined whether the PRA was referred to because the study addressed one of the priorities. When disagreements arose, the researchers engaged in discussion until they reached a consensus. The inclusion criteria for Step 3 can be found in Table 4.

Table 4 Inclusion criterion for step 3

Step 4. Author assessment

We compared the authors of the studies included in Step 3 to the authors of the PRA on which the studies were based. We examined whether the first, second, or last author of the PRA was involved in the new studies based on that PRA.
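The comparison in Step 4 amounts to a simple overlap check. The sketch below assumes author lists ordered as published; the names are illustrative only:

```python
def shares_key_author(pra_authors, study_authors):
    """True if the first, second, or last author of the PRA also authored the new study."""
    key_authors = set(pra_authors[:2]) | set(pra_authors[-1:])
    return any(author in key_authors for author in study_authors)

# Illustrative use with made-up names:
pra = ['Smith A', 'Jones B', 'Lee C', 'Brown D']
new_study = ['Garcia E', 'Brown D']
print(shares_key_author(pra, new_study))  # True: the last PRA author is on the new study
```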

Step 5. Classification of ease of tracing

Finally, we classified the studies that were included during the impact analysis into three categories based on the ease of tracing whether a study addresses a research priority of the PRA: easy (the research priority is explicitly stated in the publication), medium (the research priority is not explicitly stated but could be inferred from the text), and difficult (the research priority is neither stated nor inferable from the text).
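This classification was a manual judgement. Purely as bookkeeping, the categories and a tally could be recorded along the following lines (the category labels mirror the text; everything else is hypothetical):

```python
from collections import Counter
from enum import Enum

class EaseOfTracing(Enum):
    EASY = 'easy'            # priority explicitly stated in the publication
    MEDIUM = 'medium'        # priority not stated but inferable from the text
    DIFFICULT = 'difficult'  # priority neither stated nor inferable

# Hypothetical manual judgements for a handful of included studies.
judgements = [EaseOfTracing.EASY, EaseOfTracing.MEDIUM, EaseOfTracing.EASY, EaseOfTracing.DIFFICULT]
print(Counter(j.value for j in judgements))  # Counter({'easy': 2, 'medium': 1, 'difficult': 1})
```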

Results

Step 1. Identification of paediatric research agendas

We included 31 PRAs in which CYP were involved [13, 14, 22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50]. Twenty-two PRAs were included because we had identified them in our recently published review [9], four were identified by Odgers and colleagues [8], and five were identified after the publication of our review. At the time of this study, the newest PRA was published in November 2022 and the oldest in June 2010. The themes of the included PRAs are shown in Table 5. Little variation was found in the methods used to develop the PRAs: the JLA method was used most frequently (n = 22), followed by workshops (n = 2), focus groups (n = 2), the Research Prioritisation by Affected Communities (RPAC) method (n = 1), an online survey (n = 1), and a combination of the methods mentioned (n = 3).

Table 5 Themes of the included paediatric research agendas

Step 2. Citation analysis

The 31 PRAs were cited 517 times, ranging from 0 to 71 citations per PRA, with a mean of 17 citations per PRA (Supplemental Material 1). The agenda of Batchelor and colleagues received the highest number of citations. Four PRAs have not yet been cited [22,23,24,25]. Figure 2 presents an overview of the citation count for each PRA.

Fig. 2 Citations per paediatric research agenda

Step 3. Impact analysis

Cumulatively, the 31 agendas were cited 517 times, and 131 of the citing studies (25%) were new studies based on at least one of the research priorities from the PRAs, ranging from 0 to 23 new studies per PRA, with a mean of 4 new studies per PRA. The PRA by Hollis and colleagues attracted the most new studies [26]. Eight PRAs, however, did not yield any new studies [13, 22,23,24,25, 27,28,29], as shown in Fig. 3.

Fig. 3 New studies per paediatric research agenda

Step 4. Author assessment

Sixty studies (46%) were developed by at least one of the first, second, or last authors of the PRA on which the study was based. The other 71 studies (54%) were developed by researchers who had not authored the PRA (Fig. 4 and Supplemental Material 2). It is apparent from Fig. 4 that some research teams, such as those of Baldacchino [30], Lam [31], Medlow [32], Birnie [14], Peeks [33], Aldiss [34], Parsons [35], Layton [36], and Batchelor [37], developed most of the new studies themselves, based on their own PRAs. For Peeks and colleagues, 10 of the 12 new publications included an original author of the agenda. Batchelor and colleagues’ agenda resulted in 18 new studies, 16 of which included an original author of the agenda; thus, these authors developed most of the new studies themselves (89%). The agendas whose authors had not elaborated on the research priorities themselves, as was the case for eight of the agendas, resulted in an average of two new studies. The PRA developed by Hollis and colleagues was the most successful and resulted in 23 new studies, only one of which was published by the agenda’s own authors (6%).

Fig. 4 Paediatric research agendas authored by the same authors versus other authors

Step 5. Classification of ease of tracing

Of the 131 studies analysed (Supplementary Material 2), we classified 44 studies (34%) as easy to trace, meaning that the article directly quoted the research priority from the agenda it was addressing. We classified 62 studies (47%) as medium, indicating that even though the research priority was not directly stated in the article, the priority being addressed was clear to us from the aim and research question of the study. We classified the remaining 25 studies (19%) as difficult, meaning that although the authors stated that their study followed up on research deemed most important by an agenda, it was not clear to us which of the research priorities the study focused on.

Discussion

To the best of our knowledge, this is the first study to evaluate the academic impact of multiple PRAs. To achieve this, we developed the DAIAPRA, a five-step tool. The first step identifies the PRAs, the following three steps evaluate their academic impact, and the last step classifies the ease of tracing whether a study addresses a research priority of the PRA. Using this tool, we found that citations ranged from 0 to 71 per PRA and that new studies based on a PRA ranged from 0 to 23 per PRA. Furthermore, 46% of the new studies were developed by at least one of the first, second, or last authors of the PRA on which the study was based. Finally, only 34% of the new studies explicitly stated the research priority they focused on, meaning the priority was easy to trace in only those cases.

We found that the number of new studies based on a PRA varied between 0 and 23. A factor that might have influenced this wide range is that we included PRAs published from 2010 up to and including 2022; older PRAs may simply have had more time to create impact, whereas newer agendas still need time to do so. Another possible influence is that several new agendas were published during the COVID-19 pandemic. Raynaud and colleagues showed a dramatic increase in COVID-19 publications and a substantial decrease in non-COVID-19 research during that time [51]. Another element that might play a significant role is whether the agenda received advance funding for elaborating on the research priorities. If so, researchers could start elaborating on the research priorities immediately, instead of first having to find appropriate research funders. Moreover, some funding programmes set their priorities for research and then advertise for research teams to conduct the research [52], which might have resulted in certain agendas being studied more frequently. Another aspect that caught our attention is that eight of the ten PRAs leading to the most new studies were developed in collaboration with the JLA, suggesting that partnerships with established organisations such as the JLA can greatly amplify the impact and reach of PRAs. A further factor that could have played a role is that some researchers were unaware of, and thus paid no attention to, the dissemination and implementation of their PRAs [53]. Because authors of a PRA can provide valuable information and examples of the impact of their own work that would otherwise be unavailable, we interviewed researchers and CYP who had designed a PRA together about the academic impact of their PRAs, and found that researchers were hardly aware of new studies based on their agendas [11]. This lack of awareness can be addressed by emphasising the importance of disseminating and implementing the PRAs; such awareness of and emphasis on implementation might enhance their academic impact. The JLA guidebook was updated in 2021, and Chapters 9, 10, and 11 deal with the dissemination and publication of the research agenda, prioritising research funders, and long-term follow-up [52]. Concentrating on the phase following the PRAs’ design might already create the awareness that researchers need to take responsibility for encouraging the research and funding communities to address the research priorities.

Another strong argument in favour of prioritising the implementation of the research agenda is to ensure continuous and transparent communication with the CYP involved. Keeping CYP updated on the progress of the research priorities is essential: it shows them that their input is valued and has contributed to meaningful changes [54]. Mawn and colleagues found that researchers can be criticised for failing to engage or update CYP as research progresses [55]. If nothing is done, CYP may lose trust and get the impression that what concerns them is unimportant, or not as important or valued by others in a position to fund research [53].

Interestingly, almost half of the new studies based on the PRAs were developed by the first, second, or last author of the PRA. Researchers who design a PRA can use the agenda as a roadmap for their work, helping them identify research questions and develop studies that are likely to contribute to the broader goals of their field [56]. Our study opens the door to discussion about whether the academic impact of a PRA is achieved when a substantial portion of the new studies based on it are authored by the same researchers who developed the agenda. The primary aim of a PRA is to change the broader context of research, and it can be questioned whether this aim is achieved when nearly half of the new studies are published by the same researchers. We believe that an important distinction should be made when evaluating the academic impact of PRAs: the expected academic impact depends on whether the agenda was designed by an entire research field or by a specific research team. When all key researchers are involved in the design of a PRA, it is to be expected that they are the ones who elaborate on the priorities together with their research teams. This approach is particularly fitting for a research agenda designed within a niche. However, when a research agenda is designed for a broad subject such as diabetes, it is practically impossible to include all key stakeholders, and one might therefore expect the research priorities of that agenda to be elaborated on by researchers other than those who designed it. To date, it is impossible to determine from a published PRA whether it was designed by an entire research field or by a specific research team. Consequently, placing the academic impact of a research agenda in context becomes more challenging.

If researchers who were not involved in designing a PRA continue to address research questions that they consider important instead of focusing on the priorities of the agenda, we question whether designing a PRA has the intended impact of changing the broader context of research.

Evaluation of the DAIAPRA

We developed the DAIAPRA because no reliable method was available that described the academic impact of PRAs. Our method focuses on quantifiable aspects of impact, such as citations, new studies based on the PRA, and the difference in authorship between the PRA and the new studies. We acknowledge that the method does not consider all potential factors that may contribute to the impact of a PRA, such as conference presentations on the PRAs, or receiving funding for the priorities. It should be noted that the method is still in its infancy. Additional metrics could be incorporated to provide a more complete understanding of academic impact.

Limitations of this evaluation

While we managed to evaluate the academic impact of 31 PRAs, our study has several limitations. First, the academic impact of the PRAs could not be compared directly because they were published in different years and involved different research areas. Furthermore, it is important to recognise that our study offers an initial perspective on the academic impact of PRAs: in this first evaluation, we included quantifiable aspects of impact only. The qualitative forms of impact that our method does not capture (such as improved collaboration, influence on policy and practice, or increased public understanding of research) are nonetheless crucial for understanding the full impact of a research agenda. We therefore acknowledge that potentially unknown positive or negative impacts were missed. Our goal was not to classify the agendas according to their levels of impact, but rather to present a comparative analysis of their respective academic impacts. The original inclusion criterion was limited to PRAs involving CYP below the age of eighteen, but this initial search delivered a limited number of results. Five more studies were therefore included: two PRAs in which the age of the CYP was not specified, and three in which the CYP were below 20 or 25 years of age. Currently, it is challenging to examine research agendas involving only CYP under the age of 18, as the ages of the CYP are not always clearly described in the PRAs. Furthermore, we acknowledge that by evaluating the English-language literature only, we may have excluded valuable research published in other languages. However, given that English is the predominant language of academic communication and our focus was on studies published in recognised academic databases, we prioritised English-language publications; consequently, important work in non-English or non-academic sources may have been overlooked. The objective of this study was to initiate discussion and create awareness about the academic impact of research agendas. We did not aim to classify research agendas in terms of ‘good’ or ‘bad’ academic impact, because the extent to which academic impact is achieved depends on many factors, all of which must be considered when comparing the agendas; this makes comparing the PRAs challenging.

Implications and future research

The DAIAPRA can be used by other researchers to evaluate the academic impact of research agendas. The findings of our study once again indicate the importance of the post-PRA phase: a research project does not end when the top 10 research priorities have been agreed upon. Researchers should disseminate the results of their PRAs to increase exposure to potential funders and researchers. In addition, our results showed that it is difficult to determine whether a research priority has been elaborated on. This argues in favour of establishing a system that makes it possible to trace whether a study is based on one of the priorities of a research agenda. For example, researchers could include a statement in the PRA specifying how the listed priorities should be cited or referenced when one of them is elaborated on. Providing a publicly accessible overview, for each research agenda, of which priorities are addressed by whom and in which country could contribute to more transparency; such an overview would also systematically highlight the remaining priorities that require further investigation. Our results indicated that it is challenging to contextualise the academic impact of a PRA because it is unclear who designed it. Therefore, we suggest adding a statement indicating whether the agenda was designed by all key stakeholders and researchers in the field, or by a specific research team.

Future studies should focus on why some PRAs generated more new studies than others, to guide researchers in creating academic impact. We focused only on the academic impact of the PRAs; future research should also address their policy and societal impact. This is important because, alarmingly, an estimated 85% of medical research evidence never finds its way into clinical practice [5].

Conclusion

Our study contributes to the development of a methodology to evaluate the academic impact of PRAs and provides initial insight into the academic impact of 31 PRAs. Our findings could be used to inform future PRA design, especially by incorporating provisions for tracing the academic impact of new studies related to the research priorities outlined in the agenda. Overall, our study provides a valuable foundation for further research into the evaluation of academic impact in the field of paediatric research.