Background

Pilot/feasibility studies (also known as proof-of-concept, exploratory, preliminary, evidentiary, or vanguard studies) play an essential role in the process of conducting larger-scale clinical trials by providing information about the potential efficacy and feasibility of an intervention [1] and addressing uncertainties around conducting a larger-scale study [2,3,4,5,6,7]. This is evidenced by the importance that funding agencies such as the National Institutes of Health (NIH) and the Medical Research Council (MRC) place on conducting pilot/feasibility studies, with multiple mechanisms (e.g., R34, R01 small pilot studies from the NIDDK, R21, R03, National Prevention Research Initiative) providing financial support for early-stage, preliminary studies.

Key information collected during pilot/feasibility studies includes trial feasibility (can we recruit and retain the target population?), intervention feasibility (can we deliver the intervention, and do participants like it?), and preliminary efficacy (does the intervention show a preliminary signal of promise?) [8,9,10]. Of these, trial- and intervention-feasibility metrics have garnered heightened attention over the past decade. Multiple reporting frameworks [10,11,12,13,14,15,16] emphasize that trial- and intervention-feasibility indicators are essential for determining whether a trial can be successfully conducted, whether changes to the design and/or implementation are warranted, and whether participant retention is high enough for individuals to receive a sufficient dosage of the intervention to improve outcomes. Funding agencies like the NIH and the MRC make clear that the role of pilot studies is to assess whether an intervention can be delivered, and such information is essential to deciding whether to proceed with a larger-scale trial of an intervention [1, 17].

Undertaking studies that gather information on trial and intervention feasibility creates a foundation for optimizing and successfully scaling up to larger-scale trials [11, 12, 18]. In the behavioral sciences, where obesity-related interventions are often complex, delivered across multiple levels/settings, and built on a range of behavior change techniques, collecting and reporting key aspects of feasibility during initial testing is of heightened importance. Previous research on behavioral interventions has shown that increasing complexity decreases understanding of how the intervention operates and increases the difficulty of delivering the intervention as intended [19]. By focusing on feasibility during the preliminary stages of obesity-related behavioral intervention development (pilot/feasibility studies), researchers reduce the risk of designing interventions that fail at scale due to a lack of understanding about effective design and/or implementation [18, 20].

Reporting guidelines, frameworks, and translational science models advocate for the conduct of high-quality early-stage pilot/feasibility studies as an important step in developing maximally potent and implementable prevention and treatment interventions, many of which are obesity-related [21,22,23,24]. Comprehensive pilot/feasibility studies that report key information (trial and intervention feasibility) provide the best evidence for decision making when scaling up to a larger trial. To date, no review has examined the reporting of feasibility indicators within obesity-related behavioral intervention pilot/feasibility studies. Questions remain as to how the field of obesity-related intervention science reports feasibility indicators from pilot/feasibility studies and to what extent the focus on feasibility indicators has evolved over time. Understanding how the field has historically utilized feasibility indicators, and how the design and conduct of pilot/feasibility studies has changed, is important for optimizing preliminary behavioral interventions. The aims of this study, therefore, are to (1) conduct a historical scoping review of the reporting of feasibility indicators in obesity-related behavioral pilot/feasibility studies published up to and including 2020, and (2) describe trends in the amount and type of feasibility indicators reported across three time periods that span four decades, from 1982 to 2020.

Methods

This scoping review was conducted and is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) guidelines [25].

Search strategy

A systematic literature search was conducted in four online databases (PubMed/Medline, Embase/Elsevier, EBSCOhost, and Web of Science) in September 2021. A combination of Medical Subject Headings (MeSH), EMTREE, and free-text terms with Boolean operators, as appropriate for each database, was used to identify eligible publications. Each search included one or more of the following terms to identify pilot studies (pilot, feasibility, preliminary, proof-of-concept, vanguard) and the following terms to identify obesity-related behavioral interventions (obesity, overweight, physical activity, fitness, exercise, diet, nutrition, sedentary, or screen). The following additional filters were applied where available: English language, human species, articles only, and peer-reviewed journals.

Eligibility criteria

Published pilot studies that employed a behavioral intervention strategy on a topic related to obesity were considered for inclusion in this scoping review. Behavioral interventions were defined as interventions that target actions leading to improvements in health indicators [26, 27], as distinct from mechanistic, laboratory, pharmacological, feeding/dietary supplementation, and medical device or surgical procedure studies. Pilot studies were defined as studies conducted separately from, and prior to, a large-scale trial that are designed to test the feasibility of an intervention and/or provide evidence of preliminary effects before scale-up [3, 21, 28]. Exclusion criteria were articles that only described the development of a pilot study (protocols), studies that employed a non-behavioral intervention strategy (as described above), studies that did not deliver an intervention to participants (observational/cross-sectional), qualitative studies, and dissertations.

Sampling strategy

Given the broad nature of this review's search strategy and inclusion criteria, an a priori decision was made to use a multi-stage sampling procedure to select a random sample of studies from three distinct time periods for comparison. Published studies were unevenly distributed across time, with most (83.2%) published between 2011 and 2020; a simple random sample would therefore have been unlikely to capture enough articles to sufficiently represent those published in earlier years. Starting with the earliest published citation, titles/abstracts and full texts were screened in chronological order by year of publication until 200 studies were identified that met the inclusion criteria. This resulted in a collection of studies spanning 1982 to 2006; 1982 was the starting point simply because the databases returned no relevant records published earlier. The two additional time periods were set at 2011-2013 and 2018-2020, leaving an equal 5-year gap between successive periods. Studies published between 2011 and 2013 and between 2018 and 2020 were assigned random numbers with STATA's "rannum" command and screened in order of randomization until an additional 200 studies meeting the inclusion criteria were identified from each group [29].
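
The selection logic can be pictured with a short sketch. This is illustrative only (the original randomization was done in Stata): the file name, column names, and the `is_eligible` stub stand in for the citation export and the manual screening decisions.

```python
import pandas as pd

def is_eligible(record) -> bool:
    # Stand-in for the manual title/abstract and full-text screening
    # decision, which was a human judgment in the actual review.
    return True

# Hypothetical citation export with one row per record.
citations = pd.read_csv("citations.csv")  # assumed columns: id, title, year

# Early period: screen chronologically from the earliest record until
# 200 eligible studies are found (this pool ended up spanning 1982-2006).
early_pool = citations[citations["year"] <= 2006].sort_values("year")

# Middle and late periods: shuffle each pool into a random order, then
# screen in that order until 200 eligible studies are found per pool.
middle_pool = citations[citations["year"].between(2011, 2013)].sample(frac=1, random_state=1)
late_pool = citations[citations["year"].between(2018, 2020)].sample(frac=1, random_state=2)

def screen_until(pool: pd.DataFrame, target: int = 200) -> pd.DataFrame:
    """Walk the pool in order, keeping eligible studies until the target is
    reached; records beyond that point are never screened."""
    selected = []
    for _, record in pool.iterrows():
        if is_eligible(record):
            selected.append(record)
        if len(selected) == target:
            break
    return pd.DataFrame(selected)

samples = {name: screen_until(pool)
           for name, pool in [("early", early_pool),
                              ("middle", middle_pool),
                              ("late", late_pool)]}
```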

Power analysis

The final sample size of 600 (200 per time period) was based on detecting a minimal difference between the three periods in the probability (binary presence/absence) of reporting a feasibility indicator. With an alpha of 0.05, the power analysis indicated that a logistic regression, with the binary feasibility indicator as the dependent variable and two dummy variables representing two of the time period categories as independent variables, could detect a minimal odds ratio of 1.35. This corresponds to a difference in the reporting of a feasibility indicator between 20% of articles and 26% of articles.
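
A calculation of this kind can be approximated with a standard two-proportion power computation. The power level and exact test behind the published figure are not stated above, so the sketch below simply shows, under assumed conventions (two-sided alpha = 0.05, two independent groups of 200 studies), how an odds ratio of 1.35 maps onto proportions and how the corresponding power is obtained.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_ref = 0.20                        # assumed reference-period proportion
odds_ratio = 1.35                   # minimal detectable odds ratio quoted above
odds_alt = odds_ratio * p_ref / (1 - p_ref)
p_alt = odds_alt / (1 + odds_alt)   # ~0.252, close to the 26% quoted above

# Cohen's h effect size for two proportions, then power for two groups
# of 200 studies each at a two-sided alpha of 0.05.
effect = proportion_effectsize(p_alt, p_ref)
power = NormalIndPower().power(effect_size=effect, nobs1=200, alpha=0.05, ratio=1.0)
print(f"implied proportion under OR={odds_ratio}: {p_alt:.3f}")
print(f"approximate power with n=200 per group: {power:.2f}")
```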

Screening process

Database search results were exported as RIS files and uploaded to Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia) for review. Duplicate references were identified as the RIS files were uploaded to Covidence and were screened out of the review process. Title and abstract screening was completed in duplicate by two reviewers (CDP and MWB) to identify references that met the eligibility criteria. Disagreements were resolved by having a third member of the research team (LV) review the reference and make a final decision. Full-text PDFs were retrieved for references that passed the initial title and abstract screening and were reviewed in duplicate by three members of the research team (CDP, MWB, and LV).
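
Covidence performs the deduplication internally; purely as an illustration of the idea, a minimal title/year-based deduplication pass might look like the following (the record schema is hypothetical).

```python
import re

def normalize_title(title: str) -> str:
    # Lowercase and collapse punctuation/whitespace so trivially different
    # renderings of the same citation compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first occurrence of each normalized (title, year) pair.
    `records` is assumed to be a list of dicts with 'title' and 'year' keys."""
    seen, unique = set(), []
    for rec in records:
        key = (normalize_title(rec["title"]), rec.get("year"))
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Example: the second record is a near-duplicate and is dropped.
refs = [{"title": "A pilot study of X.", "year": 2019},
        {"title": "A Pilot Study of X", "year": 2019}]
assert len(deduplicate(refs)) == 1
```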

Data extraction and coding

Study- and participant-level descriptive characteristics

Relevant study-level and participant-level descriptive characteristics were extracted from included studies by five members of the research team and coded in an Excel spreadsheet. These included characteristics such as study location, publication year, design, treatment length, sample size, age and sex of participants, intervention setting, and behaviors targeted by the intervention. Because of the large amount of data extracted, it was not feasible to doubly extract and code every individual study. Instead, the lead author (CDP), another member of the research team (LV), and three research assistants doubly coded a training set of studies until 100% consistency was reached. At that point, individual studies were assigned to members of the research team for a single round of data extraction and coding.

Feasibility indicators

Table 1 provides the operational definitions of each feasibility indicator identified for this review and the outcomes extracted and/or calculated. The seven feasibility indicators, chosen a priori after a review of reporting guidelines/frameworks related to preliminary studies [10,11,12, 17, 30], comprised two indicators of trial feasibility (recruitment capability and retention) and five indicators of intervention feasibility (participant acceptability, attendance, compliance, cost, and treatment fidelity). Definitions for each indicator were adapted from the NIH [17] and other peer-reviewed sources [11, 12, 30]. Because the terms "feasibility" and "acceptability" are sometimes used synonymously [7], it was important to define individual feasibility indicators a priori. Thus, while the term "acceptability" may have been used by study authors to describe other aspects of feasibility (recruitment, retention, etc.), if those aspects were not related to participant acceptability (enjoyment, satisfaction, tolerability, safety, etc.), they were not coded as acceptability-related indicators but were instead coded according to our definitions of the feasibility indicators.

Table 1 Operational definitions of trial- and intervention-related feasibility indicators

Data extracted for each feasibility indicator included descriptions of the types of feasibility indicators being measured and any reported quantitative outcomes related to those measures. The presence of any qualitative feasibility indicators was also cataloged, including participant interviews, open-ended survey questions about intervention acceptability, and mixed methodology related to participant acceptability. Each of these variables was classified as either "present" or "absent." Other variables of interest included the reporting of any funding, mention of feasibility-related parameters in the purpose statement, citation of any guidelines or frameworks related to the reporting of preliminary studies, and the reporting/conduct of any statistical tests of preliminary efficacy. When a study cited a separate process evaluation or other publication related to the feasibility of the original pilot study, the cited publication was also searched for reported feasibility indicators.
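
As a concrete picture of the resulting dataset, each study's coding can be represented as a flat record of present/absent flags. This is only a sketch; the field names and the indicator-count rule below are illustrative, not the authors' actual codebook.

```python
from dataclasses import dataclass

@dataclass
class StudyCoding:
    """One row of the extraction sheet (illustrative field names)."""
    study_id: str
    recruitment: bool = False        # trial feasibility
    retention: bool = False
    acceptability: bool = False      # intervention feasibility
    attendance: bool = False
    compliance: bool = False
    cost: bool = False
    fidelity: bool = False
    funded: bool = False             # other variables of interest
    feasibility_in_purpose: bool = False
    framework_cited: bool = False

    def indicator_count(self) -> int:
        # Count of the seven feasibility indicators reported; a count like
        # this serves as the outcome in the Poisson models described below.
        return sum([self.recruitment, self.retention, self.acceptability,
                    self.attendance, self.compliance, self.cost,
                    self.fidelity])
```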

Feasibility indicators and the other variables of interest were identified in the included pilot studies using a combination of text mining [32] and manual search procedures. Text mining was conducted in NVivo 12 Plus qualitative data analysis software (QSR International, 2021) by three members of the research team and consisted of full-text searches with keywords related to feasibility outcomes. The keyword list was created manually by training on a randomized sample of 50 pilot study articles and scanning full texts for feasibility-related keywords until saturation was achieved. Once the presence of feasibility outcomes had been detected in all pilot studies via text mining, manual extraction of specific feasibility outcome-related information was completed by three members of the research team. Full-text PDFs were also searched manually to ensure the text mining procedures identified all reported feasibility indicators.
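
The keyword-flagging step can be sketched in a few lines of Python. The keyword map below is hypothetical (the review's actual list was built by hand from the 50-article training sample and is not reproduced here), and the flags feed the manual extraction step rather than replacing it.

```python
# Hypothetical keyword map, one entry per feasibility indicator.
FEASIBILITY_KEYWORDS = {
    "recruitment": ["recruit", "enroll", "eligib"],
    "retention": ["retention", "attrition", "dropout", "drop-out"],
    "acceptability": ["acceptab", "satisf", "enjoy", "tolerab"],
    "attendance": ["attendance", "attended"],
    "compliance": ["complian", "adheren"],
    "cost": ["cost per", "intervention cost"],
    "fidelity": ["fidelity", "delivered as intended"],
}

def flag_indicators(full_text: str) -> dict[str, bool]:
    """Return a present/absent flag per indicator based on keyword hits,
    mimicking the NVivo full-text searches described above."""
    text = full_text.lower()
    return {indicator: any(kw in text for kw in keywords)
            for indicator, keywords in FEASIBILITY_KEYWORDS.items()}

# Example
print(flag_indicators("Retention was 85% and sessions were well attended."))
```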

Statistical analysis

Descriptive statistics were compared between the three time periods using Kruskal-Wallis tests and chi-square tests as appropriate. These tests were also conducted to ensure the random sampling procedure used to select studies did not produce systematic differences between groups. A series of univariate logistic regression models was used to assess changes in the reporting of feasibility indicators across time. The presence or absence of each feasibility indicator was treated as the binary dependent variable, while two dummy variables representing the 2011-2013 and 2018-2020 time periods (with 1982-2006 as the reference category) were the independent variables. Predicted marginal probabilities were also calculated with STATA's "margins" command to determine the probability of reporting each feasibility indicator in each time period, holding all other variables at their means. Finally, hierarchical Poisson regression models were used to assess the associations between the quantity of feasibility indicators reported and reporting funding, mentioning feasibility-related parameters in the purpose statement, and citing any guidelines/frameworks related to the reporting of preliminary studies. The presence or absence of funding, a feasibility-related purpose statement, and a cited guideline/framework were treated as binary independent variables, while the number of feasibility indicators reported was the dependent variable. A hierarchical predictor-entry method was used to examine the independent associations of funding, purpose statement, and guideline/framework citation (Model 1) and then to statistically control for time period (Model 2). An alpha level of p < 0.05 was considered suggestive and p < 0.005 was considered formally statistically significant. Analyses were carried out using the Stata v17.0 statistical software package (StataCorp, College Station, TX, USA).
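
For readers who want to reproduce the modeling structure, the sketch below shows an equivalent setup in Python with statsmodels (the analyses above were run in Stata; the file and column names here are assumptions, with one row per coded study).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: 'period' takes the values "1982-2006", "2011-2013",
# "2018-2020"; 'recruitment' is a 0/1 indicator; 'funded', 'purpose', and
# 'framework' are 0/1 covariates; 'n_indicators' is a count per study.
df = pd.read_csv("coded_studies.csv")

# Univariate logistic regression with 1982-2006 as the reference category.
logit = smf.logit(
    "recruitment ~ C(period, Treatment(reference='1982-2006'))", data=df
).fit()
print(logit.summary())

# Predicted probability of reporting the indicator in each period, the
# analogue of Stata's `margins` for a model with a single factor.
grid = pd.DataFrame({"period": ["1982-2006", "2011-2013", "2018-2020"]})
print(logit.predict(grid))

# Hierarchical Poisson models for the count of reported indicators:
# Model 1 enters the three binary predictors; Model 2 adds time period.
m1 = smf.poisson("n_indicators ~ funded + purpose + framework", data=df).fit()
m2 = smf.poisson(
    "n_indicators ~ funded + purpose + framework "
    "+ C(period, Treatment(reference='1982-2006'))", data=df
).fit()
print(m1.summary())
print(m2.summary())
```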

Results

Search results

Figure 1 displays the PRISMA flow diagram, which summarizes the screening process. A total of 51,638 citations were identified across databases. After duplicates were removed, 16,365 citations remained, of which 6873 were screened. Due to the multi-stage randomization procedure for selecting studies (see the "Sampling strategy" section), 9492 of the 16,365 articles were never screened. Full-text screening continued until 200 eligible articles were obtained from each time period, resulting in 600 articles included in this review. For the 1982-2006 time period, 310 full-text articles were screened, while 350 and 248 full-text articles were screened for the 2011-2013 and 2018-2020 time periods, respectively. The supplementary file contains a reference list of all included studies.

Fig. 1

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram

Descriptive characteristics of included studies

Study- and participant-level descriptive characteristics are reported in Table 2, both for all studies combined and separately for each time period. Most studies were conducted in North America (n=399, 66.5%), were randomized controlled trials (n=299, 49.8%), had two arms (n=326, 54.3%), measured outcomes at two timepoints (n=428, 71.3%), had adult participants (n=427, 71.7%), and included both male and female participants (n=434, 72.3%). The median treatment length was 12 weeks (IQR = 8-26 weeks), and the median baseline sample size was 48 participants (IQR = 28-89 participants). The mean age of youth participants was 10.9 ± 3.5 years, and the mean age of adult participants was 48.6 ± 15.1 years.

Table 2 Characteristics of included studies (N=600)

Many studies were conducted in clinic (n=154, 25.7%) or community settings (e.g., YMCAs, Boys and Girls Clubs, parks and recreation facilities, free-living environments; n=122, 20.3%), although settings such as remote delivery, K-12 schools, homes, workplaces, universities, care facilities, churches, and prisons were also represented. Physical activity (n=465, 77.5%) and nutrition (n=347, 57.8%) were the most common target behaviors, with many studies targeting both (n=239, 39.8%).

Purpose statements, framework usage, funding, and preliminary efficacy

A total of 357 (59.5%) studies mentioned feasibility-related parameters in the purpose statement, 58 (9.7%) cited a guideline/framework for the reporting of preliminary studies, and 402 (67.0%) reported a funding source of any kind. The most commonly cited guidelines/frameworks were the Medical Research Council guidance [1] (n=17, 29.3%), the CONSORT extension for pilot and feasibility studies [10] (n=15, 25.9%), and the RE-AIM framework [33] (n=12, 20.7%). The most commonly cited funding sources were the NIH (n=142, 35.3%) and foundation/center grants (n=118, 29.4%). The only significant difference in study characteristics between the three time periods was in the reporting of funding, which increased across successive time periods, χ2(2, N=600) = 14.5, p<0.001. Statistical analyses related to preliminary efficacy were conducted/reported in 561 (93.5%) studies. Of these, 371 (66.1%) made statements about preliminary efficacy in the conclusion, and of those, 298 (80.3%) made positive statements about the preliminary efficacy of the intervention.

Reporting of feasibility indicators

Table 3 provides the number and percentage of articles reporting each feasibility indicator for the total sample and across time periods.

Table 3 Presence or absence of the reporting of feasibility indicators across time

Trial-related feasibility indicators

For the total sample, 428 (71.3%) studies provided the information necessary to calculate recruitment rates and 595 (99.2%) provided the information necessary to calculate retention rates. The mean recruitment rate was 69.6 ± 29.1% (median = 76.3%, IQR = 48.8-95.4%) and the mean retention rate was 83.6 ± 18.8% (median = 89.7%, IQR = 75-100%).
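
As a worked example of how such rates are derived (the counts below are illustrative; Table 1 gives the review's exact operational definitions): a study that identifies 120 eligible people, enrolls 80, and retains 64 at the final assessment yields the following.

```python
eligible, enrolled, completed = 120, 80, 64

recruitment_rate = enrolled / eligible   # 80/120 = 66.7%
retention_rate = completed / enrolled    # 64/80  = 80.0%
print(f"recruitment: {recruitment_rate:.1%}, retention: {retention_rate:.1%}")
```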

Intervention-related feasibility indicators

For the total sample, 192 (32.0%) studies included a description of a quantitative measure of acceptability; 219 (36.5%) reported a quantitative outcome related to acceptability; 143 (23.8%) included a description of a qualitative measure of acceptability; 157 (26.2%) reported a qualitative outcome related to acceptability; 109 (18.2%) included a description of how intervention attendance rates were captured; 199 (33.2%) reported intervention attendance; 162 (27.0%) included a description of how intervention compliance was measured; 187 (31.2%) reported intervention compliance; 23 (3.8%) provided information about the monetary cost of the intervention; 109 (18.2%) provided a description of how treatment fidelity was assessed; and 85 (14.2%) reported outcomes related to treatment fidelity.

Reporting of feasibility indicators across time

Results from the univariate logistic regression models for reporting feasibility indicators across time are presented in Table 4. Compared to the Early Group (1982-2006), studies in the Late Group (2018-2020) tended to be more likely to report recruitment data (OR=1.60, 95%CI 1.03-2.49), and they were significantly more likely to report descriptions of quantitative measures (OR=3.05, 95%CI 1.97-4.72) and qualitative measures (OR=3.32, 95%CI 2.04-5.40) of acceptability, acceptability-related quantitative (OR=2.68, 95%CI 1.76-4.08) and qualitative (OR=2.32, 95%CI 1.48-3.65) outcomes, descriptions of compliance measures (OR=2.04, 95%CI 1.30-3.20) and compliance outcomes (OR=2.29, 95%CI 1.49-3.52), and descriptions of fidelity-related measures (OR=2.56, 95%CI 1.48-4.42) and fidelity outcomes (OR=2.13, 95%CI 1.21-3.77). Late Group studies were also significantly more likely to mention feasibility-related parameters in the purpose statement (OR=2.39, 95%CI 1.58-3.61) and to cite a guideline or framework related to the reporting of preliminary studies (OR=28.7, 95%CI 6.87-120.3). Marginal predicted probabilities for the reporting of feasibility indicators are presented in Table 5. For each successive time period, the probability of reporting feasibility indicators significantly increased for all indicators except attendance.

Table 4 Summary of univariate logistic regression analysis for reporting feasibility indicators in included studies across time (N=600)
Table 5 Predicted probabilities for reporting feasibility indicators across time

Reporting of feasibility indicators and purpose statements, framework usage, and funding

Results from the multivariate Poisson regression models for the number of reported feasibility indicators are presented in Table 6. Reporting funding, mentioning feasibility-related indicators in the purpose statement, and citing guidelines/frameworks for the reporting of preliminary studies were all significantly and positively associated with the number of feasibility indicators reported. These relationships held after controlling for time period.

Table 6 Parameter estimates from Poisson regression models predicting the number of feasibility indicators reported in pilot and feasibility studies

Discussion

This was a historical scoping review of the reporting of feasibility indicators in a large sample of obesity-related behavioral pilot/feasibility studies published between 1982 and 2020. We describe trends in the amount and type of feasibility indicators reported across three time periods, evaluating 200 studies from each: 1982-2006, 2011-2013, and 2018-2020. Reporting improved over time for most feasibility indicators; however, rates of reporting remain modest, even in the latest group of studies (2018-2020). The majority of obesity-related behavioral pilot studies reported three or fewer feasibility outcomes, most commonly recruitment and retention, while almost all studies conducted/reported statistical analyses related to preliminary efficacy.

The primary finding of this study was the suboptimal rate of reporting key feasibility indicators within obesity-related behavioral pilot/feasibility studies. While trial-related feasibility (recruitment and/or retention) was reported in the majority of studies, key intervention-related feasibility indicators, including participant acceptability, adherence, attendance, and intervention fidelity, were not widely reported. These results align with several reviews of pilot/feasibility studies in other domains [34, 35], which likewise found incomplete reporting of trial- and intervention-related feasibility indicators. While recruitment and retention are important trial-related feasibility indicators to capture, intervention-related feasibility indicators are equally important to assess during the preliminary phases of implementation. For example, participants' perceptions of programs (acceptability) are associated with rates of attrition [36], intervention attendance is positively associated with obesity-related health outcomes [37, 38], and implementation fidelity during a pilot/feasibility study is associated with obesity-related main outcomes in scaled-up trials [39,40,41,42] and has been shown to moderate the association between participant acceptability and behavioral outcomes [43].

The lack of reported feasibility indicators, coupled with the high rate of statistical testing for preliminary efficacy, is also concerning, although this appears to be common across domains. For example, in a review of the nursing intervention-feasibility literature, Mailhot et al. [34] found that almost half of the included feasibility studies focused exclusively on testing effectiveness. While preliminary efficacy can be reported in pilot/feasibility studies, such results should be interpreted with caution, and outcomes related to feasibility should take priority.

Results from our study suggest that this is largely not the case in the behavioral sciences, and the reasons why remain unclear. It may be that intervention funders are invested in the outcome data; that is, agencies that fund preliminary studies might want some evidence that the intervention will have a beneficial impact (regardless of its precision) before continuing to invest considerable time and money in a large, definitive trial. Several published guidelines, checklists, frameworks, and recommendations for pilot/feasibility studies exist [2, 5, 7, 10, 11, 44,45,46], many of which argue against a focus on statistical testing for preliminary efficacy. However, pilot/feasibility studies have only recently garnered attention from larger agencies. For example, the CONSORT extension for randomized pilot and feasibility trials [10] was published in 2016, and the majority of the other literature used to guide pilot/feasibility studies has also been published within the last decade. For the pilot/feasibility studies included in this review, the most commonly cited guidelines/frameworks were the Medical Research Council guidance [1], the CONSORT extension for pilot and feasibility studies [10], and the RE-AIM framework [33]. Guidelines used less often included Bowen et al. [12], Thabane et al. [5], and Arain et al. [28]. Researchers conducting obesity-related preliminary studies today are encouraged to use the available literature to design high-quality preliminary interventions that can provide rich data to support successful scaling up to a larger trial.

While our review highlights some concerns for obesity-related behavioral pilot/feasibility studies, there were also encouraging findings. Studies conducted between 2018 and 2020 had higher odds of reporting most feasibility indicators compared to studies published between 1982 and 2006. While reporting was still only modest in the later studies, the results show that improvements are occurring among behavioral pilot/feasibility studies. This may coincide with recent initiatives in the field, including the publication of several frameworks, guidelines, and recommendations related to pilot/feasibility studies [2, 5, 7, 10,11,12, 18, 45, 46]. Our results demonstrate that the reporting of feasibility indicators was positively associated with citing a guideline/framework for the reporting of preliminary studies. Researchers conducting pilot/feasibility studies should utilize these guidelines/frameworks to inform the design, conduct, and reporting of their preliminary work, as our results support the idea that they can improve the completeness of reporting. We also found that the reporting of feasibility indicators was positively associated with mentioning feasibility-related outcomes in the purpose statement of the published pilot/feasibility study. This may demonstrate the importance of stating clear objectives; alternatively, it may suggest that the authors of these papers were generally more sensitized to the need to be explicit about aspects of feasibility.

The reporting of feasibility indicators was also significantly and positively associated with a study being supported by funding of any kind. It is well established that pilot/feasibility studies play an essential role in the development of larger-scale trials, and virtually all funding agencies require evidence from these preliminary studies to justify scaling up to a larger trial. This highlights the importance of funding structures designed specifically to support the conduct of pilot/feasibility studies. Initiatives like the NIH Planning Grant program (R34) [47], the CIHR Health Research Training Platform (HRTP) Pilot Funding Opportunity [48], and the NIHR Research for Patient Benefit (RfPB) program [49] represent important steps forward for pilot/feasibility research.

Strengths and limitations

A strength of this review is the inclusion of a large sample (N=600) of obesity-related pilot/feasibility studies published across four decades. Although this was a scoping review and not every study published between 1982 and 2020 was included, we did not limit inclusion based on location, design, or health behavior topic, provided the intervention contained at least one component related to obesogenic behaviors. As such, the results should generalize to a broad audience of health behavior researchers. There were also limitations. First, we only considered health behavior interventions related to obesity; while the results may generalize to pilot/feasibility studies in the realm of health behavior, they cannot be applied to non-behavioral preliminary studies, including mechanistic, pharmacological, or rehabilitation interventions. Second, studies in the Early Group span a much greater length of time (1982-2006) than studies in the Middle (2011-2013) and Late (2018-2020) Groups, and each year is not equally represented; because of this grouping structure, comparisons between time periods, especially between the Early and Late Groups, should be interpreted with caution. This was a function of the limited number of pilot/feasibility studies published in earlier years. Third, due to the multi-stage randomization procedure used to screen studies for this scoping review, 9492 citations were never screened. It must also be noted that some feasibility indicators may not have been relevant for certain intervention designs. For example, attendance would most likely not be an applicable feasibility indicator for an mHealth intervention, whereas participant compliance might be; depending on the design and specific components of each pilot/feasibility study, it would be impossible or irrelevant for some studies to collect 100% of the feasibility indicators for which we coded. Furthermore, some feasibility indicators are difficult to code; we used both text mining and manual approaches to maximize accuracy in capturing this information, but some items may have been coded erroneously. Finally, the reporting of a study is not identical to its conduct: failing to report an aspect of feasibility does not mean the study authors did not consider it.

Conclusions

The reporting of feasibility indicators within obesity-related behavioral pilot/feasibility studies has improved over time, but key aspects of intervention-related feasibility are still not reported in the majority of studies. Aspects of intervention-related feasibility, including fidelity, play a key role in the development of larger-scale trials, alongside the widely reported trial-related feasibility indicators of recruitment and retention. Given the importance of behavioral intervention pilot/feasibility studies in the translational science spectrum, there is a need to improve the reporting of feasibility indicators. Researchers who plan to conduct a pilot/feasibility trial with the intent to scale up to a future larger trial are encouraged to use the available literature on the design, conduct, and reporting of preliminary studies to improve design and maximize the potential success of the larger-scale trial.