Introduction

Endurance athletes have been reported not to achieve optimal carbohydrate (CHO) intake for competition, despite guidelines supported by strong scientific evidence that optimal CHO intake enhances endurance performance [1,2,3,4,5,6]. In addition to laboratory-based research, real-world sports nutrition interventions have demonstrated improved performance in endurance athletes when optimal CHO practices are followed [7,8,9]. Given the strong support for optimal CHO practices in laboratory and real-world settings, one reason why athletes do not meet CHO intake guidelines may be a lack of knowledge; however, there is currently no tool to assess this quickly and systematically to inform and guide athletes’ nutrition coaching.

While increased knowledge or awareness of CHO guidelines may not necessarily translate to a change in behaviour, the level of knowledge endurance athletes possess on this topic is currently unknown [10]. By systematically assessing athletes’ knowledge of CHO requirements for competition, sports nutrition practitioners could better design, facilitate and evaluate targeted nutrition interventions that address knowledge gaps and ultimately optimise competitive performance [11]. Existing questionnaires assess the general and sports-specific nutrition knowledge of athletes [12,13,14,15]. However, none of these questionnaires focuses exclusively on current CHO guidelines to assess knowledge and help explain the relationship between knowledge of CHO guidelines and practice in competition [2]. Therefore, the aim of this study was to develop and validate a novel questionnaire to systematically and rapidly assess endurance athletes’ key knowledge of CHO requirements for optimal performance in competition.

Methods

Development of the Carbohydrate for Endurance Athletes in Competition Questionnaire (CEAC-Q)

The CEAC-Q was developed by a team of four expert sports dietitians and performance nutritionists leading research on nutrition for endurance performance and working in applied practice with amateur and professional endurance athletes ranging from club to world-class level. The questionnaire was based on the current American College of Sports Medicine (ACSM) guidelines and recent sports nutrition research findings on CHO for endurance sports [2, 16, 17]. The research team identified five topic areas of key core knowledge for CHO and competition: CHO storage and metabolism; CHO loading; CHO meal prior to an event; CHO during an event; and CHO for recovery (Fig. 1). The questionnaire was pilot-tested on sports dietitians and endurance athletes, who provided qualitative feedback on each question and subsection, to ensure that the questions reflected current CHO guidelines and that the instructions were clear. The research team reviewed the pilot results, incorporated feedback on the clarity and suitability of questions, and amended and endorsed the content and face validity of the final questionnaire.

Fig. 1

Schematic of development of the CEAC-Q

The final questionnaire (Table 1) consists of demographic questions and 25 multiple-choice questions divided into the 5 key core knowledge areas, with a total possible CEAC-Q score of 100. Each question was assigned +4 points for the correct response, +1 for a partially correct response where multiple answers were possible, and 0 for incorrect or unsure responses [11, 18]. All questions included an ‘unsure’ option to reduce the possibility of guessing and to differentiate participants with correct, incorrect or no knowledge. The questionnaire was administered online using SurveyMonkey software (https://www.surveymonkey.com, San Mateo, California, USA) in English, with question order randomised for each participant to avoid order bias [19, 20]. Participants were encouraged not to guess and could provide open-ended comments on the overall questionnaire and each individual question, giving them the opportunity to explain answers or to identify wording that needed clarification [12]. SurveyMonkey recorded time to completion; the questionnaire could be completed only once, with no time constraint. Only completed questionnaires were included for analysis.
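
To illustrate the scoring scheme described above, the sketch below shows how a set of graded responses could be scored programmatically. It is a minimal illustration only, assuming a hypothetical mapping of five questions per subsection and hypothetical response labels; it is not the authors' implementation.

```python
# Illustrative CEAC-Q scoring sketch (hypothetical data structures, not the
# authors' implementation). Each of the 25 questions is worth up to 4 points
# (total 100), with +1 for a partially correct response where multiple answers
# are possible and 0 for incorrect or 'unsure' responses.

CORRECT, PARTIAL, INCORRECT, UNSURE = "correct", "partial", "incorrect", "unsure"
POINTS = {CORRECT: 4, PARTIAL: 1, INCORRECT: 0, UNSURE: 0}

# Hypothetical mapping of the 25 questions to the five subsections (5 each).
SUBSECTIONS = {
    "CHO storage and metabolism": range(1, 6),
    "CHO loading": range(6, 11),
    "CHO meal prior to an event": range(11, 16),
    "CHO during an event": range(16, 21),
    "CHO for recovery": range(21, 26),
}

def score_ceacq(responses: dict[int, str]) -> dict:
    """Return subsection scores (max 20 each) and the total score (max 100).

    `responses` maps question number (1-25) to a graded outcome label;
    unanswered questions are treated as 'unsure' (0 points).
    """
    subsection_scores = {
        name: sum(POINTS[responses.get(q, UNSURE)] for q in questions)
        for name, questions in SUBSECTIONS.items()
    }
    return {"subsections": subsection_scores,
            "total": sum(subsection_scores.values())}

# Example: answering all 'CHO loading' questions correctly and 'unsure'
# elsewhere yields 20/100.
example = {q: CORRECT for q in SUBSECTIONS["CHO loading"]}
print(score_ceacq(example))
```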

Table 1 Carbohydrate for Endurance Athletes during Competition Questionnaire Scoring Sheet (CEAC-Q)

Construct validity assessment

To assess construct validity, three groups with a priori hypothesised differing levels of sports nutrition knowledge were recruited. In order of expected level of knowledge, these were: (1) members of the general population who did not participate in any endurance sport (GenP), (2) endurance athletes with > 12 months of training experience (EA), and (3) sports dietitians/nutritionists (SDN) who were registered members of Sports Dietitians Australia, the Sport and Exercise Nutrition Register, the British Dietetic Association Sport Nutrition Group, the International Society of Sports Nutrition, or Board Certified Specialists in Sports Dietetics with the American Academy of Sports Dietitians and Nutritionists. Participants were recruited through social media and the email lists of sport nutrition regulatory bodies. All participants were provided with the participation statement and online consent form and gave their consent to participate electronically.

Test–retest reliability

To assess test–retest reliability, the athlete group was offered the option of completing a second questionnaire 10–14 days later, based on validation protocols used in previous nutrition knowledge questionnaires for athletes [21, 22]. A period of less than 3 weeks is considered long enough for the questions to be forgotten yet short enough to minimise any change in nutrition knowledge [23]. Athletes who volunteered to complete the questionnaire a second time were contacted by email 10–14 days later with a personalised link to the test–retest questionnaire, minus the demographic questions. No formal nutrition education was provided or advised between tests. To account for any learning effect, athletes were asked additional questions regarding learning effects and perceived changes in knowledge or scores after completing all questions in the retest (Table 1). The 26 participants who believed their sports nutrition knowledge had changed provided open-ended responses explaining how and why it changed between tests. Responses were grouped into four categories: raised awareness of current knowledge gaps; self-directed learning; consultation with a coach or dietitian for advice; and rushing or selecting unsure to avoid guessing incorrectly.

Statistics and data analysis

One-way analysis of variance (ANOVA) was used to compare the total score and the five CHO subsection scores between groups, with Scheffé post hoc analysis due to unequal group sizes. Statistically significant differences in total and subsection knowledge scores between the three groups were taken as evidence of construct validity of the questionnaire [12]. Each of the five subsections was assessed separately for internal consistency, as each addressed a different area of CHO knowledge. Internal reliability for each subsection was measured against psychometric requirements, with Cronbach’s α > 0.7 indicating acceptable internal consistency [14, 24]. Differences in knowledge scores between groups were also assessed using non-parametric (Kruskal–Wallis) analysis of variance with Tukey post hoc analysis to determine which groups differed when results were significant. A Bonferroni correction was applied to the non-parametric post hoc analyses, with the p value for significance set at < 0.017 [21]. Pearson’s correlation was used to compare athletes’ nutrition knowledge scores between test and retest to provide evidence of test–retest reliability. To examine potential learning effects between test and retest, dependent-samples t tests were conducted to evidence the stability of the CEAC-Q [25]. All data were analysed using IBM SPSS (version 24) with a significance level of p = 0.05. Graphs were created in GraphPad Prism (GraphPad Software, Inc., v8, La Jolla, CA, USA).
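
For readers wishing to reproduce a similar analysis pipeline outside of SPSS, the sketch below illustrates, with simulated data rather than the study data, how the main tests described above could be run in Python with NumPy and SciPy. The Scheffé and Tukey post hoc procedures are omitted for brevity, and the Bonferroni-adjusted threshold is shown under the assumption of three pairwise comparisons.

```python
# Illustrative analysis sketch with simulated data (the study used IBM SPSS).
# Covers: group comparisons (one-way ANOVA / Kruskal-Wallis), Cronbach's alpha
# for internal consistency, Pearson's correlation for test-retest reliability,
# and a dependent-samples t test for learning effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated total scores for the three groups (GenP, EA, SDN)
genp = rng.normal(17, 20, 68)
ea = rng.normal(46, 19, 145)
sdn = rng.normal(76, 10, 60)

# Group comparisons of total scores
f_stat, p_anova = stats.f_oneway(genp, ea, sdn)   # parametric
h_stat, p_kw = stats.kruskal(genp, ea, sdn)       # non-parametric
alpha_bonferroni = 0.05 / 3                       # ~0.017 for 3 pairwise post hoc comparisons

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated item scores (0, 1 or 4 points) for one five-question subsection
items = rng.choice([0, 1, 4], size=(145, 5))
subsection_alpha = cronbach_alpha(items)

# Simulated paired test/retest scores for the retest subgroup
test = rng.normal(45, 20, 59)
retest = test + rng.normal(8.5, 13.6, 59)
r, p_r = stats.pearsonr(test, retest)             # test-retest reliability
t_stat, p_t = stats.ttest_rel(test, retest)       # learning effect
```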

Results

Participants. Of the 393 participants who commenced the questionnaire, the CEAC-Q was completed by 272 individuals (completion rate 69%), consisting of the general population (n = 68), endurance athletes (n = 145) and sports dietitians/nutritionists (n = 60) (Table 2). Sports dietitians and nutritionists were registered with the UK Sport and Exercise Nutrition Register (SENr; n = 35), Sports Dietitians Australia (n = 22) or the American Academy of Sports Dietitians and Nutritionists (n = 3), with < 1 year (n = 7), 1–5 years (n = 31) or > 5 years (n = 22) of experience.

Table 2 Demographics of endurance athletes and general population

CEAC-Q scores. There was a significant difference between the groups for total nutrition knowledge scores as determined by one-way ANOVA [F(2, 269) = 172.86, p < 0.0001] (Fig. 2A). For the total score, the GenP had the lowest score (17 ± 20, mean ± SD), followed by the EA (46 ± 19), with the highest knowledge observed in the SDN group (76 ± 10, p < 0.001; Fig. 2A). Mean subsection scores were also significantly different between the groups: GenP (3.4 ± 4.7), EA (9.2 ± 5.2) and SDN (15.2 ± 3.5, p < 0.001, Fig. 2B–F). No significant differences were observed between subsection scores within any group. EA mean scores were similar for each subsection: CHO storage and metabolism (9.2 ± 5.0); CHO loading (9.4 ± 5.9); CHO meal prior to an event (9.5 ± 5.0); CHO during an event (9.7 ± 4.8); and CHO for recovery (8.1 ± 5.1), with wide inter-participant variation observed in subsection scores between individual athletes.

Fig. 2

CEAC-Q total and subsection scores of GenP, endurance athletes (EA) and sport dietitians and nutritionists. Total score (A), carbohydrate metabolism (B), carbohydrate loading (C), carbohydrate pre-event meal (D), carbohydrate during event (E), carbohydrate for recovery (F). Data are means ± SD. Different letters on top of each column represent statistically significant differences between the groups (p < 0.001)

Internal consistency. Reliability of the final CEAC-Q in the athlete group, as measured by Cronbach’s alpha, was 0.82. Reliability scores for the individual CEAC-Q subsections were as follows: CHO metabolism (0.72), CHO loading (0.74), pre-event CHO meal (0.79), during-event CHO (0.85), and post-event recovery CHO (0.79). All scores were > 0.7, demonstrating acceptable internal consistency and reliability for each subsection of the questionnaire [14, 24]. Removal of any question or subsection reduced the internal consistency of the CEAC-Q.

Test–retest reliability. Of the 145 athletes initially recruited, 59 EA completed a second test. The retest showed a significant learning effect between test (45 ± 20) and retest (53 ± 18, p < 0.001). Test–retest reliability was determined for the total CEAC-Q (r = 0.742, p < 0.001; Table 3), but not for the individual subsections. Scores increased by an average of 8.5 ± 13.6 points (p < 0.001, Table 3), with wide inter-participant variation in the change. The difference in scores between tests was primarily the result of fewer athletes selecting unsure (0 points) in the second test (14.8 ± 9.2 vs 10.6 ± 9.6, p < 0.0001) and instead choosing an alternative answer that was either incorrect (0 points) or correct (4 points). The majority of athletes (76.3%, n = 45) increased their CEAC-Q score on retest by an average of +13 ± 12 points, while 14 participants’ scores decreased between tests by −6 ± 4 points. Most athletes (91.5%, n = 54) indicated that completing the CEAC-Q inspired them to learn more about sports nutrition, with 72.9% (n = 43) believing their knowledge had increased between tests. Qualitative comments suggest the difference in scores may be partially explained by participants selecting unsure to avoid guessing incorrectly in one test but not the other: “I think I clicked ‘unsure’ more the first time whereas this time I didn’t at all, but I don’t think my knowledge has changed.” By completing the CEAC-Q, athletes may also have become aware of gaps in their own knowledge, which instigated self-directed learning: “After answering “Unsure” on most of the questions the first time round I looked up some of the info online to get a better understanding”. However, as one participant reported, increased knowledge or awareness does not necessarily translate to a change in behaviour: “It’s made me think I SHOULD learn more. But whether or not I act on it is questionable.”

Table 3 Endurance athlete CEAC-Q scores test–retest reliability (mean ± SD)

Test time to completion. When completed without the demographic questions (i.e., at retest), the CEAC-Q took athletes an average of 10:36 ± 07:45 min to complete.

Discussion

The main findings of this study were that (1) the Carbohydrate for Endurance Athletes in Competition Questionnaire (CEAC-Q) is a fast and valid tool to assess CHO knowledge for competition in endurance athletes, and (2) the CEAC-Q can identify knowledge gaps and raise athletes’ awareness of those gaps.

To our knowledge, the CEAC-Q is the first CHO-specific nutrition knowledge questionnaire designed for use with endurance athletes in a competition setting to understand gaps in knowledge of current CHO guidelines. Previous general nutrition knowledge questionnaire studies have observed both poor CHO-specific knowledge in athletes [26] and inadequate CHO intakes during competition in elite and amateur athletes [5, 6, 27]. Inadequate nutrition knowledge is one of multiple barriers influencing athletes’ capacity to eat appropriately [28]. A key role of nutrition practitioners is to provide targeted nutrition coaching based upon topics that are poorly understood by their athletes [21]. By using the CEAC-Q to specifically evaluate knowledge of CHO for optimal performance before, during and after competition, nutrition practitioners can rapidly identify these knowledge gaps and provide bespoke education during the nutrition-coaching process [29, 30].

Although the CEAC-Q focuses specifically on CHO, knowledge scores were comparable to those reported by general sports nutrition knowledge questionnaires conducted in athletes. The original 89-item nutrition for sport knowledge questionnaire (NSKQ) [22] and the shortened 37-item abridged nutrition for sport knowledge questionnaire (A-NSKQ) [21] reported nutrition knowledge scores in athletes of 49% and 46%, respectively. Five of seven studies included in a meta-analysis of general nutrition knowledge questionnaires reported mean athlete knowledge scores greater than 50%, with scores across studies ranging from 42.7 to 67.7% [11]. In a general sports nutrition knowledge questionnaire, Trakman and Forsyth [21] demonstrated construct validity between individuals with formal nutrition education (65%) and individuals with no formal nutrition education (52%). Karpinski and Dolins [15] found that athletes correctly answered 55.4% of a general sports nutrition knowledge questionnaire; out of a total of 11 points, athletes scored 3.5 ± 3.0 (31.8%) compared with sports dietitians at 7.8 ± 2.4 (70.9%). Similarly, the CEAC-Q total scores (Fig. 2A) show that EA had greater nutrition knowledge (46%) than the GenP (17%), but less than the SDN (76%). This clear distinction demonstrates the construct validity of the CEAC-Q. Future CEAC-Q scores from a large cohort of athletes will help identify factors affecting inter-individual variation in knowledge, clarifying which topics are poorly understood and how this relates to practice.

An unexpected finding was a small but significant learning effect of the CEAC-Q, which allowed athletes to self-identify gaps in their own knowledge and may have motivated self-directed learning to fill these gaps. Indeed, retest of the CEAC-Q 10–14 days later in a subgroup of 59 EA resulted in a mean increase in score of 8.5 ± 13.6 points (p < 0.001), with the majority of athletes (76.3%, n = 45) improving their score. This occurred despite no feedback on scores or formal education being provided between tests. The majority of athletes reported that their knowledge had increased after the initial completion (n = 43, 72.9%) and that they wanted to learn more about sports nutrition for competition (n = 54, 91.5%). Systematic examination of open-ended comments from the 26 athletes who reported changes in knowledge after the retest revealed that athletes became aware of gaps in their own knowledge and either engaged in self-directed learning on the topic or sought external advice. Similarly, qualitative comments indicate that changes in scores may partly result from EA selecting unsure in the first test and then selecting an answer in the retest. The act of completing the CEAC-Q may set in motion thinking processes leading to new insights or knowledge [31]. Athletes naturally seek any competitive advantage; having become aware of gaps in their knowledge, they may have sought to improve between the two tests [25]. As scores are expected to increase following self-education, the small random error between repeat tests indicates good reliability and construct validity, as it suggests learning processes at work [32, 33].

A key role of sports dietitians is to support positive change in the dietary behaviour of athletes utilising a range of nutrition-coaching interventions [29, 30]. In the theoretical framework of the COM-B model of behaviour change, improving the physical and psychological capability and the motivation of individuals is essential to drive behaviour change [34, 35]. Our findings support the idea that using the CEAC-Q as a screening tool could help increase theoretical and practical knowledge (capability) by identifying gaps in knowledge of current CHO guidelines that may require targeted education. An unexpected finding was the ability of the CEAC-Q to internally motivate athletes to instigate self-directed learning to correct knowledge gaps, despite no feedback being provided on results. Although increased knowledge or awareness of areas for improvement does not necessarily translate to a change in behaviour [10], a good nutrition-coaching program should enhance enablers and reduce barriers to support change [29]. Thus, the CEAC-Q can be a useful tool for sports dietitians aiming to influence and motivate their athletes to change nutritional intake during competition for optimal performance.

The main limitations of the current questionnaire are the time-frame between tests, the lack of control over participant test conditions, self-learning and bias in nutritional beliefs. Previous nutrition knowledge validation studies considered a period of 3 weeks long enough for answers to be forgotten yet short enough to minimise any change in nutrition knowledge [12, 14, 21, 22]. Test conditions should be consistent in repeat trials; however, for a self-administered test, no control could be placed over distractions or how much attention a participant pays when completing it [32]. No nutrition education or feedback on scores was provided between tests; however, athletes who participated in the retest may have been personally invested in the topic and more motivated to increase their knowledge, which could not be controlled by the investigators [21]. It would also have been useful to conduct a formal education program to further test the questionnaire’s capacity to detect changes in knowledge.

The current findings open up avenues for future research to assess and optimise the dietary practices of endurance athletes. Completing the CEAC-Q with a larger cohort of endurance athletes will allow differentiation between known confounders of nutrition knowledge in a competitive setting (age, sex and level of education) as well as potential confounders including living situation, level of physical activity, ethnicity, athletic calibre and type of sport [26]. However, as increased knowledge or awareness will not necessarily translate to a change in behaviour [10], future studies should use the CEAC-Q in a competitive setting to assess barriers, attitudes and the relationship between CHO knowledge and practice. This will allow nutrition practitioners to further understand why athletes fail to achieve recommended CHO intakes and subsequently develop more effective athlete nutrition education resources and programs to optimise endurance performance.

Conclusion

The CEAC-Q is a valid online tool that takes ~10 min to complete and rapidly assesses athletes’ baseline knowledge of current carbohydrate guidelines and recommendations. Following administration of the CEAC-Q, a small but significant learning effect was observed, demonstrating its potential use to identify knowledge gaps prior to targeted nutrition coaching and to increase the capability and motivation of athletes to change behaviour. Future studies should evaluate the relationship between CEAC-Q knowledge scores and carbohydrate intake during competition in a larger cohort to define differences between known confounders of nutrition knowledge.