Is There Actually a Time and Place for PMF Monitoring?
In our opinion and experience, there is a frequent disconnect between opportunities to monitor and what can actually be achieved in practice. The experimental scenarios used in the scientific literature are often unrepresentative of professional settings and unrealistic to apply in them. For instance, research commonly examines ‘acute’ fatigue responses in the 24 h period following match-play [3]. Yet in a typical one-match week, participating players frequently have a rest day following competition. Therefore, fatigue monitoring in the acute phase is not always feasible.
During two-match weeks, PMF data can, in theory, again inform workload adjustment and evaluate readiness for ensuing competition. However, the realities of between-match preparation frequently reduce any potential impact. The 24- to 72-h period post-match coincides with the preparation phase leading into the next match. The day after matches, clubs tend to conduct post-match recovery modalities (e.g. cold water therapy) in an attempt to alleviate fatigue and accelerate recovery [26]. These recovery processes are prioritised over the collection of information on fatigue [7]. While players are usually on-site, gathering data in the interval between successive matches can be logistically difficult. Travel and match preparation—the latter including team talks, video sessions, short tactical training sessions, in-day sleep strategies and media duties for certain players—considerably reduce opportunities for monitoring. Moreover, on the second day following competition, coaching practitioners generally want every player on the training pitch to prepare collectively for the forthcoming match, irrespective of individual requirements. The timing of kick-offs in certain matches can also affect opportunities to monitor markers at the timepoints typically used in the literature (e.g. match + 24 and + 72 h) and restrict comparisons with existing findings. If measures have been obtained in the acute post-competition phase, the data could, in theory, be used to make inferences about the magnitude of fatigue over the following 48- to 72-h period when further data collection is not possible for the reasons mentioned. However, attempting to predict fatigue or responses in certain variables at + 72 h based on + 24 h values is challenging. Recovery status, in our experience, is influenced by a myriad of factors including previous match locomotor activity, the use or not of post-competition recovery strategies (e.g. ice baths, nutrition), individual physical characteristics and/or training workload between match + 24 and + 72 h.
Another practical burden is that assessments of PMF are considered necessary for every participating player due to the considerable inter-individual differences in fatigue-related responses and recovery potential [1]. Given the logistical burden, as well as the availability and willingness of players (and of coaching and other support staff) to participate, this is difficult to achieve. Yet sports science practitioners typically accept whatever data they can obtain, irrespective of the number of players from whom the information has been collected, to gain an idea of the team response and, at best, attempt to tailor recovery in these players.
Finally, a combination of metrics is recommended to enable holistic interpretation of acute and residual fatigue status, which is multifactorial in nature [4]. Anticipation of fatigue prior to forthcoming match-play can be challenging if a limited number of measures are available. Residual responses in certain markers vary in relevance at later timepoints [4], due in part to previous match locomotor activity [27]. For example, jump performance can require 48 h to recover fully, while perceptions of fatigue might persist at 72 h [28]. In our experience, practitioners are only able to collect two or three measures due to the aforementioned practical burden. However, they can still tailor their recovery modalities to at least target the fatigued system(s) they have data on. Conversely, if several measures are available, the time and resources necessary to collate, clean, analyse, interpret and report the data can be considerable despite advances in software that automate processes [29].
Critical Appraisal of Tools and Protocols for Collecting Data
Considerable burden arises from the number of staff and resources required to run daily operations, and difficulties are frequently encountered with the tools and protocols that are available to practitioners and commonly used in scientific research. The biochemical and metabolic procedures employed in research are considered expensive even by key stakeholders within clubs at the very highest standards of the game. Players are reluctant to accept blood or saliva sampling as this is considered invasive. Sampling also requires specialist equipment and training, although portable and user-friendly devices now exist. In addition, biological markers are prone to considerable intra- and inter-assay variability, and consensus on the optimal or most practically relevant biological parameter has not yet been reached [30]. Time of collection, diet and the presence of injury all influence biochemical responses [31]. Other tools, including nerve stimulation, electromyography and muscle function analyses, have been used to explore fatigue [28]. However, due to user and athlete burden, it is unlikely these tools can be routinely and simultaneously employed in a large squad of players. Moreover, laboratory-based assessments clearly cannot be employed in the field, so it is difficult to verify in professional players the information that is frequently provided by current research.
A recent review [2] identified a large range of field-based physical testing methods for examining PMF, including repeated sprint and intermittent endurance assessments. Unfortunately, these tests frequently place intense physical demands on already ‘fatigued’ players, and performing multiple assessments over the recovery period is clearly impractical. The validity of repeated sprint ability tests in representing the real-world demands of the game is also debatable [32]. Nevertheless, we concede that findings have added to the literature base, especially as similar investigations cannot be performed at professional standards of play. As an alternative, submaximal versions of exhaustive tests implemented as part of a standardised warm-up can provide relevant information on training status [33]. Similarly, assessments such as a countermovement jump (CMJ) on a portable platform are quick and easy means of determining neuromuscular fatigue. Yet, in applied settings, there can be reluctance by coaches, support staff and players themselves to perform such tests as they are explosive in nature and require maximal effort and additional loading. These factors reduce applicability even when testing is performed conveniently following a customary warm-up prior to training. Ensuring individuals do not perform assessments as a token gesture is also essential but not easy in practice, while practitioners must ensure players do not alter mechanics in an attempt to maximise jump performance [31]. Finally, consensus is necessary on the choice of variables measured during jump testing. For example, the ratio of flight time to contraction time has been shown to be a more sensitive measure of recovery than jump height in professional soccer players performing a CMJ [34].
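To illustrate why the choice of jump variable matters, the following minimal sketch contrasts jump height, estimated from flight time via the standard projectile relation h = g·t²/8, with the flight time to contraction time ratio. All trial times are hypothetical values chosen for illustration, not data from the cited studies.

```python
# Sketch: two candidate CMJ variables computed from the same trial.
# Trial times below are hypothetical; the FT:CT ratio is reported to be
# more sensitive to residual fatigue than jump height alone [34].
G = 9.81  # gravitational acceleration, m/s^2

def jump_height(flight_time):
    """Estimate jump height (m) from flight time (s): h = g * t^2 / 8."""
    return G * flight_time ** 2 / 8

def ft_ct_ratio(flight_time, contraction_time):
    """Flight time to contraction time ratio (dimensionless)."""
    return flight_time / contraction_time

# Hypothetical fresh vs fatigued trials (flight time, contraction time) in s:
# jump height is nearly unchanged, but a slower contraction lowers FT:CT.
fresh = (0.560, 0.750)
fatigued = (0.555, 0.850)

print(round(jump_height(fresh[0]), 3), round(jump_height(fatigued[0]), 3))
print(round(ft_ct_ratio(*fresh), 2), round(ft_ct_ratio(*fatigued), 2))
```

The point of the sketch is that a fatigued player can preserve jump height by lengthening the contraction phase, so height alone can mask residual neuromuscular fatigue that the ratio exposes.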
As soccer is a sport mainly involving horizontal motions, sprint testing might be a more appropriate means for evaluating real-world performance rather than jump assessments. However, short straight-line sprint performance (e.g. 10–20 m) has recently been shown to lack sensitivity as a post-match (24–48 h) indicator of physical fatigue in semi-professional soccer players [28]. Analysis of decrements in maximal velocity capability in soccer players over longer sprint distances (> 30 m) is suggested to improve evaluation of fatigue following match-play [35]. Again, constraints related to sprint testing (e.g. injury risk, additional fatigue, player compliance, coach ‘buy-in’, place of test within the day-to-day working practices) need to be carefully weighed up.
The association between post-exercise fatigue and skill-related performance has been examined using controlled assessments such as the Loughborough soccer passing and shooting tests [3]. The former in particular lacks feasibility [36], and no information is available on its validity for assessing in-game passing performance, which might be affected by the non-controlled effects of crowd and match context as well as mental fatigue potentially caused by fast, ever-changing game dynamics. In our experience, professional players are simply unwilling to perform skill-related tests and practitioners would never even contemplate their use.
An alternative to these tools and protocols is to collect external workload data derived from time–motion analyses in the preceding match and make inferences about PMF. However, the technology frequently employed has methodological limitations, especially for key variables associated with neuromuscular fatigue (e.g. accelerations, decelerations and high-speed running) [37]. For instance, commonly used optical-based player tracking systems do not provide information on force load and stride characteristics to assess the neuromuscular and mechanical demands of play. Therefore, they do not allow direct associations to be made with PMF responses from jump tests. While global positioning systems (GPS) enable collection of such data and are permitted in competition, players can be reluctant to wear devices. Also, there is contrasting evidence on the strength of correlations between match running indicators and the muscle damage and neuromuscular performance observed 48 h after a match [13, 27]. The pertinence of time–motion metrics in anticipating PMF might be limited to the first 24 h after match-play (when players are often resting or in recovery), and caution is necessary if these are used to inform training load or readiness status for competition thereafter [38]. Determining critical match load thresholds using these technologies to inform subsequent recovery status is also difficult due to the large inter-individual variability in workload distribution. Players complete relatively more or less low-speed activity, high-speed running, accelerations, decelerations and changes of direction than peers yet produce the same absolute match load, mainly due to differences in playing position, tactics and physical characteristics [34].
Self-reports permit the collection of subjective perceptions of fatigue and well-being during the post-match phase. These are easily administered and scientifically legitimate alternatives to objective measures [39]. Yet in our experience some players are reluctant to provide information on their perceptions post-match. Opportunities for data collection are frequently result-dependent, and findings may not reflect true perceptions following a loss or a poor performance. Player education and language barriers, and changes in collection methods, timing or the practitioner conducting the monitoring, can compound the problem [7]. Self-reporting is also shaped by external influences (e.g. the expectations of supporters and media) [40], and individuals might answer in a ‘socially desirable’ manner during intensive competitive schedules, over-reporting favourable responses and under-reporting unfavourable responses to appear to be coping [41].
Functional Relevance and Real-World Meaningfulness of Data and Their Application
A key concern is the functional relevance and real-world meaningfulness of changes in PMF responses during the recovery period. Accounting for technical and biological measurement error so that meaningful decrements (e.g. ‘red flags’) in fatigue and performance can be distinguished from natural variation in measurements is evidently a key issue [24]. Some practitioners might use pre-set cut-off thresholds (e.g. defined as the smallest worthwhile change [SWC], an arbitrary ± 5–10% or, more correctly, 0.2 of the between-players standard deviation [SD] or fractions/multiples of the individual SD, depending on the variables of interest [42]) for detecting meaningful changes. Yet we can ask, for example, what would be the real-world effect of a 2.8% reduction (i.e. greater than the SWC) in CMJ peak power output (PPO) values at 48 h post-match reported in reserve-team professional soccer players [43] on the proportion of duels won/lost in a match played shortly after?
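The SWC logic described above reduces to simple arithmetic: compute 0.2 of the between-players SD of baseline scores, then flag any post-match decrement larger than that threshold. The sketch below illustrates this with entirely hypothetical squad values; the 0.2 factor follows the convention cited above [42], but the choice of factor, variable and baseline window remains a per-squad decision.

```python
# Illustrative sketch of SWC-based 'red flag' detection.
# All values are hypothetical; thresholds must be set per squad and per test.
from statistics import stdev

def swc(baseline_scores, factor=0.2):
    """Smallest worthwhile change: a fraction of the between-players SD."""
    return factor * stdev(baseline_scores)

def flag_change(baseline, post_match, threshold):
    """Return True if the post-match decrement exceeds the SWC threshold."""
    change = post_match - baseline
    return change < -threshold  # meaningful decrement ('red flag')

# Hypothetical CMJ peak power outputs (W) for a squad at baseline
squad_baseline = [4200, 4350, 4100, 4500, 4280, 4420, 4150, 4380]
threshold = swc(squad_baseline)

# One player: baseline 4300 W, 48 h post-match 4180 W (~2.8% drop)
print(flag_change(4300, 4180, threshold))
```

The sketch only answers the statistical question (is the change larger than measurement noise?); as argued above, the real-world question of whether a flagged decrement matters for match performance remains open.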
In general, the degree to which PMF data, even if collected robustly in a standardised and reliable way, are actually employed in practice to modify subsequent training delivery is unknown. Anecdotal evidence suggests that the information can inform adjustments of training workload to ensure players are not under- or over-loaded in the lead-up to ensuing matches. A simple subjective measure of muscle soreness conducted 36–48 h following match-play can aid decision-making on readiness for a typical mid-week high-intensity aerobic conditioning session or, conversely, indicate the need for an additional recovery day [44]. However, how do practitioners weigh up the cost versus benefit of allowing a player an additional half- or full-day's rest, or of missing a key tactical training session, for example, to recover a substantial 6.6% decrease in PPO derived from a CMJ 24 h following match-play (reported in the aforementioned professional reserve-team players [43])? In our experience, sports science practitioners simply have no choice other than to judge changes in PMF responses at face value and make key decisions using their experience and know-how while accounting for the present context. An upskilling of staff and coaches in sport science and data analysis is arguably necessary! For additional information on identifying meaningful changes in data and decision-making consequences using monitoring systems, the reader is referred to two recent papers [24, 45].