Introduction

Improved therapeutics for patients with amyotrophic lateral sclerosis (ALS) are desperately needed to reduce functional decline, improve quality of life, and extend survival. The history of ALS therapeutic development includes a long list of negative and failed clinical trials, necessitating a critical review of past clinical trial designs and an innovative approach to study design methodologies moving forward [1,2,3]. While negative trials are an inevitable part of drug development, it is hoped that improved clinical trial designs can avoid failed or ambiguous trials that do not provide decisive results, allowing trials of ineffective or unsafe therapeutics to be terminated faster and effective therapeutics to be identified more efficiently [4].

A multifaceted approach is needed to optimize ALS clinical trial design. Careful outcome measure selection is necessary to leverage psychometric and statistical advantages while also capturing clinically meaningful results that are relevant to the mechanism of the therapeutic agent of interest [5]. ALS prediction algorithms and statistical enrichment techniques can serve as valuable tools to improve trial efficiency and the ability to detect a treatment effect [6, 7]. Continued development and validation of biofluid biomarkers will serve as a valuable and necessary adjunct to clinical outcome measures by assessing target engagement, and the importance of incorporating biomarkers into early-phase clinical trials is increasingly recognized [8, 9]. Other recent innovations, such as at-home technologies for measuring progression, are promising tools for future trials that could reduce participant burden while increasing the frequency of outcome measurement [10]. Another promising development in the ALS research landscape is the increased inclusion of patient advocates and advisors in all stages of ALS clinical trial development [11].

This review will discuss past, present, and future approaches in ALS clinical trial design and will highlight areas of recent progress and innovation, focusing in particular on clinical trials for sporadic ALS.

ALS Outcome Measures

Strategic outcome measure selection and clinical trial design are particularly important for ALS studies, where investigators often hope to detect modest treatment effects in a patient population with variable rates of disease progression and heterogeneous phenotypes. Considerations for outcome measures include reliability and reproducibility; responsiveness, or the ability of the measure to detect change when change has actually occurred; and clinical relevance. The optimal outcome measure will vary for each study depending on its goals, the patient population, and the mechanism or target of the therapeutic intervention, and therefore a one-size-fits-all approach cannot be applied when selecting an outcome measure. Here, we will review the strengths and weaknesses of commonly used ALS outcome measures (summarized in Table 1) and highlight recent innovations and outcome tools that are available or in development for future use.

Table 1 Summary of ALS outcome measures and examples of use in clinical trials

Survival

Many early ALS clinical trials used survival as the primary outcome measure [12, 13]. Survival as a primary outcome measure is appealing at face value because it is unequivocally objective with clear clinical relevance. The specific survival endpoint used in most ALS trials is tracheostomy-free survival or time to permanent assisted ventilation [14], although there is no single universally accepted way to define the survival endpoint in an ALS trial. The use of tracheostomy-free survival or time to permanent assisted ventilation allows more study participants to reach the survival endpoint for data analysis, but this approach introduces unintended factors into the analysis beyond time to death due to ALS, such as variation in clinician treatment practices or participant acceptance of ventilatory interventions [15]. The disadvantage of survival as a primary outcome measure is reduced efficiency, due to the limited ability to analyze outcomes for participants who do not meet the mortality endpoint during the follow-up period. As a result, studies with survival as a primary outcome measure have typically required large sample sizes and long trial durations. As an example, the confirmatory ALS study of riluzole enrolled over 900 participants for an 18-month study [16]. The current pipeline of ALS drug development demands a more efficient approach.

Some recent ALS studies, including AMX0035 (NCT03127514, NCT03488524) [17] and edaravone (NCT01492686) [18], have included open-label extension studies that allow for longer-term survival analysis as an exploratory outcome measure. This is an appealing strategy that uses alternate, more efficient primary outcome measures for the randomized controlled portion of the study and then allows long-term survival analysis in support of the primary study findings. The AMX0035 study also utilized a firm, OmniTrace, to obtain complete survival data on study participants using public records and databases, allowing for a more complete and robust survival analysis [17]. Inclusion of an open-label extension study also offers a patient-centric design in which participants initially assigned to placebo will later gain access to the experimental agent, which is expected to help with recruitment and retention. It is important to note that the FDA has expressed concerns about considering survival data from open-label extension studies sufficient to support efficacy, and thus the primary data from the randomized controlled trial remain critical for drug approval [19].

The Revised ALS Functional Rating Scale

Many contemporary ALS clinical trials rely on the revised ALS Functional Rating Scale (ALSFRS-R) as the primary outcome measure [14, 20]. This is an ordinal scale of 12 questions, each rated 0 through 4, that assess an ALS patient’s ability and need for assistance in various activities or functions. The scale provides a total score (maximum of 48) from four subscores, which assess speech and swallowing (bulbar function), use of the upper extremities (cervical function), gait and turning in bed (lumbar function), and breathing (respiratory function) [20]. A slower rate of ALSFRS-R decline correlates with longer survival [21]. In the clinical trial setting, the ALSFRS-R is typically administered and scored by a trained staff member based on the patient’s self-report, and the scale has also been validated over the telephone [22] and as a self-administered scale [23]. Test–retest reliability has been reported at values between 0.87 and 0.96, with higher reliability when a consistent evaluator administers the scale [24, 25].
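As a concrete illustration of the scale’s structure, the sketch below computes the total score and the four domain subscores from 12 item ratings. The domain groupings follow the description above (three items per domain); the function and variable names are illustrative and do not represent an official scoring implementation.

```python
# Minimal sketch: scoring the ALSFRS-R from 12 item ratings (each 0-4).
# Domain groupings follow the description above (three items per domain);
# names are illustrative, not an official implementation.

DOMAINS = {
    "bulbar": [0, 1, 2],        # speech/swallowing items
    "cervical": [3, 4, 5],      # upper-extremity items
    "lumbar": [6, 7, 8],        # gait and turning-in-bed items
    "respiratory": [9, 10, 11], # breathing items
}

def score_alsfrs_r(items: list[int]) -> dict:
    """Return the total score (maximum 48) and the four domain subscores."""
    if len(items) != 12 or any(not 0 <= x <= 4 for x in items):
        raise ValueError("expected 12 item ratings, each between 0 and 4")
    subscores = {name: sum(items[i] for i in idx) for name, idx in DOMAINS.items()}
    return {"total": sum(items), **subscores}

print(score_alsfrs_r([4, 4, 3, 3, 3, 4, 2, 3, 3, 4, 4, 4]))
# {'total': 41, 'bulbar': 11, 'cervical': 10, 'lumbar': 8, 'respiratory': 12}
```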

There are several important limitations of the ALSFRS-R as an outcome measure. A one-point change can represent a small or a large amount of functional change depending on the item, and thus a one-point change is not a uniform unit of functional change across the scale [26]. Additionally, analyses of the ALSFRS-R show that the scale is not unidimensional, meaning that some items measure domains other than functional status [27]. These properties create mathematical limitations when the ALSFRS-R sum score is used as an outcome measure: each one-point change represents a different quantity of functional change, and some one-point changes represent change in a domain other than functional status, so the sum score is a flawed primary measure of functional status. These measurement discrepancies also create ambiguity when determining the clinical significance of small ALSFRS-R changes. A survey of ALS clinicians showed that the majority of those surveyed believe that a 20% change in ALSFRS-R slope is clinically meaningful [28], but more rigorous or universal definitions of clinically meaningful change for the ALSFRS-R are lacking. This definition is particularly limited given that a 20% change in slope represents different levels of functional change across the scale due to its scoring structure.

Additionally, decline in the ALSFRS-R score is often assumed to be linear for the purposes of statistical analysis, but in reality the scale declines in a curvilinear manner, with difficulty detecting changes at the extremes of the scale [29]. While the overall average decline on the ALSFRS-R is often reasonably linear over the course of a clinical trial period for the treatment and/or placebo groups, linear decline in the pre-treatment period cannot be inferred, and decline at the individual level often follows a non-linear trajectory (Fig. 1) [30]. The ALSFRS-R may also lack responsiveness, the ability of a scale to detect change when change has actually occurred. A study of the ALSFRS-R from a pooled clinical trial database showed that 25% of placebo-group patients had no change in ALSFRS-R score over a 6-month period, and 16% had no change over a 12-month period [31]. These measurement plateaus reduce the ability to detect treatment effects and thus have significant implications for trials of agents intended to slow disease progression. In the setting of a clinical trial where a difference of several ALSFRS-R points can determine the success or failure of an ALS therapeutic [32, 33], these psychometric considerations are of practical importance.

Fig. 1

Individual ALSFRS-R trajectories are displayed from the Emory ALS Center clinic population, illustrating the significant heterogeneity in disease progression rates as well as the non-linear decline at the individual level [126]

The Combined Assessment of Function and Survival

The Combined Assessment of Function and Survival (CAFS) is a non-parametric tool that combines survival and functional status in a single outcome measure. Participants who survive the study period are ranked as having a better outcome than participants who do not. Within the surviving cohort, participants are placed in rank order based on rate of progression according to the ALSFRS-R. For participants who die or reach permanent mechanical ventilation during the study period, rank order is based on survival time. This tool not only prioritizes survival as the most important outcome for deceased participants, but also allows analysis of all participants by ranking surviving participants according to functional decline, improving study efficiency compared to a study using survival alone as the primary outcome [34]. The FDA has voiced support for a joint assessment of function and survival compared with analyzing function alone [19]. Because the ALSFRS-R is the tool used to measure functional decline in this composite measure, the limitations that apply to the ALSFRS-R also apply to the CAFS, and the addition of survival analysis may not be helpful in shorter studies where few participants meet the death endpoint. Due to the non-parametric rank-order approach of the CAFS, this outcome can also be challenging to interpret clinically, as it does not provide a rate of functional decline or a measure of treatment effect.
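A minimal sketch of this ranking logic is shown below, assuming a simple record per participant with survival status, time to death or permanent ventilation, and ALSFRS-R change; the field names are illustrative, and real CAFS analyses include additional rules (e.g., handling of ties and censoring).

```python
# Minimal sketch of a CAFS-style joint rank, under the assumptions stated above.

from dataclasses import dataclass

@dataclass
class Participant:
    died: bool              # died or reached permanent ventilation during the study
    days_survived: float    # time to death/ventilation (used if died)
    alsfrs_change: float    # change in ALSFRS-R over the study (used if survived)

def cafs_ranks(cohort: list[Participant]) -> list[int]:
    """Higher rank = better outcome: deaths are ranked below all survivors
    by survival time; survivors are ranked by functional change."""
    def key(p: Participant):
        # Survivors (group 1) sort above deaths (group 0); within each group,
        # longer survival or less ALSFRS-R decline sorts higher.
        return (0, p.days_survived) if p.died else (1, p.alsfrs_change)
    order = sorted(range(len(cohort)), key=lambda i: key(cohort[i]))
    ranks = [0] * len(cohort)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks
```

A treatment effect would then be assessed by comparing the resulting ranks between study arms with a rank-based test.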

The Rasch-Built Overall ALS Disability Scale

The Rasch-Built Overall ALS Disability Scale (ROADS) is a 28-item patient-reported outcome measure assessing overall disability in patients with ALS, and it was created to overcome the psychometric limitations of the ALSFRS-R [35]. Unlike ordinal scales, the ROADS is linearly weighted, meaning that a one-point change is a consistent unit of disability measurement across the scale (and a two-point change indicates twice the amount of change in disability). Rasch analyses indicate that the scale is unidimensional, meaning that all items measure disability and not other domains, and thus the sum score is a valid measure of overall disability. The ROADS questions have improved item targeting compared to the ALSFRS-R, meaning that a broader range of ability levels is assessed, which is expected to improve responsiveness. The scale is also more reliable than the ALSFRS-R, with a test–retest reliability of 0.97. These psychometric advantages are encouraging and are expected to confer statistical advantages when the ROADS is used as a clinical trial outcome measure, but longitudinal data and real-world clinical trial data are still needed for confirmation. The scale has been translated and validated in Chinese [36] and Italian [37], but additional translations are required before this tool can be utilized in broader international clinical trials.

Neurophysiologic Outcome Measures

A variety of neurophysiologic outcome measures have been tested in ALS research studies [38, 39], with the hope that these tools can serve as objective, quantitative markers of motor neuron loss. Motor unit number estimation (MUNE) [40] and motor unit number index (MUNIX) [41] are two techniques that have shown promise as lower motor neuron measures in ALS research trials, as they decline over time in patients with ALS and correlate with other ALS clinical and functional outcome measures [42]. Reproducibility varies depending on the protocol used as well as the skill, experience, and training of the examiner, and real-world use of these tools has proved challenging, particularly as a mainstream measure for larger studies [43, 44]. There is certainly theoretical appeal to these neurophysiologic measures, but refinement of techniques to improve the reproducibility and accessibility of testing protocols is needed before they become accepted surrogate markers of lower motor neuron loss [45]. In addition, because these techniques predominantly allow testing of distal limb muscles, they may not serve as an overall measure of motor neuron loss.

Another electrophysiologic measure of interest is electrical impedance myography (EIM). EIM testing involves application of a painless electrical current through surface electrodes to measure the compositional properties of muscle [46, 47]. Recent advances in EIM technology have improved the ease and accessibility of testing, and at-home testing could be an option for future trials that broadens real-world use of this technique [10]. One study has suggested that using EIM as the primary outcome measure could result in a fivefold reduction in needed sample size [48], but to date, EIM has only been used as a secondary or exploratory outcome in ALS clinical trials. A disadvantage of EIM is that the data obtained from this testing is not accessible or interpretable in real time to the clinician or investigator, and thus, it is less familiar to many ALS investigators and not used as a mainstream clinical tool.

Transcranial magnetic stimulation (TMS) methods have also been explored as biomarkers to measure cortical motor excitability and upper motor neuron dysfunction in ALS. However, studies to date have not led to consistent results [49,50,51], and some studies have demonstrated technical limitations with inability to record responses in a significant number of study participants [52]. These techniques may serve as useful adjunct measures to show target engagement in the future, but more work is needed to refine these tools.

Measures of Muscle Strength

Quantitative measures of muscle strength are appealing as ALS outcome measures given that loss of muscle strength is a clinical hallmark of disease progression [53]. Measuring limb strength with a portable hand-held dynamometer is a common approach in ALS trials, where individual muscle measurements are standardized based on data from healthy controls and an overall combined megascore is used as the outcome of interest [54, 55]. This approach has good reliability with adequate training, but floor and ceiling effects are observed, and strength is not adequately quantified when the muscle strength of the study participant exceeds that of the examiner. Fixed dynamometry, as performed with the Accurate Test of Limb Isometric Strength (ATLIS) device, can overcome the floor and ceiling effects and the reliance on examiner strength to improve sensitivity [56], but the equipment required for this approach in past trials is large and cumbersome, limiting more widespread use and practical application of the technique. Recent work has been done to validate a portable fixed dynamometer, which could be a promising tool that combines the convenience of hand-held dynamometry with the measurement range of ATLIS [57].
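To make the megascore construction concrete, the sketch below standardizes each muscle measurement against healthy-control reference values and averages the standardized scores. The reference means and standard deviations are placeholders rather than published norms, and the muscle set is illustrative.

```python
# Minimal sketch of a strength megascore: each measurement is converted to a
# z-score against (hypothetical) healthy-control norms and the z-scores are
# averaged across the tested muscles.

import statistics

REFERENCE = {  # hypothetical control mean and SD per muscle (kg of force)
    "elbow_flexion_R": (25.0, 6.0),
    "elbow_flexion_L": (24.0, 6.0),
    "knee_extension_R": (38.0, 9.0),
    "knee_extension_L": (37.0, 9.0),
}

def megascore(measurements: dict[str, float]) -> float:
    """Average z-score across measured muscles relative to control norms."""
    z_scores = [
        (value - REFERENCE[muscle][0]) / REFERENCE[muscle][1]
        for muscle, value in measurements.items()
    ]
    return statistics.mean(z_scores)

print(round(megascore({"elbow_flexion_R": 18.0, "knee_extension_R": 30.0}), 2))
# (18-25)/6 = -1.17 and (30-38)/9 = -0.89, so the megascore is about -1.03
```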

Measures of ventilatory strength are commonly captured in ALS trials, as these outcomes are clinically relevant and correlate with survival [58,59,60]. Vital capacity is typically the measure of interest [61, 62], although other parameters such as inspiratory force and voluntary cough parameters have also been studied [63,64,65]. Slow vital capacity (SVC) and forced vital capacity (FVC) measurements correlate highly with each other and also correlate with survival and functional measures [58, 66]. FVC requires a fast expiratory effort, which may be impeded by spasticity, and both tests can be impaired by bulbar weakness due to upper airway collapse during maximal exhalation [67]. SVC has served as the primary outcome measure in several recent ALS clinical trials of fast skeletal muscle troponin activators, where the drug is hoped to improve diaphragm contractility (NCT03160898, NCT02496767) [68, 69]. The COVID-19 pandemic created logistical challenges for obtaining spirometry measurements due to infection prevention concerns and limitations on in-person study assessments [70], but at-home spirometry has emerged as a viable alternative [71]. Ventilatory measures have a higher coefficient of variation and decreased sensitivity compared with other typically used primary outcomes for ALS [53], so for many studies, spirometry measures are more useful as secondary or exploratory outcome measures.

At-Home Outcome Measures

With advances in technology, at-home assessments are increasingly becoming a strategy of interest for measuring disease progression [10]. At-home measurement offers the possibility of increasing the frequency of outcome measurement, which should in turn improve responsiveness and reduce the impact of measurement error. Traditional patient-reported outcomes can be captured on a smartphone or computer, while more novel approaches include assessments of speech, motion analysis using wearable sensors, and GPS tracking to assess travel patterns [72,73,74,75]. However, the analysis of these parameters typically requires sophisticated machine learning techniques and is complex. Additional research is needed to refine the analysis algorithms for these technologies, to determine the best candidate or composite measures to serve as trial outcome measures, and to translate the complex data output into clinically understandable data points.

Statistical Enrichment

Prediction algorithms that anticipate expected ALS disease progression serve as valuable tools for improving clinical trial efficiency and reducing sample size. Simulated ALS clinical trials that incorporate prediction algorithms into the study design show the potential to reduce sample size by 15–20% [76, 77]. Models of disease progression can also be used for participant stratification or as part of study inclusion/exclusion criteria to select an appropriate study cohort [6, 78]. Applying prediction algorithms that rely only on baseline variables to design inclusion and exclusion criteria can also avoid the need for an observational lead-in period, allowing earlier initiation of treatment and creating more patient-centric study designs [79]. Prediction algorithms are particularly valuable for exploratory analyses, for example, when trying to identify subgroups of treatment responders by comparing predicted progression to actual progression [80], or even to evaluate individual-level disease progression by comparing observed versus expected progression [81]. Current prediction algorithms rely on the ALSFRS-R as a predictor variable and predict ALSFRS-R or survival as the outcome of interest based on currently available data in existing databases, but the same mathematical prediction techniques can be used to incorporate novel outcomes in the future as more data on newer tools become available [76, 81].
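As an illustration of why prediction-based selection and enrichment can reduce sample size (the numbers below are hypothetical and not taken from the cited simulations), the standard two-arm sample-size formula shows how a more homogeneous population, with less between-participant variability in the rate of decline, lowers the number of participants needed to detect the same treatment effect.

```python
# Illustrative sketch: two-arm sample size per group for detecting a difference
# `delta` in mean rate of decline, given between-participant SD `sd`.
# All numbers are hypothetical.

from statistics import NormalDist

def n_per_arm(delta: float, sd: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Standard normal-approximation sample size for a two-sample comparison."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return round(2 * (z * sd / delta) ** 2)

# Hypothetical goal: detect a 0.25 point/month slowing of ALSFRS-R decline.
print(n_per_arm(delta=0.25, sd=0.9))  # broad, heterogeneous population -> ~203 per arm
print(n_per_arm(delta=0.25, sd=0.7))  # enriched, more homogeneous population -> ~123 per arm
```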

One approach that has been increasingly utilized to improve statistical power in ALS clinical trials is selecting a more homogeneous patient population in terms of rate of disease progression, so that treatment effects can be observed with a smaller sample size over a shorter duration. As an example, early studies of edaravone (NCT00330681) in heterogeneous ALS populations did not identify statistically significant treatment benefits [82], but post hoc subgroup analyses identified a cohort of potential responders. A new trial of edaravone was then designed to study a targeted cohort of ALS patients with well-defined, more uniform rates of progression to improve statistical power [32]. Specifically, study participants had to be in relatively early stages of disease with high baseline functional status, have a vital capacity > 80% predicted, have scores of at least 2 on each individual ALSFRS-R item, and have disease duration of less than 2 years. Additionally, participants with moderate rates of progression were selected based on a decline of 1–4 ALSFRS-R points over a 12-week lead-in observation period, excluding participants with the slowest and fastest progression rates (see the sketch after this paragraph). This targeted study of only 137 participants showed a statistically significant difference in the primary outcome measure and led to FDA approval of edaravone. Disease prediction models were also used to support the generalizability of edaravone efficacy to a broader population [83] and to provide additional evidence of AMX0035 efficacy in the open-label extension study [84]. The phase 2 study of AMX0035 also utilized statistical enrichment techniques to improve statistical power [85], in this case using past data from a pooled clinical trial database (PRO-ACT) [86] to identify early-onset participants who were predicted to have fast disease progression. This study enrolled participants with disease duration of 18 months or less who met El Escorial criteria for definite ALS. This approach did not require a lead-in period, which allows earlier initiation of treatment and is certainly preferred by study participants. Run-in periods also have the potential to introduce bias, as excluding participants during the run-in period may decrease the generalizability of study results to the broader treatment population, and lead-in outcome measures may be captured inaccurately because of participants’ desire to qualify for active treatment [87]. Additionally, it is possible that delayed initiation of some therapeutic agents could reduce treatment efficacy.
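A minimal sketch of an eligibility screen based on the enrichment criteria described above (all ALSFRS-R items ≥ 2, vital capacity > 80% predicted, disease duration under 2 years, and a 1–4 point ALSFRS-R decline over the 12-week lead-in) is shown below; the data structure and field names are illustrative and not taken from the trial protocol.

```python
# Minimal sketch of an enrichment-style eligibility screen; field names are
# illustrative and the thresholds follow the criteria summarized above.

from dataclasses import dataclass

@dataclass
class ScreenedParticipant:
    alsfrs_items: list[int]          # 12 ALSFRS-R item scores at screening
    vc_percent_predicted: float      # vital capacity, % of predicted
    disease_duration_years: float
    leadin_decline: int              # ALSFRS-R points lost over the 12-week lead-in

def eligible(p: ScreenedParticipant) -> bool:
    return (
        all(item >= 2 for item in p.alsfrs_items)
        and p.vc_percent_predicted > 80
        and p.disease_duration_years < 2
        and 1 <= p.leadin_decline <= 4
    )
```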

Given the regulatory success of edaravone and the promising data on AMX0035, other recent ALS clinical trials have followed by designing studies that seek to reduce heterogeneity in baseline functional status and rates of disease progression. However, this approach has created some ambiguity for the clinical use of ALS treatments in the real-world setting [88,89,90,91]. It is not certain that drugs that are beneficial at early stages of ALS retain the same benefit when initiated in late stages of disease, and questions have also been raised about the clinical significance of these treatments, even in the setting of statistical significance. Additionally, there is no consensus about whether positive findings observed in smaller targeted trials require confirmation in a larger sample to establish reproducibility and generalizability of results. This is a challenging issue when trying to balance the dire and time-sensitive need for better ALS treatments with concerns for scientific rigor and avoidance of unnecessary risks, burdens, and costs.

Platform Trial

Recently, an innovative platform trial (NCT04297683) has been launched for ALS [92]. This trial creates a shared, ongoing clinical trial infrastructure in which different ALS drugs can be tested on a continuing basis using a shared protocol. This design improves efficiency owing to the standing trial infrastructure, and fewer participants are assigned to placebo because placebo groups are pooled across regimens for analysis. Platform trials for other diseases, particularly in the cancer field, offer additional adaptive advantages, such as disease subtype stratification based on biomarkers or phenotype and response-adaptive randomization, where frequent efficacy analyses allow randomization allocations to be adjusted in real time [93, 94]. The current ALS platform trial does not yet incorporate this adaptive agility due to current limitations in biomarkers and the inability to quickly measure treatment response. The current ALS platform trial also enrolls a relatively heterogeneous ALS population and utilizes the ALSFRS-R as the primary outcome measure, which may reduce study power compared with other current trial approaches that use updated outcome measures or statistical enrichment techniques.
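The efficiency of a shared placebo group can be illustrated with simple arithmetic (an idealized assumption, not the actual allocation scheme of the ALS platform trial): if k active regimens are each compared against a single concurrently randomized placebo arm the same size as one active arm, the fraction of participants receiving placebo falls from 1/2 in separate 1:1 trials to 1/(k+1).

```python
# Idealized illustration of shared-placebo efficiency in a platform trial.
# Assumes equal-sized arms; real platform trials use more nuanced allocation.

def placebo_fraction_separate() -> float:
    return 1 / 2                    # each stand-alone trial randomizes 1:1

def placebo_fraction_platform(k_regimens: int) -> float:
    return 1 / (k_regimens + 1)     # one shared placebo arm among k active arms

for k in (2, 3, 5):
    print(f"{k} regimens: {placebo_fraction_platform(k):.0%} on placebo "
          f"vs {placebo_fraction_separate():.0%} in separate trials")
```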

Biomarkers

ALS researchers are increasingly recognizing the importance of ALS biofluid biomarkers in clinical trials. For many past ALS trials with negative results, it is unknown whether the drug failed to engage the intended target or if the mechanism of interest is not an effective strategy for ALS treatment. Biomarkers of target engagement are critical for informative and decisive clinical trials and should be incorporated into early phases of drug development. The ideal biomarker candidate would show response to treatment faster than currently used clinical measures, which would greatly improve trial efficiency and allow for more adaptive trial designs. For candidate biomarkers to serve as a surrogate measure of clinical response, research is needed to define how changes in biomarker levels correlate with clinical outcome measures and how biomarker levels or changes predict clinically relevant outcomes.

Neurofilaments have emerged as promising biomarkers to assess treatment response. Neurofilament light chains and heavy chains in both CSF and serum are elevated in patients with ALS compared to controls and correlate with disease progression as well as survival [95, 96]. Neurofilament light chain levels may have a stronger association with poorer prognosis [97, 98]. Studies have also shown relative stability of neurofilament levels over time, a promising feature for a biomarker that could potentially be used as an indicator of treatment response [98], assuming a correlation between neurofilament levels and disease progression. Neurofilament levels are currently a useful adjunct measure for many clinical trials, but it is still unknown what magnitude of neurofilament change should be considered meaningful or how to define a treatment response. Recent studies highlight these uncertainties: the phase 1–2 study of tofersen in SOD1 ALS showed reduction of neurofilament light chains with treatment without a corresponding benefit on clinical outcome measures [99], while the AMX0035 study showed benefit on its primary clinical outcome measure without a significant reduction in neurofilament heavy chains [85]. While the future of neurofilaments as a marker of treatment response is unclear, research in pre-symptomatic ALS gene carriers has shown that neurofilaments could serve as a promising biomarker of conversion to symptomatic ALS, preceding the detection of clinical ALS symptoms [100]. This novel approach is being employed in the ongoing ATLAS study (NCT04856982), in which pre-symptomatic SOD1 gene carriers will initiate treatment after detection of elevated neurofilament levels but before the onset of clinical symptoms [101].

Selection of appropriate biomarkers depends on the mechanism of the therapeutic agent and the specific patient population being tested. Other potential biomarkers that warrant further study include measures of oxidative stress or neuroinflammation [102]. PET imaging has been considered as a biomarker of target engagement for drugs targeting CNS neuroinflammation due to its ability to characterize microglial activation and its correlation with rate of disease progression [103, 104]. However, there are practical limitations, including cost, availability only at select tertiary imaging centers, and the need for participants to lie flat for testing [105]. In the future, it is hoped that biomarkers can be used to guide a precision medicine treatment approach; for example, ALS patients with specific inflammatory profiles might be responsive to treatment with a targeted anti-inflammatory agent. This type of approach has not been successful to date [106] but warrants further exploration, particularly as the scientific community furthers its understanding of underlying ALS disease mechanisms. Even in therapeutic development for seemingly sporadic ALS, it is possible that genetic modifiers play a role in differential response to treatment, as suggested by a post hoc analysis of the pooled lithium trials showing that UNC13A carriers demonstrated a beneficial treatment response [107].

Diversity, Equity, and Inclusion in ALS Research

Future research studies must prioritize equity and diverse participant recruitment to ensure that treatment advances will benefit all patients with ALS. According to the Centers for Disease Control and Prevention National ALS Registry, which compiles data from three national administrative databases and includes self-reported data, 83% of incident ALS cases in the USA occur in white patients [108]. However, the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database, which combines pooled data from more than 17 representative ALS clinical trials, shows that 95% of ALS clinical trial participants are white [86]. The reasons for this discrepancy are likely multifactorial. Many ALS trials recruit participants with shorter disease durations, but black patients experience, on average, a diagnostic delay that is 8 months longer than that of white patients, reducing the window of clinical trial eligibility [109]. Broader studies examining barriers to research participation among minority communities identified a weak relationship with the medical and research community as well as the high cost of study participation as common concerns that limit inclusion in research [110]. Future ALS clinical trials must consider ways to engage minority and underserved communities to improve equitable access to trials and the generalizability of study results.

ALS Patient Advocacy and Patient-Centric Trials

A recent trial of Lunasin (NCT02709330) serves as an interesting example of a patient-centric trial design [111]. The trial had very broad inclusion criteria, including participants with long disease duration or on mechanical ventilation who are not typically eligible for interventional trials. The study was an open-label design with outcome measures collected online, and it achieved rapid recruitment and high adherence and retention rates. As the study was of a non-prescription supplement, the protocol was published so that interested participants could follow along at home without formal study enrollment. The study also included a biomarker analysis of histone acetylation, which showed a lack of target engagement. The Lunasin ALS study provided an efficient design to evaluate for the presence or absence of a large treatment effect and could be used as a model for studying other supplements or medications that are already FDA-approved and have known favorable safety profiles. It is important to note that this study design would not allow detection of modest treatment effects and would be insufficient when robust safety monitoring is needed. In addition, the use of historical controls, as in this trial, is specifically discouraged by the FDA due to scientific limitations, and thus placebo-controlled trials are required for FDA approval of new drugs [112].

Many recent ALS clinical trials have benefited from increased input from patient advisors and advocates. Both ALS Clinical Trial Guidelines and FDA guidance for drug development recognize the importance of patient input in clinical trial design [113]. Patient input into study protocols can help reduce unnecessary or intolerable burdens of study activities, and advocacy efforts have led to more frequent open-label extension programs, which, in addition to helping recruitment efforts, also add to scientific discovery by providing additional long-term safety and exploratory efficacy data. The ALS Clinical Research Learning Institute (ALS-CRLI), a patient-driven program that trains ALS research ambassadors about research and clinical trials, serves as a valuable template for building partnerships between the scientific community and the patients they serve. Graduates of the ALS-CRLI have gone on to serve as trial advisors and patient advocates, strengthening the design of trials and building trust within the ALS community [11]. ALS advocates have called for increased access to experimental treatments, citing higher risk tolerance in the setting of an incurable fatal disease and principles of autonomy. However, ALS clinicians and scientists have concerns about the risks of bypassing appropriate regulatory oversight and the potential for predatory practices to harm ALS patients, who are in a vulnerable position [89, 114].

Past experience with diaphragm pacing systems (DPSs) for the treatment of ALS serves as a cautionary tale for unproven treatments. Initial use of DPS in ALS was based on results of small, uncontrolled studies in the early 2000s [115,116,117]. The process for FDA approval of devices through a Humanitarian Use Device Exemption requires little scientific justification and does not require the same level of testing as drug approval, and as a result, DPS was FDA approved for use in ALS in 2011 [118]. While some ALS clinicians were wary of the safety profile and scientific rationale for DPS in ALS [119], others presumed that DPS was safe and possibly effective and offered it as part of standard treatment at the request of interested patients. Years later, randomized controlled trials showed that these devices were actually harmful in patients with ALS. A UK randomized controlled trial was terminated early due to safety concerns when patients implanted with DPS were found to have a median survival of 11 months compared with 22.5 months in the group using non-invasive ventilation alone [120], and these findings were replicated in a second French randomized controlled trial that also terminated enrollment early due to accelerated mortality in the DPS-treated group [121]. This experience served as a reminder of the importance of well-designed randomized controlled trials and the harms of relying on uncontrolled or anecdotal data. While predictive modeling and well-matched historical controls can be considered as supplementary approaches to randomized controlled trials, and statistical techniques can be used to reduce the size of placebo groups, typical ALS trials will require a placebo arm to adequately assess safety and detect efficacy given the modest effects of most candidate treatments and the variability of disease progression. Moreover, FDA guidance currently discourages reliance on historical controls to support drug development [112].

ALS stakeholders and advocates have brought increasing attention to expanded access programs and “Right-to-Try” laws as a means of accessing promising experimental therapies before regulatory approval occurs. To date, practical and logistical barriers have limited utilization of these programs. It is hoped that the recent passage of the Accelerating Access to Critical Therapies for ALS Act (ACT for ALS) in December 2021 [122] will allow more patients to access investigational drugs while still promoting concurrent high-quality clinical trials to obtain decisive safety and efficacy results. This bill establishes a grant program to support expanded access programs, which is hoped to overcome the time and resource barriers that have prevented adoption of these programs in the past. The impact of this bill on reducing barriers and improving participation in expanded access programs is still unknown, but it is a promising approach for meeting the priorities of the ALS advocacy community without compromising scientific rigor or the ability to complete decisive phase 3 clinical trials.

Conclusion

ALS clinical trial design is complex and depends on the specific goals of the study and the mechanism of the therapeutic agent. Future ALS trials will be strengthened by combining the strategies discussed in this review: refining clinical outcome measures, applying thoughtful statistical enrichment techniques, reducing infrastructure barriers, and utilizing drug-specific biomarkers of target engagement. ALS investigators and pharmaceutical companies should select outcome measures and power studies carefully based on the mechanism of action, the expected variability of the outcome, and the psychometric properties of the measurement tool. Investigators should strongly consider including promising novel outcome measures as secondary or exploratory outcome measures to facilitate faster validation of new tools. Biomarker development is critical to assess target engagement and should be incorporated into early-phase studies. However, current biomarkers can only serve as adjunct tools and are not yet adequate for predicting clinically relevant outcomes. As biomarker and mechanistic research improves, so will the ability to pursue precision-medicine treatment approaches for ALS. Investigators are urged to prioritize diversity, equity, and inclusion to improve the validity and generalizability of their research findings. Ongoing partnerships with patient advocates and stakeholders will continue the development of patient-centric approaches that serve the needs of the ALS community while maintaining scientific rigor. Current ALS therapeutic development relies on stepwise incremental progress, and optimizing clinical trial design will improve the efficiency and reliability of this process.