Introduction

Why do we create computational models? Simply put, to help us move from costly and inefficient trial-and-error empiricism towards mechanistic, hypothesis-driven, and evidence-based systematic processes to develop clinical treatments and products [1, 2]. Along these lines, computational modeling has had profound impacts on our scientific understanding. For example, computational models of sensorimotor control and plasticity underlie the design and application of neuromodulatory approaches to enhance motor function during development, aging, and following neurological injury or disease [3,4,5,6,7,8,9,10,11]. Additionally, computational musculoskeletal models have been used to inform treatment decisions in orthopedics and sports medicine. Computational models describing the trajectories of development, disability, and recovery have the potential to help prioritize and focus treatment. Moreover, models of musculoskeletal dynamics and neural control are regularly used by researchers to design and implement control strategies of assistive technologies [12,13,14,15,16,17,18]. The impacts of computational modeling are only set to increase in the coming decades: the emergence of multimodal remote sensors, machine learning, and multiscale datasets (from genomics to behavior) will enable a future in which personalized neurorehabilitation that adapts throughout the course of treatment becomes the norm. In fact, the interest in these areas continues to accelerate (Fig. 1).

As interest and potential impact in these areas accelerate, we convened the NSF DARE Conference: Transformative Opportunities for Modeling in Neurorehabilitation, which brought together experts and trainees to discuss critical questions, challenges, and opportunities at the intersection of computational modeling and neurorehabilitation. Prior to the conference, through discussions amongst the PI team, Advisory Board, and federal funding representatives, we identified four Focus Areas of high growth and potential impact for computational modeling in rehabilitation research (Table 1). They represent—in our opinion—pressing challenges and timely opportunities within neurorehabilitation where advances in science, computational methods, and implementation can converge for actionable change. These areas also exemplify the potential for innovation and impact when merging multiple modeling methods (e.g., machine learning and physics-based models) with technology (e.g., exoskeletons or wearable sensors) to support scientific understanding, target neurorehabilitation outcomes, and improve quality of life. These areas are highly synergistic with NSF’s Disability and Rehabilitation Engineering Program and NIH’s 2021 Research Plan on Rehabilitation.

In this paper we summarize and comment on the perspectives and insights from the expert community assembled to help establish a foundation by which computational modeling can create the scientific directions, theories, and actionable platforms to improve the efficacy and personalization of neurorehabilitation. This commentary is based upon discussions with participants during the meeting and among the co-authors during the writing of this paper. Mirroring the structure of the meeting, we visit and comment on each Focus Area in turn. It is important to note that the main conclusions are summaries drawn by the authors; as such, they represent the viewpoints of the authors and not a consensus process of conference attendees. Three companion papers in this same issue provide further conclusions and key take-aways from other constituent groups who participated in the conference (e.g., trainees, clinician scientists, and federal funding representatives), with each set of authors providing their own point of view. Those papers are titled, respectively:

  • NSF DARE—Transforming Modeling in Neurorehabilitation: A Patient-in-the-Loop Framework

  • NSF DARE—Transforming Modeling in Neurorehabilitation: Clinical Insights for Personalized Rehabilitation

  • NSF DARE—Transforming Modeling in Neurorehabilitation: Perspectives and Opportunities from US Funding Agencies

Fig. 1: The field of computational neurorehabilitation has grown tremendously over the preceding decades. Usage of the terms “computational” and “neurorehabilitation” in articles indexed by PubMed. Data generated with PubMed by Year (https://esperr.github.io/pubmed-by-year/)

Table 1 Focus areas for the DARE 2023 conference

Modeling adaptation and plasticity

Brief background

Adaptation and plasticity are central to our nervous system’s ability to acquire new skills, adapt them to changing situations, and recover function after injury. The role of synaptic plasticity in sensorimotor learning and adaptation is the subject of much work described in several reviews [19,20,21,22]. Here, our main interest is the role of plasticity in the recovery of function after injury for neurorehabilitation. For example, after stroke, neurons re-wire connections both immediately surrounding the injury and across distant brain areas [23,24,25,26,27,28], and these changes in connectivity are correlated with improvements in function with and without rehabilitation [27, 28]. Critically, adaptation and plasticity are not guaranteed to be fast, well-guided, or beneficial (i.e., adaptive plasticity). Plasticity can also be maladaptive [29, 30]; focal dystonias [31, 32] offer a cautionary analog for neurorehabilitation.

A key goal of neurorehabilitation therapies is to promote plasticity mechanisms that improve function while also mitigating maladaptive changes to neural circuits. Many therapeutic approaches have been proposed and attempted to achieve this goal—with varying degrees of success—ranging from targeted behavioral training to implantable devices that stimulate neural circuits.

Computational modeling of adaptation and plasticity can provide paths to maximize the impact of neurorehabilitation therapies [3, 33]. The plasticity and adaptation that occur after injury span many levels of the nervous system, from cellular processes (e.g., changes in ion channel expression) [34], to network interactions (e.g., changes in synaptic connections) [27], to behavioral changes (e.g., compensatory strategies) [35]. As a result, a wide range of modeling methods have been used to describe plasticity at different levels [36]. In lieu of an exhaustive survey of existing models, we highlight some useful categories of model types. For a particular phenomenon, such as synaptic plasticity, models may focus on different levels of abstraction. For example, phenomenological models like Hebb’s rule describe input-output relationships between the rate/timing of neural activity and connection changes, describing the computational principle without directly modeling the biological implementation [37]. Biophysical models of spike-timing dependent plasticity, in contrast, describe the physiological changes within neurons that give rise to synaptic changes [38]. Data-driven models (i.e., machine learning) have also been employed for modeling a variety of plasticity phenomena (e.g., [39]).
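To make the distinction between these levels of abstraction concrete, the minimal sketch below contrasts a rate-based Hebbian update with a pairwise spike-timing-dependent (STDP) window; the latter is itself still a simplification of the biophysical models cited above. The parameter values are illustrative, not drawn from the cited studies.

```python
import numpy as np

def hebbian_update(w, pre_rate, post_rate, lr=0.01):
    """Phenomenological Hebb's rule: the weight change is proportional to the
    correlation of pre- and postsynaptic firing rates (no biophysics)."""
    return w + lr * pre_rate * post_rate

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pairwise STDP window: potentiation when the presynaptic spike precedes
    the postsynaptic spike (dt_ms > 0), depression otherwise."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

w = hebbian_update(w=0.5, pre_rate=20.0, post_rate=15.0)  # rates in Hz
dw = stdp_delta_w(dt_ms=5.0)                              # pre leads post by 5 ms
```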

A variety of model types, spanning different levels of the nervous system, have been used to describe how the nervous system will respond to an intervention and thereby inform rehabilitation therapies [3, 40]. Mechanistic models of how behavior evolves as we adapt to altered dynamics, such as a split-belt treadmill, have informed training interventions to improve gait post-stroke [41]. Phenomenological Hebbian plasticity models have informed stimulation protocols to increase the functional connections among regions in the brain [11, 42, 43], and from the brain to muscles or spinal circuits [44,45,46,47]. Models describing nervous system changes over time are also valuable for predicting outcomes and guiding clinical decision-making. Many examples of these models rely on data-driven discovery from large datasets. For example, the increasing prevalence of neural imaging technologies in clinical practice has led to large datasets used to develop algorithms that predict functional recovery after stroke [48]. Machine learning approaches have also been used to assess whether devices like non-invasive brain-computer interfaces will be effective [49, 50] and to identify optimal parameters for therapies like deep brain stimulation [51].

These examples highlight the diversity of plasticity models and applications in neurorehabilitation. As with any other computational modeling effort [2], decisions must be made about the level of abstraction and detail. The challenge of these decisions is acutely clear in the realm of adaptation and plasticity, where mechanisms span spatial scales from synapses to behavior, and timescales from milliseconds to months [20]. Many existing models used for neurorehabilitation focus on a single scale (e.g., describing behavioral changes). Models that bridge neurological mechanisms of plasticity to behavior will likely be needed to improve the precision of neurorehabilitation therapies. Such models will require cross-disciplinary collaboration to develop and validate. Similarly, most existing models focus on describing a particular time-point, such as functional recovery after a certain time with a particular, static therapy. The many time-scales of adaptation and plasticity present challenges for modeling overall trajectories, including the impact of interventions and changes in treatment over time [52].

Extending models of plasticity to span spatial and temporal scales could open new ways to harness the power of computational methods in neurorehabilitation. The dynamic nature of the nervous system creates a variety of challenges for building therapies. Ensuring that an assistive device provides meaningful functionality for extended periods of time requires characterizing plasticity that may occur in response to the device and developing devices that can adapt accordingly [53]. Similar considerations are needed for therapies whose protocols may need to adapt over time as abilities change [54]—a form of meta-adaptation that mirrors meta-plasticity (i.e., ‘plasticity of plasticity’ [36]). Achieving the goal of smart, personalized, and adaptive neurorehabilitation therapies will require models that can capture the dynamics of plasticity processes as well as the human-device interactions that will influence those dynamics. This will require new approaches to bridge across models that predict how the nervous system will change in response to a given intervention and those that describe how interventions impact the trajectory of changes in the nervous system and in behavior over time.

Commentary

The DARE workshop highlighted many fundamental challenges and opportunities in modeling adaptation and plasticity for rehabilitation applications. Multiple presentations (see Appendix for speaker summaries) spoke to the promise of using computational models to disentangle the diverse learning mechanisms used by the nervous system (e.g., Roth, Mariscal). Mechanistic models such as those used by Roth shed light on the neurophysiological underpinnings of disorders; their findings, for instance, suggest that Parkinson’s disease can lead to deficits in a single learning mechanism while leaving others intact. Similarly, the data-driven methods used by Mariscal to identify components of learning allowed them to characterize how learning generalizes to new contexts more precisely than past studies. A critical next step missing from most workshop submissions is using these model-derived insights to directly guide clinical therapies. Future work towards these efforts will have to contend with challenges closely related to those faced in personalization efforts (see below). For example, do models and their parameters need to be estimated for populations or for individuals? Models may also require updating over time as learning proceeds, closely mirroring challenges faced in human-device interactions.

Multiple presentations (e.g., Liew, Orsborn, Hight, Schwock) aimed to characterize the plasticity that occurs as the result of therapies, interventions, or injuries. Schwock’s work highlights the potential of computational models to quantify changes between regions of the nervous system when those regions are embedded within a large network (see also [55]). This work highlights the challenge of identifying the most useful measures of nervous system plasticity, since nearly all metrics will be approximations. Data-driven studies probing how physiological measurements relate to clinical outcomes will likely be critical to identify the most useful experimental and computational measures of plasticity for neurorehabilitation. Collaborations between researchers developing novel assays of plasticity and those using large clinical datasets to predict clinical outcomes, such as discussed by Liew, will be invaluable for future research and translation. Such collaborations will likely involve navigating the challenges of measurement feasibility, which further highlight the promise of extending neurorehabilitation ‘in-the-wild’ (see below) and of research into quantifying plasticity metrics.

Designing interventions that induce plasticity is central to any rehabilitation effort. Data-driven predictive models, such as those developed by Liew, provide methods for predicting how someone may respond to an intervention dose. However, these models have largely been used to predict a single endpoint, which may miss dynamic interactions between plasticity and an intervention, as highlighted by other presentations (e.g., Orsborn, Hight). Research with brain-computer interfaces and cochlear implants demonstrates that even interventions intended to replace a function (rather than rehabilitate it) induce plasticity. This plasticity may be influenced by how the device is designed (e.g., Orsborn’s investigations into co-adaptation with brain-computer interfaces), and could be further manipulated by purposeful device interventions (e.g., vagus nerve stimulation presented by Hight). User-device interactions thus open a huge opportunity to shape plasticity for rehabilitation. Capitalizing on this opportunity, however, will require improving models of how devices induce plasticity. Translating methods to shape plasticity with devices into meaningful clinical therapies will also require methods to predict functional outcomes.

Beyond these examples of scientific challenges, we also noticed important challenges and opportunities in creating the scientific community needed to tackle them. All talks focused on plasticity, but we were particularly struck by the topic diversity. For instance, the presentations spanned upper limb movements (Roth, Orsborn), locomotion (Mariscal), clinical sensorimotor function assessments (Liew), and hearing/speech (Hight). The methods used to quantify plasticity were similarly diverse, ranging from behavior (Roth, Mariscal, Liew) and clinical neuroimaging (Liew) to high-resolution electrophysiology (Orsborn, Schwock). This breadth fostered rich discussions across sub-fields that do not regularly interact. Integrating the knowledge gained from this diversity of methods and applications, and refining models of plasticity and adaptation for rehabilitation, will require bridges across these communities and translating terminology between fields.

Modeling for personalization

Brief background

Computational models of neuromuscular function for neurorehabilitation can be used to aid clinical (i) classification, (ii) explanation, and/or (iii) prediction at any or all of the stages of patient intake, treatment, or follow-up. While there are multiple computational modeling approaches and techniques [2], they often fall into the two broad categories of statistical (descriptive) vs. mechanistic models, both of which are, in George E.P. Box’s words, ‘useful fictions’ [56]. What is personalization in this context? Importantly, the degree of personalization is in fact a spectrum of granularity: from a particular ion channel, cell, or neuron, to a neural circuit, to an individual, to a subset of individuals, to a particular population (Fig. 2). As per Occam’s Razor, modelers should aim to model at the coarsest necessary level, with the fewest assumptions needed to ask and answer questions about function, recovery, or interventions in a useful and mechanistic manner.

Fig. 2: The personalization spectrum for a particular population across physiological scales and numbers of individuals. Models at a given scale that are assumed to characterize a specific individual are often called ‘patient-specific’, and population models ‘generic’ as they are assumed to apply to many individuals. Free images adapted from Clipart Panda, Muscular Systems, and pngegg

There have been long-standing debates on whether both statistical and mechanistic models could, or should, be generic (i.e., apply to the entire population) vs. patient-specific (i.e., apply to a single individual). From this perspective, the current discussion about personalized models, ‘digital twins’, personalized medicine, etc. is simply the latest iteration of this long-standing debate, which can be traced back to the epistemological origins of clinical diagnosis (which assumes multiple patients can be thought of as having the same disease), clinical trials (where all patients are expected to have an average response to the same treatment), and biomechanical modeling (where a model can in principle represent a given population or individual well enough). What is often not stated explicitly about digital twins is the degree to which they are aspirational, because they are, by construction, difficult to develop and impossible to validate. They inhabit the bottom left corner of the personalization spectrum, as they should apply to a specific individual and be accurate at a sufficiently small scale to capture the physiological processes relevant to the clinical question. To bring clarity to these longstanding questions and their recent iterations, without claiming to resolve them, it is important that we be clear on what a computational model is, and how this informs our efforts to achieve modeling for personalization.

A computational model for neurorehabilitation is, at its best, a mathematical or numerical representation of hypotheses about function, dysfunction, and/or response to treatment. Thus, it is important to explicitly distinguish between a model’s topology vs. its parameter values [2, 40, 57]. The topology of a computational model is its structure (at the appropriate scale), explicitly defined by the type, number, and organization of the elementary ‘building blocks’ and their interactions. The parameter values are then the particulars associated with each building block and their interactions. The hypothesis can be cast in the form of patterns or associations (for statistical models) or principles at work (for mechanistic models).

In the case of the broad category of statistical models, a data set is used to tune the parameters of a given mathematical representation of relationships (e.g., linear regression, principal component analysis (PCA), and artificial neural networks (ANNs) via, respectively, slope and intercept; PC loadings and variance explained; and weights among nodes and layers). Statistical models are often called ‘black-box’ models because the model topology and its parameters have little or no direct, physically or physiologically evident relationship between cause and effect. That is, they can be considered to be unencumbered by a hypothesis—if by hypothesis we mean a mechanistic explanation of a phenomenon. While the term ‘prediction’ is often used with statistical models (e.g., a strong and significant linear regression allows feature x to ‘predict’ feature y in a given population), this does not imply causality. Nevertheless, statistical models are useful, have their place, and are popular because they can be quite powerful at finding unplanned associations by using all features in large data sets.
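As a minimal illustration of the topology-versus-parameters distinction for statistical models, the sketch below tunes two different ‘topologies’ on the same synthetic data: a linear regression (parameters: slope and intercept) and a PCA (parameters: loadings and variance explained). The data and numerical values are made up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)                                 # e.g., a clinical feature
y = 2.0 * x + 0.5 + rng.normal(scale=0.3, size=100)      # an outcome measure

# Topology 1: linear regression; its parameters are the slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)

# Topology 2: PCA; its parameters are the PC loadings and variance explained.
X = np.column_stack([x, y])
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]                             # PC loadings (columns)
var_explained = eigvals[order] / eigvals.sum()           # fraction per component
```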

The topology of mechanistic models, in contrast, is the unambiguous statement of the assumed physiological mechanisms and the causal relationships among them (i.e., a formal hypothesis). For example, computational models can be based upon modules simulating muscles, the spinal cord, and/or the control of movement. However, modelers very quickly find themselves having to make difficult decisions about modeling scope and level of detail—which are determined by the experience and skill of the modeler, the computational resources available, and the clinical question being asked. These decisions also directly define and affect the ability of the mechanistic model to be personalized. These modeling choices have naturally led to valuable debates about choosing ‘simple’ vs. ‘complex’ models. The dilemma across levels of complexity is that more detailed models are more personalizable (i.e., they have a greater number of free parameters to adjust to match a given individual), but require more and better knowledge of mechanisms—and experimental data to validate their more detailed topologies and greater number of parameters.

Commentary

Considering that ‘personalization’ is a spectrum across scales and numbers of individuals (Fig. 2), what we need are models within that spectrum that provide good-enough assessments of impairment for a given patient (or population) to improve recovery [58,59,60]. From the literature and outstanding presentations at the conference (see Appendix), we were reminded that, for successful personalization, one must be clear about the goal or targeted level of function and follow through with appropriate statistical or physics-based models whose topology and parameters suffice for a specific clinical need. For example, human-in-the-loop optimization is becoming practical, as demonstrated by Collins’ work on adaptable exoskeletons; it allows guiding the hardware-algorithm-human dynamical system (which is often difficult to build) towards improving the client’s function. Saul made a clear case for the level of personalization needed to achieve useful results. Patton reminded us that personalized assistive devices and approaches need not be complex actuated devices: computational modeling, which can be complex, can be used to create personalized devices that use simple, passive elastic elements. In fact, such devices are able to reach much greater numbers of users worldwide than powered, computer-controlled versions. Lajoie underscored how the within- and inter-subject variability of response to neurostimulation (both in mapping stimulation location and in mapping parameters at a given location) continues to be a critical bottleneck that prevents effective personalization of neurorehabilitation. However, they demonstrated a framework requiring few data points (i.e., few-shot) to adapt the response to neurostimulation, which would enable effective personalized neurostimulation (a form of meta-learning, or learning to learn concepts quickly [36]). Allen presented evidence that computational methods to extract motor coordination strategies fare better at capturing the control of walking when they also include the control of whole-body balance. Finally, Lin emphasized how critical it is to collect outcome measures at the point of care that are relevant to treatment; moreover, carefully chosen outcome measures available in the clinical setting can have meaningful neurologic underpinnings that could enhance the utility of computational modeling in neurological conditions and stroke.

The amount of data needed to personalize a given therapeutic approach was a common thread across all approaches presented. While the distinction between generic and patient-specific models can be seen as a dilemma in terms of the amount of data needed, one can argue it is, in fact, a false choice. One possible way out of this dilemma is to combine mechanistic and statistical approaches to create data-driven clusters of a finite number of models (i.e., stochastic models) [57] that allow a meta-learning, few-shot framework. Similarly, the data available to fit models, typically in the form of repeated clinical tests, has often been sparse, leading to models that were necessarily simple to avoid overfitting [8, 61] (see [40]). Compounding the problem is that parameter estimates are often based on ‘point estimation’ methods such as least-squares fits to average or individual data. However, recent developments in Bayesian modeling, e.g., [52], seamlessly generate credible intervals for these predictions and can incorporate prior information (e.g., parameter means and variances) from previous studies to further improve individual predictions. In addition, hierarchical Bayesian modeling, which involves simultaneous estimation of the population-level parameters and the individual-level parameters given the data from all participants, can improve predictions when little data is available for a new patient by ‘borrowing’ knowledge from previous patients—see [52] for a recent example.
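As one hedged illustration of this idea (not the model in [52]), the sketch below specifies a partially pooled hierarchical Bayesian model of recovery curves in PyMC. The exponential recovery form, priors, and synthetic data are assumptions chosen only to show how individual-level parameters ‘borrow’ strength from population-level ones.

```python
import numpy as np
import pymc as pm

# Synthetic data: 8 patients, 5 visits each, of an illustrative motor score.
n_patients, n_visits = 8, 5
t = np.tile(np.arange(n_visits, dtype=float), (n_patients, 1))       # weeks
rng = np.random.default_rng(1)
y = 50 + 30 * (1 - np.exp(-0.4 * t)) + rng.normal(0, 3, t.shape)

with pm.Model() as model:
    # Population-level (shared) parameters: new patients 'borrow' from these.
    mu_rate = pm.Normal("mu_rate", mu=0.5, sigma=0.5)
    sd_rate = pm.HalfNormal("sd_rate", sigma=0.5)
    # Individual-level recovery rates drawn from the population distribution.
    rate = pm.Normal("rate", mu=mu_rate, sigma=sd_rate, shape=n_patients)
    baseline = pm.Normal("baseline", mu=50, sigma=10)
    gain = pm.Normal("gain", mu=30, sigma=10)
    sigma = pm.HalfNormal("sigma", sigma=5)

    # Exponential recovery curve for each patient at each visit.
    pred = baseline + gain * (1 - pm.math.exp(-rate[:, None] * t))
    pm.Normal("obs", mu=pred, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000)   # posterior yields credible intervals
```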

As a final comment, we would be remiss if we did not mention the ongoing revolution in machine learning driven by Transformers and their variants (e.g., Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs)), which rose to prominence c. 2017 [62] and are now advancing at an accelerating pace. There are exciting novel opportunities for this technology in all Focus Areas (Table 1). As a particular example, it is important to note that this technology has the very promising ability to bridge the gap between generic and patient-specific computational models. An instant classic is the example of ChatGPT’s ability to ‘learn’ a language and then tailor sentences to the style of writing of a given author [63], which is a form of transfer learning from the generic to the particular. Such an approach is already being applied to, for example, training a transformer on numerous examples of brain structure to then detect a patient-specific anomaly (i.e., a tumor) in spite of inter-subject variability [64].

Modeling human–device interactions

Brief background

Devices are ubiquitous in our everyday lives. Since the advent of simple tools and machines such as the lever, wheel, and pulley (as well as assistive devices like canes, pouches, and wheelchairs), humankind has developed increasingly complex assistive devices to reduce the physical effort and time required to move and support ourselves and objects in our environment. Assistive devices have a long history dating to prehistoric times, from Egyptian hieroglyphs showing the use of staffs and canes [65] to medieval innovations like Gottfried von Berlichingen’s iron hand of 1504 [66]. Since the middle of the past century, robotic devices have been formally proposed as potentially transformative therapeutic tools for physical rehabilitation [67,68,69,70]. Today, rehabilitation robotic devices can be configured to allow precise control of the position and orientation of select body segments, or of the load applied to them, and this can be done over many repetitions without marked changes in the robot’s performance. As a result of these capabilities, rehabilitation roboticists and therapists recognized the opportunity for devices to offload some of the physical demands that therapists encounter during taxing rehabilitation interventions such as body-weight supported treadmill training [71, 72]. Robots can also facilitate the repetitive, task-specific practice necessary to provide the dose needed to drive motor learning via adaptation and plasticity and, if integrated with game-based interfaces, provide a means by which therapy can be made more engaging [73]. Outside the clinic, devices such as powered wheelchairs, exoskeletons, and prostheses can act as assistive devices that compensate for weakness and potentially increase overall activity levels [74]. Due to the large number of potential devices and parameters, there is an urgent need to use modeling, among other approaches, to move away from empiricism in the design and operation of such devices [1].

When considering the design and ultimate real-world use cases of devices in rehabilitation, one must determine an appropriate engineering control strategy that aligns with the theoretical and physiological basis for a given therapeutic intervention. Early rehabilitation robots relied on ‘position control’ to guide the user’s limbs through prescribed trajectories [72] based, in part, on animal studies which showed that passive movement could engage spinal circuits involved in behaviors such as walking [75]. While there may still be select applications in which this control strategy is appropriate, it is often inadequate because patients need to be actively involved in rehabilitation, but naturally ‘slack’ if a robotic device is allowed to do all the work of moving the limb for them [76]. Assist-as-needed algorithms partially address this concern by only providing assistance if the client also exerts a given level of effort and does not become a free-rider [77]. A key element that is often overlooked when designing and evaluating devices for rehabilitation is how sensory deficits, which are particularly common but often difficult to assess, impact the efficacy of training. Additionally, we still lack a strong theoretical basis upon which we can personalize the dose of robotic rehabilitation interventions, though Schweighofer and others are actively developing statistical modeling approaches to forecast long-term recovery patterns [52].
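A minimal sketch of the assist-as-needed idea is shown below: assistance grows with tracking error but decays through a forgetting factor when the user performs well, so the controller does not reward slacking. The single-joint formulation, gains, and forgetting factor are illustrative assumptions, not a specific published controller.

```python
def assist_as_needed(q, q_target, assist_gain, k_p=20.0, forgetting=0.95, err_weight=0.5):
    """One control step for a single joint (angles in rad, torque in Nm).

    The adaptive assistance gain increases with tracking error and decays
    (forgetting factor < 1) when the error is small, so assistance fades as
    the user takes over the movement."""
    error = q_target - q
    assist_gain = forgetting * assist_gain + err_weight * abs(error)
    torque = k_p * assist_gain * error
    return torque, assist_gain

# Example: assistance shrinks across steps as the simulated user improves.
gain = 0.0
for q in [0.0, 0.2, 0.35, 0.48, 0.5]:       # user nears the 0.5 rad target
    tau, gain = assist_as_needed(q, q_target=0.5, assist_gain=gain)
```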

Devices used to facilitate rehabilitation are, of course, not confined to robotics. Both passive and active implantable devices are also critical elements of the rehabilitation continuum and present unique challenges not found in devices that are picked up and used, or worn, such as exoskeletons that can be donned and doffed. Implanted joint replacements and osseointegrated prostheses rely on a semi-permanent or permanent physical interface between a synthetic device and the musculoskeletal system [78]. To be successful, surgeons must choose the correct implant for a patient’s unique anatomical structure, and once the device is implanted, they often rely on past experience to estimate the resulting level of function and quality of life after surgery [79]. Traditional imaging methods such as X-ray and MRI can be used to create digital, anatomical models to improve the fit of implantable devices, but these methods are currently unable to determine how the surgery and subsequent recovery will affect the afferent feedback from the joint or the joint’s mechanical properties. Thus, computational models that integrate a finite element representation of the device along with dynamic models of local tissue properties and their changes over the course of recovery could potentially transform the field.

Neuromodulatory devices have become a critical tool for both assessing neuromotor function and delivering neurorehabilitation interventions. These devices can stimulate regions of the central and peripheral nervous system either by applying direct current or by generating magnetic fields that induce current in underlying neurons. Non-invasive transcutaneous stimulation of peripheral nerves is often used to measure nerve conduction velocity and test for the presence of peripheral neuropathies [80]. Similarly, transcranial magnetic stimulation has been used to estimate the functional integrity of the corticospinal tract [81], and these measures of integrity, as quantified by the size of stimulation-evoked electromyographic responses, form a central element of the PREP algorithm for predicting motor recovery in people post-stroke [82]. Invasive techniques such as deep brain stimulation have become indispensable elements of neurorehabilitation for people with Parkinson’s disease, as these devices help alleviate motor symptoms such as tremor and bradykinesia [83]. More recently, stimulation of the vagus nerve via implanted electrodes has shown promise for improving upper limb function in people post-stroke [84], presumably via mechanisms that include reduced systemic inflammation, heightened angiogenesis, and improved axonal regeneration [85]. Together, this broad spectrum of devices that support neurorehabilitation and daily function represents an area of high need and impact for computational modeling.

Commentary

One of the central themes that emerged during the DARE Conference was the need for both basic scientific studies and computational models to augment sensation for users of prostheses and other assistive technology. Commercial prostheses typically lack a means by which users can accurately perceive tactile information about the physical interaction between the device and the external world. If this information could be provided to the user through appropriate afferent channels, it may be possible to dramatically improve their ability to manipulate objects with an upper limb prosthesis or to improve balance control when this information is integrated into lower limb prostheses. However, the speakers at the meeting (see Appendix) noted two key areas of opportunity in this space. First, they acknowledged the need for precise computational models capable of encoding information about the mechanical interaction between the prosthesis and the external world. Second, they highlighted the need for computational models to inform decisions about how, and through which afferent channels, this information should be transmitted to the user.

The long-term effectiveness of devices for neurorehabilitation relies on the assumption that humans can adapt their sensorimotor control strategies to acquire the potential benefits of a given device-driven intervention. For decades, neuroscientists and engineers have focused on developing mathematical representations of the learning processes (see section on Adaptation and Plasticity) at play during human-device interactions [86,87,88], and this knowledge has been used to design control strategies for rehabilitation robots [89]. However, while models of sensorimotor learning often focus on aspects of learning thought to be mediated by supraspinal structures and cortico-spinal pathways, many neuromotor impairments result in hyperexcitability (and/or inhibition) of subcortical structures and of brainstem and proprio-spinal projections to alpha and gamma motoneuron pools, resulting in incorrect voluntary and/or undesirable involuntary responses when reacting to imposed movements or loads from robotic and wearable devices [90, 91]. Computational models will be indispensable for understanding how distributed networks throughout the central and peripheral nervous system influence performance and learning in health and disease.

Another major challenge for developing computational models of human-device interactions is that one would often like to know what objective(s) drives an individual’s behavior as they interact with a device. This is an ill-posed, inverse problem, as a given behavior could be ‘best’ for an infinite number of potential objective functions. There remains a need to develop new approaches, beyond work that focuses on minimizing energy cost or performance errors, to estimate the objectives that drive our behavior so that rehabilitation interventions can be better aligned with the patient’s explicit (e.g., walking faster) and implicit (e.g., minimizing the likelihood of a fall resulting from a slip or trip) goals. Efforts to infer objectives from observed behavior inevitably require computational models: these models are used to simulate the behavior that is optimal for a given objective function, and both the structure of the objective function and the corresponding costs are varied until one finds a function that produces a reasonable estimate of observed behavior [92,93,94]. However, these methods often consider only a limited set of potential costs and assume that humans can find optimal actions for a given cost function. We need to understand when these theories fail to predict behavior accurately and identify alternative theories that better capture the human side of human-device interactions. For reviews on habitual, feasible, good-enough, sub-optimal, and optimal motor learning and performance, see [59, 60, 95].
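The sketch below illustrates, under strong simplifying assumptions, the inverse approach described above: simulate the behavior that is optimal for each candidate weighting of ‘error’ and ‘effort’ costs, then keep the weighting whose prediction best matches the observed movement. The one-dimensional reach model, quadratic costs, and numerical values are hypothetical.

```python
import numpy as np

def simulate_optimal_reach(w_effort, w_error, target=0.3, n=50):
    """Closed-form optimum of a quadratic cost: reach amplitude trades endpoint
    error against effort (squared amplitude as a stand-in for effort)."""
    amplitude = w_error * target / (w_error + w_effort)
    return amplitude * np.linspace(0.0, 1.0, n)       # simplistic trajectory

observed = simulate_optimal_reach(w_effort=0.2, w_error=1.0)  # stand-in 'data'

# Grid search over candidate effort weights; keep the best-fitting one.
best = None
for w_effort in np.linspace(0.01, 1.0, 50):
    pred = simulate_optimal_reach(w_effort, w_error=1.0)
    fit = np.mean((pred - observed) ** 2)
    if best is None or fit < best[0]:
        best = (fit, w_effort)
print(f"Approximately recovered effort weight: {best[1]:.2f}")
```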

Delivering on the promise of wearable robotic devices requires that these devices be untethered from bulky power supplies and autonomously adaptable to the varying demands of the real world. One of the major recent innovations in the control of wearable exoskeletons is the development of online control optimization strategies that work during real-world walking [96]. While these strategies have yielded promising reductions in metabolic cost in young adults, it remains to be seen whether they work in populations with neuromotor impairments, such as people post-stroke. In addition, there is a need for continued innovation in device design to improve the likelihood that potential end-users will use these devices regularly. Computational methods can be used as part of a model-based design optimization to reduce weight and cost and potentially increase the accessibility of these innovations to diverse communities.

Similarly, ‘where’ computation happens is critical to the deployment and use of ‘smart’ assistive and rehabilitation devices. One option is the traditional von Neumann architecture, requiring a central processor and memory that takes inputs and produces outputs. Nature, in contrast, has evolved hierarchical distributed sensorimotor neural architectures, where computation happens throughout (centrally, in middleware, and ‘the edge’). This form of biological edge computing happens at subcortical, spinal, and even anatomical levels [97,98,99]. Therefore, successful smart neuro-assistive or neuro-rehabilitation devices (which are, in fact, a hybrid human+robot system engaged in a game-theoretic dance) would, like robots in general, do well to learn from such forms of biological edge computing for physical action.

Improving the quality of life for people with disabilities also requires that we develop more effective means by which people can navigate the digital world. The digital revolution has given us a range of powerful and relatively inexpensive devices we use to communicate with people worldwide. Although many of these devices have only existed for a short time, innovations driven by experts in human-computer interaction have resulted in intuitive user interfaces to control these devices. However, many neuromotor disorders, such as stroke, spinal cord injury, and amputation, reduce the number of options with which people can interact with the physical world. Rehabilitation engineers need to devote effort to improving accessibility to people with different levels of ability, and computational models provide a valuable means by which interfaces can be designed for efficient and inclusive use.

Modeling ‘in-the-wild’

Brief background

Extending impact outside of the clinic or laboratory (i.e., truly improving activities of daily living) is an essential element for effective, reliable, and personalized neurorehabilitation. Behavior ‘in the wild’ was the crucible in which evolution occurred. Thus it is ironic that, from the perspective of scientific inquiry and clinical applications, it is work and research ‘in the wild’ which has taken the longest to develop. It is only now that, for example, at-home clinical and rehabilitation applications are becoming affordable, possible, and even reimbursable in the United States via remote therapeutic monitoring mechanisms.

In contrast, clinics and laboratories have been the default controlled environments where patients can be asked to perform standardized assessments to evaluate function and recovery. In these environments, we aim to use highly repeatable and informative assessments that can provide actionable clinical insight to guide and inform neurorehabilitation. However, once we move outside of these environments into ‘the wild’, we lose many of the pillars, processes, and attitudes that support traditional clinical and scientific inquiry. Executing specific motions or having highly trained clinical hands guide an action is no longer possible nor desirable, and we must treat, control, monitor, learn, and infer from noisy data collected during variable real-world actions in non-idealized environments. Yet, being able to assess and extend neurorehabilitation into these environments is essential to support long-term function and quality of life—and the true needs of our clients.

Beyond the scientific utility of in-the-wild research, we also have a social imperative to expand services into the wild to increase access, including delivery of essential services to under-served populations. Transportation to and from clinic visits remains one of the largest barriers to rehabilitation care [100,101,102,103]. During the pandemic, the growth and impact of telerehabilitation demonstrated the feasibility and potential of remote assessment and monitoring to improve health outcomes [104,105,106,107].

How we arrived at the current model of limited in-patient and out-patient options for rehabilitation at a clinic is a function of historical, social, economic, and political influences. But we are seeing a paradigm shift, made possible by innovations in internet connectivity, wearable and ubiquitous sensors, and cloud storage/analytics, and, most importantly, by social and economic changes in the reimbursement landscape due to the COVID-19 pandemic that make ‘telemedicine’, and by extension remote therapeutic monitoring, possible and even desirable [108, 109]. In the traditional practice of out-patient neurorehabilitation, the client must travel to the clinic, where they may receive one to four hours of physical or occupational therapy each week for assessment and focused training. Outside of these few hours, they may need further support for at-home exercise programs, translating new motion patterns to activities of daily living, identifying unsafe (e.g., fall risk) scenarios, monitoring function, and guiding future in-clinic therapy sessions. There are immense opportunities for computational modeling to support and optimize neurorehabilitation in each of these scenarios—both in the clinic and at home.

New technology, like wearable sensors and environmental monitoring (e.g., via video, voice, typing, or instrumented spaces), can now be used to monitor and assess function in-the-wild, yet there is a dearth of tools available to leverage these data for clinical assessment or recommendations [110,111,112,113]. In the last decades, advances in wearable and ubiquitous sensing have led to extensive growth in the availability and amount of data from daily life that could potentially inform and enhance rehabilitation. This was not necessarily by design or intent. The market for devices designed exclusively for rehabilitation has often been viewed as ‘too small’ to attract serious industrial or financial investment. As a result, many neurorehabilitation systems for clinic and home use struggle to become affordable, widely used products, such as several types of rehabilitative electrical stimulators (e.g., Freehand System, BIONs, Second Sight) and exoskeletons and rehabilitation robots (ZeroG, Lokomat, Manus, Kinarm). Rather, the larger markets for military and industrial systems and consumer products led to large-scale design and production that provided affordable, miniaturized, and widely available technologies that rehabilitation communities then adapted and adopted. A few examples (from many) are the computer mouse; graphical user interfaces; 3D gaming wands; VR gaming goggles; the global positioning system (GPS; Department of Defense); accelerometers (e.g., for airbags in the automotive industry); and ultra-fast graphics processing units. This entirely non-medical and non-research ecosystem now provides sensors, communication infrastructure, algorithms, hardware, and software that can be, and have been, brought to bear on enhancing neurorehabilitation in-the-wild.

As a result, the large majority of adults in the United States and the industrialized world can afford a smartphone and smartwatch combination with accelerometers, inclinometers, GPS, cameras, heart-rate monitors, blood-oxygen saturation sensors, fast processors, and sufficient memory and bandwidth to monitor metrics of health and performance (e.g., step count and walking speed) or provide app-based guidance on rehabilitation exercises [114]. Recent studies have demonstrated that video-based techniques and wearables can be used to perform standardized clinical assessments [115]. For example, in Parkinson’s disease, finger-tapping tasks or passive monitoring of movement characteristics can be used to tune medication dose/timing and monitor disease progression [116, 117]. Advances in robotic and haptic technology that can provide assistance or resistance, as well as virtual and augmented reality, provide additional tools to develop and deploy novel neurorehabilitation approaches in the home and community [118, 119]. Computational modeling is an essential component to enable and leverage these new tools and data to guide neurorehabilitation. Machine learning often underlies these modeling methods, whether using large training datasets to prospectively monitor new patients or using unsupervised learning to identify deviations from typical or desired patterns of activity [120,121,122]. Physics-based modeling, such as musculoskeletal modeling and simulation, can complement and improve the accuracy of movement or exercises captured with wearable sensors [123,124,125]. For example, musculoskeletal modeling and dynamic simulation can improve the accuracy of video-based techniques that estimate joint positions and movement patterns. At the population level, these techniques can evaluate the impact of novel rehabilitation techniques, differences in recovery responses, and expected trajectories of recovery or disease progression. At the individual level, personalized models can use data collected in-the-wild to customize exercise programs (e.g., adjusting challenge level), monitor daily activities (e.g., fall risk or medication responses), and provide quantitative feedback to the clinical team. The intersection of computational modeling with neurorehabilitation in-the-wild represents an exciting and high-potential area for understanding and improving outcomes, managing disease progression, and accelerating recovery.
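As a small, hypothetical example of the unsupervised flavor of these ideas, the sketch below flags days whose activity deviates markedly from an individual’s own recent baseline. The synthetic step counts, window length, and threshold are illustrative assumptions, not a validated clinical rule.

```python
import numpy as np

rng = np.random.default_rng(2)
steps = rng.normal(6000, 800, size=60)       # ~2 months of daily step counts
steps[45:] -= 2500                            # simulated decline (e.g., relapse)

window = 14
flags = []
for day in range(window, len(steps)):
    baseline = steps[day - window:day]        # the person's own recent baseline
    z = (steps[day] - baseline.mean()) / baseline.std()
    flags.append(z < -2.0)                    # flag large drops for clinical review

print(f"Days flagged: {int(np.sum(flags))}")
```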

Commentary

The potential applications and impact of computational modeling in-the-wild to support neurorehabilitation make this a high-priority area for expanding access and improving outcomes. Given the importance of expanding access and reducing the burdens of rehabilitation, we were surprised and disappointed that only 15% of submissions focused on applications in-the-wild. Further, most studies focused on the sensing technologies used to monitor movement, not on methods to translate results into actionable clinical insight or on tools to implement rehabilitation outside the clinic. While there is often great enthusiasm about being able to monitor and measure outside the clinic, bridging the gap between technology (sensors, robotics, augmented reality) and its integration into rehabilitation practice is a persistent challenge. This gap contributes to continuing inequities in care.

If appropriately deployed, computational modeling can be a bridge between data, insight, and access. McGinnis’s examples of deploying multimodal sensing to collect the large datasets necessary to guide population-based and individualized insights for rehabilitation provide a compelling model for other clinics to follow and partner with (see Appendix). Ideally, systems that could capture, share, and integrate data with electronic medical records across institutions would be available to create large datasets to guide future practice. Similarly, McGinnis and Scheidt provided examples of how unobtrusive sensing can extend measurements outside the clinic to monitor falls, disease progression, and mental health. Song and Collier demonstrated how complementing these sensing techniques with cloud computing, theoretical models, and neuromechanical simulations can deepen our understanding of the mechanisms driving rehabilitation responses. Integration of these complex systems will be required to leverage such advances towards personalized and optimized neurorehabilitation. Ultimately, we envision a system that could quantify the specific mechanisms contributing to an individual’s functional capacity; use that insight to develop a personalized neurorehabilitation plan that minimizes patient, caregiver, and clinician burden; repeatedly monitor relevant digital biomarkers outside of the clinic; integrate the information into clinically meaningful and interpretable charts in a patient’s electronic medical record; and use this monitoring to continuously adapt and optimize rehabilitation and predict long-term outcomes.

Moving towards this level of personalization and precision will require developing new methods to induce neuroplasticity, personalize care, and improve human-device interaction to support rehabilitation goals—the other Focus Areas of this conference. We recognize that these advancements are also in tension with the realities of reimbursement and provision in the American healthcare system. At the most basic level, we need to determine how to capture, and reimburse for, the new sources of data that will be essential to drive rehabilitation. While we can schedule and reimburse for a session of physical therapy or an MRI, how and when to reimburse for the use of a wearable sensor or robotic device deployed in the home, and how to provide the technical and analytical expertise to integrate the data from these sensors into the clinical routine, remain open challenges. Beyond implementation and reimbursement, the more complex challenges will be ensuring patient privacy, developing equitable and inclusive algorithms, and responsibly monitoring individuals in-the-wild. New standards need to be developed alongside the technology to enable safe and effective neurorehabilitation that leverages computational modeling deployed in-the-wild.

Conclusions

Key insights

The conference brought concrete, cutting-edge examples of how computational modeling can provide foundational insights that cannot be obtained from experiments alone, support the formulation of useful hypotheses, and inform the design of assistive and other innovative technology that can accelerate and optimize neurorehabilitation.

Computational models are, after all, hypotheses formulated as mathematical constructs—be they statistical black or gray boxes, or mechanistic paradigms. As per the scientific method, observations and experiments (two forms of data) are crucial for proper hypothesis development and testing. Furthermore, in the clinical realm, computational models are a means to transform information into actionable insights and therapeutic decisions. There were four common threads across the Focus Areas that underpin future needs to create useful data and models for neurorehabilitation:

  (i) The need to capture and curate the appropriate and useful data necessary to develop, validate, and deploy useful computational models. This step is critical for applications that span from the classification of clients (as with the ENIGMA Stroke Recovery Working Group’s use of structural neuroimaging to classify clients) to the real-time, online use of data (as in human-in-the-loop systems).

  (ii) The need to create multi-scale models that span the personalization spectrum from individuals to populations, and from cellular to behavioral levels. Figure 2 emphasizes this point, as most of the work presented can be placed on this two-dimensional spectrum to clearly identify the types of questions a model is addressing, and the generalizability of its findings. This was particularly critical for models related to adaptation and plasticity, which can occur across all ranges of scale and number.

  (iii) A strong case was made to pursue means to extract as much information as possible from available data (meta-learning), while requiring as little data as possible from each client. This is because there are important ethical, practical, and algorithmic limitations on just how much, and what kind of, data a given individual can contribute to the development or tuning of a model or a human-in-the-loop system.

  (iv) The insistence on leveraging readily available sensors and data systems to push model-driven treatments away from the lab and into the clinic, home, and workplace. However, translating knowledge into action is a longstanding and difficult challenge in medical research [126, 127]. The era of the Internet-of-Things and the Internet-of-Medical-Things, nevertheless, is here to stay—even if it faces skepticism and resistance [128]. We should embrace it. In addition to the traditional need to translate research into action in the form of devices and methods, we now also have a robust public debate about the costs and dangers of bringing a relative newcomer, Artificial Intelligence (AI), into clinical practice. The conference, however, brought into clear focus the costs of not using data-driven computational modeling for healthcare. Simply put, there is ample opportunity to personalize care ‘in-the-wild’ by using existing technologies based on consumer products that can have a real, positive, and ethical impact on neurorehabilitation at low cost, deployed at scale, and without compromising privacy. But these efforts need to be done in close collaboration with clinicians and clients, while avoiding the temptation to decouple data from physiology.

The way forward

In addition to the Commentary provided for each section above, it is important to reiterate that very recent developments in transformers and their variants will (when feasible) provide exciting novel opportunities in all Focus Areas (Table 1) and across scales (Fig. 2) that we can scarcely imagine at this point. Notwithstanding that transformers are hugely ‘data-hungry’ and require long training times, they can find applications in neurorehabilitation. In ‘Modeling for Personalization’ applications, we mentioned the example where they can help us bridge the gap between generic and patient-specific computational models for, say, tumor detection in a given patient based on thousands of structural MR images [64]. Transformers, and the ‘attention mechanism’ on which they are founded, are also finding traction in other relevant applications where large datasets are available. For example, ‘Modeling human-device interactions’ provides fertile ground where transformers can learn from large sets of EEG or electrocorticography (ECoG) signals [129, 130] collected over days of recordings in patients interacting with neuroprosthetics or neuromodulation systems like deep brain stimulation (DBS). Similarly, wherever wearables or markerless video provide large sets of movement data, ‘Modeling in-the-wild’ will be able to provide generic and personalized movement patterns and syntax that can be readily used to track adaptation and plasticity in human environments [131, 132].

It is our firm belief that the open, sincere, and robust discussion at the conference, and the resulting videos, journal articles, and ideas sparked from collaborative conversations, serve as strong medicine against these challenges. The conference enabled and compelled us to examine the nature and appropriate uses of existing computational techniques so that we can now refine them or develop new ones. It also confronted us with the need for a tight, closed-loop interaction with the functional and clinical reality of our health care professionals and clients to focus our computational neurorehabilitation work on useful, urgent, and relevant problems and solutions. We look forward to reconvening the next iteration of this conference in the next few years to re-assess progress in these critical Focus Areas and bring to fruition the many opportunities to catalyze progress in neurorehabilitation. We stand on the shoulders of giants, and the best is yet to come.