
Towards predicting sensorimotor disorders in older adults via Bayesian probabilistic theory and mixed reality

Abstract

In this work, we present the Overwatch-M platform, a novel approach that combines Bayesian theory with mixed reality (MR) to assess a user’s sensorimotor control. The platform comprises a set of short, simple MR tests that guide the user through a series of activities while monitoring their interaction. The main advantage of this strategy lies in its ability to combine prior and observed user interaction to determine the impact on the user’s functional performance. The approach aims to detect, at an early stage, changes in functional performance that could lead to sensorimotor problems in older adults. For validation, a Bayesian model is implemented in the system to collect and analyze datasets and to provide a quantitative estimate of functional performance through a probabilistic conditioning assessment of the user’s test interaction over time. The strategy has been implemented to test one sensorimotor function: eye-hand coordination.

Introduction

Scientific studies have found that many sensorimotor functions, such as body coordination, eye-hand coordination, postural control, gaze function, and spatial perception, tend to decay as a person ages. Sensorimotor control is the nervous system mechanism by which the human body receives sensory stimuli from its environment and transmits, processes, and integrates the neural signals into motor command outputs to accomplish a goal [14]. As people age, daily life activities such as walking, cooking, showering, and manipulating objects become difficult tasks to achieve due to a reduction in functional performance [2]. As a result, many older adults are at risk of accidents or injuries, which can cause an individual to relinquish their independence. Estimates show that in the United States during 2014, 28.7% of adults 65 years or older fell, with an approximate cost of $31.3 billion to Medicare [6]. Thus, motor performance deficit is a severe problem caused by dysfunction of the central and peripheral nervous system.

One of the aspects of this disorder is impaired coordination [16], such as eye-hand coordination. In neurophysiology, modeling eye-hand coordination is essential for determining the spatial position of the hands with respect to the body [1]. This coordination is vital because it integrates two critical senses, vision and proprioception, which are used by the central nervous system (CNS) for motor planning [5]. Proprioception is a sense that provides spatial information about the different limbs to the CNS; it encodes information about all the joint angles between the hand and the rest of the body [5]. Therefore, a reduction in the transmission of these senses through the CNS can alter neuromuscular control [14]. Different experiments have been developed to acquire sensorimotor information, for example, observational cross-sectional studies and matching tasks [4, 14]. The matching task experiment calculates the precision of the spatial probability distribution of the localization of the proprioceptive and visual senses [4]. For instance, a proprioceptive localization test asks a user to point to a visual target with their unseen hand. The exercise is repeated over time to obtain a spatial probability distribution of the hand’s proprioceptive and visual localization of the target [4]. However, the sensory system is susceptible to variability and noise, which limits the precision of our senses. Therefore, computational principles for sensorimotor control have been explored and implemented to reduce uncertainty. Techniques such as Bayesian methods reduce sensory uncertainty because the brain’s perceptual representation and Bayesian theory share a similar scheme of conditioning probabilities on unknown parameters [3, 11].

In this work, we present a novel approach that combines a machine learning technique based on Bayesian theory with Mixed Reality (MR) to assess the user’s sensorimotor control and its impact on functional performance. The approach is built on four main phases, as shown in Fig. 1. First, the Mixed Reality Training Sample phase, in which training sample information is collected from the user as they perform a series of short physical tests. Second, the Motor Control Estimation phase, in which the collected sensorimotor information is analyzed to estimate the user’s optimal motor control, minimizing uncertainty and finding the likelihood parameters that maximize the probability of the observed samples. Third, the Multisensory Integration phase, in which the maximized models for vision and proprioception are integrated and mixed to represent a joint probability distribution. Fourth, the Dynamic Bayesian Network Prediction phase, in which the joint probabilities of the Multisensory Integration phase are conditioned with other nodes in a Dynamic Bayesian Network (DBN) to predict eye-hand coordination disorders at different periods of time. This last phase is not part of this work; we give a brief description to explain how it could be used in future work.

Fig. 1

Predicting eye-hand coordination disorder via Bayesian theory and mixed reality

Fundamental sensorimotor mixed reality (MR) tests, short and easy to implement, guide the user through a series of activities while monitoring user interaction. The Bayesian model collects and analyzes the data sets from these tests and determines the impact on the user’s functional performance. The advantage of this strategy lies in its ability to combine prior and observed user interaction within simple tests. Other methods, such as neural networks, naive Bayes, logistic regression, support vector machines, and regression trees, can attain comparable classification results. However, these methods only classify the information and fail to produce a probability distribution, which provides a better quantitative estimate of the data [13]. This approach aims to detect changes in the user’s functional performance that could lead to sensorimotor problems by providing a probabilistic conditioning assessment of the user’s test interaction over time.

Mixed reality training samples

In this phase, we collect training sample information from the user as they perform a series of short physical tests. To this end, an interface in Mixed Reality (MR) was developed to simulate a matching task test. MR is defined as the blend of the real world and digital elements, or holograms, anchored into our environment for manipulation [1]. The interface therefore lets the user grab, hold, and drag a virtual object in an MR environment while aiming to reach a holographic target, as shown in Fig. 2.

Fig. 2

Overwatch-M system: a set of basic mixed reality tests, short in duration and easy to complete, was developed to assess user interaction. Each test collects data such as objects’ position error and response time

From these motions, the program extracts the spatial motor output distribution to understand the precision of the user’s sensory input. The cues provided to the user are holographic objects displayed in the HoloLens, a headset developed by Microsoft. The HoloLens substantially increases the user’s interest in the training because of its immersive and realistic display, which reduces cognitive load [9]. Cognitive load is defined as the total mental effort used to perform a new task given adequate instructions [18]. This aspect therefore helps older adults stay focused on the program. The device is also portable and compact, and no extra equipment such as mirrors, tables, laser pointers, or video cameras is required to acquire the user’s motor output data, because the HoloLens is equipped with sensors that capture the user’s motion. Additionally, the system collects information on the fly, which can be used to provide immediate results to the user after completing the training. As mentioned earlier, the training simulates a matching task exercise similar to the one proposed by van Beers [4], in which a user tries to point to a target with their unseen hand using a mirror reflection as reference. In our test, the user tries to reach a target using holographic cues that move at specific time intervals. The procedure to collect spatial training samples thus requires the user to control a manipulating object from an initial to a final position, as shown in Fig. 3.

Fig. 3

Diagram depicting the different objects that construct the mixed reality test, which enables the user to interact for short periods of time

The target location for this task is the intersection of the vertical Time Guide and the horizontal Path. The user is conditioned to stay within certain bounds of the Window Time Frame and must traverse those bounds at a certain speed interval to the best of their ability. Otherwise, the program detects that the user is not controlling the object effectively and forces the user to restart the repetition. The different objects that the user interacts with during the short tests are listed below; a minimal sketch of the per-frame logic follows the list.

  1. Manipulating Object: an interactive object that, after selection, allows the user to hold and drag it around the virtual world using hand motions.

  2. Training Selection: predefined test objects that, when selected with the Manipulating Object, set the test’s number of repetitions and speed.

  3. Path: a static object and visual cue that indicates whether the Manipulating Object is within an acceptable manipulating range. The user should keep the Manipulating Object in contact with the Path at all times, which lights the Path green. Otherwise, it turns red, indicating that the user is not within an acceptable range.

  4. Time Guide: an object that moves parallel to the Path at a given speed as a visual cue for moving the Manipulating Object.

  5. Window Time Frame: used primarily as a visual capsule for the optimal manipulation range.

  6. Nodes: waypoint objects arranged along the Path. Their purpose is to collect spatial data of the Path and the Manipulating Object every time the Time Guide reaches a Node. Forty waypoints construct the spatial position of the target over time; in other words, the program collects spatial information of the target and the path at forty different points.

  7. Initial and Final Spheres: flags located at opposite ends of the Path, which mark the origin and culmination of a repetition (a round-trip path).
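To make the test mechanics concrete, the following is a minimal sketch, in Python rather than the platform’s actual Unity/HoloLens code, of the per-node logic described above: log the target and object positions whenever the Time Guide reaches a Node, and reverse direction when the object leaves the Path. All names, the tolerance value, and the callback structure are illustrative assumptions, not the authors’ implementation.

```python
from dataclasses import dataclass, field

PATH_TOLERANCE = 0.05  # hypothetical acceptable distance from the Path, in meters

@dataclass
class Sample:
    node: int
    target_xyz: tuple
    object_xyz: tuple
    direction: int       # +1 forward, -1 backward
    elapsed_s: float

@dataclass
class TestState:
    node: int = 0
    direction: int = 1   # start moving from the Initial toward the Final Sphere
    samples: list = field(default_factory=list)

def on_time_guide_reaches_node(state, target_xyz, object_xyz, dist_to_path, t):
    """Called whenever the Time Guide arrives at a Node."""
    if dist_to_path > PATH_TOLERANCE:
        # Object off the Path: the Path turns red and the repetition
        # backtracks, so the direction flips (n+1 becomes n-1).
        state.direction *= -1
    else:
        state.samples.append(Sample(state.node, target_xyz, object_xyz,
                                    state.direction, t))
    state.node += state.direction
```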

Following the acquisition of the training samples, the target and user control position information is stored in the Azure cloud, where it is available for download. Each document contains the node number; the X, Y, and Z spatial information; the direction of the movement; and a time counter starting from when the training was initiated. Once stored in the cloud, the information is available for analysis.
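The exact file layout of the stored documents is not given beyond the field list above; the hedged sketch below assumes one CSV row per sample with those fields (node, x, y, z, direction, t) and groups the samples by node for the per-node analysis of the next section.

```python
import csv
from collections import defaultdict

def load_samples(path):
    """Group downloaded samples by node index for the per-node Bayesian analysis."""
    by_node = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_node[int(row["node"])].append({
                "x": float(row["x"]),            # azimuth -> vision analysis
                "z": float(row["z"]),            # depth   -> proprioception analysis
                "direction": int(row["direction"]),
                "t": float(row["t"]),
            })
    return by_node
```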

Motor control estimation

Analyzing the collected sensorimotor information entails estimating the user’s optimal motor control: minimizing uncertainty and finding the likelihood parameters that maximize the probability of the observed samples. In this phase, we use the information collected from the MR interface and stored in the Azure cloud to solve these problems. The final output of this section is, for each Node, a set of probability density models of the precision of the hand state given the sensory stimuli for vision and proprioception. To reach these final models, however, the collected information must be filtered and fitted so as to maximize the probability of the sampled data. This analysis is essential because sensory information contains noise that propagates through all states of the sensory system. That noise must therefore be reduced, and the user’s optimal motor control distribution parameters found, to provide appropriate estimates of the state of the hand. Hence, to minimize the uncertainty in the sampled information, Bayes’ rule is applied to update the belief in the data as more information becomes available.

For visual localization, the collected spatial data were assessed along azimuth movements in the plane (the X-axis), while proprioception was estimated in depth (the Z-axis). Once the data are read and classified between vision and proprioception, the information for these two senses must be structured according to the node from which it was collected. In this work, forty nodes have been defined (their locations are illustrated in Fig. 3); therefore, forty different distributions each are generated for vision and proprioception. Afterward, Bayes’ rule, Eq. 1, is applied to these eighty distributions to minimize uncertainty.

$$\begin{aligned} P(\theta |S_k)_i = \frac{P(S_k|\theta )_i P(\theta )_i}{P(S_k)_i} \end{aligned}$$
(1)

Bayes’ rule in Eq. 1 estimates the posterior probability density of the state of the hand \(\theta\) given the observed sensory information \(S_k\), where k indicates the sensory input (vision or proprioception) and i indicates the node index; each node contains a set of collected samples. As described in the previous section, the MR program collects sensory information at forty nodes. Hence, if an MR test requires three repetitions, in a forward and backward direction, each node will have a minimum of six observations. We therefore calculate forty posterior probabilities per sense, or eighty distributions in total, and compute the likelihood \(P(S_k|\theta )_i\) assuming a normal distribution of the sampled sensory information.

Additionally, the prior distribution \(P(\theta )_i\) provides the previously estimated posterior probability density. Finally, the normalization factor \(P(S_k)_i\) sums the probabilities of the observed information to scale the posterior distribution between zero and one. This posterior distribution estimates the probability of the precision of the hand state given the sensory input. The posterior then becomes the new prior, which is continually updated as new sensory information arrives. This method properly weights both distributions, prior and likelihood, according to the precision obtained from the sensory input [3]. We then seek the parameters that make the data most likely, because humans compute a maximum likelihood estimate (MLE) when confronted with different sources of information while performing a task [7, 10, 19]. When prior information is missing, there is no difference between maximizing the likelihood and maximizing the posterior distribution; when prior information about the training is available, we use Bayes’ rule (Eq. 1) to estimate the posterior [19]. We then take the maximum point of the posterior distribution, the maximum a posteriori (MAP) estimate, as the optimal parameter of the hand state.
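To make this update concrete, here is a minimal sketch of the per-node recursion under the paper’s normality assumption: with a Gaussian prior over the hand state and Gaussian observations of known variance, Eq. 1 has a closed form, the posterior from one test becomes the prior for the next, and the MAP is the posterior mean. The variance values and sample numbers are illustrative assumptions, not data from the experiments.

```python
import numpy as np

def posterior_update(prior_mu, prior_var, samples, obs_var):
    """Conjugate Gaussian update: returns posterior mean (the MAP) and variance."""
    n = len(samples)
    post_prec = 1.0 / prior_var + n / obs_var          # precisions add
    post_var = 1.0 / post_prec
    post_mu = post_var * (prior_mu / prior_var + np.sum(samples) / obs_var)
    return post_mu, post_var

# One node across two tests: each posterior becomes the next test's prior.
mu, var = 0.0, 1.0                                     # weakly informative initial prior
for test_samples in ([0.12, 0.10, 0.15], [0.09, 0.11, 0.08]):   # illustrative samples
    mu, var = posterior_update(mu, var, np.array(test_samples), obs_var=0.02)
    print(f"MAP = {mu:.3f}, posterior variance = {var:.4f}")
```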

These Bayesian estimates trade bias against variance, because the posterior distribution is biased toward the mean of the prior rather than that of the likelihood. Therefore, if someone misses a target on average, it does not mean that they do not know how to perform the training, but rather that they are optimal for statistics that differ from the ones we know [19]. We therefore consider that, as users learn to manipulate a virtual object, they learn to infer the probabilities of their sensory system; combining Bayes’ rule and MLE determines how the observed and unobserved parameters are related. The result of this estimation is a model with the most reasonable belief about the location of the target given the sensory input. Ultimately, the highest point of this model gives the most likely parameters of the precision of the hand state [19]. It is obtained by maximizing, over the dataset for the sensory input S, the quantity \(f(S|\theta _j)p(\theta _j)\), where \(\theta _j\) is an observed hand state and \(p(\theta _j)\) is its probability.

To estimate the MLE distribution with prior information, we draw the probabilities of the hand state per node for the likelihood \(P(S_k|\theta _j)_i\) and prior \(P(\theta _j)_i\) distributions, where \(j = 0,\ldots ,s\) indexes the observed hand state probabilities. Then, for each node sample, we generate new distribution parameters using an iterative maximization algorithm. The global optimum of the algorithm determines the distribution parameters that maximize the likelihood of the sampled information. With these parameters, we generate a maximum likelihood distribution, which becomes the node’s new posterior probability distribution. When no prior information is available, the same process unrolls with just the likelihood distribution samples. With these final distributions, we find the most likely distribution parameters of the precision of the hand’s position per node using the MAP point.
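The iterative maximization algorithm is not named in the paper; as a sketch of the no-prior case, note that for a normal model the MLE per node is available in closed form (the sample mean and standard deviation), which scipy.stats.norm.fit also returns. The sample values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Six illustrative observations at one node (three forward/backward repetitions)
node_samples = np.array([0.12, 0.10, 0.15, 0.09, 0.11, 0.08])

mu_hat, sigma_hat = norm.fit(node_samples)   # maximum likelihood estimates
print(f"MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```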

Multisensory integration

The ultimate objective of the system is to provide a predictive estimate of the user’s risk of developing an eye-hand coordination disorder given hand state information and medical data. To estimate that value, however, the joint probability distribution of the sampled information must be derived. Calculating the estimated control distribution at forty different positions/nodes yields forty different distributions. A joint probability distribution therefore estimates the model parameters of the mixed models over all forty distributions for vision and proprioception; this model is then used as evidence in a Bayesian network (BN), together with the user’s personal and medical information, to predict sensorimotor problems. First, however, we must integrate the outputs of the user’s control estimation for vision and proprioception into a unimodal localization distribution of the hand state. This integration takes into account the direction dependence of the precision of vision and proprioception [5]. The output of this process is a normal distribution whose parameters describe the bimodal distribution of hand localization. Thus, as defined in Eq. 2, the parameters of the unimodal hand localization integrate the bimodal sensory model, where \(\Sigma\) is the covariance matrix and \(\mu\) is the mean of the integrated distribution; p and v denote the proprioceptive and visual senses, while PV denotes their integration.

$$\begin{aligned} \Sigma _{PV}&= (\Sigma _p^{-1} + \Sigma _v^{-1})^{-1}\\ \mu _{PV}&= \Sigma _{PV}(\Sigma _p^{-1}\mu _p + \Sigma _v^{-1}\mu _v) \end{aligned}$$
(2)
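Eq. 2 translates directly into a few lines of numpy. In this sketch the example means and covariances are illustrative assumptions, chosen so that, in line with [5], vision is more precise in azimuth (x) and proprioception in depth (z).

```python
import numpy as np

def integrate(mu_p, cov_p, mu_v, cov_v):
    """Precision-weighted fusion of proprioceptive and visual estimates (Eq. 2)."""
    prec_p, prec_v = np.linalg.inv(cov_p), np.linalg.inv(cov_v)
    cov_pv = np.linalg.inv(prec_p + prec_v)
    mu_pv = cov_pv @ (prec_p @ mu_p + prec_v @ mu_v)
    return mu_pv, cov_pv

mu_p, cov_p = np.array([0.10, 0.05]), np.diag([0.04, 0.01])  # proprioception (x, z)
mu_v, cov_v = np.array([0.08, 0.09]), np.diag([0.01, 0.04])  # vision (x, z)
mu_pv, cov_pv = integrate(mu_p, cov_p, mu_v, cov_v)
print(mu_pv, np.diag(cov_pv))   # fused mean is pulled toward the more precise sense per axis
```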

Henceforward, we approximate the joint probability distribution of each model to estimate its mixed parameters in a BN. With this process, we approximate a posterior probability distribution by combining each multisensory integrated model. To compute the joint probability distribution, we choose a Gaussian mixture model (GMM). This method weights the combination of the multisensory models x to estimate their joint probability in a conditional BN with M mixture components, as described in Eq. 3.

$$\begin{aligned} p(x|\Theta ) = \sum _{l=1}^{M}\alpha _lp_l(x|\theta _l) \end{aligned}$$
(3)

The parameters of the GMM are \(\Theta =(\alpha _1, \ldots , \alpha _M, \theta _1, \ldots , \theta _M)\), where each \(\alpha _l\) is a mixing proportion (the proportions sum to one) and \(p_l\) is a Gaussian probability density with \(\theta _l = (\mu _l, \sigma _l)\), \(l = 1, \ldots , M\) [12]. After building the mixed model, its distribution becomes an observed node in a BN whose objective is to predict eye-hand coordination disorders. The benefits of using a BN lie in its ability to combine heterogeneous data and in the interpretability of the graphical network [15]. BNs also excel at handling events in which information might be missing and can combine prior knowledge with information on a current event [17]. Hence, the prediction formulation conditions on the observed nodes of the hand localization precision estimate and the user’s medical and personal information to detect patterns of eye-hand coordination.
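The paper does not specify how the mixture of Eq. 3 is fitted. One plausible realization, sketched below with illustrative node parameters, is to sample each node’s integrated Gaussian and fit the mixture with the EM implementation in scikit-learn.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# (mu, sigma) per node from the multisensory integration step (illustrative values)
node_params = [(0.1 * np.sin(i / 6.0), 0.05) for i in range(40)]

# Resample each node's integrated Gaussian, then fit the mixture by EM
samples = np.concatenate(
    [rng.normal(mu, sigma, 50) for mu, sigma in node_params]
).reshape(-1, 1)

gmm = GaussianMixture(n_components=5, random_state=0).fit(samples)
print(gmm.weights_)                                   # mixing proportions alpha_l, sum to one
print(gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()))  # per-component (mu_l, sigma_l)
```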

Experimental results

The experiments estimate the user’s sensorimotor control after the user completes a test, for the vision and proprioception sensory inputs, using the Bayesian framework. The estimated sensory models for vision and proprioception are then integrated to estimate a unimodal distribution per node. Finally, all the unimodal distributions are mixed to generate a Gaussian mixture model (GMM) over all the nodes in a test. As preliminary results, we show the analysis of four users with no previous coordination problems after performing the same five tests, with three repetitions each.

Samples collection

As a first step, the user completes a mixed reality (MR) tutorial to familiarize themselves with the system. Next, the user sits down to set the head position as the origin of the virtual world; otherwise, the user might perform the test poorly, since they might lose perception of the origin to which the program is anchored. The user must then tap and drag the manipulating object to select one of the tests. For instance, test one requires three repetitions at a slow speed of approximately 28 cm per second along a relative path of five meters in the MR world, as shown in Fig. 4. The test lasts approximately two minutes if the training is completed flawlessly; otherwise, it runs until the program completes all the repetitions correctly.

Fig. 4

The user grabs the virtual object and tries to match the object’s centroid with the cross-sectional position of the Time Guide and the Path. Each test uses a different speed and number of repetitions. After a test is completed, the information is stored and analyzed by the Bayesian model

Depending on the training and the accuracy of the user, the number of sample points collected at each node may vary. For example, if the user moves the manipulating object away from the path while the time guide is at node n, the path’s color turns red and the direction of the training reverses: instead of moving to node \(n+1\), the guide moves to node \(n-1\), back toward the initial node. If this event happens frequently, nodes between node n and the initial node will contain more sample points than nodes between n and the final node. In this experiment, the motion of the repetitions is horizontal: the extended right arm moves toward the left shoulder, or the extended left arm moves away from the body. This motion forces the user to stretch their arm from the initial to the final sphere position. The user also needs to ensure that their hands are always within the HoloLens’s field of view; if the object is not within the headset’s view, the manipulation event and the Time Guide pause. Additionally, a counter shows the number of completed repetitions to help the user keep track of their progress. When the user completes all the repetitions, they can release the Manipulating Object and exit the program with a bloom gesture. A video is available at https://drive.google.com/open?id=1mgQ3ZRQ8z4lY7DN4s3g8x0AtxKCfcxYo.

Motor control estimation results

Experimental results show that the Bayesian framework provides a quantitative estimate of motor control given vision and proprioception, because it estimates the user’s most likely control and minimizes uncertainty. By finding the most likely parameters of the likelihood model \(P(S|\theta )\), it filters the sampled information to estimate the optimal motor control MAP in Fig. 5 (\(MAP_0\)).

Fig. 5

Posterior and likelihood distributions for vision sense

For test two, the analysis uses Eq. 1 to minimize uncertainty about visual sensory precision by multiplying the prior and likelihood models to estimate a posterior model. We then calculate the maximum a posteriori estimate of the visual input by finding the model that maximizes the likelihood of the posterior model and the sampled information to determine optimal control, as seen in Fig. 5 (\(MAP_1\)). After this, the posterior model adjusts to the user’s sensory input with every new test, from test 2 to test 5, using the Bayesian framework and maximum likelihood. The same method is applied to proprioception, as shown in Fig. 6.

Fig. 6

Posterior and likelihood distributions for proprioception sense

Moreover, Fig. 7a, b show the maximum probabilities for proprioception and vision, respectively, at node 30 for the four users. We observe that user 3 shows changes in hand position precision during test 4, diverging from the other users, which suggests poor performance at that point in space-time. Nevertheless, the results indicate a steady rate of progress, suggesting a slow but stable gain in control at that node for most users.

Fig. 7

a Proprioception’s MAP for four different users. b Vision’s MAP for four different users. These graphs show the correlation between vision and proprioception for the four users after completing a test, at node thirty, suggesting a clear dependence between these two senses

Multisensory integration results

The motor control estimation results presented individual posterior probabilities for vision and proprioception at a single point in space (e.g., node 30). The next step is to integrate the distributions of both vision and proprioception across all nodes along the path. This combination creates a more robust representation of the hand state. If vision and proprioception are equally precise, the mean of the integrated model falls midway between them; otherwise, the integrated model is biased toward the more precise sense [3]. After integrating all sensory distributions, the final user-specific models look like Fig. 8a for user 1 and Fig. 8b for user 4. The GMM joins the probabilities of the forty nodes along the path. The results of mixing the integrated distributions show, for instance for user 1 in Fig. 8c, that the average standard deviation remains relatively constant over the five tests (0.623, 0.6451, 0.7568, 0.773, and 0.755), which results in a consistent average probability of hand state precision across all five tests. This consistency improves precision and suggests that the user controlled the object to the best of their ability through a persistent approach. The same pattern can be seen for user 2, but with a higher average standard deviation than user 1.

Fig. 8

a Gaussian mixture model per test for user 1. b Gaussian mixture model per test for user 4. c Average probability of precision, standard deviation, and mean of the multisensory integration distributions. The probability of precision is given as a decimal probability, while the standard deviation is a distance in meters

Furthermore, user 4’s GMM in Fig. 8b shows that during the first two tests, their probability of precision increased because their average standard deviation was relatively small. After test 3, however, the pattern of the model changed, decreasing the probability of precision of the hand state. This suggests that between test 2 and test 3 the user changed the way they performed the test and failed to settle on a technique for manipulating the object, since they did not remain steadfast to their original method of manipulation. The same pattern can be observed in user 3 within test 3: their standard deviation increased drastically compared with the previous test, as shown in Fig. 8c.

Conclusions

In this work, we developed a strategy that combines Bayesian theory with mixed reality (MR) to assess sensorimotor functions. To validate the approach, we selected one sensorimotor case: eye-hand coordination disorder. Within the MR tests, the user interacted through a series of short tests, and data sets of spatial information were collected for analysis. The results show that users’ performance, or precision, in executing the task improves over time. Experimental results from the eye-hand coordination case show that the level of precision increases, given the sensory information, from the previous test to the current one. Within the first tests, the user experiences a quick increase in precision as they adapt to the new technology. Eventually, the user reaches a certain precision level that remains almost constant (small increments in precision as the tests continue). We also conclude that the GMM is a suitable representation for pattern recognition to determine changes in performance within the eye-hand coordination case, given both the vision and proprioception senses.

As future work, the observed mixed model will be used as a continuous probability distribution evidence in a Dynamic Bayesian Network (DBN) (Fig. 9) to condition the probability of an eye-hand coordination disorder as presented in Eq. 4.

$$\begin{aligned} P(Pred\_CD^{t} \mid Age^{t-1}, Med\_Hist^{t-1}, Current\_CD^{t-1}, Hand\_State^{t-1}) \end{aligned}$$
(4)

A DBN is a model with a repeated structure of a static Bayesian network (BN), in which arcs represent local or transitional dependencies among variables. A DBN’s structure is time-invariant: it remains fixed, without modification of its dependency topology, throughout time. A node in a DBN depends only on its parents at the current time t and the prior time \(t-1\), which means that it is independent of all states before \(t-1\); this is known as the Markov property [8]. Two essential steps construct a DBN: developing the network structure and learning the parameters of the network. After each test, the network would learn the likelihood of a sensorimotor disorder given the observed test performance and the performance expected from the user’s medical and personal information over time. Eventually, additional sensorimotor functions, such as posture and balance, will be included in the system to perform a complete analysis of functional performance. We also plan to evaluate the performance of the system by comparing healthy subjects with subjects presenting sensorimotor problems.
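As an illustration of the Eq. 4 conditioning under this first-order Markov assumption, the sketch below uses discretized parent variables and a hand-written conditional probability table; all probability values and the discretization are placeholders, since learning these parameters is left to future work.

```python
# P(Pred_CD^t = 1 | parents at t-1), with parents discretized to binary flags:
# (age 75+, positive medical history, current coordination disorder, low hand-state precision).
# Every entry here is a placeholder, not a learned parameter.
CPT = {
    (0, 0, 0, 0): 0.02, (0, 0, 0, 1): 0.10,
    (0, 0, 1, 0): 0.30, (0, 0, 1, 1): 0.55,
    (1, 1, 0, 0): 0.15, (1, 1, 0, 1): 0.40,
    (1, 1, 1, 0): 0.60, (1, 1, 1, 1): 0.85,
    # ... remaining parent configurations omitted for brevity
}

def predict_cd(age_75plus, med_hist_pos, current_cd, low_precision):
    """Look up P(Pred_CD = 1) for one configuration of the t-1 parents."""
    return CPT[(age_75plus, med_hist_pos, current_cd, low_precision)]

print(predict_cd(1, 1, 0, 1))  # e.g., 0.40 for an older user with low precision
```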

Fig. 9

Proposed dynamic Bayesian network with four observed nodes and a hidden node, Pred_CD, to predict eye-hand coordination

References

  1. Aruanno B, Garzotto F, Rodriguez MC (2017) HoloLens-based mixed reality experiences for subjects with Alzheimer’s disease. In: Proceedings of the 12th biannual conference on Italian SIGCHI chapter—CHItaly ’17. ACM Press

  2. Baca J, Ambati MS, Dasgupta P, Mukherjee M (2017) A modular robotic system for assessment and exercise of human movement. In: Chang I, Baca J, Moreno H, Carrera I, Cardona M (eds) Advances in automation and robotics research in Latin America. Springer, Berlin, pp 61–70. https://doi.org/10.1007/978-3-319-54377-2_6

  3. Bays PM, Wolpert DM (2007) Computational principles of sensorimotor control that minimize uncertainty and variability. J Physiol 578(2):387–396

  4. van Beers RJ, Sittig AC, van der Gon JJD (1998) The precision of proprioceptive position sense. Exp Brain Res 122(4):367–377

  5. van Beers RJ, Sittig AC, van der Gon JJD (1999) Integration of proprioceptive and visual position-information: an experimentally supported model. J Neurophysiol 81(3):1355–1364

  6. Bergen G, Stevens MR, Burns ER (2016) Falls and fall injuries among adults aged \(\ge\) 65 years—United States, 2014. MMWR Morb Mortal Wkly Rep 65(37):993–998

  7. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870):429–433

  8. Ghahramani Z (2001) An introduction to hidden Markov models and Bayesian networks. Int J Pattern Recognit Artif Intell 15(01):9–42

  9. Hoe ZY, Lee IJ, Chen CH, Chang KP (2017) Using an augmented reality-based training system to promote spatial visualization ability for the elderly. Universal Access in the Information Society

  10. Jacobs RA (1999) Optimal integration of texture and motion cues to depth. Vis Res 39(21):3621–3629

  11. Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27(12):712–719

  12. McLachlan G, Peel D (2000) Finite mixture models. Wiley, Hoboken

  13. Orphanou K, Stassopoulou A, Keravnou E (2016) DBN-extended: a dynamic Bayesian network model extended with temporal abstractions for coronary heart disease prognosis. IEEE J Biomed Health Inform 20(3):944–952

  14. Röijezon U, Faleij R, Karvelis P, Georgoulas G, Nikolakopoulos G (2017) A new clinical test for sensorimotor function of the hand—development and preliminary validation. BMC Musculoskelet Disord 18(1):407

  15. Roos J, Bonnevay S, Gavin G (2017) Dynamic Bayesian networks with Gaussian mixture models for short-term passenger flow forecasting. In: 2017 12th international conference on intelligent systems and knowledge engineering (ISKE). IEEE

  16. Seidler RD, Bernard JA, Burutolu TB, Fling BW, Gordon MT, Gwin JT, Kwak Y, Lipps DB (2010) Motor control and aging: links to age-related brain structural, functional, and biochemical effects. Neurosci Biobehav Rev 34(5):721–733

  17. Shangari TA, Falahi M, Bakouie F, Gharibzadeh S (2015) Multisensory integration using dynamical Bayesian networks

  18. Sweller J, van Merrienboer JJG, Paas FGWC (1998) Cognitive architecture and instructional design. Educ Psychol Rev 10(3):251–296

  19. Wolpert DM (2007) Probabilistic models in human sensorimotor control. Hum Mov Sci 26(4):511–524


Author information

Correspondence to José Baca.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Martinez, J., Baca, J. & King, S.A. Towards predicting sensorimotor disorders in older adults via Bayesian probabilistic theory and mixed reality. SN Appl. Sci. 2, 129 (2020). https://doi.org/10.1007/s42452-019-1666-y


Keywords

  • Machine learning
  • Healthcare
  • Sensorimotor problems
  • Bayesian theory
  • Mixed reality