Introduction

There has been little consensus on the nature and number of body representations. The main focus of this paper is neither to settle the debate in favour of one view over another, nor to count how many body representations there are. Instead, we try to combine the different conceptual and experimental approaches to this topic into a more holistic view. In part one, we discuss two specific problems that have been encountered when dissociating multiple body representations in healthy individuals with the use of bodily illusions. In part two, we propose a different approach that might overcome these two problems. In the last part, we discuss two example datasets of bodily illusion experiments, which serve as a technical illustration. We re-evaluate and reanalyze previously published rubber hand illusion data, which provide a good example of the two main problems. Instead of asking how many body representations can be dissociated purely on the basis of experimental output, we identify different tentative models of the possible weighting of the multisensory input in the example bodily illusion experiment, and test these against each other directly using Bayesian model selection. This offers a way to avoid the problems recently encountered when dissociating multiple body representations in healthy individuals with the use of bodily illusions.

Challenges in the study of body representation

The way the body is mentally represented has been investigated by many different fields of research. For example, neuropsychologists have investigated patients with impairments in mentally representing and/or acting with the body, philosophers have explored the phenomenology of our bodily experiences and our conscious bodily self, experimental psychologists have studied multimodal integration with bodily illusions, and neuroscientists have tried to find the neural correlates of our mental body representation. However, there is no consensus on the number, definitions and/or characteristics of body representations. There are currently two main psychological and philosophical models of body representations: (a) a dual model of body representation distinguishing the body image and the body schema (Gallagher and Cole 1995; Rossetti et al. 1995; Paillard 1999; Dijkerman and De Haan 2007) or short-term and long-term body representations (O’Shaughnessy 1995; Carruthers 2008), and (b) a triadic model of body representation that makes a more fine-grained distinction between a visuo-spatial body map and body semantics within the body image, in addition to the body schema (Schwoebel and Coslett 2005; Buxbaum and Coslett 2001).

The first main problem that current models of body representations encounter when tested experimentally is conceptual. The distinctions between body representations are often made on a single dimension, such as availability to consciousness (Gallagher 2005), temporal dynamicity (O’Shaughnessy 1995), or functional role (Paillard 1999). Depending on the criterion, different distinctions are possible, leading to widespread confusion (de Vignemont 2007). Even more importantly, there are more dimensions on which body representations can be dissociated than the three highlighted above, such as the relative importance of different bodily sensory input signals, the spatial frame of reference, and so on. For example, the body schema most probably includes short-term information (e.g., body posture) as well as long-term information (e.g., the size of the limbs), and both self-specific information (e.g., strength) and human-specific information (e.g., degrees of freedom of the joints). By contrast, when investigating the body image one needs to group together the heterogeneous concepts of body percept, body concept and body affect in the dual model (Gallagher and Cole 1995). Although the triadic model does attempt to split the body image into two components (the structural description and semantic knowledge), how and where should body affect be accommodated? Should we postulate a fourth type of body representation?

The second problem one encounters is the nature of the evidence that current models of body representations rely upon. Neuropsychology has been the main starting point for investigating mental body representations; Head and Holmes (1911–1912) were among the first to describe several patients with dissociable deficits concerning the representation, localization and sensation of the body. However, because there is disagreement on the number and definitions of body representations, there is also disagreement on the classification of bodily disorders. For example, personal neglect is interpreted both as a deficit of the body schema (Coslett 1998) and as a deficit of the body image (Gallagher 2005), whereas Kinsbourne (1995) argued that it is due to an attentional impairment rather than a representational one. The problem here is that most classifications of body representations rely primarily on a very heterogeneous group of neuropsychological disorders that can be divided or classified at a number of different levels (for an extended discussion, see de Vignemont 2009).

Attempts to classify multiple body representations in healthy individuals also run into several problems. The general approach is that the experimenter induces a sensory conflict, which often results in some form of bodily illusion. This sensory conflict can be evoked within a unimodal information source (for example, illusions due to tendon vibration, described below in more detail; Kammers et al. 2006; Lackner 1988) or between multisensory sources (for example, the rubber hand illusion (Botvinick and Cohen 1998) and the mirror illusion (Holmes et al. 2006)). If the response to the bodily illusion is sensitive to the context or to the type of task, this is often taken as evidence for the involvement of distinct types of body representations. In other words, when significantly different responses to the same sensory conflict/bodily illusion can be identified, these distinct responses are taken to be subserved by dissociable body representations. An example of an illusion that induces unimodal conflict is the kinaesthetic tendon vibration illusion. Vibration of a tendon induces an illusory displacement of a static limb by influencing the afferent muscle spindles (de Vignemont et al. 2005; Kammers et al. 2006; Lackner and Taublieb 1983). Lackner and Taublieb (1983) showed that consciously perceived limb position depends not only on afferent and efferent information about individual limbs in isolation, but also on the spatial configuration of the entire body. More recently, it was shown that a perceptual matching task (to test the body image) was significantly more affected by this vibration illusion than an action reaching response (to test the body schema) towards the perceived location of the index finger of the vibrated arm (Kammers et al. 2006). This shows that the weighting of the information from the vibrated muscle might depend on the kind of output that is required, i.e., the type of task, which was taken as evidence for dissociable underlying body representations.

The kind of body representation that underlies a specific type of task is controversial as well. There is no consensus on how each body representation can best be tested experimentally (whether in healthy individuals or in patients). For example, matching a body part’s illusory orientation can be taken as a perceptual response, which would be a way to investigate the body image in the dual model. By contrast, it would most likely be a measurement of the body schema in the triadic model, since it involves active muscle movement. Furthermore, for the triadic model, semantics should be included in the task to tap into the body image. This diversity illustrates the main problem when trying to dissociate and classify multiple body representations in healthy individuals: the risk of identifying as many body representations as there are tentative classifications or significantly different experimental outputs.

A last and important example of this plurality is the range of body representations that can be identified with the rubber hand illusion (RHI). The RHI is evoked when the participant watches a rubber hand being stroked while their own unseen hand is stroked in synchrony. This results in a feeling of ownership over the rubber hand and induces a relocation of the perceived location of one’s own unseen hand towards the location of the rubber hand (Botvinick and Cohen 1998). The feeling of ownership over the rubber hand is often measured with a standard questionnaire (Botvinick and Cohen 1998). A psychometric approach using a more extensive questionnaire showed that the illusion induces different components of embodiment after synchronous versus asynchronous stroking, indicating that the two stimulations might induce different bodily experiences (Longo et al. 2008).

Asynchronous stroking is often applied as a standard control; it produces not only a reduced feeling of ownership, but also a smaller relocation of the participant’s own unseen hand towards the seen rubber hand compared to synchronous stroking (Botvinick and Cohen 1998; Tsakiris and Haggard 2005). Interestingly, proprioceptive drift has even been found without any tactile feedback (Holmes et al. 2006). Using the mirror illusion (where the rubber hand is replaced by the mirror image of the participant’s own hand), Holmes et al. (2006) showed that the absence of tactile feedback affects the perceived relocation of the hand differently from asynchronous feedback: asynchronous feedback (tapping of the finger) significantly decreased the proprioceptive drift compared to no tactile feedback. For the RHI it remains somewhat unclear whether synchronous stroking increases the proprioceptive drift or whether asynchronous stroking decreases it. Nevertheless, the difference between the two is taken as a measure of embodiment of (the location of) the rubber hand.

The illusion-induced discrepancy in perceived hand location is often measured with a perceptual localization task (Botvinick and Cohen 1998; Tsakiris and Haggard 2005). Relocation of the perceived location of the participant’s own hand has now been shown to depend on the task. Although perceptual location judgments of the participant’s own hand were illusion-sensitive, ballistic actions with as well as towards the illuded hand proved robust against the illusion (Kammers et al. 2009a, b). We interpreted this task dependency as evidence for different dissociable underlying body representations, namely the body schema for action and the body image for perception. This was in line with the dual model of body representations. However, this distinction was primarily based on the illusion sensitivity of the body image versus the illusion robustness of the body schema. The interpretation of the body representation used for action became more complicated when we later showed that the robustness of motor responses against bodily illusions seems to depend on the exact type of motor task, as well as on the induction method of the illusion. More specifically, when the rubber hand illusion was induced not just on the index finger but on the whole grasping configuration of the hand (i.e., stroking on the index finger and thumb), the kinematic parameters of a grasping movement were affected by the RHI (Kammers et al. 2009b).

Consequently, the main concern one might have with the current approach, especially in healthy individuals, is its focus of interest. The dual and the triadic models are mainly interested in the final output of bodily information processing, and this is where they disagree. This focus on body representations per se comes at the expense of investigating how those body representations are built up. We should not assume the existence of multiple types of body representations in healthy individuals on the basis of a heterogeneous group of syndromes, and we should avoid the pitfall of simply enumerating different representations on the basis of dissociable output, without also looking at the type of input and the interplay between the two. Therefore, instead of introducing yet another dissociation within the body representation models, here we present a different view, focusing on the principles governing the construction of body representations, which depend on both the available input and the required output.

A new approach

Two main problems in dissociating multiple body representations in healthy individuals with the use of bodily illusions are: (1) a disproportionate focus on output (i.e., task dependency) and (2) a failure to bring consensus to the current discussion between different body representation models. As an alternative, we suggest: (i) looking not only at the output but also at the type of input and the interplay between the two, and (ii) identifying different models before conducting an experiment and then testing them directly and objectively against each other. The latter can be done at different levels, for example the input, the output, or the different theoretical models of the number of body representations. Here, we propose Bayesian model selection as a method to test different models against each other objectively and simultaneously. The Bayes factor is a statistical measure that can be used to calculate the posterior probability of a model, i.e., the probability that the model is correct given the data. (For an introduction to Bayesian data analysis, see, for instance, Gelman et al. 2004; Kass and Raftery 1995 provide a thorough overview of the properties of the Bayes factor as a model selection criterion.)

Why use Bayesian model selection as a tool to overcome the problems identified here? Application of the Bayes factor for this purpose has several advantages over conventional null hypothesis testing. First, instead of having to compare each model of interest with the null model (or null hypothesis), the Bayes factor allows us to compare several models directly against each other. Second, this comparison of models does not suffer the usual loss of power due to multiple comparisons, because all relationships between the parameters in a model are evaluated simultaneously. Third, the Bayes factor has a naturally incorporated “Occam’s razor”: when two models explain the data equally well, the Bayes factor prefers the simpler one. These benefits are especially relevant to the problem of the indefinite number of body representations identified here. The null model would be that no body representation constrains the responses; this could be taken to mean either that there is only a single body representation, or that no body representation underlies the different responses at all. The other models would be, for example, the Dual model and the Triadic model, as well as perhaps a Quartic model (Sirigu et al. 1991). The experimental design should include at least as many different response types as there are possible body representations in the most complex model; in this case, four different tasks to tap into the four possible body representations, for example a semantic task, a ballistic motor task, a purely perceptual localization task, and a matching task. Next, data can be collected and the models tested in a single experiment, to evaluate which of them best explains the data. In other words, this would answer the question of whether we need two, three or four body representations to explain the different effects of a bodily illusion on different types of tasks. This can potentially lead to more consensus within the body representation literature and to fewer isolated experiments. Next, we provide a detailed and more technical example of the application of Bayesian model selection for this purpose.

Two rubber hand illusion experiments as a technical illustration

To illustrate the more technical application of this approach, we discuss two RHI experiments in detail. The RHI depends on the temporal correlation between visual and tactile stimulation (i.e., stroking), in which the discrepancy in location is overcome by “visual capture” of the tactile sensation, resulting in a feeling of ownership over the rubber hand and an illusory shift in the perceived location of the subject’s own hand towards the location of the rubber hand. The standard control condition involves asynchronous stimulation of the rubber hand and the subject’s own hand (Botvinick and Cohen 1998). Asynchronous stimulation, compared to synchronous stimulation, results in less feeling of ownership over the rubber hand and a smaller relocation of the subject’s own occluded hand towards the visible rubber hand. First, we discuss an imaginary dataset based on the standard way of investigating the effect of a bodily illusion on a single type of response. Example 1 therefore does not address the issue of relating multiple responses to multiple body representations, but provides a simple demonstration that the proposed approach can be applied at different levels to investigate bodily illusions and body representations. Second, in Example 2 we discuss a more complicated, previously published RHI design, showing how the approach can deal with the conceptual implications of different perceived locations within one type of response (Kammers et al. 2009a).

In both examples, we transform different possible ways of integrating the RHI-induced conflict between multisensory information sources into inequality- and equality-constrained models. Subsequently, a Bayesian model selection criterion, the Bayes factor, is used to investigate which tentative model (or models) of sensory integration best describes the different perceived body locations.

Bayesian model selection: Example 1

In this first example we use an imaginary RHI dataset based on what has been frequently reported (e.g., Botvinick and Cohen 1998; Tsakiris and Haggard 2005). Five imaginary participants gave a perceptual judgment of the perceived location of their stimulated limb after either synchronous stroking (RHI illusion condition) or asynchronous stroking (RHI control condition). In this simplified version of an actual RHI experiment we investigate the effect of the synchronicity of tactile stimulation. More precisely, we look at the mean response errors after synchronous versus asynchronous stroking to investigate the effect of the RHI (Table 1).

Table 1 Illustrative data set of five participants

This relocation error provides insight into the underlying relative weighting of visual and proprioceptive information. It is well established that accurate limb localization generally relies on the multimodal combination of visual and proprioceptive information (Desmurget et al. 1995; Graziano 1999; Graziano and Botvinick 2002). Several models have been proposed to account for how this multisensory weighting depends on task demands (Deneve and Pouget 2004; Ernst and Banks 2002; Scheidt et al. 2005; van Beers et al. 1998, 1999, 2002). Although these models differ in the way multisensory information is integrated, they all agree that the objective of this integration is to reduce uncertainty and create an accurate (consistent) localization of the limb. A wide range of studies suggests that the relative weights given to the two information sources depend on several factors. For instance, the weighting seems to differ between spatial directions (van Beers et al. 2002), which remains true even during illusory induced reaching errors (Snijders et al. 2007). Furthermore, different locations of the hand with respect to the body (Rossetti et al. 1994), and even the illumination conditions of the hand and the visual background (Mon-Williams et al. 1997), can modify the relative weight given to vision and proprioception. Additionally, for movements, the relative weighting seems to differ between the trajectory and the end point of the movement (Scheidt et al. 2005).

Model specification

We denote by μ the mean of the induced relocation of the illuded hand. In other words, this number represents the mean error between the perceived and the veridical location of the subject’s own hand in centimeters. From this number the relative weight of vision and proprioception can be derived. Complete visual dominance would result in a μ equal to the distance between the rubber hand and the subject’s own hand. By contrast, complete proprioceptive dominance would result in a μ of zero. In that case there would be no error between the perceived and the veridical location of the subject’s own hand.
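To make this mapping concrete, the following sketch (in Python) reads off the implied relative weight of vision from a mean drift by dividing it by the hand-to-rubber-hand distance. This is a minimal illustration under a linear-interpolation assumption; the 15 cm distance is a hypothetical value, not a parameter of the experiments discussed here.

```python
# Minimal sketch: converting a mean proprioceptive drift (mu, in cm) into
# an implied relative visual weight, assuming weights interpolate linearly
# between pure proprioception (mu = 0) and complete visual capture
# (mu = hand-to-rubber-hand distance). The 15 cm distance is hypothetical.

def relative_visual_weight(mu_cm: float, distance_cm: float = 15.0) -> float:
    """0 = complete proprioceptive dominance, 1 = complete visual capture."""
    return mu_cm / distance_cm

print(relative_visual_weight(0.0))   # 0.0 -> proprioceptive dominance
print(relative_visual_weight(15.0))  # 1.0 -> complete visual dominance
print(relative_visual_weight(3.0))   # 0.2 -> partial visual capture
```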

In this first example we test two very simple models. We consider the inequality-constrained model M1: μ1 > μ2, which states that there is an illusory relocation after synchronous stroking; this would result in a larger error towards the location of the rubber hand compared to the localization error after asynchronous stroking. In terms of multisensory integration, this means that the visual information from the rubber hand is weighted more strongly after synchronous than after asynchronous stroking, or, conversely, that proprioception is weighted more strongly after asynchronous than after synchronous stroking.

This model will be tested against the unconstrained model M0: μ1, μ2, which does not make any assumptions about the weighting of vision and proprioception; that is, μ1 and μ2 can take any combination of values.

Bayes factor

The Bayes factor, which is denoted by Bji, is a model selection criterion that quantifies the evidence in the data in favor of model Mj against model Mi. If B10 > 1, then model M1 receives more evidence from the data than model M0. For example, if B10 = 3.0, there is three times more evidence in the data in favor of model M1 than in favor of model M0. Note that this is equivalent to B01 = 0.33.

When selecting between the inequality-constrained model (M1: μ1 > μ2) and the unconstrained model (M0: μ1, μ2) based on the hypothetical data in Table 1, the Bayes factor can be calculated using the encompassing prior approach discussed by Klugkist et al. (2005). This methodology was generalized to the multivariate normal model by Mulder et al. (2009).

First, a prior distribution must be specified for the model parameters (μ1, μ2) under the unconstrained model M0. This prior is also referred to as the encompassing prior. The prior distribution represents the knowledge we have about the model parameters before observing the data. We assume vague (noninformative), independent, and identically distributed priors for μ1 and μ2, so that the posterior is dominated by the data rather than by the prior. Figure 1 displays a contour plot of this prior (dashed lines).

Fig. 1

Sketch of contour plots of prior and posterior densities based on the data of Table 1. The complete square can be interpreted as the unconstrained space of M 0 and the grey area can be interpreted as the inequality constrained space of M 1. The proportion of the prior satisfying μ1 > μ2 is 0.5. The proportion of the posterior satisfying μ1 > μ2 is 0.97. Note that the prior distribution is broad and vague, whilst the posterior distribution is narrower and centred on the means in the empirical data

When updating our knowledge about (μ1, μ2) using the data in Table 1, we obtain the posterior distribution of (μ1, μ2), which represents our knowledge about (μ1, μ2) after observing the data. For this data set, the posterior is located around the sample means (3, 1.4), as displayed in Fig. 1.

Note that the posterior variances of μ1 and μ2 are smaller than the prior variances, as can be seen from the smaller radius of the contours of the posterior in Fig. 1. This is a consequence of the posterior containing more information about μ1 and μ2 than the prior.

According to the encompassing prior approach, the Bayes factor B10 of model M1 versus model M0 is given by:

$$ B_{10} = \frac{\text{posterior proportion satisfying } \mu_1 > \mu_2}{\text{prior proportion satisfying } \mu_1 > \mu_2} = \frac{0.97}{0.5} = 1.94 $$

Hence, model M1 explains the observed data almost twice as well as model M0. Therefore, the model that assumes a larger error towards the rubber hand after synchronous stroking (μ1 > μ2) should be preferred over the unconstrained model (μ1, μ2 unconstrained) given the data in Table 1. In terms of multisensory integration, this means that visual information is weighted relatively more strongly after synchronous than after asynchronous stroking.
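As a computational illustration of the encompassing prior approach, the sketch below estimates both proportions by Monte Carlo sampling. The data values are hypothetical, chosen only to reproduce the sample means (3, 1.4) used above, and the posterior is approximated by independent normals centred on the sample means; the exact procedure of Klugkist et al. (2005) is more involved, so the printed numbers only come close to the 0.97 and 1.94 of the worked example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relocation errors (cm) for five participants, chosen only
# to reproduce the sample means (3, 1.4); not the actual values of Table 1.
err_sync = np.array([1.0, 5.0, 2.0, 4.0, 3.0])   # synchronous stroking
err_async = np.array([0.0, 3.0, 0.5, 2.5, 1.0])  # asynchronous stroking

n_draws = 1_000_000

# Encompassing (vague) prior: identical broad normals for mu1 and mu2,
# so the prior proportion satisfying mu1 > mu2 is 0.5 by symmetry.
prior_mu1 = rng.normal(0.0, 100.0, n_draws)
prior_mu2 = rng.normal(0.0, 100.0, n_draws)
prior_prop = np.mean(prior_mu1 > prior_mu2)

# Approximate posteriors: normals centred on the sample means with
# standard error s / sqrt(n) (a simplification of the full treatment).
post_mu1 = rng.normal(err_sync.mean(),
                      err_sync.std(ddof=1) / np.sqrt(err_sync.size), n_draws)
post_mu2 = rng.normal(err_async.mean(),
                      err_async.std(ddof=1) / np.sqrt(err_async.size), n_draws)
post_prop = np.mean(post_mu1 > post_mu2)

print(f"prior proportion with mu1 > mu2:     {prior_prop:.2f}")  # ~0.50
print(f"posterior proportion with mu1 > mu2: {post_prop:.2f}")   # ~0.96
print(f"Bayes factor B10: {post_prop / prior_prop:.2f}")         # ~1.9
```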

The Bayes factor can be used to calculate posterior model probabilities, denoted by p(Mi|X), which reflect the probability that model Mi is correct given the data X and the other models under evaluation. In this example, the posterior model probability of M1 is calculated according to:

$$ p(M_1 | X) = \frac{B_{10}}{B_{00} + B_{10}} = \frac{1.94}{1 + 1.94} = 0.66 $$

Similarly, the posterior model probability of the unconstrained model is p(M0|X) = 0.34.
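The same bookkeeping in code, assuming equal prior model probabilities (a sketch; the function name is ours, not part of the cited methodology):

```python
def posterior_model_probs(bayes_factors):
    """Posterior model probabilities from the Bayes factors of each model
    against one common reference model, assuming equal prior model odds."""
    total = sum(bayes_factors)
    return [bf / total for bf in bayes_factors]

# M1 versus the unconstrained M0: B10 = 1.94, and B00 = 1 by definition.
p_m1, p_m0 = posterior_model_probs([1.94, 1.0])
print(f"p(M1|X) = {p_m1:.2f}")  # 0.66
print(f"p(M0|X) = {p_m0:.2f}")  # 0.34
```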

Bayesian model selection: Example 2

In this second example we use an existing dataset (Kammers et al. 2009a) that exemplifies the main pitfall of the premise that all significantly different types of output are subserved by dissociable body representations. In this experiment, we applied the RHI paradigm and measured its effect on different types of responses to investigate possibly dissociable body representations. The subject’s own occluded right index finger and the visible index finger of the rubber hand were stroked either synchronously (illusion condition) or asynchronously (control condition). After this stimulation period, one of five perceptual localization responses was collected.

The perceptual response was a matching judgment in which the subject verbally indicated when the experimenter’s left index finger, on top of the framework, mirrored the perceived location of the subject’s own right index finger inside the framework. Perceptual response 1 was collected immediately after the RHI induction.

Perceptual responses 2 through 5 were all collected after two action responses: first the stimulation period, then two pointing responses, and finally the perceptual response. A pointing response could be made either with the illuded hand towards the location of the tip of the index finger of the non-illuded hand, or vice versa. All pointing movements were made inside the framework, out of view. The pointing hand landed on a Plexiglas pane placed above the target hand, so no cues about pointing accuracy were provided.

Perceptual response 2 was given after the subject pointed twice with the non-illuded hand towards the perceived location of the index finger of the illuded hand. Perceptual response 3 was conducted after a pointing movement first with the illuded hand towards the non-illuded hand and next with the non-illuded hand towards the illuded hand. Perceptual response 4 was identical to perceptual response 3, except that the order of the two previous pointing movements was reversed. Finally, perceptual response 5 was preceded by two pointing movements with the illuded hand.

Our conventional line of reasoning holds that if the perceived location of the illuded hand measured with response X significantly differs from the perceived location of the illuded hand measured with response Y, then X and Y must be based on different underlying body representations (Kammers et al. 2006). This line of reasoning works relatively well if we administer qualitatively different tasks, such as actions (body schema) versus perceptual localization tasks (body image). However, this reasoning risks becoming vacuous when we find significantly different perceived locations for responses X1, X2, X3, and so on, as we do for the perceptual responses in this experiment (Kammers et al. 2009a). Strictly speaking, this could be interpreted as evidence for three different body images. Therefore, in this case, investigating the underlying multisensory integration processes in more detail might be more informative than simply proposing numerous dissociable body representations or images. Here, we investigate whether the differences in magnitude between these perceptual judgments can be explained by differences in the weighting of information, depending on both the availability and the quantity of more up-to-date proprioceptive information once visual information is no longer directly available. In this way, dissociable perceived locations for the different perceptual responses do not necessarily need to be explained by multiple distinct underlying body representations.

Model specification

We identified two important aspects that might have affected the relative weighting of visual and proprioceptive information: (1) the availability of (multi)sensory information and (2) the precision of each source of information (for example, vision has proven to be more precise than proprioception in certain cases).

In the present example, new proprioceptive information about the location of the illuded hand is available only for perceptual responses 3, 4, and 5. The amount of new information is the same for responses 3 and 4, but doubled for response 5. For perceptual response 2 there is no new proprioceptive information about the illuded hand, and the visual information is older than during perceptual response 1, which may or may not affect its relative weight.

Subsequently, we identified three tentative weighting models that might explain the plurality of dissociable perceived locations of the same limb within the same type of task (perceptual matching as a means to measure the body image) in this experiment.

  • M1—equality model. The perceived location of the illuded hand is the result of a specific relative weighting between vision and proprioception that is equal across all conditions. In other words, it is unaffected by the availability or amount of new proprioceptive information, which would result in the same localization error for all perceptual responses.

  • M2—availability model. The perceived location of the illuded hand is unaffected by the amount of new proprioceptive information, but whenever new proprioceptive information is provided, the relative weight of the visual information is reduced. This would result in similar localization errors for perceptual responses 3, 4, and 5, all smaller than the relocation errors found for perceptual responses 1 and 2.

  • M3—quantitative model. The perceived location of the illuded hand is influenced by the presence as well as the quantity of more up-to-date proprioceptive information. In other words, the perceived location of the hand is influenced not only by movement of the illuded hand but also by the number of movements made before the perceptual response is given. This would result in relocation errors that decrease further from responses 3 and 4 to response 5.

We translate these hypotheses into constrained statistical models. To that end, we first compute the strength of the RHI (illusion minus control condition) for each perceptual judgment, so that we obtain five measurements for each subject, as sketched below.
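A minimal sketch of this preprocessing step; the arrays are simulated placeholders with the same shape as the design (14 subjects × 5 perceptual responses), not the published data of Table 2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder localization errors (cm), shaped (subjects, responses):
# 14 rows for the subjects, 5 columns for the perceptual responses.
illusion = rng.normal(loc=3.0, scale=1.0, size=(14, 5))  # synchronous
control = rng.normal(loc=1.0, scale=1.0, size=(14, 5))   # asynchronous

# RHI strength per subject and response: illusion minus control. Each row
# now holds the five measurements per subject that enter the model below.
rhi_strength = illusion - control
print(rhi_strength.shape)         # (14, 5)
print(rhi_strength.mean(axis=0))  # mean RHI strength per response
```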

Bayes factor

Response errors for 14 subjects in all 5 conditions are displayed in Table 2.

Table 2 Overview of the real dataset, showing the RHI-dependent location error in centimeters (cm) for each perceptual response (data previously published in Kammers et al. 2009a)

These five measurements were modeled with a multivariate normal distribution N(μ, Σ) where μ is a vector of length 5 containing the means of the 5 measurements, i.e. (μ1, μ2, μ3, μ4, μ5), and Σ is the corresponding covariance matrix, which contains the variances and covariances of the five measurements. The three theories stated above can be translated into models with inequality and equality constraints between the measurement means according to:

$$ \begin{aligned} M_1\ \text{(equality model)} &: \mu_1 = \mu_2 = \mu_3 = \mu_4 = \mu_5 \\ M_2\ \text{(availability model)} &: \mu_1 = \mu_2 > \mu_3 = \mu_4 = \mu_5 \\ M_3\ \text{(quantitative model)} &: \mu_1 = \mu_2 > \mu_3 = \mu_4 > \mu_5 \end{aligned} $$
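To make the three competing hypotheses concrete, they can be encoded as predicates over a candidate mean vector, as in the sketch below. Note that exact equality constraints cannot be evaluated as simple proportions of prior and posterior draws (an exact equality has probability zero under a continuous distribution), which is why the actual Bayes factor computation follows Mulder et al. (2009); the tolerance used here is purely illustrative.

```python
TOL = 0.1  # illustrative tolerance (cm) standing in for exact equality

def approx_equal(a, b, tol=TOL):
    return abs(a - b) < tol

def m1_equality(mu):
    """M1: mu1 = mu2 = mu3 = mu4 = mu5."""
    return all(approx_equal(mu[0], m) for m in mu[1:])

def m2_availability(mu):
    """M2: mu1 = mu2 > mu3 = mu4 = mu5."""
    return (approx_equal(mu[0], mu[1]) and approx_equal(mu[2], mu[3])
            and approx_equal(mu[3], mu[4]) and mu[1] > mu[2])

def m3_quantitative(mu):
    """M3: mu1 = mu2 > mu3 = mu4 > mu5."""
    return (approx_equal(mu[0], mu[1]) and approx_equal(mu[2], mu[3])
            and mu[1] > mu[2] and mu[3] > mu[4])

# A mean vector with large drift for responses 1-2, smaller drift for 3-4,
# and the smallest drift for 5 satisfies only the quantitative model M3.
mu = [3.0, 3.0, 2.0, 2.0, 1.0]
print(m1_equality(mu), m2_availability(mu), m3_quantitative(mu))
# False False True
```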

Note that model M1 is equivalent to the null hypothesis, i.e., the perceived position of the subject’s own hand is based on a specific relative weighting between vision and proprioception that is equal across all conditions. Next, we calculate the Bayes factor of each model versus the other models, using the methodology described by Mulder et al. (2009). Finally, the posterior model probability of each model can be calculated from the Bayes factors according to:

$$ p(M_j | X) = \frac{B_{j1}}{B_{11} + B_{21} + B_{31}} $$
(1)

for j = 1, 2, 3, where B11 = 1 because any model compared with itself yields a Bayes factor of 1. As mentioned earlier, the outcome reflects the probability that model Mj is the correct model among the three given the data.

The Bayes factors between each of the three models are displayed in Table 3. From these results it can be concluded that the Quantitative model M3 is the best model, because there is decisive evidence in favor of model M3 against model M1 (B31 = 500) as well as strong evidence in favor of model M3 against model M2 (B32 = 10).

Table 3 Bayes factors between the constrained models M1, M2, and M3
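Given equal prior model probabilities, the posterior model probabilities of Table 4 can be reconstructed from the two reported Bayes factors alone, since B21 follows from B31 and B32 by transitivity; a short sketch:

```python
# Bayes factors of M3 against M1 and M2, as reported above.
B31, B32 = 500.0, 10.0
B21 = B31 / B32  # transitivity: B21 = B31 * B23 = B31 / B32 = 50

# Bayes factor of each model against the common reference M1.
bf_vs_m1 = {"M1": 1.0, "M2": B21, "M3": B31}
total = sum(bf_vs_m1.values())
for model, bf in bf_vs_m1.items():
    print(f"p({model}|X) = {bf / total:.2f}")
# p(M1|X) = 0.00, p(M2|X) = 0.09, p(M3|X) = 0.91
```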

The posterior model probabilities are calculated using (1) and are presented in Table 4. Hence, the Quantitative model M3 is the most plausible of the three models given the data, with a posterior model probability of 0.91. This result implies that the perceived location of the subject’s own index finger depends on the relative weighting of (memorized) visual information and proprioceptive information, whereby the relative weights depend on the availability as well as the quantity of new proprioceptive information. As the subject moves the limb, additional proprioceptive information about the limb’s location becomes available, and the relative weight assigned to the visual information about the limb’s location diminishes.

Table 4 Posterior model probabilities

Example 2 thus shows that different sensed locations within a single perceptual task can be explained by differential weighting of information. Approaching the data in this way shifts the focus of interpretation back onto the interplay between the nature of the available sensory input and the specific output demands, providing more information about how the body representation is built up. This seems more informative and meaningful than classifying the task dependency of the RHI only in terms of different body representation categories, categories that differ between the body representation models in the first place.

Conclusion: the weight of representing the body

In the present paper, we address a problem that has recently arisen: the potentially indefinite number of body representations that can be identified in healthy individuals when based on bodily illusion task dependency alone. We propose a shift in focus towards how the sensory conflict induced during a bodily illusion is resolved, depending on which sensory weighting criteria are applied. Furthermore, we suggest identifying different models (either at the level of multisensory information or at the level of different theoretical body representation models) and testing them against each other simultaneously with Bayesian model selection in a single experiment. In this way, we try to create more consensus and clarity within the body representation literature on healthy individuals. We illustrate the technical application of this approach with two RHI examples.

The advantage of this approach is twofold. First, the lack of unity between body representation models can now be addressed by testing these models directly against each other. The Bayes factor does not tell us which model is “the truth”, but it does tell us which of the models under investigation receives the most support from the data. Second, the risk of infinitely multiplying body representations can be avoided by creating models that take the input into account together with the output, and by testing several different experimental manipulations against each other at the same time. Investigating how the body is represented, rather than in how many ways, might lead to more consensus and fewer isolated experiments.