PLAViMoP: How to standardize and simplify the use of point-light displays

Abstract

The study of biological point-light displays (PLDs) has fascinated researchers for more than 40 years. However, the mechanisms underlying PLD perception remain unclear, partly due to difficulties with precisely controlling and transforming PLD sequences. Furthermore, little agreement exists regarding how transformations are performed. This article introduces a new free-access program called PLAViMoP (Point-Light Display Visualization and Modification Platform) and presents the algorithms for the PLD transformations currently included in the software. PLAViMoP fulfills two objectives. First, it standardizes and makes explicit many classical spatial and kinematic transformations described in the PLD literature. Second, thanks to its optimized interface, PLAViMoP makes these transformations fast and easy to achieve. Overall, PLAViMoP could directly help scientists avoid technical difficulties and make possible the use of PLDs for nonacademic applications.

Background and motivation

More than 40 years ago, it was shown that human beings are highly sensitive to biological motion produced by living organisms. In a seminal paradigm, the Swedish researcher Johansson demonstrated that this sensitivity to biological motion is mainly related to the capacity to interpret kinematics. Using minimalist motion sequences that contained only small lights representing an actor’s major joints, he demonstrated that people are able to recognize numerous actions such as walking or dancing (Johansson, 1973): The method of the point-light display (PLD) was born. Since this first study, many researchers and nonresearchers have seized upon this technique to better understand the mechanisms underlying the visual perception of biological movements or, in a more applied framework, to improve sports performance, rehabilitation techniques, or even the technologies used in the film and video game industries. To these ends, many studies have been performed to improve the capture, visualization, and modification of PLDs. However, many questions remain open to date, related in particular to the absence of clear algorithms for the different transformations of point-light sequences. In this article, we introduce new software called PLAViMoP (Point-Light Display Visualization and Modification Platform; see Note 1), with the objective of standardizing and facilitating the visualization and modification of PLDs. After a brief review of the PLD literature, our article details the algorithms and functions included in our new software, PLAViMoP. The last part of the article is devoted to a discussion of possible uses of our tool by scientific experts as well as from more applied perspectives.

Since the first study by Johansson (1973), PLDs have been enthusiastically adopted by scientists, and many studies have been conducted using this method (for reviews, see Bidet-Ildei, Orliaguet, & Coello, 2011; Blake & Shiffrar, 2007; Pavlova, 2012). Globally, these studies have confirmed that humans have very high sensitivity to this type of animation. In fact, humans can recognize many of the biological actions of living organisms from PLDs (e.g., Johansson, 1973; Pavlova, Krageloh-Mann, Sokolov, & Birbaumer, 2001), as well as the gender (Kozlowski & Cutting, 1977; Pollick, Kay, Heim, & Stringer, 2005; Troje, Sadr, Geyer, & Nakayama, 2006), identity (Beardsworth & Buckner, 1981; Loula, Prasad, Harber, & Shiffrar, 2005; Troje, Westhoff, & Lavrov, 2005), emotion (Atkinson, Dittrich, Gemmell, & Young, 2004; Chouchourelou, Matsuka, Harber, & Shiffrar, 2006; Clarke, Bradshaw, Field, Hampson, & Rose, 2005; Dittrich, Troscianko, Lea, & Morgan, 1996), intention (Chaminade, Meary, Orliaguet, & Decety, 2001; Davila, Schouten, & Verfaillie, 2014; Iacoboni et al., 2005; Louis-Dam, Orliaguet, & Coello, 1999; Martel, Bidet-Ildei, & Coello, 2011), and personality traits (Thoresen, Vuong, & Atkinson, 2012) of the observed human stimuli. Moreover, properties of manipulated objects, such as weight (Runeson & Frykholm, 1981) or size (Jokisch & Troje, 2003), can also be detected via PLDs.

Interestingly, the capacities required for perceiving PLDs appear at birth (Bardi, Regolin, & Simion, 2011; Bidet-Ildei, Kitromilides, Orliaguet, Pavlova, & Gentaz, 2014; Simion, Regolin, & Bulf, 2008) and are related to the activation of specific parts of the brain (see Giese & Poggio, 2003; Pavlova, 2012, for reviews), including those involved in motor skills (e.g., Bonda, Petrides, Ostry, & Evans, 1996; Grèzes et al., 2001; Grossman et al., 2000; Saygin, Wilson, Hagler, Bates, & Sereno, 2004; Sokolov, Gharabaghi, Tatagiba, & Pavlova, 2010; Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001; van Kemenade, Muggleton, Walsh, & Saygin, 2012). The involvement of the motor system during PLD processing has also been confirmed by developmental (Louis-Dam et al., 1999) and neuropsychological (Chary, Méary, Orliaguet, David, Moreaud, & Kandel, 2004; Pavlova, Bidet-Ildei, Sokolov, Braun, & Krageloh-Mann, 2009) studies that have shown a positive link between motor performance and the ability to recognize PLDs.

Finally, in addition to the interest in understanding the mechanisms involved in the visual perception of biological movements, other authors have been interested in the links between this perceptual capacity and other cognitive or social abilities. In this context, it has been shown that the visual perception of human movements is closely related to the abilities underlying social cognition, concerning the recognition of emotions, the interpretation of others’ behavior or even the level of empathy (see Pavlova, 2012, for a review). In the same manner, the sensitivity to biological motion is related to higher cognitive functions such as the processing of language and numbers. Indeed, it has recently been shown that listening to or reading an action verb increases the capacity to recognize a congruent point-light action embedded in masking dots (Bidet-Ildei, Gimenes, Toussaint, Almecija, & Badets, 2017; Bidet-Ildei, Sparrow, & Coello, 2011). In the same vein, the observation of a pointing movement directly affects the capacity of humans to generate numbers. In relation to the mental number line concept, in which small quantities are represented on the left side and large quantities on the right side (Dehaene, 1992), it has been shown that the observation of a pointing movement directed toward the left side increases the probability of generating a small number, whereas the observation of a pointing movement directed toward the right side increases the probability of generating a large number (Badets, Bidet-Ildei, & Pesenti, 2015).

Overall, this brief review underscores the role of PLDs in facilitating our understanding of the reciprocal links between action, perception, and cognition.

One important issue in the PLD literature is the need to better specify the mechanisms behind PLD processing and the factors that modulate PLD processing. To date, several questions remain under debate, such as the roles of local and global information (e.g., Bardi et al., 2011; Chang & Troje, 2009), the impact of motor and visual experience (see Bidet-Ildei, Orliaguet, & Coello, 2011, for a review) and the role of sex differences in perceptual performances (e.g., Pavlova, Sokolov, & Bidet-Ildei, 2015). Moreover, whereas links have been demonstrated between the processing of biological motion and the processing of language (Beauprez & Bidet-Ildei, 2017; Bidet-Ildei, Gimenes, Toussaint, Beauprez, & Badets, 2017; Bidet-Ildei, Sparrow, & Coello, 2011; Bidet-Ildei & Toussaint, 2015; Pavlova et al., 2015), numbers (Badets et al., 2015) or social activities (Atkinson et al., 2004), the specificity of these links and their neural substrates remain open questions.

To disentangle these issues, a valuable methodology consists of modifying natural PLDs and assessing the consequences of these modifications on perceptual capacities. Using this methodology, several studies have investigated the consequences of spatial and/or temporal modifications of biological PLDs on perceptual competencies (see Appendix A for a list of references using PLD transformations). Spatial perturbations can simply consist of showing the PLD using an unnatural orientation (e.g., Pavlova & Sokolov, 2000; Simion et al., 2008; Sumi, 1984; Verfaillie, 2000), playing it backward (e.g., Klin, Lin, Gorrindo, Ramsay, & Jones, 2009), or shifting dots along the articulated limbs (e.g., Beintema & Lappe, 2002). Spatial transformations can also consist of averaging several PLDs via spatiotemporal morphing (e.g., Jastorff, Kourtzi, & Giese, 2006; Thoresen et al., 2012; Troje, 2002). Finally, it is possible to disturb the spatial coherence of the animation by scrambling the positions of the joints (“scrambled motions”; e.g., Bidet-Ildei et al., 2014; Grossman et al., 2000; Hirai & Hiraki, 2005; Hiris, 2007; Simion et al., 2008), by using temporal or spatial bubbles (Thurman & Grossman, 2008), or by using pair-wise motions that preserve the local pendular movements associated with individual limbs (Kim, Jung, Lee, & Blake, 2015). Overall, these different studies have shown that the capacity of humans to perceive and recognize biological motion is closely related to the spatial properties of the movement, such as the canonical orientation (Pavlova & Sokolov, 2000) and spatial coherence (Grossman et al., 2000; Hirai, Senju, Fukushima, & Hiraki, 2005) of the movement. Moreover, changing the orientation of a PLD may lead to a bias, in the sense that observers often perceive a PLD as facing toward them (Vanrie, Dekeyser, & Verfaillie, 2004). Interestingly, the sensitivity to the spatial specificities of a PLD is present at birth. In fact, the gaze of newborns 2–4 days of age oriented more toward a canonical biological PLD than toward an upside-down equivalent, and more toward biological than toward scrambled PLDs (Simion et al., 2008).

Other modifications consist of modifying the kinematics of each dot constituting the PLD while maintaining the spatial trajectory and total duration of each dot. Previous studies have rendered the biological movement nonbiological by modifying the velocity along the path using a constant velocity, a linear acceleration, or an inverse velocity (Bidet-Ildei, Kitromilides-Salerio, Orliaguet, & Badets, 2011; Bidet-Ildei, Meary, & Orliaguet, 2008; Bidet-Ildei, Orliaguet, Sokolov, & Pavlova, 2006; Bouquet, Gaurier, Shipley, Toussaint, & Blandin, 2007; Martel et al., 2011; Pozzo, Papaxanthis, Petit, Schweighofer, & Stucchi, 2006). When PLDs violated biological kinematic laws, recognition was generally degraded (Bouquet et al., 2007). Moreover, nonbiological velocity drastically reduced the capacity to anticipate the final position of a human movement presented as a PLD (Martel et al., 2011; Pozzo et al., 2006) and could affect the natural link between number and space (Badets et al., 2015).

Finally, one other way to study PLDs consists of camouflaging the PLD with dynamic masks (Cutting, Moore, & Morrison, 1988), which consist of several dots placed at random positions. Each dot of the mask can move with different dynamics corresponding to linear, random, or scrambled motions (Bidet-Ildei, Chauvin, & Coello, 2010; Cutting et al., 1988; Hiris, 2007). However, the duration of presentation (Cutting et al., 1988; Thornton, Pinto, & Shiffrar, 1998), the type of mask (Cutting et al., 1988; Hiris, 2007), and the number of masking dots (Bidet-Ildei et al., 2010; Cutting et al., 1988; Hiris, 2007) directly influence the perception of biological motion. When the duration of the target stimulus presentation is close to 200 ms, different types of dynamic masks (i.e., linear, random, and scrambled motion) can dramatically impede the ability to recognize human point-light displays. In contrast, when the duration of the target stimulus is longer than 400 ms, only masks composed of scrambled dynamic components of the target stimulus can significantly decrease perceptual performance, whereas masks composed of random or linear motions do not influence participants’ sensitivity (Cutting et al., 1988; Hiris, 2007) as compared to PLDs without masks.

Altogether, these studies show that perturbing spatial–temporal PLD characteristics is a valuable methodology to better understand the mechanisms behind the considerable capacity of humans to perceive and interpret biological movements. However, despite the number of studies that have used spatial and kinematic transformations of PLDs, there has been no clear description of the algorithms used to make these transformations. This lack of transparency can affect the reproducibility of results, and can even generate ambiguities. For example, in the literature, there is ambiguity regarding the “scrambled” label, which is used both by people who have randomized the initial spatial position of each dot constituting their original PLD in a specific window (e.g., Bidet-Ildei et al., 2014) and by people who have randomly permuted the positions of the different dots constituting the original PLD (e.g., Nackaerts et al., 2012). Moreover, studies rarely detail whether changes in kinematics were made on each component of the velocity or directly on the norm of the velocity. Yet these two types of transformation can lead to completely different configurations. In the same way, when a z-axis spatial rotation is executed, it is not always specified from which point of origin the rotation is performed, with most studies simply using the terms “inverted” or “upside down.” The second difficulty in applying PLD transformations is the number of steps and calculations required to execute them. For example, the inversion of the velocity norm necessitates (1) recovering the coordinates of the biological movement (here, it is possible to directly capture the motion or to take the coordinates from an existing database; see, e.g., the database of Shipley & Brumberg, 2004), (2) calculating the tangential velocity and the trajectory evolution along the path, (3) calculating the mean of the velocity, (4) modifying the biological tangential velocity so as to accelerate where the biological motion decelerates and decelerate where it accelerates, (5) retrieving new coordinates of motion that respect the new tangential profile while keeping the spatial trajectory of the movement, and (6) generating the new stimulus. Performing all these steps takes time, increases the risk of error, and limits the use of PLDs to academic environments, even though several other applications could be imagined.

Even if some tools have tried to facilitate the modification of PLDs, such as the BioMotion Toolbox for MATLAB (van Boxtel & Lu, 2013), to our knowledge no simple software exists that allows for the transformation of PLDs by researchers without programming competence (but see the online demonstration by Troje: https://www.biomotionlab.ca/Demos/BMLwalker.html). In this article, we introduce the PLAViMoP software (see Note 2), a new, free-access program that allows both the transformation and visualization of PLDs. The first objective of PLAViMoP is to standardize the spatial, temporal, and dynamic transformations classically applied to PLDs (see Appendix A for a review of the transformations used in the literature that can be performed with PLAViMoP). The second objective of PLAViMoP is to make all these transformations easy and fast. By allowing the automatic realization of all steps in a standardized routine, PLAViMoP will facilitate the application of PLD transformations by scientists and will allow nonspecialists with limited access to technological resources to use biological movement transformations in line with their specific needs (motor reeducation, sports training, etc.). The third objective is to allow spatial and kinematic transformations in both 2-D and 3-D spaces; this offers the possibility of easily generating PLDs from various points of view (perspectives) rather than from only a side or a front view. Because we are aware that 3-D PLDs are ambiguous when shown on 2-D displays (e.g., Rehg, Morris, & Kanade, 2003), the Mokka part of PLAViMoP offers various tools (e.g., adding a grid floor, coloring points to differentiate right and left sides, creating links between points to model a skeleton, simultaneously displaying two different angles of view) to remove this ambiguity (see Fig. 1 for one example). Finally, PLAViMoP allows for managing several point-light figures together in order to facilitate the study of social interactions. In addition, it can run from sophisticated capture systems (e.g., Vicon or Qualisys), but also from very simple systems (e.g., Leap Motion), and can even use point lights created by computer simulation (Cutting, 1978), since a plug-in (“CSV2C3D,” provided with the PLAViMoP software) allows for creating a C3D file from a spreadsheet (.csv) file that specifies the set of coordinates (X, Y, Z) as a function of time.

In the next section, a description of PLAViMoP and its transformations is presented, along with potential applications. We decided to include in our program many of the functionalities that are classically used in the literature (see Appendix A for a list of studies that have already used the transformations available in PLAViMoP), even though we are conscious that the functionalities it provides are not exhaustive. The idea is first to standardize the modifications that have already been applied in previous research. However, PLAViMoP is a collaborative platform, and therefore new transformations and new functionalities can be added via plug-ins. Thanks to its standardized transformations, ease of use, and collaborative approach, we hope that PLAViMoP will be used and developed by the research community (see Note 3).

Implementation and examples

PLAViMoP is composed of a MATLAB graphical user interface interacting with the free, open-source program Mokka (Barre & Armand, 2014). Figure 1 presents a global view of the software.

Fig. 1

Global view of PLAViMoP. On the left, the different spatial, masking, and kinematic transformations proposed by the application are presented. In the middle of the screen, the PLD visualization of the selected action is presented, thanks to the Mokka software (here, the static view of a walking man is presented). Note that a grid floor can be added or removed, to decrease the ambiguity of a 3-D PLD shown on a 2-D display. On the right side of the screen, the available joints may be selected

The application can be installed by downloading the software installation package directly from the following web address: http://plavimop.prd.fr. Both the MATLAB interface and Mokka software are required, as well as a Windows 64-bit system and an Internet connection, for the PLAViMoP installation. The minimal screen resolution is 1,024 × 900 pixels. However, the application has been optimized for a screen resolution of 1,920 × 1,080 pixels. Only the C3D format is supported by the PLAViMoP application. This format is the standard for motion capture files (see Note 4). Files should contain only 3-D trajectories of a set of markers (e.g., no force-plate data, no analog channels). The X, Y, and Z components are expressed in millimeters in a global reference frame (forward direction given by the x-axis; vertical direction given by the z-axis, pointing upward; and lateral direction given by the y-axis, pointing to the left of the subject/object; see Note 5). The number of markers is not limited, but a common set of markers for human motion is listed in Table 1. The number of frames of the C3D file and the frame rate are not limited. However, a high number of frames and/or a high frame rate will result in a time-consuming process. For reference, standard C3D files are sampled at 100 Hz and contain approximately 200 frames.

Table 1 List and localization of common markers used to record human motions

The visualization of the PLD is achieved with Mokka (Fig. 1, middle). The application directly allows the modification of some aspects of the PLD (see Note 6). For example, Mokka can act directly on markers (e.g., size and color), on PLD presentation (e.g., zoom and perspective), or on time display (e.g., video cropping and playback speed).

The different transformations proposed by PLAViMoP can be accessed from the user interface, situated at the left of the screen (Fig. 1). The user interface is divided into five zones (load movement, spatial transformation, masking PLDs, velocity transformation, and exportations), which, respectively, allow users to load a file containing a PLD, modify the file at the spatial level, add masking dots, modify the file at the kinematic level, and create a new PLD (as a .c3d or .avi file) after transformation.

Spatial transformations

At the spatial level, PLAViMoP enables users to spatially transform the original motion and add masking dots. The different modifications are detailed below. To describe the transformations, consider a C3D file consisting of k frames sampled at rate F, with a set of n markers whose time-history coordinates are noted \( {\mathbf{M}}_j(t)={\left\{{M}_j^x(t)\kern0.5em {M}_j^y(t)\kern0.5em {M}_j^z(t)\right\}}^T \). The initial and final times are designated \( t_0 \) and \( t_f \), respectively.

Modify the original PLD

Mirror transformation

This transformation enables users to create horizontal, lateral, or vertical symmetry of the original motions (see Fig. 2). The mathematical operations computed when selecting the mirror transformation buttons can be written as follows, \( \forall j\in \left\{1\dots n\right\} \) and \( \forall t\in \left\{{t}_0\dots {t}_f\right\} \):

$$ {\mathbf{M}}_{\mathbf{j}}\left(\mathbf{t}\right)\left\{\begin{array}{c}{M}_j^x(t)\\ {}{M}_j^y(t)\\ {}{M}_j^z(t)\end{array}\right\}\to {\mathbf{M}}_{\mathbf{j}}\left(\boldsymbol{t}\right)\left\{\begin{array}{c}-{M}_j^x(t)\\ {}{M}_j^y(t)\\ {}{M}_j^z(t)\end{array}\right\}\ \left(\mathrm{horizontal}\right) $$
$$ {\mathbf{M}}_{\mathbf{j}}\left(\mathbf{t}\right)\left\{\begin{array}{c}{M}_j^x(t)\\ {}{M}_j^y(t)\\ {}{M}_j^z(t)\end{array}\right\}\to {\mathbf{M}}_{\mathbf{j}}\left(\boldsymbol{t}\right)\left\{\begin{array}{c}{M}_j^x(t)\\ {}-{M}_j^y(t)\\ {}{M}_j^z(t)\end{array}\right\}\left(\mathrm{lateral}\right) $$
$$ {\mathbf{M}}_{\boldsymbol{j}}\left(\boldsymbol{t}\right)\left\{\begin{array}{c}{M}_j^x(t)\\ {}{M}_j^y(t)\\ {}{M}_j^z(t)\end{array}\right\}\to {\mathbf{M}}_{\mathbf{j}}\left(\boldsymbol{t}\right)\left\{\begin{array}{c}{M}_j^x(t)\\ {}{M}_j^y(t)\\ {}-{M}_j^z(t)\end{array}\right\}\ \left(\mathrm{vertical}\right) $$
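To make the operation concrete, here is a minimal MATLAB sketch of the lateral mirror; the 3 × n × k array layout and all variable names are our own illustrative assumptions, not PLAViMoP internals:

% Coordinates stored as a 3 x n x k array: rows = x, y, z components,
% columns = the n markers, pages = the k frames (toy data, in mm).
M = rand(3, 13, 200) * 1000;
Mlat = M;
Mlat(2, :, :) = -M(2, :, :);   % lateral mirror: negate the y component
% The horizontal and vertical mirrors negate row 1 (x) and row 3 (z) instead.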
Fig. 2

Static illustrations of the different mirror transformations (horizontal, lateral, and vertical) available in PLAViMoP. For the sake of clarity, the left and right foot segments are represented in red and green, respectively

Rotation transformation

The rotation transformation rotates the original motion sequence around the different axes (x, y, z). The rotation point (origin, mean point, or a joint) and the rotation angle (from –180° to 180°) can be specified.

The available rotation points are the origin of the global reference frame \( \mathbf{O}={\left\{0\kern0.5em 0\kern0.5em 0\right\}}^T \), each marker \( {\mathbf{M}}_j \), and an average point \( \mathbf{N} \) (mean point), computed as follows:

$$ \mathbf{N}\left(\mathbf{t}\right)=\left[\begin{array}{c}\frac{\sum_{j=1}^n{M}_j^x(t)}{n}\\ {}\frac{\sum_{j=1}^n{M}_j^y(t)}{n}\\ {}\frac{\sum_{j=1}^n{M}_j^z(t)}{n}\end{array}\right] $$

Then, the rotation angles around the x-, y-, and z-axes can be set with the corresponding sliders.

$$ \forall j\in \left\{1\dots n\right\}\ \mathrm{and}\ \forall t\in \left\{{t}_0\dots {t}_f\right\}, $$
$$ {\mathbf{M}}_{\mathbf{j}}\left(\mathbf{t}\right)={\mathbf{R}}_{\gamma}^t\left({\mathbf{R}}_{\boldsymbol{\beta}}^{\boldsymbol{t}}\left({\mathbf{R}}_{\boldsymbol{\upalpha}}^{\mathbf{t}}\left({\mathbf{M}}_{\mathbf{j}}(t)-{\left[{A}^x\left({t}_0\right)\kern0.5em {A}^y\left({t}_0\right)\kern0.5em {A}^z\left({t}_0\right)\right]}^T\right)\right)\right)+{\left[\begin{array}{ccc}{A}^x\left({t}_0\right)& {A}^y\left({t}_0\right)& {A}^z\left({t}_0\right)\end{array}\right]}^T $$

where \( \mathbf{A}\left({t}_0\right)={\left[{A}^x\left({t}_0\right)\kern0.5em {A}^y\left({t}_0\right)\kern0.5em {A}^z\left({t}_0\right)\right]}^T \) is the position of the selected rotation point at the initial time, and

$$ {\mathbf{R}}_{\boldsymbol{\upalpha}}^{\mathbf{t}}=\left[\begin{array}{ccc}1& 0& 0\\ {}0& \cos \left({\alpha}_{(t)}\right)& \sin \left({\alpha}_{(t)}\right)\\ {}0& -\sin \left({\alpha}_{(t)}\right)& \cos \left({\alpha}_{(t)}\right)\end{array}\right];{\mathbf{R}}_{\boldsymbol{\upbeta}}^{\mathbf{t}}=\left[\begin{array}{ccc}\cos \left({\beta}_{(t)}\right)& 0& -\sin \left({\beta}_{(t)}\right)\\ {}0& 1& 0\\ {}\sin \left({\beta}_{(t)}\right)& 0& \cos \left({\beta}_{(t)}\right)\end{array}\right];{\mathbf{R}}_{\boldsymbol{\upgamma}}^{\mathbf{t}}=\left[\begin{array}{ccc}\cos \left({\gamma}_{(t)}\right)& \sin \left({\gamma}_{(t)}\right)& 0\\ {}-\sin \left({\gamma}_{(t)}\right)& \cos \left({\gamma}_{(t)}\right)& 0\\ {}0& 0& 1\end{array}\right] $$

Importantly, in addition to the classical rotations used in the literature (i.e., rotation about the z-axis around the center of gravity; see Pavlova & Sokolov, 2000), PLAViMoP allows rotation around the x- and y-axes. Furthermore, rotations are possible not only around the center of gravity but also around any joint of the starting PLD.
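As an illustration, the following MATLAB sketch applies a rotation of γ = 90° around the z-axis about the mean point at t0; the 3 × n × k layout and variable names are assumptions for illustration, and implicit expansion (MATLAB R2016b or later) is used:

gamma = pi / 2;                        % rotation angle around the z-axis
Rz = [ cos(gamma)  sin(gamma)  0; ...
      -sin(gamma)  cos(gamma)  0; ...
       0           0           1];     % matches R_gamma given above
M = rand(3, 13, 200) * 1000;           % toy 3 x n x k coordinates (mm)
A = mean(M(:, :, 1), 2);               % rotation point: mean point N at t0
for f = 1:size(M, 3)
    M(:, :, f) = Rz * (M(:, :, f) - A) + A;   % translate, rotate, translate back
end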

Scrambled transformation

This transformation allows the scrambling of each point constituting the sequence (e.g., Bidet-Ildei & Toussaint, 2015). There are two modes: Shuffle and Random.

In Shuffle mode, each point takes the place of another but conserves its initial trajectory and dynamics. This transformation consists of replacing the initial coordinates of a marker \( {\mathbf{M}}_j \) with those of another marker \( {\mathbf{M}}_p \). The indices of the markers being exchanged are selected randomly.

$$ {\mathbf{M}}_{\mathbf{j}}(t)\to {\mathbf{M}}_{\boldsymbol{j}}(t)-{\mathbf{M}}_{\mathbf{j}}\left({t}_0\right)+{\mathbf{M}}_{\boldsymbol{p}}\left({t}_0\right) $$

In Random mode, each point starts at a random spatial location but conserves its initial trajectory and dynamics. The starting position of each dot is chosen so as to keep the new trajectory inside the initial bounding box of the original movement, defined as follows:

$$ \forall j\in \left\{1\dots n\right\}\ \mathrm{and}\forall t\in \left\{{t}_0\dots {t}_f\right\}\to \mathbf{L}=\left[\begin{array}{cc}\min \left({M}_j^x(t)\right)& \max \left({M}_j^x(t)\right)\\ {}\min \left({M}_j^y(t)\right)& \max \left({M}_j^y(t)\right)\\ {}\min \left({M}_j^z(t)\right)& \max \left({M}_j^z(t)\right)\end{array}\right]=\left[\begin{array}{cc}{L}_{\mathrm{min}}^x& {L}_{\mathrm{max}}^x\\ {}{L}_{\mathrm{min}}^y& {L}_{\mathrm{max}}^y\\ {}{L}_{\mathrm{min}}^z& {L}_{\mathrm{max}}^z\end{array}\right] $$

Appendix B details the control loop that maintains the point lights inside the initial box after the transformation.
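A minimal MATLAB sketch of the Shuffle mode is given below (assumed names and layout); the Random mode would instead draw each starting position uniformly inside the box L and apply the control loop of Appendix B:

M = rand(3, 13, 200) * 1000;    % toy 3 x n x k coordinates (mm)
n = size(M, 2);
p = randperm(n);                % random reassignment of the marker indices
P0 = M(:, :, 1);                % initial positions at t0
% Each marker keeps its own trajectory and dynamics but starts where
% another, randomly chosen marker started:
Mshuffled = M - P0 + P0(:, p);  % implicit expansion over the k frames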

Add mask to the PLD

The masks are additional point lights added to the original movement (Cutting et al., 1988). With PLAViMoP, it is possible to add the following four types of masks to the original PLD sequence: static masks, linear masks, random masks, and scrambled masks.

Static mask

A static mask is simply a stationary set of point lights whose coordinates (randomly defined) lie within the limits of the bounding box. Anywhere from 1 up to 200 static masking dots can be added using a slider. These points can be purely static or flashing. The flashing frequency can be set from 1 to 25 Hz. For a C3D file sampled at 100 Hz, a flashing frequency of 1 Hz causes a mask to be alternately visible and invisible during sets of 50 consecutive frames, whereas a flashing frequency of 25 Hz causes a mask to be alternately visible and invisible during sets of two consecutive frames.
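For illustration, one way the alternation could be computed in MATLAB, with F/(2f) frames per visible (or invisible) set for a flashing frequency f and a frame rate F (variable names are ours):

F = 100;                                   % C3D frame rate (Hz)
f = 5;                                     % flashing frequency (Hz)
half = round(F / (2 * f));                 % frames per visible/invisible set
idx = 0:199;                               % frame indices
visible = mod(floor(idx / half), 2) == 0;  % true on the visible frames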

Linear mask

A linear mask is a moving point light with constant velocity. Linear masks only move along the x-axis (in a positive or negative direction). Their initial positions are randomly chosen. According to their initial positions and the duration of the C3D file, a maximal velocity is computed to keep the masks within the limits of the bounding box. Then, a random percentage of this velocity is chosen to compute the trajectory. Up to 200 linear masking dots can be added using the dedicated slider. Moreover, it is possible to control the velocity of each masking point (from 0% to 100%). Since there are two possible directions for the displacement of linear masks, the masks are divided into two groups, and all markers of the same group have the same velocity. An intensity of 0% produces static masks, whereas an intensity of 100% ensures that all markers of the group stay within the limits of the bounding box (see Appendix C for the algorithm).
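A plausible MATLAB sketch of a single linear mask dot along the x-axis is shown below; the variable names and exact bookkeeping are our assumptions, and the authoritative algorithm is the one in Appendix C:

T = 2.0;  nFrames = 200;               % file duration (s) and frame count
Lx = [0 1000];                         % bounding box limits along x (mm)
x0 = Lx(1) + rand * (Lx(2) - Lx(1));   % random initial position
dir = sign(rand - 0.5);                % group moving forward or backward
if dir > 0
    vmax = (Lx(2) - x0) / T;           % fastest speed that keeps the dot inside
else
    vmax = (x0 - Lx(1)) / T;
end
intensity = 0.75;                      % slider value, between 0 and 1
t = linspace(0, T, nFrames);
x = x0 + dir * intensity * vmax * t;   % constant-velocity trajectory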

Random mask

A random mask is composed of point lights with a randomly defined trajectory. Both the initial position and instantaneous acceleration are randomly chosen. Masks move along all three axes. A control loop ensures that all points constituting the mask stay within the bounding box limits (rebound). As we mentioned previously, it is possible to specify the common percentage of maximal velocity (arbitrarily fixed to 10 m/s) assigned to each masking dot (from 0% to 100%). The algorithm is detailed in Appendix D.
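For illustration only (the authoritative algorithm is in Appendix D), a random mask dot could be generated roughly as follows in MATLAB, with random accelerations and a rebound at the box limits:

F = 100;  nFrames = 200;  dt = 1 / F;
Vmax = 10000;                             % 10 m/s expressed in mm/s
L = [0 1000; 0 1000; 0 1500];             % per-axis box limits (mm)
P = zeros(3, nFrames);
P(:, 1) = L(:, 1) + rand(3, 1) .* (L(:, 2) - L(:, 1));  % random start
V = zeros(3, 1);
for k = 2:nFrames
    V = V + (rand(3, 1) - 0.5) * Vmax;    % random instantaneous acceleration
    V = max(min(V, Vmax), -Vmax);         % clamp each component to the maximum
    P(:, k) = P(:, k - 1) + V * dt;
    out = P(:, k) < L(:, 1) | P(:, k) > L(:, 2);
    V(out) = -V(out);                     % rebound at the bounding box limits
    P(:, k) = min(max(P(:, k), L(:, 1)), L(:, 2));
end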

Scrambled mask

A scrambled mask is a set of point lights with the same trajectories as the initial point-light set (see Fig. 3). Only their starting positions are defined randomly (Bidet-Ildei et al., 2010). A control loop ensures that the mask stays within the bounding box limits. The number of scrambled masks (k) is proportional to the number of point lights (n) in the initial set. Note that k is limited by the relation k × n < 200.

Fig. 3

Static illustration of the addition of a scrambled mask (dots in green) to a PLD (dots in white) sequence. Here we have added two duplications for each point in the initial PLD sequence

Kinematic transformations

This series of tools aims to modify the dynamics of point-light displacement. There are two different types of transformations:

  • Norm of the velocity: In this case, the norm of the velocity of a point light is modified while the original point-light path is maintained (Bidet-Ildei et al., 2008; Martel et al., 2011).

  • Components of the velocity: In this case, the norm, components and path are modified (Elsner, Falck-Ytter, & Gredeback, 2012).

Importantly, whereas some authors have already developed tools to modify the dynamics of PLDs (such as frame scrambling; see van Boxtel & Lu, 2013), to our knowledge no tool allows users to automatically modify the dynamics of a motion without changing its spatial trajectory, or to modify each component of the motion independently.

Kinematic transformations based on changes in the norm

The norm of the velocity of a given point of light is classically computed at each frame, with

$$ \left\Vert V\right\Vert =\sqrt{V_X^2+{V}_Y^2+{V}_Z^2}. $$

All transformations detailed below allow the modification of the dynamics of the original sequence while maintaining the original trajectory and movement duration. One can apply each transformation to one or several markers at the same time. The velocity (V) and acceleration (A) of each point light are visible under the Scalars section of Mokka and can be easily exported to a .csv file with Mokka.

Constant norm

For this transformation, the components of a given point-light velocity are modified in order to achieve the following:

  1. Keep the original point-light path.

  2. Keep the original movement duration.

  3. Keep a constant norm of the given point-light velocity throughout the movement.

To achieve this, the following process is used:

  1. The length of the path is computed:

$$ \mathrm{L}=\sum \limits_{\mathrm{i}={\mathrm{t}}_0}^{{\mathrm{t}}_{\mathrm{f}}-1}\sqrt{{\left({\mathrm{X}}_{\mathrm{i}+1}-{\mathrm{X}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Y}}_{\mathrm{i}+1}-{\mathrm{Y}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Z}}_{\mathrm{i}+1}-{\mathrm{Z}}_{\mathrm{i}}\right)}^2} $$
  2. To travel the path entirely, a mean velocity is computed, taking into account the duration of the movement:

$$ \left\Vert \overline{\mathrm{V}}\right\Vert =\frac{\mathrm{L}}{{\mathrm{t}}_{\mathrm{f}}-{\mathrm{t}}_0} $$
  3. Let dt be the duration between two consecutive frames; the average distance between each pair of frames can be computed:

$$ \mathrm{d}=\left\Vert \overline{\mathrm{V}}\right\Vert \ast \mathrm{d}\mathrm{t} $$
  4. Then, the tangent unit vector (T) of the Frenet–Serret frame of the original path between the current frame and the next frame is computed.

  5. The modified trajectory is initialized with the original coordinates of the point light:

$$ \left[\begin{array}{c}{\mathrm{X}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Y}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Z}}_{{\mathrm{t}}_0}\end{array}\right] $$
  6. The next coordinates of the modified point light are finally computed as follows:

$$ \left[\begin{array}{c}{X}_{t_i}\\ {}{Y}_{t_i}\\ {}{Z}_{t_i}\end{array}\right]=\left[\begin{array}{c}{X}_{t_{i-1}}\\ {}{Y}_{t_{i-1}}\\ {}{Z}_{t_{i-1}}\end{array}\right]+\mathbf{T}\ast \mathrm{d} $$

As is illustrated in Fig. 4, the constant transformation modifies the different components of velocity to have a constant norm but maintains the original path of the point light. Interestingly, the zoom (Fig. 4C right) highlights that the point light travels the same path but does not reach each point of the path at the same time as before the transformation.
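The process above can be condensed into a few lines of MATLAB. In this sketch (toy data and our own variable names), linear interpolation along the cumulated arc length replaces the tangent-vector stepping of steps 4 to 6 while following the same sampled path:

P = cumsum(rand(3, 200), 2);            % toy 3 x nFrames point-light path (mm)
F = 100;  dt = 1 / F;  nFrames = size(P, 2);
seg = sqrt(sum(diff(P, 1, 2).^2, 1));   % distances between consecutive frames
Lpath = sum(seg);                       % step 1: length of the path
vbar = Lpath / ((nFrames - 1) * dt);    % step 2: mean velocity
d = vbar * dt;                          % step 3: distance covered per frame
s = [0 cumsum(seg)];                    % arc length at each original frame
Q = zeros(3, nFrames);
Q(:, 1) = P(:, 1);                      % step 5: same starting coordinates
for i = 2:nFrames
    Q(:, i) = interp1(s, P', min((i - 1) * d, Lpath))';   % steps 4 and 6
end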

Fig. 4

Graphic illustration of the constant norm transformation. (A) Tangential velocity observed on each component and on the norm before (in blue) and after (in red) the transformation. (B) Spatial position of each component before (in blue) and after (in red) the transformation. (C) 3-D trajectories before (in blue) and after (in red) the transformation

Inverse norm

For this transformation, the components of a given point-light velocity are modified in order to achieve the following:

  1. Keep the original point-light path.

  2. Keep the original movement duration.

  3. Obtain a norm of the given point-light velocity that is inverted with respect to the mean norm of the original velocity.

To achieve this, the following process is used:

  1. The length of the path is computed:

$$ \mathrm{L}=\sum \limits_{\mathrm{i}={\mathrm{t}}_0}^{{\mathrm{t}}_{\mathrm{f}}-1}\sqrt{{\left({\mathrm{X}}_{\mathrm{i}+1}-{\mathrm{X}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Y}}_{\mathrm{i}+1}-{\mathrm{Y}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Z}}_{\mathrm{i}+1}-{\mathrm{Z}}_{\mathrm{i}}\right)}^2} $$
  2. The mean norm velocity \( \left\Vert \overline{V}\right\Vert \) is obtained by:

$$ \left\Vert \overline{\mathrm{V}}\right\Vert =\frac{\sum_{\mathrm{i}=1}^{\mathrm{nFrames}}\left\Vert {\mathrm{V}}_{\left(\mathrm{i}\right)}\right\Vert }{\mathrm{nFrames}} $$
  3. The instantaneous inverted norm velocity \( \left\Vert {V}_t^{inv}\right\Vert \) is computed as follows:

$$ \left\Vert {\mathrm{V}}_{\mathrm{t}}^{\mathrm{inv}}\right\Vert =2\ast \left\Vert \overline{\mathrm{V}}\right\Vert -\left\Vert {\mathrm{V}}_{\left(\mathrm{t}\right)}\right\Vert $$
  4. Since it is possible to obtain a negative instantaneous inverted norm velocity, a control loop has been written, in two steps. The first step is to shift each value of \( \left\Vert {V}_t^{inv}\right\Vert \) so that the minimum value is no longer negative:

$$ if\ \min \left(\left\Vert {V}_t^{inv}\right\Vert \right)<0\ then\ \left\Vert {V}_t^{inv}\right\Vert =\left\Vert {V}_t^{inv}\right\Vert -\min \left(\left\Vert {V}_t^{inv}\right\Vert \right) $$

The second step consists of adjusting the corrected inverted norm velocity in order to guarantee a maximal difference between the original final position and the modified final position of less than 2 mm (see Appendix E).

  5. Let dt be the duration between two consecutive frames. The average distance between each pair of frames can be computed as follows:

$$ \mathrm{d}=\left\Vert {\mathrm{V}}_{\mathrm{t}}^{\mathrm{inv}}\right\Vert \ast \mathrm{d}\mathrm{t} $$
  6. Then, the tangent unit vector (T) of the Frenet–Serret frame of the original path between the current frame and the next frame is computed.

  7. The modified trajectory is initialized with the original coordinates of the point light:

$$ \left[\begin{array}{c}{\mathrm{X}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Y}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Z}}_{{\mathrm{t}}_0}\end{array}\right] $$
  8. The next coordinates of the modified point light are finally computed as follows:

$$ \left[\begin{array}{c}{X}_{t_i}\\ {}{Y}_{t_i}\\ {}{Z}_{t_i}\end{array}\right]=\left[\begin{array}{c}{X}_{t_{i-1}}\\ {}{Y}_{t_{i-1}}\\ {}{Z}_{t_{i-1}}\end{array}\right]+\mathbf{T}\ast \mathrm{d} $$
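A minimal MATLAB sketch of steps 2 to 4 follows (variable names and toy data are ours; the final adjustment of Appendix E is only hinted at in the closing comment):

dt = 0.01;
v = 500 * sin(linspace(0, pi, 200)).^2;  % toy norm-velocity profile (mm/s)
vbar = mean(v);                          % step 2: mean norm velocity
vinv = 2 * vbar - v;                     % step 3: inversion about the mean
if min(vinv) < 0
    vinv = vinv - min(vinv);             % step 4, first part: remove negative values
end
% Step 4, second part (Appendix E), then adjusts vinv, for instance by
% rescaling it so that the traveled distance sum(vinv) * dt brings the final
% position to within 2 mm of the original one.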

Accelerated norm

For this transformation, the components of a given point-light velocity are modified in order to achieve the following:

  1. Keep the original point-light path.

  2. Keep the original movement duration.

  3. Obtain a uniformly accelerated motion.

To achieve this transformation, we followed the process detailed below:

  1. The length of the path is computed:

$$ \mathrm{L}=\sum \limits_{\mathrm{i}={\mathrm{t}}_0}^{{\mathrm{t}}_{\mathrm{f}}-1}\sqrt{{\left({\mathrm{X}}_{\mathrm{i}+1}-{\mathrm{X}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Y}}_{\mathrm{i}+1}-{\mathrm{Y}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Z}}_{\mathrm{i}+1}-{\mathrm{Z}}_{\mathrm{i}}\right)}^2} $$
  2. The mean norm velocity \( \left\Vert \overline{V}\right\Vert \) is obtained by:

$$ \left\Vert \overline{\mathrm{V}}\right\Vert =\frac{\mathrm{L}}{{\mathrm{t}}_{\mathrm{f}}-{\mathrm{t}}_0} $$
  3. Then, the velocity profile V(t) is set as:

$$ {V}_{(t)}=2\left\Vert \overline{\mathrm{V}}\right\Vert \frac{t-{t}_0}{{t}_f-{t}_0} $$

The velocity thus increases linearly from zero and averages \( \left\Vert \overline{\mathrm{V}}\right\Vert \) over the movement, so the path length and duration are preserved.
  4. Let dt be the duration between two consecutive frames. The average distance between each pair of frames can be computed:

$$ \mathrm{d}={\mathrm{V}}_{\left(\mathrm{t}\right)}\ast \mathrm{d}\mathrm{t} $$
  5. The tangent unit vector (T) of the Frenet–Serret frame of the original path between the current frame and the next frame is computed.

  6. The modified trajectory is initialized with the original coordinates of the point light:

$$ \left[\begin{array}{c}{\mathrm{X}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Y}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Z}}_{{\mathrm{t}}_0}\end{array}\right] $$
  7. The next coordinates of the modified point light are finally computed as follows:

$$ \left[\begin{array}{c}{X}_{t_i}\\ {}{Y}_{t_i}\\ {}{Z}_{t_i}\end{array}\right]=\left[\begin{array}{c}{X}_{t_{i-1}}\\ {}{Y}_{t_{i-1}}\\ {}{Z}_{t_{i-1}}\end{array}\right]+\mathbf{T}\ast \mathrm{d} $$

Decelerated norm

For this transformation, the components of a given point-light velocity are modified in order to achieve the following:

  1. Keep the original point-light path.

  2. Keep the original movement duration.

  3. Obtain a uniformly decelerated motion.

To achieve this transformation, we followed the process detailed below:

  1. The length of the path is computed:

$$ \mathrm{L}=\sum \limits_{\mathrm{i}={\mathrm{t}}_0}^{{\mathrm{t}}_{\mathrm{f}}-1}\sqrt{{\left({\mathrm{X}}_{\mathrm{i}+1}-{\mathrm{X}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Y}}_{\mathrm{i}+1}-{\mathrm{Y}}_{\mathrm{i}}\right)}^2+{\left({\mathrm{Z}}_{\mathrm{i}+1}-{\mathrm{Z}}_{\mathrm{i}}\right)}^2} $$
  2. The mean norm velocity \( \left\Vert \overline{V}\right\Vert \) is obtained by:

$$ \left\Vert \overline{\mathrm{V}}\right\Vert =\frac{\mathrm{L}}{{\mathrm{t}}_{\mathrm{f}}-{\mathrm{t}}_0} $$
  3. Then, the velocity profile V(t) is set as:

$$ {V}_{(t)}=2\left\Vert \overline{\mathrm{V}}\right\Vert \left(1-\frac{t-{t}_0}{{t}_f-{t}_0}\right) $$

The velocity thus decreases linearly to zero and averages \( \left\Vert \overline{\mathrm{V}}\right\Vert \) over the movement, so the path length and duration are preserved.
  4. Let dt be the duration between two consecutive frames. The average distance between each pair of frames can be computed:

$$ \mathrm{d}={\mathrm{V}}_{\left(\mathrm{t}\right)}\ast \mathrm{d}\mathrm{t} $$
  5. The tangent unit vector (T) of the Frenet–Serret frame of the original path between the current frame and the next frame is computed.

  6. The modified trajectory is initialized with the original coordinates of the point light:

$$ \left[\begin{array}{c}{\mathrm{X}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Y}}_{{\mathrm{t}}_0}\\ {}{\mathrm{Z}}_{{\mathrm{t}}_0}\end{array}\right] $$
  7. The next coordinates of the modified point light are finally computed as follows:

$$ \left[\begin{array}{c}{X}_{t_i}\\ {}{Y}_{t_i}\\ {}{Z}_{t_i}\end{array}\right]=\left[\begin{array}{c}{X}_{t_{i-1}}\\ {}{Y}_{t_{i-1}}\\ {}{Z}_{t_{i-1}}\end{array}\right]+\mathbf{T}\ast \mathrm{d} $$
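Under the profiles given above, the accelerated and decelerated transformations share the same mean velocity and hence the same path length. A short MATLAB sketch (assumed names and toy values):

t0 = 0;  tf = 2;  t = linspace(t0, tf, 200);
vbar = 350;                                    % mean norm velocity L / (tf - t0), mm/s
vAcc = 2 * vbar * (t - t0) / (tf - t0);        % uniformly accelerated profile
vDec = 2 * vbar * (1 - (t - t0) / (tf - t0));  % uniformly decelerated profile
% trapz(t, vAcc) and trapz(t, vDec) both equal vbar * (tf - t0), i.e., the
% original path length L, so the duration and path are preserved.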

Transformations applied to each component of the velocity

These transformations can be applied component by component and point light by point light. Three transformations are available: constant, inverse, and manual. Each transformation can be accessed easily with the use of the popup menu at the top of each column of graphs (see Fig. 5).

Fig. 5

Illustration of the three types of transformations applicable for each component for one point light. The x velocity component is constrained to be constant throughout the movement, whereas the y velocity component is inverted, and the z velocity component is set manually. For each of the transformations, position, velocity, and acceleration are directly visible. Black lines represent the original movement, and blue lines represent the modified movement

Once a transformation is chosen for a point light and a velocity component, the new acceleration and coordinates are automatically computed. All the transformations are retained and definitively applied to the C3D file when the researcher closes the window. Consequently, it is not necessary to close the window after each point-light transformation.

As for the transformations applied to the norm of the velocity, when the C3D file is updated, the new velocity and acceleration components and norms are written and can be recorded in a .csv file.

Constant transformation

The process is divided into six steps:

  1. Keep the initial (\( {V}_{t_0} \)) and final (\( {V}_{t_f} \)) velocity components.

  2. The mean velocity component \( \overline{V} \) is obtained by:

$$ \overline{V}=\frac{\sum_{\mathrm{i}=1}^{\mathrm{nFrames}}{V}_{\left(\mathrm{i}\right)}}{\mathrm{nFrames}} $$
  3. Set the new velocity component to \( \overline{V} \) from 5% to 95% of the movement.

  4. Then, a shape-preserving piecewise cubic interpolation is performed from 0% to 5% and from 95% to 100% of the movement, in order to “connect” \( {V}_{t_0} \) and \( {V}_{t_f} \) to \( \overline{V} \).

  5. Compute the coordinates and acceleration.

  6. Use a control loop to guarantee a final gap between the original and modified point-light coordinates of less than 0.1 mm.
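A minimal MATLAB sketch of steps 2 to 5 for one velocity component, using pchip for the shape-preserving piecewise cubic interpolation (toy data; names are ours):

F = 100;  n = 200;  t = (0:n - 1) / F;
v = 300 + 200 * sin(2 * pi * t / t(end));  % toy velocity component (mm/s)
vbar = mean(v);                            % step 2: mean of the component
i5 = round(0.05 * n);  i95 = round(0.95 * n);
vnew = v;
vnew(i5:i95) = vbar;                       % step 3: constant from 5% to 95%
knots = t([1 i5 i95 n]);  vals = [v(1) vbar vbar v(n)];
vnew(1:i5) = pchip(knots, vals, t(1:i5));      % step 4: connect the start
vnew(i95:n) = pchip(knots, vals, t(i95:n));    % step 4: connect the end
x = cumtrapz(t, vnew);                     % step 5: integrate for coordinates
a = gradient(vnew, t);                     % step 5: differentiate for acceleration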

Inverse transformation

The process is divided into two steps:

  1. The mean velocity component \( \overline{V} \) is obtained by:

$$ \overline{V}=\frac{\sum_{\mathrm{i}=1}^{\mathrm{nFrames}}{V}_{\left(\mathrm{i}\right)}}{\mathrm{nFrames}} $$
  2. The instantaneous inverted velocity component is then computed as follows:

$$ {V}_t^{inv}=2\ast \overline{V}-{V}_{(t)} $$

Manual transformation

Manual transformation allows for the redefinition of the shape of the velocity component curve. For a C3D file of more than 20 frames, 19 movable points are added to the velocity curve (see Fig. 5, green circles). To move the checkpoint, users can left click, hold, and vertically drag the circle. When the left button is released, the velocity and acceleration components and the point-light coordinates are recomputed. Red circles mark the beginning and end time points and cannot be changed. As is shown in Fig. 5, shape-preserving piecewise cubic interpolation is performed between the clicked circle and the previous green (or red, if applicable) circle and between the clicked circle and the next green (or red, if applicable) circle.

Discussion and conclusion

As has been described by several authors, “visual processing of biological motion produced by living organisms is of immense value for successful daily life activities and, in particular, for adaptive social behavior and nonverbal communication” (Pavlova, 2012, p. 981). For more than 40 years, numerous studies have sought to better understand the mechanisms involved in this process, especially by studying the consequences of spatial or kinematic transformation in perceptual competencies.

PLAViMoP is a new program that enables users to visualize and transform 3-D point-light sequences. This innovative software presents several advantages for research and its applications.

First, thanks to this software, classical transformations of the spatial (e.g., modifying the orientation, adding masking dots, and scrambling the original motion) and kinematic (e.g., changing the norm of the velocity) characteristics of PLDs can be standardized using specific algorithms. This advance is important for scientists working on PLD sequences because it offers the possibility of working with similar stimuli. Indeed, by disambiguating some transformations, such as the application of scrambled modifications or the point of origin of a rotation, PLAViMoP will facilitate the reproducibility of data, a crucial methodological step toward a better understanding of the literature on the mechanisms sustaining PLD processing. Moreover, PLAViMoP allows the application of these transformations to 3-D sequences and presents new types of spatial and kinematic transformations (i.e., the spatial rotation of original PLDs for each limb constituting the sequence, the possibility of rotating the original PLD on the different axes [x, y, or z], or the possibility of separately modifying the kinematics of each component of the original PLD). These new functionalities introduce the possibility of better understanding the crucial characteristics involved in the recognition of PLDs. Future studies should be performed to assess the effects of these different transformations. Both the perceptual consequences of these transformations (recognition, detection, discrimination of the movement) and their implications for other cognitive or social functions (do these transformations modify the link between motion perception and the processing of language, numbers, or social activity?) should be investigated. Brain studies will also be valuable to investigate whether these transformations modify the brain networks classically observed in the perception of biological motion. The PLAViMoP software facilitates the implementation of these experiments because it allows the user to produce a series of .avi files that correspond exactly to the settings made in the PLAViMoP software (color and size of dots, point of view, orientation, kinematics, etc.). Afterward, these video files can easily be used with the Psychophysics Toolbox for MATLAB (http://psychtoolbox.org/), E-Prime (https://pstnet.com/welcome-to-e-prime-2-0/), or PsychoPy (http://www.psychopy.org) to design experiments. For example, using the PLAViMoP software, we recently created 125 videos and designed four experiments with E-Prime 2 in order to assess how motion characteristics (orientation and kinematics) can influence the link between action and language (Beauprez & Bidet-Ildei, 2018). All the stimuli used in these experiments are freely available on the PLAViMoP platform (http://plavimop.prd.fr/en/news/the-kinematics-not-the-orientation-of-an-action-influences-language-processing).

Second, PLAViMoP allows the use of the point-light display technique not only to study perceptual competencies but also to set up observational-learning protocols. In fact, the efficacy of observing someone performing the task to be learned is well documented in motor learning (see Gatti et al., 2013; Vogt & Thomaschke, 2007; Wolpert, Diedrichsen, & Flanagan, 2011, for reviews). Interestingly, the beneficial effects of observation prior to physical practice also appear when actions are presented as real or point-light videos (e.g., Hayes, Hodges, Scott, Horn, & Williams, 2007a; Horn, Williams, & Scott, 2002). However, understanding the processes underlying observational learning generally requires transforming the videos, for example by manipulating the characteristics of the model’s performance (e.g., Andrieux & Proteau, 2014; Blandin, Lhuisset, & Proteau, 1999; Rohbanfard & Proteau, 2011) or its stability across trials (e.g., Buchanan & Dean, 2014). Other common procedures require displaying naturalistic or constant limb velocity (e.g., Roberts, Bennett, Elliot, & Hayes, 2015) or limb or joint occlusion (e.g., Hayes, Hodges, Huys, & Mark Williams, 2007b; Mann, Abernethy, Farrow, Davis, & Spratford, 2010; Mulligan, Lohse, & Hodges, 2016) to determine which components of an action are essential to the learning processes. With the spatial and kinematic transformations it includes, PLAViMoP is a powerful tool that can be used for a better understanding of observational-learning processes.

With a more applied focus, researchers have demonstrated the effectiveness of action observation in motor performance and motor rehabilitation, as well as in the treatment of language disorders. For example, to learn complex motor skills involved in volleyball (Weeks & Anderson, 2000), football (Horn et al., 2002), cricket bowling (Breslin, Hodges, & Williams, 2009), or golf (D’Innocenzo, Gonzalez, Williams, & Bishop, 2016), observing someone performing the action to be practiced enhances learning. Furthermore, observational learning has also been demonstrated to be efficient in the rehabilitation of patients suffering from motor disorders (see Abbruzzese, Avanzino, Marchese, & Pelosin, 2015, for a review) and in the recovery of postsurgical orthopedic intervention (Bellelli, Buccino, Bernardini, Padovani, & Trabucchi, 2010; Park, Song, & Kim, 2014). Therefore, the systematic observation of daily actions, followed by their execution, becomes a rehabilitative strategy to accelerate the functional recovery in patients with motor impairment (Ertelt et al., 2007). Finally, in light of the action–language link (see Fischer & Zwaan, 2008; Pulvermüller, 2005; Willems & Hagoort, 2007, for reviews), it has been shown that rehabilitation based on the observation of actions efficiently aids the recovery of word forms in aphasic patients (Marangolo et al., 2010; see Ertelt & Binkofski, 2012, for a review).

However, the recording of videos is often difficult in professional situations that do not always provide the materials necessary for motion capture. Moreover, even if users have access to a video recording system, the videos generally represent the motions as produced, that is, without transformation. PLAViMoP allows therapists and coaches to modify the original videos to accentuate the processing of motion or to complicate or simplify the motion perceived. This feature could have a specific application to motor learning, to improve both global and specific learning and to optimize transfer (Robin, Toussaint, Blandin, & Proteau, 2005). Moreover, this software makes it possible to follow the evolution of patients’ motor capacities. For example, if a patient has undergone a knee operation, the “observation therapy” could initially be based on videos of movements with the knee blocked on the three axes; each axis of motion could then gradually be unlocked to portray the evolving possibilities for the patient’s motor production (Moon, Robson, Langari, & Buchanan, 2012, 2015).

In conclusion, the PLAViMoP software is the first free program that allows users to visualize and transform PLDs without the need for computer programming skills. It will undoubtedly facilitate the replication of scientific data. It will also allow professionals (teachers in adapted physical activities, sports trainers, etc.) to access the PLD technique, which could be used for learning new sporting gestures, developing perceptual anticipation skills, or rehabilitating patients with motor disorders. Future steps will be to enrich the functionalities via plug-ins and to develop the program for other operating systems.

Author note

Support for this research was provided by a grant from La Région Nouvelle Aquitaine (CPER-FEDER P-2017-BAFE-68), in partnership with the European Union (FEDER/ERDF, European Regional Development Fund), and by the French government research program Investissements d’Avenir through the Robotex Equipment of Excellence (ANR-10-EQPX-44). This work was a part of the Ph.D. program of S.-A.B.

Notes

  1. PLAViMoP has been registered with the « Agence pour la Protection des Programmes » since May 2017 (Inter Deposit Digital number: IDDN.FR.001.200011.000.S.P.2017.000.31235).

  2. The PLAViMoP software is one component of the PLAViMoP platform. The second component is the PLAViMoP Database, a new, freely accessible database with several point-light motions representing human movements.

  3. New functionalities should be programmed in MATLAB. Only users who have the status of “contributor” can propose new functionalities to enrich the PLAViMoP software.

  4. For sample C3D files, please visit the following websites: http://www.rockthe3d.com/100-best-free-motion-capture-files/ and http://mocapclub.com/Pages/MonthlyMocap.htm. C3D files will also be available from April 2018 on our platform: http://plavimop.prd.fr.

  5. If you have PLDs in another format, you can use the function “CSV2C3D,” proposed as a plug-in for the PLAViMoP software. In this case, the .csv file should contain all the information necessary to build a C3D file (3-D time histories of markers, names of marker components, and a time column).

  6. Here we describe only a few possible applications of Mokka. For an overview of all functionalities, please consult the Help menu of Mokka, available at this address: http://biomechanical-toolkit.github.io/docs/Mokka/index.html.

References

  1. Abbruzzese, G., Avanzino, L., Marchese, R., & Pelosin, E. (2015). Action observation and motor imagery: Innovative cognitive tools in the rehabilitation of Parkinson’s Disease. Parkinson’s Disease, 2015, 124214. https://doi.org/10.1155/2015/124214

  2. Anderson, L. C., Bolling, D. Z., Schelinski, S., Coffman, M. C., Pelphrey, K. A., & Kaiser, M. D. (2013). Sex differences in the development of brain mechanisms for processing biological motion. NeuroImage, 83, 751–760. https://doi.org/10.1016/j.neuroimage.2013.07.040

  3. Andrieux, M., & Proteau, L. (2014). Mixed observation favors motor learning through better estimation of the model’s performance. Experimental Brain Research, 232, 3121–3132. https://doi.org/10.1007/s00221-014-4000-3

  4. Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33, 717–746.

  5. Badets, A., Bidet-Ildei, C., & Pesenti, M. (2015). Influence of biological kinematics on abstract concept processing. Quarterly Journal of Experimental Psychology, 68, 608–618. https://doi.org/10.1080/17470218.2014.964737

  6. Bardi, L., Regolin, L., & Simion, F. (2011). Biological motion preference in humans at birth: Role of dynamic and configural properties. Developmental Science, 14, 353–359. https://doi.org/10.1111/j.1467-7687.2010.00985.x

  7. Bardi, L., Regolin, L., & Simion, F. (2014). The first time ever I saw your feet: Inversion effect in newborns’ sensitivity to biological motion. Developmental Psychology, 50, 986–993. https://doi.org/10.1037/a0034678

  8. Barre, A., & Armand, S. (2014). Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data. Computer Methods and Programs in Biomedicine, 114, 80–87. https://doi.org/10.1016/j.cmpb.2014.01.012

  9. Beardsworth, T., & Buckner, T. (1981). The ability to recognize oneself from a video recording of one’s movements without seeing one’s body. Bulletin of the Psychonomic Society, 18, 19–22.

  10. Beauprez, S.-A., & Bidet-Ildei, C. (2017). Perceiving a biological human movement facilitates action verb processing. Current Psychology. https://doi.org/10.1007/s12144-017-9694-5

  11. Beauprez, S.-A., & Bidet-Ildei, C. (2018). The kinematics, not the orientation, of an action influences language processing. Journal of Experimental Psychology: Human Perception and Performance.

  12. Beintema, J. A., & Lappe, M. (2002). Perception of biological motion without local image motion. Proceedings of the National Academy of Sciences, 99, 5661–5663.

  13. Bellelli, G., Buccino, G., Bernardini, B., Padovani, A., & Trabucchi, M. (2010). Action observation treatment improves recovery of postsurgical orthopedic patients: Evidence for a top-down effect? Archives of Physical Medicine and Rehabilitation, 91, 1489–1494. https://doi.org/10.1016/j.apmr.2010.07.013

  14. Bertenthal, B. I., & Pinto, J. (1994). Global processing of biological motions. Psychological Science, 5, 221–225. https://doi.org/10.1111/j.1467-9280.1994.tb00504.x

  15. Bertenthal, B. I., Proffitt, D. R., & Cutting, J. E. (1984). Infant sensitivity to figural coherence in biomechanical motions. Journal of Experimental Child Psychology, 37, 213–230.

  16. Bertenthal, B. I., Proffitt, D. R., & Kramer, S. J. (1987). Perception of biomechanical motions by infants: implementation of various processing constraints. Journal of Experimental Psychology: Human Perception and Performance, 13, 577–585. https://doi.org/10.1037/0096-1523.13.4.577

  17. Bertenthal, B. I., Proffitt, D. R., Spetner, N. B., & Thomas, M. A. (1985). The development of infant sensitivity to biomechanical motions. Child Development, 56, 531–543.

  18. Bidet-Ildei, C., Chauvin, A., & Coello, Y. (2010). Observing or producing a motor action improves later perception of biological motion: Evidence for a gender effect. Acta Psychologica, 134, 215–224. https://doi.org/10.1016/j.actpsy.2010.02.002

  19. Bidet-Ildei, C., Gimenes, M., Toussaint, L., Almecija, Y., & Badets, A. (2017). Sentence plausibility influences the link between action words and the perception of biological human movements. Psychological Research, 81, 806–813. https://doi.org/10.1007/s00426-016-0776-z

  20. Bidet-Ildei, C., Gimenes, M., Toussaint, L., Beauprez, S.-A., & Badets, A. (2017). Painful semantic context modulates the relationship between action words and biological movement perception. Journal of Cognitive Psychology, 29, 821–831. https://doi.org/10.1080/20445911.2017.1322093

  21. Bidet-Ildei, C., Kitromilides, E., Orliaguet, J. P., Pavlova, M., & Gentaz, E. (2014). Preference for point-light human biological motion in newborns: Contribution of translational displacement. Developmental Psychology, 50, 113–120. https://doi.org/10.1037/a0032956

  22. Bidet-Ildei, C., Kitromilides-Salerio, E., Orliaguet, J. P., & Badets, A. (2011). Perceptual judgements of handwriting and pointing movements: Influence of kinematics rules. In A. M. Columbus (Ed.), Advances in psychology research (Vol. 77, pp. 307–316). New York: Nova Science.

  23. Bidet-Ildei, C., Meary, D., & Orliaguet, J. P. (2008). Visual preference for isochronic movement does not necessarily emerge from movement kinematics: A challenge for the motor simulation theory. Neuroscience Letters, 430, 236–240. https://doi.org/10.1016/j.neulet.2007.10.040

  24. Bidet-Ildei, C., Orliaguet, J. P., & Coello, Y. (2011). Rôle des représentations motrices dans la perception visuelle des mouvements humains. L’Année Psychologique, 111, 409–445. https://doi.org/10.4074/S0003503311002065

  25. Bidet-Ildei, C., Orliaguet, J. P., Sokolov, A. N., & Pavlova, M. (2006). Perception of elliptic biological motion. Perception, 35, 1137–1147.

  26. Bidet-Ildei, C., Sparrow, L., & Coello, Y. (2011). Reading action word affects the visual perception of biological motion. Acta Psychologica, 137, 330–334. https://doi.org/10.1016/j.actpsy.2011.04.001

  27. Bidet-Ildei, C., & Toussaint, L. (2015). Are judgments for action verbs and point-light human actions equivalent? Cognitive Processing, 16, 57–67. https://doi.org/10.1007/s10339-014-0634-0

  28. Blake, R., & Shiffrar, M. (2007). Perception of human motion. Annual Review of Psychology, 58, 47–73. https://doi.org/10.1146/annurev.psych.57.102904.190152

  29. Blandin, Y., Lhuisset, L., & Proteau, L. (1999). Cognitive processes underlying observational learning of motor skills. Quarterly Journal of Experimental Psychology, 52A, 957–979. https://doi.org/10.1080/713755856

  30. Bonda, E., Petrides, M., Ostry, D., & Evans, A. (1996). Specific involvement of human parietal systems and the amygdala in the perception of biological motion. Journal of Neuroscience, 16, 3737–3744.

  31. Bouquet, C. A., Gaurier, V., Shipley, T., Toussaint, L., & Blandin, Y. (2007). Influence of the perception of biological or non-biological motion on movement execution. Journal of Sports Science, 25, 519–530.

  32. Breslin, G., Hodges, N. J., & Williams, A. M. (2009). Effect of information load and time on observational learning. Research Quarterly for Exercise and Sport, 80, 480–490. https://doi.org/10.1080/02701367.2009.10599586

  33. Chaminade, T., Meary, D., Orliaguet, J. P., & Decety, J. (2001). Is perceptual anticipation a motor simulation? A PET study. NeuroReport, 12, 3669–3674.

  34. Chandrasekaran, C., Turner, L., Bülthoff, H. H., & Thornton, I. M. (2010). Attentional networks and biological motion. Psihologija, 43, 5–20.

  35. Chang, D. H., & Troje, N. F. (2008). Perception of animacy and direction from local biological motion signals. Journal of Vision, 8(5):3, 1–10. https://doi.org/10.1167/8.5.3

  36. Chang, D. H., & Troje, N. F. (2009). Characterizing global and local mechanisms in biological motion perception. Journal of Vision, 9(5):8, 1–10. https://doi.org/10.1167/9.5.8

  37. Chary, C., Méary, D., Orliaguet, J. P., David, D., Moreaud, O., & Kandel, S. (2004). Influence of motor disorders on the visual perception of human movements in a case of peripheral dysgraphia. Neurocase, 10, 223–232. https://doi.org/10.1080/13554790490495113

  38. Chouchourelou, A., Matsuka, T., Harber, K., & Shiffrar, M. (2006). The visual analysis of emotional actions. Social Neuroscience, 1, 63–74.

  39. Clarke, T. J., Bradshaw, M. F., Field, D. T., Hampson, S. E., & Rose, D. (2005). The perception of emotion from body movement in point-light displays of interpersonal dialogue. Perception, 34, 1171–1180.

  40. Cusack, J. P., Williams, J. H. G., & Neri, P. (2015). Action perception is intact in autism spectrum disorder. Journal of Neuroscience, 35, 1849–1857. https://doi.org/10.1523/JNEUROSCI.4133-13.2015

  41. Cutting, J. E. (1978). Generation of synthetic male and female walkers through manipulation of a biomechanical invariant. Perception, 7, 393–405.

  42. Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception & Psychophysics, 44, 339–347.

  43. D’Innocenzo, G., Gonzalez, C. C., Williams, A. M., & Bishop, D. T. (2016). Looking to learn: The effects of visual guidance on observational learning of the golf swing. PLoS ONE, 11, e0155442. https://doi.org/10.1371/journal.pone.0155442

  44. Daems, A., & Verfaillie, K. (1999). Viewpoint-dependent priming effects in the perception of human actions and body postures. Visual Cognition, 6, 665–693.

  45. Davila, A., Schouten, B., & Verfaillie, K. (2014). Perceiving the direction of articulatory motion in point-light actions. PLoS ONE, 9, e115117. https://doi.org/10.1371/journal.pone.0115117

  46. Dehaene, S. (1992). Varieties of numerical abilities. Cognition, 44, 1–42. https://doi.org/10.1016/0010-0277(92)90049-N

  47. Dittrich, W. H. (1993). Action categories and the perception of biological motion. Perception, 22, 15–22. https://doi.org/10.1068/p220015

  48. Dittrich, W. H., Troscianko, T., Lea, S. E., & Morgan, D. (1996). Perception of emotion from dynamic point-light displays represented in dance. Perception, 25, 727–738.

  49. Elsner, C., Falck-Ytter, T., & Gredeback, G. (2012). Humans anticipate the goal of other people’s point-light actions. Frontiers in Psychology, 3, 120. https://doi.org/10.3389/fpsyg.2012.00120

  50. Ertelt, D., & Binkofski, F. (2012). Action observation as a tool for neurorehabilitation to moderate motor deficits and aphasia following stroke. Neural Regeneration Research, 7, 2063–2074. https://doi.org/10.3969/j.issn.1673-5374.2012.26.008

  51. Ertelt, D., Small, S., Solodkin, A., Dettmers, C., McNamara, A., Binkofski, F., & Buccino, G. (2007). Action observation has a positive impact on rehabilitation of motor deficits after stroke. NeuroImage, 36(Suppl. 2), T164–T173.

  52. Fischer, M. H., & Zwaan, R. A. (2008). Embodied language: A review of the role of the motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825–850.

  53. Freire, A., Lewis, T. L., Maurer, D., & Blake, R. (2006). The development of sensitivity to biological motion in noise. Perception, 35, 647–657.

  54. Freitag, C. M., Konrad, C., Häberlen, M., Kleser, C., von Gontard, A., Reith, W., … Krick, C. (2008). Perception of biological motion in autism spectrum disorders. Neuropsychologia, 46, 1480–1494. https://doi.org/10.1016/j.neuropsychologia.2007.12.025

  55. Galazka, M. A., Roché, L., Nyström, P., & Falck-Ytter, T. (2014). Human infants detect other people’s interactions based on complex patterns of kinematic information. PLoS ONE, 9, e112432. https://doi.org/10.1371/journal.pone.0112432

  56. Garcia, J. O., & Grossman, E. D. (2008). Necessary but not sufficient: Motion perception is required for perceiving biological motion. Vision Research, 48, 1144–1149. https://doi.org/10.1016/j.visres.2008.01.027

  57. Gatti, R., Tettamanti, A., Gough, P. M., Riboldi, E., Marinoni, L., & Buccino, G. (2013). Action observation versus motor imagery in learning a complex motor task: A short review of literature and a kinematics study. Neuroscience Letters, 540, 37–42. https://doi.org/10.1016/j.neulet.2012.11.039

  58. Giese, M. A., & Poggio, T. (2003). Neural mechanisms for the recognition of biological movements. Nature Reviews Neuroscience, 4, 179–192. https://doi.org/10.1038/nrn1057

  59. Grèzes, J., Fonlupt, P., Bertenthal, B., Delon-Martin, C., Segebarth, C., & Decety, J. (2001). Does perception of biological motion rely on specific brain regions? NeuroImage, 13, 775–785. https://doi.org/10.1006/nimg.2000.0740

  60. Grossman, E., Donnelly, M., Price, R., Pickens, D., Morgan, V., Neighbor, G., & Blake, R. (2000). Brain areas involved in perception of biological motion. Journal of Cognitive Neuroscience, 12, 711–720. https://doi.org/10.1162/089892900562417

  61. Grossman, E. D., Battelli, L., & Pascual-Leone, A. (2005). Repetitive TMS over posterior STS disrupts perception of biological motion. Vision Research, 45, 2847–2853. https://doi.org/10.1016/j.visres.2005.05.027

  62. Grossman, E. D., & Blake, R. (2001). Brain activity evoked by inverted and imagined biological motion. Vision Research, 41, 1475–1482.

  63. Grossman, E. D., & Blake, R. (2002). Brain areas active during visual perception of biological motion. Neuron, 35, 1167–1175.

  64. Hayes, S. J., Hodges, N. J., Scott, M. A., Horn, R. R., & Williams, A. M. (2007a). The efficacy of demonstrations in teaching children an unfamiliar movement skill: The effects of object-orientated actions and point-light demonstrations. Journal of Sports Science, 25, 559–575.

  65. Hayes, S. J., Hodges, N. J., Huys, R., & Mark Williams, A. (2007b). Endpoint focus manipulations to determine what information is used during observational learning. Acta Psychologica, 126, 120–137.

  66. Hirai, M., & Hiraki, K. (2005). An event-related potentials study of biological motion perception in human infants. Cognitive Brain Research, 22, 301–304.

  67. Hirai, M., Senju, A., Fukushima, H., & Hiraki, K. (2005). Active processing of biological motion perception: an ERP study. Cognitive Brain Research, 23, 387–396.

  68. Hiris, E. (2007). Detection of biological and nonbiological motion. Journal of Vision, 7(12):4, 1–16. https://doi.org/10.1167/7.12.4

  69. Hiris, E., Humphrey, D., & Stout, A. (2005). Temporal properties in masking biological motion. Perception & Psychophysics, 67, 435–443.

  70. Hiris, E., Krebeck, A., Edmonds, J., & Stout, A. (2005). What learning to see arbitrary motion tells us about biological motion perception. Journal of Experimental Psychology: Human Perception and Performance, 31, 1096–1106. https://doi.org/10.1037/0096-1523.31.5.1096

  71. Horn, R. R., Williams, A. M., & Scott, M. A. (2002). Learning from demonstrations: the role of visual search during observational learning from video and point-light models. Journal of Sports Science, 20, 253–269.

  72. Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., & Rizzolatti, G. (2005). Grasping the intentions of others with one’s own mirror neuron system. PLoS Biology, 3, e79.

  73. Ikeda, H., Blake, R., & Watanabe, K. (2005). Eccentric perception of biological motion is unscalably poor. Vision Research, 45, 1935–1943.

  74. Jastorff, J., Kourtzi, Z., & Giese, M. A. (2006). Learning to discriminate complex movements: biological versus artificial trajectories. Journal of Vision, 6(8):3, 791–804. https://doi.org/10.1167/6.8.3

  75. Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14, 201–211. https://doi.org/10.3758/BF03212378

  76. Jokisch, D., Daum, I., Suchan, B., & Troje, N. F. (2005). Structural encoding and recognition of biological motion: Evidence from event-related potentials and source analysis. Behavioural Brain Research, 157, 195–204. https://doi.org/10.1016/j.bbr.2004.06.025

  77. Jokisch, D., & Troje, N. F. (2003). Biological motion as a cue for the perception of size. Journal of Vision, 3(4):1, 252–264. https://doi.org/10.1167/3.4.1

  78. Jung, W. H., Gu, B.-M., Kang, D.-H., Park, J.-Y., Yoo, S. Y., Choi, C.-H., … Kwon, J. S. (2009). BOLD response during visual perception of biological motion in obsessive-compulsive disorder. European Archives of Psychiatry and Clinical Neuroscience, 259, 46. https://doi.org/10.1007/s00406-008-0833-8

  79. Kaiser, M. D., Hudac, C. M., Shultz, S., Lee, S. M., Cheung, C., Berken, A. M., … Pelphrey, K. A. (2010). Neural signatures of autism. Proceedings of the National Academy of Sciences, 107, 21223–21228. https://doi.org/10.1073/pnas.1010412107

  80. Kim, J., Doop, M. L., Blake, R., & Park, S. (2005). Impaired visual recognition of biological motion in schizophrenia. Schizophrenia Research, 77, 299–307.

  81. Kim, J., Jung, E. L., Lee, S.-H., & Blake, R. (2015). A new technique for generating disordered point-light animations for the study of biological motion perception. Journal of Vision, 15(11), 13. https://doi.org/10.1167/15.11.13

  82. Klin, A., Lin, D. J., Gorrindo, P., Ramsay, G., & Jones, W. (2009). Two-year-olds with autism orient to non-social contingencies rather than biological motion. Nature, 459, 257–261. https://doi.org/10.1038/nature07868

  83. Koldewyn, K., Whitney, D., & Rivera, S. M. (2010). The psychophysics of visual motion and global form processing in autism. Brain, 133, 599–610. https://doi.org/10.1093/brain/awp272

  84. Kozlowski, L., & Cutting, J. E. (1977). Recognizing the sex of a walker from dynamic point-light displays. Perception & Psychophysics, 21, 575–580.

  85. Legault, I., Troje, N. F., & Faubert, J. (2012). Healthy older observers cannot use biological-motion point-light information efficiently within 4 m of themselves. I-Perception, 3, 104–111. https://doi.org/10.1068/i0485

  86. Louis-Dam, A., Orliaguet, J.-P., & Coello, Y. (1999). Perceptual anticipation in grasping movement: When does it become possible? In M. A. Grealy & J. A. Thomson (Eds.), Studies in perception and action V: Tenth International Conference on Perception and Action (pp. 135–139). Mahwah: Erlbaum.

  87. Loula, F., Prasad, S., Harber, K., & Shiffrar, M. (2005). Recognizing people from their movement. Journal of Experimental Psychology: Human Perception and Performance, 31, 210–220. https://doi.org/10.1037/0096-1523.31.1.210

  88. Marangolo, P., Bonifazi, S., Tomaiuolo, F., Craighero, L., Coccia, M., Altoe, G., … Cantagallo, A. (2010). Improving language without words: First evidence from aphasia. Neuropsychologia, 48, 3824–3833. https://doi.org/10.1016/j.neuropsychologia.2010.09.025

  89. Martel, L., Bidet-Ildei, C., & Coello, Y. (2011). Anticipating the terminal position of an observed action: Effect of kinematic, structural, and identity information. Visual Cognition, 19, 785–798. https://doi.org/10.1080/13506285.2011.587847

  90. Meary, D., Kitromilides, E., Mazens, K., Graff, C., & Gentaz, E. (2007). Four-day-old human neonates look longer at non-biological motions of a single point-of-light. PLoS ONE, 2, e186. https://doi.org/10.1371/journal.pone.0000186

  91. Moon, H., Robson, N. P., Langari, R., & Buchanan, J. J. (2012). Experimental observations on the human arm motion planning under an elbow joint constraint. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2012 (pp. 3870–3873). Piscataway: IEEE Press. doi:10.1109/EMBC.2012.6346812

  92. Moon, H., Robson, N. P., Langari, R., & Buchanan, J. J. (2015). Experimental observations on human reaching motion planning with and without reduced mobility. In W. Adams (Ed.), Robot kinematics and motion planning. New York: Nova Science.

  93. Nackaerts, E., Wagemans, J., Helsen, W., Swinnen, S. P., Wenderoth, N., & Alaerts, K. (2012). Recognizing biological motion and emotions from point-light displays in autism spectrum disorders. PLoS ONE, 7, e44473. https://doi.org/10.1371/journal.pone.0044473

  94. Neri, P., & Levi, D. M. (2007). Temporal dynamics of figure–ground segregation in human vision. Journal of Neurophysiology, 97, 951–957. https://doi.org/10.1152/jn.00753.2006

  95. Orban de Xivry, J. J., Coppe, S., Lefevre, P., & Missal, M. (2010). Biological motion drives perception and action. Journal of Vision, 10(2):6, 1–11. https://doi.org/10.1167/10.2.6

  96. Park, S. D., Song, H. S., & Kim, J. Y. (2014). The effect of action observation training on knee joint function and gait ability in total knee replacement patients. Journal of Exercise Rehabilitation, 10, 168–171. https://doi.org/10.12965/jer.140112

  97. Pavlova, M. (2012). Biological motion processing as a hallmark of social cognition. Cerebral Cortex, 22, 981–995. https://doi.org/10.1093/cercor/bhr156

  98. Pavlova, M., Bidet-Ildei, C., Sokolov, A. N., Braun, C., & Krageloh-Mann, I. (2009). Neuromagnetic response to body motion and brain connectivity. Journal of Cognitive Neuroscience, 21, 837–846.

  99. Pavlova, M., Krageloh-Mann, I., Sokolov, A., & Birbaumer, N. (2001). Recognition of point-light biological motion displays by young children. Perception, 30, 925–933.

  100. Pavlova, M., & Sokolov, A. (2000). Orientation specificity in biological motion perception. Perception & Psychophysics, 62, 889–899.

  101. Pavlova, M., & Sokolov, A. (2003). Prior knowledge about display inversion in biological motion perception. Perception, 32, 937–946.

  102. Pavlova, M., Sokolov, A. N., & Bidet-Ildei, C. (2015). Sex differences in the neuromagnetic cortical response to biological motion. Cerebral Cortex, 25, 3468–3474. https://doi.org/10.1093/cercor/bhu175

  103. Pavlova, M., Staudt, M., Sokolov, A., Birbaumer, N., & Krageloh-Mann, I. (2003). Perception and production of biological movement in patients with early periventricular brain lesions. Brain, 126, 692–701.

  104. Peelen, M. V., Wiggett, A. J., & Downing, P. E. (2006). Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion. Neuron, 49, 815–822. https://doi.org/10.1016/j.neuron.2006.02.004

  105. Peuskens, H., Vanrie, J., Verfaillie, K., & Orban, G. A. (2005). Specificity of regions processing biological motion. European Journal of Neuroscience, 21, 2864–2875.

  106. Pilz, K. S., Bennett, P. J., & Sekuler, A. B. (2010). Effects of aging on biological motion discrimination. Vision Research, 50, 211–219. https://doi.org/10.1016/j.visres.2009.11.014

  107. Pinto, J., & Shiffrar, M. (1999). Subconfigurations of the human form in the perception of biological motion displays. Acta Psychologica, 102, 293–318.

  108. Pollick, F. E., Kay, J. W., Heim, K., & Stringer, R. (2005). Gender recognition from point-light walkers. Journal of Experimental Psychology: Human Perception and Performance, 31, 1247–1265. https://doi.org/10.1037/0096-1523.31.6.1247

  109. Pozzo, T., Papaxanthis, C., Petit, J. L., Schweighofer, N., & Stucchi, N. (2006). Kinematic features of movement tunes perception and action coupling. Behavioural Brain Research, 169, 75–82.

  110. Pulvermüller, F. (2005). Brain mechanisms linking language and action. Nature Reviews Neuroscience, 6, 576–582. https://doi.org/10.1038/nrn1706

  111. Rehg, J. M., Morris, D. D., & Kanade, T. (2003). Ambiguities in visual tracking of articulated objects using two- and three-dimensional models. International Journal of Robotics Research, 22, 393–418. https://doi.org/10.1177/0278364903022006004

  112. Reid, V. M., Hoehl, S., & Striano, T. (2006). The perception of biological motion by infants: An event-related potential study. Neuroscience Letters, 395, 211–214.

  113. Robin, C., Toussaint, L., Blandin, Y., & Proteau, L. (2005). Specificity of learning in a video-aiming task: Modifying the salience of dynamic visual cues. Journal of Motor Behavior, 37, 367–376. https://doi.org/10.3200/JMBR.37.5.367-376

  114. Rohbanfard, H., & Proteau, L. (2011). Learning through observation: A combination of expert and novice models favors learning. Experimental Brain Research, 215(3‑4), 183–197. https://doi.org/10.1007/s00221-011-2882-x

  115. Runeson, S., & Frykholm, G. (1981). Visual perception of lifted weight. Journal of Experimental Psychology: Human Perception and Performance, 7, 733–740. https://doi.org/10.1037/0096-1523.7.4.733

  116. Saunier, G., Martins, E. F., Dias, E. C., de Oliveira, J. M., Pozzo, T., & Vargas, C. D. (2013). Electrophysiological correlates of biological motion permanence in humans. Behavioural Brain Research, 236, 166–174. https://doi.org/10.1016/j.bbr.2012.08.038

  117. Saygin, A. P., Wilson, S. M., Hagler, D. J., Jr., Bates, E., & Sereno, M. I. (2004). Point-light biological motion perception activates human premotor cortex. Journal of Neuroscience, 24, 6181–6188. https://doi.org/10.1523/JNEUROSCI.0504-04.2004

  118. Shipley, T. F. (2003). The effect of object and event orientation on perception of biological motion. Psychological Science, 14, 377–380.

  119. Shipley, T. F., & Brumberg, J. S. (2004). Markerless motion-capture for point-light displays. Retrieved from http://astro.temple.edu/~tshipley/mocap/dotMovie.html

  120. Simion, F., Regolin, L., & Bulf, H. (2008). A predisposition for biological motion in the newborn baby. Proceedings of the National Academy of Sciences, 105, 809–813. https://doi.org/10.1073/pnas.0707021105

  121. Sokolov, A. A., Gharabaghi, A., Tatagiba, M. S., & Pavlova, M. (2010). Cerebellar engagement in an action observation network. Cerebral Cortex, 20, 486–491.

  122. Spencer, J. M. Y., Sekuler, A. B., Bennett, P. J., Giese, M. A., & Pilz, K. S. (2016). Effects of aging on identifying emotions conveyed by point-light walkers. Psychology and Aging, 31, 126–138. https://doi.org/10.1037/a0040009

  123. Stadler, W., Springer, A., Parkinson, J., & Prinz, W. (2012). Movement kinematics affect action prediction: comparing human to non-human point-light actions. Psychological Research, 76, 395–406. https://doi.org/10.1007/s00426-012-0431-2

  124. Sumi, S. (1984). Upside-down presentation of the Johansson moving light-spot pattern. Perception, 13, 283–286.

  125. Thoresen, J. C., Vuong, Q. C., & Atkinson, A. P. (2012). First impressions: gait cues drive reliable trait judgements. Cognition, 124, 261–271. https://doi.org/10.1016/j.cognition.2012.05.018

  126. Thornton, I. M., Pinto, J., & Shiffrar, M. (1998). The visual perception of human locomotion. Cognitive Neuropsychology, 15, 535–552.

  127. Thornton, I. M., Rensink, R. A., & Shiffrar, M. (2002). Active versus passive processing of biological motion. Perception, 31, 837–853.

  128. Thurman, S. M., & Grossman, E. D. (2008). Temporal “Bubbles” reveal key features for point-light biological motion perception. Journal of Vision, 8(3):28, 1–11. https://doi.org/10.1167/8.3.28

  129. Thurman, S. M., & Lu, H. (2014). Perception of social interactions for spatially scrambled biological motion. PLoS ONE, 9, e112539. https://doi.org/10.1371/journal.pone.0112539

  130. Troje, N. F. (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2(5):2, 371–387. https://doi.org/10.1167/2.5.2

  131. Troje, N. F., Sadr, J., Geyer, H., & Nakayama, K. (2006). Adaptation aftereffects in the perception of gender from biological motion. Journal of Vision, 6(8):7, 850–857. https://doi.org/10.1167/6.8.7

  132. Troje, N. F., & Westhoff, C. (2006). The inversion effect in biological motion perception: Evidence for a “life detector”? Current Biology, 16, 821–824. https://doi.org/10.1016/j.cub.2006.03.022

  133. Troje, N. F., Westhoff, C., & Lavrov, M. (2005). Person identification from biological motion: Effects of structural and kinematic cues. Perception & Psychophysics, 67, 667–675. https://doi.org/10.3758/BF03193523

  134. Ulloa, E. R., & Pineda, J. A. (2007). Recognition of point-light biological motion: Mu rhythms and mirror neuron activity. Behavioural Brain Research, 183, 188–194.

  135. Vaina, L. M., Solomon, J., Chowdhury, S., Sinha, P., & Belliveau, J. W. (2001). Functional neuroanatomy of biological motion perception in humans. Proceedings of the National Academy of Sciences, 98, 11656–11661.

  136. van Boxtel, J. J. A., & Lu, H. (2013). A biological motion toolbox for reading, displaying, and manipulating motion capture data in research settings. Journal of Vision, 13(12), 7. https://doi.org/10.1167/13.12.7

  137. van Kemenade, B. M., Muggleton, N., Walsh, V., & Saygin, A. P. (2012). Effects of TMS over premotor and superior temporal cortices on biological motion perception. Journal of Cognitive Neuroscience, 24, 896–904. https://doi.org/10.1162/jocn_a_00194

  138. Vanrie, J., Dekeyser, M., & Verfaillie, K. (2004). Bistability and biasing effects in the perception of ambiguous point-light walkers. Perception, 33, 547–560. https://doi.org/10.1068/p5004

  139. Verfaillie, K. (2000). Perceiving human locomotion: Priming effects in direction discrimination. Brain and Cognition, 44, 192–213.

  140. Vogt, S., & Thomaschke, R. (2007). From visuo-motor interactions to imitation learning: Behavioural and brain imaging studies. Journal of Sports Science, 25, 497–517. https://doi.org/10.1080/02640410600946779

  141. Weeks, D. L., & Anderson, L. P. (2000). The interaction of observational learning with overt practice: effects on motor skill learning. Acta Psychologica, 104, 259–271.

  142. Weinhandl, J. T., & O’Connor, K. M. (2010). Assessment of a greater trochanter-based method of locating the hip joint center. Journal of Biomechanics, 43, 2633–2636. https://doi.org/10.1016/j.jbiomech.2010.05.023

  143. Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between language, gesture, and action: A review. Brain and Language, 101, 278–289.

  144. Wolpert, D. M., Diedrichsen, J., & Flanagan, J. R. (2011). Principles of sensorimotor learning. Nature Reviews Neuroscience, 12, 739–751. https://doi.org/10.1038/nrn3112

  145. Yoon, J. M., & Johnson, S. C. (2009). Biological motion displays elicit social behavior in 12-month-olds. Child Development, 80, 1069–1075. https://doi.org/10.1111/j.1467-8624.2009.01317.x

Author information

Corresponding author

Correspondence to Christel Bidet-Ildei.

Appendix A

Table 2. Experimental studies using PLD transformations integrated into PLAViMoP. Only studies performed on humans are presented.

Appendix B: Control loop that maintains the point-lights inside the initial box after the random transformation of PLDs

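A minimal Matlab sketch of one way such a control loop can work, assuming a sequence is stored as an nFrames x nMarkers x 3 array and that the random transformation adds a random offset to each marker trajectory; the candidate offset is simply re-drawn until the displaced trajectory stays inside the bounding box of the original sequence (function and variable names are illustrative, not PLAViMoP's own):

    function pts = keepInsideBox(pts, maxShift)
    % Re-draw each marker's random offset until the shifted trajectory
    % remains inside the bounding box ("initial box") of the original PLD.
    % pts: nFrames x nMarkers x 3 marker positions; maxShift: offset bound.
    boxLo = squeeze(min(min(pts, [], 1), [], 2))';    % 1 x 3 lower corner
    boxHi = squeeze(max(max(pts, [], 1), [], 2))';    % 1 x 3 upper corner
    for m = 1:size(pts, 2)
        for attempt = 1:1000                          % cap the retries
            shift = (2 * rand(1, 3) - 1) * maxShift;  % candidate offset
            cand  = pts(:, m, :) + reshape(shift, 1, 1, 3);
            lo = squeeze(min(cand, [], 1))';          % trajectory extrema
            hi = squeeze(max(cand, [], 1))';
            if all(lo >= boxLo) && all(hi <= boxHi)   % still inside the box?
                pts(:, m, :) = cand;
                break
            end
        end
    end
    end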

Appendix C: Algorithm used to create linear masking dots

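A minimal Matlab sketch of linearly moving masking dots, assuming each dot starts at a random position inside the display box and travels along a random straight line at constant speed, being wrapped back into the box when it exits (all names and conventions are illustrative):

    function mask = linearMaskDots(nDots, nFrames, boxLo, boxHi, speed, dt)
    % Masking dots travelling along random straight lines at constant speed.
    % boxLo, boxHi: 1 x 3 corners of the display box; dt: frame period (s).
    span = boxHi - boxLo;                            % box extent per axis
    pos0 = boxLo + rand(nDots, 3) .* span;           % random start positions
    dir  = randn(nDots, 3);
    dir  = dir ./ sqrt(sum(dir.^2, 2));              % random unit directions
    mask = zeros(nFrames, nDots, 3);
    for f = 1:nFrames
        p = pos0 + (f - 1) * dt * speed .* dir;      % linear motion
        p = boxLo + mod(p - boxLo, span);            % wrap back into the box
        mask(f, :, :) = reshape(p, 1, nDots, 3);
    end
    end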

Appendix D: Algorithm used to create random masking dots

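By contrast, random masking dots can simply be re-drawn at an independent, uniformly distributed position on every frame; a minimal sketch under the same array conventions (whether dots are repositioned each frame or follow another randomization scheme is an assumption here):

    function mask = randomMaskDots(nDots, nFrames, boxLo, boxHi)
    % Masking dots re-drawn at an independent uniform random position
    % inside the display box on every frame.
    span = boxHi - boxLo;                            % 1 x 3 box extent
    mask = rand(nFrames, nDots, 3) .* reshape(span, 1, 1, 3) ...
           + reshape(boxLo, 1, 1, 3);                % scale, then translate
    end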

Appendix E: Algorithm used to invert the norm of the velocity

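One way to invert the norm of the velocity while leaving the spatial path untouched is to reverse the per-frame arc-length increments and re-sample the original path accordingly; a minimal Matlab sketch for a single marker, assuming a constant frame rate (names illustrative):

    function out = inverseVelocityNorm(pts)
    % Reverse a marker's speed profile along its unchanged spatial path:
    % path segments originally travelled fast are now travelled slowly,
    % and vice versa. pts: nFrames x 3 trajectory sampled at a fixed rate.
    ds   = sqrt(sum(diff(pts, 1, 1).^2, 2));  % per-frame path increments
    s    = [0; cumsum(ds)];                   % original arc length per frame
    sNew = [0; cumsum(flipud(ds))];           % reversed speed profile
    [sU, iU] = unique(s);                     % interp1 needs distinct samples
    out = interp1(sU, pts(iU, :), min(sNew, s(end)), 'linear');
    end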

Cite this article

Decatoire, A., Beauprez, S., Pylouster, J., et al. (2019). PLAViMoP: How to standardize and simplify the use of point-light displays. Behavior Research Methods, 51, 2573–2596. https://doi.org/10.3758/s13428-018-1112-x

Keywords

  • Point-light displays
  • Action observation
  • Software
  • Kinematic transformations
  • Spatial transformations
  • Masking dots