Controlled Gaussian process dynamical models with application to robotic cloth manipulation

Over recent years, significant advances have been made in robotic manipulation, but the handling of non-rigid objects, such as cloth garments, remains an open problem. Physical interaction with non-rigid objects is uncertain and complex to model. Thus, extracting useful information from sample data can considerably improve modeling performance. However, the training of such models is a challenging task due to the high dimensionality of the state representation. In this paper, we propose Controlled Gaussian Process Dynamical Models (CGPDMs) for learning high-dimensional, nonlinear dynamics by embedding them in a low-dimensional manifold. A CGPDM is constituted by a low-dimensional latent space, with an associated dynamics on which external control variables can act, and a mapping to the observation space. The parameters of both maps are marginalized out by considering Gaussian Process priors. Hence, a CGPDM projects a high-dimensional state space into a lower-dimensional latent space, in which it is feasible to learn the system dynamics from training data. The modeling capacity of the CGPDM has been tested in both a simulated and a real scenario, where it proved capable of generalizing over a wide range of movements and of confidently predicting the cloth motions produced by previously unseen sequences of control actions.

Funding. This work was partially developed in the context of the project CLOTHILDE ("CLOTH manIpulation Learning from DEmonstrations"), which has received funding from the ERC under the European Union's Horizon 2020 research and innovation programme (Advanced Grant agreement No. 741930).
Competing Interests. The authors declare that they have no competing interests.
Author Contributions. Fabio Amadio, Juan Antonio Delgado-Guerrero and Adrià Colomé conceived the presented idea. Fabio Amadio developed the theory, implemented the code and carried out the numerical experiments. Fabio Amadio took the lead in writing the manuscript. Carme

Introduction
Robotic cloth manipulation has a wide range of applications, from the textile industry to assistive robotics [5,8,14,19,23,29]. However, the complexity of cloth behaviour results in a high uncertainty in the state transition given a certain action. This uncertainty is what makes manipulating cloth much more challenging than handling rigid objects. Intuitively, learning the cloth's dynamics is the way to reduce such uncertainty. In the literature, we can find several cloth models that simulate the internal cloth state [3,25,30]. They represent cloth as a mesh of material points, and simulate their behaviour taking physical constraints into account. However, fitting those models to real data can be a complex task. Moreover, such models need not only to behave similarly enough to the cloth garment, but also to have a tractable dimensionality, for computational reasons. As an example, an 8 × 8 mesh representing a square towel results in a 192-dimensional manifold. Such dimensionality is unmanageable, not only in terms of computational costs, but also for building a tractable state-action space policy. Such is the case of [4], where simulated results are obtained after hours of computation.
Hence, Dimensionality Reduction (DR) methods can be very beneficial. In [11], linear DR techniques were used for learning cloth manipulation by biasing the latent space projection with each execution's performance. Nonlinear methods, such as Gaussian Process Latent Variable Models (GPLVMs) [20], have also been applied for this purpose. In [18], a GPLVM was employed to project task-specific motor skills of the robot onto a much smaller state representation, whereas in [13] a GPLVM was also used to represent a robot manipulation policy in a latent space, taking contextual features into account. However, these approaches focus the dimensionality reduction on the robot action characterization, rather than on the manipulated object's dynamics. Instead, in [17] a GPLVM learns a latent representation of the cloth state from point clouds. However, this approach did not consider the dynamics of the cloth handling task, limiting its application to quasi-static manipulations.
In this paper, we assume to have recorded data from several cloth motions, as a time-varying mesh of points. To fit such data into a tractable dynamical model, we consider Gaussian Process Dynamical Models (GPDMs), first introduced in [32], which are an extension of the GPLVM structure explicitly oriented to the analysis of high-dimensional time series. GPDMs have been applied in several different fields, from human motion tracking [31,33] to dynamic texture modeling [35]. In the context of cloth manipulation, GPDMs were adopted in [16] to learn a latent model of the dynamics of a cloth handling task. However, this framework, as it stands, lacks a fundamental component needed to correctly describe the dynamics of a controlled system, namely the control actions, which limits its generalization capacity.
Therefore, we propose here an extension of the GPDM structure that takes into account the influence of external control actions on the modeled dynamics. We call it the Controlled Gaussian Process Dynamical Model (CGPDM). In this new version, control actions directly affect the dynamics in the latent space. Thus, a CGPDM, trained on a sufficiently diverse set of interactions, is able to predict the effects of control actions never experienced before inside a space of reduced dimension, and then reconstruct high-dimensional motions by projecting the latent state trajectories into the observation space. The CGPDM has proved capable of fitting different types of cloth movements, in both a simulated and a real cloth manipulation scenario, and of predicting the results of control actions never seen during training (an example is reported in Fig. 1). Finally, we compared two possible CGPDM parameterizations. The first is a straightforward extension of the standard GPDM, whereas in the second we propose to employ squared exponential (SE) kernels with automatic relevance determination (ARD) [24] and inhomogeneous linear kernels, together with tunable dynamical map scaling factors, obtaining better accuracy and generalization, especially in the low-data regime.
To summarize, the main contributions of this article are:
• The proposal of the CGPDM structure, an extension of the GPDM capable of taking into account the presence of exogenous inputs.
• The definition of a richer parameterization able to achieve better accuracy and generalization w.r.t. the standard structure previously employed in the GPDM context.
• The successful application of the proposed CGPDM to (both simulated and real) dynamic robotic cloth manipulation problems.
The remainder of the paper is structured as follows. Sec. 2 provides the details of the proposed CGPDM approach. Results obtained by the CGPDM in cloth dynamics modeling are described in Sec. 3, both in simulation and in a real case scenario. Finally, the obtained results are discussed in Sec. 4 and conclusions are drawn in Sec. 5.

Methods
This section thoroughly describes the proposed method. We start by providing some background notions about the models we build upon: GPs, GPLVMs, and GPDMs (Subsec. 2.1). Then, we present the CGPDM (Subsec. 2.2), detailing the structure of its latent and dynamics maps. In particular, we present two alternative CGPDM structures: naive and advanced. The former is a straightforward inclusion of exogenous inputs into the standard GPDM, while the latter is the proposed CGPDM characterized by a richer parameterization. Finally, we conclude by describing the model training and prediction procedures (Subsec. 2.3).

Background: From GP to GPDM
GPs [27] are the infinite-dimensional generalization of multivariate Gaussian distributions. They are defined as stochastic processes such that, for any finite set of input locations x_1, ..., x_n, the random variables f(x_1), ..., f(x_n) have a joint Gaussian distribution.
A GP is defined by its mean function m(x) and kernel k(x, x'), which must be a symmetric and positive semi-definite function. Usually GPs are denoted as f(x) ∼ GP(m(x), k(x, x')). GPs can be used for regression models of the form y = f(x) + ε, with ε an i.i.d. Gaussian noise, as they provide closed-form formulae to predict a new target y*, given a new input x*. GP regression has been widely applied as a data-driven tool for dynamical system identification [15], usually describing each state component by its own GP. Nevertheless, this approach struggles to scale to high-dimensional systems. Thus, DR strategies must be considered.
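The closed-form GP regression formulae mentioned above can be sketched as follows (an illustrative implementation with an isotropic SE kernel, not the paper's code; function names are ours):

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0):
    """Isotropic squared exponential kernel between two input sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, X_star, kernel, noise_var):
    """Closed-form GP posterior for the model y = f(x) + eps."""
    K = kernel(X, X) + noise_var * np.eye(len(X))        # training covariance
    K_star = kernel(X_star, X)                           # test-train covariance
    alpha = np.linalg.solve(K, y)
    mean = K_star @ alpha                                # posterior mean
    cov = kernel(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov                                     # posterior mean and covariance
```

With near-zero noise, predicting at the training inputs returns the training targets, which is a quick sanity check on the formulae.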
GPLVMs [20,22] emerged as feature extraction methods that can be used as multiple-output GP regression models. Under a DR perspective, these models associate and learn low-dimensional representations of higher-dimensional observed data, assuming that the observed variables are determined by the latent ones. As a result of an optimization, GPLVMs provide a mapping from the latent space to the observation space, together with a set of latent variables representing the observed values. However, GPLVMs are not explicitly designed to deal with time series, where a dynamics relates the values observed at consecutive time steps.
Thus, [32] first introduced Gaussian Process Dynamical Models (GPDMs), an extension of the GPLVM structure explicitly oriented to the analysis of high-dimensional time series. A GPDM essentially entails two stages: (i) a latent mapping that projects high-dimensional observations to a low-dimensional latent space; (ii) a discrete-time Markovian dynamics that captures the evolution of the time series inside the reduced latent space. GPs are used to model both maps.

Controlled GPDM
Let us consider a system governed by an unknown dynamics. At each time step t, u_t ∈ R^E represents the applied control action and y_t ∈ R^D the observation. For high-dimensional observation spaces, it can be infeasible to directly model the evolution of a sequence of observations in response to a series of inputs. For instance, in the case of a robot moving a piece of cloth, we can consider as control actions u_t the instantaneous movements of the end-effector, while the observations y_t could be the coordinates of a mesh of material points representing the cloth configuration. In this context, it is convenient to capture the dynamics of the system in a low-dimensional latent space R^d, with d ≪ D. Let x_t ∈ R^d be the latent state associated with y_t. We propose a variation of the GPDM that takes into account the influence of control actions, while maintaining the dimensionality reduction properties of the original model. We call it the Controlled Gaussian Process Dynamical Model (CGPDM).
A CGPDM consists of a latent map (1) projecting observations y_t into latent states x_t, and a dynamics map (2) that describes the evolution of x_t, subject to u_t. We denote the two maps as

$$y_t = g(x_t) + n_{y,t}, \quad (1)$$
$$x_{t+1} - x_t = h(x_t, u_t) + n_{x,t}, \quad (2)$$

where n_{y,t} and n_{x,t} are two zero-mean isotropic Gaussian noise processes, while g and h are two unknown functions. Differently from the original GPDM, here the latent transition function (2) is also influenced by the exogenous control inputs u_t. Note that we consider x_{t+1} - x_t to be the output of the CGPDM dynamic map; [33] suggested that this choice can improve the smoothness of latent trajectories. In the following, we report how we modeled (1) and (2) by means of GPs, while Fig. 2 illustrates the relation assumed by the CGPDM between the latent, input, and output spaces along N time steps.
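The generative structure implied by maps (1)-(2) can be sketched as a toy rollout, with placeholder functions h and g standing in for the GP maps (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(x0, controls, h, g, noise_x=0.0, noise_y=0.0):
    """Generative CGPDM structure: latent dynamics driven by controls,
    observations produced only from the latent state (cf. Fig. 2)."""
    x = x0
    latents, observations = [x0], [g(x0)]
    for u in controls:
        # dynamics map (2): the model describes the state *increment*
        x = x + h(x, u) + noise_x * rng.standard_normal(x.shape)
        # latent map (1): the observation depends only on x, not on u
        latents.append(x)
        observations.append(g(x) + noise_y * rng.standard_normal(g(x).shape))
    return np.array(latents), np.array(observations)
```

Note how the control u enters only the latent transition, matching the structure of (1)-(2).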

Latent variable mapping
Each component of the observation vector y_t = [y_{t,1}, ..., y_{t,D}]^T can be modeled a priori as a zero-mean GP that takes x_t as input, for t = 1, ..., N. Let Y = [y_1, ..., y_N]^T ∈ R^{N×D} be the matrix that collects the set of N observations, and X = [x_1, ..., x_N]^T ∈ R^{N×d} be the matrix of associated latent states. We denote with Y_{:,j} the vector containing the j-th components of all the N observations. Then, if we assume that the D observation components are independent variables, the probability of the whole set of observations can be expressed by the product of D GPs. In addition, if we choose the same kernel function k_y(·,·) for each GP, differentiated only through a variable scaling factor w_{y,j}^{-2}, with j = 1, ..., D, the joint likelihood over the whole set of observations is given by

$$p(Y \mid X) = \frac{|W_y|^N}{\sqrt{(2\pi)^{ND} \, |K_y(X)|^D}} \exp\left(-\frac{1}{2} \operatorname{tr}\left(K_y(X)^{-1} Y W_y^2 Y^T\right)\right), \quad (3)$$

where W_y = diag(w_{y,1}, ..., w_{y,D}) and K_y(X) is the covariance matrix defined element-wise by k_y(·,·). The independence assumption may be relaxed by applying coregionalization models [1], at the cost of greater computational demands. In previous GPDM works [31-33], the GPs of the latent map were equipped with an isotropic SE kernel, with parameters β_1 and β_2:

$$k_y(x_r, x_s) = \exp\left(-\frac{\beta_1}{2} \|x_r - x_s\|^2\right) + \beta_2^{-1} \, \delta(x_r, x_s), \quad (4)$$

where δ(x_r, x_s) indicates the Kronecker delta. Instead, here we adopt a richer ARD structure for the SE kernel, characterized by a different length-scale for each latent state component:

$$k_y(x_r, x_s) = \exp\left(-\frac{1}{2} (x_r - x_s)^T \Lambda_y^{-1} (x_r - x_s)\right) + \sigma_y^2 \, \delta(x_r, x_s), \quad (5)$$

where Λ_y = diag(λ_{y,1}^2, ..., λ_{y,d}^2) is a positive definite diagonal matrix, which weights the norm used in the SE function, and σ_y^2 is the variance of the isotropic noise in (1). The trainable hyper-parameters of the latent map model are then θ_y = [w_{y,1}, ..., w_{y,D}, λ_{y,1}, ..., λ_{y,d}, σ_y]^T.
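The ARD SE part of kernel (5) can be sketched as follows (illustrative code; the isotropic noise term of (5) would be added on the diagonal of the training covariance separately):

```python
import numpy as np

def se_ard_kernel(A, B, lengthscales):
    """ARD squared exponential kernel: one length-scale per latent
    dimension, evaluated between two sets of latent inputs."""
    D = (A[:, None, :] - B[None, :, :]) / lengthscales  # scaled differences
    return np.exp(-0.5 * (D ** 2).sum(-1))
```

For a training covariance one would then form `K = se_ard_kernel(X, X, ls) + sigma2 * np.eye(N)`, which realizes the Kronecker-delta noise term.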

Dynamics mapping
Similarly to Sec. 2.2.1, we can model a priori each component of the latent state difference Δx_t = x_{t+1} - x_t = [Δx_{t,1}, ..., Δx_{t,d}]^T as a zero-mean GP that takes as input the pair (x_t, u_t), for t = 1, ..., N - 1.
Let X = [x_1, ..., x_N]^T ∈ R^{N×d} be the matrix collecting the set of N latent states; we denote by X_{r:s,i} the vector of the i-th components from time step r to time step s, with r, s = 1, ..., N. We indicate the vector of differences between consecutive latent states along their i-th component with Δ_{:,i} = X_{2:N,i} - X_{1:N-1,i}, so that Δ = [Δ_{:,1}, ..., Δ_{:,d}] ∈ R^{(N-1)×d} is the matrix that collects the differences along all the components.
Finally, we compactly represent the GP input of the dynamic model as x̃_t = [x_t^T, u_t^T]^T ∈ R^{d+E}, and refer to the matrix collecting x̃_t for t = 1, ..., N - 1 as X̃ ∈ R^{(N-1)×(d+E)}. With assumptions similar to those made for the latent map, and denoting the common kernel function for all the GPs with k_x(·,·) and the different scaling factors with w_{x,i}, for i = 1, ..., d, the joint likelihood is given by

$$p(\Delta \mid \tilde{X}) = \frac{|W_x|^{N-1}}{\sqrt{(2\pi)^{(N-1)d} \, |K_x(\tilde{X})|^d}} \exp\left(-\frac{1}{2} \operatorname{tr}\left(K_x(\tilde{X})^{-1} \Delta W_x^2 \Delta^T\right)\right), \quad (6)$$

where W_x = diag(w_{x,1}, ..., w_{x,d}) and K_x(X̃) is the covariance matrix defined by k_x(·,·). In the standard GPDM [32], the dynamic mapping GPs were proposed with constant scaling factors w_{x,i} = 1 for i = 1, ..., d, and equipped with a naive kernel resulting from the sum of an isotropic SE and a homogeneous linear function, with only four trainable parameters:

$$k_x(\tilde{x}_r, \tilde{x}_s) = \alpha_1 \exp\left(-\frac{\alpha_2}{2} \|\tilde{x}_r - \tilde{x}_s\|^2\right) + \alpha_3 \, \tilde{x}_r^T \tilde{x}_s + \alpha_4^{-1} \, \delta(\tilde{x}_r, \tilde{x}_s). \quad (7)$$

Analogously to the latent mapping case, we decided to adopt the following kernel function:

$$k_x(\tilde{x}_r, \tilde{x}_s) = \exp\left(-\frac{1}{2} (\tilde{x}_r - \tilde{x}_s)^T \Lambda_x^{-1} (\tilde{x}_r - \tilde{x}_s)\right) + [\tilde{x}_r^T, 1] \, \Phi_x \, [\tilde{x}_s^T, 1]^T + \sigma_x^2 \, \delta(\tilde{x}_r, \tilde{x}_s), \quad (8)$$

where Λ_x is a positive definite diagonal matrix, which weights the norm used in the SE component of the kernel; Φ_x is also a positive definite diagonal matrix, describing the linear component; and σ_x^2 is the variance of the isotropic noise in (2). In comparison to (7), the adopted kernel weights differently the various components of the input in both the SE and the linear part, and the linear part is made inhomogeneous by extending the GP input as [x̃_s^T, 1]^T. The trainable hyper-parameters of the dynamic map model then collect the scaling factors, the diagonal entries of Λ_x and Φ_x, and σ_x in θ_x. In the following, we will refer with naive CGPDM to the model that straightforwardly extends the standard GPDM structure from [32], using its same kernels, (4) and (7), and constant scaling factors; we denote with advanced CGPDM the proposed model characterized by kernels (5) and (8) and trainable scaling factors in the dynamical map. Although ARD kernels are commonly adopted in GP regression [27], they had not been tested before in GPDMs. Trainable scaling factors constitute a novelty for this kind of model, too.
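The advanced dynamics kernel (8) combines an ARD-SE term with an inhomogeneous linear term over the extended input; a hedged sketch (illustrative names `ls` and `phi` for the diagonal weights, noise term omitted as before):

```python
import numpy as np

def dyn_kernel(A, B, ls, phi):
    """Advanced dynamics kernel sketch: ARD-SE part plus an inhomogeneous
    linear part with diagonal weights phi over the extended input [x; u; 1]."""
    D = (A[:, None, :] - B[None, :, :]) / ls
    se = np.exp(-0.5 * (D ** 2).sum(-1))
    Ae = np.hstack([A, np.ones((len(A), 1))])  # extend inputs with a constant 1
    Be = np.hstack([B, np.ones((len(B), 1))])
    lin = (Ae * phi) @ Be.T                    # [x_r^T, 1] diag(phi) [x_s^T, 1]^T
    return se + lin
```

Here `phi` has one positive entry per extended input dimension (d + E + 1 in the paper's notation), which is what makes the linear part inhomogeneous and differently weighted per component.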

Working with multiple sequences
It is possible to easily extend the CGPDM formulation to P multiple sequences of observations, Y^{(1)}, ..., Y^{(P)}, and control inputs, U^{(1)}, ..., U^{(P)}. Let the length of each sequence p, for p = 1, ..., P, be equal to N_p, with Σ_{p=1}^{P} N_p = N. Define the latent states associated with each sequence as X^{(1)}, ..., X^{(P)}. Following the notation of Sec. 2.2.2, define X̃^{(1)}, ..., X̃^{(P)} as the aggregated matrices of latent states and control inputs, and Δ^{(1)}, ..., Δ^{(P)} as the difference matrices. Hence, the model joint likelihoods can be calculated by using the concatenated matrices

$$Y = \begin{bmatrix} Y^{(1)T}, \ldots, Y^{(P)T} \end{bmatrix}^T, \quad \tilde{X} = \begin{bmatrix} \tilde{X}^{(1)T}, \ldots, \tilde{X}^{(P)T} \end{bmatrix}^T, \quad \Delta = \begin{bmatrix} \Delta^{(1)T}, \ldots, \Delta^{(P)T} \end{bmatrix}^T$$

inside (3) and (6). Note that, when dealing with multiple sequences, the number of data points in the dynamic mapping becomes N - P, and expression (6) must change accordingly.

CGPDM Training and Prediction
Training a CGPDM entails using numerical optimization techniques to estimate the unknowns in the model, i.e., the latent states X and the hyper-parameters θ_x, θ_y. Latent coordinates X are initialized by means of PCA [6], selecting the first d principal components of Y. A natural approach for training CGPDMs is to maximize the joint log-likelihood ln p(Y|X) + ln p(Δ|X̃) w.r.t. {X, θ_x, θ_y}. To do so, in this work, we adopted the L-BFGS algorithm [9].
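The PCA initialization of the latent coordinates can be sketched as follows (an illustrative implementation via SVD):

```python
import numpy as np

def pca_init(Y, d):
    """Initialize latent states as the first d principal components of Y."""
    Yc = Y - Y.mean(axis=0)                    # center observations
    U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
    return Yc @ Vt[:d].T                       # N x d latent coordinates
```

The resulting columns are mutually orthogonal projections onto the leading principal directions, giving the optimizer a sensible starting point for X.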
The overall loss to be optimized can be written as L = L_y + L_x, with L_y and L_x defined as the negative log-likelihoods of the two maps:

$$L_y = -\ln p(Y \mid X), \quad (9) \qquad L_x = -\ln p(\Delta \mid \tilde{X}). \quad (10)$$

In case the CGPDM is trained on multiple sequences of inputs and observations, the aggregated matrices defined in Sec. 2.2.3 must be employed when computing the loss functions (9)-(10). It is also necessary to use the factor N - P instead of N - 1 inside the L_x expression. The overall training procedure is represented schematically in Fig. 3.
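A sketch of the loss term L_y in (9), i.e. the negative log-likelihood (3) of the latent map up to additive constants (illustrative numpy code with an ARD-SE kernel; the paper's implementation uses PyTorch and L-BFGS):

```python
import numpy as np

def latent_map_loss(X, Y, w, lengthscales, noise_var):
    """Negative log-likelihood of the latent map, constants dropped:
    L_y = (D/2) ln|K_y| + (1/2) tr(K_y^{-1} Y W^2 Y^T) - N sum_j ln w_j."""
    N, D = Y.shape
    diff = (X[:, None, :] - X[None, :, :]) / lengthscales
    K = np.exp(-0.5 * (diff ** 2).sum(-1)) + noise_var * np.eye(N)
    _, logdet = np.linalg.slogdet(K)           # stable log-determinant
    Kinv_Y = np.linalg.solve(K, Y)
    quad = np.sum((w ** 2) * np.sum(Y * Kinv_Y, axis=0))
    return 0.5 * D * logdet + 0.5 * quad - N * np.sum(np.log(w))
```

L_x has the same structure over (Δ, X̃) with the dynamics kernel, and both terms can be minimized jointly w.r.t. X and the hyper-parameters by any gradient-based optimizer.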
A trained CGPDM can be used to fulfill two different purposes: (i) map a given new latent state x*_t to the corresponding y*_t in observation space; (ii) predict the evolution of the latent state at the next time step, x*_{t+1}, given x*_t and a certain control u*_t. The two processes, together, can predict the observations produced by a given series of control actions.

Latent prediction
Given a new latent state x*_t, the corresponding observation y*_t is distributed as

$$y^*_t \sim \mathcal{N}\!\left(\mu_y(x^*_t), \; \sigma_y^2(x^*_t) \, W_y^{-2}\right),$$

where

$$\mu_y(x^*_t) = Y^T K_y(X)^{-1} k_y(x^*_t), \qquad \sigma_y^2(x^*_t) = k_y(x^*_t, x^*_t) - k_y(x^*_t)^T K_y(X)^{-1} k_y(x^*_t),$$

with k_y(x*_t) = [k_y(x*_t, x_1), ..., k_y(x*_t, x_N)]^T.
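These posterior formulae can be sketched as follows (illustrative code; `kernel` is any function returning the covariance matrix between two input sets):

```python
import numpy as np

def latent_map_predict(x_star, X, Y, kernel, w):
    """GP posterior of the latent map at x_star: mean in R^D and a shared
    variance that scales per output component as w_j^{-2}."""
    K = kernel(X, X)
    k_star = kernel(x_star[None, :], X).ravel()        # k_y(x*) vector
    alpha = np.linalg.solve(K, Y)                      # K^{-1} Y, shape (N, D)
    mean = k_star @ alpha                              # posterior mean mu_y(x*)
    var = kernel(x_star[None, :], x_star[None, :])[0, 0] \
        - k_star @ np.linalg.solve(K, k_star)          # sigma_y^2(x*)
    return mean, var / w**2                            # per-component variances
```

Predicting at a training latent state with near-zero residual uncertainty recovers the corresponding training observation, as expected from the formulae.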

Trajectory prediction
Starting from an initial latent state x*_1, one can predict the system evolution over a desired horizon of length N_d, when subject to a given sequence of control actions u*_1, ..., u*_{N_d - 1}. At each step, the dynamics GP provides the distribution of the latent increment given the extended input x̃*_t = [x*_t^T, u*_t^T]^T; propagating, e.g., its mean yields x*_{t+1}, and the resulting latent trajectory is finally mapped to the observation space through the latent map.
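The resulting mean-propagation rollout can be sketched as follows (illustrative code, with `dyn_mean` and `lat_mean` standing in for the posterior means of the two GP maps):

```python
import numpy as np

def predict_trajectory(x1, controls, dyn_mean, lat_mean):
    """Roll out the latent dynamics from x1 under a control sequence,
    then map each latent state to observation space (mean propagation)."""
    xs = [x1]
    for u in controls:                              # u*_1, ..., u*_{Nd-1}
        x_tilde = np.concatenate([xs[-1], u])       # extended GP input
        xs.append(xs[-1] + dyn_mean(x_tilde))       # x_{t+1} = x_t + mean increment
    xs = np.array(xs)
    ys = np.array([lat_mean(x) for x in xs])        # project back to observations
    return xs, ys
```

A fuller treatment would also propagate the predictive variances, but mean propagation already yields the observation trajectories used for evaluation.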

Results
We employed the proposed CGPDM to model the high-dimensional dynamics that characterizes the motion of a piece of cloth held by a robotic system. This section reports the results obtained in two sets of experiments: a simulated session (Subsec. 3.1) and one conducted on a real setup (Subsec. 3.2). We exploited simulation to assess the performance of the CGPDM over a wide set of scenarios (different amounts of training data, motion ranges, and model structures), while the real-world experiment served as validation on non-synthetic data. The objective of the experiments was to learn the high-dimensional cloth dynamics using the CGPDM, in order to make predictions about cloth movements in response to sequences of actions not seen during training. In particular, we aimed to evaluate how model prediction accuracy is affected by:
• the number of data used for training,
• the oscillation range of the cloth movements,
• the use of advanced or naive CGPDM structures (as defined in Sec. 2.2).
Such a high-dimensional task would be unfeasible to model by standard GP regression without DR. CGPDMs were implemented in Python 1, employing PyTorch [26].

Simulated Cloth Experiment
In the simulated scenario, we considered a bimanual robot moving a square piece of cloth by holding its two upper corners, as shown in Fig. 4. The cloth was modeled as an 8×8 mesh of material points. We made the assumption that the two upper corner points are attached to the robot's end-effectors, while the other points move freely, following the dynamical model proposed in [12]. In this context, the observation vector is given by the Cartesian coordinates of all the points in the mesh (measured in meters); hence y_t ∈ R^D with D = 192. We assumed exact control of the two robot arms in the operational space, keeping the same orientation and relative distance between the two end-effectors and producing oscillations in the Y-Z plane. Thus, we considered as control actions the differences between consecutive commanded end-effector positions in the Y and Z directions, resulting in u_t ∈ R^E with E = 2.

Data collection
Training and test data were obtained by recording mesh trajectories associated with several types of cloth oscillation, obtained by applying different sequences of control actions. All the considered trajectories start from the same cloth configuration and last 5 seconds. Observations were recorded at 20 Hz; hence, each sequence consists of N = 100 steps.
Robot end-effectors move in a coordinated fashion, drawing oscillations on the Y-Z plane. Let u_t = [Δee^Y_t, Δee^Z_t]^T, where Δee^Y_t and Δee^Z_t indicate the differences between consecutive end-effector commanded positions along the Y and Z axes. Specifically, their values were given by the two periodic expressions in (15). Such controls make the end-effectors oscillate on the Y-Z plane of the operational space. The maximum displacement is regulated by A, which we set to 0.01 meters. Parameter γ can be interpreted as the inclination of u_t w.r.t. the horizontal, and it loosely defines a direction of the oscillation. f_Y and f_Z define the frequencies of the oscillations along the Y and Z axes: if they are similar, the end-effectors move mostly along the direction defined by γ; if not, they sweep a broader space.
In order to obtain a heterogeneous set of trajectories for the composition of training and test sets, we collected several movements obtained by randomly choosing the control parameters γ, f_Y and f_Z. Angles γ were uniformly sampled inside a variable range [-R/2, R/2] (deg); in the following, we indicate this range by the amplitude of its angular area, R (deg). Frequencies f_Y and f_Z were instead uniformly sampled inside the fixed interval [0.3, 0.6] (Hz). We considered four movement ranges of increasing width, namely R ∈ {30°, 60°, 90°, 120°} (Fig. 5), and collected a specific data-set D_R associated with each range. Every set contains 50 cloth trajectories obtained by applying control actions of the form (15) with 50 different random choices of the parameters γ, f_Y and f_Z. From each D_R, 10 trajectories were extracted and used as test sets D^test_R for the corresponding movement range, while several training sets D^train_R were built by randomly picking from the remaining sequences.
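A hedged sketch of this control-sampling procedure (the exact periodic form of expression (15) is not reproduced in the text; the sinusoids below are an assumed instantiation, consistent with the roles of A, γ, f_Y and f_Z described above):

```python
import numpy as np

def sample_controls(R_deg, A=0.01, f_range=(0.3, 0.6), steps=100, dt=0.05, rng=None):
    """Sample one control sequence: direction gamma uniform in [-R/2, R/2],
    frequencies f_Y, f_Z uniform in f_range. The sinusoidal form below is an
    assumption; expression (15) in the paper may differ."""
    rng = rng or np.random.default_rng()
    gamma = np.deg2rad(rng.uniform(-R_deg / 2, R_deg / 2))
    fY, fZ = rng.uniform(*f_range, size=2)
    t = np.arange(steps) * dt
    dY = A * np.cos(gamma) * np.sin(2 * np.pi * fY * t)   # assumed form
    dZ = A * np.sin(gamma) * np.sin(2 * np.pi * fZ * t)   # assumed form
    return np.stack([dY, dZ], axis=1)                     # u_t in R^2
```

Sampling 50 such sequences per range R reproduces the structure of the data-sets D_R described above.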

Model training
In all the models, we adopted a latent space of dimension d = 3, resulting in a dimensionality reduction factor of D/d = 64. This d value was chosen empirically after preliminary tests, and it allows easy visualization of the latent variables' behaviour in a three-dimensional space, see for instance Fig. 1. Other choices are possible, but such a sensitivity analysis is beyond the scope of this experimental analysis.
The objective of the experiment was to evaluate CGPDM prediction accuracy at different movement ranges, and for different amounts of training data. Moreover, we wanted to observe whether the use of the proposed advanced CGPDM structure yields a substantial difference in terms of accuracy when compared to the naive model. Consequently, for each considered movement range R, we trained two different sets of CGPDMs, adopting the naive structure in one and the advanced one in the other. Each model in the two sets was trained employing an increasing number of sequences randomly picked from D^train_R. Specifically, we used 5 different random combinations of 5, 10, 15 and 20 sequences for each oscillation range (varying the random seed each time). In this way, we were able to reduce the dependency on the specific training trajectories considered, and to average prediction accuracy over different possible sets of training data.

Model prediction
We used each learned CGPDM to predict the cloth movements obtained when subject to the control actions observed in each test sequence inside D^test_R, with R ∈ {30°, 60°, 90°, 120°}. Let y_t denote a true observation and ŷ_t the corresponding predicted one. As an example, Fig. 6 shows a sequence of true and predicted cloth configurations for one of the considered test trajectories. Please refer to the video2 for a clearer visualization of the obtained results.
For every predicted trajectory, we measured the average distance between the real and the predicted mesh points. Fig. 7 represents the observed errors by means of boxplots, also indicating the statistical significance of the naive-advanced difference in each experiment configuration (T-tests performed using the open-source library Statannotations3). Moreover, Table 1 reports the average distances between true and predicted mesh points obtained on the test sets by the different CGPDM configurations in all the movement ranges. Results are expressed in terms of means and 95% confidence intervals obtained by averaging over the different training sets adopted (all the experiments were repeated 5 times, using a randomly composed D^train_R).

Real Cloth Experiment
In this second set of experiments, we tested the CGPDM on data collected in a real cloth manipulation scenario. For this purpose, we used a Barrett WAM Arm 4, whose end-effector consists of a coat rack that can firmly grip a piece of cloth from its corners. The overall setup is depicted in Fig. 8. We controlled the robot's end-effector in position, recording the resulting movement of the cloth through a motion capture system based on information extracted from an RGBD camera. We combined object detection, image and point cloud processing for segmenting cloth-like objects 5, following [7], [28] and [34].

Data Collection
As in the simulated scenario, we captured the cloth as an 8×8 mesh of points, whose spatial coordinates constitute the observation vector y t ∈ R D with D = 192.
Control actions were defined again following expression (15) and commanded to the robot at 100 Hz. Parameters f_Y and f_Z were uniformly sampled within [0.2, 0.5] (Hz), and A was set to 0.004 meters. In this experiment, we considered only the R = 30°, R = 60°, and R = 90° oscillation ranges (R = 120° was excluded because of robot workspace limitations).
The motion capture system could work only at rates lower than 100 Hz, with no guaranteed sampling-interval length. Thus, it was necessary to post-process the data to make them ready for modeling. Firstly, motion capture data were smoothed by a moving average filter. Then, we interpolated the positions of both the end-effector and the cloth mesh, to obtain two synchronized sequences of observations and control actions sampled at 20 Hz. For each of the three ranges, we collected 10 trajectories, each 3 seconds long.
4 Barrett WAM Arm: https://advanced.barrett.com/wam-arm-1
5 Code publicly available at https://github.com/MiguelARD/cloth point cloud segmentation
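The described post-processing, moving-average smoothing followed by resampling onto a uniform 20 Hz grid, can be sketched as follows (illustrative code; the window length and linear interpolation are assumptions, the paper does not specify them):

```python
import numpy as np

def smooth_and_resample(t_raw, z_raw, rate_hz=20.0, window=5):
    """Moving-average smoothing of irregularly sampled signals, followed by
    linear interpolation onto a uniform grid at rate_hz."""
    kernel = np.ones(window) / window
    z_smooth = np.column_stack([
        np.convolve(z_raw[:, j], kernel, mode='same') for j in range(z_raw.shape[1])
    ])
    t_uniform = np.arange(t_raw[0], t_raw[-1], 1.0 / rate_hz)
    z_uniform = np.column_stack([
        np.interp(t_uniform, t_raw, z_smooth[:, j]) for j in range(z_raw.shape[1])
    ])
    return t_uniform, z_uniform
```

Applying the same resampling grid to both end-effector and mesh data yields the synchronized observation and control sequences mentioned above.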

Model training & prediction
For every considered oscillation range, we trained two sets of CGPDMs, one using the naive and one the advanced model structure. Each set of trajectories is composed of 10 sequences; hence, we followed a cross-validation approach for training and testing the models. At every range, we trained the models using all the sequences but one, left out for testing, repeating the procedure ten times and varying the test sequence each time.
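The leave-one-out cross-validation scheme described above can be sketched as follows (illustrative helper):

```python
import numpy as np

def leave_one_out_splits(n_sequences=10):
    """Generate the ten train/test splits used for cross-validation:
    each sequence is left out for testing exactly once."""
    indices = np.arange(n_sequences)
    for test_idx in indices:
        train_idx = indices[indices != test_idx]
        yield train_idx, test_idx
```

Iterating over these splits, training on the nine retained sequences and evaluating on the held-out one, averages the prediction error over all ten test choices.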
The models were used to predict the cloth movements obtained in response to the control actions of each test trajectory, measuring the average distance between the real and the predicted mesh points. In Fig. 9, we provide a visual representation of the cloth movements, by showing the true and predicted trajectories of a subset of mesh points in one of the test cases. Please refer to the video 2 for a clearer visualization of the obtained results. Similarly to the simulated experiments, Fig. 10 represents the observed errors by means of boxplots, and the last row of Table 1 reports the mean distances between true and predicted mesh points obtained in all the considered movement ranges.

Discussion
The experimental results obtained in simulation confirm the capacity of the CGPDM to capture the cloth dynamics of oscillations along the Y and Z axes. As the oscillation range R widens, prediction errors tend to be more evident, but the CGPDMs are still able to capture the overall movement of the cloth.
Moreover, the proposed advanced CGPDM structure significantly improves the accuracy and consistency of the results in the majority of cases, when compared to the naive model. This effect is clearer in the low-data regime and when dealing with wide oscillation ranges.
Finally, the results obtained in the real-world experiments confirm the trends observed in the simulated scenario. The advanced CGPDM structure drastically outperforms the naive model, which seems unable to cope with the high noise that afflicts the real experimental setup.

Conclusion
We presented the CGPDM, a modeling framework for high-dimensional dynamics governed by control actions. Essentially, this model projects observations into a low-dimensional latent space, where dynamical relations are easier to infer. CGPDMs were applied to a robotic cloth manipulation task, where the observations are the coordinates of the cloth mesh. We tested CGPDMs in both simulated and real experiments. The observed results empirically demonstrate that the proposed advanced CGPDM structure can capture the complex, high-dimensional cloth dynamics from a small number of training trajectories, by leveraging the data efficiency that characterizes GP-based methods.
In future work, we aim to apply the CGPDM within Model-Based Reinforcement Learning algorithms (such as [2,10]) to automatically learn control policies for high-dimensional systems. Moreover, the CGPDM formulation could be extended through the introduction of back constraints [21], to preserve local distances and obtain an explicit formulation of the mapping from the observation space to the latent space. Finally, the integration of context variables within the CGPDM formulation could permit generalizing over different types of cloth fabric.

Fig. 1
Fig. 1 Latent trajectories predicted by a trained CGPDM in response to two different sequences of unseen actions. Each latent state is associated with a particular configuration of the cloth model (some of them are shown as examples)

Fig. 2
Fig. 2 Symbolic representation of a CGPDM rollout along N time steps. Note how the output y depends exclusively on the latent state x, while the control action u influences only the latent dynamics

Fig. 3
Fig. 3 Flowchart summarizing the CGPDM training process. Given a set of training data Y and U (1) and a desired latent dimension d (2), the associated latent states X are initialized via PCA (3) and then optimized together with the other CGPDM hyper-parameters (4). After the training, we obtain the probabilistic predictive model (5) and the set of optimized latent trajectories capturing the high-dimensional dynamics (6)

Fig. 4
Fig. 4 Simulated setup for cloth manipulation with a bimanual robot. The cloth is positioned in its starting configuration

Fig. 6
Fig. 6 True (top) and predicted (bottom) simulated cloth oscillation frames for one of the considered test trajectories

Fig. 8 Fig. 9 Fig. 10
Fig. 8 Real experimental setup with the Barrett WAM Arm holding a piece of cloth, whose motion can be tracked by an RGBD camera

Table 1
Mean prediction errors (with 95% C.I.) between true and predicted cloth trajectories obtained by the advanced and the naive CGPDM structures at different oscillation ranges, in both the simulated and the real-world experiment (the number of sequences used for training is indicated inside the square brackets)