PredPsych: A toolbox for predictive machine learning-based approach in experimental psychology research
Abstract
Recent years have seen an increased interest in machine learning-based predictive methods for analyzing quantitative behavioral data in experimental psychology. While these methods can achieve relatively greater sensitivity compared to conventional univariate techniques, they still lack an established and accessible implementation. The aim of the current work was to build an open-source R toolbox – “PredPsych” – that could make these methods readily available to all psychologists. PredPsych is a user-friendly R toolbox based on machine-learning predictive algorithms. In this paper, we present the framework of PredPsych via the analysis of a recently published multiple-subject motion capture dataset. In addition, we discuss examples of possible research questions that can be addressed with the machine-learning algorithms implemented in PredPsych and cannot be easily addressed with univariate statistical analysis. We anticipate that PredPsych will be of use to researchers with limited programming experience, not only in the field of psychology but also in that of clinical neuroscience, enabling computational assessment of putative bio-behavioral markers for both prognosis and diagnosis.
Keywords
Predictive approaches · Classification · Multivariate analysis · Clustering · Permutation testing

Introduction
Experimental psychology strives to explain human behavior. This implies being able to explain underlying causal mechanisms of behavior as well as to predict future behavior (Kaplan, 1973; Shmueli, 2010; Yarkoni & Westfall, 2016). In practice, however, traditional methods in experimental psychology have mainly focused on testing causal explanations. It is only in recent years that research in psychology has come to emphasize prediction (Forster, 2002; Shmueli & Koppius, 2011). Within this predictive turn, machine learning-based predictive methods have rapidly emerged as viable means to predict future observations as accurately as possible, i.e., to minimize prediction error (Breiman, 2001b; Song, Mitnitski, Cox, & Rockwood, 2004).
The multivariate nature and focus on prediction error (rather than “goodness of fit”) confer on these methods greater sensitivity and higher future predictive power compared to traditional methods. In experimental psychology, they are successfully used for predicting a variable of interest (e.g., experimental condition A vs. experimental condition B) from the behavioral patterns of an individual engaged in a task or activity by minimizing prediction error. Current applications range from recognition of facial action from facial micro-expressions to classification of intention from differences in movement kinematics (e.g., Ansuini et al., 2015; Cavallo, Koul, Ansuini, Capozzi, & Becchio, 2016; Haynes et al., 2007; Srinivasan, Golomb, & Martinez, 2016). For example, they have been used to decode the intention of grasping an object (to pour vs. to drink) from subtle differences in patterns of hand movements (Cavallo et al., 2016). What is more, machine learning-based predictive models can be employed not only for group prediction (patients vs. controls), but also for individual prediction. Consequently, these models lend themselves as a potential diagnostic tool in clinical settings (Anzulewicz, Sobota, & Delafield-Butt, 2016; Hahn, Nierenberg, & Whitfield-Gabrieli, 2017; Huys, Maia, & Frank, 2016).
However, while the assets of predictive approaches are becoming well known, machine learning-based predictive methods still lack an established and easy-to-use software framework. Many existing implementations provide no or limited guidelines, consisting of small code snippets or sets of packages. In addition, the use of existing packages often requires advanced programming expertise. To overcome these shortcomings, the main objective of the current paper was to build a user-friendly toolbox, “PredPsych”, endowed with multiple functionalities for multivariate analyses of quantitative behavioral data based on machine-learning models.
Data description

- Wrist Velocity, defined as the modulus of the velocity of the wrist marker (mm/sec);
- Wrist Height, defined as the z-component of the wrist marker (mm);
- Grip Aperture, defined as the distance between the marker placed on the thumb tip and that placed on the tip of the index finger (mm);
- x-, y-, and z-thumb, defined as the x-, y-, and z-coordinates for the thumb with respect to F-local (mm);
- x-, y-, and z-index, defined as the x-, y-, and z-coordinates for the index finger with respect to F-local (mm);
- x-, y-, and z-finger plane, defined as the x-, y-, and z-components of the thumb-index plane, i.e., the three-dimensional components of the vector that is orthogonal to the plane. This plane is defined as passing through thu0, ind3, and thu4, with components varying between +1 and −1.
All kinematic variables were expressed with respect to normalized movement duration (from 10 % to 100 %, at increments of 10 %; for detailed methods, please refer to Ansuini et al., 2015). The dataset in the toolbox consists of an 848 × 121 matrix, where variables are arranged in columns (the first column represents the size of the grasped object, 1 = “small” object, 2 = “large” object; the other columns represent the kinematic variables) and observations (n = 848) are arranged in rows.
Toolbox installation and setup
To install the toolbox, the user has first to install the programming language R (R Core Team, 2016; www.r-project.org). For easier use of R tools, we recommend using the interface RStudio (https://www.rstudio.com/). After successful installation of the R environment, the command install.packages("PredPsych", dependencies = TRUE) can be used to install the package (in case you are prompted to select a Comprehensive R Archive Network (CRAN) repository, choose the one located closest to you). All the packages required for the installation will be installed automatically. The package can then be loaded with the command library(PredPsych). This command loads all the functions as well as the data from the experiment.
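Assuming a standard CRAN setup, the full installation and loading sequence described above is thus:

```r
# Install PredPsych together with all required dependencies from CRAN
install.packages("PredPsych", dependencies = TRUE)

# Load the package: this makes the functions and the bundled
# experimental dataset available in the current session
library(PredPsych)
```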
Research questions
Q1. Do my experimental conditions have discriminatory information?
Q2. Is my discrimination significant?
Q3. Which features/variables can best discriminate between my conditions?
Q4. Do my experimental conditions contain variability?
Q5. Can I represent my data in lower dimensions?
Q1. Do my experimental variables have discriminatory information?
This kind of question arises when researchers are interested in understanding whether properties of the collected data (i.e., data features) encode enough information to discriminate between two or more experimental conditions (i.e., classes or groups). This goes beyond asking whether the data features are significantly different among the classes; it also requires determining whether and to what extent data features can be combined to reliably predict classes and, when errors are made, what the nature of such errors is, i.e., which conditions are more likely to be confused with each other (Tabachnick & Fidell, 2012).
Questions of this sort are perfectly suited for a classification analysis (Bishop, 2006; Hastie, Tibshirani, & Friedman, 2009). Classification analysis is a supervised machine learning approach that attempts to identify holistic patterns in the data and assign classes to them (classification). Given a set of features, a classification analysis automatically learns intrinsic patterns in the data to predict the respective classes. If the data features are informative about the classes, a high classification score is achieved. Such an analysis thus provides a measure of whether the data features “as a whole” (i.e., in their multivariate organization) contain discriminatory information about the classes. Currently, PredPsych implements three of the most commonly used algorithms for classification: Linear Discriminant Analysis, Support Vector Machines, and Decision Tree models (see Appendix 1 for guidelines on classifier selection).
Linear discriminant analysis (LDA)
The simplest algorithm for classification-based analysis is Linear Discriminant Analysis (LDA). LDA builds a model composed of a number of discriminant functions based on linear combinations of data features that provide the best discrimination between two or more classes. The aim of LDA is thus to combine the data feature scores in such a way that a single new composite variable, the discriminant function, is produced (for details, see Fisher, 1936; Rao, 1948). LDA is closely related to logistic regression analysis, which also attempts to express one dependent variable as a linear combination of other features. Compared to logistic regression, the advantage of LDA is that it can be used also when there are more than two classes. Importantly, LDA should be used only when the data features are continuous.
Implementation in PredPsych
LDA is implemented in PredPsych as the LinearDA function and utilizes the MASS package (Venables & Ripley, 2002). This function mandatorily requires inputs in the form of a dataframe^{1} (Data) and a column for the experimental conditions^{2} (classCol). Optionally, if the researcher would like to perform the classification analysis only for a subset of possible features, he/she can also select specific columns from the dataframe (selectedCols).
Additional optional inputs control the type of cross-validation to be performed (Appendix 2): cvType = “folds” for k-fold cross-validation, cvType = “LOSO” for a leave-one-subject-out procedure, cvType = “LOTO” for a leave-one-trial-out procedure, and cvType = “holdout” for a partition-based procedure. If no input is provided for this parameter, the LinearDA function performs a k-fold cross-validation^{3}, splitting the dataset into ten folds and repeatedly retaining one fold for testing the model while utilizing the other folds for training the model (for details on all other parameters that can be set for the LinearDA function, see the PredPsych manual).
By default, the LinearDA function outputs the accuracy of the classification analysis and prints the confusion matrix of the actual and predicted class memberships for the test data. However, the researcher can also optionally choose to output extended results (parameter: extendedResults = TRUE), including the LDA model and accuracy, as well as confusion matrix metrics (see Appendix 3).
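As an illustration, a hedged sketch of such a call on the dataset bundled with the toolbox might look as follows (the dataset name KinData and the position of the class column are assumptions for illustration):

```r
library(PredPsych)

# Hypothetical example: LDA with 10-fold cross-validation.
# KinData is assumed to be the bundled dataset, with the size of the
# grasped object coded in column 1 (the class column).
LDAModel <- LinearDA(Data = KinData, classCol = 1,
                     cvType = "folds",
                     extendedResults = TRUE)
```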
Confusion matrix generated by LDA. Rows represent the actual class of the data, while columns represent the predicted class membership

            Predicted 1   Predicted 2
Actual 1         51            32
Actual 2         40            45
The model obtained can then be used to predict a new dataset – a set of data that has never been used for training or testing the model (e.g., data to be collected in follow-up experiments). This can be accomplished with the same LinearDA function, setting the parameter extendedResults = TRUE and inputting the new data features via the parameter NewData. The predicted class membership for each case of the new data is stored in the LDAModel variable (visible using the command LDAModel$fitLDA$newDataprediction).
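For instance, assuming a hypothetical dataframe newKinData holding the features of newly collected observations (both names, KinData and newKinData, are assumptions for illustration):

```r
# Hypothetical example: train the LDA model and predict new observations
LDAModel <- LinearDA(Data = KinData, classCol = 1,
                     extendedResults = TRUE,
                     NewData = newKinData)

# Predicted class membership for each case of the new data
LDAModel$fitLDA$newDataprediction
```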
Support vector machines (SVMs)
More sophisticated algorithms such as Support Vector Machines (SVMs) can also be applied to test whether the data features obtained from an experiment encode sufficient discriminatory information between conditions. Similarly to LDA, SVMs try to discriminate between classes/conditions. However, instead of finding a linear function that separates the data classes, SVMs try to find the function that is farthest from the data points of any class (Cortes & Vapnik, 1995; Duda, Hart, & Stork, 2000; Vapnik, 1995). This leads to an optimal function that best separates the data classes. Since the data classes may not necessarily be linearly separable (by a single line in 2D or a plane in 3D), SVMs use a kernel function^{4} to project the data points into a higher-dimensional space. SVMs then construct a linear function in this higher dimension. Some of the most commonly used kernel functions are the linear, polynomial, and radial basis functions.
Implementation in PredPsych
Classification using SVMs is implemented in the classification function classifyFun, which utilizes the package e1071 (Meyer et al., 2017). This function additionally tunes the parameters (searches for optimal parameter values) of one of the most commonly used kernel functions – the radial basis function (RBF). The RBF kernel requires two parameters: a cost parameter C and a Gaussian kernel parameter gamma. The procedure implemented in PredPsych performs cross-validation and returns the tuned parameters (based on a separate division of the data). To obtain the tuned parameters, the input dataset is divided into three parts, which are used for tuning the parameters, training, and testing without reusing the same data. If, however, the tuning option is not selected, the data are divided only into training and testing parts. These divisions help avoid biases in the classification analysis.
As for LDA, the SVM model obtained can be used to make predictions about the class/condition of a new dataset by setting extendedResults = TRUE and inputting the new data features in NewData. Results of the analysis will be available in the variable Results (as Results$classificationResults$newDataprediction).
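A hedged usage sketch, again assuming the bundled dataset is named KinData with the class labels in column 1:

```r
# Hypothetical example: SVM classification with k-fold cross-validation;
# dataset name and column index are assumptions for illustration
Results <- classifyFun(Data = KinData, classCol = 1,
                       cvType = "folds",
                       extendedResults = TRUE)

# With NewData supplied, predictions for the new observations would be
# available as Results$classificationResults$newDataprediction
```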
Decision tree models
Another class of algorithms that a researcher can employ to predict outcomes from the data are Decision Tree (DT) models (Loh, 2011). DT models fall under the general “tree-based methods”, involving the generation of a recursive binary tree (Hastie et al., 2009). In terms of input, DT models can handle both continuous and categorical variables, as well as missing data. From the input data, DT models build a set of logical “if … then” rules that permit accurate prediction of the input cases.
DT models are especially attractive for two reasons. First, they are more flexible than regression methods and, unlike linear regression methods, can model nonlinear interactions. Second, they provide an intuitive representation based on partitioning – showing which variables, combined in which configuration, can predict the outcome (Breiman, Friedman, Stone, & Olshen, 1984). The DT models implemented in the PredPsych toolbox are Classification and Regression Tree (CART; Breiman et al., 1984), Conditional Inference (Hothorn, Hornik, & Zeileis, 2006), and Random Forest (Breiman, 2001a).
Implementation in PredPsych
DT models in PredPsych are implemented in the function DTModel, employing the rpart package (Therneau et al., 2015). This function takes as mandatory inputs a dataframe (Data), a column for the experimental conditions (classCol), and the type of DT model to use (tree): tree = “CART” for a full CART model; tree = “CARTNACV” for a CART model with cross-validation (removing the missing values); tree = “CARTCV” for a CART model with cross-validation (the missing values being handled by the function rpart); tree = “CF” for Conditional Inference; tree = “RF” for Random Forest. The function rpart handles missing data by creating surrogate variables instead of removing the cases entirely (Therneau & Atkinson, 1997). This can be useful when the data contain a high number of missing values.
Additional optional arguments that can be provided are the subset of data features (selectedFeatures), the type of cross-validation (cvType = “holdout,” “folds,” “LOTO,” or “LOSO”), and related parameters for cross-validation (see the PredPsych manual for further details on other parameters that can be set). The output of this operation returns a decision tree and, if appropriate, accuracy results and a figure from the chosen DT model. In the case of CART, the tree is automatically pruned using the value of the complexity parameter that minimizes the cross-validation error in the training dataset. The resulting figures thus display the pruned tree.
Further, the obtained DT model can be used to make predictions about the classes/conditions of a new dataset by setting extendedResults = TRUE and inputting the new data features as NewData. The results for the new dataset are available in the model variable as model$fit$newDataprediction.
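A hedged sketch of fitting a pruned CART model on the bundled data (the dataset name KinData and the column index are assumptions for illustration):

```r
# Hypothetical example: full CART decision tree on the kinematic data;
# the printed output includes the learned "if ... then" split rules
model <- DTModel(Data = KinData, classCol = 1,
                 tree = "CART",
                 extendedResults = TRUE)
```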
Q2. Is my discrimination significant?
Question 1 informs a researcher about the extent of the discriminatory power of the variables collected in an experiment, but not about the statistical significance of the discrimination. For this reason, after obtaining classification results, a researcher might ask whether the results obtained reflect a real class structure in the data, i.e., whether they are statistically significant. This is especially important when the data, as in most psychological research, are high-dimensional with a low number of observations. In such cases, even if the classification algorithm produces a low error rate, it could be that the classification does not reflect interdependencies between the data features, but rather differences in value distributions inside the classes (Ojala & Garriga, 2010). The data themselves, however, may have no structure.

One way to assess whether the classifier is exploiting a real dependency in the data is to utilize permutation-based testing (Ojala & Garriga, 2010). Permutation tests are a set of non-parametric methods for hypothesis testing that do not assume a particular distribution (Good, 2005). In the case of classification analysis, this requires shuffling the labels of the dataset (i.e., randomly shuffling classes/conditions between observations) and calculating the accuracies obtained. This process is repeated a number of times (usually 1,000 or more). The distribution of accuracies is then compared to the actual accuracy obtained without shuffling. The proportion of cases in which randomly shuffled labels give accuracies higher than the actual accuracy corresponds to an estimate of the p-value. P-values are calculated using either an exact or an approximate procedure, depending on the number of possible permutations (Phipson & Smyth, 2010).
Given an alpha level, the estimated p-value provides information about the statistical significance of the classification analysis.
Implementation in PredPsych
Permutation testing in PredPsych is implemented in the function ClassPerm. The main inputs to the function are the dataframe (Data) for classification and a column for the experimental conditions (classCol). Optionally, a classifier function (classifierFun) can be provided as an input to the permutation function; this can be any function that returns the mean classification accuracy (e.g., LinearDA). A specific number of simulations (nSims) can also be supplied as an optional input. If no classifierFun is provided, a default SVM classifier with k-fold cross-validation is utilized, and the number of simulations defaults to 1,000 if no input is provided. In addition to calculating the p-value for the classification, the function also generates a figure representing the null distribution and the classification accuracy (with the chance-level accuracy as a vertical red line and the actual classification accuracy as a vertical blue line) (Fig. 3b).
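A hedged sketch of running the permutation test with the default SVM classifier and 1,000 simulations (the dataset name KinData and the column index are assumptions for illustration):

```r
# Hypothetical example: permutation test of classification significance.
# Returns the estimated p-value and plots the null distribution of
# accuracies against the actual classification accuracy.
PermResults <- ClassPerm(Data = KinData, classCol = 1,
                         nSims = 1000)
```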
Q3. Which features/variables can best discriminate between the conditions?
Classification analysis provides information about whether data features contain discriminatory information. However, there are cases in which hundreds of features are used as inputs for the classification, and many of them might not contribute (or not contribute equally) to it. While certain features might favor discrimination, others might contain mere noise and hinder the classification (i.e., increase the prediction error). In such cases, it is advisable to perform some form of feature selection to identify the features that are most important for a given analysis. In a first screening, the researcher can remove problematic features based on a set of criteria (e.g., the percentage of missing values). Then, a rank can be assigned to the remaining features based on their importance. As a third step, according to their rank, the features that aid classification can be retained while those that merely add noise can be eliminated. Prediction errors can thus be evaluated on this subset of features instead of using all the features present.
One such ranking measure implemented in PredPsych is an F-score computed independently for each feature (cf. Chen & Lin, 2006). For two classes P and Q, the score is defined as

$$ F\left(P,Q\right)=\frac{{\left\Vert {\overrightarrow{\mu}}^P-{\overrightarrow{\mu}}^Q\right\Vert}_2^2}{tr\left({\sum}^P\right)+tr\left({\sum}^Q\right)} $$

where \( {\overrightarrow{\mu}}^P \) and \( {\overrightarrow{\mu}}^Q \) are the means of the feature vector and ∑^{ P } and ∑^{ Q } are the covariance matrices for the P and Q classes, respectively; tr() denotes the trace of a matrix and ‖⋅‖_{2} denotes the Euclidean norm.
Even though this approach has the limitation of calculating scores independently for each feature, the measure is easy to compute. An alternative approach for calculating the importance of features is to use the feature importance scores from random forest trees (also implemented via the DTModel function with the tree parameter set to “RF”).
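For a single feature, the covariance matrices reduce to variances, so such a score can be computed directly. A minimal base-R sketch written from the definitions above (not a call to a PredPsych function; the dataset name KinData is an assumption for illustration):

```r
# Minimal sketch: per-feature discriminability score between two classes,
# i.e. (difference of class means)^2 / (sum of within-class variances)
featureScore <- function(x, classes) {
  labs <- unique(classes)
  p <- x[classes == labs[1]]
  q <- x[classes == labs[2]]
  (mean(p) - mean(q))^2 / (var(p) + var(q))
}

# Hypothetical usage: score each kinematic feature against object size
# scores <- sapply(KinData[, -1], featureScore, classes = KinData[, 1])
```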
Implementation in PredPsych
Feature selection results. F-scores for all the features at 10 % of the movement towards the small vs. large object

Data features          F-scores
Wrist Velocity 01      0.055
Grip Aperture 01       0.030
Wrist Height 01        0.00045
x_index 01             0.00038
y_index 01             0.012
z_index 01             7.10e-05
x_thumb 01             0.011
y_thumb 01             0.0067
z_thumb 01             1.30e-05
x_finger plane 01      4.20e-06
y_finger plane 01      0.00033
z_finger plane 01      0.0026
Q4. Do my experimental conditions contain variability?
Variability in data has long been considered unwanted noise arising from inherent noise in sensory or motor processing (Churchland, Afshar, & Shenoy, 2006; Jones, Hamilton, & Wolpert, 2002). More recent studies, however, suggest that this variability might reflect slight differences in the underlying processes, especially individual-based differences (Calabrese, Norris, Wenning, & Wright, 2011; Koul, Cavallo, Ansuini, & Becchio, 2016; Ting et al., 2015). Consequently, many researchers are attempting to gain a better understanding of their results in terms of the intrinsic variability of the data. When the source of this variability is not clear, researchers have to rely on exploratory approaches such as clustering or non-negative factorization.
Clustering approaches partition data features into subsets or clusters based on data similarity. Each cluster comprises observations that are more similar to each other than to those in the other clusters (for an overview, see Han, Kamber, & Pei, 2012). Unlike classification analysis, clustering analysis does not require class labels but utilizes the data features alone to predict subsets, and is thus an unsupervised learning approach.
Clustering has previously been utilized for a number of applications in data science, ranging from image pattern recognition, consumer preferences, and gene expression data to clinical applications. All clustering approaches require the number of clusters to be specified in addition to the data features. In most cases (unless there is a priori information), this number of clusters is chosen arbitrarily. Model-based clustering approaches provide a methodology for determining the number of clusters (Fraley & Raftery, 1998). In a model-based approach, data are considered to be generated from a set of Gaussian distributions (components or clusters), i.e., as a mixture of these components (mixture models). Instead of using heuristics, model-based clustering approximates the Bayes factor (utilizing the Bayesian Information Criterion) to determine the model with the highest evidence (as provided by the data). The model generated by this approach, in contrast to other clustering approaches, can further be used to predict new data classes from data features.
Implementation in PredPsych
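A hedged sketch of this analysis is given below. The function name ModelCluster, the dataset name KinData, the column indices, and the parameter G (a range of candidate cluster numbers, as in the mclust package that model-based clustering implementations typically wrap) are all assumptions for illustration; please consult the PredPsych manual for the exact signature.

```r
# Hypothetical example: model-based clustering of the kinematic features
# at 10% of the movement; the best number of clusters among 1-12 is
# selected via the Bayesian Information Criterion
cluster_time <- ModelCluster(Data = KinData[, 2:13], G = 1:12)
```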
We find that the number of clusters decreases as the movement progresses, from nine clusters (at 10 %) to five clusters (at 100 % of the movement). This is in agreement with recent propositions that biological constraints and affordances shape so-called “don’t care” and “bottleneck” regions (Beer, Chiel, & Gallagher, 1999; Ting et al., 2015), which correspond to high and low motor variability, respectively.
Q5. Can I represent my data in lower dimensions?
While the excitement surrounding multivariate analyses of quantitative behavioral data is still growing, researchers have also come to realize that the nature and volume of multivariate data pose severe challenges for making psychological sense of them. Variables in such data are often correlated with each other, making the interpretation of effects difficult. In addition, high dimensionality can have adverse effects on classification analyses. Problems of overfitting (i.e., a classification model exhibiting a small prediction error on the training data but a much larger generalization error on unseen future data), in particular, can occur when the number of observed variables is higher than the number of available training samples.
To escape the curse of dimensionality (Bellman, 1957), it is sometimes imperative to construct interpretable low-dimensional summaries of high-dimensional data. Dimensionality reduction has proven useful for generating relatively independent data features, obtaining higher and more generalizable classification results (lower prediction errors), and aiding the interpretability of the results. Various models have been developed for such dimensionality reduction, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Non-negative Matrix Factorization (NMF), and Multidimensional Scaling (MDS). PredPsych currently implements two of the most commonly used models – MDS and PCA.
MDS, similarly to the other techniques mentioned, attempts to project multidimensional data into lower dimensions (Bishop, 2006; Cox & Cox, 2000). In contrast to PCA, MDS tries to preserve the original distance relationships present in the multidimensional space for the projections in the lower dimension. PCA, on the other hand, attempts to preserve the original covariance between the data points.
Implementation in PredPsych
Dimensionality reduction in PredPsych is implemented in the function DimensionRed. This function requires as mandatory inputs the dataframe (Data) and the selected columns (selectedCols) for which the dimensionality has to be reduced. Additional inputs can be provided for visualizing the first two reduced dimensions – outcome (the class of the observations present as rows of the dataframe) and plot (a logical indicating whether the plot should be displayed).
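A hedged usage sketch, based on the parameters named above (the dataset name KinData and the column indices are assumptions for illustration):

```r
# Hypothetical example: project the kinematic features at 10% of the
# movement onto lower dimensions and plot the first two dimensions,
# colored by the grasped-object size stored in column 1
DimReduced <- DimensionRed(Data = KinData,
                           selectedCols = 2:13,
                           outcome = KinData[, 1],
                           plot = TRUE)
```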
Discussion and conclusions
Causal explanatory analyses in experimental psychology have recently been complemented by predictive methods based on machine-learning models. These methods allow increased sensitivity and greater predictive power compared to traditional explanatory approaches. Resources available to researchers for their implementation, however, are still surprisingly scarce. Without a proper framework, utilizing these analyses requires substantial expertise and is frequently opaque to non-experts.
PredPsych aims to provide a comprehensive and user-friendly software framework for the use of predictive machine-learning methods in experimental psychology. In this paper, we presented PredPsych by outlining the types of questions that can be answered using the functions implemented in the package. Furthermore, we provided examples of how to apply these functions and offered suggestions on the choice of parameters.
Navigating by trial and error is often the default approach in machine learning. PredPsych, instead, encourages researchers to formulate their research questions first and then, based on the specific question, select the most appropriate technique. A distinctive feature of PredPsych in comparison to other available packages is its tailoring to experimental psychology. This is both a strength and a limitation: a strength, in that it makes the application of the implemented functions accessible to experimental psychologists with limited programming experience; a limitation, in that the resulting framework is less abstract and thus less reusable in other contexts. Other packages, such as scikit-learn, for example, implement generic functions usable in various domains, ranging from spam detection and image recognition to drug response and stock prices. These packages are thus more flexible, but also more difficult to use, as their adaptation requires the programming of specific scripts.
We anticipate that PredPsych, along with the illustrations provided in this paper, will favor the spread of predictive approaches across various subdomains of experimental psychology. Moreover, we hope that the framework of PredPsych will be inspiring and informative for the clinical psychology community, enabling clinicians to ask new questions – questions that cannot be easily investigated using traditional statistical tools. Overall, machine learning-based predictive methods promise many opportunities to study human behavior and develop new clinical tools.
Footnotes
1. An R data frame is an object used for storing data tables, where each column is a list of categorical or numeric data variables.
2. This method of parameter input is equivalent to a symbolic description of the to-be-fitted model (e.g., classCol ~ feature1 + feature2, etc.).
3. Machine learning results can vary, especially with small sample sizes or disproportionate class sizes, depending on the choice of cross-validation scheme. To reduce such effects, PredPsych utilizes a stratified cross-validation scheme and, by default, sets a fixed seed value (SetSeed = TRUE).
4. A mapping function that transforms the input features into a higher-dimensional space (Hofmann, Schölkopf, & Smola, 2008).
Notes
Acknowledgements
This work received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement n. 312919.
References
 Ansuini, C., Cavallo, A., Koul, A., Jacono, M., Yang, Y., & Becchio, C. (2015). Predicting object size from hand kinematics: A temporal perspective. Plos One, 10(3), e0120432. https://doi.org/10.1371/journal.pone.0120432 CrossRefPubMedPubMedCentralGoogle Scholar
 Anzulewicz, A., Sobota, K., & DelafieldButt, J. T. (2016). Toward the autism motor signature: Gesture patterns during smart tablet gameplay identify children with autism. Scientific Reports, 6, 31107. https://doi.org/10.1038/srep31107 CrossRefPubMedPubMedCentralGoogle Scholar
 Beer, R. D., Chiel, H. J., & Gallagher, J. C. (1999). Evolution and analysis of model CPGs for walking: II. General principles and individual variability. Journal of Computational Neuroscience, 7(2), 119–47. https://doi.org/10.1023/A:1008920021246 CrossRefPubMedGoogle Scholar
 Bellman, R. E. (1957). Dynamic programming. Princeton, NJ: Princeton University Press.Google Scholar
 Bishop, C. M. (2006). Pattern recognition and machine learning. (1st ed.). SpringerVerlag New York. https://doi.org/10.1117/1.2819119 Google Scholar
 Borra, S., & Di Ciaccio, A. (2010). Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods. Computational Statistics & Data Analysis, 54(12), 2976–2989. https://doi.org/10.1016/j.csda.2010.03.004
 Breiman, L. (2001a). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324
 Breiman, L. (2001b). Statistical modeling: The two cultures. Statistical Science, 16(3), 199–231. https://doi.org/10.1214/ss/1009213726
 Breiman, L., Friedman, J., Stone, C. J., & Olshen, R. A. (1984). Classification and regression trees (1st ed.). Wadsworth Statistics/Probability Series. Taylor & Francis.
 Browne, M. W. (2000). Cross-validation methods. Journal of Mathematical Psychology, 44(1), 108–132. https://doi.org/10.1006/jmps.1999.1279
 Calabrese, R. L., Norris, B. J., Wenning, A., & Wright, T. M. (2011). Coping with variability in small neuronal networks. Integrative and Comparative Biology, 51(6), 845–855. https://doi.org/10.1093/icb/icr074
 Cavallo, A., Koul, A., Ansuini, C., Capozzi, F., & Becchio, C. (2016). Decoding intentions from movement kinematics. Scientific Reports, 6, 37036. https://doi.org/10.1038/srep37036
 Chen, Y., & Lin, C.-J. (2006). Combining SVMs with various feature selection strategies. In Feature extraction: Foundations and applications (Vol. 324, pp. 315–324). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-35488-8_13
 Churchland, M. M., Afshar, A., & Shenoy, K. V. (2006). A central source of movement variability. Neuron, 52(6), 1085–1096. https://doi.org/10.1016/j.neuron.2006.10.034
 Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297. https://doi.org/10.1023/A:1022627411411
 Cox, T. F., & Cox, M. A. A. (2000). Multidimensional scaling (2nd ed.). Chapman & Hall/CRC.
 Douglas, P. K., Harris, S., Yuille, A., & Cohen, M. S. (2011). Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief. NeuroImage, 56(2), 544–553. https://doi.org/10.1016/j.neuroimage.2010.11.002
 Duda, R. O., Hart, P. E., & Stork, D. G. (2000). Pattern classification. Wiley-Interscience.
 Fisher, R. A. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7(2), 179–188.
 Forman, G., & Scholz, M. (2010). Apples-to-apples in cross-validation studies. ACM SIGKDD Explorations Newsletter, 12(1), 49. https://doi.org/10.1145/1882471.1882479
 Forster, M. R. (2002). Predictive accuracy as an achievable goal of science. Philosophy of Science, 69, 124–134. https://doi.org/10.1086/341840
 Fraley, C., & Raftery, A. E. (2007). Model-based methods of classification: Using the mclust software in chemometrics. Journal of Statistical Software, 18(6), 1–13. https://doi.org/10.18637/jss.v018.i06
 Fraley, C., & Raftery, A. E. (1998). How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8), 578–588. https://doi.org/10.1093/comjnl/41.8.578
 Gong, G. (1986). Cross-validation, the jackknife, and the bootstrap: Excess error estimation in forward logistic regression. Journal of the American Statistical Association, 81(393), 108–113. https://doi.org/10.1080/01621459.1986.10478245
 Good, P. (2005). Permutation, parametric and bootstrap tests of hypotheses. New York: Springer-Verlag. https://doi.org/10.1007/b138696
 Hahn, T., Nierenberg, A. A., & Whitfield-Gabrieli, S. (2017). Predictive analytics in mental health: Applications, guidelines, challenges and perspectives. Molecular Psychiatry, 22(1), 37–43. https://doi.org/10.1038/mp.2016.201
 Han, J., Kamber, M., & Pei, J. (2012). Cluster analysis. In Data mining (pp. 443–495). Elsevier. https://doi.org/10.1016/B978-0-12-381479-1.00010-1
 Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning. Springer Series in Statistics (2nd ed., Vol. 1). New York, NY: Springer New York. https://doi.org/10.1007/978-0-387-84858-7
 Haynes, J.-D., Sakai, K., Rees, G., Gilbert, S., Frith, C. D., & Passingham, R. E. (2007). Reading hidden intentions in the human brain. Current Biology, 17(4), 323–328. https://doi.org/10.1016/j.cub.2006.11.072
 Hofmann, T., Schölkopf, B., & Smola, A. J. (2008). Kernel methods in machine learning. The Annals of Statistics, 36(3), 1171–1220. https://doi.org/10.1214/009053607000000677
 Hothorn, T., Hornik, K., & Zeileis, A. (2006). Unbiased recursive partitioning: A conditional inference framework. Journal of Computational and Graphical Statistics, 15, 651–674. https://doi.org/10.1198/106186006X133933
 Huys, Q. J. M., Maia, T. V., & Frank, M. J. (2016). Computational psychiatry as a bridge from neuroscience to clinical applications. Nature Neuroscience, 19(3), 404–413. https://doi.org/10.1038/nn.4238
 Jones, K. E., Hamilton, A. F., & Wolpert, D. M. (2002). Sources of signal-dependent noise during isometric force production. Journal of Neurophysiology, 88(3), 1533–1544. https://doi.org/10.1152/jn.00985.2001
 Kaplan, A. (1973). The conduct of inquiry: Methodology for behavioral science. Transaction Publishers.
 Kelleher, J. D., Mac Namee, B., & D'Arcy, A. (2015). Fundamentals of machine learning for predictive data analytics. Cambridge, MA: The MIT Press.
 Kiang, M. Y. (2003). A comparative assessment of classification methods. Decision Support Systems, 35(4), 441–454. https://doi.org/10.1016/S0167-9236(02)00110-0
 Kim, J.-H. (2009). Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap. Computational Statistics & Data Analysis, 53(11), 3735–3745. https://doi.org/10.1016/j.csda.2009.04.009
 Koul, A., Cavallo, A., Ansuini, C., & Becchio, C. (2016). Doing it your way: How individual movement styles affect action prediction. PLoS ONE, 11(10), e0165297. https://doi.org/10.1371/journal.pone.0165297
 Loh, W.-Y. (2011). Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 14–23. https://doi.org/10.1002/widm.8
 Meyer, D., Dimitriadou, E., Hornik, K., Weingessel, A., & Leisch, F. (2017). e1071: Misc functions of the Department of Statistics, Probability Theory Group (formerly: E1071), TU Wien. R package version 1.6-8. https://CRAN.R-project.org/package=e1071
 Ojala, M., & Garriga, G. C. (2010). Permutation tests for studying classifier performance. The Journal of Machine Learning Research, 11, 1833–1863.
 Phipson, B., & Smyth, G. K. (2010). Permutation P-values should never be zero: Calculating exact P-values when permutations are randomly drawn. Statistical Applications in Genetics and Molecular Biology, 9(1). https://doi.org/10.2202/1544-6115.1585
 Raftery, A. E., & Dean, N. (2006). Variable selection for model-based clustering. Journal of the American Statistical Association, 101, 168–178. https://doi.org/10.1198/016214506000000113
 Rao, C. R. (1948). The utilization of multiple measurements in problems of biological classification. Journal of the Royal Statistical Society. Series B, 10, 159–203.
 R Development Core Team. (2016). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. http://www.R-project.org
 Saeys, Y., Inza, I., & Larrañaga, P. (2007). A review of feature selection techniques in bioinformatics. Bioinformatics, 23(19), 2507–2517. https://doi.org/10.1093/bioinformatics/btm344
 Shmueli, G. (2010). To explain or to predict? Statistical Science, 25(3), 289–310. https://doi.org/10.1214/10-STS330
 Shmueli, G., & Koppius, O. R. (2011). Predictive analytics in information systems research. MIS Quarterly, 35(3), 553–572.
 Song, X., Mitnitski, A., Cox, J., & Rockwood, K. (2004). Comparison of machine learning techniques with classical statistical models in predicting health outcomes. Studies in Health Technology and Informatics, 107, 736–740.
 Srinivasan, R., Golomb, J. D., & Martinez, A. M. (2016). A neural basis of facial action recognition in humans. Journal of Neuroscience, 36(16), 4434–4442. https://doi.org/10.1523/JNEUROSCI.1704-15.2016
 Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). New York: Harper and Row.
 Therneau, T. M., Atkinson, B., & Ripley, B. (2015). rpart: Recursive partitioning and regression trees. R package version 4.1-10. https://CRAN.R-project.org/package=rpart
 Therneau, T. M., & Atkinson, E. J. (1997). An introduction to recursive partitioning using the RPART routines (Vol. 61, p. 452). Mayo Foundation: Technical report.
 Ting, L. H., Chiel, H. J., Trumbower, R. D., Allen, J. L., McKay, J. L., Hackney, M. E., & Kesar, T. M. (2015). Neuromechanical principles underlying movement modularity and their implications for rehabilitation. Neuron, 86(1), 38–54. https://doi.org/10.1016/j.neuron.2015.02.042
 Vapnik, V. (1995). The nature of statistical learning theory. New York, NY: Springer-Verlag New York. https://doi.org/10.1007/978-1-4757-2440-0
 Varoquaux, G., Raamana, P. R., Engemann, D. A., Hoyos-Idrobo, A., Schwartz, Y., & Thirion, B. (2017). Assessing and tuning brain decoders: Cross-validation, caveats, and guidelines. NeuroImage, 145(Pt B), 166–179. https://doi.org/10.1016/j.neuroimage.2016.10.038
 Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). New York, NY: Springer New York. https://doi.org/10.1007/978-0-387-21706-2
 Yarkoni, T., & Westfall, J. (2016). Choosing prediction over explanation in psychology: Lessons from machine learning. https://doi.org/10.6084/m9.figshare.2441878.v1
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.