Abstract
Intertemporal choices involve assessing options with different reward amounts available at different time delays. The similarity approach to intertemporal choice focuses on judging how similar amounts and delays are, yet we do not fully understand the cognitive process by which these judgments are made. Here, we use machine-learning algorithms to predict similarity judgments to (1) investigate which algorithms best predict these judgments, (2) assess which predictors are most useful in predicting participants’ judgments, and (3) determine the minimum number of judgments required to accurately predict future judgments. We applied eight algorithms to similarity judgments for reward amounts and time delays made by participants in two data sets. We found that neural network, random forest, and support vector machine algorithms generated the highest out-of-sample accuracy. Though neural networks and support vector machines offer little clarity about a possible process for making similarity judgments, random forest algorithms generate decision trees that can mimic the cognitive computations of human judgment. We also found that the numerical difference between amount values or delay values was the most important predictor of these judgments, replicating previous work. Finally, the best-performing algorithms, such as random forest, can make highly accurate predictions of judgments from relatively small samples (~15 judgments), which will help minimize the number of judgments required to extrapolate to new value pairs. In summary, machine-learning algorithms provide both theoretical improvements to our understanding of the cognitive computations involved in similarity judgments and intertemporal choices and practical improvements in designing better ways of collecting data.
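To make the decision-tree idea concrete, the following is a minimal illustrative sketch (not the authors' code or data): a toy rule of the kind a single tree in a random forest might learn for similarity judgments, splitting on the numerical difference between two reward amounts, which the abstract identifies as the most important predictor. The threshold value and function name here are hypothetical, chosen only for illustration.

```python
def judge_similarity(amount_a, amount_b, threshold=5.0):
    """Toy decision rule mimicking one tree-style computation:
    classify a pair of reward amounts as 'similar' when the
    absolute numerical difference falls below a (hypothetical)
    learned threshold, else 'dissimilar'."""
    difference = abs(amount_a - amount_b)
    return "similar" if difference < threshold else "dissimilar"

# Example pairs: a small difference vs. a large difference
print(judge_similarity(10, 12))  # small difference -> "similar"
print(judge_similarity(10, 30))  # large difference -> "dissimilar"
```

A random forest aggregates many such threshold-based rules learned from data; the appeal noted in the abstract is that each individual rule remains interpretable as a candidate cognitive computation, unlike the weights of a neural network or support vector machine.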
Availability of data and material
All data files and supplementary materials (tables, figures) are available at https://doi.org/10.17605/osf.io/edq39.
Funding
This research was funded by the National Science Foundation (SES-1658837).
Author information
Contributions
The authors made the following contributions. JRS: conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, visualization, writing—original draft preparation, writing—review and editing; APS: formal analysis, methodology, software, visualization, writing—review and editing; TR: formal analysis, methodology, software, visualization, writing—review and editing; L-KS: conceptualization, funding acquisition, investigation, supervision, writing—review and editing.
Ethics declarations
Conflict of interest
The authors declare that they have no known conflicts of interest.
Code availability
All data analysis scripts are available at https://doi.org/10.17605/osf.io/edq39.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research was funded by an award from the National Science Foundation (SES-1658837). We thank the University of Nebraska Holland Computing Center for providing computing access to analyze the data.
Cite this article
Stevens, J.R., Polzkill Saltzman, A., Rasmussen, T. et al. Improving measurements of similarity judgments with machine-learning algorithms. J Comput Soc Sc 4, 613–629 (2021). https://doi.org/10.1007/s42001-020-00098-1