Behavior Research Methods, Volume 49, Issue 5, pp 1696–1715

nodeGame: Real-time, synchronous, online experiments in the browser

Abstract

nodeGame is a free, open-source JavaScript/HTML5 framework for conducting synchronous experiments online and in the lab, directly in the browser window. It is specifically designed to support behavioral research along three dimensions: (i) larger group sizes, (ii) real-time (but also discrete-time) experiments, and (iii) batches of simultaneous experiments. nodeGame has a modular codebase and defines an API (application programming interface) through which experimenters can create new strategic environments and configure the platform. Requiring no installation, nodeGame runs on a great variety of devices, from desktop computers to laptops, smartphones, and tablets. The current version of the software is 3.0, and extensive documentation is available on the wiki pages at http://nodegame.org.
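To give a flavor of this API: the sequence of stages of a nodeGame experiment is declared through a stager object, following the pattern documented on the wiki. The sketch below is illustrative only; the stage names and the settings.ROUNDS parameter are assumptions for this example, not part of the abstract above.

```javascript
// game.stages.js -- minimal sketch of a nodeGame stage sequence (illustrative).
// Stage names and settings.ROUNDS are assumptions made for this example.
module.exports = function(stager, settings) {
    stager
        .next('instructions')            // one-shot instructions stage
        .repeat('game', settings.ROUNDS) // main stage, repeated for ROUNDS rounds
        .next('questionnaire')           // exit questionnaire
        .gameover();                     // marks the end of the sequence
};
```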

Keywords

Behavioral experiments · Software · Real-time · Browser · Online · Open-source · JavaScript

Copyright information

© Psychonomic Society, Inc. 2016

Authors and Affiliations

Stefano Balietti, Network Science Institute, Northeastern University, Boston, USA
