Agrawal, M., Peterson, J.C., & Griffiths, T.L. (2020). Scaling up psychology via scientific regret minimization. Proceedings of the National Academy of Sciences, 117(16), 8825–8835.
Almaatouq, A., Noriega-Campero, A., Alotaibi, A., Krafft, P.M., Moussaid, M., & Pentland, A. (2020). Adaptive social networks promote the wisdom of crowds. Proceedings of the National Academy of Sciences, 117(21), 11379–11386.
Almaatouq, A., Yin, M., & Watts, D.J. (2020). Collective problem-solving of groups across tasks of varying complexity. (PsyArXiv preprint).
Anwyl-Irvine, A.L., Massonnié, J., Flitton, A., Kirkham, N., & Evershed, J.K. (2020). Gorilla in our midst: An online behavioral experiment builder. Behavior Research Methods, 52(1), 388–407.
Arechar, A.A., Gächter, S., & Molleman, L. (2018). Conducting interactive experiments online. Experimental Economics, 21(1), 99–131.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563(7729), 59–64.
Balandat, M., Karrer, B., Jiang, D., Daulton, S., Letham, B., Wilson, A.G., et al. (2020). BoTorch: A framework for efficient Monte Carlo Bayesian optimization. Advances in Neural Information Processing Systems, 33.
Balietti, S. (2017). nodeGame: Real-time, synchronous, online experiments in the browser. Behavior Research Methods, 49(5), 1696–1715.
Balietti, S., Klein, B., & Riedl, C. (2020a). Optimal design of experiments to identify latent behavioral types. Experimental Economics.
Balietti, S., Klein, B., & Riedl, C. (2020b). Optimal design of experiments to identify latent behavioral types. Experimental Economics, 1–28.
Becker, J., Almaatouq, A., & Horvat, A. (2020). Network structures of collective intelligence: The contingent benefits of group discussion. arXiv preprint arXiv:2009.07202.
Becker, J., Brackbill, D., & Centola, D. (2017). Network dynamics of social influence in the wisdom of crowds. Proceedings of the National Academy of Sciences, 114(26), E5070–E5076.
Becker, J., Guilbeault, D., & Smith, E.B. (2019). The crowd classification problem. Academy of Management Proceedings, 2019, 13404.
Becker, J., Porter, E., & Centola, D. (2019). The wisdom of partisan crowds. Proceedings of the National Academy of Sciences, 116(22), 10717–10722.
Ben-Kiki, O., Evans, C., & Ingerson, B. (2009). YAML Ain't Markup Language (YAML™) version 1.1. Retrieved from https://yaml.org/spec/cvs/spec.pdf (Working Draft 2008-05).
Berinsky, A.J., Huber, G.A., & Lenz, G.S. (2012). Evaluating online labor markets for experimental research: Amazon Mechanical Turk. Political Analysis, 20(3), 351–368.
Birnbaum, M.H. (2004). Human research and data collection via the Internet. Annual Review of Psychology, 55, 803–832.
Bourgin, D.D., Peterson, J.C., Reichman, D., Russell, S.J., & Griffiths, T.L. (2019). Cognitive model priors for predicting human decisions. In Proceedings of Machine Learning Research, 97, 5133–5141.
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112–130.
Chen, D.L., Schonger, M., & Wickens, C. (2016). oTree: An open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9, 88–97.
de Leeuw, J.R. (2015). jsPsych: a JavaScript library for creating behavioral experiments in a web browser. Behavior Research Methods, 47, 1–12.
Erev, I., Ert, E., Plonsky, O., Cohen, D., & Cohen, O. (2017). From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience. Psychological Review, 124(4), 369–409.
Fedosejev, A. (2015). React.js essentials. Packt Publishing Ltd.
Feng, D. (2020). Towards socially interactive agents: Learning generative models of social interactions via crowdsourcing. Unpublished doctoral dissertation, Northeastern University.
Feng, D., Carstensdottir, E., El-Nasr, M.S., & Marsella, S. (2019). Exploring improvisational approaches to social knowledge acquisition. In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (pp. 1060–1068).
Finger, H., Goeke, C., Diekamp, D., Standvoß, K., & König, P. (2017). Labvanced: A unified JavaScript framework for online studies. In International Conference on Computational Social Science (Cologne).
Garaizar, P., & Reips, U.-D. (2019). Best practices: Two web-browser-based methods for stimulus presentation in behavioral experiments with high-resolution timing requirements. Behavior Research Methods, 51(3), 1441–1453.
Giamattei, M., Molleman, L., Seyed Yahosseini, K., & Gächter, S. (2019). LIONESS Lab: A free web-based platform for conducting interactive experiments online. (SSRN preprint).
Goodman, J.K., Cryder, C.E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), 213–224.
Grootswagers, T. (2020). A primer on running human behavioural experiments online. Behavior Research Methods.
Guilbeault, D., Woolley, S., & Becker, J. (2020). Probabilistic social learning improves the public’s detection of misinformation.
Hartshorne, J.K., de Leeuw, J. R., Goodman, N.D., Jennings, M., & O’Donnell, T.J. (2019). A thousand studies for the price of one: Accelerating psychological science with Pushkin. Behavior Research Methods, 51(4), 1782–1803.
Henninger, F., Shevchenko, Y., Mertens, U., Kieslich, P.J., & Hilbig, B.E. (2019). Lab.js: A free, open, online study builder. (PsyArXiv preprint).
Horton, J.J., Rand, D.G., & Zeckhauser, R.J. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics, 14(3), 399–425.
Houghton, J. (2020). Interdependent diffusion: The social contagion of interacting beliefs. Unpublished doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA.
Houghton, J.P. (2020). Interdependent diffusion: The social contagion of interacting beliefs. arXiv preprint arXiv:2010.02188.
Ishowo-Oloko, F., Bonnefon, J.-F., Soroye, Z., Crandall, J., Rahwan, I., & Rahwan, T. (2019). Behavioural evidence for a transparency–efficiency tradeoff in human–machine cooperation. Nature Machine Intelligence, 1(11), 517–521.
Jahani, E., Gallagher, N.M., Merhout, F., Cavalli, N., Guilbeault, D., Leng, Y., et al. (2020). Exposure to common enemies can increase political polarization: Evidence from a cooperation experiment with automated partisans.
Letham, B., Karrer, B., Ottoni, G., & Bakshy, E. (2019). Constrained Bayesian optimization with noisy experiments. Bayesian Analysis, 14(2), 495–519.
Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442.
Mao, A., Chen, Y., Gajos, K.Z., Parkes, D.C., Procaccia, A.D., & Zhang, H. (2012). Turkserver: Enabling synchronous and longitudinal online experiments. In Workshops at the Twenty-Sixth AAAI Conference on Artificial Intelligence.
Mao, A., Dworkin, L., Suri, S., & Watts, D.J. (2017). Resilient cooperators stabilize long-run cooperation in the finitely repeated prisoner’s dilemma. Nature Communications, 8, 13800.
Mason, W., & Suri, S. (2012). Conducting behavioral research on Amazon Mechanical Turk. Behavior Research Methods, 44(1), 1–23.
McClelland, G.H. (1997). Optimal design in psychological research. Psychological Methods, 2(1), 3–19.
McKnight, M.E., & Christakis, N.A. (2016). Breadboard: Software for online social experiments. Retrieved from https://breadboard.yale.edu/.
Musch, J., & Reips, U.-D. (2000). A brief history of web experimenting. In Psychological Experiments on the Internet (pp. 61–87). Elsevier.
Noriega, A., Camacho, D., Meizner, D., Enciso, J., Quiroz-Mercado, H., Morales-Canton, V., et al. (2020). Screening diabetic retinopathy using an automated retinal image analysis (ARIA) system in Mexico: Independent and assistive use cases. (medRxiv preprint).
Palan, S., & Schitter, C. (2018). Prolific.ac–a subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27.
Paolacci, G., Chandler, J., & Ipeirotis, P.G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
Pescetelli, N., Rutherford, A., Kao, A., & Rahwan, I. (2019). Collective learning in news consumption. (PsyArXiv preprint).
Plonsky, O., Apel, R., Ert, E., Tennenholtz, M., Bourgin, D., Peterson, J.C., et al. (2019). Predicting human decisions with behavioral theories and machine learning. (arXiv preprint arXiv:1904.06866).
Reips, U.-D. (2000). The web experiment method: Advantages, disadvantages, and solutions. In Psychological Experiments on the Internet (pp. 89–117). Elsevier.
Reips, U.-D. (2012). Using the Internet to collect data. In APA Handbook of Research Methods in Psychology (Vol. 2, pp. 201–310). American Psychological Association.
Reips, U.-D., & Neuhaus, C. (2002). WEXTOR: A web-based tool for generating and visualizing experimental designs and procedures. Behavior Research Methods, Instruments, & Computers, 34(2), 234–240.
Salganik, M.J., Dodds, P.S., & Watts, D.J. (2006). Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311(5762), 854–856.
Schelling, T.C. (2006). Micromotives and macrobehavior. W. W. Norton & Company.
Shirado, H., & Christakis, N.A. (2017). Locally noisy autonomous agents improve global human coordination in network experiments. Nature, 545, 370–374.
Suchow, J.W., & Griffiths, T.L. (2016). Rethinking experiment design as algorithm design. Advances in Neural Information Processing Systems, 29, 1–8.
Tilkov, S., & Vinoski, S. (2010). Node.js: Using JavaScript to build high-performance network programs. IEEE Internet Computing, 14(6), 80–83.
Traeger, M.L., Sebo, S.S., Jung, M., Scassellati, B., & Christakis, N.A. (2020). Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proceedings of the National Academy of Sciences, 117(12), 6370–6375.
Valentine, M.A., Retelny, D., To, A., Rahmati, N., Doshi, T., & Bernstein, M.S. (2017). Flash organizations: Crowdsourcing complex work by structuring crowds as organizations. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3523–3537).
von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51(8), 58–67.
Whiting, M.E., Blaising, A., Barreau, C., Fiuza, L., Marda, N., Valentine, M., et al. (2019). Did it have to end this way? Understanding the consistency of team fracture. Proceedings of the ACM on Human–Computer Interaction, 3(CSCW).
Whiting, M.E., Gao, I., Xing, M., N’Godjigui, J.D., Nguyen, T., & Bernstein, M.S. (2020). Parallel worlds: Repeated initializations of the same team to improve team viability. Proceedings of the ACM on Human–Computer Interaction, 4(CSCW1), 22.
Whiting, M.E., Hugh, G., & Bernstein, M.S. (2019). Fair work: Crowd work minimum wage with one line of code. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 197–206).
Wieruch, R. (2017). The road to React: Your journey to master plain yet pragmatic React.js. Robin Wieruch.