1 Introduction

Online labor markets such as Amazon Mechanical Turk (MTurk) are increasingly popular tools for behavioral scientists. With their large and diverse pools of people ready to promptly perform tasks for pay, these markets present researchers with new opportunities to recruit participants for experiments.Footnote 1 Studies from across the social sciences have systematically compared data collected online with data from the physical laboratory. Their conclusions are promising: classic results from psychology and economics have been replicated using online samples, and the data obtained online is deemed as reliable as that obtained via traditional methods.Footnote 2

Despite its great potential, behavioral research online has so far remained largely limited to non-interactive decision-making tasks or one-shot games with simultaneous decisions. Current online studies of social behavior often use survey software such as Qualtrics or SurveyMonkey to document decision making in tasks that participants complete individually, and emulate interactions through post hoc matching. Although this approach can be powerful, it does not permit the study of repeated, ‘hot’ interactions where live feedback between participants is essential. Experimental designs with live interaction are rarely implemented online, partly because there is not yet a widely-used web-based equivalent of z-Tree (Fischbacher 2007).Footnote 3

In this paper, we assess the potential of interactive online experiments, in which a group of participants interacts repeatedly. Interactive experiments raise novel challenges throughout the whole life cycle of an experiment. We discuss these challenges, that is, the methodological differences and similarities between interactive experiments in physical and online laboratories, step by step, from the recruitment of participants to their dismissal after the experiment.

A particularly important challenge of interactive online experiments relates to participant dropout. While participants rarely leave a session in the physical laboratory, online experiments are more prone to dropouts, which affect both the participant who drops out and their interaction partners (who still have to be paid for their participation). If dropouts happen for reasons exogenous to the experiment (e.g. network problems, frozen screens, or random distractions), they are just a (costly) nuisance to the experimentalist. Much more problematic are dropouts that happen endogenously, that is, people quitting because of what has happened in the experiment. Such dropouts could jeopardize the internal validity of experiments (Zhou and Fishbach 2016).

As a case study we replicate a repeated public goods game with and without peer punishment used in cross-cultural research (Herrmann et al. 2008), employing a sample of US participants recruited via MTurk.Footnote 4 We chose to replicate this experiment because it is fairly long and logistically complex. It is a within-subjects design with two experimental conditions of ten periods each, where, after the first set of ten periods, participants receive new instructions. Moreover, this experiment has often been replicated, and its design allows us to evaluate whether dropouts depend on the experimental conditions (that is, the presence or absence of punishment). We report data from participants recruited via MTurk (62 groups) and participants from the physical laboratory (18 groups). We used our own software LIONESS (Sect. 2.5), developed for conducting interactive online experiments.

We observe that basic patterns of behavior online are similar to those in the laboratory. In the absence of punishment, aggregate levels of cooperation are higher on MTurk than in the laboratory, but show similar rates of decay over time. Moreover, our econometric analysis reveals that in both samples individual contributions respond strongly to the contributions of fellow group members. The introduction of punishment promotes the emergence and maintenance of cooperation in both samples. Punishment is mainly prosocial in nature in both samples (cooperators punish non-cooperators) but occurs less frequently online.

Our most important result is that, in our implementation, dropouts are most likely due to reasons that are exogenous to the experiment. Together with the replication of findings from the laboratory, our results suggest that online interactive experiments can be a reliable tool for collecting internally-valid data and hence are a potentially valuable complement to the physical laboratory.

Our paper contributes to a recently-emerged literature on the reliability of data gathered on online labor markets such as MTurk (see references in footnotes 1–3). The most important predecessor of our paper is Anderhub et al. (2001), who compared online and laboratory experiments in the very early days of experimentation on the Internet. They also provide a methodological discussion, which, however, could not consider the specific properties of modern online labor markets, where the bulk of present-day online experimentation is happening.Footnote 5

The remainder of this paper is structured as follows. In Sect. 2, we introduce the experimental design. In Sect. 3, we discuss the conceptual and logistical differences between conducting interactive experiments in the laboratory and online and lay out our approach for dealing with them, highlighting important aspects of the data-collection process relating to attention and attrition. Section 4 shows the results of our experiment, systematically comparing cooperation and punishment behavior in our two samples. In Sect. 5 we present a detailed analysis of attrition in our online experiment. Finally, in Sect. 6 we make concluding remarks.

2 A case study to compare online and laboratory experiments

We base our discussion of online and laboratory experiments on a well-established paradigm: a public goods game with and without punishment (Fehr and Gächter 2000, 2002). In this section, we present the design of our experiments conducted in the laboratory and replicated online with a sample of participants recruited via MTurk. For the laboratory and the online samples, instructions and experimental screens were identical (screenshots are presented in the Online Appendix A).

2.1 General setup

Our experiment follows the within-subject design of Herrmann et al. (2008) and implements a repeated four-person public goods game with two conditions: one without punishment followed by one with punishment. Groups were constant throughout the experiment (‘partner matching’), and each condition ran for ten periods. Participants were aware that there were two ‘parts’ to the session (which corresponded to the conditions without and with punishment) but learned about the details of the second part only after the first one had finished.

At the beginning of a session, participants read on-screen instructions for the first experimental condition: the public goods game without punishment. Experimental instructions were shorter than those in Herrmann et al. (2008) (see Sect. 3 for rationale). Participants could start the interaction phase only once they had completed a set of comprehension questions.

2.2 Condition 1: a public goods game without peer punishment

In each period of the 10-period game, all four group members received an endowment of 20 points and simultaneously decided how many of those points to keep for themselves and how many to contribute to a ‘group project’ (i.e. the public good). After all members had made their decision, the sum of all contributions was multiplied by 1.6 and distributed equally among all group members, irrespective of their contributions. This setup reflects a social dilemma: in each period, overall earnings are highest when each group member contributes all 20 of their points to the public good, while individuals maximize their earnings by contributing 0 regardless of the contributions of the others. Once all contributions had been made, participants learned the result of that period. Apart from their own contribution and earnings, they were informed of the average contribution in their group. Subsequently, a separate screen showed the contributions of each of their fellow group members.
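In terms of the parameters above (and using notation that is ours rather than the original paper's), the period payoff of group member i who contributes c_i of the 20-point endowment can be written as

$$\pi_i = 20 - c_i + \frac{1.6}{4}\sum_{j=1}^{4} c_j = 20 - c_i + 0.4\sum_{j=1}^{4} c_j,$$

where the sum runs over all four group members, including i. The marginal per-capita return of 0.4 makes contributing nothing individually optimal, while the social return of 1.6 per point makes full contribution the group optimum.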

2.3 Condition 2: a public goods game with peer punishment

Once the 10 periods of Condition 1 were over, participants received new on-screen instructions for Condition 2. This condition also consisted of ten periods and was completed in the same groups as Condition 1. Again, the periods started only once all group members had completed the comprehension questions. The decision situation was identical to that of Condition 1, except for one change: once participants learned the contributions of each of their group members, they could assign up to 10 deduction points to each of their peers. Each assigned deduction point resulted in a loss of 1 point for the participant assigning it and a loss of 3 points for its target. At the end of each period, a separate screen informed participants of the total number of deduction points they had assigned and received. In cases where a participant made a loss during a period, only the costs of assigning deduction points counted towards their final earnings (cf. Herrmann et al. 2008). Each session concluded with a questionnaire including demographic items.
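One way to write the period payoff rule described above, again in our own notation and with p_{ij} denoting the deduction points that member i assigns to member j, is

$$\pi_i = \max\Big(0,\; 20 - c_i + 0.4\sum_{j=1}^{4} c_j - 3\sum_{j \neq i} p_{ji}\Big) - \sum_{j \neq i} p_{ij},$$

so that received punishment can at most wipe out a participant's first-stage earnings for the period, whereas the cost of the deduction points they assign is always borne in full (cf. Herrmann et al. 2008).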

2.4 Online and laboratory sample

In all sessions, participants received instructions and made their decisions via web browsers. The program was implemented in the experimental software LIONESS (Sect. 2.5). Both online and in the laboratory, sessions took 28 min on average. This is considerably shorter than the original study by Herrmann et al. (2008), but longer than typical tasks on MTurk.

For our online sample, we recruited participants via MTurk, restricting their geographical location to the USA (for comparability with our laboratory sample, see below). Results are based on 24 sessions, with 248 participants in total (62 groups of four). The average age of participants in this sample was 31.5 years (s.d. 9.06), and 38.4% were female. Average earnings in our online sample were $6.69 (s.d. $1.03), which were paid via MTurk.

The data from our laboratory sample were collected at universities in two different cities in the USA (Harvard University, Boston MA; and Yale University, New Haven CT) over 8 sessions, with 72 participants in total (18 groups). Laboratory participants were invited through e-mails using the online recruitment software SONA. The average age of participants in this sample was 25.2 years (s.d. 7.45) and 42.3% were female. Average earnings in our laboratory sample were $20.02 (s.d. $1.65), paid in cash upon session completion. To conform to the standards of the respective laboratories and to average expected wages on MTurk, we used an exchange rate of $0.02 per point in the laboratory and $0.01 per point in our online sessions; show-up fees were $10 and $1 in the laboratory and online, respectively.Footnote 6

2.5 The software used to conduct interactive experiments: LIONESS

We conducted both the laboratory and online experiments with LIONESS (Live Interactive Online Experimental Server Software). LIONESS provides a basic architecture for conducting interactive experiments online. Its key features reflect the solutions to the logistical challenges discussed in Sect. 3: dynamically grouping participants to minimize waiting times, regulating interactions within groups, promoting participants' attention to the experiment, and dealing with participants dropping out of an experiment. The software developed for the experimental conditions reported here can be downloaded at: http://lioness.nottingham.ac.uk.

3 Methodological differences in conducting interactive experiments in the laboratory and online

3.1 The online laboratory MTurk

While our discussion of online experiments is based on an MTurk sample, many issues also hold for other online platforms (cf. footnote 1). MTurk is a large online labor market, which offers an active pool of over 500,000 workers. The MTurk workforce completes over 40,000 Human Intelligence Tasks (HITs) every day (www.mturk-tracker.com; Difallah et al. 2015; Ipeirotis 2010). MTurk ‘workers’ browse HITs that are published by ‘requesters’ who provide a brief description of the task, its expected duration and the minimum payment workers will receive upon completion (see Online Appendix A for screenshots of the HIT as published on MTurk). HITs typically involve short individual assignments which computers are currently unable to perform (Berinsky et al. 2012), such as the processing of images or data cleaning. Due to the sheer size of the pool of workers ready to perform tasks for pay, MTurk enables researchers to conduct large-scale experiments and to implement an effective random assignment of participants to different conditions beyond the capacity of a typical physical laboratory.Footnote 7 While not primarily designed for academic research, MTurk has the potential for efficient data collection. As mentioned in the introduction, questionnaire studies and experiments without repetitions (e.g. one-shot Prisoner’s dilemma) conducted with MTurk participants have produced results comparable to those obtained from laboratory samples (e.g. Paolacci et al. 2010; Horton et al. 2011; Goodman et al. 2013).

Despite their promise, online behavioral experiments have conceptual and logistical challenges that are usually not present in the laboratory. Here we focus on differences between laboratory and online experimentation that are specific to interactive designs. See Buhrmester et al. (2011) and Paolacci and Chandler (2014) for extensive discussions of differences regarding non-interactive (survey-style) designs.

3.2 A step-by-step comparison of laboratory and online experiments

We now discuss the implementation of interactive experiments in the laboratory and online via MTurk, based on the design presented above. Our discussion follows the chronology of a typical experiment, from recruitment to the dismissal of participants. The following four subsections discuss the main phases of a typical experimental session (recruitment, session start-up, interactive decision making, and payment). Along the way, we highlight the extent to which our approach bridges the gaps between the two settings in the (relatively long and logistically challenging) experiment presented above. Table 1 provides a concise overview of the issues discussed in this section.

Table 1 Methodological differences in conducting interactive experiments in the laboratory and on MTurk

3.2.1 Recruitment

In a typical laboratory experiment, participants receive a show-up fee for attending. Still, the main part of the participant’s payment is usually determined by the decisions made over the course of the session. In a typical task on MTurk, participants are paid a flat reward per HIT, and the part of the earnings determined by their decisions can be added to their payments as a ‘bonus’. Consequently, a HIT that pays a relatively large flat fee usually draws more attention than one that promises a large bonus. This is particularly relevant for interactive experiments where participants need to wait for others to form a group at the start of a session (see below).

Sessions in the laboratory are pre-scheduled. A database contains the contact details of a pool of prospective participants, who can register (and cancel) within a set time window. Pre-scheduling ensures that the number of participants can be anticipated quite accurately before a session takes place, and inviting a small number of backup participants can prevent problems associated with unannounced non-attendance. Online platforms such as MTurk allow for instant recruitment of participants, facilitating a time-efficient method of data collection.Footnote 8

To take advantage of these opportunities, LIONESS accepted new participants during a time window specified by us, up to a cap on the number of entrants. In our experiment, we invited participants to sign up within 20 min of the HIT being posted and allotted them 45 min to complete the task. In addition, we asked them to start immediately. Recruitment rates were high (in a typical session with 100 slots, the first 50 participants normally entered within the first 5 min after the publication of the HIT), facilitating prompt group formation once participants had read and understood the instructions.Footnote 9

Invitations to laboratory experiments rarely reveal any information about the contents of the experiment. On MTurk, however, participants browse various tasks that are currently available to complete for pay. This requires a HIT description giving workers some idea of what the task will involve. To minimize self-selection into interactive experiments based on the topic of the experiment, an experimenter can omit detailed information about its content from the HIT description.Footnote 10 For example, we did not announce that our sessions involved public goods games. On the other hand, it is essential that workers know that the HIT will involve live interaction with other people, and that they are therefore expected to complete it without delay and without interruptions. The HIT description within MTurk is an appropriate place to make participants aware of this (see Online Appendix A for screenshots).

Typical laboratory subject pools are replenished annually when a new cohort of first-year students arrives on campus, and recruitment software allows inviting only those participants with no (or little) experience with the experimental paradigm of a study. By contrast, the pool of MTurk workers (MTurkers) is replenished more continuously, but MTurkers often quickly accumulate experience from participating in hundreds of academic studies of all kinds.Footnote 11 If researchers have reason to believe that (frequent) prior experience could be an issue for their experiment (for a discussion of “lab rats” see Guillen and Veszteg 2012), MTurk facilitates inviting participants based on various criteria (e.g. the number of HITs completed, their success rate, or their geographic location). In addition, post-experimental questionnaires can include self-reported measures of participants' familiarity with decision-making experiments and specific experimental paradigms.

In our case, we used MTurk’s options to restrict the geographical location of the participants to the United States for comparability with our laboratory sample. In addition, to increase the likelihood that participants completed our HIT with care, we only allowed workers with at least 90% of their previous HITs approved by requesters to participate (see Peer et al. 2014 for a detailed discussion of approval rates).
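For illustration only, the following sketch shows how a HIT with these restrictions could be posted using the current boto3 MTurk API; this is not the tooling we used, the experiment URL and parameter values are placeholders, and the system qualification type IDs should be verified against the MTurk documentation.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# MTurk system qualification type IDs (verify against the MTurk documentation).
LOCALE_QUAL = "00000000000000000071"         # Worker_Locale
APPROVAL_QUAL = "000000000000000000L0"       # Worker_PercentAssignmentsApproved

# ExternalQuestion pointing to the experimental server (placeholder URL).
EXTERNAL_QUESTION = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/experiment</ExternalURL>
  <FrameHeight>800</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Interactive group decision-making task (approx. 30 minutes)",
    Description="You will interact live with other workers. Please start "
                "immediately and complete the task without interruptions.",
    Keywords="experiment, decision making, interactive",
    Reward="1.00",                        # flat participation fee; earnings paid as a bonus
    MaxAssignments=100,                   # cap on the number of entrants
    LifetimeInSeconds=20 * 60,            # HIT open for sign-up for 20 minutes
    AssignmentDurationInSeconds=45 * 60,  # workers have 45 minutes to complete the task
    Question=EXTERNAL_QUESTION,
    QualificationRequirements=[
        {   # restrict to US-based workers
            "QualificationTypeId": LOCALE_QUAL,
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
        },
        {   # restrict to workers with at least 90% of prior HITs approved
            "QualificationTypeId": APPROVAL_QUAL,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [90],
        },
    ],
)
print(hit["HIT"]["HITId"])
```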

3.2.2 Session start-up

For many studies, it is essential that participants take part only once. In the laboratory this is relatively straightforward to enforce, particularly if the experimenter uses recruitment software such as ORSEE (Greiner 2015), SONA or hroot (Bock et al. 2014) and is physically present during sessions to confirm identities. In online sessions, however, re-takers may seriously compromise the data (e.g. by operating two browsers within the same experiment, potentially even controlling two players within the same group). Accordingly, detecting them requires specific measures. Within a session, we prevented duplicate participation by logging each user's IP address and blocking users that had already connected to the experimental server. Between sessions, we used third-party software to prevent workers who had already participated in a specific HIT from being invited to future sessions.Footnote 12
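As a minimal illustration of the within-session check described above (our own simplified sketch; LIONESS's actual implementation differs), the server can keep track of the IP addresses and worker IDs that have already connected and refuse entry to repeats:

```python
# Simplified within-session duplicate check (illustrative only).
seen_ips: set[str] = set()
seen_workers: set[str] = set()

def admit(ip_address: str, worker_id: str) -> bool:
    """Return True if this entrant may proceed to the instructions."""
    if ip_address in seen_ips or worker_id in seen_workers:
        return False  # likely a re-taker, or two browsers on one machine
    seen_ips.add(ip_address)
    seen_workers.add(worker_id)
    return True
```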

In a typical laboratory session participants can ask questions which the experimenter can answer in private, before the interactive phase of the experiment begins. For online sessions this is not feasible. To ensure that participants had a thorough understanding of the experimental decision situation and did not rush through the instructions, we introduced compulsory comprehension questions which participants had to solve before entering the decision-making phase of the experiment.Footnote 13 A fraction of participants who entered the experimental pages did not proceed past the instructions and never reached the comprehension questions. In our online sample, 83.2% of the individuals who did reach the comprehension questions solved them successfully.Footnote 14

3.2.3 Experimental interaction

In the laboratory, all participants typically arrive at a session at the same time and will simultaneously complete comprehension questions. By contrast, participants in online sessions may arrive during a time window set by the experimenter (20 min in our case), and the timing of completing comprehension questions may therefore vary substantially. Accordingly, we formed groups ‘on the fly’: participants who successfully completed the comprehension questions waited in a ‘lobby’. As soon as this lobby contained sufficient participants, a group was formed and its members were sent to the interaction phase of the experiment. An alternative (yet considerably less time-efficient) approach is to run pre-tests with participants and to build a ‘standing panel’ from which candidates for experimental sessions are recruited (see Suri and Watts 2011; Gallo and Yan 2015).
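A stylized sketch of this grouping-on-the-fly logic (our own simplification, not LIONESS code): participants who pass the comprehension questions join a queue, and a group is dispatched to the interaction phase as soon as four of them are waiting.

```python
from collections import deque

GROUP_SIZE = 4
lobby: deque[str] = deque()   # IDs of participants waiting to be grouped
groups: list[list[str]] = []  # groups already sent to the interaction phase

def enter_lobby(participant_id: str) -> None:
    """Called when a participant has passed the comprehension questions."""
    lobby.append(participant_id)
    if len(lobby) >= GROUP_SIZE:
        group = [lobby.popleft() for _ in range(GROUP_SIZE)]
        groups.append(group)
        # ...notify these four participants that period 1 can start...
```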

Although deception is uncommon in experimental economics, participants on MTurk are likely to encounter studies using deception (e.g. Pfattheicher and Schindler 2015). Participants may therefore be skeptical about the truthfulness of experimental instructions and doubt if their interaction partners are real people and not robots pre-programmed by the experimenter publishing the HIT. To promote trust between us (as experimenters) and the participants, we continuously strive to maintain a good reputation on our MTurk requester account (our records and those of other requesters can be found at https://turkopticon.ucsd.edu). In addition, our HIT description stated explicitly that groups were formed of real people recruited from MTurk.

To keep participants' attention focused on the experiment (and to prevent them from dropping out in the very first period of the game), we clearly communicated the number of other participants they were waiting for at any given moment, and we added an on-screen countdown indicating the maximum time left before participants could choose to leave the experiment if no group could be formed. When this timer reached zero, participants could choose either to return to the lobby and wait for two additional minutes or to leave the session and collect their participation fee (of $1). With this procedure, 89% of the participants who correctly completed the comprehension questions were successfully matched into a group and started the interaction phase.Footnote 15 The remaining 11% could not be matched into a group of four and were paid their participation fee.

While in the laboratory the experimenter can monitor and enforce any restriction on communication between participants, it is in principle harder to categorically exclude the possibility that online participants communicate with their interaction partners through external channels. To prevent collusion via online forums such as Reddit or MTurk Crowd, we ran relatively large sessions in which participants could not identify one another. In fact, forum discussions, which are usually moderated and prohibit the dissemination of the content of HITs and the discussion of strategies, typically center on the attractiveness of a HIT in terms of earnings and length rather than its content (Chandler et al. 2014). Thus, communication between participants is a potential problem for online experiments, but it is not any more severe for interactive designs. A similar argument can be made for communication with other people who are not participating in the task at hand (e.g. someone else in the room while the participant completes the task). Although such communication is harder to control in online experiments than in the physical laboratory, this issue is not specific to interactive designs either.

The most severe problem for online interactive studies, and the largest discrepancy with laboratory experiments, is attrition (participant dropout). In laboratory sessions, participants very rarely leave or turn out to be unable to complete a session. In online experiments, attrition is a major issue: there is no straightforward way to prevent participants from leaving a session by closing their browser window, or from failing to submit responses to experimental decision situations due to technical problems. Moreover, in contrast to the laboratory, interaction partners are geographically scattered and the progress of an experiment depends on their joint attention to it. Typically, groups proceed at the pace of the slowest participant, and long waiting times increase the risk of reduced attention, which may ‘cascade’ through the group. We therefore took measures to retain attention and promote successful completion.Footnote 16 In our case, we used on-screen timers and told participants that failure to reach a decision in due time would result in their removal from the experiment without payment. Furthermore, when a group member dropped out, we notified the remaining participants, who then continued in their reduced group.Footnote 17 Data from incomplete groups are not included in the results reported in Sect. 4. An alternative approach to dealing with dropouts is to terminate the whole group once a member drops out. This may, however, damage the reputation of the experimenter, as the remaining participants will be unable to earn as much as they had anticipated.
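The dropout handling just described can be sketched roughly as follows (a simplification under our own naming; the actual server logic in LIONESS is event-driven): each decision screen has a deadline, participants who miss it are removed without payment, and the remaining members are notified and continue in a smaller group that is flagged so its data can be excluded from the main analysis.

```python
import time

DECISION_SECONDS = 60  # illustrative length of the on-screen decision timer

def collect_contributions(group: dict) -> dict[str, int]:
    """Wait for members' decisions until the deadline, then drop non-responders."""
    deadline = time.time() + DECISION_SECONDS
    decisions: dict[str, int] = {}
    while time.time() < deadline and len(decisions) < len(group["members"]):
        decisions.update(group["submitted"])  # decisions posted by the clients so far
        time.sleep(1)
    dropouts = [m for m in group["members"] if m not in decisions]
    for member in dropouts:
        group["members"].remove(member)       # removed without payment
        group["affected_by_dropout"] = True   # exclude this group from the Sect. 4 analysis
    # remaining members are notified and continue in the reduced group
    return decisions
```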

Our procedure of letting smaller groups continue ensures that all the information participants respond to is generated by real people. Alternative solutions for non-responding participants, such as introducing random decisions or repeating previous decisions (e.g. Suri and Watts 2011; Wang et al. 2012), may affect the behavior of those who are still in the experiment (now responding to partially computer-generated information), which potentially compromises the internal validity of the data from groups affected by a dropout. It also raises issues of deception if such computer-generated information is not disclosed. Moreover, this procedure may compromise the validity of data from groups unaffected by attrition, as participants cannot know whether the behavior of their interaction partners shown to them is real or computer-generated.

Due to the nature of conducting research via the Internet, some level of attrition seems unavoidable. Attrition rates are likely to vary with factors such as group size, complexity of the decision situation, and the pace of the experiment.Footnote 18 Despite our measures to prevent attrition, 84 participants (18%) who started the interaction phase dropped out at some point in our experiment.Footnote 19 As these participants were distributed across experimental groups, the fraction of the data set affected by these dropouts was considerably larger.

Figure 1 tracks the distribution of group sizes over time. All groups initially consisted of four members, but a group's size could decrease over the course of the experiment as participants dropped out. The figure shows that our experiment suffered from quite substantial attrition: only 53% of the groups finished with all four members. Loss of group members was particularly likely around the ‘waiting room’ stages preceding periods 1 and 11. Specifically, dropouts in period 1 presumably reflect participants losing attention while waiting for their group to form. Similarly, before period 11 started, participants had to wait until each of their group members had completed the comprehension questions, which could take a considerable amount of time. In some cases, this led to the termination of the whole group.

Fig. 1 Attrition throughout the course of the experiment. Colors depict the group size. We always started with groups of four but let participants continue if a member dropped out. (Color figure online)

3.2.4 Payment and costs of experiments

After a session is over, participants are typically paid according to their performance. MTurk, like other crowdsourcing platforms (see footnote 1), facilitates secure payments. The experimental software can generate a random code for each participant which can be matched with their MTurk ID, allowing for payments according to performance in the experiment. It is important to process payments immediately to maintain a good reputation as an MTurk requester.
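For illustration, a hedged sketch of paying the performance-based part of earnings as a bonus with the current boto3 MTurk API, after the completion code shown to the participant has been matched to their MTurk ID (the function and variable names are ours, not the procedure implemented in LIONESS):

```python
import secrets

import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

def make_completion_code() -> str:
    """Random code shown on the final screen and entered into the HIT by the worker."""
    return secrets.token_hex(4)

def pay_participant(assignment_id: str, worker_id: str, bonus_usd: float) -> None:
    """Approve the assignment (flat participation fee) and pay experiment earnings as a bonus."""
    mturk.approve_assignment(AssignmentId=assignment_id)
    if bonus_usd > 0:
        mturk.send_bonus(
            WorkerId=worker_id,
            AssignmentId=assignment_id,
            BonusAmount=f"{bonus_usd:.2f}",
            Reason="Earnings from the decision-making task",
        )
```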

All in all, typical costs per useable data point in an experiment with participants recruited via MTurk are likely to be lower than in a laboratory experiment. Nevertheless, these costs may vary with attrition rates, which can be affected by the specific features of the experiment such as its length and group size (as one dropout may compromise the data of the whole group). In our case, laboratory participants earned $20.02 on average. Therefore, with a group of four as the unit of observation, a useable data point cost us $80.08. Corresponding costs online were $47.32.Footnote 20

4 Results

4.1 Contribution behavior

Figure 2 shows the aggregate contribution dynamics for both the online and laboratory samples. Only data from groups with no dropouts are reported. In the condition without punishment, overall contributions were higher in our online sample than in the laboratory (12.52 vs 8.30, p = 0.003).Footnote 21 This result is consistent with recent literature reporting that college students are less cooperative than non-students and other adults (Belot et al. 2010; Carpenter and Seki 2011; Gächter et al. 2004; Stoop et al. 2012). The difference in contributions emerges in the very first period of the game, with online participants contributing substantially more to the public good (15.00 vs 11.04, p = 0.001). The higher contributions by MTurkers can be only partially explained by the higher average age in the MTurk sample (OLS fitted to contribution decisions in the first period of the first condition: age β = 0.132, p = 0.031; MTurk dummy: β = 0.182, p = 0.004).Footnote 22
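As an illustration of how such a test could be run (a sketch only; the column names are hypothetical and do not match our data files), first-period contributions can be regressed on age and an MTurk indicator with standard errors clustered on groups:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, first period of Condition 1 only,
# with columns 'contribution', 'age', 'mturk' (0/1 dummy) and 'group_id'.
df = pd.read_csv("first_period_contributions.csv")

ols = smf.ols("contribution ~ age + mturk", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["group_id"]}
)
print(ols.summary())
```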

Fig. 2 Contributions over time. Numbers in parentheses are the mean contributions in each experimental condition. Error bars indicate 95% confidence intervals (clustered at the group level)

The introduction of punishment opportunities strongly increases average contributions in both samples (average contributions in periods 10 vs 11: laboratory: 4.63 vs 12.94, p = 0.001; online: 7.85 vs 16.15, p = 0.001). Moreover, average cooperation levels over the course of the game are higher than in the absence of punishment (average group contributions in periods 11–20 vs 1–10: laboratory: 16.26 vs 8.30, p = 0.001; online: 17.15 vs 12.52, p = 0.001). As in the condition without punishment, overall contributions within groups were slightly yet significantly higher in our online sample than in the laboratory (16.26 vs 17.15, p = 0.008).Footnote 23

The regression models in Table 2 confirm that MTurk participants initially contribute more to the public good than their laboratory counterparts (Wald test on ‘Constant’: p < 0.001; Table 2, columns 1 and 2). Over the course of the game cooperation decays at comparable rates (Wald test on ‘Period’: p = 0.624; columns 1 and 2). In the punishment condition, the constant differs between the two samples (Wald test on ‘Constant’: p = 0.084; Table 2, columns 4 and 5), but the effect of period is equivalent (Wald test on ‘Period’: p = 0.407; Table 2, columns 4 and 5).Footnote 24

Table 2 Cooperation dynamics

Individual responses to the contributions of fellow group members were significant and similar across the online and laboratory samples (Wald test on ‘Mean peer contribution in t-1’: p = 0.505; Table 3, columns 1 and 2), which suggests that individual decision making online was not more ‘random’ than in the laboratory.Footnote 25

Table 3 Cooperation dynamics (no punishment)

4.2 Punishment behavior

Participants in the online sample punished less often than their laboratory counterparts (overall punishment frequency: 0.072 vs 0.167, p = 0.001). Moreover, Fig. 3 shows that in both samples the frequency of punishment tended to decrease over the course of the game, albeit less markedly in our online sample. Accordingly, mean efficiency in the experimental condition with punishment was higher on MTurk than in the laboratory (averages 26.91 vs 22.64 points per individual per period, p = 0.002). In cases where participants decided to punish, they did so equally severely in both samples. The average number of points assigned when punishing did not differ significantly between MTurk and the laboratory (4.15 vs 3.88; p = 0.545).

Fig. 3 Frequencies of punishment over time. Frequencies are calculated by counting instances of assigning non-zero deduction points out of the total number of punishment opportunities per participant, per recipient, per period. Mean punishment frequencies are shown in parentheses. Error bars indicate 95% confidence intervals clustered on groups
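For concreteness, the frequency measure defined in the caption could be computed from a long-format data set with one row per punisher, per target, per period (the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical long format: one row per punisher, per target, per period, with
# 'deduction_points' holding the points assigned (0-10) and 'sample' indicating
# 'laboratory' or 'mturk'.
pun = pd.read_csv("punishment_decisions.csv")

pun["punished"] = pun["deduction_points"] > 0
print(pun.groupby("sample")["punished"].mean())  # overall punishment frequencies
```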

Figure 4 reveals that, in both samples, punishment was predominantly pro-social in nature: most instances of punishment involved cooperators (who contributed relatively more points to the public good) punishing defectors (who contributed relatively fewer points). Both in the laboratory and online, the frequency and severity of punishment increased with the difference in contributions between the punisher and their target (Fig. 4, compare the bottom four stacked bars; see Table 8 of the Appendix for regression analyses). Interestingly, we observe some instances of anti-social punishment in both samples (Fig. 4, top four stacked bars).

Fig. 4 Directionality and severity of punishment in our laboratory and online samples. Stacked bars show frequency distributions of punishment decisions. Each bar shows the distribution for a given difference between the punisher's and their target's contributions to the public good

Table 4 presents an econometric analysis of punishment behavior. It confirms the observations from Figs. 3 and 4. The overall patterns of punishment are similar in the laboratory and online. This analysis further reveals that online participants tended to punish less frequently and less severely, even after controlling for relative contributions and previously received punishment (Table 4, Wald test on constants; columns 1 and 2: p = 0.014; columns 3 and 4: p = 0.064).Footnote 26

Table 4 Determinants of punishment

In sum, our results show that basic patterns of cooperation and punishment behavior in the laboratory are largely replicable online, and thus are robust to changes in the experimental method. Participants in our online sample initially contribute more but, in the absence of punishment opportunities, cooperation decays at similar rates. In both samples, peer punishment is mainly pro-social in nature, and its introduction increases and stabilizes cooperation.

5 Attrition: endogenous or exogenous?

Our observation that the experimental results are quite robust across conditions already suggests that endogenous attrition, that is, attrition due to what has happened in the experiment (and hence selection), is not a major issue in our data. Here, we investigate this issue more rigorously.

Crucially, we find no evidence that attrition was selective in our experiment: dropout rates did not vary with the experimental condition (absence or presence of punishment). Table 5 details the results of a proportional hazards model (Jenkins 1995) fitted to instances where participants dropped out. The first model confirms the visual impression from Fig. 1 that dropout rates are relatively high in the first period of each experimental condition (columns 1–3). Most likely, this effect is due to the fact that participants have to wait for their group to form (before the start of the first condition), or for all members of their group to finish reading the instructions and completing the comprehension questions (before the start of the second condition). Over the course of each of the conditions, the attrition rates slightly decrease (‘period’ has a negative estimate), suggesting that over time participants become more loyal to the task.
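A hedged sketch of such a discrete-time proportional hazards estimation (a complementary log-log model on participant-period data in the spirit of Jenkins 1995; the column names are hypothetical and the specification is simplified relative to Table 5):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical participant-period data: 'dropped' equals 1 in the period in which
# a participant drops out and 0 otherwise, with no rows after the dropout period.
pp = pd.read_csv("participant_periods.csv")

hazard = smf.glm(
    "dropped ~ first_period_of_condition + period + punishment_condition",
    data=pp,
    # The cloglog link gives the discrete-time proportional hazards model
    # (on older statsmodels versions use sm.families.links.cloglog()).
    family=sm.families.Binomial(link=sm.families.links.CLogLog()),
).fit(cov_type="cluster", cov_kwds={"groups": pp["group_id"]})
print(hazard.summary())
```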

Table 5 Determinants of attrition

Models 3–5 show that attrition is much more likely when a group member has dropped out in the previous period. This seems indicative of ‘cascading inattention’: when a participant drops out of the session (e.g. due to inactivity, a closed connection, or waning attention), their group members will have to wait for some time before they can proceed.Footnote 27 Reduced attention may lead to additional attrition.

The results in columns 4 and 5 of Table 5 indicate that attrition did not depend on cooperation and punishment behavior in either of the experimental conditions. Specifically, dropouts did not depend on the behavior of the dropped-out participants relative to their fellow group members, or on their earnings. The model in column 4 (fitted to the data from the condition without punishment) shows that dropouts did not depend on relative average contributions. In addition, the model in column 5 (fitted to the data from the condition with punishment) reveals that individuals who dropped out had neither received more punishment than their group members who did not drop out, nor differed from them in the punishment they assigned to others.

6 Discussion

In light of the results presented here, one might feel tempted to embrace interactive online experimentation as a valuable complement to laboratory studies, and some might even see it as a cost-efficient substitute. The measures presented here address the most important methodological issues in conducting interactive experiments online, and our case study illustrates that established results from the laboratory can be replicated online. However, future research needs to establish how well our results generalize to other research questions, as methodological differences between laboratory and online experimentation inevitably remain despite these measures.

For instance, depending on the nature of the experiment, online participants can conceivably communicate with each other to share their knowledge, strategies and even experimental materials more quickly than their laboratory counterparts. As mentioned earlier, most of the forums that monitor the online community have mechanisms in place to prohibit the dissemination of materials, and participants themselves might find this practice prohibitively costly. Yet one cannot completely rule out this possibility, as both laboratory and online participants can simply discuss an experiment through other channels. To some extent, the nature of interactive designs prevents participants from crafting intricate strategies beforehand, but this might not hold for experimental designs in which participants can figure out “correct” answers; such designs might be at risk of being ineffective (Haigh 2016) or of exhibiting reduced effect sizes (Chandler et al. 2015).

Comparisons between online and laboratory experiments can also be affected by differences in selection bias. Participants in online and laboratory experiments may self-select based on their opportunity cost of working time and their reservation wage. Indeed, opportunity costs and reservation wages might well differ between sessions conducted in the laboratory and online: laboratory participants might decide whether to participate in an experiment by weighing the show-up fee against the travel costs they would incur (e.g. walking a long distance, braving bad weather, or even getting dressed), whereas for online participants such costs are typically negligible. Interestingly though, results from Anderson et al. (2013) show that a comparable type of selection is unlikely to bias inference about the prevalence of other-regarding preferences. Thus, we have reason to believe that our design is not particularly affected by the relatively low opportunity costs of online participation, although other designs might be.

In this study, we systematically addressed what we think are the most daunting logistical issues in running an interactive experiment online. However, one could argue that some important methodological differences between laboratory and online experiments remain, and that such discrepancies may affect findings and treatment comparisons, regardless of whether the experimental design is interactive or not. For instance, we replicate classic patterns of behavior in an environment with less control, but we also find an important disparity in initial contributions. As we pointed out earlier, though, this divergence is consistent with that found in related studies comparing different adult populations with college students using various recruitment methods (Belot et al. 2010; Carpenter and Seki 2011; Gächter et al. 2004; Stoop et al. 2012; Gächter and Herrmann 2011). Yet we acknowledge that unobserved methodological differences might account for some of the variability observed in our results. Assessing the extent and severity of such differences in other designs is certainly a topic for future research.

The similarities between our online and laboratory results suggest that interactive designs conducted over the internet can be robust to changes in the experimental method. Yet, our results are based on a comparison between sessions that differed in terms of both the experimental method (online versus in the physical laboratory) and the subject pool (MTurk workers versus university participants). Further assessment of online experimentation could include a systematic study of the isolated effects of the method for collecting data (online or laboratory) on the one hand, and the subject pool (‘workers’ from an online labor market or university students) on the other hand, e.g. by running online experiments with university students and inviting MTurkers into the physical laboratory.

To summarize our discussion, we see our paper as a guide for researchers to think through the relevant issues before deciding whether the online or the physical laboratory is most appropriate for their research question. Some might conclude that the loss of control is too big a problem for their designs, whereas others may be willing to bear that loss. In the end, the extent of any loss of control is an empirical question, and we encourage researchers to add to our first piece of experimental evidence.

7 Summary and conclusion

In this paper, we presented a detailed conceptual and methodological discussion of conducting interactive experiments in the physical laboratory and online. We illustrated similarities and differences using a repeated public goods experiment without and with punishment. Our comparative results suggest that online data quality is adequate and reliable, making online interactive experimentation a potentially valuable complement to laboratory studies.

Most importantly, attrition, though a significant nuisance in online experiments, did not compromise the internal validity of our data because attrition was unrelated to what happened in our experiment. Future research will need to establish how generalizable this result is to other interactive decision problems, in particular when attrition might be treatment-specific, which poses the biggest problem to internal validity (Zhou and Fishbach 2016). Future research should also investigate how individual characteristics of participants (e.g. social preferences) and aspects of the experimental design (e.g. group size, number of periods, complexity of the task and its instructions) affect dropouts.

We observed that cooperation levels in our online sample are substantially higher than in the laboratory, and are on the high end of the range of cooperation levels observed in the cross-cultural samples of Herrmann et al. (2008). These differences can be partly (but not completely) explained by the age of MTurkers relative to students in typical laboratory samples. Still, it is unclear whether some other differences in terms of the participants’ demographics, the perceived degree of anonymity, or the degree of familiarity with the experimental paradigm influence our results. We believe that future research should explore such avenues.