As inhabitants of a highly mechanized environment, we live in a world populated by both fellow humans and animated characters. As a result, interactions with virtual beings have become a prominent reality in various spheres of life—most noticeably during role-playing videogames. Because role playing provides a basis for individuals to acquire and practice new skills and viewpoints, in the present research we sought to explore the impact of role-playing video gaming on social behavior. Unlike the wealth of studies that have investigated the impacts of video gaming on perception, attention, and action (e.g., Anderson & Bushman, 2001; Gentile et al., 2009; Green & Bavelier, 2003; Jackson, Von Eye, Witt, Zhao, & Fitzgerald, 2011), here we sought to investigate whether adopting the mindset of an avatar during role-playing video gaming opens individuals to the “opinion” of a computer, and whether they subsequently affiliate more systematically with computers and show social conformity more readily in the presence of computer judgments, even when these judgments are wrong.

Social conformity

Individuals are part of a social context, and their judgments and decisions are informed by other members of this context. Sometimes the influence of others becomes dominant enough to bias an individual’s behavior away from their subjective perception or appraisal of an event. This phenomenon, known as social conformity, occurs when an individual exhibits a behavior because he witnesses it in others rather than because it is “his own” (Claidière & Whiten, 2012). The classic studies by Asch (1955, 1956) have set the stage for more than half a century of research on when and why such conformity occurs. In a typical social-conformity scenario, an individual witnesses a number of unanimous judgments from confederates that are often right but occasionally incorrect. When the participant then has to add a judgment of his own (e.g., regarding the length of a line), this judgment is typically biased in the direction of the majority vote, even if this majority vote is strikingly incorrect. Conformity occurs because people aim to be accurate and gain social approval from others—goals that are ultimately rooted in the need to maintain or protect one’s own self-esteem or self-concept (Arndt, Schimel, Greenberg, & Pyszczynski, 2002; Cialdini & Goldstein, 2004).

A crucial question in the present context is whether conformity can be triggered by nonhuman agents, in particular by computers. Cialdini and Goldstein (2004) defined conformity as the “act of changing one’s behaviour to match the responses of others” (p. 606). These “others” are typically understood to be human counterparts—but this conclusion is not compulsory. From a number of different paradigms, we know that social influence is exercised not only through the physical presence of other people, but also through their indirect or imagined presence; at other times, the vote or judgment of others is even delivered via written or digital messages. Likewise, a computer output is ultimately a human-made judgment—dressed in standardized and predesigned algorithms that lack the immediacy of an individual human mind. It is thus conceivable that people might also experience conformity from computers.

The first aim of the present research, then, was to investigate whether individuals can indeed be prompted by computers to exhibit conformity in social decision-making situations.

We also sought to explore whether immersive video gaming increases identification with virtual characters, and in turn leads individuals to experience greater pressure for conformity. By increased identification with virtual characters, we mean an increased sense of seeing the world through their eyes and an increased readiness to understand those characters as representing oneself. Increased immersion into a virtual character and subsequent identification with this character can be expected to increasingly blur the human–machine boundary. Moreover, the repeated moving back and forth between the virtual and the real worlds likely attenuates our perception of a categorical difference between the human and the virtual world, and in turn increases the perceived kinship of a participant with a virtual character.

Pressure toward social conformity is increased when it originates from sources that we see as relevant. Sources that we immerse ourselves in and identify with can be seen as closer to us, and in that sense as more relevant to ourselves; we thus expect that immersive videogame players will face greater pressure for conformity from computers than will nonimmersive gamers. In a recent study, we have already found evidence of this blurring as a consequence of immersive video gaming—although from the opposite end: Participants were more likely to adopt a robotic mindset after playing an immersive videogame, as evidenced by their lower level of human (emotional) experience (Weger & Loughnan, 2014).

Here we sought to explore whether immersion into virtual beings prompts participants to feel closer to the machine/computer world, and therefore to exhibit even greater conformity to computer judgments than do control participants who have not played immersive games.

Finally, the level of conformity may vary as a function of how obvious the correct answer is. We hence introduced different levels of ambiguity in order to examine when participants are most likely to conform. We expected that conformity would be highest under conditions of relatively high ambiguity—that is, when the correct answer is least obvious—and accordingly lower as ambiguity decreased.

Experiment 1a

Participants

A total of 63 student participants took part in our study. They were recruited via the departmental research participation scheme and were compensated for their participation with course-related credit. Of those, 29 participated in the gaming condition and 34 in the Internet condition.

Material and procedure

Participants completed two ostensibly unrelated tasks—a computer task (manipulation phase) and a job selection task (experimental phase).

Manipulation phase

During the manipulation phase, participants were randomly assigned to either a computer game condition or the control condition. Participants in the computer game condition played an immersive game for 7 min, in which they acted through the eyes of a virtual character (an avatar), travelling through a landscape and manipulating the environment at their discretion. As a control condition, we allowed participants to use the Internet for the same duration to do whatever activity they liked, as long as they were not playing games. The specific activity of the participants was not recorded, but incidental checks indicated that most of the control participants used Facebook or checked their e-mails. We chose this control task in order to make sure that in both conditions people (a) worked on a computer, (b) navigated in a virtual space, and (c) were occupied with a cognitively engaging task.

Experimental phase

Following the completion of the computer task, participants were instructed to commence the job-candidate task. They were told that we were interested in validating two computer-based algorithms that were allegedly under development and that would allow us to more easily find and assess job candidates.

Each of the 30 trials in the experimental phase consisted of the following stages: First, two brief, standardized descriptions of two job candidates (A and B) were presented auditorily. The candidates were described as varying on two dimensions—commitment and experience—and it was highlighted that for the purpose of the selection process, the two dimensions were of identical importance. The scores on these dimensions were reported as lying between 0 (very low) and 10 (very high); hence, the ideal job candidate would have a total score of 20. In principle, participants simply had to add up the scores on both dimensions for each of the candidates. Whichever sum was greater was the more qualified candidate. After participants had been introduced to candidates A and B, the computer programs and the participant made choices as to which of the two job candidates should be hired. The programs always chose first (by displaying the response on the screen), always agreed on the same candidate, and their responses were predetermined in the following way. On 17 trials, the computer algorithms were presented as choosing the candidate who was objectively better suited for the job—that is, the candidate with the higher total score. On the remaining 13 trials, they chose the objectively incorrect candidate (the one with the lower total score). On five of these trials, they chose an incorrect candidate who was only 1 point lower, on average, than the better-suited candidate (the high-ambiguity condition). On four further trials, they chose a candidate who was 2 points lower than the better-suited candidate (the moderate-ambiguity condition). On the remaining four trials, the computer algorithms chose a candidate who was 3 points lower than the better-suited candidate (the low-ambiguity condition). Thus, on 13 trials the computer chose the incorrect candidate, with varying degrees of ambiguity (on the 17 correct trials, the respective numbers of trials were five, six, and six for the high-, moderate-, and low-ambiguity conditions). Numbers were not identical across conditions due to an experimental oversight.
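To make the trial arithmetic concrete, the sketch below encodes the comparison rule just described: each candidate’s total is the sum of the commitment and experience scores (0–10 each), the candidate with the higher total is the objectively correct choice, and the gap between the totals (1, 2, or 3 points) defines the high-, moderate-, and low-ambiguity trials, respectively. The score values and function names are illustrative only and are not taken from the actual stimulus set.

```python
# Illustrative sketch of the candidate-comparison logic described above.
# Score values and names are hypothetical; the experiment used its own stimulus set.

def classify_trial(candidate_a, candidate_b):
    """candidate_a/b are (commitment, experience) tuples on 0-10 scales."""
    total_a = sum(candidate_a)          # commitment + experience
    total_b = sum(candidate_b)
    correct = "A" if total_a > total_b else "B"
    gap = abs(total_a - total_b)        # 1, 2, or 3 points on the critical trials
    ambiguity = {1: "high", 2: "moderate", 3: "low"}.get(gap, "other")
    return correct, ambiguity

# One hypothetical trial: A = 7 + 6 = 13, B = 8 + 6 = 14 -> B is correct, gap = 1 (high ambiguity)
print(classify_trial((7, 6), (8, 6)))   # ('B', 'high')
```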

The lower-ranked candidate could be either candidate A or candidate B, and all trials were presented in a random order. Different job types were first presented (e.g., waiter, doctor, or bus driver), and the candidates were then judged by the computers. A number of efforts were made to make the cover story, according to which the computers were spontaneously choosing one candidate over the other, more credible: Whilst the auditory track described the candidates, the programs showed matched audio bars. Following the completion of the track, the two programs showed “analysis progress” bars that filled at different rates before revealing the two decisions simultaneously.

After the two computer programs had indicated their judgments, the participant was given an opportunity to enter his or her own assessment via the keyboard. The accuracy of these responses was the main variable of interest.

Results

In total, participants chose the incorrect candidate on 25 % of the trials. This differed as a function of the computer vote, with participants being more likely to choose the incorrect candidate when the computers had previously chosen the incorrect (370 out of 945 trials, 39 %), as opposed to the correct (96 out of 945 trials, 10 %), candidate, χ²(1) = 213.83, p < .001.
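As a check on the reported test statistic, the counts above can be arranged as a 2 × 2 contingency table (computer vote: incorrect vs. correct; participant choice: incorrect vs. correct) and submitted to a standard chi-square test. The following sketch does so in Python with SciPy, using the cell counts implied by the reported figures; it is offered as an illustration of the computation, not as the authors’ analysis script.

```python
# Chi-square test on the 2 x 2 table implied by the counts reported for Experiment 1a.
# Rows: computer vote (incorrect, correct); columns: participant choice (incorrect, correct).
from scipy.stats import chi2_contingency

table = [[370, 945 - 370],   # computers wrong: 370 incorrect participant choices out of 945 trials
         [96, 945 - 96]]     # computers right:  96 incorrect participant choices out of 945 trials

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), dof, p)   # ~213.8 with 1 degree of freedom, p < .001
```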

To measure the degree of conformity, we subsequently examined only trials in which the computer erroneously selected the less-qualified job candidate. If participants chose the same (i.e., the inaccurate) candidate as the computers, they had conformed and received a score of 1. If they selected the other—more-qualified—candidate, they had resisted conforming and were assigned a score of 0. Since the numbers of trials for high, moderate, and low ambiguity were five, four, and four, respectively, we converted these to proportion scores, such that complete conformers received a score of 1, and complete resisters a score of 0.
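A minimal sketch of this scoring rule, using hypothetical per-trial flags, is given below: conformity flags (1 = followed the incorrect computer vote, 0 = resisted) are averaged within each ambiguity level, so that a complete conformer scores 1 and a complete resister scores 0 at that level.

```python
# Hypothetical per-participant conformity flags on the critical (incorrect-computer) trials,
# grouped by ambiguity level; 1 = conformed to the wrong vote, 0 = resisted.
flags = {
    "high":     [1, 1, 0, 1, 0],  # 5 trials
    "moderate": [0, 1, 0, 0],     # 4 trials
    "low":      [0, 0, 0, 0],     # 4 trials
}

# Proportion scores: 1 = complete conformer, 0 = complete resister.
proportions = {level: sum(trials) / len(trials) for level, trials in flags.items()}
print(proportions)   # {'high': 0.6, 'moderate': 0.25, 'low': 0.0}
```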

A 3 (ambiguity: high, moderate, low) × 2 (condition: video gaming vs. control) mixed-model analysis of variance (ANOVA) yielded a significant main effect of ambiguity, F(2, 122) = 22.42, p < .001, ηp² = .269. Participants conformed less in the low-ambiguity trials (M = .19) than in the moderate-ambiguity (M = .29) and high-ambiguity (M = .38) trials. Importantly, we found a main effect of condition: Participants conformed more in the game condition (M = .34) than in the control condition (M = .24), F(1, 61) = 5.217, p = .026, ηp² = .079. The interaction between the two factors was also significant, F(2, 122) = 3.96, p = .022, ηp² = .061, since the effect became stronger with increasing ambiguity [for high-ambiguity trials, t(61) = 3.22, p = .002, d = 0.81; for moderate-ambiguity trials, t(61) = 2.07, p = .042, d = 0.52; for low-ambiguity trials, t(61) = 0.030, p = .976, d = 0.007]. The results are shown in Fig. 1.
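For readers who wish to reproduce this type of analysis, the sketch below runs a 3 × 2 mixed-design ANOVA and follow-up independent-samples t tests in Python, using the pingouin and SciPy packages on synthetic data whose group means merely echo those reported above. It is a minimal illustration under those assumptions, not the analysis code used in the study.

```python
# Sketch of the 3 (ambiguity, within) x 2 (condition, between) mixed-design ANOVA,
# run here on synthetic data; the values and this pingouin-based workflow are illustrative only.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_ind

# Build a hypothetical long-format data set (one proportion score per participant x ambiguity level).
rng = np.random.default_rng(1)
rows = []
for pid in range(40):
    condition = "game" if pid < 20 else "control"
    for ambiguity, base in [("high", 0.38), ("moderate", 0.29), ("low", 0.19)]:
        bump = 0.10 if condition == "game" else 0.0
        score = float(np.clip(base + bump + rng.normal(0, 0.15), 0, 1))
        rows.append({"participant": pid, "condition": condition,
                     "ambiguity": ambiguity, "conformity": score})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: ambiguity is the repeated (within-subjects) factor, condition is between-subjects.
aov = pg.mixed_anova(data=df, dv="conformity", within="ambiguity",
                     subject="participant", between="condition")
print(aov)

# Follow-up: game vs. control at each ambiguity level (independent-samples t tests).
for level in ["high", "moderate", "low"]:
    sub = df[df["ambiguity"] == level]
    t, p = ttest_ind(sub.loc[sub["condition"] == "game", "conformity"],
                     sub.loc[sub["condition"] == "control", "conformity"])
    print(level, round(t, 2), round(p, 3))
```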

Fig. 1 Rates of conformity as a function of experimental condition and ambiguity condition in Experiments 1a and 1b. Error bars are also depicted

Experiment 1b

In Experiment 1a, participants in the control group used the Internet; we had chosen this control activity because it was similar to that of the gaming group on several dimensions. Yet it can be argued that this was not an ideal control, because several aspects other than immersion varied simultaneously. The following experiment had two aims: first, to use a more suitable control condition; and second, to measure the degree of immersion directly, so as to be able to investigate whether the observed effects could be attributed to the level of immersion.

Method

Participants

A total of 26 participants (13 male, 13 female) took part in this experiment. They were undergraduate and postgraduate students, as well as teaching staff, from the University of Kent, and were compensated for their participation with course-related credit and/or by means of a lottery scheme that offered $50 for two people who were drawn by chance. Participants arrived in pairs; one was randomly assigned to the gaming condition, and one to the control condition. There were seven mixed-gender pairs (three male/female pairs and four female/male pairs, representing the gaming/control conditions, respectively) and six single-gender pairs (three female, three male).

Materials and procedure

Experiment 1b was similar to Experiment 1a, with the following exceptions. After signing the consent form, participants were taken to one of the two labs and provided with further instructions. The two labs were adjacent to each other, with a one-way mirror between the two. In the gaming lab, the computer was positioned just in front of the one-way mirror, and when seated, the participant faced away from the mirror at approximately a 45-deg angle. This allowed the control participant in the adjacent room to view the gamer playing the videogame without any obstruction of the computer screen. The control group were asked just to quietly observe the gamer playing the game. Participants in the gaming group played the immersive videogame as participants had in the first experiment. We chose this control condition in order to keep the exposure to the stimulus materials as similar as possible while also manipulating the degree of immersion: One member of each pair was actively engaged in the task (the gamers in the experimental group), whereas the other was passively observing (the observers in the control group). In addition, we collected information on a number of related dimensions. We measured the degree of identification with the virtual character by means of three questions in which participants had to indicate their agreement with the following statements on a 10-point scale: (a) I felt I was looking through the eyes of the character in the game; (b) I felt I was participating in a virtual landscape; (c) I felt the actions of the character in the game were representing me.

Participants were also asked a number of additional questions, such as how appropriate they felt the use of computers was for such a task; how much they trusted the computers; how much they trusted their own best judgment; how much they enjoyed the task; and how many hours of video gaming they typically did in a week, on a weekday, and on a weekend day.

Results

In total, participants chose the incorrect candidate on 16.7 % of the trials. This differed as a function of the computer vote, with participants being more likely to choose the incorrect candidate when the computers had previously chosen the incorrect (105 out of 338 trials; 31 %), as opposed to the correct (25 out of 442 trials; 5.7 %), candidate, χ²(1) = 89.033, p < .001. A 3 (ambiguity: high, moderate, low) × 2 (condition: video gaming vs. control) mixed-model ANOVA once again yielded a significant main effect of ambiguity, F(2, 48) = 12.61, p < .001, ηp² = .584: Participants conformed less in the low-ambiguity trials (M = .04) than in the moderate-ambiguity (M = .25) and high-ambiguity (M = .40) trials. As in Experiment 1a, participants conformed more in the game condition (M = .27) than in the control condition (M = .19), an effect that approached significance, F(1, 24) = 3.28, p = .083, ηp² = .120. The interaction between the factors was not significant, F(2, 48) = 0.074, p = .929, ηp² = .003. The measures of identification with the virtual character also showed reliable results: Participants in the gaming condition were more likely to indicate that they felt they were looking through the eyes of the character in the game (5.22 vs. 3.07), t(24) = 3.061, p = .006, d = 1.2; that they felt they were participating in a virtual landscape (5.15 vs. 3.07), t(24) = 3.332, p = .003, d = 1.306; and that they “felt the actions of the character in the game were representing me” (3.77 vs. 2.38), t(24) = 2.019, p = .055, d = 0.79. The data are shown in Fig. 1.

Correlation analyses also showed a reliable association between conformity and the measures of identification. To avoid multiple comparisons, we computed a single conformity score (by adding up the conformity scores across the three ambiguity conditions). We found a reliable correlation between conformity and the extent to which participants felt they were looking through the eyes of the avatar, r = .424, p = .031; a reliable correlation between conformity and the rate at which participants felt they were participating in a virtual landscape, r = .406, p = .040; and a nonreliable correlation between conformity and the rate at which participants felt the character in the game was representing them, r = .259, p = .202. People’s average numbers of hours of video gaming per week did not differ across the two groups, t(23) = 1.19, p = .46, and the other variables did not yield reliable effects, either, all ts < 1.71, all ps > .11.
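The correlational analysis amounts to summing each participant’s conformity proportions across the three ambiguity levels and correlating that composite with each identification item. A minimal sketch with hypothetical per-participant values (the variable names and numbers below are not from the study) is shown here for illustration.

```python
# Correlation between a composite conformity score and one identification item.
# The numeric vectors are hypothetical placeholders for per-participant values.
import numpy as np
from scipy.stats import pearsonr

# Conformity proportions per ambiguity level (high, moderate, low) for six hypothetical participants.
conformity = np.array([[0.6, 0.25, 0.00],
                       [0.4, 0.25, 0.00],
                       [0.8, 0.50, 0.25],
                       [0.2, 0.00, 0.00],
                       [1.0, 0.75, 0.25],
                       [0.0, 0.25, 0.00]])
composite = conformity.sum(axis=1)           # single conformity score per participant

# Agreement (1-10) with "I felt I was looking through the eyes of the character in the game".
eyes_item = np.array([6, 4, 8, 2, 9, 3])

r, p = pearsonr(composite, eyes_item)
print(round(r, 3), round(p, 3))
```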

Discussion

The results of the present study show that participants follow computers in making a wrong judgment—indicating that social conformity also emerges when opinions are voiced by nonhuman agents. More importantly, a brief period of immersive video gaming (here, 7 min) increased the extent to which individuals exhibited such social conformity.

The question arises as to what motivated the conformity in this context. In the conformity literature, a distinction has been made between different primary motivators for conformity—the goal of accuracy as well as the goal of affiliation, both of which converge on a third goal, the goal of maintaining a positive self-concept (Cialdini & Goldstein, 2004). People seek to be accurate: Not being accurate would question their physical (e.g., perceptual) or their mental skills—a threat to one’s self-concept. The goal of seeking accuracy is often viewed as an indication of what is described as informational conformity—conforming to others because those others provide a reasonable basis for making an accurate judgment. The goal of affiliation, by contrast, is typically taken to be an indication of a second type of conformity—normative conformity—that emerges because an individual seeks to be part of a certain group and to gain approval from that group. Not gaining such approval is once again a threat to one’s self-concept. What, then, motivated the conformity in the present experiments?

In favor of informational conformity is the fact that the task was ultimately arithmetic in nature—and computers may be seen as the undisputed experts in this context. Relying on their judgment is hence a reasonable choice. Arguing against informational conformity, however, was the fact that the “expert” status of the computers was clearly undermined by the cover story in which it was pointed out that these computer programs were under development and that participants should trust their own best judgment. Also, the nature of our video-gaming task was such that it provided little ground for participants to develop a view of a computer as a competent social judge. By contrast, the different degrees of identification with the avatar across groups point more toward the goal-of-affiliation account; if affiliation was the motivation for conformity in our study, this implies that individuals gain orientation and even social approval from interacting with computers—a stunning and noteworthy pattern. Note, however, that we measured these goals only in Experiment 1b—and that people are often motivated by multiple goals; it would thus be premature at this point to interpret this pattern in one direction versus another. Future research will have to tackle this issue further.

The fact that participants calibrated their judgments with inaccurate computer votes—and did so increasingly after playing immersive videogames and in as socially sensitive a context as a job selection task—is noteworthy at a time when video gaming is so widespread (those 8–18 years of age play, on average, 1 h and 13 min per day; Rideout, Foehr, & Roberts, 2010). Results such as these call for a systematic reflection on such gaming practices and on the consequences of entering the artificiality of a virtual world. Further research will be needed to substantiate the present findings—but if such confirmation emerges, parents, educators, and players will need to take these consequences into consideration and take appropriate countermeasures—for instance, by reflecting on what it really means to be human, and on how this humanness can be educated and strengthened when it is attenuated during such virtual journeys.

We also wish to highlight a number of shortcomings of the present research that will need to be addressed in future work. One point is the small sample sizes and marginal effects—further replication will be needed to substantiate the pattern. Moreover, the candidate selection task was newly developed and has not been further validated. We chose it in order to increase the ecological validity of our study, but this came at a cost: It could be argued, for instance, that the finding is an instance of persuasion rather than of conformity to computer judgments. We did not measure whether people actually humanized computers and subsequently perceived them as an authority to which they would have conformed. Future research will also need to rule out alternative explanations (such as level of education as a confounding variable) and document the duration of the impact.

Despite these limitations, the results of the present work reflect a pattern that is consistent with earlier work (Weger & Loughnan, 2014). There is no doubt that future validation and confirmation will be needed; but if these findings are robust and reflect the reality of such gaming effects, there is also no doubt that by the time such validation and confirmation is provided, the effort required to change such habits and (gaming) routines will have substantially increased.