As expected, the results observed in this research replicated those observed with positive illusions (e.g., Blanco et al., 2011, 2012; Hannah & Beneteau, 2009; Matute, 1996), except that they were in the opposite direction. In experiments exploring positive illusions, the illusion became weaker when p(C) was lower and when an alternative cause was available to which the outcome could be attributed (Barberia et al., 2013; Blanco et al., 2012; Hannah & Beneteau, 2009; Matute, 1996; Vadillo et al., 2013). What the present research shows is that, in cases in which uncontrollable undesired outcomes occur frequently, if people act frequently, as they do by default, their behavior often seems to be punished, so they conclude that their degree of control is weak. However, if we instruct them from the beginning to reduce their p(C) and provide them with an alternative cause for the occurrence of those frequent undesired outcomes, then they probably feel that their behavior is no longer (or at least is not as frequently) punished. Thus, participants may confirm their belief that, on the few occasions on which those undesired events do not occur, this is due to their having control over them (e.g., “I did not pick number 13, so this is why I was lucky”). Therefore, the illusion that their behavior is appropriate becomes stronger when they reduce their p(C) and attribute the occurrence of undesired outcomes to external causes than when they act frequently and receive frequent punishment. In sum, if one aims to prevent illusions of control in cases in which the outcomes following the action are undesired, it might be better to ask participants to increase, rather than reduce, the frequency of their behavior (see note 3 for a discussion of the similarities and differences between positive and negative illusions).
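The asymmetry described above can be illustrated with a minimal simulation. This is only a sketch under assumed parameters (a null contingency in which acting has no effect on the outcome, with illustrative values of p(O) = .80 for the frequent undesired outcome and p(C) = .80 vs. .20); it is not the authors' actual task:

```python
import random

def punished_action_rate(p_c, p_o, n=100_000, seed=0):
    """Proportion of trials on which the person acts AND the undesired
    outcome follows, under a null contingency (acting has no effect).
    p_c and p_o are illustrative parameters, not values from the study."""
    rng = random.Random(seed)
    punished = 0
    for _ in range(n):
        acted = rng.random() < p_c    # act with probability p(C)
        outcome = rng.random() < p_o  # outcome occurs independently of acting
        if acted and outcome:
            punished += 1
    return punished / n

# Acting by default: actions are frequently followed by the undesired
# outcome, so behavior seems punished on most trials.
high = punished_action_rate(p_c=0.8, p_o=0.8)  # ≈ .64
# Acting rarely: apparent punishments of the action become scarce.
low = punished_action_rate(p_c=0.2, p_o=0.8)   # ≈ .16
```

Even though the action is objectively ineffective in both cases, the low-p(C) condition produces far fewer action–outcome coincidences, which is consistent with the apparent-punishment account sketched above.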
Contrary to the general assumption that negative illusions are weak and almost nonexistent (e.g., Alloy & Abramson, 1979), our results have shown that negative illusions can be intense, too (in line with Aeschleman et al., 2003; Bloom, Venard, Harden, & Seetharaman, 2007), but that they are developed and maintained in exactly the opposite way from positive illusions. In a classic experiment, Alloy and Abramson (1979, Exp. 3) concluded that negative illusions were much weaker than positive illusions. However, a close inspection of that experiment suggests that their claim might have been unwarranted. In their experiment, participants tried to control the onset of a light in order to obtain coins as a reward. One group won coins on 50 % of the trials (win group), whereas the other group lost coins on 50 % of the trials (lose group). Thus, the frequency of the “earned-coin” outcome was 50 % in the win group and 0 % in the lose group, and the frequency of the “lost-coin” outcome was 50 % in the lose group and 0 % in the win group. The problem, then, is that p(O) was confounded with outcome valence in their experiment: participants in the two groups were exposed neither to the same p(O) nor to the same response–outcome contingencies. If participants focused on the coins, as they probably did, those in the win group had a high reinforcement rate. By contrast, participants in the lose group had 50 % of their responses punished and none of their actions rewarded. Whenever they thought that they had found a way to control the light, they presumably repeated that response, only to lose one more coin. Importantly, participants in the lose group were exposed to exactly the opposite contingencies from those that are known to increase the illusion of control: rather than a high frequency of the desired outcome, they were under a zero-reinforcement schedule.
Our results are consistent with a study by Rudski, Lischner, and Albert (1999). Their participants earned or lost points randomly on 75 %, 50 %, or 25 % of the trials (depending on the condition). The illusion of control was strongest under conditions of maximal (75 %) gain or minimal (25 %) loss. A study conducted by Aeschleman et al. (2003) reported similar results. In Aeschleman et al.’s study, the participants’ goal was to produce and maintain the word GOOD on the computer screen, or to prevent and remove the word BAD. The results showed that the strongest illusion occurred at the lowest percentage of the word BAD (see also Bloom et al., 2007, for related evidence).
The studies of Rudski et al. (1999) and Aeschleman et al. (2003) show that p(O) has the opposite effect when the outcome is undesired from the effect it has when the outcome is desired. To the best of our knowledge, however, the effect of manipulating the availability of alternative causes and p(C) had not yet been investigated with regard to negative illusions. This is important, because the combination of a medium p(C) and a warning about alternative causes has been suggested as an evidence-based strategy to reduce the illusion of control. Our results show that, in the case of negative illusions, this strategy should be applied in just the opposite way from the way it is applied to positive illusions. That is, insisting on the existence of alternative causes and on the need to act on about 50 % of the trials increases negative illusions, rather than reducing them.
Although the p(C) effect detected in our experiment can be framed straightforwardly in terms of operant conditioning (i.e., adventitious punishment of actions), the effect of the second component of the combined treatment to decrease the illusion (i.e., the availability of alternative causes) can also be discussed in light of inferential theories of causal learning. Most of these theories emphasize the crucial role of structure and counterfactuals in normative causal reasoning (see, e.g., Sloman, 2013). To answer a question framed in causal terms, participants must evaluate the target cause’s power to produce the outcome in isolation from a background in which other, alternative causes may be present (Cheng, 1997). In the experimental group, we suggested to participants that they reduce p(C) while their attention was explicitly drawn to an alternative cause (i.e., the possible malfunction of the spacebar). According to theories based on inferences over causal structures, these two factors might have facilitated the attribution of the frequent undesired outcomes to the alternative cause, thereby increasing the causal power attributed to the target cause. Because p(C) was low in this group, participants were exposed to many trials on which they did not act but the undesired outcome still took place. Counterfactual reasoning would involve asking themselves what would have happened if they had pressed the spacebar on those trials. The availability of a potential alternative cause for the flash (i.e., the spacebar malfunctioning) could lead to the conclusion that we already sketched out above: that their behavior was an effective cause, responsible for those few trials on which the aversive outcome was absent; hence the strong illusion of control that we observed in this group. In any case, our experiment was not designed to test theories of structure-based causal inference.
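One way to make this inferential account concrete is Cheng’s (1997) power-PC estimate for a preventive cause, power = -ΔP / P(O|no C). The sketch below uses illustrative probabilities, and the discounting step (lowering the subjective P(O|action) because some flashes are blamed on the faulty spacebar) is our reading of the attribution process, not a fitted model of the data:

```python
def delta_p(p_o_given_c, p_o_given_not_c):
    """DeltaP contingency between a candidate cause C and an outcome O."""
    return p_o_given_c - p_o_given_not_c

def preventive_power(p_o_given_c, p_o_given_not_c):
    """Cheng's (1997) power-PC estimate for a preventive cause:
    power = -DeltaP / P(O | no C)."""
    return -delta_p(p_o_given_c, p_o_given_not_c) / p_o_given_not_c

# Objective situation: a null contingency, so pressing the spacebar
# has zero preventive power over the flash.
preventive_power(0.8, 0.8)  # → 0.0

# If some flashes on "pressed" trials are credited to the malfunctioning
# spacebar rather than counted against the action, the subjective
# P(O | pressed) drops (here to an assumed .60), and the action acquires
# apparent preventive power.
preventive_power(0.6, 0.8)  # → 0.25
```

On this reading, attributing frequent undesired outcomes to the alternative cause inflates the estimated preventive power of the participant’s own behavior, in line with the strong illusion observed in the experimental group.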
Thus, we cannot reach any conclusion concerning the relative ability of these theories to account for the general finding that we have reported and that we believe to be important: that reducing p(C) while suggesting that other causes might be operating in the background strengthens, rather than weakens, the negative illusion of control. This should be taken into account in analyses of real-life conditions in which uncontrollable undesired outcomes occur at a high rate.