Introduction

Human responses in the face of danger are moderated by compensatory strategies. Cyclists, for example, ride faster or more aggressively when wearing bicycle helmets, as they unduly believe that safety equipment offsets the risk of injury (Adams & Hillman, 2001; Robinson, 2006). Likewise, people inflate virtual balloons closer to their popping limit when wearing helmets than when wearing baseball caps (Gamble & Walker, 2016). Interestingly, wearing a helmet cannot possibly prevent balloons from bursting, so safety devices trigger compensatory behavior regardless of whether they are relevant to the task at hand. Indeed, compensatory behavior reflects a general and unconscious tendency to take greater risks when using protective equipment (Peltzman, 1975; Wilde, 1982). As individuals feel safer, they re-evaluate their previous behavior as being safer as well, and therefore adapt (i.e., increase) the risk level of their current behavior toward the lower safety threshold. An important question at this point is whether and how safety devices interact with target type to affect perceived safety and perceptual judgments such as egocentric distance estimation. Do helmets, for instance, increase felt safety regardless of target type, or do they contribute jointly with targets to perceptual estimation and decision making? We investigated this question in a study in which participants wearing helmets or baseball caps estimated egocentric distances to safe and unsafe targets placed at various locations around them. We started from the assumption that, unlike baseball cap wearers, helmet wearers would emphasize payoffs (i.e., delayed confrontation with neutral or undesirable targets) by overshooting distance estimations, to compensate for the feeling of safety that helmets usually afford.

Egocentric distance estimation is a subjective adjustment of perceived distance based on past experience or training with judgments of actual distances. Perceived distance is usually underestimated relative to physical distance (Foley, 1972; Foley et al., 2004), so individuals use supplementary cues to self-correct their perception (Gilinsky, 1951). However, when people rely exclusively on motor cues, as when walking to a target instead of verbally estimating its position with respect to themselves, the distance they cover scales linearly with actual distance out to at least 20 m (Loomis et al., 1992; Thomson, 1983). This disparity between verbal and motor estimations is predicted by the action-specific account of perception (Proffitt et al., 2003; Witt, 2011), according to which individuals perceive targets in terms of their ability to act upon them. For targets located beyond reach, estimations are compressed, possibly as an attempt to bring the targets perceptually closer to the observer.

Another factor contributing to egocentric distance estimation is the observer's physical characteristics. A large body of evidence (for a review, see Philbeck & Witt, 2015) indicates that perception of the environment is scaled by the costs incurred in reaching a target. For example, archers judge a target as larger when they use bow stabilizers, which make them feel better about their coordination and control abilities (Lee et al., 2012). Similarly, drivers underestimate egocentric distances to a greater extent than pedestrians do (Moeller, Zoppke, & Frings, 2016), as being in a car boosts their potential for reaching distant targets without investing substantial physical resources. Moreover, because drivers tend to identify with the car they are driving, they extend the perception of their body in space and time (see also Berti & Frassinetti, 2000), which makes distances appear shorter. We may further speculate that the action-specific account also explains distance underestimation in virtual environments (Creem-Regehr et al., 2003; Thompson et al., 2004), as images on a screen appear easier to act upon, and therefore closer.

To summarize, distance estimation depends on the perception of both body and target. Specifically, perceived target size correlates positively with the observer's physical resources, and the relation reverses when targets are undesirable or difficult to reach. For example, distances seem greater and hills steeper when people are old, tired, or wearing a heavy backpack (Bhalla & Proffitt, 1999; Proffitt et al., 1995, 2003). Moreover, target identity and previous experience affect egocentric distance estimation. For example, individuals (especially those affected by phobias) perceive threatening objects as closer than they actually are (Riskind, Moore, & Bowley, 1995), suggesting that looming danger and fearful cognitive distortions are closely tied. Likewise, (social) control over objects or individuals influences distance estimation, as people tend to associate control with closer proximity and lack of control with greater distance (Wakslak & Kim, 2015). We might anticipate that the greater sense of safety experienced by helmet wearers, compared to baseball cap wearers, would make them feel better equipped to reach desirable targets or block undesirable ones. However, accumulating evidence (Cañal-Bruland, Pijpers, & Oudejans, 2010; Harber et al., 2011; Witt & Sugovic, 2013) suggests that threatening circumstances cancel individuals' propensity for perceiving the world in terms of their ability to act upon it. From this perspective, the perception of the body (modified by headgear) and the perception of threatening targets are independent of each other. From a risk-management perspective, however, losses loom larger than gains (Kahneman & Tversky, 1984), irrespective of the increased safety afforded by helmets in threatening contexts. Given that payoffs in risky scenarios involve avoiding confrontation, whereas costs involve facing undesirable targets, we predict that individuals will no longer focus on payoffs and risk-compensate when estimating distances to unsafe targets, but will focus on costs instead. Threatening targets should thus appear closer than neutral targets.

In our study, safe and unsafe targets (i.e., non-threatening and threatening animals illustrated on cards) were situated to the right and to the left of an observer and further distributed over two depth levels and two visual angles. We expect the positive affect induced by safety headgear to produce relative attentional widening, and thus higher sensitivity to the location of neutral targets. Indeed, researchers ever since Easterbrook (1959) have reported that (relative) negative affect causes attentional narrowing (Chajut & Algom, 2003; Cohen, 1980; Gable & Harmon-Jones, 2010; Steenbergen, Band, & Hommel, 2011). If target type further modulates this effect, we should observe attentional narrowing over visual angle when targets are unsafe compared to when they are safe.

Methods

Participants

We recruited 48 volunteers (24 females) with normal or corrected-to-normal vision, based on an a priori sample-size estimation for an F test with eight within-subjects repeated measures (safe vs. unsafe targets × four target locations) and two between-subjects groups (baseball cap vs. helmet wearers), run in G*Power 3.1.9.2 (Faul, Erdfelder, Lang, & Buchner, 2007). The estimation indicated that 42 participants would provide a 95% chance of correctly rejecting the null hypothesis of no difference between helmet wearers and cap wearers when judging the distance to threatening versus non-threatening targets at four locations (ηP2 = 0.16). Participants were all undergraduate students and signed an informed consent form upon enrolment in the study, in accordance with the Declaration of Helsinki. All participants were screened in a preliminary session and completed the Rosenberg Self-Esteem Scale (Rosenberg, 1965) and the State-Trait Anxiety Inventory (STAI-T; Spielberger et al., 1973). Half of the participants in each group (helmet wearers vs. baseball cap wearers) were male, and the other half female. Helmet and baseball cap wearers were matched in terms of self-esteem, t(46) = 0.834, p = .409 (M = 30.45 vs. M = 29.37), and anxiety ratings, t(46) = -0.279, p = .781 (M = 39.29 vs. M = 40.00). All participants received a small gift (cafeteria bonuses) for their participation in individual testing sessions lasting up to 30 min. The protocol was approved by the Ethics Committee of Sabanci University (FASS-16-01).
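For readers who wish to retrace the logic of the sample-size estimation, the sketch below approximates the noncentral-F power computation that tools such as G*Power perform. It is only an illustration under simplifying assumptions: it ignores G*Power's corrections for the correlation among repeated measures and for nonsphericity, so it will not reproduce the figure of 42 participants, and the function name and parameters are ours rather than part of the original analysis.

```python
# Minimal sketch of an a priori power check for an F test, assuming Cohen's f
# is derived from partial eta squared and ignoring repeated-measures
# corrections (correlation among measures, nonsphericity). Illustrative only.
from scipy import stats

def f_test_power(eta_p2, n_total, df_num, alpha=0.05):
    """Approximate power of an F test with df_num numerator degrees of freedom."""
    f2 = eta_p2 / (1 - eta_p2)        # Cohen's f^2 from partial eta squared
    ncp = f2 * n_total                # simplified noncentrality parameter
    df_den = n_total - df_num - 1     # simplified denominator df
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return 1 - stats.ncf.cdf(f_crit, df_num, df_den, ncp)

# Example call with the reported effect size and total N (numerator df = 7 for
# the eight within-subjects cells is an assumption). This simplified formula
# understates repeated-measures power, so it will not match G*Power's output.
print(f"approximate power: {f_test_power(0.16, 42, 7):.2f}")
```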

Stimuli

Stimuli consisted of two decks of 16 cards each, one depicting safe and the other unsafe targets. In a preliminary study, we classified the illustrations on the 32 cards as either threatening or non-threatening by asking ten participants to rate their threat level on a scale from 0 ("not very threatening") to 10 ("very threatening"). Scores for threatening animals were significantly higher (p < .001), about four times the scores for non-threatening animals (M = 6.05, SD = 0.68 vs. M = 1.59, SD = 0.64). Cards in the first deck illustrated non-threatening animals (e.g., horse, dog, rabbit) and cards in the second deck illustrated threatening animals (e.g., tiger, bat, shark). The cards were folded so that they could be stood upright on a table at eight locations marked by points not visible to participants. There were four locations in each hemifield, distributed over two depth levels (positions 1 and 2 closer, positions 3 and 4 farther away) and two visual angles (positions 1 and 3 at a narrow angle, positions 2 and 4 at a wider angle) from the observer. Euclidean distances were 75 cm, 80 cm, 90 cm, and 95 cm for locations 1, 2, 3, and 4, respectively, as shown in Fig. 1.
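As a hedged illustration of how the card classification could be checked, the snippet below compares per-card mean threat ratings for the two decks with an independent-samples t test; the exact test used in the preliminary study is not reported, and the variable names and simulated ratings are ours.

```python
# Hedged sketch: compare mean threat ratings for the two decks of cards.
# `threatening` and `non_threatening` are assumed to hold one mean rating
# (0-10 scale, averaged over the ten raters) per card; values are simulated
# here purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
threatening = rng.normal(6.05, 0.68, size=16)       # 16 threatening cards
non_threatening = rng.normal(1.59, 0.64, size=16)   # 16 non-threatening cards

t, p = stats.ttest_ind(threatening, non_threatening)
print(f"t = {t:.2f}, p = {p:.4f}, "
      f"ratio of means = {threatening.mean() / non_threatening.mean():.1f}")
```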

Fig. 1

Experimental setup for egocentric distance estimation of targets located in the front right and left quadrants (a). For targets situated in the back right and left quadrants, participants turned 180° so their back touched the table edge and looked through a hand-held mirror at the targets now situated behind them. Examples of threatening and non-threatening targets (b). Distances and sizes are not drawn to scale

Design and procedure

Upon completing the participation consent form and preliminary questionnaires, participants were randomly assigned to one of two headgear groups. Those in the first group wore a yellow helmet (hard hat) throughout the testing session, and those in the second group wore a yellow baseball cap instead. We informed participants that wearing the designated headgear was part of the experimental setup, and offered no supplementary information on the type of headgear other participants might be asked to wear or on the potential effects of headgear on egocentric distance estimation. We considered it unlikely that participants could infer our experimental hypothesis that wearing a helmet, as opposed to a cap, would compensate for distance underestimation biases in both depth and visual angle. Moreover, by limiting verbal interaction with the experimenter, we minimized unwanted priming effects (Gilder & Heerey, 2018). The design included headgear as a between-subjects factor and target valence (safe vs. unsafe), depth level (close vs. far), and visual angle (narrow vs. broad) as within-subjects factors.

Participants were informed that, on each trial, they should carefully inspect the card placed on the desk, estimate the card's distance from where they were seated (as close as possible to the midline of the short edge of the desk), and name the animal illustrated on the card. On each trial, the experimenter placed one randomly chosen card on an individually randomized marker on the desk. The card was removed as soon as participants had estimated the distance and named the animal. The naming and distance-estimation tasks were completed for cards placed in both hemifields in front of the observer as well as in both hemifields behind the observer, which were visible through a hand-held mirror (20 cm × 15 cm). To view the hemifields behind them, participants turned 180° so that their back touched the edge of the table at the observation point. Chair height was adjusted for each participant. In total, stimuli covered four individually randomized quadrants (front right, front left, back right, back left), tested in as many blocks, for a total of 16 card placements. The study followed a 2 (Headgear: helmet wearers vs. cap wearers) × 4 (Position: 75 cm vs. 80 cm vs. 90 cm vs. 95 cm) × 2 (Threat level: non-threatening vs. threatening) mixed factorial design with estimated distance as the dependent variable.

Results

Figure 2 shows average distance estimations across trials. We entered the results into a 2 × 4 × 2 ANOVA and found a three-way interaction among headgear, distance, and threat level, F(3, 44) = 3.09, p = .029, ηP2 = .063. We then analyzed the results for each headgear group separately. Helmet wearers estimated distances to non-threatening targets as greater than distances to threatening targets, an effect approaching significance, F(1, 23) = 4.19, p = .052, ηP2 = .154 (M = 76 cm vs. M = 74 cm). We also found an interaction between distance and threat level, F(3, 21) = 3.62, p = .017, ηP2 = .136; Mauchly's test indicated that sphericity was not violated, χ2(5) = 4.87, p = .432. Paired comparisons revealed that, for safe targets, helmet wearers distinguished between distances to all four locations, F(3, 21) = 10.86, p < .001, ηP2 = .608: the first location differed from the second (p = .003), third (p < .001), and fourth (p < .001); the second differed from the third and fourth (both p < .001); and the third differed from the fourth (p = .025). When making their estimations, participants thus took into account both the depth level and the angle at which targets were situated relative to the observation point. For unsafe targets, helmet wearers distinguished only between distances to locations at different depth levels, F(3, 21) = 11.61, p < .001, ηP2 = .624: the first and second locations each differed from the third and fourth locations (all p < .001), but locations at different angles did not differ from each other (first vs. second, p = .525; third vs. fourth, p = .246).

Fig. 2

Average estimation scores by helmet wearers and baseball cap wearers for threatening and non-threatening targets at positions 1, 2, 3, and 4 (75 cm, 80 cm, 90 cm, and 95 cm) away from the observation point. Error bars indicate 95% confidence intervals

For cap wearers, we observed only a main effect of distance, F(3, 21) = 22.97, p < .001, ηP2 = .50 (Greenhouse-Geisser corrected: F(1.22, 28.16) = 22.97, p < .001, ηP2 = .50): they distinguished between distances to locations at different depth levels, the first and second locations each differing from the third and fourth locations (all p < .001), but not between distances to locations at different angles (first vs. second, p = .130; third vs. fourth, p = .268). Put differently, cap wearers discounted information about target safety when making their estimations.
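The software used for these analyses is not reported; as a hedged illustration, the per-group follow-up (a 4 × 2 within-subjects ANOVA on distance and threat level) could be reproduced as sketched below. The data-frame layout and column names (subject, distance, threat, estimate, headgear) are our assumptions, not the authors' original code.

```python
# Hedged sketch: within-subject follow-up ANOVA (distance x threat level) for
# one headgear group, using statsmodels' repeated-measures ANOVA.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def followup_anova(group_data: pd.DataFrame):
    # group_data: long format, one row per trial, with columns
    # 'subject', 'distance' (75/80/90/95), 'threat' (safe/unsafe), 'estimate'.
    # aggregate_func='mean' averages repeated placements within each cell.
    return AnovaRM(group_data, depvar='estimate', subject='subject',
                   within=['distance', 'threat'],
                   aggregate_func='mean').fit()

# Usage with a hypothetical long-format data frame `long_data`:
# helmet = long_data[long_data['headgear'] == 'helmet']
# print(followup_anova(helmet))
```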

We further ran a supplementary analysis to investigate the roles of sex and quadrant for targets situated front-right, front-left, back-right, and back-left. We found no difference in performance between males and females (p > .05), but a marginally significant effect of quadrant, F(3, 44) = 2.58, p = .056, ηP2 = .053, such that distance estimations were greater for the front-right and front-left quadrants than for the back-left quadrant (p = .050 and p = .046, respectively; M = 70.7 cm and M = 71.01 cm vs. M = 66.40 cm). The back-right quadrant had an intermediate average of 68.94 cm.

Discussion

We found that people wearing helmets were inclined to overshoot egocentric distance estimations to safe targets compared to unsafe targets, unlike people wearing non-safety-related headgear such as baseball caps. Estimations made by helmet wearers were thus closer to the actual Euclidean distances between targets and the observation point, given that naïve observers generally (and often severely) underestimate distances. The results lend support to the action-specific account of perception, according to which individuals perceive targets in terms of their ability to act upon them; here, in the sense of blocking targets one has no motivation to reach for, such as neutral and threatening animals. At the same time, the results concur with previous reports that threatening objects appear closer than non-threatening objects (Riskind, Moore, & Bowley, 1995), which can be readily explained on evolutionary grounds: the closer unsafe targets are perceived to be, the sooner observers can decide on the most appropriate response (e.g., fight or flight) to manage an unwanted encounter. For estimations made by baseball cap wearers, there were no differences in terms of target valence. In contrast, helmet wearers took into account both the valence of targets and their perceived ability to act upon them when estimating egocentric distance. They thus felt safe enough to risk-compensate for safe targets, but focused on the losses incurred for obtaining the payoff (i.e., delayed confrontation) when estimating distances to unsafe targets. "Threat effects" have previously been described in terms of an independence between the perception of the body and the perception of target valence (e.g., Witt & Sugovic, 2013). Here, we reconcile them with the results obtained for safe targets within the framework of risk-taking behavior.

Unsurprisingly, all participants made larger distance estimations for targets situated in the front right and left quadrants than for targets in the back quadrants, particularly the back-left quadrant. This suggests that individuals remained on their guard for targets they could not monitor or control, such as items situated behind them, which they estimated to be much closer. Alternatively, participants may have estimated targets in the back-left quadrant as closer than targets in any other quadrant because they felt safest about that particular location; they might tend to lean, virtually, on the back-left region, viewing it as support for their body. This latter explanation would imply that individuals distinguish, at least functionally, between the observation point and a bodily reference point when making egocentric distance estimations.

Importantly, we found that target identity was a relevant factor insofar as helmet wearers discriminated between locations at different visual angles for safe targets, whereas for unsafe targets their attention was restricted to a narrower angle. Better perceptual estimations of both target depth and target angle by helmet wearers under safe circumstances underscore the efficacy of using protective gear for recreational purposes, helping circumscribe risk-compensation behavior and reconcile divergent findings in the literature. Indeed, compensatory strategies do not always explain cycling habits (Phillips, Fyhri, & Sagberg, 2011) or vaccine-related behavior (for a review, see Kasting et al., 2016). Further empirical investigation might reveal that wearing a helmet or fastening a seatbelt does little to prevent self-caused accidents when drivers are heading to safe destinations such as regular routes to work, as they may be inclined to compensate by taking risks, but that safety gear does help prevent such accidents on unfamiliar or covertly unsafe journeys such as trips to vacation destinations. Overall, our findings help address the ongoing debate between safety advocates who endorse traditional models of prevention (e.g., Ruedl, Kopp, & Burtscher, 2012) and those who maintain that safety measures are offset, at least to some extent, by risk compensation (e.g., Fyhri & Phillips, 2013). The findings lend support to a different account of compensatory behavior, in which helmet wearers risk-compensate only when they perceive the costs associated with payoffs to be decreasing rather than increasing. More generally, we provided evidence that behavioral adaptation is blocked unless targets are safe.