Six of one, half dozen of the other: Suboptimal prioritizing for equal and unequal alternatives

It is possible to accomplish multiple goals when available resources are abundant, but when the tasks are difficult and resources are limited, it is better to focus on one task and complete it successfully than to divide your efforts and fail at both. Previous research has shown that people rarely apply this logic when faced with prioritizing dilemmas. The pairs of tasks in previous research had equal utility, which, according to some models, can disrupt decision-making. We investigated whether the equivalence of two tasks contributes to suboptimal decisions about how to prioritize them. If so, removing or manipulating the arbitrary nature of the decision between options should facilitate optimal decisions about whether to focus effort on one goal or divide effort over two. Across all three experiments, however, participants did not appropriately adjust their decisions with task difficulty. The only condition in which participants adopted a strategy that approached optimal was when they had voluntarily placed more reward on one task than the other. For the task that was more rewarded, choices were modified more effectively with task difficulty. However, participants were more likely to choose to distribute rewards equally than unequally. The results demonstrate that situations involving choices between options with equal utility are not avoided, and are even slightly preferred over unequal options, despite unequal options having larger potential gains and leading to more effective prioritizing strategies. Supplementary Information: The online version contains supplementary material available at 10.3758/s13421-022-01356-5.


Power Analysis
Our power analysis made use of simulations and resampling approaches based on data that we had previously collected. Resampling was carried out to establish whether adding more participants to our samples would have led to a reduction in uncertainty around our estimates. All of the experiments were concerned with the extent to which the distribution of standing positions shifted with the introduction of a new factor. As such, a single power analysis was carried out to determine what sample size would be sufficient to detect a small shift in the mean of this distribution.
We started by fitting a beta distribution to each participant's data from the Throwing Experiment in Clarke and Hunt (2016) using the fitdistrplus package (as can be seen in Figure 1). In the original data, normalised standing position was coded ϕ ∈ [0, 1], with 0 indicating a central standing position and 1 indicating that the participant stood by either of the two hoops. As we are breaking the symmetry between the two targets in this new experiment, we now treat standing position as ϕ ∈ (0, 1), with 0 and 1 representing the positions of the large and small hoop respectively; the central midpoint is now represented as ϕ = 0.5. This coding is directly linked to the idea behind the Hoop Size experiment, in which we aimed to observe whether participants would shift towards one side when presented with hoops of different sizes, but it also applies to the other experiments, which likewise examine how the distribution shifts given a new factor.
Figure 1: These plots show the distributions that were fitted to each participant. Random samples were drawn from these distributions in order to simulate participants shifting towards one target.
The fit to the empirical data is reasonable, although we under-estimate the frequency of standing positions ϕ ≈ 0.5. We assumed that introducing a difference in size between the two hoops would shift this distribution, with participants standing slightly closer to the smaller hoop so as to balance their chances of success; in this case the shift in mean would be +0.05.
We can now use these distributions to simulate experiments with N = 3 to 24 participants and 72 trials each. Figure 2 shows the uncertainty surrounding the mean estimate for the smallest difference tested (5%). After 15 participants, the uncertainty surrounding the estimate appears to plateau, which demonstrates that our sample size of 21 was sufficient to detect an effect of this size.
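The resampling logic described above can be sketched as follows. This is a minimal Python illustration, not the original R analysis code: the beta parameters are placeholders standing in for the per-participant fits from fitdistrplus, and the +0.05 shift is the assumed effect size.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_experiment(n_participants, n_trials=72, shift=0.05):
    """Simulate one experiment: each participant's standing positions
    are drawn from a beta distribution whose mean is shifted by `shift`."""
    means = []
    for _ in range(n_participants):
        # Placeholder beta parameters; in the analysis these were
        # fitted per participant with fitdistrplus.
        a, b = 2.0, 2.0
        base_mean = a / (a + b)
        target_mean = min(max(base_mean + shift, 0.01), 0.99)
        # Re-parameterise the beta so its mean equals the shifted mean,
        # keeping the concentration (a + b) fixed.
        conc = a + b
        a2, b2 = target_mean * conc, (1 - target_mean) * conc
        means.append(rng.beta(a2, b2, n_trials).mean())
    return float(np.mean(means))

def ci_width(n_participants, n_sims=500):
    """Width of the 95% interval around the simulated mean estimate."""
    est = [simulate_experiment(n_participants) for _ in range(n_sims)]
    lo, hi = np.percentile(est, [2.5, 97.5])
    return hi - lo

# Uncertainty shrinks as N grows and plateaus after roughly 15 participants.
widths = {n: ci_width(n) for n in (3, 9, 15, 21)}
```

Plotting `widths` against N reproduces the qualitative pattern in Figure 2: the interval narrows quickly at first and flattens out well before N = 21.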

Additional Analyses
In the analyses below, we use multi-level Bayesian beta regression to model the change in standing position ϕ when we 1) use unequal hoop sizes, 2) remove the need to choose between hoops, and 3) provide unequal rewards. Figure 4 shows the data for each of the participants. Each plot shows the standing position on every trial for each of the standardised distances at which the participants were tested. The dots are somewhat translucent, so darker regions indicate that the participant stood in that location more often. Anything above the dashed line indicates that the participant shifted towards the smaller hoop, while anything below indicates that the participant opted to be closer to the big hoop.

Experiment 1: Hoop Size
The data from this experiment were analysed using a Bayesian beta regression. The recorded data for standing positions were transformed to be between 0 and 1, with 0 representing the larger hoop and 1 representing the smaller hoop. The central point is therefore 0.5, meaning anything above this value demonstrates a shift in participants' behavior away from the mid-point and towards the small hoop. The model included normalised hoop delta as a predictor to assess how participants changed position with increasing distance. This was also entered as a random effect by participant.

Modelling
For all models, we opted to use weakly informative priors. The chosen priors made the weak assumption that there was no effect of the experimental manipulations introduced in this series of experiments.
Figure: Each facet shows how a participant's accuracy dropped as a function of distance for both hoop sizes that were used in the experiment. The x-axis has been normalised so that 1 represents the furthest distance tested.
The models themselves looked as follows, with the prior predictive checks having the additional "sample_prior" argument set to "only". The model results confirmed that participants in general had a bias towards standing closer to the small hoop (mean of 0.549, 95% HPDI of [0.506, 0.591]). We can be reasonably confident about this result, as p(x > 0.5|data) = 98.7%. This can be seen in the posterior in the right-hand plot of Figure 5. Also, note that distance did not appear to have an effect on position (i.e., participants were generally biased slightly towards the smaller hoop at all distances).
Figure 7: Each facet shows how a participant's accuracy dropped as a function of distance. The x-axis has been normalised so 1 represents the furthest range tested.
Figure 8 shows the individual data for each of the participants in this experiment. Each plot shows where the participants stood on each trial for each of the distances at which they were tested, with the different conditions colour coded.

Experiment 2: Two Throws
A Bayesian beta regression was carried out to investigate whether participants performed the task in a more optimal way when they were given the chance to throw at both targets. The predicted value was the normalised standing position, with 0 being central and 1 being next to one of the hoops. The predictors were the normalised distance of the hoops from the centre (norm_delta), the number of throws participants had (Num_throws), and the interaction between these two terms.

Modelling
We first carry out a prior predictive check to confirm that our choice of priors is reasonable. The model was specified as follows, with the prior predictive checks having the additional "sample_prior" argument set to "only". Next, we train the model on the data and plot the posterior distributions.
Two_throw_m1 <- brm(
  abspos ~ 0 + norm_delta * Num_throws + (0 + norm_delta * Num_throws | Participant),
  family = "beta",
  data = model_data_pos,
  prior = TT_prior,
  cores = 1,
  chains = 1,
  iter = TT_iter
)
Figure: The x-axis shows the different distances that participants were tested at, with 0 being the point at which they were closest (50% from the centre). The y-axis shows the normalised standing position (between the centre point and one of the two side hoops).
As can be seen from the summary output above, R̂ ≈ 1.
The analysis suggested that there was an overall greater tendency for participants in the One-throw condition to stand further from the centre (mean of 0.222, 95% HPDI of [0.123, 0.33]) than when they were in the Two-throw condition (mean of 0.152, 95% HPDI of [0.082, 0.228]), with P(One-throw > Two-throw|data) = 91%.
This effect was strongest for the closest separation, which reflects the larger amount of variation in standing position with distance in the Two-throw condition (Figure 9). In general, participants stood further from the centre in the One-throw condition than in the Two-throw condition, with P(One-throw > Two-throw|data) = 97.62%. However, as can be seen in Figure 9, the difference is generally small and consistent across all distances. This means that when participants were given the opportunity to throw to both hoops (i.e., in the Two-throw condition), they were still sub-optimal in their performance.

Raw Accuracy
Figure 11: This figure shows the average accuracy participants achieved at each distance. The x-axis shows hoop delta (∆) and the y-axis expected accuracy; each line represents a participant, and the facets represent the two conditions (One throw and Two throws).

Experiment 3: Reward
Figure 13 shows the individual data for each of the participants in this experiment. Each plot shows where the participants stood on each trial for each of the distances at which they were tested with the different conditions being colour coded.
A Bayesian beta regression was carried out in order to investigate whether opting for an unequal reward structure in this task would facilitate the use of a more optimal strategy in the Throwing Task. The predictors in this model were whether the participant had opted for an Equal or Unequal reward split (Gamble_Type), whether the hoops were Close or Far (dist_type), and the interaction between these. Additionally, random effects of participant were included for all predictors. The predicted value was the normalised standing position, with 0 representing the centre and 1 being next to one of the hoops.

Modelling
The priors for this model were set so as to reduce the likelihood of extreme values (i.e., close to 0% or 100%). Until this experiment, we had not run a version of this task in which the different targets could have different values associated with them.
As the prior predictive checks demonstrated that the model was able to retrieve the prior, and the priors were suitably flat, the model was conditioned on the raw data from the experiment. The model looked as follows, with the prior predictive checks having the additional "sample_prior" argument set to "only". As can be seen in Figure 14, participants in this task were more likely to stand towards one of the side hoops when the hoops were far apart (mean of 0.516, 95% HPDI of [0.287, 0.711]) than when they were close together (mean of 0.121, 95% HPDI of [0.084, 0.164]), with P(Far > Close|data) = 100%. This suggests that participants were sensitive to the addition of a reward in this experiment.
In addition, the interaction of Distance Type with Gamble Type was in the direction of an asymmetrical reward structure pushing participants towards more optimal decisions: participants stood closer to one of the targets when they had opted for an unequal split (mean of 0.615, 95% HPDI of [0.491, 0.74]) than when they had opted for an equal split (mean of 0.417, 95% HPDI of [0.255, 0.576]), with P(Unequal > Equal|data) = 96.6%.

Note
This experiment was cut from the main body of the paper as it is the only experiment to make use of the Detection Task from Clarke and Hunt (2016). The logic behind this experiment was the same as in the experiments discussed in the main paper in that we investigated a potential asymmetry in the design that may alter the way in which participants approached this task.

Introduction
In this experiment, we made the probability of the target appearing in one of the two locations higher (80%) than in the other (20%). When predicting which of two events is about to occur (for instance, whether a blue or yellow light will illuminate on a particular trial), people tend to match the underlying probability of each event, a tendency called probability matching (Koehler and James 2010; Goodnow 1955; Vulkan 2000). If event A occurs 80% of the time and event B only 20% of the time, the best course of action is to predict event A every time, as this would result in an average accuracy rate of 80%. People, in general, instead tend to select each option in proportion to its likelihood of success, yielding an average success rate (in this example) of only 68%.
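The arithmetic behind these two accuracy figures can be checked directly. The snippet below is purely illustrative, using the 80/20 example probabilities from the text:

```python
# Probability of each event: A occurs 80% of the time, B 20%.
p_a, p_b = 0.8, 0.2

# Maximising: always predict the more likely event, A.
maximising_accuracy = max(p_a, p_b)

# Probability matching: predict each event in proportion to its
# probability; a prediction is correct when it matches the outcome.
matching_accuracy = p_a * p_a + p_b * p_b

print(maximising_accuracy, matching_accuracy)  # 0.80 vs 0.68
```

The 12-point gap between the two strategies is what makes probability matching suboptimal in this setting.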
Probability matching may be due in part to a misunderstanding of probability (Reimers, Donkin, and Pelley 2018), but it has also been argued to reflect a reasonable tendency to seek out and exploit patterns in sequential events (Gaissmaier and Schooler 2008; Wolford et al. 2004). Gao and Corter (2015) argue that when a person selects the more likely option on every trial (a strategy known as maximising), they accept a loss on a minority of trials (20% in this example). As demonstrated in Kahneman and Tversky (1984), people tend to find certain loss very aversive, so a strategy that includes a certain loss may be disregarded as a "solution" to the problem at hand (Goodnow 1955). The only way to avoid this loss is to attempt to detect and exploit any potential pattern. In the context of deciding where to fixate between two potential target locations, in both Morvan and Maloney (2012) and Clarke and Hunt (2016), the probability for each location to contain the target was 50%. Given the philosophical and experimental arguments described above, participants may react to this choice between equally likely options with idiosyncratic pattern-seeking behavior (Gaissmaier and Schooler 2008; Yellott 1969; Wolford et al. 2004). In Clarke and Hunt (2016), this may have interfered with participants using information about their own ability to make decisions. Instead, they may have been attempting to figure out patterns in the sequence of targets to make better guesses about which one was likely to be the target on each trial. Adding a bias for one option to be more likely may cater to people's tendency to seek out, and attempt to exploit, patterns. It also breaks the Buridan's Ass deadlock and makes the decision about which of two goals to attempt easier.
In this experiment, unequal probability was introduced into the paradigm used by Clarke and Hunt (2016) in order to investigate 1) whether participants would make use of probability information in deciding where to fixate, and 2) whether probability manipulations might facilitate the decision about whether to fixate between two potential target locations or to look directly at one of them. If so, it would suggest that at least part of the reason for the poor decisions observed previously was the (unnatural) balance between the two potential targets. Additionally, this experiment provides a useful replication of both the fixation decision results (Morvan and Maloney 2012; Clarke and Hunt 2016) and the probability matching tendency (Vulkan 2000) in a new context.

Power Analysis
For this experiment, data from Clarke and Hunt (2016) (N = 12), which acted as a substitute for the Symmetric condition, and an unpublished pilot study (N = 11), which followed the same rules as the Bias condition in the main experiment, were resampled in order to investigate how often participants would fixate one of the side boxes, given that it was more likely to contain the target, in the Bias condition. For the Symmetric condition, we followed the same rule as in the paper and classified the most likely box as the side box that participants fixated most often (Figure 16). The main interest for our purposes was the proportion of time participants fixated the most likely box. The data were coded so that each fixation was classified as either being to the most likely box or not. We then resampled (with replacement) these data by selecting a random N participants from each condition (ranging from 2 to 20) and then sampling 300 trials from these participants from the different data sets. This was done 5000 times to estimate the expected difference between the Symmetric and Bias conditions in terms of the proportion of fixations to the most likely side, and the associated certainty around these values.
As can be seen in Figure 17, the uncertainty surrounding the estimate of the difference between the groups appears to plateau at around 15 participants. The shaded region represents a 95% Highest Density Interval (HDI) for the distribution of differences simulated through resampling. As such, the sample size of 18 in the main experiment appears ample to detect this difference, and increasing the sample size beyond this value does not add to the certainty.
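In outline, this resampling procedure looks like the following. This is an illustrative Python sketch, not the original analysis: the fixation probabilities are made-up placeholders standing in for the pilot and Clarke and Hunt (2016) data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-trial probabilities of fixating the most-likely box
# (placeholders, not the actual pilot data).
P_BIAS, P_SYMMETRIC = 0.6, 0.4

def resampled_difference(n_participants, n_trials=300):
    """One resample: difference in the mean proportion of most-likely
    fixations between the Bias and Symmetric conditions."""
    bias = rng.binomial(n_trials, P_BIAS, n_participants) / n_trials
    symm = rng.binomial(n_trials, P_SYMMETRIC, n_participants) / n_trials
    return bias.mean() - symm.mean()

def interval_width(n_participants, n_resamples=2000):
    """Width of the 95% interval around the condition difference."""
    diffs = [resampled_difference(n_participants) for _ in range(n_resamples)]
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return hi - lo

# The interval narrows with N and plateaus around 15 participants.
widths = {n: interval_width(n) for n in (2, 10, 15, 18)}
```

As in the throwing power analysis, the quantity of interest is how the interval width behaves as a function of N, not the simulated difference itself.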

Participants
Sixteen participants (two male) were recruited from the University of Aberdeen community, with an average age of 22.75 years (range 19 to 29). Each participant was reimbursed £10 for their time.

Procedure
The experiment followed a similar procedure to the "Detection Task" from Clarke and Hunt (2016). In the first session, we measured visual acuity, which took approximately 30 minutes. This was followed by a second session, in which the participants performed the actual decision task, lasting approximately 40-50 minutes.
The experiment took place in a darkened room on a desktop computer. An Eyelink 1000 (version 4.594; SR Research Ltd, Mississauga, Ontario, Canada) was used to record eye position at 1000Hz. In each session, a 5-point calibration was carried out, with additional calibration and validation sequences prior to each block and whenever participants had broken fixation ten times cumulatively or for five trials in a row. The stimuli were displayed on a CRT monitor (resolution 1920 × 1080 pixels) using Matlab 7.9.0 (R2009b) with Psychtoolbox (Brainard 1997; Pelli 1997) and EyelinkToolbox functions (Cornelissen, Peters, and Palmer 2002). A chin rest with forehead bar was used to ensure participants maintained a viewing distance of ≈ 47cm.
In the first session, participants were instructed to remain fixated at the centre and identify a letter that would appear in one of two boxes. At the start of each trial, participants fixated a central black cross on a grayscale background and pressed the spacebar. The cross was flanked by two boxes which were a lighter shade of grey and occupied 1° of visual angle. After a stable fixation had been maintained for 700ms, the target would appear in one of the two boxes for 500ms. The target was a white letter, 0.4° of visual angle, drawn using the Sloan font, as these letters are generally of equal recognisability at different viewing angles (Sloan, Rowland, and Altman 1952). Ten letters were used, and one was selected randomly on each trial to be the target letter. Participants were then presented with a screen prompting them to report which letter was presented by clicking on the corresponding character. An illustration of each trial in Session 1 can be seen in Figure ??. The boxes were presented on either side of the fixation cross at several different eccentricities (3.1°, 4.3°, 5.8°, 7.5°, 9.3°, 11.1°, 12.5°, & 13.7°). Each eccentricity was repeated 12 times in a row before moving on to another eccentricity (the order of these sets of 12 was random), for a total of 96 trials in a block.
After each block, participants were offered a break before recalibrating and moving on to the next block. Participants completed four of these blocks. The data from this first session were used to tailor the separations that would be used for the second session. A "switch-point" was calculated for each participant based on their Session 1 performance. This was then used as an anchor point from which to calculate the other separations to be used in Session 2. The switch-point for each participant was the separation at which the participant was 68.5% accurate. The accuracy level of 68.5% is the mid-point between 55% and 82%; these two values are the points at which participants should switch from fixating the central box to the side box in the Symmetric and Bias conditions, respectively (see below for more details).
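One way to recover such a switch-point is to invert a psychometric function fitted to Session 1 accuracy. The sketch below uses a toy logistic with made-up parameters, purely to illustrate the idea; the real analysis used each participant's fitted curve.

```python
import math

def logistic_accuracy(separation, midpoint=8.0, slope=-0.5, floor=0.1):
    """Toy psychometric function: accuracy falls from ~1 towards the
    10% guessing floor as separation (degrees) increases. The parameter
    values are hypothetical, not fitted values from the experiment."""
    return floor + (1 - floor) / (1 + math.exp(-slope * (separation - midpoint)))

def switch_point(target=0.685, lo=0.0, hi=25.0):
    """Bisection search for the separation at which accuracy crosses
    `target` (68.5%, the midpoint of the 55% and 82% criteria)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if logistic_accuracy(mid) > target:
            lo = mid  # still above target: switch-point lies further out
        else:
            hi = mid
    return (lo + hi) / 2

sp = switch_point()  # separation (deg) at which accuracy == 68.5%
```

The remaining test separations would then be placed symmetrically around `sp`, as described in the next paragraph.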
From this switch-point, six other separations were calculated at ±1°, 2°, and 3°. Two further separations acted as anchors, one very large (19.4°) and one very small (1.9°). In Session 2, participants started by fixating a cross that appeared above where the boxes would appear. The cross would appear at the midpoint between the centre box and either the left or right box with equal probability. Once they had fixated the cross, they were instructed to press the spacebar. After a stable fixation was detected for 700ms, three boxes were presented (Figure 19 shows the sequence for every trial in Session 2). One box would appear in the centre, with the other two spaced equally on either side, with separations calculated as above from each participant's Session 1 performance. Once these boxes had appeared, participants were instructed to fixate one of the three boxes. Participants were told that the target would never appear in the central box; however, they could still choose to fixate this location. After they had fixated one of the boxes, the letter appeared in one of the side boxes for 500ms. The 10 letter stimuli were then presented on screen for participants to select which letter they had seen. Each separation appeared 10 times within a block, for a total of 90 trials. The order of separations was randomised (rather than blocked, as it was in Session 1).
Prior to each block, participants were told how likely each box was to contain the target. There were two levels of probability; one in which each box was equally likely to contain the target on each trial (Symmetric), and one in which one side would contain the target 80% of the time (Biased). Each participant took part in both the Symmetric and Biased conditions. Participants completed one condition for 4 blocks before moving on to the other condition. This order was counterbalanced across participants. Additionally, the side that was more likely to contain the target was also counterbalanced, to control for any bias to look to one side over the other.

Optimal Strategy
The optimal strategy is to fixate whichever of the three boxes maximises expected accuracy. We refer to a participant's expected accuracy had they followed the optimal strategy as "optimal accuracy". Using the psychometric curves fit to each participant's Session 1 data, we can calculate how likely a given person is to detect the target at various distances. This was then used to predict how accurate that same person would be if they had fixated one of the side boxes, or the centre box, using their average accuracy for a given distance from the left and right box (Bl and Br respectively) in Session 1. The fixation choice that gave the greatest expected accuracy was then selected as the optimal decision for the respective box separation. As probability was also a factor in this experiment, this had to be accounted for by multiplying the chance of detecting the target in each box by the probability that the box would contain the target (Pl and Pr). The formula for expected accuracy is therefore: (Bl × Pl) + (Br × Pr). Expected accuracy given optimal fixations differs between the Symmetric and Biased conditions. For example, in the Symmetric condition, participants could expect a 55% success rate if they fixated the optimal location when the targets were far apart. This value comes from assuming they would be ≈ 100% accurate for the fixated box, and at chance level (10%) for the non-fixated box, giving (1 × 0.5) + (0.1 × 0.5) = 0.55. To get the corresponding limit for the Bias condition, we simply change the chance for each box to contain the target (additionally, we assume that the participant would fixate the most likely box), giving (1 × 0.8) + (0.1 × 0.2) = 0.82. The same formula can be used to calculate expected accuracy had the participant fixated the central box; in this case Bl = Br, and these values are simply entered into the formula with the appropriate values for P, as demonstrated above.
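The expected-accuracy computation can be written out directly. This short snippet simply evaluates the (Bl × Pl) + (Br × Pr) formula with the limiting values from the worked example above:

```python
def expected_accuracy(b_left, b_right, p_left, p_right):
    """Expected accuracy for a fixation choice: the chance of detecting
    the target at each box location, weighted by the probability that
    the target appears there: (Bl x Pl) + (Br x Pr)."""
    return b_left * p_left + b_right * p_right

# Far-apart targets, fixating one side box: ~100% detection for the
# fixated box, chance level (10%) for the other.
symmetric = expected_accuracy(1.0, 0.1, 0.5, 0.5)  # 55% ceiling, Symmetric
biased    = expected_accuracy(1.0, 0.1, 0.8, 0.2)  # 82% ceiling, Biased
```

For the central-box choice, the same function applies with `b_left == b_right`, as noted in the text.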

Analysis
All analyses for this experiment follow the same procedures as described in previous sections.

Results
Figure 20: These boxplots show the proportion of the time that participants fixated the centre (left panel), most-likely (central panel), and least-likely (right panel) box. In the Bias condition, the most likely box was the box with an 80% probability of containing the target. In the Symmetric condition, the most likely box was whichever side box a given participant fixated the most. The Close and Far distinction is based on when participants should switch from fixating the central box to a side box. Note that for some participants, expected performance differences between the Centre and Side strategies at the closest separation were negligible. However, all participants should have fixated the side boxes in the Far condition, where the performance advantage of doing so was substantial.
The choices made by participants on where to look are summarized in Figure 20. The optimal strategy in this situation is to fixate the central box when the two side boxes are near (i.e., when their distance from centre, ∆, is closer than each participant's switch point). When they are far (i.e., ∆ is greater than the switch point), a participant behaving optimally should fixate either of the side boxes in the Symmetric condition, and the most likely box in the Biased condition. The first panel of Figure 20 clearly shows that our participants did not follow this strategy. However, while adding a bias to the location of the target did not help participants to behave optimally, we can see that it did have an effect on their behavior, as they were much less likely to fixate the central box. Furthermore, when fixating one of the two side boxes, they fixated the box that was most likely to contain the target (Figure 20, central and right panels) almost all the time. We can also look at how the choice of where to look influenced participants' accuracy. We use the target detection models from Session 1 to calculate the expected accuracy given either (i) the optimal, (ii) the counter-optimal, or (iii) the observed strategy. This measure eliminates the variance present in the actual accuracy data due to the random location of the target and participants guessing the correct answer by chance. These data are summarised in Figure 21. We can see that in the Symmetric condition, with the exception of one individual, our participants behaved in a way that gave rise to expected accuracies much closer to the counter-optimal strategy than the optimal one. Expected accuracy was higher in the Biased condition, with four participants reaching optimal performance and the rest ending up somewhere between optimal and counter-optimal.

Modelling
ProbModel <- brm(
  Ml_fix ~ (bias_type + dist_type)^2 + (dist_type * bias_type | participant),
  data = df_model,
  family = "bernoulli",
  prior = Rewards_priors,
  chains = 1,
  iter = n_iter,
  warmup = n_warmup
)
The results of this model suggest that our participants were sensitive to the probability information (see the model output figure). In the Biased condition, the average participant fixated the most likely target 58.1% of the time (95% HPDI of [31.3%, 82.2%]), compared to 36.9% (95% HPDI of [17.5%, 54.7%]) in the Symmetric condition. The width of these intervals reflects a high degree of uncertainty in the fixed effects, due to the range of behaviours exhibited by participants. Nonetheless, the HPDI on the difference between these two conditions, [-0.7%, 41.4%], is largely positive, and we can be reasonably confident (P(difference > 0|data) = 95.9%) that the most likely target is fixated more frequently in the Biased condition. The distance between the square targets did not appear to have any consistent effect in the Symmetric condition; however, there was a small decrease in fixations towards the most likely box when the boxes were far apart in the Biased condition (dropping from 61.4%, 95% HPDI of [34.4%, 83.6%], to 54.8%, 95% HPDI of [30.5%, 80.3%]). As such, the difference between the conditions was more pronounced in the Close condition (P(Bias > Symmetric|data) = 96.9%) than in the Far condition (P(Bias > Symmetric|data) = 94.8%). Overall, this shows that participants generally made use of the probability information in deciding where to fixate.

Discussion
The results of this study suggest that participants were sensitive to the difference in the probability of each side containing the target, and responded by fixating the more likely location more often in the Biased condition than in the Symmetric condition. However, the presence of a bias did not lead participants to adopt a more optimal strategy. It would appear that the introduction of a bias towards one target distracted participants from discovering the optimal strategy; instead, they appeared to focus primarily on the probability aspect of the task.