Abstract
Recent studies have shown that biased search results can produce substantial shifts in the opinions and voting preferences of undecided voters – a phenomenon called the “search engine manipulation effect” (SEME), one of the most powerful list effects ever discovered. We believe this is so because, unlike other list effects, SEME is supported by a daily regimen of operant conditioning. When people conduct searches for simple facts (86% of searches), the correct answer invariably turns up in the top position, which teaches users to attend to and click on high-ranking search results. As a result, when people are undecided, they tend to formulate opinions based on web pages linked to top search results. We tested this hypothesis in a controlled experiment with 551 US voters. Participants in our High-Trust group conducted routine searches in which the correct answer always appeared in the first search result. In our Low-Trust group, the correct answer could appear in any search position other than the first two. In all, participants had to answer five questions during this pre-training, and we focused our analysis on people who answered all the questions correctly (n = 355) – in other words, on people who were maximally impacted by the pre-training contingencies. A difference consistent with our hypothesis emerged between the groups when they were subsequently asked to search for information on political candidates. Voting preferences in the High-Trust group shifted toward the favored candidate at a higher rate (34.6%) than voting preferences in the Low-Trust group (17.1%, p = 0.001).
In recent years, people around the world have become increasingly dependent on search engines to obtain information, including information that helps them make decisions about complex and socially important matters, such as whom to vote for in an upcoming election (Arendt & Fawzi, 2018; Trevisan et al., 2016; Wang et al., 2017). An increasing body of evidence also shows that search results that favor one candidate, cause, or company – by which we mean that they link to web pages that make that candidate, cause, or company appear superior to competitors – can have a rapid and dramatic impact on people’s opinions, purchases, and votes (Agudo & Matute, 2021; Allam et al., 2014; Epstein & Robertson, 2015, 2016; Epstein et al., 2022; Ghose et al., 2014; Joachims et al., 2007; Knobloch-Westerwick et al., 2015; Pan et al., 2007; Prinz et al., 2017; Wilhite & Houmanfar, 2015; cf. Feezell et al., 2021). In five randomized, controlled experiments with 4556 participants in two countries, Epstein and Robertson (2015) showed that search rankings favoring one political candidate can rapidly produce dramatic shifts in the opinions and voting preferences of undecided voters, in some demographic groups producing vote margins as high as 80% after just one online search. They labeled this new form of influence the “search engine manipulation effect” (SEME) and demonstrated that these shifts can occur without people being aware that they have been manipulated. SEME has been replicated several times since 2015 (Agudo & Matute, 2021; Draws et al., 2021; Epstein et al., 2022; Eslami et al., 2017; Haas & Unkel, 2017; Knobloch-Westerwick et al., 2015; Ludolph et al., 2016; Pogacar et al., 2017; Trielli & Diakopoulos, 2019).
Moreover, since search results are ephemeral experiences (West, 2018; cf. McKinnon & MacMillan, 2018) – fleeting, often personalized, experiences that are generated spontaneously, impact the user, and subsequently disappear without being stored anywhere – they can impact millions of users every day without leaving a paper trail for authorities to trace (Epstein, 2018a). One cannot go back in time to determine what ephemeral content people have been shown, even if one has access to the algorithm that generated that content (Hendler & Mulvehill, 2016; Paudyal & Wong, 2018; cf. Taylor, 2019).
The fact that more than 90% of searches conducted in almost every country in the world are conducted on just one search engine (Google) (StatCounter GlobalStats, n.d.) raises special concerns about SEME (Epstein, 2018a). It means that a single company – one that is unregulated, highly secretive, not accountable to the public, and that has, for all practical purposes, no competitors (Singer, 2019) – could be producing systematic changes in the thinking of billions of people every day with no way for other parties to counteract its influence, or even, for that matter, to detect and document that influence (Hazan, 2013; Ørmen, 2016; see S1 Text for additional information about bias in search results).
Why is SEME so large? It is a list effect, but it seems different, both qualitatively and quantitatively, from previously studied list effects. Researchers have been studying list effects, such as the serial position effect, for more than a century (Ebbinghaus, 2013; Mack et al., 2017; Murre & Dros, 2015), and such effects are sometimes substantial. For example, when Candidate A’s name consistently appears above his or her opponent’s name on a ballot – perhaps simply because the names are in alphabetical order – this tends to boost Candidate A’s share of the votes by 3–15% – an effect called the “ballot-order effect” (Grant, 2017; Ho & Imai, 2008; Koppell & Steen, 2004). While counterbalancing the order of names on ballots can easily be done – even for paper ballots – it has rarely been done (Beazley, 2013).
The serial position effect itself can increase the likelihood of a word being recalled from a list; words at the beginning of a list (the primacy effect) and the end of a list (the recency effect) are usually recalled more often than words in the middle (Murdock, 1962). The ranking of content in lists can even affect juries’ opinions (Anderson, 1958; Carlson & Russo, 2001), the opinions of judges in singing contests (Bruine de Bruin, 2005), and wine preferences (Mantonakis et al., 2009).
SEME might be large, at least in part, because people generally trust computer output more than they trust content in which the human hand is evident (Bogert et al., 2021; Logg et al., 2019). Most people have no idea how computers work or what an algorithm is; as a result, they are inclined to view computer-generated content as impartial or objective (Fast & Jago, 2020; Logg et al., 2018). This trust has also been driven by the positive image Big Tech companies have had for many years. That trust has been tarnished in recent years because of data breaches and other scandals (Burt, 2019; Fortune, n.d.; Kramer, 2019), and leaks of documents and videos from these companies, along with reports by whistleblowers, have shown that the algorithmic output we see is frequently adjusted by employees. At Google, search results are apparently adjusted by employees at least 3200 times a year (Google, n.d.; Meyers, 2019).
Trust in companies and trust in computer output can be driven by a number of factors – marketing and advertising, for example (Danbury et al., 2013; Sahin et al., 2011), or the fact that nearly all the services we receive from Big Tech companies appear to be free (Epstein, 2016; Nicas et al., 2019). It is not clear how SEME can be accounted for by such trust, however. How can we account for the fact that high-ranking search results are more trusted than lower-ranking results (Edelman, 2011; Marable, 2003; Pan et al., 2007)? Why is the preference for high-ranking results so strong – strong enough not only to influence purchases (Ghose et al., 2014; Joachims et al., 2007) but to have a large and almost immediate impact on opinions and voting preferences?
The preference for high-ranking search results might be due in part to what people sometimes call “laziness” or “convenience.” People are busy, so, sometimes at least, they attend to and click on a high-ranking search result because doing so saves time. As one might expect, eye-tracking and other studies show that people generally attend to the first results displayed on a screen before they scroll down or click to another page (Athukorala et al., 2015; Nielsen & Pernice, 2010; Schultheiß & Lewandowski, 2020). This finding is comparable to the attention people pay (or at least used to pay) to above-the-fold content in newspapers. The limited attention span of users can be problematic for longer pages; people want information that gets to the point and are unlikely to read long web pages filled with text (Nielsen, 2010; Weinreich et al., 2008).
Convenience might contribute to some extent to the large impact of SEME, but in the present study, we explore another possibility – namely, that the power of SEME derives in part from the distinctive way in which people interact with search results. In an authoritative list of the 100 most common search terms people use (Soulo, n.d.), 86% of the search queries were one or two words long and simply directed users to simple facts or specific websites – search terms such as “news,” “speed test,” and “nfl scores.” The correct website invariably turns up in the highest position of the search results that are generated; frequently, that same information occurs in the second or third positions, as well. Other lists of common search terms are also dominated by queries that tend to produce simple factual answers in the top position of search results (Hardwick, n.d.; Siege Media, n.d.).
Because, day after day, the vast majority of search queries produce simple factual answers in the highest position of search results (Rose, 2018), we all learn, over and over again, that what is higher in the list is better or truer than what is lower in the list. To be more specific, we usually attend to and click on the highest-ranking search result because doing so is reinforced by the appearance of the correct answer to our query. Almost any reply to a verbal inquiry strengthens inquiries of that type, but a correct answer to an inquiry is an especially powerful reinforcer, presumably because it makes a speaker more effective (Skinner, 1957; cf. Kieta et al., 2018), and when the same source provides a series of correct answers over time, the value and potential power of those answers increases. As B. F. Skinner put it in his classic text on verbal behavior, “The extent to which the listener judges the response as true, valid, or correct is governed by the extent to which comparable responses by the same speaker have proved useful in the past” (Skinner, 1957, p. 427).
When, at some point, people finally enter an open-ended search query that either has no definitive answer (“trump”) or that seeks an opinion (“what’s the best restaurant in Denver”), they will tend both to attend to and click on high-ranking search results. We are speculating, in effect, that SEME is a large effect because it is supported by a daily regimen of operant conditioning. Although the idea that operant conditioning plays a role in voting behavior is not new (Visser, 1996), in this paper, we are emphasizing a kind of operant conditioning that never stops and that people are entirely unaware of – specifically, one that reinforces attending to and clicking on high-ranking search results that appear in response to routine factual searches.
We test this hypothesis with a randomized, controlled experiment – a modified version of the experimental procedure used by Epstein and Robertson (2015) in their original SEME experiments (see S2 Text for details about the procedure). The present study added one feature to the Epstein and Robertson (2015) procedure: Before beginning the political opinion study, participants experienced a pre-training procedure that either reinforced or extinguished the tendency to attend to and click on high-ranking search results. In theory, extinguishing that tendency should (a) change the pattern of clicks that typifies search behavior, and (b) reduce the impact that statistically biased search results have on people’s opinions and voting preferences.
Method
Participants
A total of 551 eligible US voters from 46 states were recruited through Amazon Mechanical Turk (MTurk, accessed through a company called Cloud Research, which screens out bots) and were paid a small fee (US$7.50) to participate. Fifty-nine point nine percent (n = 330) of participants identified themselves as female and 40.1% (n = 221) as male. The mean age was 38.3 (SD = 11.7). Seventy-three point nine percent (n = 407) of participants identified themselves as White, 8.2% (n = 45) as Black, 6.5% (n = 36) as Hispanic, 6.4% (n = 35) as Asian, 4.4% (n = 24) as Mixed, and 0.7% (n = 4) as Other. A majority of participants were college educated, with 55.2% (n = 304) reporting having received a bachelor’s degree or higher.
Procedure
See S3 Text in our Supplementary Material for our statement of compliance with current ethical standards.
The experiment was conducted online, and participants identified themselves using their MTurk Worker IDs; we had no knowledge of their names or email addresses. Before the experiment began, participants were asked a series of demographic questions and were then given instructions about the experimental procedure (see S4 Text). In compliance with APA and HHS guidelines, participants also clicked to indicate their informed consent to participate in the study. We also asked participants how familiar they were with the two candidates identified in the political opinion portion of the study.
The initial dataset contained 806 records and was cleaned as follows: Records were deleted in which no clicks were recorded, in which people’s reported familiarity with either candidate exceeded 3 on a scale from 1 to 10 (where 1 was labeled “Not at all” and 10 was labeled “Quite familiar”), or in which people reported English fluency below 6 on a scale from 1 to 10 (where 1 was labeled “Not fluent” and 10 was labeled “Highly fluent”).
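To make these exclusion rules concrete, the following is a minimal sketch of the cleaning step in Python, assuming the raw records sit in a pandas DataFrame; the column names (click_count, familiarity_cameron, familiarity_miliband, english_fluency) are hypothetical placeholders, not the study's actual field names.

```python
import pandas as pd

def clean_records(df: pd.DataFrame) -> pd.DataFrame:
    """Drop records according to the exclusion rules described above.
    Column names are hypothetical placeholders, not the study's actual fields."""
    keep = (
        (df["click_count"] > 0)                 # at least one recorded click
        & (df["familiarity_cameron"] <= 3)      # familiarity with either candidate
        & (df["familiarity_miliband"] <= 3)     #   no higher than 3 on the 1-10 scale
        & (df["english_fluency"] >= 6)          # self-reported fluency of 6 or higher
    )
    return df[keep].reset_index(drop=True)
```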
The experiment itself had two main parts (Fig. 1).
Pre-Training
In the pre-training portion of the experiment, participants were randomly assigned to either a High-Trust (n = 312) or a Low-Trust (n = 239) group. Each group was given five pre-training trials in which they were shown a search question that had a simple factual answer (such as “What is the capital of Lesotho?”) (see S5 Text for details), and they were then given 2 minutes to find the answer using the Kadoodle search engine, which closely simulates the functioning of the Google search engine. All participants had access to the same search results (on two search result pages, each listing six search results) and web pages (which could be accessed by clicking on the corresponding search result). Only the order of the search results varied between the groups.
In the High-Trust group, the answer could always be found by clicking on the highest-ranking result – just as it is virtually always found in that position on the leading search engine. In the Low-Trust group, the correct answer could be found in any of the 12 search result positions except the first two. At the end of 2 minutes, participants were given a five-option, multiple-choice question and were asked to provide the correct answer to the question they were shown earlier. They were then immediately told whether their answer was correct or incorrect. In theory, the pre-training trials in the High-Trust group were strengthening the user’s tendency to attend to and click on the highest-ranking search result, and the pre-training trials in the Low-Trust group were either (a) extinguishing tendencies to attend to and click on high-ranking search results, (b) reinforcing tendencies to attend to and click on low-ranking search results (differential reinforcement of alternative behavior), or (c) having both effects.
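The two pre-training contingencies can be summarized schematically as follows (an illustrative sketch, not the code that drove the Kadoodle simulation), with the correct answer placed in position 1 for the High-Trust group and in a randomly chosen position from 3 through 12 for the Low-Trust group:

```python
import random

rng = random.Random(0)  # fixed seed so the illustration is reproducible

def answer_position(group: str) -> int:
    """Return the 1-based rank at which the correct answer appears on a trial.

    High-Trust: always the top result, mirroring routine factual searches.
    Low-Trust: any of the 12 positions except the first two.
    """
    if group == "high_trust":
        return 1
    if group == "low_trust":
        return rng.choice(range(3, 13))  # positions 3 through 12
    raise ValueError(f"unknown group: {group}")

# Five pre-training trials per group
print([answer_position("high_trust") for _ in range(5)])  # [1, 1, 1, 1, 1]
print([answer_position("low_trust") for _ in range(5)])   # e.g., [9, 8, 5, 6, 8]
```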
SEME Experiment
Immediately following the pre-training, the participants in each of the trust groups were randomly assigned to three sub-groups: Pro-Candidate-A, Pro-Candidate-B, or a control group in which neither candidate was favored. The election we used was the 2015 election for the Prime Minister of the United Kingdom; the candidates were David Cameron and Ed Miliband. We chose this election to try to assure that our participants – all from the US – would initially be “undecided” voters. On a 10-point scale, our participants reported an average familiarity level of 1.3 (0.6) for David Cameron and 1.3 (0.6) for Ed Miliband.
All participants (in each of the six sub-groups) were then given basic instructions about the “political opinion study” in which they were about to participate. Then they read brief, neutral biographies of both candidates (approximately 150 words each, see S6 Text), after which they were asked eight questions about any preferences they might have for each candidate: their overall impression of each candidate, how likeable each candidate was, and how much they trusted each candidate. We also asked which candidate they would likely vote for if they had to vote today (on an 11-point scale from –5 for one candidate to +5 for the other, with the order of the names counterbalanced from one participant to another), and, finally, which of the two candidates they would in fact vote for today (forced choice).
They were then given up to 15 minutes to use our mock search engine to conduct research on the candidates. All participants had access to five pages of search results, six results per page (see S7 Text for details). All search results were real (from the 2015 UK election, obtained from Google.com), and so were the web pages to which the search results linked. The only difference between the groups was the order in which search results were shown. In the Pro-Candidate-A group, higher ranking search results linked to web pages that favored Cameron (Candidate A), and the lowest ranking search results (on the last pages of search results) favored Miliband (Candidate B). In the Pro-Candidate-B group, the order of the search results was reversed. In the control group, pro-Cameron search results alternated with pro-Miliband search results (and the first search result had a 50/50 chance of favoring either candidate), so neither candidate was favored. Prior to the experiment, the “bias” of all web pages had been rated on an 11-point scale from –5 to +5 (with the names of the candidates counterbalanced) by five independent judges to determine the extent to which a web page favored one candidate or another. The mean bias rating for each web page was used in determining the ranking of search results.
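The following sketch illustrates one way rankings of this kind could be derived from the judges' ratings; the data and the sorting rule are our own illustration of the general idea (mean bias rating determines rank), not the authors' actual procedure.

```python
from statistics import mean

# Hypothetical example: each page has five judges' ratings on the -5..+5 scale,
# where (by assumption here) positive values favor Candidate A (Cameron) and
# negative values favor Candidate B (Miliband).
pages = {
    "page_01": [4, 5, 3, 4, 4],
    "page_02": [-3, -4, -2, -3, -3],
    "page_03": [1, 0, 2, 1, 1],
}

def rank_results(pages: dict, favored: str) -> list:
    """Order pages so that results favoring `favored` appear first."""
    sign = 1 if favored == "A" else -1
    return sorted(pages, key=lambda p: sign * mean(pages[p]), reverse=True)

print(rank_results(pages, favored="A"))  # most pro-Cameron pages first
print(rank_results(pages, favored="B"))  # most pro-Miliband pages first
```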
When participants chose to exit from our search engine, they were asked those eight preference questions again, and they were then asked whether anything “bothered” them about the search results they had been shown. If they answered “yes,” then they could type the details about their concerns. This was our way of trying to detect whether people spotted any bias in the search results they saw. We could not ask about bias directly, because leading questions of that sort generate predictable and often invalid answers (Loftus, 1975). We subsequently searched textual responses for words such as “bias,” “skewed,” or “slanted” to identify people in the bias groups who had apparently noticed the favoritism in the search results we showed them.
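A minimal sketch of that keyword scan, assuming the free-text answers are held as a list of strings; the actual term list and matching rules used in the study may have differed.

```python
import re

# Assumed term list for illustration; the study's actual list may differ.
BIAS_TERMS = re.compile(r"\b(bias|biased|skew|skewed|slant|slanted)\b", re.IGNORECASE)

def flagged_for_bias(responses: list[str]) -> list[int]:
    """Return indices of responses that appear to mention bias in the results."""
    return [i for i, text in enumerate(responses) if BIAS_TERMS.search(text)]

responses = [
    "The pages loaded slowly.",
    "The results seemed slanted toward one candidate.",
]
print(flagged_for_bias(responses))  # [1]
```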
Results
We focused our data analysis on people in the two pre-training groups who answered all five of the pre-training questions correctly. These individuals not only demonstrated high compliance with our instructions; they also presumably were most highly impacted by the pre-training contingencies. On any given trial in which people did not find the correct answer, they presumably were not impacted by the low-trust contingencies.
For comparison purposes, we also analyzed data from people who scored lower than 100% on the pre-training questions; the bulk of this analysis is included in the Supplementary Material of this paper. As one might expect, participants in the High-Trust group answered our multiple-choice questions more accurately (M_Correct = 4.8 out of 5 [0.4]) than participants in the Low-Trust group did (M_Correct = 4.1 [1.0]; t = 10.37, p < 0.001, d = 0.92) (also see S1 Fig.). This was presumably because Low-Trust participants had more trouble finding the correct answer in the allotted 2 minutes. Focusing on the high-compliance participants reduced the number of people in the High-Trust group from 312 to 255 and reduced the number of people in the Low-Trust group from 239 to 100.
Please note that we did not exclude any participants from the experiment; rather, we chose to analyze separately data we obtained from high-compliance participants – that is, people who were most likely to have been impacted by the training contingencies – and low-compliance participants – that is, people who were less likely to have been impacted by the training contingencies.
Pre-Training
Participants in the High-Trust group spent significantly more time on the web pages that were linked to the first two search results (M = 169.7 s [124.9]) than participants in the Low-Trust group did (M = 135.7 s [86.1]; t = 2.92, p = 0.004, d = 0.32). Participants in the High-Trust group also clicked more frequently on the web pages linked to the first two search results (M = 5.9 [1.2]) than participants in the Low-Trust group did (M = 5.4 [1.5]; t = 3.00, p = 0.003, d = 0.37). Participants in the High-Trust group also spent substantially less time on each of the search engine results pages (M = 83.5 s [49.0]) than participants in the Low-Trust group did (M = 168.2 s [66.1]; t = –11.63, p < 0.001, d = 1.46). In other words, High-Trust group participants were attending more to the first two search results and spent less time searching in general.
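Comparisons of this kind are standard two-sample t-tests accompanied by Cohen's d. The sketch below shows how such a comparison can be computed; the data are simulated, and the pooled-variance formulation is our assumption (the paper reports only t, p, and d).

```python
import numpy as np
from scipy import stats

def compare_groups(high: np.ndarray, low: np.ndarray):
    """Two-sample t-test plus Cohen's d computed with the pooled SD."""
    t, p = stats.ttest_ind(high, low)  # equal-variance (Student's) t-test
    n1, n2 = len(high), len(low)
    pooled_sd = np.sqrt(((n1 - 1) * high.std(ddof=1) ** 2 +
                         (n2 - 1) * low.std(ddof=1) ** 2) / (n1 + n2 - 2))
    d = (high.mean() - low.mean()) / pooled_sd
    return t, p, d

# Simulated data loosely shaped like the time-on-page measure reported above
rng = np.random.default_rng(0)
high_trust = rng.normal(170, 125, size=255)
low_trust = rng.normal(136, 86, size=100)
print(compare_groups(high_trust, low_trust))
```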
SEME Experiment
Immediately following the pre-training trials, all participants transitioned to a standard SEME procedure, in which it appears that the Low-Trust pre-training impacted behavior in a number of ways.
The main finding in SEME experiments is that participants show little preference for one candidate or the other before they conduct their search, and that post-search, the preferences of the participants in the two bias groups tend to shift in the direction of the bias that was present in the search results they had been shown. SEME studies look at five different measures of this shift, the most important of which is called “vote manipulation power” or VMP (see S8 Text for how VMP is calculated). VMP is of special interest because it is a direct measure of the increase in votes produced by the bias. It is calculated from answers given to a forced-choice question we ask participants both pre- and post-search, namely “If you had to vote right now, which candidate would you vote for?”
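The exact VMP formula is given in S8 Text. As a rough illustration of the idea, and only as our own simplified reading of it, VMP can be computed as the percent increase, from the pre-search to the post-search forced-choice question, in the number of bias-group participants choosing the candidate favored by the rankings:

```python
def vmp(pre_votes_favored: int, post_votes_favored: int) -> float:
    """Percent increase in forced-choice votes for the favored candidate,
    from the pre-search to the post-search questionnaire.

    This is a simplified reading of the VMP idea; the exact formula the
    authors use is given in S8 Text and may differ in detail."""
    return 100.0 * (post_votes_favored - pre_votes_favored) / pre_votes_favored

# Hypothetical example: 80 bias-group participants chose the favored candidate
# pre-search, and 105 chose that candidate post-search.
print(round(vmp(80, 105), 1))  # 31.2
```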
Biased search results tend to produce substantial VMPs after a single search (Epstein & Robertson, 2015; Epstein et al., 2022). This finding was replicated in the present study; however, the bias-driven VMP in the High-Trust group (VMP = 34.6%, McNemar’s χ² = 23.56, p < 0.001) was substantially larger than the bias-driven VMP in the Low-Trust group (VMP = 17.1%, McNemar’s χ² = 1.56, p = 0.21 NS; between-group comparison: z = –3.25, p = 0.001) (see Table 1 and S1 Table for further details; cf. S2 and S3 Tables for low-compliance data; cf. S4 Table for high-compliance versus low-compliance VMP comparisons).
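The within-group χ² values reported above are McNemar's tests on the paired pre/post forced-choice votes. Below is a minimal sketch of that computation, using the standard discordant-pairs formula and made-up counts (not the study's data):

```python
from scipy import stats

def mcnemar_chi2(switch_to_favored: int, switch_away: int) -> tuple:
    """McNemar's chi-square (without continuity correction) for paired
    pre/post votes, using only the discordant pairs: participants who
    switched toward vs. away from the favored candidate."""
    chi2 = (switch_to_favored - switch_away) ** 2 / (switch_to_favored + switch_away)
    p = stats.chi2.sf(chi2, df=1)
    return chi2, p

# Hypothetical counts: 40 participants switched toward the favored candidate,
# 8 switched away from it.
print(mcnemar_chi2(40, 8))
```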
The different VMPs for the High- and Low-Trust groups can be explained by the different ways – all predictable from the pre-training session – these two groups interacted with our search engine in the political opinion portion of our study. Participants in the High-Trust group spent more time viewing the web page linked to the highest search result than participants in the Low-Trust group did (M_High = 60.9 s [58.1]; M_Low = 53.4 s [57.0]; t = 1.11, p = 0.27 NS; d = 0.13) (also see Fig. 2). In addition, participants in the High-Trust group clicked on the link to the first search result significantly more often than participants in the Low-Trust group did (M_High = 0.9 [0.4], M_Low = 0.8 [0.5], t = 2.18, p = 0.03, d = 0.22) (Fig. 3). Participants in the High-Trust group spent more time on web pages linked to search results on the first page of search results than participants in the Low-Trust group did (M_High = 241.5 s [193.9], M_Low = 204.6 s [153.2], t = 1.71, p = 0.09 NS, d = 0.21), and participants in the Low-Trust group spent more than twice as much time on web pages linked to search results past the first page of search results as participants in the High-Trust group did (M_Low = 51.0 s [51.8], M_High = 20.4 s [32.5], t = –5.51, p < 0.001, d = 0.71) (Fig. 4). Participants in the High-Trust group also clicked on search results on the first page of search results significantly more often than participants in the Low-Trust group did (M_High = 4.0 [1.6], M_Low = 3.6 [1.6], t = 2.54, p = 0.01, d = 0.25), and participants in the Low-Trust group clicked on search results past the first page of search results significantly more often than participants in the High-Trust group did (M_High = 0.5 [0.8], M_Low = 1.0 [1.1], t = –3.94, p < 0.001, d = 0.52) (Fig. 5). These differences presumably emerged because people in the Low-Trust group had learned in pre-training to attend to and click on lower-ranked search results that people in the High-Trust group tended to ignore.
Post-search, differences also emerged on most of the answers to the seven other preference questions. Pre-search, for question 7 – voting preference measured on an 11-point scale – we found no significant differences in mean ratings among the three sub-groups (pro-Cameron, pro-Miliband, and control) in either the High- or the Low-Trust condition (Table 2). Post-search, the mean ratings of the three sub-groups were significantly different in both the High- and Low-Trust conditions (Table 2).
Pre- vs. post-search shifts in ratings on the 11-point scale were consistent with the predicted impact of the bias, with pre/post gaps larger in the High-Trust group than in the Low-Trust group (Table 3). In the control group, pre/post shifts were minimal and non-significant (U = 1259.5, p = 0.82 NS).
Pre-search, we found no significant differences among the three sub-groups (pro-Cameron, pro-Miliband, and control) on their answers to any of the six opinion questions we asked about the candidates (S5 Table; see S6 Table for low-compliance data). Post-search, significant differences emerged for all six of those opinion questions for participants in both the High- and Low-Trust groups (S7 Table; see S8 Table for low-compliance data). Moreover, the net impact of biased search results on people’s opinions (that is, the change in opinions about the favored candidate vs. the change in opinions about the non-favored candidate) was always larger in the High-Trust group than in the Low-Trust group and always shifted opinions (for both groups) in a way that was advantageous to the favored candidate (S9 Table; see S10 Table for low-compliance data; cf. S11 and S12 Tables for control group comparisons). However, nearly all the High- versus Low-Trust differences between pre/post changes in opinions about the candidates were nonsignificant (S13 Table; see S14 Table for low-compliance data). See S9 Text for information about perceived bias in the SEME experiment.
Discussion
The present study supports the theory that operant conditioning contributes to the power that search results have to alter thinking and behavior. The fact that a large majority (about 86%) of people’s searches are for simple facts, combined with the fact that the correct answer to such queries invariably turns up in the highest-ranked position of search results, appears to teach people to attend to and click on that first result and, perhaps as a kind of generalization effect, to attend to and click on nearby search results in a pattern resembling one side of a generalization gradient. Both eye-tracking studies and studies looking at click patterns find those kinds of gradients for both attention and clicks (Athukorala et al., 2015; Chitika Insights, 2013; Cutrell & Guan, 2007; Dean, n.d.; Epstein & Robertson, 2015; Granka et al., 2004; Joachims et al., 2007; Kammerer & Gerjets, 2014; Lorigo et al., 2008; Pan et al., 2007; Schultheiß & Lewandowski, 2020). On the cognitive side, it could also be said that that daily regimen of operant conditioning is causing people to believe, trust, or have faith in the validity of high-ranking search results, and it is notable that people are entirely unaware that this regimen exists.
The fact that people generally believe that algorithms inherently produce objective and impartial output does not in and of itself explain the existence of that gradient of attention and responding. When, in the pre-training portion of the current experiment, we directed attention and clicks away from the top positions in the search list, we disrupted the usual gradient so that in the SEME portion of the study, attention was directed toward lower-ranking search results (in everyday language, we “broke the trust” people have in high-ranking results). As a result, the extreme candidate bias that was present in the search results we presented to participants in our two bias groups had less impact on the people in our Low-Trust pre-training group (VMP = 17.1%) than it did on the people in our High-Trust pre-training group (VMP = 34.6%, p = 0.001).
We note that if SEME is a large effect because of generalization, it is not the simple kind of generalization that occurs when wavelengths of light or sound are altered (Mis et al., 1972). That is because the nature of the task in the training situation is inherently different from the nature of the task in what we might call the test situation (the SEME experiment) – and this observation applies both to the present experiment and to the way people use search engines on a daily basis. In the pre-training phase of our experiment, people are searching for simple facts, and the reinforcing consequence is the correct answer; this is also the case when people are searching for simple facts on real search engines. In the test situation, however, there is no correct answer; the user is asking an open-ended question on an issue about which people might have a wide range of different opinions. In other words, there is a mismatch between the informational properties of the training and test settings (Hogarth et al., 2015). This problem has long been a challenge in work with various impaired populations, in which new behavior taught in a classroom setting fails to occur in, say, the home setting; hence the long-running concern with “transfer of training” in the behavior-analytic literature (Baldwin & Ford, 1988). Although a simple-fact query might be easily discriminable from an opinion query – at least most of the time – the present experiment sheds no light on this issue. We can assert only that pre-training that favors lower-ranked search results causes people to look more closely at lower-ranked search results, and that in turn reduces the magnitude of the shift in voting preferences.
As noted earlier, convenience might also play a role in the power that SEME has to shift opinions and voting preferences, but if that were the main or even a significant factor in explaining SEME’s power, it seems unlikely that the Low-Trust training procedure we employed in the present experiment would have disrupted performance as much as it did. Breaking the pattern of reinforcement that usually supports search behavior seemed to override any role that convenience (that is, search position alone) might play in SEME.
Limitations and Future Research
At first glance, it might appear to be remarkable that so little retraining – a mere five search trials in which the correct answer to a search query could appear anywhere among 12 search results other than in the top two positions – could interfere with years of conditioning that reinforced attending to and clicking on the highest-ranking search items. Presumably, with more training trials, we could have reduced the impact of our biased search results far more than we did in the present procedure. But bear in mind that attending to and clicking on the highest-ranking search results has been consistently reinforced on a nearly continuous schedule – the kind of schedule that often makes behavior highly vulnerable to disruption when reinforcement is discontinued (Kimble, 1961; Lerman et al., 1996; Mackintosh, 1974). It is especially easy to disrupt behavior when it has been continuously reinforced in discrete trials (Nevin, 2012), which is always the case for search behavior on a search engine.
The present study is also limited in how it motivates participants to express their views about political candidates. They have little or no familiarity with the candidates or the issues, given that they are looking at a foreign election. Would similar numbers emerge in a study with real voters in the middle of a real election? This issue was addressed in Experiment 5 of Epstein and Robertson (2015), which included more than 2000 undecided voters throughout India during the final weeks of the 2014 Lok Sabha election for Prime Minister. Biased search results shifted both opinions and voting preferences, with shifts in voting preferences (the VMP) exceeding 60% in some demographic groups.
That said, recent research suggests that low-familiarity (also called “low-information”) voters differ in nontrivial ways from high-familiarity (“high-information”) voters (Yarchi et al., 2021). Our 2014 Lok Sabha experiment suggests that low-familiarity voters may be more vulnerable to SEME than high-familiarity voters, and so does a set of experiments we recently conducted on what we call the “multiple exposure effect” (MEE) (Epstein et al., 2023). Understanding the relationship between familiarity and vulnerability to manipulation will require a systematic investigation, however, not simply a comparison of values found in separate SEME experiments.
The familiarity issue does raise another question that we can address directly with the data we collected in the present study: Can we be assured that our participants were indeed undecided? Here we have strong affirmative evidence. As we noted in our Results section, the differences in pre-search opinion ratings across the three groups (pro-Cameron, pro-Miliband, and control) were nonsignificant (Table 2). In addition, both the voting preferences on the 11-point scale and the voting preferences on the forced-choice question showed no candidate preferences (Table 3, S1 Table). Post-search, all these measures showed clear and predictable differences.
Pollsters often seek out people who are likely to vote, and, presumably, a company like Google can, given the vast amount of information they collect about people, easily discriminate between likely and unlikely voters. In the present study, we did not screen for this characteristic. In future studies, we will consider screening potential participants with a question such as, “How likely are you to vote in upcoming elections?”
We have other concerns about the real-world applicability of the present study, and we are addressing them in other research. The present study exposed voters to biased search results just once, but in the real world, voters might be exposed to similarly biased search results hundreds of times before an election. Are multiple exposures to similarly biased search results additive over time? And how might opinions and voting preferences be affected if people are exposed to search results biased toward Candidate A on some occasions and Candidate B on others? Overall, do the opinions and voting preferences of undecided voters shift in the direction of the net bias?
In the real world, moreover, people are impacted by multiple sources of bias. In the traditional, non-digital world of political influence, many if not all of these sources of influence might cancel each other out. If Candidate A erects a billboard or buys a television commercial, Candidate B can do the same. However, in the world of Big Tech, things work differently. If, for any reason, the algorithm of a large online platform favors one candidate, there is no way to counteract its impact, and if multiple online platforms all favor the same candidate, the impact of these different sources of influence might be additive.
Implications and Concerns
Given the concerns that have been raised about the power of biased search results to impact people’s thinking and behavior, one might wonder whether informing people about the role that operant conditioning appears to play in their online decision making would have any practical benefit. We submit that raising such awareness would, unfortunately, have few or no benefits, for one simple reason: Search algorithms are designed to put the best possible answer in the top position; when one is searching for simple facts, that means the correct answer. A search engine that listed the best answer in a lower search position – especially in an unpredictable position – would be of little value. That means that the daily regimen of conditioning we described earlier will continue to occur as long as people continue to use properly functioning search engines. Worse still, people will always be unaware that the process by which they make both trivial and important decisions is being affected by a perpetual regimen of operant conditioning, as if they were rats trapped forever in an operant chamber.
So how can people be protected from bias that might occur in search results that are displayed in response to open-ended queries about, say, election-related issues? No matter what the cause of the bias, it can have a rapid and profound effect on the thinking and behavior of people who are undecided on an issue, and that, we believe, should be a matter for concern.
We suggest three ways to provide such protection. One would be for the US Congress, the European Parliament, or other relevant authorities to declare Google’s index – the database it uses to generate search results – to be a public commons (Epstein, 2019). This would quickly lead to the creation of hundreds, then thousands, of competing search platforms, each vying for the attention of different populations, just as thousands of news sources do currently. With numerous platforms having access to the index through a public API (an application programming interface), search will become both competitive and innovative again, as it was before Google began to dominate the search industry more than a decade ago.
Users could also be protected to some extent if browsers or search engines are at some point required to post bias alerts on individual search results or on entire search pages, with bias continuously rated by algorithms, human raters, or both. Epstein and Robertson (2016) showed that the magnitude of SEME could be reduced to some extent by such alerts (cf. Tsipursky et al., 2018; Wu et al., 2023). Alerts of this sort could also be used to flag the rising tide of online “misinformation” – an imperfect but not entirely unreasonable method for appeasing free speech advocates without suppressing content (Nekmat, 2020; Shin et al., 2023; cf. Bak-Coleman et al., 2022; BBC, 2017; Bruns et al., 2023).
Finally, a leak of documents from Google in 2019 showed that the company has long been concerned with finding ways to assure “algorithmic fairness,” primarily as a way of correcting what Google executives and employees perceive to be social inequities (Lakshmanan, 2019). Setting aside the concerns one might have about the possibility that a highly influential company might be engaging in a large-scale program of social engineering (Chigne, 2018; Epstein, 2018b; Savov, 2018), the good news is that Google has developed tools for eliminating bias in algorithmic content quickly and efficiently. One of the leaked documents was a manual for Google’s “Twiddler” application, which was developed “for re-ranking results from a single corpus” (Google, 2018). In other words, Google has the power to eliminate political or other bias in search results “almost as easily as one can flip a light switch” (Z. Vorhies, personal communication, June 26, 2020).
If steps are eventually taken to protect users from the bias in search results that might be displayed in response to open-ended queries, perhaps operant conditioning or other factors that currently focus user attention on high-ranking results will do no harm. As it stands, we believe that this almost irresistible tendency to attend to and click on high-ranking results, which is currently affecting the thinking and behavior of more than 5 billion people worldwide with no mechanisms in place to offset its influence, poses a serious threat to democracy, free speech, and human autonomy.
Data Availability
An anonymized version of the data can be found at https://doi.org/10.5281/zenodo.6978977. The data have been anonymized to comply with requirements of the sponsoring institution’s Institutional Review Board (IRB). The IRB granted exempt status to this study under HHS rules because (a) the anonymity of participants was preserved and (b) the risk to participants was minimal. The IRB also exempted this study from informed consent requirements (relevant HHS Federal Regulations 45 CFR 46.101.(b)(2), 45 CFR 46.116(d), 45 CFR 46.117(c)(2), and 45 CFR 46.111).
References
Agudo, U., & Matute, H. (2021). The influence of algorithms on political and dating decisions. PLOS ONE, 16(4). https://doi.org/10.1371/journal.pone.0249454
Allam, A., Schulz, P. J., & Nakamoto, K. (2014). The impact of search engine selection and sorting criteria on vaccination beliefs and attitudes: Two experiments manipulating Google output. Journal of Medical Internet Research, 16(4). https://doi.org/10.2196/jmir.2642
Anderson, N. (1958). Test of a model for opinion change. Journal of Abnormal Psychology, 59(3), 371–381. https://doi.org/10.1037/h0042539
Arendt, F., & Fawzi, N. (2018). Googling for Trump: Investigating online information seeking during the 2016 US presidential election. Information, Communication & Society, 22(13), 1945–1955. https://doi.org/10.1080/1369118X.2018.1473459
Athukorala, K., Glowacka, D., Jacucci, G., Oulasvirta, A., & Vreeken, J. (2015). Is exploratory search different? A comparison of information search behavior for exploratory and lookup tasks. Journal of the Association for Information Science and Technology, 67(11), 2635–2651. https://doi.org/10.1002/asi.23617
Bak-Coleman, J. B., Kennedy, I., Wack, M., Beers, A., Spiro, E. S., Starbird, K., & West, J. D. (2022). Combining interventions to reduce the spread of viral misinformation. Nature Human Behaviour, 6(10), 1–9. https://doi.org/10.1038/s41562-022-01388-6
Baldwin, T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1), 63–105. https://doi.org/10.1111/j.1744-6570.1988.tb00632.x
BBC. (2017, December 21). Facebook ditches fake news warning flag. Retrieved October 26, 2023, from https://www.bbc.com/news/technology-42438750
Beazley, M. B. (2013). Ballot design as fail-safe: An ounce of rotation is worth a pound of litigation. Election Law Journal: Rules, Politics, and Policy, 12(1), 18–52. https://doi.org/10.1089/elj.2012.0171
Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-87480-9
Bruine de Bruin, W. (2005). Save the last dance for me: Unwanted serial position effects in jury evaluations. Acta Psychologica, 118(3), 245–260. https://doi.org/10.1016/j.actpsy.2004.08.005
Bruns, H., Dessart, F. J., Krawczyk, M. W., Lewandowsky, S., Pantazi, M., Pennycook, G., Schmid, P., & Smillie, L. (2023). The role of (trust in) the source of prebunks and debunks of misinformation. Evidence from online experiments in four EU countries. OSF Preprints. https://doi.org/10.31219/osf.io/vd5qt
Burt, A. (2019). Can Facebook ever be fixed? Harvard Business Review. Retrieved May 11, 2019, from https://hbr.org/2019/04/can-facebook-ever-be-fixed
Carlson, K. A., & Russo, J. E. (2001). Biased interpretation of evidence by mock jurors. Journal of Experimental Psychology: Applied, 7(2), 91–103. https://doi.org/10.1037/1076-898x.7.2.91
Chigne, J. P. (2018). Google’s leaked internal video ‘The Selfish Ledger’ shows how a population could be controlled by data. Tech Times. Retrieved May 24, 2018, from https://www.techtimes.com/articles/228053/20180518/googles-leaked-internal-video-the-selfish-ledger-shows-how-a-population-could-be-controlled-by-data.htm
Chitika Insights. (2013). The value of Google results positioning. Chitika. http://info.chitika.com/uploads/4/9/2/1/49215843/chitikainsights-valueofgoogleresultspositioning.pdf
Cutrell, E., & Guan, Z. (2007). What are you looking for? An eye-tracking study of information usage in web search. In CHI '07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 407–416). https://doi.org/10.1145/1240624.1240690
Danbury, A., Palazza, M., Mortimer, K., & Siano, A. (2013). Advertising and brand trust: Perspectives from the UK and Italy. Proceedings of the 18th International Conference on Corporate & Marketing Communication: Responsible Communication - Past, Present, Future (pp. 1–11). University of Salerno.
Dean, B. (n.d.). We analyzed 5 million Google search results. Here’s what we learned about organic CTR. Backlinko. Retrieved August 27, 2022, from https://backlinko.com/google-ctr-stats
Draws, T., Tintarev, N., Gadiraju, U., Bozzon, A., & Timmermans, B. (2021). This is not what we ordered: Exploring why biased search result rankings affect user attitudes on debated topics. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 295–305. https://doi.org/10.1145/3404835.3462851
Ebbinghaus, H. (2013). Memory: A contribution to experimental psychology. Annals of Neurosciences, 20(4), 155–156. https://doi.org/10.5214/ans.0972.7531.200408
Edelman, B. (2011). Adverse selection in online “trust” certifications and search results. Electronic Commerce Research and Applications, 10(1), 17–25. https://doi.org/10.1016/j.elerap.2010.06.001
Epstein, R. (2016). Free isn’t freedom: How Silicon Valley tricks us. Motherboard. https://motherboard.vice.com/read/free-isnt-freedom-epstein-essay
Epstein, R. (2018a). Manipulating minds: The power of search engines to influence votes and opinions. In M. Moore & D. Tambini (Eds.), Digital dominance: The power of Google, Amazon, Facebook, and Apple (pp. 294–319). Oxford University Press.
Epstein, R. (2018b). Transcript to Google’s internal video, “The Selfish Ledger.” American Institute for Behavioral Research and Technology. https://aibrt.org/downloads/GOOGLE-Selfish_Ledger-TRANSCRIPT.pdf
Epstein, R. (2019). Why Google poses a serious threat to democracy and how to end that threat. Mercatornet. https://www.thethinkingconservative.com/why-google-poses-a-serious-threat-to-democracy-and-how-to-end-that-threat/
Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences USA, 112(33), E4512–E4521. https://doi.org/10.1073/pnas.1419828112
Epstein, R., Ding, M., Mourani, C., Newland, A., Olson, E., & Tran, F. (2023). Multiple searches increase the impact of similarly biased search results: An example of the “multiple exposure effect” (MEE). SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4636728
Epstein, R., & Robertson, R. E. (2016, April 28–May 1). Suppressing the search engine manipulation effect (SEME), plus methods for suppressing the effect [Paper presentation]. The Western Psychological Association 96th Annual Meeting, Long Beach, CA, United States.
Epstein, R., Lee, V., Mohr, R., & Zankich, V. R. (2022). The Answer Bot Effect (ABE): A powerful new form of influence made possible by intelligent personal assistants and search engines. PLOS ONE, 17(6). https://doi.org/10.1371/journal.pone.0268081
Eslami, M., Vaccaro, K., Karahalios, K., & Hamilton, K. (2017). “Be careful; things can be worse than they appear”: Understanding biased algorithms and users’ behavior around them in rating platforms. Proceedings of the 11th International AAAI Conference on Web and Social Media, 11(1), 62–71.
Fast, N., & Jago, A. (2020). Privacy matters…or does it? Algorithms, rationalization, and the erosion of concern for privacy. Current Opinion in Psychology, 31, 44–48. https://doi.org/10.1016/j.copsyc.2019.07.011
Feezell, J. T., Wagner, J. K., & Conroy, M. (2021). Exploring the effects of algorithm-driven news sources on political behavior and polarization. Computers in Human Behavior, 116. https://doi.org/10.1016/j.chb.2020.106626
Fortune. (n.d.). World’s most admired companies. Retrieved December 7, 2020, from https://fortune.com/worlds-most-admired-companies/
Ghose, A., Ipeirotis, P., & Li, B. (2014). Examining the impact of ranking on consumer behavior and search engine revenue. Management Science, 60(7), 1617–1859. https://doi.org/10.1287/MNSC.2013.1828
Google. (n.d.). How Search works. Retrieved July 13, 2022, from https://www.google.co.uk/intl/en_uk/search/howsearchworks/mission/users/
Google. (2018). Twiddler quick start guide. Retrieved February 11, 2021, from https://aibrt.org/downloads/GOOGLE_2018-Twiddler_Quick_Start_Guide-Superroot.pdf
Granka, L., Joachims, T., & Gay, G. (2004). Eye-tracking analysis of user behavior in WWW search. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 478–479). https://doi.org/10.1145/1008992.1009079
Grant, D. (2017). The ballot order effect is huge: Evidence from Texas. Public Choice, 172, 421–442. https://doi.org/10.1007/s11127-017-0454-8
Haas, A., & Unkel, J. (2017). Ranking versus reputation: Perception and effects of search result credibility. Behaviour & Information Technology, 36(12), 1285–1298. https://doi.org/10.1080/0144929X.2017.1381166
Hardwick, J. (n.d.). Top Bing searches. Ahrefs Blog. Retrieved November 17, 2020, from https://ahrefs.com/blog/top-bing-searches/
Hazan, J. G. (2013). Stop being evil: A proposal for unbiased Google Search. Michigan Law Review, 111(5), 789–820.
Hendler, J., & Mulvehill, A. (2016). Social machines: The coming collision of artificial intelligence, social networking, and humanity. Apress.
Ho, D., & Imai, K. (2008). Estimating causal effects of ballot order from a randomized natural experiment: The California alphabet lottery, 1978–2002. Public Opinion Quarterly, 72(2), 216–240. https://doi.org/10.1093/poq/nfn018
Hogarth, R. M., Lejarraga, T., & Soyer, E. (2015). The two settings of kind and wicked learning environments. Current Directions in Psychological Science, 24(5), 379–385. https://doi.org/10.1177/0963721415591878
Joachims, T., Granka, L., Pan, B., Hembrooke, H., Radlinski, F., & Gay, G. (2007). Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information Systems, 25(2). https://doi.org/10.1145/1229179.1229181
Kammerer, Y., & Gerjets, P. (2014). The role of search result position and source trustworthiness in the selection of web search results when using a list or a grid interface. International Journal of Human-Computer Interaction, 30(3), 177–191. https://doi.org/10.1080/10447318.2013.846790
Kieta, A. R., Cihon, T. M., & Abdel-Jalil, A. (2018). Problem solving from a behavioral perspective: Implications for behavior analysts and educators. Journal of Behavioral Education, 28, 275–300. https://doi.org/10.1007/s10864-018-9296-9
Kimble, G. A. (1961). Hilgard and Marquis’ conditioning and learning (2nd ed.). Appleton-Century-Crofts. https://doi.org/10.1037/14591-000
Knobloch-Westerwick, S., Mothes, C., Johnson, B. K., Westerwick, A., & Donsbach, W. (2015). Political online information searching in Germany and the United States: Confirmation bias, source credibility, and attitude impacts. Journal of Communication, 65(3), 489–511. https://doi.org/10.1111/jcom.12154
Koppell, J. G., & Steen, J. A. (2004). The effects of ballot position on election outcomes. The Journal of Politics, 66(1), 267–281. https://doi.org/10.1046/j.1468-2508.2004.00151.x
Kramer, M. (2019). With Facebook falling out of favor, will Instagram be enough to rescue shareholders? CCN. Retrieved July 7, 2022, from https://www.ccn.com/with-facebook-falling-out-of-favor-will-instagram-be-enough-to-rescue-shareholders/
Lakshmanan, R. (2019). Project Veritas releases ‘internal documents’ from Google and alleges anti-conservative bias. The Next Web. Retrieved September 3, 2021, from https://thenextweb.com/google/2019/08/15/project-veritas-releases-internal-documents-from-google-and-alleges-anti-conservative-bias/
Lerman, D. C., Iwata, B. A., Shore, B. A., & Kahng, S. (1996). Responding maintained by intermittent reinforcement: Implications for the use of extinction with problem behavior in clinical settings. Journal of Applied Behavior Analysis, 29(2), 153–171. https://doi.org/10.1901/jaba.1996.29-153
Loftus, E. (1975). Leading questions and the eyewitness report. Cognitive Psychology, 7(4), 560–572. https://doi.org/10.1016/0010-0285(75)90023-7
Logg, J., Minson, J., & Moore, D. (2018). Do people trust algorithms more than companies realize? Harvard Business Review. Retrieved April 12, 2021, from https://hbr.org/2018/10/do-people-trust-algorithms-more-than-companies-realize
Logg, J., Minson, J., & Moore, D. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
Lorigo, L., Haridasan, M., Brynjarsdottir, H., Xia, L., Joachims, T., Gay, G., Granka, L., Pellacini, F., & Pan, B. (2008). Eye tracking and online search: Lessons learned and challenges ahead. Journal of the American Society for Information Science and Technology, 59(7), 1041–1052. https://doi.org/10.1002/asi.20794
Ludolph, R., Allam, A., & Schulz, P. J. (2016). Manipulating Google’s knowledge box to counter biased information processing during an online search on vaccination: Application of a technological debiasing strategy. Journal of Medical Internet Research, 18(6). https://doi.org/10.2196/jmir.5430
Mack, C. C., Cinel, C., Davies, N., Harding, M., & Ward, G. (2017). Serial position, output order, and list length effects for words presented on smartphones over very long intervals. Journal of Memory and Language, 9, 61–80. https://doi.org/10.1016/j.jml.2017.07.009
Mackintosh, N. J. (1974). The psychology of animal learning. Academic Press.
Mantonakis, A., Rodero, P., Lesschaeve, I., & Hastie, R. (2009). Order in choice: Effects of serial position on preferences. Psychological Science, 20(11), 1309–1312. https://doi.org/10.1111/j.1467-9280.2009.02453.x
Marable, L. (2003). False oracles: Consumer reaction to learning the truth about how search engines work. Consumer WebWatch. https://advocacy.consumerreports.org/wp-content/uploads/2013/05/false-oracles.pdf
McKinnon, J., & MacMillan, D. (2018). Google workers discussed tweaking search function to counter travel ban. The Wall Street Journal. https://www.wsj.com/articles/google-workers-discussed-tweaking-search-function-to-counter-travel-ban-1537488472
Meyers, P. J. (2019). How often does Google update its algorithm? Moz. https://moz.com/blog/how-often-does-google-update-its-algorithm
Mis, F. W., Lumia, A. R., & Moore, J. W. (1972). Inhibitory stimulus control of the classically conditioned nictitating membrane response of the rabbit. Behavior Research Methods & Instrumentation, 4, 297–299. https://doi.org/10.3758/BF03207309
Murdock, B. (1962). The serial position effect of free recall. Journal of Experimental Psychology, 64(5), 482–488. https://doi.org/10.1037/h0045106
Murre, J., & Dros, J. (2015). Replication and analysis of Ebbinghaus’ forgetting curve. PLOS ONE, 10(7). https://doi.org/10.1371/journal.pone.0120644
Nekmat, E. (2020). Nudge effect of fact-check alerts: Source influence and media skepticism on sharing of news misinformation in social media. Social Media + Society, 6(1), 1–14. https://doi.org/10.1177/2056305119897322
Nevin, J. A. (2012). Resistance to extinction and behavioral momentum. Behavioural Processes, 90(1), 89–97. https://doi.org/10.1016/j.beproc.2012.02.006
Nicas, J., Weise, K., & Isaac, M. (2019). How each big tech company may be targeted by regulators. The New York Times. Retrieved October 11, 2019, from https://www.nytimes.com/2019/09/08/technology/antitrust-amazon-apple-facebook-google.html
Nielsen, J. (2010). Scrolling and attention. Nielsen Norman Group. Retrieved March 13, 2021, from https://www.nngroup.com/articles/scrolling-and-attention-original-research/
Nielsen, J., & Pernice, K. (2010). Eyetracking web usability. New Riders.
Ørmen, J. (2016). Googling the news: Opportunities and challenges in studying news events through Google Search. Digital Journalism, 4(1), 107–124. https://doi.org/10.1080/21670811.2015.1093272
Pan, B., Joachims, T., Granka, L., Hembrooke, H., Gay, G., & Lorigo, L. (2007). In Google we trust: Users’ decisions on rank, position, and relevance. Journal of Computer-Mediated Communication, 12(3), 801–823. https://doi.org/10.1111/j.1083-6101.2007.00351.x
Paudyal, P., & Wong, W. (2018). Algorithmic opacity: Making algorithmic processes transparent through abstraction hierarchy. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 62(1), 192–196. https://doi.org/10.1177/1541931218621046
Pogacar, F. A., Ghenai, A., Smucker, M. D., & Clarke, C. L. A. (2017). The positive and negative influence of search results on people’s decisions about the efficacy of medical treatments. In: Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval, pp. 209–216. https://doi.org/10.1145/3121050.3121074
Prinz, R., Brighton, H., Luan, S., & Gigerenzer, G. (2017). Can biased search engine results influence healthcare decisions? [Paper presentation]. International Convention of Psychological Science.
Rose, C. (2018). SEO 101: How the Google search algorithm works. SEO Mechanic. https://www.seomechanic.com/google-search-algorithm-work/
Sahin, A., Zehir, C., & Kitapci, H. (2011). The effects of brand experiences, trust and satisfaction on building brand loyalty: An empirical research on global brands. Procedia - Social and Behavioral Sciences, 24, 1288–1301. https://doi.org/10.1016/j.sbspro.2011.09.143
Savov, V. (2018). Google’s Selfish Ledger is an unsettling vision of Silicon Valley social engineering. The Verge. https://www.theverge.com/2018/5/17/17344250/google-x-selfish-ledger-video-data-privacy
Schultheiß, S., & Lewandowski, D. (2020). How users’ knowledge of advertisements influences their viewing and selection behavior in search engines. Journal of the Association for Information Science and Technology, 72(3), 285–301. https://doi.org/10.1002/asi.24410
Shin, D., Kee, K. F., & Shin, E. Y. (2023). The nudging effect of accuracy alerts for combating the diffusion of misinformation: Algorithmic news sources, trust in algorithms, and users’ discernment of fake news. Journal of Broadcasting & Electronic Media, 67(3), 1–20. https://doi.org/10.1080/08838151.2023.2175830
Siege Media. (n.d.). The 100 most popular Google keywords. Siege Media. Retrieved April 20, 2020, from https://www.siegemedia.com/seo/most-popular-keywords
Singer, N. (2019). The government protects our food and cars. Why not our data? The New York Times. https://www.nytimes.com/2019/11/02/sunday-review/data-protection-privacy.html
Skinner, B. F. (1957). Verbal behavior. Appleton-Century-Crofts.
Soulo, T. (n.d.). Top Google searches. Ahrefs Blog. Retrieved July 15, 2020, from https://ahrefs.com/blog/top-google-searches/
StatCounter GlobalStats. (n.d.). Search engine market share worldwide. Retrieved August 30, 2023, from https://gs.statcounter.com/search-engine-market-share
Taylor, R. (2019). Facebook and Google algorithms are secret – but Australia plans to change that. The Wall Street Journal. Retrieved March 15, 2021, from https://www.wsj.com/articles/facebook-and-google-algorithms-are-secretbut-australia-plans-to-change-that-11564134106
Trevisan, F., Hoskins, A., Oates, S., & Mahlouly, D. (2016). The Google voter: Search engines and elections in the new media ecology. Information, Communication & Society, 21(1), 111–128. https://doi.org/10.1080/1369118X.2016.1261171
Trielli, D., & Diakopoulos, N. (2019). Search as news curator: The role of Google in shaping attention to news information. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–15. https://doi.org/10.1145/3290605.3300683
Tsipursky, G., Votta, F., & Roose, K. M. (2018). Fighting fake news and post-truth politics with behavioral science: The pro-truth pledge. Behavior and Social Issues, 27, 47–70. https://doi.org/10.5210/bsi.v.27i0.9127
Visser, M. (1996). Voting: A behavior analysis. Behavior and Social Issues, 6(1), 23–34. https://doi.org/10.5210/bsi.v6i1.278
Wang, Y., Wu, L., Luo, L., Zhang, Y., & Dong, G. (2017). Short-term internet search using makes people rely on search engines when facing unknown issues. PLOS ONE, 12(4). https://doi.org/10.1371/journal.pone.0176325
Weinreich, H., Obendorf, H., Herder, E., & Mayer, M. (2008). Not quite the average: An empirical study of web use. ACM Transactions on the Web, 2(1), 1–31. https://doi.org/10.1145/1326561.1326566
West, S. (2018). The challenge of anonymous and ephemeral social media: Reflective research methodologies and student-user composing practices [Doctoral dissertation]. University of Arkansas.
Wilhite, C. J., & Houmanfar, R. (2015). Mass news media and American culture: An interdisciplinary approach. Behavior and Social Issues, 24, 88–110. https://doi.org/10.5210/bsi.v.24i0.5004
Wu, Z., Draws, T., Cau, F. M., Barile, F., Rieger, A., & Tintarev, N. (2023). Explaining search result stances to opinionated people. In L. Longo (Ed.), Explainable artificial intelligence (Communications in Computer and Information Science, Vol. 1902). Springer. https://doi.org/10.1007/978-3-031-44067-0_29
Yarchi, M., Wolfsfeld, G., & Samuel-Azran, T. (2021). Not all undecided voters are alike: Evidence from an Israeli election. Government Information Quarterly, 38(4), 101598. https://doi.org/10.1016/j.giq.2021.101598
Funding
This work was supported by general funds of the American Institute for Behavioral Research and Technology (AIBRT), a nonpartisan, nonprofit, 501(c)(3) organization. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Author information
Contributions
Robert Epstein: Conceptualization, Methodology, Supervision, Writing – Original draft, Writing – Reviewing and Editing. Michael Lothringer: Statistical Analysis, Writing – Reviewing and Editing. Vanessa Zankich: Statistical Analysis, Visualization, Writing – Reviewing and Editing.
Ethics declarations
Conflicts of Interest
The authors have no conflicts of interest to declare.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Epstein, R., Lothringer, M. & Zankich, V.R. How a Daily Regimen of Operant Conditioning Might Explain the Power of the Search Engine Manipulation Effect (SEME). Behav. Soc. Iss. 33, 82–106 (2024). https://doi.org/10.1007/s42822-023-00155-0