
Navigating the Thin Line: Examining User Behavior in Search to Detect Engagement and Backfire Effects

  • Conference paper
  • Published in: Advances in Information Retrieval (ECIR 2024)

Abstract

Opinionated users often seek information that aligns with their preexisting beliefs while dismissing contradictory evidence due to confirmation bias. This behavior hinders their ability to consider alternative stances when searching the web. Despite this, few studies have analyzed how the diversification of search results on disputed topics influences the search behavior of highly opinionated users. To this end, we present a preregistered user study (n = 257) investigating whether different levels (low and high) of bias metrics and search result presentation (with or without AI-predicted stance labels) affect the diversity of stances consumed and the search behavior of opinionated users on three debated topics (i.e., atheism, intellectual property rights, and school uniforms). Our results show that exposing participants to (counter-attitudinally) biased search results increases their consumption of attitude-opposing content, but bias was also associated with a trend toward fewer overall interactions within the search page. We also found that 19% of users interacted with queries and search pages but did not select any search results. When we removed these participants in a post-hoc analysis, we found that stance labels increased the diversity of stances consumed by users, particularly when the search results were biased. Our findings highlight the need for future research to explore distinct search scenario settings to gain insight into opinionated users' behavior.
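The "diversity of stances consumed" can be quantified in several ways; the paper does not specify its metric here, but a minimal illustrative sketch is Shannon entropy over the stance labels of the results a user clicks. All names below are assumptions for illustration, not the study's actual code:

```python
from collections import Counter
from math import log2

def stance_diversity(clicked_stances):
    """Shannon entropy over the stance labels of clicked results.

    Returns 0.0 when all clicks share one stance; higher values
    indicate a more balanced mix of supporting, neutral, and
    opposing clicks.
    """
    if not clicked_stances:
        return 0.0
    counts = Counter(clicked_stances)
    total = len(clicked_stances)
    entropy = -sum((c / total) * log2(c / total) for c in counts.values())
    return entropy if entropy > 0 else 0.0

# A user who only clicks attitude-confirming results:
print(stance_diversity(["support"] * 5))  # 0.0
# A user with a balanced mix of stances:
print(stance_diversity(["support", "oppose", "neutral", "oppose"]))  # 1.5
```

Under this sketch, the post-hoc finding would correspond to higher entropy for users in the stance-label condition.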


Notes

  1. Since the number of correct AI predictions constrained the selection of search results, we found that 40 search results (ten results per page across four pages) were a suitable number to compute low and high bias metrics and mimic a realistic search scenario.

  2. "Please shortly describe your experience with the web search engine. Did you look for specific information, and if yes, how did you try to find it? Did you think the web search helped you build a more informed opinion on [topic]? If yes/no, why?"

  3. If participants have no strong attitude on any topic, they exit the study (fully paid).

  4. We balanced participation across the topic, bias, and display experimental conditions.

  5. We randomly assigned participants to one of the 40 search-result combinations with a bias direction opposite to their pre-study attitude (i.e., users who strongly support a topic are assigned to the opposing bias direction, and vice versa).

  6. To encourage interaction with the web search engine, participants could only advance to the next step of the study after one minute. There was no maximum search time, to simulate a realistic scenario.

  7. https://prolific.co.

  8. https://www.qualtrics.com/.

  9. The study has been approved by the Ethics Committee of Maastricht University.
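The counter-attitudinal assignment described in note 5 can be sketched as follows. The condition names, the shape of the combination pool, and the stance encoding are illustrative assumptions, not the study's actual implementation:

```python
import random

# Hypothetical pool of precomputed result-page combinations, keyed by
# the bias direction of the ranking (names are illustrative).
COMBINATIONS = {
    "opposing": [f"opposing_set_{i}" for i in range(40)],
    "supporting": [f"supporting_set_{i}" for i in range(40)],
}

def assign_condition(pre_stance):
    """Assign a participant to a ranking biased against their attitude.

    pre_stance > 0 encodes support for the topic, so such a participant
    receives a ranking biased toward opposing results, and vice versa.
    (Participants without a strong attitude exit before this step.)
    """
    direction = "opposing" if pre_stance > 0 else "supporting"
    return direction, random.choice(COMBINATIONS[direction])
```

For example, `assign_condition(3)` always yields the `"opposing"` direction with one of its precomputed result sets.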


Author information

Corresponding author: Federico Maria Cau.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cau, F.M., Tintarev, N. (2024). Navigating the Thin Line: Examining User Behavior in Search to Detect Engagement and Backfire Effects. In: Goharian, N., et al. Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol 14611. Springer, Cham. https://doi.org/10.1007/978-3-031-56066-8_30


  • DOI: https://doi.org/10.1007/978-3-031-56066-8_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56065-1

  • Online ISBN: 978-3-031-56066-8

  • eBook Packages: Computer Science (R0)
