
Viewpoint Diversity in Search Results

Conference paper in Advances in Information Retrieval (ECIR 2023)

Abstract

Adverse phenomena such as the search engine manipulation effect (SEME), where web search users change their attitude on a topic following whatever most highly-ranked search results promote, represent crucial challenges for research and industry. However, the current lack of automatic methods to comprehensively measure or increase viewpoint diversity in search results complicates the understanding and mitigation of such effects. This paper proposes a viewpoint bias metric that evaluates the divergence from a pre-defined scenario of ideal viewpoint diversity considering two essential viewpoint dimensions (i.e., stance and logic of evaluation). In a case study, we apply this metric to actual search results and find considerable viewpoint bias in search results across queries, topics, and search engines that could lead to adverse effects such as SEME. We subsequently demonstrate that viewpoint diversity in search results can be dramatically increased using existing diversification algorithms. The methods proposed in this paper can assist researchers and practitioners in evaluating and improving viewpoint diversity in search results.
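The abstract describes the proposed metric as the divergence between the viewpoint distribution observed in a ranked result list and a pre-defined ideal distribution. As an illustrative sketch only (not the authors' formulation), one could operationalize this with the Jensen-Shannon divergence over stance labels; the function names, the uniform ideal, and the seven-point stance scale used below are assumptions for the example:

```python
from collections import Counter
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    given as equal-length probability lists (base-2 logs, so the
    result lies in [0, 1])."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return (kl(p, m) + kl(q, m)) / 2

def viewpoint_bias(stances, ideal):
    """Bias of a result list as divergence of its observed stance
    distribution from a pre-defined ideal one.  `stances` holds one
    stance label per result; `ideal` maps each label to its target share."""
    counts = Counter(stances)
    n = len(stances)
    labels = sorted(ideal)
    observed = [counts.get(lab, 0) / n for lab in labels]
    target = [ideal[lab] for lab in labels]
    return js_divergence(observed, target)

# A ranking showing only supporting results (positive labels) diverges far
# more from a uniform ideal over a seven-point stance scale (-3 ... 3)
# than a roughly balanced ranking does.
ideal = {s: 1 / 7 for s in range(-3, 4)}
biased = viewpoint_bias([3, 3, 2, 3, 2, 1, 3, 2, 1, 3], ideal)
balanced = viewpoint_bias(list(range(-3, 4)) + [0, 1, -1], ideal)
```

A zero score would indicate that the observed stance distribution matches the ideal exactly; larger scores indicate stronger viewpoint bias. A rank-aware variant (weighting top-ranked results more heavily) would be a natural refinement.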


Notes

  1. Due to an error, we used the second most common supporting query for the IPR topic.

  2. The retrieval took place on December 12th, 2021, in the Netherlands.

  3. Note that viewpoint labels do not refer to specific web search queries but always to the topic (or claim) at hand. For example, a search result supporting the idea that students should have to wear school uniforms always receives a positive stance label (i.e., 1, 2, or 3), no matter which query was used to retrieve it.



Acknowledgements

This activity is financed by IBM and the Allowance for Top Consortia for Knowledge and Innovation (TKI’s) of the Dutch ministry of economic affairs.

Author information

Correspondence to Tim Draws.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Draws, T., et al. (2023). Viewpoint Diversity in Search Results. In: Kamps, J., et al. (eds.) Advances in Information Retrieval. ECIR 2023. Lecture Notes in Computer Science, vol. 13980. Springer, Cham. https://doi.org/10.1007/978-3-031-28244-7_18


  • DOI: https://doi.org/10.1007/978-3-031-28244-7_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28243-0

  • Online ISBN: 978-3-031-28244-7

