Serendipity as an emerging design principle of the infosphere: challenges and opportunities


Long underestimated, serendipity is an increasingly recognized design principle of the infosphere. Shaped by both environmental and human factors, the experience of serendipity spans the fundamental phases of information production, distribution, and consumption. On the one hand, designing information architectures for serendipity increases the diversity of information encountered as well as users’ control over information processes. On the other hand, serendipity is a capability: it helps individuals internalize and adopt strategies that increase the chances of experiencing it. As such, the pursuit of serendipity can help burst filter bubbles and weaken echo chambers in social media. This article reviews the literature on emerging issues surrounding serendipity in human–computer interaction. It first presents the study of serendipity and the debate about its role in digital environments; it then introduces the main features of a preliminary architecture for serendipity; finally, it analyzes, from an interdisciplinary perspective, the values that serendipity embraces and sustains. The conclusion is that serendipity can be conceived as an emerging design and ethical principle able to strengthen media pluralism and other emerging human rights in the context of online personalization. The main limitations and potential unintended consequences are also discussed.



  1.

    It had indeed been perceived as an “esoteric word”, given that it did not appear in any of the abridged dictionaries until 1951 (Merton and Barber 2006).

  2.

    Consider one of the most famous examples of serendipity: Fleming’s discovery of penicillin. It has been argued that at least 28 scientists before Fleming observed the same colonies of bacteria that led him to the discovery of penicillin (de Melo 2018). Yet all chose to view the anomaly as an unfortunate error rather than an opportunity for discovery.

  3.

    There are, of course, many other taxonomies (for a recent one see Yaqub 2018). Yet one of the most popular and intuitive taxonomies of serendipitous discoveries is Friedel’s (see de Melo 2018), which identifies three main forms drawn from historical examples in science: Columbian, which occurs when one is looking for one thing of value but finds another (from Columbus’s unsought discovery of America); Archimedean, which occurs when one discovers sought-for results by routes not logically deduced but luckily observed (from Archimedes’s “Eureka” moment); and Galilean, which occurs when one discovers something valuable without intentionally seeking it (from Galileo’s unexpected discoveries through his telescope).

  4.

    Actually, there is no consensus on the definition of serendipity, which also varies with the field of study (McCay-Peet and Toms 2017). Moreover, some of the adjectives used to operationalize it are difficult to define and are often used interchangeably (for instance, unexpectedness and surprise, or usefulness, interestingness and meaningfulness).

  5.

    Nearly 1 in 10 of the most-cited scientific papers of all time explicitly mention serendipity as a contributing factor (Campanario 1996), and it has been estimated that over 50 percent of scientific discoveries may have been unintended (Dunbar and Fugelsang 2005).

  6.

    Interestingly, the influence of serendipity can be traced back to Norbert Wiener’s founding of cybernetics. Olma (2016) argues that the Rad Lab—the famous Radiation Laboratory at the Massachusetts Institute of Technology (MIT)—was an example of an ‘institutionalised serendipity’ environment, one that created the necessary conditions for “a transversal exchange of knowledge that, in turn, enabled Wiener to create the discipline of cybernetics, which itself allowed for the development of ARPANET, one of the technical foundations of the internet” (de Melo 2018).

  7.

    If we generalize the concept of surprise, an event that occurs with high probability should carry low surprise, whereas an event that occurs with low probability should carry high surprise. This sheds light on the paradoxical challenge of programming serendipity.
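
    This inverse relation between probability and surprise is formalized by Shannon’s notion of surprisal (Shannon 1948). A minimal sketch, for illustration only:

    ```python
    import math

    def surprisal(p: float) -> float:
        """Shannon surprisal in bits: rare events carry high surprise."""
        if not 0 < p <= 1:
            raise ValueError("p must be in (0, 1]")
        return -math.log2(p)

    # A near-certain event is barely surprising; a rare one is highly so.
    print(surprisal(0.99))   # ≈ 0.0145 bits
    print(surprisal(0.01))   # ≈ 6.64 bits
    ```

    The paradox the note points to follows directly: any event a system can reliably engineer has, by construction, high probability for that system, and therefore low surprisal.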

  8.

    Folksonomy, also known as collaborative tagging, social classification, social indexing, and social tagging, is a collaborative user-generated system for classifying and organizing online content into categories through metadata such as electronic tags. A famous social network based on it was StumbleUpon.
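
    Structurally, a folksonomy can be sketched as a many-to-many mapping between user-supplied tags and content items; browsing by a shared tag is what enables chance encounters with content filed by strangers. A minimal illustration, not modeled on any particular platform:

    ```python
    from collections import defaultdict

    # Tag index: users freely tag items; items become discoverable
    # through tags that other users attached to them.
    tag_index = defaultdict(set)

    def tag(item: str, *tags: str) -> None:
        """File an item under one or more free-form tags."""
        for t in tags:
            tag_index[t].add(item)

    def browse(t: str) -> set:
        """All items any user has filed under tag t."""
        return tag_index[t]

    tag("article-on-penicillin", "serendipity", "science")
    tag("essay-on-filter-bubbles", "serendipity", "media")
    # Browsing 'serendipity' surfaces both items, regardless of who tagged them.
    print(browse("serendipity"))
    ```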

  9.

    Even the metaphor ‘surfing the Internet’ was chosen to convey a sense of fun and “something that would evoke a sense of randomness, chaos, and even danger” (Polly 1992). Without search engines, the journey on the web was indeed initially about discovering what was out there—accidentally—not about finding specific content (Hendler and Hugill 2013).

  10.

    See, for example, the project which shows how certain videos on YouTube—mostly about conspiracy theories—are recommended far more than others. One of its founders—a former Google employee—argues that videos discrediting traditional media may be recommended more heavily in order to further engage users on the platform.

  11.

    Algorithms which predict individuals’ preferences tend to nudge users toward their comfort zones. For instance, Facebook is deeply committed to maintaining friends’ relationships. Its newsfeed is therefore moderated by homophily (DeVito 2017), which is, however, the primary driver of content diffusion, especially of misinformation and conspiracy theories, frequently resulting in homogeneous, polarized clusters (Del Vicario et al. 2016). Moreover, information intermediaries may also increase engagement by developing unconscious addictive rituals based on gamification and dark patterns, with the help of affective computing and “captology” (Fogg et al. 2002). Algorithms indeed explore manipulative strategies that may be detrimental to users (Albanie et al. 2017).

  12.

    Implicit personalization infers user preferences from collected data (Thurman and Schifferes 2012). It actually increases political selective exposure, as it makes information avoidance less psychologically costly (Dylko et al. 2018).

  13.

    To give a general picture of the magnitude of the phenomenon, consider that the average time a user (out of roughly 2 billion in total) currently spends on Facebook is about one hour a day, and the posts encountered are around 350, prioritized out of about 1,500 (thus, roughly 75% are hidden) (Backstrom 2013). In the U.S., two-thirds (67%) report that they get at least some of their news on social media (Shearer and Gottfried 2017). Yet just 14% of Facebook users believe ordinary users have a lot of control over the newsfeed, and only about 36% have intentionally tried to influence it (Smith 2018).

  14.

    Negroponte referred to two main concepts: the “daily me” and the “daily us”. The former refers to personalized online news summaries (tending toward a convergent system), the latter to non-personalized online news summaries (tending toward a divergent system). Of course, these are not two black-and-white states; one tends to move between them.

  15.

    Notably, the most significant affordance Facebook currently provides is the option to select “Most recent” stories in the newsfeed. However, every time the site is re-launched, the setting resets itself by default to “Top Stories”—the classic curated newsfeed.

  16.

    Consider that YouTube’s recommendations already drive more than 70% of the time spent on the video sharing platform.

  17.

    As the Greek philosopher Heraclitus (544–484 B.C.) famously argued: “if you do not expect it, you will not find the unexpected, for it is hard to find and difficult.”

  18.

    While serendipity is usually acknowledged as a pleasant experience, this paper also values the role of unpleasant encounters, usually called zemblanity. This means that designing for serendipity unavoidably implies unpleasant encounters as well—albeit potentially serendipitous ones.

  19.

    Only 24% were aware that Facebook prioritizes certain posts and hides others from users’ feeds while 37% believed every post is included in the newsfeed (Powers 2017).

  20.

    As an ENISA study (Danezis et al. 2015) summarizes: “Intervenability ensures intervention is possible concerning all ongoing or planned privacy-relevant data processing, in particular by those persons whose data are processed. The objective of intervenability is the application of corrective measures and counter-balances where necessary.”

  21.

    This seems to be true also for consumers’ satisfaction with advertising. In fact, there is no consensus yet on the effectiveness of targeted advertising (Zuiderveen Borgesius et al. 2014). On the contrary, contextual advertising may actually produce more serendipitous encounters than personalized advertising. Given the oligopoly in the advertising industry, there may indeed be incentives not to match demand and supply as efficiently as possible.

  22.

    For instance, Sunstein (2017) proposed that social media like Facebook could create a “serendipity button” for news and opinions, allowing people to opt in, especially during elections. Similarly, related stories at the bottom of a post seem to help counteract misinformation or simply enrich a user’s perspective in a serendipitous way (Bode and Vraga 2015).

  23.

    Notably, plug-ins like Balancer and Scoopinion can show a histogram of the user’s liberal and conservative pages, or which journals a user tends to read and for how long, with the aim of increasing awareness so that users can make their reading behaviour more balanced. Other tools, like Social Fixer and Gobo (a social media aggregator built at MIT), provide more interactive control over design choices and information filtering processes.

  24.

    For example, the MIT Media Lab created a Twitter plug-in called FlipFeed which lets users scroll the feed of a random individual from a distant position on the ideological spectrum.

  25.

    For example, you can glance at the beginning, the middle and the end of a book, or a newspaper, so you can find a page by chance, or a particular paragraph or line. Online one may miss that strange feeling of mystery and accident when we come across one particular line by chance. It may feel somehow irrationally significant because we are, indeed, also irrational creatures. This feeling of awe and surprise is so deeply entrenched in human nature that many cultures reflected it in a practice called “bibliomancy”, the art of predicting the future with books (Forsyth 2014). When an ancient Greek wanted to know the future, he would take a copy of the Iliad and let it fall open at a random page, point at a line, read it out, and that would be his fate. This was the Sortes Homericae. The Romans did the same thing with Virgil—the Sortes Virgilianae. Even medieval chaps did that with the Bible and called it the Sortes Sanctorum. Interestingly, according to Walpole—who coined the term serendipity—his particular brand of discovery was referred to by a certain “Mr. Chute” as a Sortes Walpolianae (de Melo 2018).

  26.

    It is important to stress that serendipity-driven information filtering would actually increase diversity as long as it remains highly probabilistic—and even purely random—as it is by definition, so that serendipitous encounters remain highly unpredictable and rare. A truly perfect personalized serendipity engine might even decrease the diversity of information and, as a consequence, most of its beneficial effects. This issue, however, concerns only Type A serendipity and particularly what is defined as Kairos (see note 33).
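
    One way to keep such filtering genuinely probabilistic is to inject a small random component into an otherwise personalized ranking. A minimal sketch; the function name and the epsilon parameter are illustrative, not drawn from any cited system:

    ```python
    import random

    def serendipitous_feed(ranked_items, candidate_pool, epsilon=0.1, size=10, seed=None):
        """Fill a feed mostly from a personalized ranking, but with
        probability epsilon draw a uniformly random item from the wider
        candidate pool, so rare encounters stay genuinely unpredictable."""
        rng = random.Random(seed)
        feed, ranked = [], iter(ranked_items)
        # Items outside the personalized ranking are the serendipity pool.
        pool = [c for c in candidate_pool if c not in set(ranked_items)]
        while len(feed) < size:
            if pool and rng.random() < epsilon:
                feed.append(pool.pop(rng.randrange(len(pool))))
            else:
                try:
                    feed.append(next(ranked))
                except StopIteration:
                    break
        return feed
    ```

    Raising epsilon trades relevance for diversity; setting it to zero recovers pure personalization, which is precisely the “perfect engine” the note warns against.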

  27.

    Ambient Intelligence refers to a future vision in which automatic smart online and offline environments interact with each other and take an unprecedented number of decisions for and about us in order to cater to our inferred preferences. It may actually represent a new paradigm in the construction of knowledge (Hildebrandt and Koops 2010).

  28.

    Building on the work of Hirschman, Harambam et al. (2018) understand the concept of voice as the possibility to exert control over the data-driven processes that shape news provision.

  29.

    The word ‘semanticise’ did not previously exist; Floridi (2011) defines it as the way in which “we make sense of our environment, of ourselves in it, and of our interactions with and within it” (p. 564).

  30.

    In this case, un/expectedness refers to the probability, assigned by the engineers, that a certain piece of information will be liked. The challenge is indeed how to balance the probability distribution of the information prioritized.

  31.

    For example, Google implicitly affords users such passive serendipity with the “I’m Feeling Lucky” search button. However, it is rarely used. A more constructive design choice Google implemented was a toggle directly in the central search bar that allowed users to generalize the query results (see Carson 2015). This option, however, has recently been relegated to the advanced options and is no longer visible on the Google results page.

  32.

    Kairos is a Greek divinity, the personification of the “opportune moment”. It might be conceived as the “evil brother” of serendipity: a surprising event, yet not necessarily a serendipitous one. It is the epitome of convergent systems; the individual is persuaded and has no autonomy whatsoever. Yet such hyper-personalized and persuasive techniques will be increasingly employed, and once tasted, such algorithmic recommendations may be impossible to live without. To resist this, it may be beneficial to employ convergent systems for serendipitous recommendations as well. Ultimately, the boundary between Kairos and serendipity lies only in how ‘good’ the intentions (and skills) of designers and engineers are.


  1. Abbott, A. (2008). The traditional future: A computational theory of library research. College & Research Libraries, 69(6), 524–545.

    Article  Google Scholar 

  2. Ahmadi, M., & Wohn, D. Y. (2018). The antecedents of incidental news exposure on social media. Social Media + Society, 4(2), 2056305118772827.

    Article  Google Scholar 

  3. Albanie, S., Shakespeare, H., & Gunter, T. (2017). Unknowable manipulators: Social network curator algorithms. arXiv preprint arXiv:1701.04895.

  4. Auray, N. (2007). Folksonomy: The new way to serendipity. International Journal of Digital Economics, 65, 67–88.

    Google Scholar 

  5. Backstrom, L. (2013). News feed FYI: A window into news feed (p. 6). Menlo Park: Facebook for Business.

    Google Scholar 

  6. Björneborn, L. (2017). Three key affordances for serendipity: Toward a framework connecting environmental and personal factors in serendipitous encounters. Journal of Documentation, 73(5), 1053–1081.

    Article  Google Scholar 

  7. Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619–638.

    Article  Google Scholar 

  8. Bodo, B., Helberger, N., Irion, K., Zuiderveen Borgesius, F., Moller, J., van de Velde, B., … de Vreese, C. (2017). Tackling the algorithmic control crisis—The technical, legal, and ethical challenges of research into algorithmic agents. Yale JL & Tech., 19, 133.

    Google Scholar 

  9. Bogers, T., & Björneborn, L. (2013). Micro-serendipity: Meaningful coincidences in everyday life shared on Twitter. iConference, 2013, 196–208.

    Google Scholar 

  10. Bozdag, E., & Timmermans, E. (2011). Values in the filter bubble Ethics of Personalization Algorithms in Cloud Computing. In Proceedings, 1st International Workshop on Values in Design–Building Bridges between RE, HCI and Ethics, Lisbon.

  11. Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and design. Ethics and Information Technology, 17(4), 249–265.

    Article  Google Scholar 

  12. Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44.

    Article  Google Scholar 

  13. Campanario, J. M. (1996). Using citation classics to study the incidence of serendipity in scientific discovery. Scieontometrics, 37(1), 3–24.

    Article  Google Scholar 

  14. Campos, J., & Figueiredo, A. D. (2002). Programming for serendipity. In Proceedings of the AAAI fall symposium on chance discoveryThe discovery and management of chance events.

  15. Carr, N. (2016). Utopia is creepy: And other provocations. New York: W W Norton & Co Inc.

    Google Scholar 

  16. Carr, P. L. (2015). Serendipity in the stacks: Libraries, information architecture, and the problems of accidental discovery. College & Research Libraries, 76, 831–842.

    Article  Google Scholar 

  17. Carson, A. B. (2015). Public discourse in the age of personalization: Psychological explanations and political implications of search engine bias and the filter bubble. Journal of Science Policy & Governance, 7(1).

  18. Cobo, C., & Moravec, J. (2011). Invisible Learning. Toward a New Ecology of Education. Col·lecció Transmedia XXI. Laboratori de Mitjans Interactius/Publicacions i Edicions de la Universitat de Barcelona. Barcelona.

  19. Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier?. In Ecai (Vol. 2012, pp. 16–21).

  20. Cunningham, D. J. (2001). Fear and loathing in the information age. Cybernetics & Human Knowing, 8(4), 64–74.

    Google Scholar 

  21. Danezis, G., Domingo-Ferrer, J., Hansen, M., Hoepman, J. H., Metayer, D. L., Tirtea, R., & Schiffner, S. (2015). Privacy and data protection by design-from policy to engineering. arXiv preprint arXiv:1501.03726.

  22. Darbellay, F., Moody, Z., Sedooka, A., & Steffen, G. (2014). Interdisciplinary research boosted by serendipity. Creativity Research Journal, 26(1), 1–10.

    Article  Google Scholar 

  23. de Melo, R. M. C. (2018). On serendipity in the digital medium: Towards a framework for valuable unpredictability in interaction Design.

  24. De Rond, M. (2014). The structure of serendipity. Culture and Organization, 20(5), 342–358.

    Article  Google Scholar 

  25. Delacroix, S. (2018). Taking turing by surprise? Designing digital computers for morally-loaded contexts. arXiv preprint arXiv:1803.04548.

  26. Derakhshan, H. (2016). Social Media Is Killing Discourse Because It’s Too Much Like TV in MIT Technology Review. Accessed Jan 15, 2018, from

  27. DeVito, M. A. (2017). From editors to algorithms: A values-based approach to understanding story selection in the Facebook news feed. Digital Journalism, 5(6), 753–773.

    Article  Google Scholar 

  28. Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will remake our world. New York: Basic Books.

    Google Scholar 

  29. Dunbar, K., & Fugelsang, J. (2005). Scientific thinking and reasoning. The Cambridge Handbook of Thinking and Reasoning, 705–725.

  30. Dylko, I., Dolgov, I., Hoffman, W., Eckhart, N., Molina, M., & Aaziz, O. (2018). Impact of customizability technology on political polarization. Journal of Information Technology & Politics, 15(1), 19–33.

    Article  Google Scholar 

  31. Edward Foster, A., & Ellis, D. (2014). Serendipity and its study. Journal of Documentation, 70(6), 1015–1038.

    Article  Google Scholar 

  32. Erdelez, S. (1997). Information encountering: a conceptual framework for accidental information discovery. In Proceedings of an international conference on Information seeking in context (pp. 412–421). Taylor Graham Publishing, London.

  33. Erdelez, S. (2004). Investigation of information encountering in the controlled research environment. Information Processing & Management, 40(6), 1013–1025.

    MATH  Article  Google Scholar 

  34. Erdelez, S., & Jahnke, I. (2018). Personalized systems and illusion of serendipity: A sociotechnical lens. In Workshop of WEPIR 2018.

  35. Eskens, S., Helberger, N., & Moeller, J. (2017). Challenged by news personalisation: Five perspectives on the right to receive information. Journal of Media Law, 9(2), 259–284.

    Article  Google Scholar 

  36. Floridi, L. (2011). The informational nature of personal identity. Minds and Machines, 21(4), 549.

    Article  Google Scholar 

  37. Floridi, L. (2015a). The politics of uncertainty. Philosophy & Technology, 28(1), 1–4.

    Article  Google Scholar 

  38. Floridi, L. (2015b). The onlife manifesto. Cham: Springer.

    Google Scholar 

  39. Floridi, L. (2016a). Mature information societies—A matter of expectations. Philosophy & Technology, 29(1), 1–4.

    MathSciNet  Article  Google Scholar 

  40. Floridi, L. (2016b). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688.

    Article  Google Scholar 

  41. Fogg, B. J., Lee, E., & Marshall, J. (2002). Interactive technology and persuasion. The Handbook of Persuasion: Theory and Practice. Thousand Oaks: Sage.

    Google Scholar 

  42. Forsyth, M. (2014). The unknown unknown: Bookshops and the delight of not getting what you wanted. London: Icon Books Ltd.

    Google Scholar 

  43. Friedman, B., Kahn, P., & Borning, A. (2002). Value sensitive design: Theory and methods. University of Washington technical report, pp. 02–12.

  44. Gal, M. S. (2017). Algorithmic challenges to autonomous choice. Michigan Telecommunications and Technology Law Review, 2017.

  45. Ge, M., Delgado-Battenfeld, C., & Jannach, D. (2010). Beyond accuracy: Evaluating recommender systems by coverage and serendipity. In Proceedings of the fourth ACM conference on Recommender systems (pp. 257–260). ACM, New York.

  46. Gibson, J. J. (2014). The ecological approach to visual perception: Classic edition. Hove: Psychology Press.

    Google Scholar 

  47. Gillespie, T. (2014). The relevance of algorithms. Media technologies: Essays on communication, materiality, and society, p. 167.

  48. Granovetter, M. S. (1977). The strength of weak ties. In Social networks (pp. 347–367). Chicago: University of Chicago Press

    Google Scholar 

  49. Gup, T. (1997). Technology and the end of serendipity. The Chronicle of Higher Education, 44(21), A52.

    Google Scholar 

  50. Harambam, J., Helberger, N., & van Hoboken, J. (2018). Democratizing algorithmic news recommenders: How to materialize voice in a technologically saturated media ecosystem. Philosophical Transactions A, 376(2133), 20180088.

    Article  Google Scholar 

  51. Helberger, N. (2011). Diversity by design. Journal of Information Policy, 1, 441–469.

    Article  Google Scholar 

  52. Helberger, N., Karppinen, K., & D’Acunto, L. (2016). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21, 1–17.

  53. Hendler, J., & Hugill, A. (2013). The syzygy surfer:(Ab) using the semantic web to inspire creativity. International Journal of Creative Computing, 1(1), 20–34.

    Article  Google Scholar 

  54. Hildebrandt, M. (2009). Profiling and AmI. In The future of identity in the information society (pp. 273–310). Berlin: Springer.

    Google Scholar 

  55. Hildebrandt, M. (2017). Privacy as protection of the incomputable self: Agonistic machine learning.

  56. Hildebrandt, M., & Koops, B. J. (2010). The challenges of ambient law and legal protection in the profiling era. The Modern Law Review, 73(3), 428–460.

    Article  Google Scholar 

  57. Hoffmann, C. P., Lutz, C., Meckel, M., & Ranzini, G. (2015). Diversity by choice: Applying a social cognitive perspective to the role of public service media in the digital age. International Journal of Communication, 9(1), 1360–1381.

    Google Scholar 

  58. Hoven, J. van den, Miller, S., & Pogge, T. (Eds.). (2017). Designing in ethics. Cambridge: Cambridge University Press.

    Google Scholar 

  59. Karppinen, K. (2008). Media and the paradoxes of pluralism. The Media and Social Theory, 27–42.

  60. Keymolen, E. (2016). Trust on the line: A philosophycal exploration of trust in the networked era.

  61. Kop, R. (2012). The unexpected connection: Serendipity and human mediation in networked learning. Journal of Educational Technology & Society, 15(2), 2–11.

    MathSciNet  Google Scholar 

  62. Kotkov, D., Wang, S., & Veijalainen, J. (2016). A survey of serendipity in recommender systems. Knowledge-Based Systems, 111, 180–192.

    Article  Google Scholar 

  63. Kroes, P., & van de Poel, I. (2015). Design for values and the definition, specification, and operationalization of values. Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, 151–178.

  64. Krotoski, A. (2011). Digital serendipity: Be careful what you don’t wish for in The Guardian International Edition. Accessed Jan 15, 2018, from

  65. Loepp, B., Hussein, T., & Ziegler, J. (2014). Choice-based preference elicitation for collaborative filtering recommender systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, 2014), pp. 3085–3094.

  66. Lupo, L. (2012). Filosofia della serendipity (Vol. 73). Napoli: Guida Editori.

    Google Scholar 

  67. Lutz, C., Hoffmann, P. C., & Meckel, M. (2017). Online serendipity: A contextual differentiation of antecedents and outcomes. Journal of the Association for Information Science and Technology, 68(7), 1698–1710.

    Article  Google Scholar 

  68. Lynch, M. P. (2016). The Internet of us: Knowing more and understanding less in the age of big data. New York: WW Norton & Company.

    Google Scholar 

  69. Makri, S. (2014). Serendipity is not Bullshit. Paper presented at the EuroHCIR 2014, The 4th European Symposium on Human-Computer Interaction and Information Retrieval, 13 Sep 2014, London, UK.

  70. Makri, S., & Blandford, A. (2012). Coming across information serendipitously—Part 1, p. A process model. Journal of Documentation, 68(5), 684–705.

    Article  Google Scholar 

  71. Makri, S., Blandford, A., Woods, M., Sharples, S., & Maxwell, D. (2014). “Making my own luck”: Serendipity strategies and how to support them in digital information environments. Journal of the Association for Information Science and Technology, 65(11), 2179–2194.

    Article  Google Scholar 

  72. Maloney, A., & Conrad, L. Y. (2016). Expecting the unexpected: Serendipity, discovery, and the scholarly research process (white paper), Thousand Oaks: SAGE.

    Google Scholar 

  73. Marcus, G. E. (2010). Sentimental citizen: Emotion in democratic politics. University Park: Penn State Press.

    Google Scholar 

  74. Matt, C., Benlian, A., Hess, T., & Weiß, C. (2014). Escaping from the Filter Bubble? The Effects of Novelty and Serendipity on Users’ Evaluations of Online Recommendations. In Proceedings of the 35th International Conference on Information Systems (ICIS2014), Auckland, New Zealand.

  75. McCay-Peet, L., & Toms, E. G. (2013). Proposed facets of a serendipitous digital environment. In Teoksessa iConference 2013 Proceedings, ss. 688–691.

  76. McCay-Peet, L., & Toms, E. G. (2017). Researching serendipity in digital information environments. Synthesis Lectures on Information Concepts, Retrieval, and Services, 9(6), i–i91.

    Article  Google Scholar 

  77. Meckel, M. (2011). “Sos—save our serendipity”, Personal Blog.

  78. Merton, R. K., & Barber, E. (2006). The travels and adventures of serendipity: A study in sociological semantics and the sociology of science. Princeton: Princeton University Press.

    Google Scholar 

  79. Nagulendra, S., & Vassileva, J. (2016). Providing awareness, explanation and control of personalized filtering in a social networking site. Information Systems Frontiers, 18(1), 145–158.

    Article  Google Scholar 

  80. Negroponte, N. (1996). Being digital. New York: Vintage.

    Google Scholar 

  81. Olma, S. (2016). In Defence of Serendipity. Watkins Media Limited, 2016.

  82. Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. New York: Penguin.

    Google Scholar 

  83. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard: Harvard University Press.

    Google Scholar 

  84. Peirce, C. S. (1992). The essential Peirce: Selected philosophical writings (Vol. 2). Indiana: Indiana University Press.

    Google Scholar 

  85. Pentland, A. (2015). Social physics: How social networks can make us smarter. New York: Penguin.

    Google Scholar 

  86. Pinch, T. J., & Bijker, W. E. (1984). The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14(3), 399–441.

    Article  Google Scholar 

  87. Polly, J. A. (1992). Surfing the internet. An Introduction. Wilson Library Bulletin, 66(10), 38.

    Google Scholar 

  88. Powers, E. (2017). My news feed is filtered? Awareness of news personalization among college students. Digital Journalism, 5(10), 1315–1335.

    Article  Google Scholar 

  89. Quattrociocchi, W., Scala, A., & Sunstein, C. R. (2016). Echo chambers on Facebook. Available at SSRN:

  90. Race, T., & Makri, S. (2016). Accidental information discovery: Cultivating serendipity in the digital age. Cambridge: Chandos Publishing.

    Google Scholar 

  91. Reviglio, U. (2017). Serendipity by design? How to turn from diversity exposure to diversity experience to face filter bubbles in social media. In International Conference on Internet Science (pp. 281–300). Springer, Cham.

  92. Rubin, V. L., Burkell, J., & Quan-Haase, A. (2011). Facets of serendipity in everyday chance encounters: A grounded theory approach to blog analysis. Information Research, 16(3), 27

    Google Scholar 

  93. Schmidt, E. (2006). How we’re doing and where we’re going. Google Inc. Press Day 2006.

  94. Schönbach, K. (2007). ‘The own in the foreign’: Reliable surprise-an important function of the media? Media, Culture & Society, 29(2), 344–353.

    Article  Google Scholar 

  95. Sen, A. (1990). Justice: Means versus freedoms. Philosophy & Public Affairs, 19(2), 111–121.

    Google Scholar 

  96. Sen, A. (2005). Human rights and capabilities. Journal of Human Development, 6(2), 151–166.

    Article  Google Scholar 

  97. Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.

    MathSciNet  MATH  Article  Google Scholar 

  98. Shearer, E., & Gottfried, J. (2017). News use across social media platforms 2017. Pew Research Center, Journalism and Media.

  99. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.

    Article  Google Scholar 

  100. Smith, A. (2018). Many Facebook users don’t understand how the site’s news feed works. Pew Research Center, Journalism and Media.

  101. Stiegler, B. (2017). The new conflict of the faculties and functions: Quasi-causality and serendipity in the anthropocene. Qui Parle, 26(1), 79–99.

    Article  Google Scholar 

  102. Sunstein, C. R. (2009). Going to extremes: How like minds unite and divide. Oxford: Oxford University Press.

    Google Scholar 

  103. Sunstein, C. R. (2017a). # Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press.

  104. Sunstein, C. R. (2017b). Default rules are better than active choosing (Often). Trends in Cognitive Sciences, 21(8), 600–606.

  105. Sunstein, C. R. (2017c). In praise of serendipity. The Economist. Accessed Feb 4, 2018.

  106. Taleb, N. N. (2012). Antifragile: Things that gain from disorder (Vol. 3). New York: Random House.

  107. Thurman, N. (2011). Making ‘The Daily Me’: Technology, economics and habit in the mainstream assimilation of personalized news. Journalism, 12(4), 395–415.

  108. Thurman, N., & Schifferes, S. (2012). The future of personalization at news websites: Lessons from a longitudinal study. Journalism Studies, 13(5–6), 775–790.

  109. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  110. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

  111. van Andel, P. (1994). Anatomy of the unsought finding. Serendipity: Origin, history, domains, traditions, appearances, patterns and programmability. The British Journal for the Philosophy of Science, 45(2), 631–648.

  112. van den Hoven, J., & Rooksby, E. (2008). Distributive justice and the value of information: A (broadly) Rawlsian approach. In Information Technology and Moral Philosophy (p. 376). Cambridge: Cambridge University Press.

  113. Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.

  114. Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559.

  115. Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.

  116. Yadamsuren, B., & Erdelez, S. (2016). Incidental exposure to online news. Synthesis Lectures on Information Concepts, Retrieval, and Services, 8(5), i–i73.

  117. Yamamoto, M., Hmielowski, J., Beam, M., & Hutchens, M. (2018). Skepticism as a political orientation factor: A moderated mediation model of online opinion expression. Journal of Information Technology & Politics, 15(2), 178–192.

  118. Yaqub, O. (2018). Serendipity: Towards a taxonomy and a theory. Research Policy, 47(1), 169–179.

  119. Yeung, K. (2017). ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.

  120. Zarsky, T. Z. (2002). Mine your own business: Making the case for the implications of the data mining of personal information in the forum of public opinion. Yale Journal of Law & Technology, 5, 1.

  121. Zuckerman, E. (2013). Rewire: Digital cosmopolitans in the age of connection. New York: W. W. Norton & Company.

  122. Zuiderveen Borgesius, F. J., Trilling, D., Moeller, J., Bodó, B., De Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review: Journal on Internet Regulation, 5(1), 16.


Funding

This research was funded by the ERASMUS MUNDUS program in Law, Science and Technology (LAST-JD), coordinated by the University of Bologna.

Author information



Corresponding author

Correspondence to Urbano Reviglio.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

Informed consent

Research for this paper did not involve animal or human participants, nor was there a need to request informed consent from anyone.

About this article

Cite this article

Reviglio, U. Serendipity as an emerging design principle of the infosphere: challenges and opportunities. Ethics Inf Technol 21, 151–166 (2019).


Keywords

  • Serendipity
  • Design ethics
  • Nudging
  • Personalization
  • Filter bubbles
  • Echo chambers