1 Introduction

The proliferation of ubiquitous computing in recent years has brought with it a dramatic increase in data collection, data generation and automatic data processing of non-personal as well as personal data. These data are often used and reused, in original and modified form, by different involved parties and published to online networks or on the internet. Within the resulting complex ecosystems of data flows, it becomes increasingly hard for individuals to exercise control over data that concern them. From this observation arises the new research agenda of human–data interaction (HDI), which proposes “placing the human in the center of the flows of data and providing mechanisms for citizens to interact with these systems and data explicitly” (Mortier et al. 2014, p. 1). This article proposes granular computing (GC) as a potential theoretical, formal and computational basis for HDI. Here, information granules are understood as complex information entities that result from abstraction, generalization, approximation, aggregation or other forms of derivation of knowledge from data or information, and are considered a central part of human reasoning.

We argue that the ability of computer systems to represent and process information granules is pivotal to human-centered interaction with data systems:

  1. Only if computers are given the ability to abstract, generalize, approximate, etc. from detailed numerical data can humans intuitively understand the results of automated data processing and their implications for them.

  2. Conversely, only if people are given the possibility to provide input and feedback to data systems in a way that comes naturally to them (i.e., in a granular form) can they consciously participate in and actively control the produced information that concerns them.

Information granulation involves the ability to differentiate necessary from unnecessary detail in a given semantic context, and, when used for interaction with data systems, also the ability to assess people’s intentions, i.e., the pragmatics of a given piece of information. Both are highly complex tasks and subjects of research in numerous disciplines, cf., e.g., Noy et al. (2013). These tasks require that computers be able not only to represent, but also to process information in a way that emulates human thinking and reasoning, and we argue that an indispensable requirement for achieving this is the ability to represent and reason with granular information. We argue that the general framework of granular computing provides methods and tools for granular representation and reasoning, together with the necessary theoretical foundations to embed diverse granular methods and approaches in consolidated common processing flows.

Moreover, we argue that the use of granular computing as a basis of HDI allows for extending the HDI research agenda with the topic of collective intelligence amplification, which considers human control an intrinsic part of (big) data systems. The goal of collective intelligence amplification is to augment human (group) intelligence with information technology by tightly interweaving algorithmic processing with networked collaborative, interactive and iterative human feedback. We argue that granular information processing is a necessary prerequisite for realizing such a symbiotic man–machine relationship, and, conversely, that such a relationship is a prerequisite for the possibility to exercise control over one’s data. As an example application of collective intelligence amplification, we discuss the concept of cognitive cities as an urban environment that is characterized by high information connectivity and high social connectivity, thus furthering collaborative, computer-supported work, increased democratic participation and resilience against external shocks.

To exemplify the use of granular computing in HDI and particularly in collective intelligence amplification, we introduce a use case describing a collaborative planning endeavor in a cognitive city environment. Collaborative or participatory planning is an urban planning paradigm that seeks to further bottom–up community-level planning processes, and tries to reconcile different views of participants and to thereby prevent potential conflicts. In our use case, people as well as information are assumed to be connected in a digital space. To support and augment the collaborative planning process, an iteratively updated visualization of user suggestions and ideas is automatically generated in the form of a floor plan that expresses common ideas of the persons involved, and allows them to immediately react with feedback. The automatically generated floor plan helps participants to immediately recognize flaws in their ideas, visually discover commonalities of and differences between ideas, as well as possibilities and opportunities that would otherwise not be obvious to them. For the generation of the floor plan, a granular spatial planning tool is used, whose data structure is based on a granular geometry. Granular geometry as introduced by Wilke (2015) is used as an example of a granular calculus. It allows people to provide their ideas in common sense terminology by accepting extended physical objects (such as trees, streets or houses) as atomic units of geometric processing. For example, a person may say “I’d like the children’s playground to be situated between the park entrance, the well, and the big oak tree”, and a granular geometry algorithm constructs a corresponding “granular triangle” without resorting to exact algorithms that use coordinate points and lines. In contrast to existing heuristics, constructions in granular geometry are logically sound, i.e., reliable: granular geometry is built on mathematical fuzzy logic and therefore has the intrinsic ability to tolerate imprecision without impeding the soundness of the reasoning process.

The remainder of the article is structured as follows: Sect. 2 briefly introduces the core ideas of human–data interaction and discusses the concept of intelligence amplification in the context of HDI, together with cognitive cities as an application example. Section 3 introduces granular computing, and elaborates on granular computing as a basis for HDI and its proposed extended research agenda that includes intelligence amplification. Section 4 introduces the basic ideas of granular geometry and sketches the collaborative urban planning use case. We then show by example how granular geometry can support and augment the collective planning process.

2 Human–data interaction and collective intelligence amplification

We briefly introduce the core ideas of human–data interaction, collective intelligence amplification and cognitive cities in Sects. 2.1–2.3, respectively, where cognitive cities serve as an example application of collective intelligence amplification. Section 3.1 discusses the fundamental HDI research topic of data legibility. We argue that a human-centered approach to (big) data systems requires not only legibility, but intuitive legibility, and we propose three requirements for it. Section 3.2 briefly introduces granular computing, and Sect. 3.3, based on the three identified requirements, gives arguments why we think that granular computing provides an appropriate theoretical, formal, and computational basis for addressing intuitive legibility in data systems, collective intelligence amplification and HDI in general.

2.1 Human–data interaction

With the proliferation of ubiquitous sensor technology and computing, the mutual interdependence between human decision making and analytic and predictive algorithms grows tighter. From the increasing demand for new methods that help cope with the direct and indirect impact this interdependence has on our lives as individuals and as a society emerges a new research field, namely the field of human–data interaction (HDI) (Mortier et al. 2014; Haddadi et al. 2013; Cafaro 2012; Kee et al. 2012; McAuley et al. 2011). While the long-standing research field of human–computer interaction (HCI) focuses primarily on the interaction with computers as artifacts (Mortier et al. 2014), HDI emphasizes other aspects of interaction with computer systems, namely

  • the individual and social interaction with (big) data itself, independent of the interfaces used, and, in particular,

  • the possibilities of interaction with derived (inferred) data that is generated by analytic and predictive algorithms.

As illustrated in Fig. 1, data are collected and analyzed with and without our knowledge or active participation, and information about us is derived from the results of analysis by automatic inference (Mortier et al. 2014). Frequently, the inferred information is fed back into the system as newly generated data, thereby forming a feedback loop. Here, the inferred data may either be directly used as input for further analysis and inference, or it may trigger actions that influence our behavior (e.g., through the creation of a filter bubble). Our changed behavior in turn changes our personal data, which is again collected and fed back into analytic algorithms for further analysis and inference.

Fig. 1 Human–data interaction today, after Mortier et al. (2014)

This iterated process of data creation, data collection, data analysis, inference and action increasingly often becomes a process with its own internal dynamics, whose consequences may partially or fully escape the control of the individuals or groups concerned by it. Well-known examples include identity theft, third-party disclosure or social profiling in online social networks (Gross and Acquisti 2005; Krishnamurthy and Wills 2009; Korolova 2012). It is the goal of HDI research as understood by Mortier et al. (2014) and Crabtree and Mortier (2015) to investigate how individuals and groups can control and exercise agency over data generated in these kinds of feedback loops. More generally, they propose to investigate possibilities for active interaction with derived data:

“HDI seeks to transform the current circumstances of interaction from a passive situation in which personal data or more specifically ‘data about you’ is generated in your mundane interactions with digital infrastructure and is increasingly accessible to third party use, into an active situation in which ‘my data’ and its subsequent use is actively managed and controlled by the people who produce it.” (Crabtree and Mortier 2015, p. 4).

This also includes the investigation of strategies to manage derived data that does not “belong” to a single individual or homogeneous group:

“The interactional situation is further complicated by the recognition [...] that ‘data about you’ may not be yours but may either be generated by you on behalf of a third party (e.g., the taxman) or by third parties (e.g., retailers) in your interactions with their services. This creates an interactional situation that negates ‘data containment’ - i.e., the idea that ‘data about you’ could be handed over to you and that third parties could be prohibited from distributing copies of it without your permission.” (ibid)

To address these challenges, Mortier et al. (2014) identified three focus areas of HDI research: Legibility “is concerned with making data and analytics algorithms both transparent and comprehensible to the people the data and processing concerns”, agency is “concerned with giving people the capacity to act within these data systems”, and negotiability concerns “the many dynamic relationships that arise around data and data processing”.

2.2 Collective intelligence amplification in the context of HDI

As pointed out by Mortier et al. (2014), providing legibility of (raw and derived) data is the basis and prerequisite for being able to provide agency and negotiability over personal data in HDI: People can only interact with data systems if the collected and derived data is comprehensible to them. Yet, Mortier et al. also admit that “[...] we might reasonably anticipate that many people will not often need or desire the capacity to act within these data collection and processing systems. However, many will from time-to-time, and some enthusiasts may do so more frequently and we claim they must be supported in doing so” (Mortier et al. 2014, p. 8).

We believe that increasing agency in data systems and motivating people to actively participate in collaborative HDI endeavors is an important goal that enhances democratic participation and thereby increases the resilience of our society against external shocks. Here, we understand the term “personal data” in a broad sense as data that is related in any way to an identified or identifiable person. This may include a person’s financial transaction data as well as the pollutant emission levels or urban restructuring plans in a person’s home district or the fact that a person’s favorite singer has published a new song on an audio distribution platform. Accordingly, we feel that the closed data feedback loop described by Mortier et al. should be adapted to harness the potential of collective intelligence [i.e., groups of individuals acting collectively in ways that seem intelligent (Kelly and Hamm 2013)] by tightly interweaving it with collaborative, interactive and iterated human feedback.

Fig. 2 Intelligence amplification merges computational and human intelligence and requires collaborative, iterative, interactive and intuitive feedback. Modified after Mortier et al. (2014)

Humans can process their data on different levels of granularity, as emphasized in Fig. 2. They can also offer other humans access to their data, again at different, self-determined granularity levels. These levels mark a relative size, scale, level of detail, or depth of penetration that characterizes certain data, and they can also be used for analytics purposes. Along the same lines, inferences may be drawn on different levels of granularity. For these inferences, we need not rely on automation alone, but can also include other humans, who can draw inferences on certain levels of granularity.

On this view, intelligence is not just something that happens inside individual brains; rather, it also exists within groups of individuals (Siemens 2006).

In the last decade, a kind of enhanced collective intelligence has emerged, i.e., groups of people and computers, connected by the Internet, collectively doing intelligent things (Kelly and Hamm 2013; Portmann 2013). Examples are participatory mapping projects such as OpenStreetMap, participatory planning as exercised in the 1940s in Great Britain, or participatory budgeting as exercised in Porto Alegre (Brazil) since 1989. The underlying information agenda of collective intelligence concentrates on helping human beings by understanding (their) context. To this end, Portmann et al. (2012) introduced a conceptual framework for global collective intelligence amplification. A form of augmented intelligence by hybrid human–machine learning can be achieved by the complementary interaction between humans and their data. Along these lines, amplified intelligence solutions always start from the user down, not from the data model (and analytics) up. The Internet (most often) helps dovetail these interactions in a respective intelligence amplification loop.

The effective use of information and communication technology (ICT) underlying this loop was first proposed by early computer pioneers such as Ashby (1956), Licklider (1960), and Engelbart (1962). Nowadays, applying advanced techniques (e.g., machine learning and predictive modeling) to big data helps improve decision making. Computational intelligence includes artificial neural networks, evolutionary computation (e.g., swarm intelligence and immunological paradigms) and fuzzy logic (and rough set theory) (Kacprzyk 2015). Kaufman and Portmann (2015) propose using nature-inspired computational methodologies and approaches to tackle complex real-world problems (for which traditional approaches are often ineffective or infeasible).

Today, computational intelligence is further refined and extended by collective intelligence, creating a feedback loop in which humans themselves also augment its advanced models. The notion is that human creativity (even in the age of machine learning) can and should continue to flourish. The goal, however, is mutual elevation. As machine learning is enhanced by computational intelligence, human beings have the opportunity for more nuanced and valuable pursuits. As these pursuits become increasingly nuanced and valuable, they feed important feedback into the system. In this manner, with their feedback, humans teach computing systems how to process data and information. With this kind of supervised learning, the systems can apply what they have learned to large volumes of data and information. The overall outcome is that computational intelligence amplifies human intelligence to instantiate a kind of urban intelligence (Moyser 2013).

Starting from here, in the next section, we introduce the concept of cognitive cities (Portmann and Finger 2016), which goes beyond urban intelligence, as one of the most far-reaching applications of an unleashed intelligence amplification loop.

2.3 The cognitive city as an example of intelligence amplification

Embedded in novel learning and cognition theory (Siemens 2006), a cognitive city is a cognition-enhanced city. The city’s underlying systems learn and interact naturally with citizens to extend what neither humans nor machines could do on their own. This human–machine symbiosis may be understood as enhanced collective intelligence (Portmann 2013) (or, likewise, as the installation of urban intelligence (Moyser 2013) into the city’s ecosystem). That is to say, cognitive systems help individual citizens make better decisions by penetrating the complexity of ubiquitous city data. Accordingly, their main premise is to collect (big) city data from disparate sources and process it in the best possible way to produce useful information (i.e., to help decision makers make the best informed decisions for the benefit of the city).

Fig. 3 Development of cities

A cognitive city is an efficient, sustainable, and resilient urban system that emerges from merging a smart (i.e., efficient) city with a learning (i.e., sustainable) city. Based on the development status of present cities, Fig. 3 shows two possible paths towards cognitive cities. Many cities today address their challenges with smart city initiatives, i.e., they increase the degree of connectivity of information. According to Portmann and Finger (2016), smart city initiatives thereby mainly address efficiency issues; however, they most often leave sustainability and, ultimately, resilience unaddressed. In contrast, an increased degree of social connectivity leads to learning cities, where, beyond becoming more efficient, cities also become economically, socially, and ecologically more sustainable (Portmann and Finger 2016). Social connectivity here means more connections between individual humans, which favors tapping a group’s collective intelligence. This, in turn, helps the group behave more sustainably, e.g., through a more careful use of resources such as water. The combination of increased information connectivity and social connectivity leads to cognitive cities, where urban systems ultimately become more efficient (i.e., acting well-organized and proficient), sustainable (i.e., able to continue over the long run), and, additionally, resilient (i.e., capable of withstanding external shocks).

To grow cognitive cities, social connectivity and information connectivity must be tightly interwoven, and citizens need to be empowered and motivated to actively participate in individual and collaborative HDI, i.e., to use the generated city data in their individual and collective best interest for building resilient social information networks. To achieve this goal, the means of human–data interaction must be advanced dramatically: city data must be made available in the form of open data; in the sense of Mortier et al. (2014), data must be made legible for ordinary citizens; citizens’ agency over their private data must be ensured; there must be a means of addressing negotiability of data over time; and, finally, citizens must be enabled and motivated to actively participate in individual and collective HDI by addressing the challenges discussed in Sect. 2.1.

3 Granular computing as a basis for HDI

We believe that a vision of future HDI that allows for collective intelligence amplification, e.g., for enabling the development of cognitive cities, can only be realized if we are successful in defining general tools and methods that make interaction with a pervasive computing environment as natural to us as our daily interactions with the physical world. To this end, the following Sect. 3.1 refines the requirement of legibility proposed by Mortier et al. (2014). Section 3.2 introduces granular computing, and Sect. 3.3 shows that it supports the identified requirements, thereby providing a theoretical, formal and computational basis for HDI. In Sect. 3.4, we discuss the dangers of the approach.

3.1 Intuitive data legibility

Refining the requirement of data legibility proposed by Mortier et al. (2014), we identify three requirements for achieving natural interaction with a pervasive computing environment:

3.1.1 Requirement 1: intuitive data representation methods

We believe that one requirement is to provide tools and methods that allow for mapping computer-processed data and their implications for certain individuals or groups to a format that is not only legible (i.e., comprehensible) for them, but, more specifically, intuitively legible (i.e., intuitively comprehensible): In a data-driven society, people cannot be expected to have the time and/or background knowledge to decipher histograms and time-series data or to contemplate the possible implications of every small decision they make during their mundane interactions with a pervasive computing infrastructure. An example already familiar today is that we often tend to accept the terms and conditions of a mobile application we download without reading them, because we do not fully grasp their implications for us or do not have the time to contemplate them. The main challenges here are the transformation of very detailed, often exact numerical data to an appropriate level of (less) detail by, e.g., abstraction, generalization, approximation or aggregation; the transition from data to information, i.e., the fact that the same data can have different semantics and implications for different people depending on, e.g., a person’s background knowledge, experiences, opinions and circumstances of life; and the assessment of these implications for certain individuals.

3.1.2 Requirement 2: intuitive feedback methods

We believe that a second requirement for enabling and motivating people to actively participate in individual and collective HDI is to allow for providing feedback in an intuitive form, e.g., in graphical/visual form or in the form of natural language or even colloquial descriptions of circumstances. Challenges here are manifold and include, e.g., discourse analysis, disambiguation, and the recognition of metaphors, analogies, irony, humor, etc. Many of the underlying challenges are rooted in the varying granularity (e.g., in temporal references) (Mulkar-Mehta et al. 2011) and in the inherent contextuality and “imperfection” of the provided information (i.e., imprecision, ambiguity, vagueness, inconsistency, etc.). Considerable progress is being made along these lines today, as demonstrated by projects such as Apple’s Siri, IBM’s Watson, or Wolfram’s Wolfram Alpha. These and similar endeavors use methods of artificial intelligence, including semantic technologies, natural language processing and machine learning in general, to ‘understand’ human input.

3.1.3 Requirement 3: intuitive data processing logic

Yet, for an intelligence feedback loop to take effect, we think that it is not only necessary for machines to understand human input (i.e., syntax and semantics), but, as a third requirement, also to process information analogously to the way humans do, in order to be able to assess possible implications for people’s lives (in the case of intuitive data representation) as well as people’s intentions (in the case of intuitive human feedback). Examples of endeavors along this line are perception-based approaches to formal reasoning such as computing with words (CWW) (Batyrshin et al. 2007; Zadeh 2002), granular geometry (GG) (Wilke 2012, 2015), and formalizations of common sense models of the world such as naive physics (Hayes 1979) or naive geography (Egenhofer and Mark 1995). In computing with words, a critical point is “to develop reasoning mechanisms that are able to map input words, perceptions and propositions to words, decisions, etc.” (Mendel et al. 2010). Granular geometry is based on the cognitive science of mathematics proposed by Lakoff and Núñez (2000), and provides a framework for extending classical geometries so that automatic constructions can be made not only from exact coordinate input, but also from natural language input (which usually refers to extended objects as atomic entities). In Sect. 4, we give an example of collaborative city planning where a map is automatically generated from natural language input using granular geometry. Naive geography is concerned with designing formal (i.e., computer processable) models of common sense knowledge about representing and reasoning with geographic space. As pointed out by Egenhofer and Mark (1995), “[n]aive geography is a necessary underpinning for the design of GISs [geographic information systems] that can be used without major training by new user communities such as average citizens, to solve day-to-day tasks. [...] to date there are, for instance, no models for a comprehensive treatment of different kinds of spatial concepts and their combinations that are cognitively sound and plausible. More flexible and advanced methods are needed to capture the results from cognitive scientists’ studies”.

Among the challenges of intuitive, human-oriented information processing is the specification of formal calculi that tolerate imperfect information and different levels of detail (i.e., information granularity). In particular, collaborative (participative) work and decision making for collective intelligence amplification (such as, e.g., collaborative urban planning as described in Sect. 4) are prone to including a great variety of granulation levels as well as partially contradictory information.

In summary, we believe that for solving the challenges of human-centered HDI and exploiting the resulting feedback loop for collective intelligence amplification, data must be representable and processable by computers in a human-oriented, i.e., intuitive and cognitively sound and plausible, way. To achieve this, it is necessary to provide knowledge representation and reasoning methods that allow for transforming the data-oriented (mostly quantitative, detailed, and often objective and consistent) models of the world produced by computer processing and sensor technology into the human-oriented (mostly qualitative, abstracted, and often subjective or even inconsistent) models of the world used by us, and conversely. This concerns not only human-oriented representations, but also human-oriented data and information processing.

In the following section, we briefly introduce the general framework of granular computing and argue that it can provide a fundament for such representation and reasoning methods, i.e., a basis for HDI.

3.2 Granular computing

Granular computing (GC) (Zadeh 1997, 2007; Pedrycz 2001, 2006, 2015; Lin 2003; Yao 2004; Pedrycz et al. 2008; Bargiela and Pedrycz 2008) is an umbrella term for methods, tools and techniques that explicitly represent and reason with different granularities of information (i.e., different levels of detail, abstraction or generalization).

Information granules are “collections of entities characterized by some notions of closeness, proximity, resemblance or similarity” (Pedrycz 2015, p. 1). For example, when referring to minutes, hours, days, weeks, months and years, we use a hierarchy of temporal information granules: an hour is an aggregation of 60 consecutive minutes, a day is an aggregation of 24 consecutive hours, etc. In descriptions of geographic space, we use hierarchies of spatial information granules, such as parcel, district, city, country, continent, etc., where a district is an aggregation of spatially close parcels of land, a city is composed of districts, etc. Abstract concepts can also be seen as information granules and are often embedded in a granulation hierarchy, such as, e.g., the natural numbers, integers and real numbers.
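To make the notion of a granulation hierarchy concrete, the following minimal sketch (our illustration, not taken from the cited literature) aggregates minute-level readings into hourly and daily granules; the mean aggregator and all names are illustrative assumptions.

```python
# Minimal sketch of a temporal granulation hierarchy (illustrative only):
# exact minute-level readings are aggregated into coarser information
# granules (hours, days) by a simple mean aggregator.
from statistics import mean

def granulate(readings, granule_size):
    """Aggregate a flat list of readings into granules of a fixed size."""
    return [mean(readings[i:i + granule_size])
            for i in range(0, len(readings), granule_size)]

minute_data = [float(i % 60) for i in range(2880)]  # two days of minute readings
hourly = granulate(minute_data, 60)                 # 48 hourly granules
daily = granulate(hourly, 24)                       # 2 daily granules
print(len(hourly), len(daily))                      # -> 48 2
```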

Reasoning with information granules “make[s] use of groups, classes or clusters of a universe, in the process of problem solving” (Yao 2004, p. 232). For example, in giving directions, we as humans use spatial information granules of different sizes in the form of physical objects (such as ‘London’ and ‘Piccadilly Circus’) for finding and describing the shortest path from A to B. This is in contrast to the reasoning approach implemented in automotive navigation systems, which use shortest path algorithms that are based only on exact numerical geographic coordinates. Granular calculi are formalizations of approaches to reasoning with information granules, and they are used in virtually every application domain. Examples are fuzzy rule-based systems in control, qualitative spatial calculi in geographic information systems (GISs) and robotics, or semantic inference on the Semantic Web. The basic ideas of granular computing have been explored in many disciplines, such as electrical engineering, theoretical computer science, artificial intelligence and geographic information science.

Granular computing as an emerging paradigm and general computing framework (Pedrycz 2006) seeks to investigate the “essential commonalities between the surprisingly diversified problems and technologies used there [...]” (p. 20). It investigates, e.g., the commonalities of granular models of the world, possible communication between these models, or common properties and operators of generic formal frameworks of granular computing (such as, e.g., fuzzy logic, interval analysis or rough set theory). Granular computing provides a theoretical perspective on representation and reasoning with information granules, and thereby allows for automated information processing on different abstraction levels, mimicking a central element of human-oriented reasoning. As pointed out by Hobbs (1985), the human ability to abstract from unnecessary detail is a pivotal ingredient in human reasoning: We (humans) “look at the world under various grain sizes and abstract from it only those things that serve our present interests [...] It enables us to map the complexities of the world around us into simpler theories that are computationally tractable to reason in.” Pedrycz (2015, p. 2) motivates the use of granular computing for automated human-oriented information processing:

“Human-centricity comes as an inherent feature of intelligent systems. It is anticipated that a two-way effective human–machine communication is imperative. Human[s] perceive the world, reason, and communicate at some level of abstraction. Abstraction comes hand in hand with non-numeric constructs, which embrace collections of entities characterized by some notions of closeness, proximity, resemblance, or similarity. These collections are referred to as information granules. Processing of information granules is a fundamental way in which people process such entities. Granular Computing has emerged as a framework in which information granules are represented and manipulated by intelligent systems. The communication of such intelligent systems with the users becomes substantially facilitated because of the usage of information granules.”

3.3 Granular computing for intuitive data legibility

We argue that granular computing, as a theoretical perspective on human-oriented automated information and knowledge representation and reasoning, provides a formal and computational basis for human–data interaction as proposed by Mortier et al. (2014). With information granulation being an intrinsic part of human reasoning, it provides the indispensable background theory for making algorithmically processed data not only legible, but intuitively legible, thereby providing the foundation for the collective, iterated, interactive and intuitive HDI necessary for collective intelligence amplification. In the remainder of this subsection, we elaborate on the role of granular computing in providing intuitive data legibility by referring to the three requirements for intuitive legibility proposed in the foregoing Sect. 3.1.

3.3.1 GC supports intuitive data representation methods

To make exact and detailed algorithmically processed data intuitively conceivable for humans, it is necessary to abstract from unnecessary detail. This function can be provided by granulation operators that allow for information granulation, i.e., the transformation from more detailed to less detailed information representations, e.g., by means of aggregation, abstraction, or simplification. This includes, in particular, methods for transforming exact (numerical, data-oriented) representations to abstracted (more qualitative and human-oriented) ones. Examples are the fuzzification operators of fuzzy systems or the approximation operators of rough set theory. Here, the difference between necessary and unnecessary detail depends (besides other more subtle, e.g., psychological, influences) strongly on a person’s background knowledge and her spatio-temporal context, and governs the choice of granulation operator to be used. Being part of a granular calculus, “[i]nformation granules come with underlying rules describing syntax and semantics. The semantics addresses the meaning conveyed by an information granule.” (Bargiela and Pedrycz 2003, p. 6). To the authors’ knowledge, there are today no or only very limited means of automatically choosing a granulation operator that is appropriate for an individual’s background knowledge and context. Yet, with the ever increasing collection of personal data that comes with the proliferation of pervasive computing, the possibilities of representing personal context and thereby allowing for personalized information granulation can be expected to increase dramatically in the coming years.
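As a concrete illustration of one such granulation operator, the following sketch fuzzifies an exact numeric reading into linguistic granules with graded membership. The triangular membership functions, their parameters and the pollutant example are our own assumptions, not part of the cited frameworks.

```python
# Minimal sketch of a fuzzification-style granulation operator: an exact
# numeric value is mapped to linguistic information granules ("low",
# "medium", "high") with graded membership. All parameters are illustrative.

def triangular(a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

granules = {
    "low": triangular(-1.0, 0.0, 25.0),
    "medium": triangular(15.0, 35.0, 55.0),
    "high": triangular(45.0, 70.0, 1000.0),
}

def granulate(x):
    """Degrees of membership of an exact reading in each linguistic granule."""
    return {label: round(mu(x), 2) for label, mu in granules.items()}

print(granulate(20.0))  # -> {'low': 0.2, 'medium': 0.25, 'high': 0.0}
```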

3.3.2 GC supports intuitive feedback methods

When humans provide feedback to a data system, both representations must be transferred into a common data format in order to use the granulated input information in conjunction with existing exact numerical data produced by algorithmic processing. This may be done by using granularity conversion operators, which are usually part of granular calculi: As mentioned above, granulation operators allow for transforming exact data into granulated information, which, together with the granular human input, may be further processed by a suitable granular calculus. Conversely, degranulation operators (such as, e.g., defuzzification operators) may be used to transform the granular human input into an exact representation format (e.g., by choosing a suitable exact representative) to allow for exact processing. Yet, this approach carries the risk of defective exact processing, since degranulation usually causes information loss (as, e.g., in defuzzification). Another alternative is the use of granular calculi that can handle mixed granularity levels (including exact data), i.e., information granules of different sizes (including size zero). A well-known example in mathematics is the Landau notation, which classifies mathematical functions according to their growth rate and allows for performing calculations with orders of magnitude. Here, the function classes can be seen as function granules, and rules exist for the combination of granules of different sizes [e.g., \(O(k)\cdot O(x^{2})=O(x^{2})\) for every constant k, where \(O(k)\) is ’smaller’ than \(O(kx^{2})\) in the sense of \(O(k)\subset O(kx^{2})\)]. Another example of a calculus that can handle mixed granule sizes is granular geometry, which we introduce in Sect. 4.
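The information loss caused by degranulation can be made visible with a small round-trip experiment: granulating an exact value as in the fuzzification sketch above and degranulating again with a weighted-average defuzzifier generally does not recover the input. The prototype positions below are, again, illustrative assumptions.

```python
# Minimal sketch of degranulation by weighted-average defuzzification.
# The memberships below are the output of granulate(20.0) from the sketch
# above; the granule prototypes are illustrative assumptions.

PEAKS = {"low": 0.0, "medium": 35.0, "high": 70.0}  # illustrative prototypes

def degranulate(memberships):
    """Weighted average of granule prototypes -- one common defuzzifier."""
    total = sum(memberships.values())
    if total == 0:
        raise ValueError("no granule activated")
    return sum(m * PEAKS[g] for g, m in memberships.items()) / total

memberships = {"low": 0.2, "medium": 0.25, "high": 0.0}  # granulate(20.0)
print(round(degranulate(memberships), 1))  # -> 19.4: the exact input 20.0
                                           # is not recovered (information loss)
```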

3.3.3 GC supports intuitive data processing logics

Information granules are only intuitively legible for people if the semantics of the information granules captures not only the current background knowledge and the personal and spatio-temporal context of a person, but also the possible future implications of the piece of information at hand. Conversely, human granular feedback can only be fully ‘understood’ if a person’s intentions (i.e., possibly planned future actions) can be assessed. This is particularly important for the iterated feedback cycles of collective intelligence amplification, where computational intelligence in the form of intelligent algorithms is used to support, e.g., collective decision making. To assess possible implications or intentions, it must be possible to draw inferences on granular information. In other words, granular calculi must be available that implement formal (i.e., computer processable) models of human-oriented granular reasoning. While not all granular reasoning approaches must be cognitively sound in the sense of Egenhofer and Mark (1995), it is essential that such calculi are available, e.g., to assess the intention of human feedback. Examples of granular calculi are fuzzy rule-based systems (Zadeh 1992), computing with words and perceptions (CWP) (Zadeh 1996a), the region connection calculus (RCC) (Randell et al. 1992), and granular geometry (GG) (Wilke 2015). An example of formally modeling naive (i.e., common sense) human reasoning with qualitative relation queries is given by Fogliaroni and Hobel (2015) in the context of spatial reasoning.

An essential functionality of granular calculi for data legibility in HDI is granularity conversion (i.e., the transition between different granulation levels), not only with respect to representation, but also with respect to reasoning, e.g., by providing different operations or algebras on different granulation levels, together with a matching between them. Granular calculi may alternatively allow for reasoning with mixed granule sizes (i.e., mixed levels). Since information granulation includes aggregation, approximation, abstraction or simplification, it introduces data imperfections. Consequently, another essential functionality is the ability to handle data imperfections in a (logically) sound, i.e., reliable, way. Examples of generic formalisms of granular computing that are explicitly designed for handling imperfect information are the Dempster–Shafer theory of belief (Dempster 1967; Fine 1977), probability theory, possibility theory (Zadeh 1999; Dubois and Prade 2001), rough set theory (Pawlak 1981), fuzzy logic (Zadeh 1965), shadowed sets (Pedrycz 1998), interval computation (Kreinovich 2008), and qualitative calculi (Ligozat and Renz 2004). The ability to utilize higher order information granules for reconciling seemingly contradictory information (as, e.g., generated in collaborative work) by interpreting it as higher order data imperfection is considered an essential functionality of granular calculi in HDI.

Finally, the generic theoretical underpinning and unified framework provided by granular computing allows for embedding the many diverse granular methods, approaches and calculi in a single meta-theory. It thereby provides tools and methods (such as, e.g., methods of communication between granular worlds) to consolidate the different granular models and calculi that are potentially used in HDI and collective intelligence amplification.
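Among the formalisms listed above, interval computation is perhaps the simplest to illustrate in code: exact numbers are granules of size zero, granules of different sizes combine freely, and the calculus is sound in the sense that every result is guaranteed to contain all possible exact outcomes. The sketch below is our own minimal illustration, not taken from Kreinovich (2008).

```python
# Minimal sketch of interval computation as a granular calculus with
# mixed granule sizes: an exact number is an interval of width zero, and
# each operation is sound, i.e., its result contains every possible
# exact outcome.

class Interval:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A granule of nonzero size combined with exact (size-zero) granules:
print(Interval(2.0, 3.0) + Interval(0.5))   # -> [2.5, 3.5]
print(Interval(2.0, 3.0) * Interval(-1.0))  # -> [-3.0, -2.0]
```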

3.4 Dangers of information granulation in the context of HDI

As discussed in the last section, we think that the granulation of data and information is an essential prerequisite for providing intuitive data legibility and, by extension, a prerequisite for the design of tools and methods that can help people regain control over their personal data. Yet, since granular operators for aggregation, simplification, abstraction, etc. necessarily transform not only the syntactic representation of data, but also their semantics and pragmatics, the risk of deliberate semantic manipulation of information by way of granular transformations, with the goal of influencing or even controlling people’s decisions, is inherent to the approach. We believe that this risk cannot be avoided, but that it may be mitigated by a formal study of the semantic implications of granulation operators, and subsequently by their rigorous standardization. We consider the control of semantics and pragmatics one of the biggest research challenges in HDI. The granular computing framework as a potential formal basis for HDI carries the potential of facilitating this endeavor, since it strives to make data transformations explicit and controllable.

Summing up, we believe that HDI needs a theoretical, formal and computational basis, and that, despite the dangers inherent to information granulation, the framework of granular computing can provide this basis. In the remainder of the article, we introduce a granular calculus, namely the calculus of granular geometry, as an example, and apply it to a cognitive cities use case to illustrate the potential of granular calculi for collaborative decision making.

4 Use case: granular geometry for collaborative urban planning in cognitive cities

In this section, we exemplify how granular computing can contribute to collective intelligence amplification, using the granular calculus of granular geometry (Wilke 2015) in a cognitive cities use case. Granular geometry provides a formal framework for geometric reasoning with location granules. It thereby provides a mathematical basis for granular reasoning in geometry-based spatial planning tools such as vector-based geographic information systems. The transformation of the mathematical theory of granular geometry into computational geometry is the subject of further work.

In the described use case, we sketch a collaborative urban planning task in an envisioned cognitive city environment, where we use granular geometry to automatically generate a floor plan based on human suggestions and feedback provided in an online discussion forum. The floor plan is iteratively updated as new suggestions come in. We show, in particular, how it handles imperfect information, and how it may help to reconcile seemingly contradictory information. Before elaborating on the use case in Sect. 4.2, the following Sect. 4.1 briefly introduces the calculus of granular geometry and its underlying ideas.

4.1 Granular geometry

We introduce the general idea of granular geometry (Wilke 2012, 2015) in Sect. 4.1.1 and briefly sketch its underlying ideas in Sects. 4.1.2–4.1.3.

4.1.1 What are granular geometries?

A granular geometry as envisioned by Wilke (2015) extends the traditional geometry-based vector data structure of spatial planning tools and geographic information systems (GIS) by allowing not only exact coordinate points and lines as basic data types, but also so-called position granules (PGs) in this role. For instance, Piccadilly Circus is interpreted as a granular point (since it can be represented as a neighborhood of exact coordinate points), and all point operations of classical geometry are extended accordingly. Similarly, my garden hedge is interpreted as a granular line segment (since it can be seen as a neighborhood of line segments), and again, all traditional line operations of classical geometry are extended accordingly. Since a granular geometry is intended for use with volunteered geographic information (VGI), it supports the perspective on spatial granulation that emerges from participatory mapping projects such as OpenStreetMap. Consequently, a granular geometry for use in a granular GIS is intended to comply with at least four requirements:

  • The basic geometric data types should be extended to include position granules.

  • It should support granular geometric reasoning, i.e., all geometric operations that are available for exact coordinate points and lines in the traditional GIS data structure should be available for granular points and lines in the extended granular GIS data structure.

  • As in the exact case, spatial reasoning should be reliable (i.e., sound in the logical sense).

  • Conflicting information should be treated as higher order granularity.

In the literature, several approaches to defining a granular geometry have been proposed, yet these approaches mostly did not comply with all of the above requirements. Among the most prominent approaches within the GIS community are J. Perkal’s epsilon band model (Perkal 1956, 1966) and related approaches (Shi 1998, 2009; Buyong et al. 1997; Shi and Liu 2000; Shi et al. 2003a), where every point is associated with a zone of width epsilon around it, as well as Peucker’s ’Theory of the Cartographic Line’ (Peucker 1975), which postulates thickness as an intrinsic characteristic of cartographic lines. Other approaches emerged from the research field of fuzzy mathematics, such as the fuzzy geometries of Gupta and Ray (1993), Rosenfeld (1994) and Buckley and Eslami (1997a, b), and from the field of digital geometry, e.g., the ’Epsilon Geometry’ proposed by Salesin et al. (1989). Most of these approaches define geometric operations without accounting for soundness (i.e., reliability) or second order granularity. The issue of soundness has been approached by Roberts (1973) and Katz (1980), who proposed axiomatic approaches to the topic that are based on mathematical fuzzy logic (more precisely, on Łukasiewicz fuzzy predicate logic). Due to the formal logical basis, soundness of the theory can be formally verified. This is analogous to the case of exact, classical geometries (such as Euclidean or projective geometry) that are formulated as axiomatic theories, from which all geometric tests and constructors used in algorithmic geometry are derived.

4.1.2 The granular geometry framework

Fig. 4 A granular line connecting two granular points can be “more or less unique” (Wilke 2012)

Wilke (2015) proposed a granular geometry framework (GGF), which is an approach to establishing axiomatic theories of geometries that meet the above listed requirements. It is based on Lakoff and Núñez’s (2000) cognitive science of mathematics, and builds upon the central assumption that classical exact geometry is an idealized abstraction of a “real world geometry” perceived and used by humans in everyday life: A classical geometric statement may be wrong in this context, because humans use extended objects, i.e., information granules, in the role of basic geometric entities. In this manner, I may use the terms “my house” and “your house” as granular points and imagine connecting them by a (granular) line.

A classical geometric statement, such as “the line connecting two points is unique”, is usually wrong in such a granular context. Yet, if we take a closer look, we see that such a classical statement can be “more or less wrong”, depending on the relative sizes and distances of the involved granular points and lines, cf. Fig. 4 (Wilke 2012; Wilke and Frank 2010a, b). As a consequence of this observation, the GGF proposes to augment every classical geometric statement with a fuzzy degree of membership to the set of true statements. If only exact coordinate points and lines are involved, the membership degree is 1. If granular points and lines are involved, the membership degree is most often smaller than 1, and its value depends on the relative sizes and geometric configuration of the involved position granules. The result is a fuzzy geometry that consists of a fuzzy set of geometric statements. To perform spatial reasoning in this system, Fuzzy Logic with Evaluated Syntax (Novák et al. 1999; Gerla 2001) is used, which allows for syntactically propagating fuzzy membership degrees in logical theories. In the context of granular geometry, Fuzzy Logic with Evaluated Syntax is used as a similarity-based reasoning approach: The fuzzy degree of membership to the set of true statements is derived from a similarity measure for basic geometric objects and relations. E.g., for projective geometry, the basic geometric objects point and line, and the basic relations equality and incidence are sufficient. Put in other words, the fuzzy degree of membership is a measure of similarity to the truth, or truthlikeness measure (Godo and Rodríguez 2008a). The GGF assigns truthlikeness degrees to geometric statements in order to embed information about the intended granular model of the world in the syntax of the logical theory. As a result, a granular geometry in the sense of the framework is sound by design.
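How such degrees propagate through reasoning can be sketched as follows: in a Pavelka-style fuzzy logic with evaluated syntax over the Łukasiewicz t-norm, graded modus ponens derives B at degree max(0, a + r - 1) from A evaluated at degree a and A → B evaluated at degree r. The concrete degrees below are invented for illustration; they are not taken from Wilke’s calculus.

```python
# Minimal sketch (illustrative degrees) of degree propagation in fuzzy
# logic with evaluated syntax: under the Lukasiewicz t-norm, graded modus
# ponens infers B at degree max(0, a + r - 1) from A at degree a and
# A -> B at degree r.

def lukasiewicz_tnorm(a, b):
    return max(0.0, a + b - 1.0)

def graded_modus_ponens(degree_a, degree_a_implies_b):
    return lukasiewicz_tnorm(degree_a, degree_a_implies_b)

# A chain of granular geometric inference steps: each step can erode the
# truthlikeness, so long construction chains may become "too false".
degree = 1.0
for step in [0.95, 0.9, 0.9, 0.85]:  # hypothetical degrees of the axioms used
    degree = graded_modus_ponens(degree, step)
print(round(degree, 2))  # -> 0.6
```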

Fig. 5 Are the two granular points equal? (Wilke 2012)

4.1.3 Designing a granular geometry

The granular geometry framework proposes to augment classical geometry by degrees of similarity to the truth, or, more specifically, by degrees of similarity to what we model as the truth. In the case of granular geometry, we intend to define a geometric calculus that allows for sound reasoning with position granules, i.e., the calculus should describe the geometric behavior of position granules. For example, the statement “two points can always be connected with a unique line” describes the behavior of exact points and lines, but it does not describe the behavior of granular points and lines (cf. Fig. 4). In other words, the statement is not true in a granular context. Yet, we can study the geometric behavior of position granules, and this behavior constitutes what we model as “the truth” in the context of granular geometry. From the difference between “this truth” and the behavior described by the classical geometric statement, we may derive a measure of distance from the truth, or, conversely, a measure of similarity to the truth.

As an example, consider the two granular points illustrated in Fig. 5. Is the classical statement “The granular points P and Q are equal.” true or not? If it is not true, how close is it to being true? Here, we may think of different similarity or closeness measures that may be applied, such as, e.g., a measure that relates to the area of their overlap, cf. also Wilke (2009). To define a sensible similarity measure, two requirements should be met:

  1. The measure should allow for the classical geometric axioms to be not absolutely false, i.e., their truthlikeness degree should be greater than zero.

  2. The measure should be consistent with the modality of imperfection associated with the position granule when modeling “the truth” (Wilke 2015).

Fig. 6 Set distance used as a dissimilarity measure (Wilke 2012)

The first requirement stems from the fact that an absolutely false statement entails only absolutely false statements; thus, a fuzzy theory that contains absolutely false statements would be useless. To understand the second requirement, assume, for example, that the modality of imperfection is possibilistic. This is the case if position granule P is a location constraint for an exact coordinate point p (such as a park P that is a location constraint for my exact position p, in case I am in the park). Here, the modality of imperfection is possibilistic, because the position granule aggregates all possible exact locations. In this case, it is sensible to choose a similarity measure that reflects possible equality of the unknown exact locations. This is provided, e.g., by the following fuzzification of the overlap relation: If the two granular points overlap, the degree of possible equality is 1 (since it is true that the unknown exact locations are possibly equal). If the two granular points do not overlap, the degree of possible equality decreases with increasing set distance, cf. Fig. 6, i.e., the greater their distance, the more false the statement “P and Q are possibly equal”. As another example, we may consider a similarity measure that reflects veristic equality. This is the case, e.g., if the granular points P and Q represent solid objects in the world. In this case, the modality of imperfection is veristic, because the position granules are aggregations of all exact positions \(p\in P\) and \(q\in Q\), i.e., all p and q that belong to P and Q, respectively. In this case, another measure is a more sensible choice, such as the following: If the two granular points P and Q are equal (in the sense of set equality), the degree of veristic equality is 1 (since they occupy the same space). If they are not equal, the truthlikeness degree decreases with increasing minimax distance, i.e., the smaller their overlap, the more false the statement “P and Q occupy the same space”.
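The possibilistic measure just described can be sketched for disc-shaped position granules: the degree of possible equality is 1 if the granules overlap and decreases with their set distance otherwise. The linear decay and its scale are our own assumptions; Wilke (2012) defines the measures formally.

```python
# Minimal sketch of graded possible equality for disc-shaped position
# granules: degree 1 if the discs overlap (the unknown exact positions
# may coincide), otherwise linearly decreasing with the set distance.
# The decay function and its scale are illustrative assumptions.
import math

def possible_equality(p_center, p_radius, q_center, q_radius, scale=10.0):
    gap = math.dist(p_center, q_center) - (p_radius + q_radius)
    if gap <= 0:  # granules overlap
        return 1.0
    return max(0.0, 1.0 - gap / scale)  # set distance = gap between the discs

print(possible_equality((0, 0), 2.0, (3, 0), 2.0))  # -> 1.0 (overlap)
print(possible_equality((0, 0), 2.0, (9, 0), 2.0))  # -> 0.5 (set distance 5)
```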

Wilke (2012) defined similarity measures for the equality and incidence relations for possibilistic position granules and elaborated some fuzzy axioms for the real projective plane. Yet, a full-fledged theory of granular possibilistic projective geometry is the subject of ongoing work. Once such an axiomatic theory is established, fuzzy logic with evaluated syntax provides a simple way of propagating truthlikeness through the steps of geometric reasoning. Thereby, analogues to all classical geometric theorems can be derived and applied to real world geometric configurations.

Fig. 7 A granular point can more or less lie on a granular line (Wilke 2012)

The design of a granular geometry following the GGF readily allows for performing geometric tests, as they are often used in GIS spatial analysis. An example is the point-on-line test, where the algorithm checks if a given point lies on a given line segment. While in classical geometry this question has a yes-or-no answer, the answer in granular geometry is a matter of degree, cf. Fig. 7. Yet, as shown by Wilke (2012), deriving granular construction operators for algorithmic geometry is not possible, since constructions cannot be graduated. Every granular geometric configuration allows for a whole family of constructions, which can be interpreted as a second order position granule. As an example, consider again Fig. 4: While in classical geometry two (exact) points uniquely specify an (exact) line, in granular geometry two (granular) points specify a family of granular lines. This behavior stems from the additional degrees of freedom introduced by allowing size as a parameter of geometric points and lines, and from the graduated truthlikeness of geometric predicates. As a result, every concrete construction algorithm only instantiates one of several possible constructions, and often none of them fully complies with the theorems of classical geometry, but instead complies with them only to a certain degree. Yet, granular geometry can be used to evaluate (heuristic) granular construction algorithms for their truthlikeness. Such heuristics are usually easy to design, because they do not have to satisfy the requirements of soundness and of tolerance for conflicting information. Consequently, in practical applications, granular geometry serves as an error propagation calculus for heuristic algorithms. It can be used to notify a user if the results produced by such an algorithm become “too false” in long construction chains.

An example of a heuristic granular construction algorithm is given in Fig. 8: The algorithm generates a granular line L from two given distinct granular points P and Q. The granular line is constructed as the set of all exact lines that connect every exact point contained in P with every exact point contained in Q.

Fig. 8 A construction heuristic for the granular line connection of two granular points

To see that heuristic constructions usually do not comply with classical geometry, consider the following: In classical geometry, if three points lie on the same line, all lines constructed by connecting any two of the three points are equal (i.e., they result in the same one line going through all three points). This is different in granular geometry: The pairwise line connections are equal only to a certain degree. For the construction algorithm given in Fig. 8, Fig. 9a–c illustrates that the pairwise constructed granular lines (a) \(\overline{PQ}\), (b) \(\overline{QR}\) and (c) \(\overline{PR}\) are not equal. Their degree of equality depends on the similarity relation used to model graduated equality. Consequently, if we want to construct a granular line \(\overline{PQR}\) that connects three granular points P, Q, R, we cannot resort to the two-point line connection algorithm, as would be possible in exact geometry. Instead, we need to define a new heuristic construction operator for the three-point line connection. This can be done, e.g., by employing the set union operator over all involved exact lines, cf. Fig. 9d; a minimal code sketch follows below.
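The two heuristics of Figs. 8 and 9d can be sketched for position granules discretized to finite point sets: the granular two-point connection collects every exact line through a point of P and a point of Q, and the three-point connection takes the union over all pairs. This is our own minimal illustration under a discretization assumption, not Wilke’s implementation.

```python
# Minimal sketch of the construction heuristics of Figs. 8 and 9d for
# granular points discretized to finite point sets. An exact line through
# two points is stored in normalized implicit form a*x + b*y = c.
from itertools import product

def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = a * x1 + b * y1
    n = (a * a + b * b) ** 0.5
    return (round(a / n, 9), round(b / n, 9), round(c / n, 9))

def granular_line(P, Q):
    """Fig. 8: all exact lines connecting a point of P with a point of Q."""
    return {line_through(p, q) for p, q in product(P, Q) if p != q}

def granular_line_three(P, Q, R):
    """Fig. 9d: set union of the three pairwise granular line connections."""
    return granular_line(P, Q) | granular_line(Q, R) | granular_line(P, R)

P = {(0.0, 0.0), (0.0, 1.0)}     # tiny sampled position granules
Q = {(5.0, 0.0), (5.0, 1.0)}
print(len(granular_line(P, Q)))  # -> 4 exact lines form the granular line
```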

Fig. 9: Heuristic line connection (a–c) of two points, (d) of three points
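Continuing the sketch above, the three-point operator of Fig. 9d can be written as the union of the pairwise connections; again a minimal illustration, assuming the discretized representation introduced earlier:

```python
def granular_line_3(P, Q, R, n=16):
    """Three-point heuristic of Fig. 9d: the union of the pairwise granular
    line connections PQ, QR and PR. List concatenation stands in for set
    union over the sampled exact segments. Under a set-based reading of
    incidence, all three granular points lie on the result with degree 1."""
    return (granular_line(P, Q, n)
            + granular_line(Q, R, n)
            + granular_line(P, R, n))
```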

In the next section, we illustrate how granular geometry can be applied to collective intelligence amplification, using the granular calculus of granular geometry (Wilke 2015) in a cognitive cities use case.

4.2 A cognitive cities use case

In this subsection, we introduce a use case that applies granular geometry in a collaborative urban planning task in an envisioned cognitive city environment. Section 4.2.1 introduces the use case itself, and Sect. 4.2.2 shows how granular geometry can be applied to automatically generate a floor plan based on user suggestions.

4.2.1 Collaborative urban planning

For our example, we build on neighborhood initiatives as a model of bottom-up development of common urban space. In such a neighborhood, individual citizens can organize themselves through their involvement in the respective activities. Building on connected learning and cognition theories (Siemens 2006), this enables the citizens (as well as the neighborhood and the city as a whole) to learn bottom-up. Let us thus assume that the neighborhood has initiated a (spatial) planning process for re-landscaping a public place in the neighborhood. The intention is to collectively create a new park.

To this end, the citizens contribute their ideas on a democratic participatory platform, using natural language descriptions of the perceived geographic conditions. In a learning city, a discussion of these VGI-based contributions then takes place, both online and offline. Since these are (mostly) laymen's discussions, they include many imprecise natural language terms. Such linguistic descriptions of geographic information severely complicate the comparison of the citizens' different ideas. Nevertheless, this dialogue is very important for a sustainable development of a neighborhood in which everybody can feel at home.

In order to automatically support this collaborative decision-making process, and thereby allow for collective intelligence amplification as described in Sect. 2.2, a means of simple and intuitive human–data interaction must be available by which citizens can provide their ideas as input to a collective data base, e.g., via a natural language interface or via embodied interaction, such as interactive graphical visualization of their ideas for the landscape planning of their park. Granular geometric analytics algorithms that can handle the perception-based information granules provided by citizens can then automatically visualize the different proposals and suggestions in a granular GIS. Additional analytics algorithms may allow for comparing them visually and logically, perhaps clustering the different approaches, while pointing out the strongest commonalities and differences to the group in order to actively support collaborative decision making. Granular analytics may also be complemented with granular semantic inference algorithms that use external information (e.g., building regulations from available open government data, or lessons learned from similar endeavors in other cities published online) to point out to the group the regulatory constraints, the advantages and disadvantages of the different ideas, or additional possibilities that nobody in the group had thought of.

The resulting learning process, interwoven with supporting algorithmic analytics and inferences in an iterative and interactive intelligence amplification feedback loop, constitutes a cognitive computing approach: it allows the group to learn collectively, together with the supporting algorithms, in the course of a continual HDI, and thereby to arrive at better solutions (i.e., to become efficient, sustainable and, thus, resilient). Using granular geometric analytics algorithms in a granular GIS is one ingredient in this process. In the next subsection, we elaborate on the application of granular geometry in the context of this use case.

4.2.2 Automated map generation using granular geometry

Neighborhood meetings often take place as physical meetings. Since people usually cannot attend regularly due to other obligations, the group of attendees typically varies from week to week, a fact that strongly impedes the continuity of discussion threads and the collaborative planning effort. To address this problem, assume that an online discussion forum has been set up where attendees can state and discuss their ideas for the park design online.

Fig. 10: Community garden (a) without and (b) with added tree line

Suppose one user has started a thread about setting up a community garden in the park: “I always wanted to have a community garden in our neighborhood, and I imagine a good place for it between the tool shed, the children’s playground and the big oak tree. On the border around the garden we could plant a line of dwarf fruit trees.” In order to support the collective decision-making process, a visualization can now be generated automatically for the users of the forum: speech recognition software parses the sentences and extracts the geometric content encoded in them. A heuristic granular construction algorithm then constructs a granular triangle, cf. Fig. 10a. Here, it is sensible to use the least specific construction heuristic available, in order to allow for as many interpretations of the provided information as possible and thus to leave room for consistently incorporating facts yet to come, or for reconciliation with similar suggestions. For example, the construction heuristic used in Fig. 8 constructs a granular line L from two granular points P and Q as the set of exact line connections of exact points contained in P and Q.

Before publishing this result to the discussion forum, the algorithm tries to include all available information, which, in this case, also comprises a visualization of the line of fruit trees to be used as a border of the community garden. To do this, a web knowledge retrieval algorithm may look up the concept of a dwarf fruit tree in Wikipedia or a similar knowledge base and pass the geometric parameters of a typical dwarf fruit tree, such as, e.g., its typical diameter, to the granular geometry algorithm. The granular geometry algorithm in turn constructs a granular line of dwarf fruit trees from this information, cf. Fig. 10b.

In the construction heuristic above, we used the set union operator for the granular line connection, because it provides the least specific granular geometric configuration that maximizes the truth degree of the provided information: using the interpretation of incidence (i.e., the “lies on” relation) from Sect. 4.1.2, all three granular points lie on the heuristically constructed granular line with a truthlikeness degree of 1.

After a granular line segment of dwarf fruit trees has been constructed (based on assumptions on typical diameter, usual planting distance, etc. received from a web knowledge retrieval algorithm), it can be added to the temporary map of the community garden, as illustrated in Fig. 10b. Here, the idea of the user was that the fruit trees delineate the border of the garden, and granular geometry allows for automatically selecting the optimal placement accordingly. To do this, a placement is selected that, for each of the three border lines, maximizes the truthlikeness degree of the statement “The line of fruit trees is equal to the border line”, using the fuzzified veristic equality relation. Since the granular border of the garden is much wider than the granular line of fruit trees, there are several possibilities to place the fruit trees inside the border line that all produce a full truthlikeness degree of 1, and a semantic algorithm may decide which of them to choose based on additional information.
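The placement step can be sketched as a one-dimensional optimization: model both granular lines as parallel corridors, grade their equality by containment of the thin corridor in the thick one, and pick an offset between their centerlines that maximizes this degree. The ramp below is an assumed, illustrative equality measure, not the veristic equality relation of the granular calculus itself:

```python
def equality_degree(border_halfwidth, tree_halfwidth, offset):
    """Graded equality of a thin granular tree line and a thick granular
    border line, both modeled as parallel corridors whose centerlines lie
    'offset' apart. Illustrative overlap-based measure only."""
    slack = border_halfwidth - tree_halfwidth
    if abs(offset) <= slack:
        return 1.0  # tree line lies fully inside the border: full truth
    # linear falloff as the tree line leaves the border corridor
    return max(0.0, 1.0 - (abs(offset) - slack) / (2 * tree_halfwidth))

# every offset within +/- slack maximizes truthlikeness; a semantic
# algorithm may pick among these maximizers using additional information
offsets = [x / 10.0 for x in range(-20, 21)]
maximizers = [d for d in offsets if equality_degree(1.5, 0.3, d) == 1.0]
```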

Notice that the provided granular information is usually gradually conflicting, and that granular geometry treats it in terms of second-order granules. As an example, consider again the different possibilities to place the tree line and the border relative to each other: the set of models (i.e., implementations) of the statement “The tree line and the border are equal” constitutes a fuzzy set of geometric configurations for actually placing the two granular lines. In each configuration, the degree of membership of the statement in the set of true statements (the truthlikeness degree) depends on the degree of overlap of the two granular lines. It is only the heuristic construction algorithm that, in order to allow a concrete construction, selects one implementation contained in this second-order granule, namely one that maximizes truthlikeness.

Now that all information provided by the user is incorporated, the visualization of the proposed garden layout is published to the group. The visualization tool embedded in the forum contains an intuitive edit tool that allows users to correct the automatically generated visualization if necessary, e.g., by moving the line of fruit trees a bit to the north, thereby providing user feedback to the algorithm. In a subsequent forum entry, another user may suggest that a small pond could be placed in the center of the garden, and the granular geometry algorithm would automatically construct the incenter of the granular triangle, again maximizing the truthlikeness degrees of all provided information.
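For the pond suggestion, a crude stand-in for the truthlikeness-maximizing granular construction is to place the pond at the incenter of the triangle spanned by the centers of the three granular points. The incenter formula below is standard; its use as a proxy for the granular construction is an illustrative simplification:

```python
import math

def incenter(A, B, C):
    """Incenter of the triangle with vertices A, B, C (here, the centers
    of the three granular points), weighted by opposite side lengths."""
    a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

# e.g., assumed centers of the tool shed, playground and oak tree granules
print(incenter((0.0, 0.0), (6.0, 0.0), (2.0, 5.0)))
```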

5 Conclusions

We briefly introduced the newly emerging research field of human–data interaction. We argued that the HDI feedback loop described by Mortier et al. (2014) holds the potential for collective intelligence amplification if computer-generated data is not only legible for people, but intuitively legible. We identified three requirements for intuitive data legibility, namely the availability of intuitive representation methods, intuitive feedback methods and intuitive processing logic. We argued that the general computing framework of granular computing can provide the basis for such methods, as well as the necessary theoretical foundations to embed diverse methods and approaches in a single processing flow.

To exemplify the use of granular computing in HDI-based collective intelligence amplification, we first introduced the concept of a cognitive city as an application of intelligence amplification, as well as granular geometry (Wilke 2015) as an example of a granular calculus. We then discussed how granular geometry can support collaborative decision making in a collaborative urban planning use case situated in a cognitive city environment. Here, people as well as information are assumed to be connected in a digital space. Granular geometry allows for processing and possibly reconciling different user inputs that are formulated in an intuitive form in natural language, using extended physical objects as the atoms of many-valued geometric reasoning. In this way, an iteratively updated visualization of user suggestions and ideas is generated in the form of a floor plan that expresses the common ideas of the persons involved and allows them to react immediately with feedback.