Introduction

A closer engagement is required with Artificial Intelligence’s (AI) radically transformative potential to outpace human cognitive capabilities and to bring about technological and economic advances on unprecedented timescales (Bostrom et al. 2018). Advances in the capabilities and applications of AI systems have brought into sharper focus both risks and opportunities for society (Yang et al. 2018). For instance, progress in AI technologies is increasingly making them powerful decision-making tools; however, so far, their ability to capture the underlying logic and physical connotations of the problems they solve remains unclear (Guo et al. 2019; Silva et al. 2019).

There is an emerging body of research at the intersection of AI and individuals (Anderson and Rainie 2018), industry (Bolton et al. 2018; Hall and Pesenti 2017; Makridakis 2017), and society (Bostrom 2019; Cath et al. 2018). That AI is instrumental in shaping daily lives and key societal practices (Cai et al. 2014; Zheng et al. 2018) is apparent in mature information societies (Floridi 2016). As data and training remain core to AI algorithms and systems (Cath et al. 2018; McGovern et al. 2017), it becomes imperative to engage with and question the antecedents of what is known and how that knowledge influences existing sociotechnical systems, in order to imagine what community well-being would mean in practice. Artificial Intelligence-based innovation is largely powered by corporations, with some contribution from academia; the discussion around AI’s relevance to society highlights the need for objectives that weave in the social and political accountability and long-term planning necessary for a more egalitarian approach to benefits and opportunities (Cath et al. 2018). Within this frame of reference, the social construction of meaning gains relevance. Social meanings, socially and psychologically constituted, provide an understanding of a person’s state of well-being in specific social and cultural contexts, including the context of ‘living well together’ as a society influenced by social structures and institutions (Armitage et al. 2012; Deneulin and McGregor 2010). However, both meaning-making and the structures and institutions influencing it continue to represent values and ideas that threaten societal well-being and require a wider conceptualization. Understanding community as more than a sum of parts, and capturing subjective aspects of local life as something that is not limited to the individual but extends to the ways in which people feel well together, is not an easy undertaking (Atkinson et al. 2019).

Therefore, using social theories of the self as relational makes it possible to put relations ahead of subjectivity, opening the possibility of conceptualizing community well-being in terms of being well together. Such a relational approach to community well-being offers opportunities for engaging with complex societal interactions where issues are not limited to technology but take into account political choice over time and space (Atkinson et al. 2019). Relational approaches could enable well-being to be conceptualized in terms of a multiplicity of relations extending beyond people and across structures, affects, materiality, places, and so forth. This could demonstrate how the combination and assemblage of these relationalities generate ideas of identity, stability, and change for both individual and community well-being (Sung and Phillips 2018). A concept of assemblage that extends the understanding of how diverse aspects of life congregate at particular times and spaces is not easy to operationalize (Atkinson et al. 2019; Atkinson and Scott 2015; DeLanda 2016; Deleuze and Guattari 1988). As a tool, AI could be used for mapping and situating these multiple layers of relatedness. Making these layers visible would increase the possibilities for diverse ways of understanding well-being and let pathways emerge towards such concepts. This complex weaving requires a distributed organizational model, similar to a polycentric organization (Aligica and Tarko 2012), able to accommodate self-determinative and participative formats attuned to multiple and diverse ways of knowing.

Against the backdrop of AI development, this acquires critical relevance as access to the world of information is increasingly mediated by digital technologies with the aid of search-based AIs, owned and operated by a small cohort of companies with discrete understandings of the world (Mccarthy 2017; Waller 2016; Halford et al. 2013). Literature on the semantic web delves into questions related to the development and cementing of knowledge and classification systems, the processes through which such systems come to represent the world, and the management of controversies arising from such representations (Mccarthy 2017). In a semantic web environment, content is consumed and generated by machines as well as humans, and it represents a new level of abstraction from the underlying network infrastructure. It allows programmers and users to refer to real-world objects without concerning themselves with the underlying documents that describe them (Silva et al. 2019; Hendler and Berners-Lee 2010). This places AI technologies in a position where they contain and furnish representations of the world, and since we are not fully conversant with these technologies from a social perspective, it is important to understand how they are developed and encoded. For instance, Friedman and Nissenbaum (1996) have used actual cases to illustrate how preexisting, technical, and emergent biases take shape in computer systems.

Described as global socio-technical assemblages, AI systems and the diversity of their knowledge bases consist of ‘globally distributed material and expressive components that continuously attempt to affirm the assemblage’s identity and represent the world’ (Mccarthy 2017, p. 22), enabling certain understandings while excluding others. Such frames of understanding could become reinforced, and even conjured and normalized. The field of social informatics, for instance, has made explicit both tacit knowledge processes (creation, sharing, and management) and ignorance processes such as the denial and obfuscation of knowledge (agnotology) (Greyson 2019; Meyer et al. 2019). In business studies, the presence of the ‘wealth equals well-being’ construct has been held responsible for the dominance of a ‘business case’ cognitive frame within discourses related to sustainability (Painter-Morland et al. 2017; Hahn et al. 2015). A business case, from a practitioner’s perspective, is a bid for an investment in a project, idea, or initiative that promises to yield a suitably significant financial return to justify the investment (Carroll and Shabana 2010; Kurucz et al. 2008). This further constrains the broader understanding of well-being.

These issues are pertinent as there is growing awareness that science itself is in crisis. The very foundations of knowledge acquisition appear to be plagued by reductionist-deterministic perspectives (Nadin 2018). Further, the present planetary problems being experienced on multiple scales are a result of a reductionist worldview and demand approaches beyond the traditional and hierarchical (Fiorini 2019). The shortcomings of reductionism have given rise to approaches that incorporate complexity, where emergence is a foundational premise (Chorafakis 2020). As well-being requires a combination of multi-disciplinary perspectives (Zevnik 2014; McGregor 2007), complexity and emergence become implicit in its framings. The potential for AI as a tool for capturing such complexity is encouraging. However, given that AI and related technologies are often characterized as disruptive owing to their novelty and the lack of practical experience in deploying them, using AI for community well-being will require reframing the uncertainty associated with AI as a resource instead of a problem (Fiorini 2019).

There is a stream of literature currently dealing with the development of laws, rules, standards, and best practices for ensuring socially beneficial AI (Floridi and Cowls 2019), but the underlying knowledge frameworks that inform our understanding of community well-being from the perspective of technological innovation-driven economic growth remain underexplored. As Musikanski et al. (2018) indicate, marketplace relevance driven by economic imperatives has so far dominated conversations in the AI universe, but both corporations and policy makers recognize that well-being could be a greater measure of value than economic indicators alone. Capturing such metrics of well-being will require scrutinizing how well-being is framed and understood, a complex process given the myriad relationships and connections through which well-being emerges. For creative and innovative policy solutions, Colander and Kupers (2016) have pointed towards the possibility of embracing complexity, which allows society to be envisioned as a complex evolving system. Such a system, they argue, cannot be controlled but can be influenced by channeling social instincts, where profits become tools for solving societal problems instead of goals. As the complexity frame shifts the focus away from efficiency towards resilience (Colander and Kupers 2016), the conceptual logic of the frame needs to change as well.

Such a conceptual logic needs to take into account that information is the sole content of thought, and that thinking is, at its core, the processing of information (Adkins 2019; Floridi 2013). However, as Floridi (2017) argues, a conceptual logic of information that is steeped in analyzing the structural properties of a given system ceases to be a logic for design. Therefore, a framework for reimagining well-being requires a conceptual logic of design (Floridi 2017) as a logic of requirement, one that takes into account what needs to change. The ultimate goal of this article is to suggest such a framework and to illustrate the ways in which some structural properties of the current system need to be uncovered in order to design a framework that can leverage AI as a resource for community well-being. The first step is to uncover some of the dominant discourses related to technological innovation-led growth, as this growth is implicit in framings of well-being.

The Dominant Discourses of Technological Innovation-Led Growth

Discourses and accounts about how technological innovations develop often neglect the social interplay and arrangements that contribute to their inspiration, production, normalization, and use. For instance, within business ecosystems, network pictures are often used as strategic tools by managers to understand how new technologies influence relationships within business networks (Abrahamsen et al. 2016; Hopkinson 2015; Laari-Salmela et al. 2015; Möller 2010). The network picturing process reveals managerial decisions on interactions, mobilization, and the influencing of other networks. However, it fails to take into account the underlying assumptions that drive managerial decision-making. For instance, DesJardine and Bansal (2019) have illustrated how a negative outlook on organizational performance shortens managerial time horizons; the fact that this negative outlook mainly relates to financial performance is taken for granted.

An instrumental logic that encourages managing social and environmental issues separately from financial ones reinforces the tension between business and society (Gao and Bansal 2013). It could also limit managerial sense-making capabilities, as the limited perspectives manifested in network pictures prevent managers from linking how social and environmental factors impact traditional strategic issues such as investment in innovation in product or service design. A logic encompassing the social and environmental systems that firms are embedded in could enable an integrative process capable of embracing the tension between economic, social, and environmental elements of the system (Gao and Bansal 2013). To facilitate this, the predominance of value priorities that equate well-being with wealth needs to be challenged, while exploring mindsets that allow for a more comprehensive and expansive understanding of what well-being entails (Painter-Morland et al. 2017). This has led organizational scholars to question the very purpose of theories of management and to call for ongoing conversations that encourage a human turn in management. Such a turn, they argue, will prevent an algorithmic replication of ideas and practices that act as instruments of optimization and alienation (Petriglieri 2020).

Similar discussions within the field of science and technology studies (STS) have attempted to highlight the interplay of economic, social, and environmental elements while resisting technological determinism. They have engaged with framings that explore how machines are made by humans and how powerful institutions decide which technologies are worth investing in (Jasanoff 2015). Studies from this field show how discourses and accounts often crowd out alternative visions and plot a model of the future that reflects and reinforces existing problems and biases of socio-economic and socio-political systems. For instance, Sadowski and Bendor’s (2019) study of IBM and Cisco’s work on smart cities uncovers a narrative that tries to fit, and then subsequently sell and disseminate, different ideas and initiatives into a single coherent view of smart urbanism. Benjamin (2016) has combined STS and critical race theory to propose an expansive understanding of health and safety as forms of classification and control, in an environment of science and technology where subjugation is never an explicit objective. These perspectives echo Bostrom’s (2019) call for a deeper engagement with assumptions that frame all technological progress as beneficial, for examining the efficacy of complete scientific openness, and for exploring whether our societies have the requisite capabilities to deal with the potential downsides of a technology already invented.

Drawing from STS, research on the transition to sustainable energy reveals how and why singular narratives emerge and become the norm. It illustrates how the incumbent energy regime is organized through socio-technical configurations of technological artifacts, market structures, user practices, regulatory frameworks, and cultural and scientific meanings (Fouquet 2016; Geels 2004). Such configurations privilege a certain narrative that places well-being within the realm of economic growth. These insights highlight how the scientific and technological developments driving industrial society have resulted in wealth creation yet have also actively contributed to global ecological degradation and social inequality (Schot and Kanger 2018; Kanger and Schot 2019).

However, to understand technological change, the analysis should be directed within the context of the social structure where such change takes place (Castells 2002). The current growth-based and market-driven economic system that dominates our societal structures is at the heart of the interactions between the modes of production and modes of development that are instrumental in the generation of new social and spatial forms and processes (Castells 2002). Evidence points towards how the system contributes to environmental degradation, perpetuates unequal access to resources and knowledge, and is instrumental in aggravating global financial crises, thus threatening social and community well-being as well (Matthey 2010; Stiglitz et al. 2010). The social and spatial forms and processes emerging from this system work towards institutionalizing its ideas, beliefs, and norms, thereby reinforcing these problems. This has led to studies critiquing the incumbent system as well as offering new transition pathways.

For instance, Rockström et al. (2009) have identified nine critical boundaries essential for maintaining the planetary biosphere: climate change, ocean acidification, stratospheric ozone depletion, disruption of the nitrogen and phosphorus cycles, global freshwater use, land use changes, biodiversity loss, aerosol loading in the atmosphere, and chemical pollution. Based on these, Raworth (2012) contends that any vision for development needs to remain within these boundaries. Raworth proposes that resources should be mobilized to improve social indicators and identifies 12 social priorities: food, health, education, income and work, water and sanitation, energy, networks, housing, gender equality, social equity, political voice, and peace and justice. Terming this a safe and just space, Raworth conceptualizes the objective as fitting into a ‘doughnut’ where the planetary boundaries form the outer border and the social foundation forms the inner core. However, building on Raworth’s model, O’Neill et al. (2018) have questioned whether fitting into the doughnut is even possible given the relationship between social performance and resource use. Based on the current relationship between resource use and human well-being, meeting the basic needs of all people would result in humanity breaching multiple limits. This suggests that our systems need restructuring for these needs to be met at a much lower level of resource use, implying that the very idea of growth and its related assumptions, including those about well-being, need deeper scrutiny.

The field of degrowth actively discusses and debates the role of technology. Some see potential threats from certain technologies to human societies (Samerski 2018; Andreoni and Galmarini 2014) and call for refraining from technology (Heikkurinen 2018), while others consider some technologies beneficial for democratization and for facilitating alternative forms of production and consumption (Rommel et al. 2018; Bradley 2018). Some raise concerns related to biophysical limits (Bonaiuti 2018; Gomiero 2018), and some want to appropriate technology for different purposes (Likavčan and Scholz-Wäckerle 2018). These discussions are relevant and important because they highlight and comment on the traditional idea of ‘good living’ that is tied to GDP (gross domestic product) growth policies. They urge a rethink of economic policies and of public discourse in general, which continuously repeat the need for GDP growth and thereby perpetuate social addiction to it (Hickel 2019).

As human society transitions from an industrial mode of development towards an information mode of development, the distribution and processing of information remain a constant challenge (Hayek 1945), but the source of productivity lies in the quality of knowledge (Castells 2002). The role of AI in the information mode of development is important, as its ability to substitute, supplement, and/or amplify almost all mental tasks (Makridakis 2017) could have profound implications for the quality of knowledge. Knowledge is implicated in all modes of development, be it agrarian, industrial, or informational, as the level of knowledge determines the process of production. However, in the informational mode, knowledge mobilizes the generation of new knowledge to prompt higher productivity (Castells 2002). The generation of new knowledge becomes the key source of productivity through its impact on the other elements of the production process and their relationships. Even if well-being may be a determinant of higher levels of productivity, the way such productivity is pursued could potentially undermine well-being (Isham et al. 2020).

It is against this backdrop that AI’s role in community well-being needs to be understood. Its ability to contribute will depend on the knowledge framework used for its deployment, as learning is at the core of intelligence (Minsky 1961). Emotions, intuitions, and feelings are not distinct things but ways of thinking that take the form of carefully reasoned analysis at times while turning to emotions at other times (Minsky 2007). Recognizing and embracing these ways of thinking also requires rich and varied experiences (data) that could then enable us to build intelligent tools. These tools could assist in making decisions capable of embracing the uncertainty inherent in futures where the objective is to create individual well-being that is in accord with the communities such individuals are embedded in.

Conceptualizing Community Well-Being

Well-being as a concept is an objective of development, in addition to being an approach to developing an understanding about how people perceive the idea of ‘living well’ (Atkinson et al. 2019; Armitage et al. 2012; Copestake 2008; Gough and McGregor 2007). A social conception of well-being allows for the individualistic and basic needs aspects to exist within a wider social-psychological and cultural needs perspective of living well (Deneulin and McGregor 2010; Coulthard et al. 2011). Taking into account three dimensions – material, relational, and subjective – that reflect both the development and the social psychology perspectives, this idea recognizes human well-being as an outcome and a process (Armitage et al. 2012).

The three dimensions help in understanding how the different facets of a ‘life well lived’ come together to conceptualize well-being, not just as an objective to be desired but also as an analysis of the elements that drive our choices and behavior, indicating what makes us thrive. The material dimension accommodates the physical and financial assets essential to well-being, and the relational dimension emphasizes interactions that could take into account reputation, sense of community, and reciprocity. The subjective dimension centers on contentment and the sense of happiness that is part of everyday as well as long-term decision making. The three dimensions influence individual and collective behaviors and are instrumental in capturing well-being at different scales (Coulthard 2012).

However, negotiating the multiple definitions, measurements, and often hidden assumptions underpinning the acts of being individual and collective is a complex and theoretically challenging process that can impede the conceptualization of the complex relationships pertaining to interior life, the self or relational selves, and the external environment (Atkinson et al. 2019; Allin and Hand 2017). Against this backdrop, without explicit recognition of the assumptions that drive the operationalization of these interactions, their impacts will remain under-specified. Community well-being centers on an understanding of community and on fulfilling the needs and desires of its members (Sung and Phillips 2018). Therefore, theorizations that focus on relationality enable a notion of community that is greater than the sum of its parts and highlight neglected aspects of community well-being. The complex inter-relationships that characterize how lives are lived in relation to other people, places, materiality, and so forth enable understandings of community well-being that derive meaning and acquire importance locally as well as through a wide range of interactions.

Within the policy domain, this approach has urged a focus on how aspects of the local community impact individual well-being and on the quality of collective life as relational. Sung and Phillips (2018) have contended that community well-being premised on the autonomous, individual subject rather than on relational aspects leads to an impoverished understanding of what it means to be human and, more significantly, obscures the processes through which lived lives are differentiated. For greater transparency and awareness about the positions that contribute towards operationalizing community well-being, relational aspects offer a wide range of economic, social, environmental, political, and cultural dimensions that could uncover how communities are governed and make sense of their environments. Therefore, this conception of well-being derives from the social needs of individuals and communities, and recognizes the dynamism, multidimensionality, and variability of human development and quality of life.

AI as a Resource for Community Well-Being

Buchanan (2005, p. 53) describes the history of AI as one of ‘fantasies, possibilities, demonstration, and promise’. Influenced by disciplines such as engineering (cybernetics, including feedback and control), biology (neural networks in simple organisms), experimental psychology, communication theory, game theory, mathematics and statistics, logic and philosophy, and even linguistics, AI has grown beyond these disciplines and in turn influenced them (Buchanan 2005). From Simon’s ‘satisficing’ as the fundamental principle of AI (Rainey 2001), which looked to heuristics as a way of solving problems in the absence of an effective method for decision making, to Minsky’s (1961) search for effective techniques for learning, AI has evolved over time.

So far, AI’s accomplishments have skewed mainly towards automating tasks associated with intelligence, without being intelligent itself. For instance, it has been observed that researchers applying deep neural networks for modelling limit their focus to the inputs and outputs, while the models themselves remain ‘black boxes’ (Guo et al. 2019, p. 926). The neural architectures are designed based on the experiences and intuition of researchers, who often fail to link the problems to their physical backgrounds (Guo et al. 2019). Therefore, AI’s progression towards deep learning warrants a better understanding of complexity, along with the ability to distinguish between the reactive nature of AI and the anticipatory nature of living intelligence (Nadin 2019). This is evident in the recent trajectory of research focusing on reassessing existing knowledge systems and new ways of understanding them (Geva et al. 2019; Guo et al. 2019; Sap et al. 2019; Wang et al. 2019b; Nadin 2018). The impact of AI on social, socio-economic, and environmental dimensions (Stanovsky et al. 2019; Schwartz et al. 2019), and how it can bring about social change (Abebe et al. 2020), is gaining relevance as well. Collectively, this literature indicates the importance of considering the contexts in which such technological tools should be deployed, as they determine the need for, the design of, and the effectiveness of such tools; better understanding of these contexts, in turn, leads to better designed tools. For instance, data-based discrimination has been proven to be a reality for millions when algorithms try to predict and prioritize outcomes that affect basic human rights and imperil economic equity (Eubanks 2018; Noble 2018).

The ethical dimensions of AI, as it becomes the preferred option for efficient service delivery, are proving to be significant. On the surface, such a shift towards automated and algorithmic tools for determining eligibility and providing services might appear positive and efficient, as direct interactions always carry the risk of individuals’ internal biases or poor work culture (Lipsky 2010). Yet this does not reduce prejudice; on the contrary, it builds bias into the system, with complete reliance on stored information and predictive algorithms producing results that are more difficult to scrutinize (Eubanks 2018). Friedman and Nissenbaum (1996) identified three categories of bias (preexisting, technical, and emergent) while analyzing actual cases. They proposed that preexisting bias is rooted in social institutions, practices, and attitudes, while technical bias is a result of technical constraints, and emergent bias arises in a context of use.

The advances in AI as it increasingly comes under the rubric of machine learning, where computers are programmed to learn from experiences and examples (Agrawal et al. 2017), open it up to a range of contexts. The contexts in which these experiences occur and the examples are recorded offer a more layered understanding. For instance, AI can significantly improve community well-being by aiding farmers to adapt to climate change, predicting disease outbreaks, and making congested urban centers more livable, among other things. However, it can also impinge on privacy, surveil and repress marginalized communities, lead to loss of jobs, or trigger an arms race (Bostrom 2019). To extrapolate from Phillips and Wong (2017), well-being is influenced by, and evolves through, a number of issues and constraints that are part of a dynamic social context, and AI needs to take these issues and constraints into account for developing a decision intelligence system for community well-being. However, data scientists and programmers often have limited experience or knowledge of these complex interactions, and even as they design user-friendly digital tools, the validity of such designs within diverse contexts remains under-explored. These systems, though nonhuman and automated, are created with specific goals in mind and are often laden with the values of those designing them.

Within this frame of reference, it is critical to acknowledge the complex interactions that are intrinsic to the very conception of well-being and to recognize that the problems threatening well-being emerge from those very interactions. What we know is the outcome of our learning. Human evolution itself is in some ways the manifestation of how learning supports life, as it changes continuously (Nadin 2018). What we choose to learn indicates our anticipated action; therefore, when the knowledge acquired is meaningful it reinforces life changes, and when inappropriate, undermines them (Nadin 2018). The kind of knowledge, and the way it is understood and presented, is core to the decision-making process. This capability is intrinsic to how AI systems could become a powerful resource for effective decision-making that would contribute towards a more diverse understanding of well-being.

Conceptualizing AI as a resource demands acknowledgment that problems emerging from complexity require solutions that are framed through a collective understanding, by applying rules governing complexity, with a variety of tools that are able to address such problems at their respective scales (Gatzweiler 2020). As such emergent challenges are beyond the problem-solving capabilities of individuals, the resourcefulness of AI could come into play by uncovering the relatedness of our socio-technical systems, and enable us to use the knowledge to design context-specific solutions that contribute to our well-being. The design of such a framework needs to be non-linear and dynamic to accommodate intelligence as it evolves, and in doing so requires incorporating both the reactive nature of AI as well as the anticipatory nature of living intelligence (Nadin 2019).

A Framework for Leveraging AI for Community Well-Being

Societies are random, complex, and dynamic multiscale systems composed of actors with varied and diverse interactions, experiences, and knowledge that emerge from sensitivities to, and sense-making of, initial conditions. These combine in different ways, leading to decisions that introduce aspects of unpredictability, chaos, and non-linearity, yet can also exhibit self-organization. A framework for leveraging AI for community well-being needs to recognize and subsequently build in tools that can learn from these phenomena in order to offer intelligent decision-making models within diverse contexts.

The first step in designing such a framework would be to draw upon the definitions of well-being and to come up with one that best matches a particular localized context. The relational aspects could be captured through a combination of insights that take into account different aspects of AI and related technologies and the socio-technical, political, economic, and ecological aspects that influence well-being within diverse contexts. This is important for negotiating and conceptualizing well-being in diverse contexts.

Corea’s (2019) AI knowledge map (AIKM) (see Fig. 1 below) can be a good guide for thinking about how this data could be categorized and organized. It presents a general understanding of the various AI tools available for solving problems. The AI paradigms (X-axis) are approaches used by AI researchers for solving specific AI-related problems, and the AI problem domains (Y-axis) are the types of problems AI has been able to solve until now. The AIKM is an effort to help access knowledge on AI and is a useful tool for designing solutions that target specific problems within their contexts. It could serve as an inspiration for categorizing disparate data sets. The model could be seen as an open floor plan that allows individuals with different needs to design their own solutions while remaining aware of how these solutions relate to others.

Fig. 1 Corea (2019) AI Knowledge Map: How to Classify AI Technologies. In: An Introduction to Data (pp. 25–29). Studies in Big Data, vol 50. Springer, Cham

The different classes of AI technologies represented in Fig. 1 above are clustered into groups, each representing the activities they perform. The nature of the problem determines the technology, or rather the mix of technologies, deployed. However, as discussed earlier, even as these technologies are being deployed, there is a lack of understanding of the new kinds of problems such technologies create in solving existing ones. As these technologies are just tools, the concerns relate to how they are being used and could be addressed by developing data structures that are representative of diverse knowledge frameworks. The potential capabilities of the AI technologies represented within the AI paradigms become valuable tools for capturing the relevant data representing the complexity of interactions and relationships through which the idea of well-being is framed within diverse contexts. These frames could offer opportunities for combining the tools depending on the need and context. For instance, the logic-based tools used for knowledge representation and problem solving could be made more effective by data inclusiveness that adopts wide and diverse perspectives taking into account the material, relational, and subjective aspects of well-being. This could have consequences for the AI problem domain of perception and reasoning, and subsequently for knowledge, planning, and communication.
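
As an illustration of how such a categorization could be operationalized, the minimal sketch below tags a handful of tools against the AIKM’s two axes so that a mix of technologies can be retrieved for a given problem domain. The tool names and axis labels are illustrative placeholders inspired by, but not taken verbatim from, Corea’s (2019) map.

```python
# A minimal sketch, not Corea's (2019) actual taxonomy: hypothetical labels
# illustrating how tools might be tagged along the AIKM's two axes so that a
# mix of technologies can be looked up for a given problem domain.
from dataclasses import dataclass


@dataclass(frozen=True)
class AITool:
    name: str
    paradigms: tuple[str, ...]        # X-axis: e.g. "logic-based", "probabilistic"
    problem_domains: tuple[str, ...]  # Y-axis: e.g. "reasoning", "perception"


CATALOGUE = [
    AITool("expert system", ("logic-based",), ("knowledge", "reasoning")),
    AITool("bayesian network", ("probabilistic",), ("reasoning", "planning")),
    AITool("convolutional network", ("machine learning",), ("perception",)),
    AITool("genetic algorithm", ("search and optimization",), ("planning",)),
]


def tools_for(domain: str) -> list[AITool]:
    """Return every catalogued tool whose problem domains include `domain`."""
    return [t for t in CATALOGUE if domain in t.problem_domains]


if __name__ == "__main__":
    for tool in tools_for("reasoning"):
        print(tool.name, "->", ", ".join(tool.paradigms))
```

The point of such a structure is not the specific labels but the ‘open floor plan’ idea: different users can query the same catalogue for their own problems while seeing how their chosen tools relate to others.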

A diversity of perspectives would enable a rich and layered representation of knowledge while unveiling the connections and relationships embedded in these perspectives of well-being. Continuously incorporating such diverse perspectives in the design of the data structures that the algorithms depend on would strengthen the tools and transform the AI technologies into valuable resources. Ontologies and databases of notions, information, and rules that inform the knowledge-based tools with richer, more layered representations would help design better probabilistic tools for scenarios with incomplete information. This would make AI systems resilient, in addition to being efficient. Richer and more diverse data would allow for better search and optimization, and more robust machine learning. This could enable a diverse and holistic understanding of well-being. The diverse knowledge or data sets become the ingredients for visualizing different scenarios or pathways. The data set is the source of inspiration, as it brings together various forms of knowledge that could be mined based on the insights that evolved in step one.
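
One way such data structures could keep diverse knowledge frameworks explicit is sketched below: a record type that holds the material, relational, and subjective dimensions, along with the provenance of each value, rather than collapsing everything into an opaque feature vector. The field names and figures are hypothetical assumptions, offered only to show what such a schema might look like.

```python
# A minimal sketch with hypothetical field names: one way to structure a
# community data record around the material, relational, and subjective
# dimensions of well-being, keeping the source of each value visible.
from dataclasses import asdict, dataclass, field


@dataclass
class WellBeingRecord:
    community_id: str
    material: dict[str, float] = field(default_factory=dict)    # e.g. income, housing
    relational: dict[str, float] = field(default_factory=dict)  # e.g. trust, reciprocity
    subjective: dict[str, float] = field(default_factory=dict)  # e.g. life satisfaction
    provenance: dict[str, str] = field(default_factory=dict)    # where each value came from


record = WellBeingRecord(
    community_id="example-district",
    material={"median_income": 31200.0, "housing_adequacy": 0.72},
    relational={"community_trust": 0.61},
    subjective={"life_satisfaction": 6.8},
    provenance={"median_income": "hypothetical census extract"},
)

print(asdict(record))
```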

Once this knowledge discovery process is underway, one can look at descriptive analytics (making sense of historical data) and/or diagnostic analytics (what factors influence thinking, behaviors, and events). Depending on the needs and requirements of particular communities, one can use statistical inference to make decisions about designing models for well-being. These decision models could be used as examples for learning about the models and predicting outcomes within diverse scenarios. As the data becomes richer, the models will become more sophisticated.
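
A minimal sketch of what these two analytic steps might look like in practice is given below, assuming a small tabular data set; the column names and values are hypothetical placeholders rather than real community data.

```python
# A minimal sketch of the descriptive and diagnostic steps using pandas.
# The columns and figures are hypothetical placeholders, not real data.
import pandas as pd

df = pd.DataFrame({
    "community": ["A", "A", "B", "B"],
    "year": [2019, 2020, 2019, 2020],
    "median_income": [30500, 31200, 27800, 27400],
    "life_satisfaction": [6.5, 6.8, 6.1, 5.9],
})

# Descriptive analytics: make sense of historical data per community.
summary = df.groupby("community")[["median_income", "life_satisfaction"]].mean()
print(summary)

# Diagnostic analytics: a first, crude look at which factors move together.
print(df[["median_income", "life_satisfaction"]].corr())
```

Simple summaries and correlations of this kind are only a starting point; the point of the step is that whatever inference follows is grounded in data whose dimensions and provenance remain inspectable.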

Fig. 2 below is a conceptual view of AI as a resource for developing models for well-being. It takes into account the social conception of well-being as an emergent and scale-sensitive interplay of the objective (circumstances shaped by material and relational dimensions) and the subjective (values and perceptions) dimensions of agency and capabilities (Coulthard 2012). This conceptualization makes it possible to make sense of the data sets and their relatedness. The steps presented draw inspiration from articles discussing decision-making by AI and data science practitioners and from the literature reviewed for this article. There is growing recognition among AI researchers of the importance of explaining what lies behind the algorithms, both to provide evidence to support the decisions being made by the algorithms and to identify biased correlations. Coulthard’s (2012) conceptualization captures the complex negotiations that the idea of well-being embodies, which can then serve as important guidelines for understanding the data with the AI tools and, in the process, make the tools more robust.

Fig. 2 A framework for leveraging digital intelligence for community well-being, derived from Coulthard (2012). Can we be both resilient and well, and what choices do people have? Incorporating agency into the resilience debate from a fisheries perspective. Ecology and Society, 17(1)

The first step uses data to improve the performance of the AI tools. The tools, as we know, are designed to support and expand human cognition and, in some cases, even replace it. The AI paradigms and the tool clusters described in Fig. 1 offer a general understanding of their growth in processing power and accuracy in handling different levels of complexity. This could be understood as algorithmic complexity. However, as discussed in the previous sections, there is a growing consensus on the need to understand how these algorithmic decisions are arrived at and, more importantly for the purpose of this article, to gain a clear understanding of the content that these decision models draw upon, or the content’s semantic interpretability. For designing community well-being models, the framework represented by Fig. 2 gains relevance, as it draws upon diverse data sources pertaining to social, technological, economic, and ecological factors that influence well-being within material, relational, and subjective dimensions.

In the next step, this data could be used for descriptive and diagnostic analytics to create a rich and layered understanding of well-being. The AI tools could become critical resources in understanding the complex relationships, for instance, between income levels, education, health, and general quality of life, within layered historical social, cultural, and political contexts. Such perspectives would contribute towards designing better decision intelligence models for well-being and towards learning continuously through the experiences of these models, as represented by steps 3 and 4 in Fig. 2. Even though the steps are described in a linear form, in practice this is a non-linear process, as sketched below. As the data sets acquire further layers through experiences, feedback mechanisms enrich the process of learning and the outcomes.
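
The sketch below illustrates, under loose assumptions, how such a non-linear feedback loop could be organized: each cycle enriches the data with new experience, re-runs the descriptive analysis, and derives a placeholder decision model. The function names and toy computations are illustrative only and do not represent a prescribed implementation of the framework in Fig. 2.

```python
# A minimal sketch of the feedback loop behind Fig. 2: each pass enriches the
# data, recomputes a simple summary, and folds community feedback into the
# next iteration. Function names are illustrative, not a prescribed API.

def enrich(data: list[dict], feedback: list[dict]) -> list[dict]:
    """Step 1: fold new observations and community feedback into the data."""
    return data + feedback


def analyse(data: list[dict]) -> dict:
    """Step 2: descriptive/diagnostic summary (here, just simple averages)."""
    keys = {k for row in data for k in row}
    return {k: sum(row.get(k, 0.0) for row in data) / len(data) for k in keys}


def decide(summary: dict) -> dict:
    """Steps 3-4: derive a (toy) decision model from the summary."""
    return {"priorities": sorted(summary, key=summary.get)}


data: list[dict] = [{"life_satisfaction": 6.5}, {"life_satisfaction": 6.1}]
for cycle in range(3):  # in practice this loop never "finishes"
    feedback = [{"life_satisfaction": 6.0 + 0.1 * cycle}]  # stand-in for lived experience
    data = enrich(data, feedback)
    model = decide(analyse(data))
    print(f"cycle {cycle}: {model}")
```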

Discussion

This article suggests that designing a framework for leveraging digital intelligence for community well-being would require conceptualization of intelligence in terms of what it does (functionality) as well as where it belongs (context), and capturing how the two elements interact. Such a conceptualization classifies intelligence as a dynamic and evolving process, making the design process an emergent one.

The AIKM’s Y-axis (problem domain) corresponds to what AI can do and the X-axis (paradigms) corresponds to the contexts where specific problems belong or are located and experienced. In order to leverage digital intelligence – an interplay between functionality and context – for community well-being, the levels of analysis will need to take into account the dominant discourses related to technological innovation-led growth and scrutinize how well-being is framed within these discourses. The framework illustrated in Fig. 2 borrows from Coulthard’s (2012) perspective on well-being through material, relational, and subjective dimensions, as it lends a systemic and relational understanding of well-being. It then suggests deploying AI tools for organizing diverse data sources pertaining to social, technological, economic, and ecological factors that influence well-being. This data is further enriched by building contexts and in doing so it can potentially reveal discourses that limit or restrict well-being and address the lack of social and political accountability and long-term planning that has been missing from the largely corporate-driven development of AI (Cath et al. 2018).

Take, for instance, a case where a leading online retailer was found to be discriminating against a certain community of customers regarding delivery options (Ingold and Soper 2016, April 21). A closer look revealed that it was not just the retailer’s algorithmic bias favoring customers who buy more, but also a historical factor that placed the discriminated community in a postcode not known for economic affluence. Relying on decisions based purely on economic objectives, as is the norm, can lead to unfair discrimination. Therefore, for this analysis to be effective, it is important to identify, recognize, and call attention to some of the fundamental issues plaguing our knowledge systems and frameworks. There is a crisis in how we conduct science, in terms of how knowledge is acquired, validated, and shared. For instance, diverse ways of understanding are lost in the pursuit of a homogeneous reality (Nadin 2018). Research from the field of social informatics is uncovering the significance of portions of knowledge production and consumption migrating online. The internet’s ability to allow the distributed and shared production of knowledge has expanded the scope and scale of research using digital materials, yet it has also reconfigured the ways knowledge is created across disciplines (Meyer et al. 2019). Social informatics, in the era of distributed knowledge, offers ways to understand the many aspects of knowledge production, distribution, and use. In doing so, it has revealed how intelligent agents could potentially reconfigure work, erode trust in technology, reduce privacy, and create social detachment. Evidence of intelligent agents’ ability to manufacture computational propaganda, as a social and technical phenomenon, has come into focus recently, and this could have serious consequences for community well-being. This reinforces the need for deeper scrutiny of the ideas, norms, and practices that define the underlying logic of our current system, and of the relationships that reinforce certain value systems.

The logic baked into the basic characteristics of the technological paradigm has fundamental social consequences, as semantic data relies on the definition of terms, properties, and relationships (Mccarthy 2017). Semantic interpretability is a requirement for any approach dealing with AI and should be relatable to a system’s output as well as its architecture (Silva et al. 2019). Such technologies are embedded in a broader production and organizational system that has social roots and whose development is in turn fueled by these technologies (Castells 2002). The emergent complex interacting system that is giving rise to a new mode of development could potentially influence social and cultural mores in significant ways. The expanded role of information and its ability to influence and transform social relationships needs serious consideration. Hayek (1945) argued that the distribution and processing of information is a central problem in economics and urged for building an understanding of how markets distribute and process information. His solution was to move away from centrally planned economies. Following this solution and giving primacy to markets in all aspects has led to a concentration of market power, followed by discourses of economic growth that perpetuate this power.

With AI being developed and deployed by a few powerful companies geared towards profits, it is critical to start questioning the underlying assumption that drives well-being, that of endless economic growth. As this kind of growth implies a constantly increasing rate of energy and material demand, resulting in increasing rates of resource depletion and environmental damage (Andreoni and Galmarini 2014), the efficient deployment of AI technologies itself could emerge as an issue with consequences for community well-being. For instance, the complexity of machine learning algorithms employed for face recognition can, and has been known to, result in misclassifications, illustrating how the interpretability of a model influences decision intelligence systems with critical consequences for individual lives. Such systems are used for medical diagnosis, insurance and credit assessment, and criminal recidivism prediction, among others; therefore, explaining and justifying such decisions is important for building trust and also for improving the decision-making process. Adopting the smart agency of AI means willingly ceding some of our decision-making power to technological artefacts, requiring a balancing of decision-making power between humans and artificial agents, while being cognizant of the risk of undermining the flourishing of human autonomy in favor of the artificial (Floridi and Cowls 2019). Such an arrangement reinforces one of the objectives of this article, which is to take into account the ways and means through which information is collected, understood, and processed. There is growing evidence of different kinds of issues that range from annotator, gender, racial, and class biases (Geva et al. 2019; Stanovsky et al. 2019; Wang et al. 2019a; Eubanks 2018; Noble 2018; Benjamin 2016) to fears of how such technological progress might impact people’s capabilities or incentives in ways that could destabilize our societies (Bostrom 2019). This has prompted calls for a clear and convincing understanding of what a ‘good AI society’ would entail, accompanied by suggestions that this can best be achieved through an independent, international, multi-stakeholder process of research and consultation on AI and Data Ethics (Cath et al. 2018).

This article proposes an approach that draws attention to some of the fundamental systemic issues that need acknowledgment for diverse ways of understanding well-being. The approach presents the possibility of combining the objective circumstances surrounding people and their perceptions of those circumstances, as a dynamic interplay of outcomes and processes. Dynamic interplays need to be understood as being located in society and shaped by social, economic, political, cultural, and psychological processes (Gough and McGregor 2007); therefore, it becomes imperative to investigate the ideas guiding these processes. If general societal discourses maintain the need for economic growth and wealth as the cornerstones of well-being (Hickel 2019; Painter-Morland et al. 2017), then they will influence the basic conception of well-being in terms of what a person has, what they do with that, and how they think about what they have and can do (Gough and McGregor 2007). As corporations lead the development of AI, their narrow understanding of well-being (DesJardine and Bansal 2019; Painter-Morland et al. 2017; Gao and Bansal 2013), including a lack of social and political accountability (Cath et al. 2018), poses a challenge to well-being. The evidence of this happening is already building up (Eubanks 2018; Noble 2018; Benjamin 2016). In such a scenario, the framework presented above offers a two-pronged approach towards understanding well-being – the first prong involves taking a systemic and relational view of well-being in order to build context, and this leads to the second, which attempts to uncover discourses that limit or restrain the concept of well-being.

Most frameworks assessing well-being are centered on the individual and how community aspects impact subjective individual well-being, ignoring layers of complexities brought about by spatial and social inequality, multiple settings and scales, and temporality and past legacies (Atkinson et al. 2019). In framing well-being through relationality as opposed to individual subjectivity, Atkinson et al. (2019) have offered a view that highlights a complex systems approach towards well-being where a community is understood to be more than a sum of its parts. This view echoes a growing number of scholars who are advocating for a similar approach (see Nadin 2018; Chorafakis 2020). A complexity approach highlights the relational elements of individual parts and in doing so opens up the possibilities for designing and learning from the diverse conceptualizations of community well-being. With AI as a tool, this becomes a possibility.

Conclusions

Algorithmic systems have the potential for predicting outcomes and allocating societal resources accordingly, and in such high-stakes decision-making the lack of adequate understanding and contextualizing of information could introduce, perpetuate, and worsen issues related to well-being. Such concerns are becoming more pronounced as technical interventions in the form of AI become a way of life. Be it driverless cars or the use of machine learning for improving healthcare and financial services, AI is instrumental in shaping daily practices and transforming certain fundamental aspects of our societal systems in the process. Sophisticated statistical and probabilistic methods, the availability of enormous amounts of data associated with the increasing transformation of places into IT-friendly environments, and cheap computational power are generating concerns about AI’s impact on societies. As fundamental questions relating to the ethical, social, and economic impact of AI remain unanswered (Cath et al. 2018), its ability to shape decisions for community well-being needs to be carefully examined. This article argues that one way to do this is to analyze the knowledge systems and the societal and institutional infrastructures that inform the current understanding and conceptualization of well-being. The framework represented in Fig. 2 treats AI technologies as powerful tools for developing diverse ways of understanding and contextualizing information for imagining and creating well-being models tailored to community needs and aspirations.

As societies become more “information mature” and their reliance on AI increases, it is expected that this pervasiveness will make its existence and influence non-transparent, leading to a paradox where the more AI matters, the less visible it will be (Floridi 2016). In such an event, it becomes even more important to examine the current knowledge systems and their influence on perceptions of well-being. Deploying AI as a resource to manage the layers of complexity implicit in the relational aspects and subsequent conceptualizations of community well-being could provide dynamic solutions tailored to specific communities.