We are calling for a new area of research on the nexus of community well-being and artificial intelligence (AI). Three components of this research we propose are (1) the development and use of well-being metrics to measure the impacts of AI; (2) the use of community-based approaches in the development of AI; and (3) development of AI interventions to safeguard or improve community well-being. After providing definitions of community, well-being, and community well-being, we suggest a definition of AI for use by community well-being researchers, with brief explanations of types and uses of AI within this context. A brief summary of threats and opportunities facing community well-being for which AI could potentially present solutions or exacerbate problems is provided. The three components we propose are then discussed, followed by our call for cross-sector, interdisciplinary, transdisciplinary and systems-based approaches for the formation of this proposed area of research.
There is a growing body of research on the nexus of artificial intelligence (AI) and individuals (Anderson and Rainie 2018a; Agrawal et al. 2017), industries (Makridakis 2017), and society (Bostrom 2018; Cowls and Floridi 2018). Further, courses and new academic areas of study in AI are emerging (Rahwan and Cebrian 2018). We suggest that AI will have an ever-increasing impact on community well-being and that the nexus between AI and communities is a juncture where attention is needed. We propose three dimensions as critical to explore: (1) creating indicator frameworks that allow for measuring, monitoring and managing the nexus of AI and community well-being via well-being metrics; (2) developing community-based approaches to the development of AI projects; and (3) developing AI projects that enhance or protect community well-being. We propose that this new area of research be developed via cross-sector, interdisciplinary (combining disciplines), transdisciplinary (creating new areas of study and possibly new disciplines) and systems-based approaches.
Community, Well-being and Community Well-being Defined
First, we define community, well-being, and community well-being for the purposes of this proposal. Community is “an umbrella term [defined]...in geographic terms...as a neighborhood or town (place-based or communities of place definitions); or in social terms, such as a group of people sharing common chat rooms on the internet, a national professional association, or a labor union (communities of interest definitions)” (Phillips and Pittman 2015, p. 3–10). As such, we include both place-based and interest-based communities in our definition of community. We suggest that for both digital and non-digital interest-based communities, the impacts on well-being from AI are by and large unknown or not considered. Both place-based and interest-based communities may have undergone many AI-induced changes relatively unnoticed, in part because of the ubiquity of such changes and also because of the lack of metrics to understand the impact of AI. We further suggest that the well-being impacts from AI may vary for different online or digital communities, such as gamers and participants in virtual worlds, online dating communities, social media communities, and other types of communities, just as they may be very different for place-based communities. Thus, the proposed new area of research should encompass diverse communities.
We follow the Organization for Economic Cooperation and Development’s (OECD) “well-being framework” (OECD 2019a, p. 2) for defining well-being, thereby defining well-being as encompassing “people’s living conditions and quality of life today (current well-being), as well as the resources that will help to sustain people’s well-being over time (natural, economic, human and social capital)” (OECD 2019c, p. 2). McGregor explains that well-being arises “in the context of society and social collectivity” (McGregor 2007, p. 318). In other words, what separately and collectively influences well-being evolves via a “set of interlocking issues and constraints...embedded in a dynamic social context” (Phillips and Wong 2017, p. 1). Of note is that we define well-being to include current and future well-being, and to encompass the natural, social, and economic environments. Phillips and Wong (2017) define community well-being as “embedded with multidimensional values including the economic, social, and environmental aspects that impact people” (p. xxix) and their places. Based on their definition, we consider that community well-being encompasses the domains of (1) community, (2) culture, (3) economy-standard of living (which includes housing, food, transportation and information and communication technology), (4) education, (5) environment, (6) government, (7) health, (8) psychological well-being, (9) subjective well-being and affect, (10) time balance and (11) work. This list is aligned with findings of Bagnall et al. (2017), who found commonalities in the domains of well-being among 47 communities in the United Kingdom. It is also consistent with the OECD Better Life Index and Bhutan’s Gross National Happiness Index. We suggest that community well-being is influenced by individual well-being, and vice versa; while this concept is important, exploration of it is outside the scope of this article.
Having defined community, well-being and community well-being, we provide a definition of AI. It should be noted that within AI communities there is great divergence regarding definitions, fueled by the rapid development of the field, whereby newly minted terms and re-assignment of existing terms are not uncommon. The definitions provided are intended to give community well-being researchers and others not versed in the field of AI a contextual definition of AI.
The term artificial intelligence was coined by McCarthy, a mathematician, in 1956 (Computer History Museum n.d.). He defined it as both “the science and engineering of making intelligent machines” (Wang 2012, p. 2), and “the ability of machines to understand, think, and learn in a similar way to human beings, indicating the possibility of using computers to simulate human intelligence” (Pan 2016, p. 412). These days there are a wide variety of AI definitions, ranging from “generating adaptive behavior…maximizing probability of success...doing well in a broad range of tasks…[and] achieving complex goals in complex environments” (Legg 2008, p. 164–167) to “an agent’s ability to achieve goals in a wide range of environments” (Legg and Hutter 2006, p. 2).
Types of AI
Another way to define AI is in terms of its type, of which there are three: weak (also called narrow), strong (also called general) and super (also called conscious or self-aware) intelligence (Kaplan and Haenlein 2018, p. 4).
Weak (or narrow) AI is based on algorithms designed to solve a specific problem or set of problems in a certain context. Weak AI consists of “algorithms…designed to make decisions” (West and Allen 2018, p. 6). Weak AI is often very complex, with multiple layers and algorithmic relationships between the layers. One form of weak AI is machine learning. An example is a search engine that learns which neighborhood, room size or other attributes of a venue are searched, delivering more desirable options with each subsequent search. Deep learning AI, also called artificial neural networking, is a subset of machine learning that has multiple hierarchical levels. An example of deep learning AI is image recognition, whereby AI goes from pixels to lines to areas and patterns to arrive at recognizing one person in a group. The attribute of learning is developed by programming the objective of optimization into an algorithm: deep learning AI is programmed to identify and assess errors and change its constraints, thereby yielding outcomes closer to those desired. Weak AI can outperform humans in some tasks, such as analyzing large amounts of data, but it cannot solve problems beyond the scope of its focus. For example, Deep Blue, the AI program that beat Kasparov at chess, would not be able to query for a venue to hold a community event in a specific geographic area or identify a specific individual in a photo of a crowd of people (Emspak 2017; Greenemeier 2017). A plethora of other names for weak AI are in use, including narrow AI, machine learning and deep learning. It should be noted that there is no agreement on which terms to use or how they should be defined within the AI community, much less outside it.
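The error-correcting loop described above can be sketched in a few lines. The following is a minimal illustration of the optimization objective behind machine learning, not a model of any particular AI system; the data, learning rate and epoch count are invented for the example.

```python
# Minimal sketch of the "learning" loop: the model predicts, measures its
# error, and adjusts its parameter to reduce that error on the next pass.
# All numbers are illustrative.

def train(data, lr=0.1, epochs=100):
    """Fit y = w * x to (x, y) pairs by gradient descent on squared error."""
    w = 0.0  # initial guess for the single parameter
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y   # assess the error on this example
            w -= lr * error * x  # change the parameter to shrink the error
    return w

# Example: the underlying relationship is y = 2x; the loop recovers w close to 2.
pairs = [(1, 2), (2, 4), (3, 6)]
w = train(pairs)
print(round(w, 3))
```

Real deep learning systems repeat this same error-and-adjust cycle over millions of parameters arranged in hierarchical layers, rather than a single weight.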
General AI (AGI), which does not yet exist, would be able to jump from one function to another, such as from playing chess to finding the best venue for a community event, and to “reason, plan, and solve problems” (Kaplan and Haenlein 2019, p. 3). The problems Kaplan and Haenlein refer to are complex and multidimensional; in a community well-being context, examples are isolation, mental illness, homelessness, and the achievement of an equitable economy and sustainable development. Immense resources are being spent to develop strong AI (Friend 2018, p. 3). GoodAI, an international nonprofit, offers a five-million-dollar prize to the first to develop AGI (GoodAI n.d.). Lo et al. (2019) believe that “it is not far-fetched to foresee AGI to exist within our lifetime” (p. 1). If or when AGI exists, the need to understand, develop and monitor the nexus of AI and community well-being will be all the more pressing.
Super AI would be self-aware and would likely pass what is known as the Turing test, whereby a human is unable to distinguish artificial from human intelligence (Turing 1950). Müller and Bostrom (2016) surveyed experts about the possibility of developing superintelligence in the near future and the chance that it would be bad or extremely bad for humanity. They found that 33% of experts forecasted a bad or extremely bad future (defined as an “existential catastrophe”) (p. 12). Fiction writers and the entertainment industry depict super AI forming its own community and dramatically changing human communities. Thus, we propose that understanding the implications of AI for community well-being is particularly crucial in the event super AI is realized. Like AGI, super AI does not yet exist.
AI Defined Contextually
In addition to exploring types of AI, understanding the context in which AI is used can be helpful to the community well-being researcher. Pan (2016) outlines several ways that AI is deployed: (1) AI robotics; (2) big data and cross-media AI; (3) crowd sourced AI; and (4) human-machine hybrid augmented intelligence. These are explained briefly below with a few broad suggestions for their use.
• AI robotics: A special kind of AI system used by many industries, including the automotive, health care, food service, military and transportation industries. Imagined and real uses of AI robotics by communities include vacuum cleaners, chat bots, companions, information kiosks, recovery of plastic from water, recycling, teaching, toxic waste cleanup, etc.
• Big data AI: The “transformation of big data into knowledge” (Pan 2016, p. 411) is being used by data centers to conserve energy, make energy grids more efficient, and support closed-loop zero-emissions energy systems in place-based communities. One form of big data AI in early stages of development is cross-media AI, which integrates “text, images, voice and video” (Pan 2016, p. 411), similar to how Pokémon Go integrates images and videos as part of a game. Imagined and real uses by communities of cross-media AI include community asset mapping and building, fostering cohesion among neighborhood residents through use and control of public spaces, and engagement of community members in governmental budgeting or planning processes, etc.
• Crowd sourced AI: Grounded in “participation and interaction of individuals on the internet” (Pan 2016, p. 411), this type can be used to solve problems, develop knowledge and manage projects. Wikipedia is an example of a platform that uses crowd sourced AI. Ways communities could participate in crowd sourced AI include solving complex problems at local and global scales, such as climate change, crime prevention, neighborhood safety and disaster planning, wildlife species protection, water protection, etc.
• Human-machine hybrid augmented intelligence: Includes “wearable devices, intelligent-driving vehicles, exoskeleton devices, and human-machine collaborative surgeries” (Pan 2016, p. 411). Pan (2016) suggests this form of AI can help solve social and environmental problems, such as enhancing people’s lives, ensuring sustainable use of natural resources and realizing smart cities. Some uses of human-machine hybrid augmented intelligence include diabetes management, virtual reality to challenge beliefs and assumptions (e.g. racism, prejudice, bias, climate change, etc.), and potentially, repair of neural functions in the brain or prosthetic limbs with sensory capacity. In relation to this use of AI, in 2010 the Cyborg Foundation was founded with the aim of helping humans transition into cyborgs and advocating for cyborg rights. As such, the development of human-machine hybrid augmented intelligence may itself be giving rise to a new community.
Threats and Opportunities to Community Well-being
In this section we briefly outline the threats to community well-being from AI, as well as global threats to communities, and offer a few suggestions for how this proposed area of research could help solve these and other problems. We then briefly outline benefits to community well-being from AI. Research conducted by Grace et al. (2018) found that “AI researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053)” (p. 1). Thus, we suggest that the threats and opportunities to community well-being identified below are starting places for understanding how AI may grow in influence over time.
Threats to Community Well-being from AI
Cowls and Floridi (2018) identified problems AI poses to well-being as encompassing the (1) devaluing of human skills, (2) eroding of human self-determination, (3) reducing of human control, and (4) removing of human responsibility. Stenfors (2017) posits that if cell phones or AirPods are a form of AI, any human using them could potentially be thought of as a cyborg. Twenge (2019) found that information technologies, including cell phones, are having a deleterious impact on youth and the elderly, including lack of sleep, isolation and depression. Other threats that AI presents or can compound include bias, privacy, data ownership and personal identity, data governance, manipulation and trustworthiness (IEEE 2019), as well as unemployment, economic inequality and a crisis of social ethics (Harari 2016). Analyses of Google’s Sidewalk Labs project, such as those conducted by Goodman and Powles (2019) and Orash (2019), found that AI, when used by government or in public-private partnerships, can lead to erosion of trust, social disruption, and problems with privacy and data management practices.
Another threat to community well-being from AI may be found in the economic environment where AI is developed and deployed. Alphabet (Google), Amazon, Apple, Facebook, and Microsoft are amongst the biggest companies in the world that develop and utilize AI, and all are publicly traded (Statista 2018). AI research and development is increasingly being conducted by for-profit corporations for the purpose of increasing profit (Brynjolfsson and McAfee 2017). The well-being of communities does not come into consideration for most companies that are publicly traded, as short-term profits are considered the foremost goal and responsibility, following Friedman’s doctrine that “there is one and only one social responsibility of business--to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud” (Friedman 1970, p. 179). Some respond that today’s marketplace is not characterized by open and free competition and that deception and fraud are widespread (Tepper and Hearn 2019). The extent of an unfair advantage is magnified in communities where the resources, knowledge and organizational structures are missing for communities to engage with and contribute to the development of AI. Montes and Goertzel (2019), citing the unfair advantage of large corporations over academics and small business enterprises, call for “distributed, decentralized, and democratized…(research and development of AI and an) infrastructure for coordinated action...(to)…facilitate the evolution of AI...that is both highly capable and beneficial for humanity and beyond” (p. 354). We suggest that research into the nexus of AI and community well-being could aid in such an evolution.
In addition to threats to community well-being from AI, there are many global threats to human well-being and therefore to community well-being. Environmental threats include climate change and other pollution, non-renewable energy, plastics in the ocean, decimation of biodiversity, species extinction and water scarcity (Pecl et al. 2017; McMichael and Lindgren 2011). Economic threats include economic inequality, unemployment or underemployment, monopolies, inadequate and unaffordable housing, homelessness, food scarcity, unsustainable transportation, and lack of access to information and communication technology (ICT) (Wilkinson and Pickett 2018; Tepper and Hearn 2019; de Graaf and Batker 2011). ICT encompasses information or communication technologies such as cell phones, tablets, laptops and other devices, as well as networks such as the internet and wireless networks, in addition to other modes of communication. Societal threats range from gender inequality, discrimination and prejudice, and mental illness to corruption, widespread distrust of people and institutions, and disengagement from the governmental process (Victorian Government 2018; Grollman 2014; Anderson and Rainie 2018b; Flateau et al. 2000; Gillam and Charles 2019). One of the outcomes of a new research field for AI and community well-being could be the widespread development of AI by communities, corporate entities, and others that addresses these threats and safeguards or improves community well-being.
Benefits to Community Well-being from AI
Applications of AI are used in many industries with some benefit to communities, including the automotive (Li et al. 2018), finance (Wall 2018; West and Allen 2018), medicine (Topol 2019), retail (Yang and Siau 2018) and social media (Pan 2016) industries. AI is used by researchers (Kaplan and Haenlein 2018), in primary schools and nursing homes (Breazeal 2019), as well as for traffic management (Zhu et al. 2016), governmental service delivery (Halaweh 2018), and crime prevention (West and Allen 2018). Online communities use AI extensively for analytics, chatbots, content management, matchmaking, and modeling (Johnson 2018). It is now being installed in homes (Mims 2019). Today, people who use ICT, such as a cell phone, tablet or computer, also use some form of AI (Reinhart 2018). As ICT often includes AI, a few ICT projects are presented in the following.
The World Economic Forum and the Institut Européen d’Administration des Affaires found that once basic needs are met, ICT can be used to increase access to governmental services, education, and healthcare, reduce unemployment, and provide improvements to quality of life in cities (Dutta and Bilboa-Osorio 2012). Shirado and Christakis (2017) found that ICT may also enhance a sense of a global community and collective problem solving. In the face of disasters, ICT can be used to secure individual safety and safeguard community well-being (Shklovski et al. 2008). Ciuciu et al. (2012) identified sharing of energy systems to manage energy consumption by small enterprises and communities, which presents the potential of using AI to enhance aspects of community well-being. In 2018, the Artificial Intelligence for Global Good Summit (Mead 2018) and, in 2019, the Global Governance of AI Roundtable included discussions about the use of AI to achieve the UN Sustainable Development Goals (R. Al Hashmi, personal communication, January 28, 2019).
While one can assume there are benefits to some communities from the uses of AI listed above and many others, we propose that the impact of these applications on community well-being is, by and large, unknown. We suggest that AI could be developed with the goal of community well-being, and its impact on communities better understood, using well-being indicators to measure and assess progress towards common goals.
Three Proposed Areas for Research
We suggest there is a dearth of research into the nexus of AI and community well-being. Two exceptions are found in reports from Stanford University and the IEEE. In 2016, a team of Stanford researchers issued a report surveying implications of AI in cities, focusing on issues of education, food distribution, healthcare, law enforcement, low-resourced communities, service robots, transportation, and work (Stone et al. 2016). In 2019, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (2019) issued a report entitled Ethically Aligned Design, First Edition that included a discussion of a holistic and humanistic community-based approach to the development of ethics for AI based on the South African Ubuntu philosophy (p. 56). Both reports call for research into aspects of the nexus of AI and community well-being. Three components of the research we propose focus on indicators, community-based AI development and AI interventions.
Indicators for Assessing the Impact of AI on Community Well-being
To our knowledge, there is not yet research or applied work developing frameworks and tools to measure the effects of AI on community well-being, although there are forays into understanding the impact of technology on some dimensions, such as inclusion, health, and the environment (OECD 2019a; Chui et al. 2018). There are efforts to develop means of assessing the impact of AI or digital technology on well-being. One of these is by the IEEE, which in 2017 launched IEEE Project 7010, Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being. The project incorporates well-being indicators based on the European Social Survey, the OECD Better Life Index, the United Kingdom Office for National Statistics Measurements of National Well-being, the United Nations Sustainable Development Goal indicators, and the World Values Survey, as well as other indices (Musikanski et al. 2018). The project contemplates indicators for domains of well-being that include community, among others. While it could be used to measure the impact of AI on a community, it is not specifically designed for that purpose.
In a similar project, the International Organization for Standardization (ISO) launched a project for AI called ISO/IEC NP TR 24368, which includes an effort to address what are termed ethical and societal concerns. In 2018, the OECD began another similar effort. Based on recommendations by a group of experts, its report How’s Life in the Digital Age? Opportunities and Risks of the Digital Transformation for People’s Well-being (OECD 2019b) provided a set of 33 well-being indicators covering the domains of education and skills; ICT access and use; health; environmental quality; governance and civic engagement; income and wealth; jobs and earnings; personal security; social connections; subjective well-being; and work. The indicators were issued as a starting place, with the intent to continue developing them (M. Durand, personal communication, October 4, 2019). The OECD (2019b) states that the “report is based on an imperfect set of indicators related to community well-being, such as climate change, economic inequality, gender inequality, population growth, poverty, and species extinction” (p. 12).
We propose that before long AI will be deeply embedded in many aspects of all kinds of communities, and that assessing the impact of AI on communities could be included in assessments of community well-being. Well-being indicators have gained momentum as tools for assessing community and regional well-being (Durand 2018; Lee et al. 2015; Musikanski et al. 2017; Phillips and Wong 2017; Sirgy et al. 2009; Sung and Phillips 2018). Community well-being indicators gather data that are usually reflective of what is valued by the community’s residents (Phillips 2003), and thus can be used to understand the impacts of AI on community well-being as well as to aid in decision-making. An important component of the new field of research we call for is the development and use of well-being metrics to understand, measure and monitor impacts of AI on community well-being. We further suggest a community-based approach to the development of these indicators, building upon the work of many researchers and practitioners in the field of community well-being indicators (Lee et al. 2015; Phillips and Wong 2017; Sirgy et al. 2009; Sung and Phillips 2018).
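As a concrete illustration of how a well-being indicator framework might aggregate measurements, the sketch below normalizes a handful of domain scores and averages them with equal weights, a simplification loosely modeled on the min-max normalization used by the OECD Better Life Index. The domain names follow the list given earlier in this article; all values and ranges are invented for the example.

```python
# Hedged sketch: aggregate per-domain indicator scores into a single
# community well-being index via min-max normalization and equal weights.
# A real framework would weight domains according to community values.

def normalize(value, lo, hi):
    """Scale a raw indicator onto [0, 1] given its observed range."""
    return (value - lo) / (hi - lo)

def composite_index(domains):
    """Equal-weight average of normalized domain scores."""
    scores = [normalize(v, lo, hi) for v, lo, hi in domains.values()]
    return sum(scores) / len(scores)

# (raw value, range minimum, range maximum) per domain -- illustrative only
community = {
    "community":   (7.0, 0, 10),
    "environment": (6.5, 0, 10),
    "health":      (8.0, 0, 10),
    "work":        (5.5, 0, 10),
}
print(round(composite_index(community), 3))
```

Tracking such an index before and after an AI deployment is one simple way the impact of AI on a community could be measured and monitored over time.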
Community-Based Development of AI
We propose that a means for community-based development of AI is important to community well-being. Community-based development of AI could draw from the concept of community development. Phillips and Pittman (2015) define community development as “A process: developing and enhancing the ability to act collectively, and an outcome: (1) taking collective action and (2) the result of that action for improvement in a community in any or all realms: physical, environmental, cultural, social, political, economic, etc.” (p. 6). Thus, community development is defined as both a process and an outcome. A community development approach can also include the application of an action research methodology. Action research involves an iterative process of collaboration with community partners in order to deeply understand the local context of existing perspectives and problems, and to design solutions that contribute to positive change (Hayes 2011). An action research methodology could be used to aid in understanding the interdependency between the process of developing and deploying AI and impacts on community well-being.
Other approaches may be adapted to formulate community-based development of AI, such as the human-in-the-loop system approach. These are systems in which a human has a crucial role in the control, optimization and maintenance of AI (Rahwan 2017). Human-in-the-loop systems have been explored within the field of supervisory control, which studies the different roles and forms of supervision in human-computer interaction (Allen et al. 1999; Sheridan 2006). Rahwan (2017) proposed the concept of society-in-the-loop system design, whereby the entire society is involved in the process of supervising AI. We propose an investigation into community-in-the-loop systems. There are other frameworks for community-based development of AI that may also be applicable and adaptable, such as human-centric design and ethically aligned design.
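One way to picture a community-in-the-loop system is as a gate between an AI's proposals and their execution: nothing takes effect until a community reviewer approves it. The sketch below is our own illustration; the function names and the privacy-flag criterion are assumptions for the example, not part of Rahwan's (2017) framework.

```python
# Illustrative community-in-the-loop pattern: an automated system proposes
# actions, and a human/community reviewer must approve each one before it
# is applied. The reviewer stands in for community deliberation.

def run_with_oversight(proposals, review):
    """Partition proposals into those the reviewer approves and rejects."""
    applied, rejected = [], []
    for proposal in proposals:
        (applied if review(proposal) else rejected).append(proposal)
    return applied, rejected

# Hypothetical proposals; the privacy flag is an invented review criterion.
proposals = [
    {"action": "optimize bus routes", "affects_privacy": False},
    {"action": "deploy facial recognition", "affects_privacy": True},
]
approved, rejected = run_with_oversight(
    proposals, review=lambda p: not p["affects_privacy"]
)
print([p["action"] for p in approved])
```

In a deployed system the review step would be a deliberative community process rather than a single predicate, but the control structure, automation gated by human judgment, is the same.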
Castells and Cardoso (2005) described a social transformation termed the Network Society, in which there is an “emergence of a new form of social organization based on networking, that is on the diffusion of networking in all realms of activity on the basis of digital communication networks” (p. 3). We have provided a few starting places for community-based development of AI, with the hope that they inspire a transformation whereby communities that use and are impacted by AI are integrated into every aspect of AI conception, development, deployment and optimization.
Development of AI to Solve Problems and Improve Community Well-being
We propose that AI be developed with the purpose of solving AI-generated threats to community well-being. Conversely, AI also presents a means to contribute to solving environmental, social and economic problems that threaten community well-being. A number of research publications and industry-led initiatives are developing the concept of AI for social good, also termed AI for Good (GoodAI n.d.; Microsoft n.d.; Chui et al. 2018; AI for Social Good 2019). The concept began through a series of interdisciplinary workshops and is grounded in “ongoing work, be it in urban computing, sustainability, health, public welfare, leading the way for applying AI for Social Good” (Hager et al. 2019, p. 15). We suggest that researchers and industry learn from and participate in the development of these concepts and consider adapting them for the development of AI that solves problems threatening community well-being and contributes to it.
Guiding Frameworks for a New Area of Research
We suggest that the study of the intersection of AI and community well-being incorporate cross-sector, interdisciplinary, transdisciplinary and systems-based frameworks. We briefly outline the application of these frameworks within the context of AI and community well-being.
A cross-sector approach would bring community organizers, the corporate sector, researchers and academics, policy makers and other interest groups together for the creation of this new area of research. The role of culture has been noted as important to AI by Ma and Seaver. At the first AI for Social Good workshop at the 2018 NeurIPS Machine Learning research conference, Ma said “I used to think that culture should have a seat at the table but now I think that culture is the table from which economic development should happen” (Y. Y. Ma, personal communication, December 8, 2018). Seaver (2017) advises researchers to “think of algorithms not ‘in’ culture ... but ‘as’ culture: part of broad patterns of meaning and practice that can be engaged with empirically” (p. 1). We propose that community is a manifestation of culture, and a cross-sector approach could be one way for communities to be at the table and in the culture, so to speak.
A cross-sector approach to developing an indicator framework for assessing impacts of AI on community well-being could help to account for diverse cultural contexts when developing a means to measure and monitor these relationships. A cross-sector approach could also facilitate the engagement of communities in development of AI solutions and their implementation.
Interdisciplinary approaches involve collaboration between two or more disciplines. This approach is used in the development of new fields of study of AI, such as critical algorithm studies (Cicirello 2007). Critical algorithm studies “systematically study and tackle legal, ethical and social challenges of data science” (Pfeffer and Mayer n.d., p. 2). Critical algorithm studies draw on sociology, anthropology, science and technology studies, geography, communication, media and legal disciplines, among others (Gillespie and Seaver 2015). Other fields that take an interdisciplinary approach include computational sustainability, sustainability studies, enzymology and bioinformatics. An interdisciplinary approach to the engagement of communities in the development of AI could benefit from the scholarship of many fields, including community development, urban planning, public policy, sustainability studies, cultural studies, as well as information and computer sciences and AI studies. An interdisciplinary approach can be a step towards a transdisciplinary approach.
A transdisciplinary approach “dissolves the boundaries between the conventional disciplines and organizes teaching and learning around the construction of meaning in the context of real-world problems or themes” (International Bureau of Education n.d., p. 1). Brown et al. (2010) suggest transdisciplinary approaches can be used to come to a “collective understanding” (p. 4). Transdisciplinary approaches were conceived, in part, to address problems that cannot be solved with a single solution, or that are so complex that traditional problem-solving does not apply. These are termed wicked problems, which Rittel and Webber (1973) first explored in the context of policy and planning in the 1970s. They pointed out, rightly so, that these types of problems have no stopping rule – that is, work on the problem is never done, due to the complexity and changing nature of such problems. The threats and opportunities facing community well-being and the role of AI could be considered a wicked problem, and thus this approach is suggested for this proposal. Transdisciplinary approaches may also create new areas of study, and possibly new disciplines may emerge.
We suggest a framework to guide this area of research should be grounded in a systems-based approach. Systems can be defined in a multitude of ways, but intuitively one can understand that a family is a system, as is a community, a city, and a biosphere. Meadows (2008) defined a system as “an interconnected set of elements that is coherently organized in a way that achieves something” (p. 11). AI can play multiple roles in a system: it can be an element, such as teaching robots used in primary schools (Breazeal 2019); an organizing force, as many people experience when using services such as Google Maps and Facebook; or part of self-reinforcing feedback loops facilitating the achievement of something, such as Amazon’s or YouTube’s recommender systems designed to change behavior patterns (Jiang et al. 2019).
Leverage points are “places in the system where a small change could lead to a large shift in behavior” (Meadows 2008, p. 145). In an effort to help researchers and others understand how a systems-based approach could be integrated into this proposed area of research, leverage points for a system identified by Meadows (1999) are condensed and explained in terms of AI and community well-being below (ordered by magnitude of impact from low to high).
9. Changing the parameters: Incentivizing or funding community-based development of AI; developing a standard for the development of AI; and dis-incentivizing the development of AI that harms community well-being.
8. Changing the size of buffers relative to constant changes: Encouraging interdisciplinary or transdisciplinary approaches to AI and community well-being studies and research; reducing the barriers to AI development for the general population; and planning and developing common physical and online spaces for work and play where community members can engage in the development, control, monitoring and optimization of AI.
7. Changing the physical structure that dictates behavior and movement: Redesigning how disciplines (schools and colleges) occupy buildings on university campuses to facilitate interdisciplinary approaches; and changing how people commute, exercise, play, and work together so that AI increases face-to-face interactions and community members build relationships through AI-facilitated interpersonal interactions.
6. Changing the feedback loops so that systems can self-correct or self-reinforce: Ensuring that the economic, environmental, and social gains from AI do not worsen economic inequality, unemployment, unequal representation in government, or other externalities, but instead benefit people and communities by decreasing income inequality, increasing job security, ensuring equal representation in government, and accounting for other externalities.
5. Changing access to data and information relative to the rate of change in the system: Giving individuals and communities access to information about how personal and community-level data are used, as well as decision-making power over how and when those data are used and retired.
4. Changing the rules: Enacting laws requiring an accounting of the impacts of AI on community well-being and requirements for adjustments to AI if it negatively impacts community well-being.
3. Changing the structure of the system: Putting in place collective, open source, or easily accessible development, monitoring and adjustment of AI by and for communities; and prioritization of goals, indicators and data for well-being by governments and other institutions.
2. Changing the metrics, goals and values of the system: Changing from the metrics of profit and economic growth to guide the development and deployment of AI to well-being metrics, thereby shifting the values from wealth, status and physical appearance to caring for community, connecting and personal well-being (Kasser and Ryan 1993).
1. Changing one’s own mindset: Not necessarily accepting what is accepted and normal as truth or reality, and seeing things in a different way.
On this last point, which Meadows (1999) considered the most important leverage point on her list, she wrote:
“In a nutshell, you keep pointing at the anomalies and failures in the old paradigm, you keep speaking louder and with assurance from the new one, you insert people with the new paradigm in places of public visibility and power. You don’t waste time with reactionaries; rather you work with active change agents and with the vast middle ground of people who are open-minded.” (p. 18).
AI holds the potential to either exacerbate or mitigate many threats to community well-being. The concept of developing and applying AI within the context of, and for the goals of, community well-being is nascent. Very complex, multi-faceted and sometimes unsolvable problems may emerge at the nexus of community well-being and AI development and application. Because of this complexity, we suggest an approach that is systems-based as well as cross-sector and transdisciplinary in nature. It will take many actors to address the potentials and pitfalls of AI in the community context.
Cowls and Floridi (2018) state that “fear, ignorance, misplaced concerns or excessive reaction may lead a society to underuse AI technologies below their full potential, for what might be broadly described as the wrong reasons” (p. 1). A new area of research is needed to ensure AI reaches its potential to help rather than harm community well-being. This field of research should explore the development of well-being metrics to assess the impact of AI on communities, means for community-based development of AI, and the development of AI with the goal of safeguarding and improving community well-being. We further suggest the need for, and value of, cross-sector, interdisciplinary, transdisciplinary, and systems-based approaches in the formation of the proposed research area. Our hope is that our proposal will contribute to communities worldwide where the role of AI is to safeguard and improve all domains of well-being.
AI for Social Good. (2019). AI for social good. Retrieved from https://aiforsocialgood.github.io/iclr2019/ Accessed December 23, 2019.
Allen, J., Guinn, C., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems and Their Applications, 14(5), 14–23. https://doi.org/10.1109/5254.796083.
Anderson, J. & Rainie, L. (2018a). Artificial intelligence and the future of humans. Washington DC: Pew Research Center. Retrieved from https://www.pewinternet.org/2018/12/10/artificial-intelligence-and-the-future-of-humans/ Accessed December 23, 2019.
Anderson, J. & Rainie, L. (2018b). 3. Concerns about the future of people’s well-being. Pew Research Center. Retrieved from https://www.pewresearch.org/internet/2018/04/17/concerns-about-the-future-of-peoples-well-being/ [Online Resource].
Argawal, A, Gans, J. & Goldfarb, A. (2017). What to expect from artificial intelligence. Cambridge, MA: MIT Sloan Management Review. Retrieved from https://static1.squarespace.com/static/578cf5ace58c62ac649ec9ce/t/589a5c99440243b575aaedaa/1486511270947/What+to+Expect+From+Artificial+Intelligence.pdf [Online Resource].
Bagnell, A., South, J., Mitchell, B., Pilkington, G., Newton R. & Di Martino, S. (2017). Systematic scoping review of indicators of community wellbeing in the UK. London, UK: What works for wellbeing. Retrieved from whatworkswellbeing.org/product/community-wellbeing-indicators-scoping-review/ (select and click to download required). [online resource].
Bostrom, N. (2018). The vulnerable world hypothesis. Oxford, UK: The Future of Humanity Institute. Retrieved from https://nickbostrom.com/papers/vulnerable.pdf.
Breazeal, C. (2019). Living and flourishing with AI. Retrieved from vimeo.com/313938302 [Online Resource].
Brown, V., Harris, J., & Russell, J. (Eds.). (2010). Tackling wicked problems through the transdisciplinary imagination. Washington DC: Earthscan.
Brynjolfsson, E. & McAfee, A. (2017). The business of artificial intelligence: What it can - and cannot do - for your organization. Brighton, MA: Harvard Business Review Publishing. Retrieved from hbr.org/cover-story/2017/07/the-business-of-artificial-intelligence [Online Resource].
Castells, M. & Cardoso, G. Eds. (2005). The network society: From knowledge to policy. Washington, DC: Johns Hopkins Center for Transatlantic Relations. Retrieved from https://www.dhi.ac.uk/san/waysofbeing/data/communication-zangana-castells-2006.pdf [Online Resource].
Chui, M., Harrysson, M., Manyika, J., Roberts, R., Chung, R., Nel, P. & van Heteren, A. (2018). Applying artificial intelligence for social good. McKinsey Global Institute. Retrieved from https://www.mckinsey.com/featured-insights/artificial-intelligence/applying-artificial-intelligence-for-social-good [Online Resource].
Cicirello, V. (2007). An interdisciplinary course on artificial intelligence designed for a liberal arts curriculum. Journal of Computing Sciences in Colleges, 23(3), 120–127. Retrieved from https://pdfs.semanticscholar.org/fe45/2b318223a22d8d8303757effc133213903b8.pdf. Accessed December 23, 2019.
Ciuciu, I., Meersman, R., & Dillon, T. (2012). Social network of smart-metered homes and SMEs for grid-based renewable energy exchange. In Paper presented at 6th IEEE international conference on digital ecosystems and technologies (DEST). Campione d’Italia: Italy. https://doi.org/10.1109/DEST.2012.6227922.
Computer History Museum. (n.d.). John McCarthy 1999 fellow. Retrieved from www.computerhistory.org/fellowawards/hall/john-mccarthy/ [Online Resource]. Accessed December 23, 2019.
Cowls, J., & Floridi, L. (2018). Prolegomena to a white paper on an ethical framework for a good AI society. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3198732.
de Graaf, J., & Batker, D. (2011). What’s the economy for anyway. New York, NY: Bloomsbury Press.
Durand, M. (2018). Countries’ experiences with well-being and happiness metrics. In J. Sachs, A. Adler, A. Bin Bisher, J. de Neve, M. Durand, E. Diener, J. Helliwell, R. Layard, & M. Seligman (Eds.), Global happiness policy report. New York, NY: Sustainable Development Solutions Network.
Dutta, S. & Bilboa-Osorio, B. (2012). The global information technology report 2012. Geneva, Switzerland: The World Economic Forum. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.357.1969&rep=rep1&type=pdf [Online Resource].
Emspak, J. (2017). What is intelligence? 20 years after Deep Blue, AI still can’t think like humans. Live Science. Retrieved from https://www.livescience.com/59068-deep-blue-beats-kasparov-progress-of-ai.html Accessed December 23, 2019.
Flatau, P., Galea, J., & Petridis, R. (2000). Mental health and wellbeing and unemployment. The Australian Economic Review, 22(2), 161–181. https://doi.org/10.1111/1467-8462.00145.
Friedman, M. (1970, September 13). The social responsibility of business is to increase its profits. New York Times Magazine. Retrieved from umich.edu/~thecore/doc/Friedman.pdf [Online Resource].
Friend, T. (2018, May 7). How frightened should we be of AI? The New Yorker. Retrieved from https://www.newyorker.com/magazine/2018/05/14/howfrightened-should-we-be-of-ai [Online Resource].
Gillam, C., & Charles, A. (2019). Community wellbeing: The impacts of inequality, racism and environment on a Brazilian coastal slum. World Development Perspectives, 12, 18–24. https://doi.org/10.1016/j.wdp.2019.02.006.
Gillespie, T. & Seaver, N. (2015). Critical algorithm studies: A reading list. Retrieved from https://socialmediacollective.org/reading-lists/criticalalgorithm-studies/ [Online Resource].
GoodAI. (n.d.). General AI challenge by GoodAI. Retrieved from www.general-ai-challenge.org/ [Online Resource]. Accessed December 23, 2019.
Goodman, E., & Powles, J. (2019). Urbanism under Google: Lessons from Sidewalk Toronto. Fordham Law Review. (Forthcoming). https://doi.org/10.2139/ssrn.3390610.
Google AI. (n.d.). Using AI for social good. Retrieved from https://ai.google/education/social-good-guide/ [Online Resource].
Grace, K., Salvatier, J., Dafoe, A., Zhang, B. & Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. Retrieved from www.jair.org/index.php/jair/article/download/11222/26431/.
Greenemeier, L. (2017). 20 years after Deep Blue: How AI has advanced since conquering chess. Scientific American. Retrieved from https://www.scientificamerican.com/article/20-years-after-deep-blue-how-ai-has-advanced-since-conquering-chess/.
Grollman, E. (2014). How discrimination hurts health and wellbeing. Scholar strategy network. Retrieved from https://scholars.org/contribution/howdiscrimination-hurts-health-and-personal-wellbeing.
Hager, G., Drobnis, A., Fang, F., Ghani, R., Greenwald, A., Lyons, T., … Tambe, M. (2019). Artificial intelligence for social good. Proceedings of a Computing Community Consortium (CCC) workshop, Washington DC, USA, June 7th, 2016. Retrieved from https://arxiv.org/abs/1901.05406.
Halaweh, M. (2018). Viewpoint: Artificial intelligence government (Gov. 3.0): The UAE leading model. Journal of Artificial Intelligence Research, 62, 269–272. https://doi.org/10.1613/jair.1.11210 Retrieved from https://jair.org/index.php/jair/article/view/11210/26421.
Harari, Y. (2016). Homo Deus: A brief history of tomorrow. New York, NY: Random House.
Hayes, G. (2011). The relationship of action research to human-computer interaction (article 15). ACM Transactions on Computer Human Interaction, 18(3), 1–20. https://doi.org/10.1145/1993060.1993065.
Institute of Electrical and Electronic Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design first edition. Piscataway, New Jersey: IEEE Publishing. Retrieved from ethicsinaction.ieee.org/ [Online Resource – requires filling in a form to download]. Accessed December 23, 2019.
International Bureau of Education. (n.d.). Transdisciplinary approach. Retrieved from http://www.ibe.unesco.org/en/glossary-curriculumterminology/t/transdisciplinary-approach [Online Resource]. Accessed December 23, 2019.
Jiang, R., Chiappa, S., Lattimore, T., Gyorgy, A. & Kohli, P. (2019). Degenerate feedback loops in recommender systems. Proceedings of AAAI/ACM Conference on AI, Ethics, and Society (AIES ‘19), Honolulu, HI, USA, January 27-28, 2019. Retrieved from https://arxiv.org/abs/1902.10730.
Johnson, B. (2018, July 26). AI use cases for communities & networks. Retrieved from structure3c.com/2018/07/26/ai-use-cases-for-communities-networks/ [online resource].
Kaplan, A., & Haenlein, M. (2018). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25. https://doi.org/10.1016/j.bushor.2018.08.004.
Kasser, T., & Ryan, R. (1993). A dark side of the American dream: Correlates of financial success as a central life aspiration. Journal of Personality and Social Psychology, 65(2), 410–422. https://doi.org/10.1037/0022-3514.65.2.410.
Lee, S., Kim, Y., & Phillips, R. (2015). Exploring the intersection of community well-being and community development. In: S. Lee, Y. Kim, & R. Phillips (Eds.), Community well-being and community development, SpringerBriefs in well-being and quality of life research (pp. 1–7). Cham, Switzerland: Springer.
Legg, S. (2008). Machine super intelligence (doctoral dissertation). University of Lugano, Lugano, Switzerland. Retrieved from http://www.vetta.org/documents/Machine_Super_Intelligence.pdf.
Legg, S. & Hutter, M. (2006). A formal measure of machine intelligence. Proceedings of the 15th annual machine learning conference of Belgium and the Netherlands (pp. 73–78), Gent, Belgium, May 11–12, 2006. Retrieved from https://arxiv.org/pdf/cs/0605024.pdf.
Li, J., Cheng, H., Guo, H., & Qiu, S. (2018). Survey on artificial intelligence for vehicles. Automotive Innovation, 1(1), 2–14. https://doi.org/10.1007/s42154-018-0009-9.
Lo, Y., Woo, C. & Ng, K. (2019). The necessary roadblock for artificial general intelligence: Corrigibility. Easy Chair Print, 846. Retrieved from https://www.easychair.org/publications/preprint_download/gRlw [Online Resource].
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006.
McGregor, J. (2007). Research well-being: From concepts to methodology. In I. Gough & J. A. McGregor (Eds.), Well-being in developing countries: From theory to research (pp. 316–355). New York: Cambridge University Press.
McMichael, A., & Lindgren, E. (2011). Climate change: Present and future risks to health, and necessary responses. Journal of Internal Medicine, 270(5), 401–413. https://doi.org/10.1111/j.1365-2796.2011.02415.x.
Mead, L. (2018). Global summit focuses on the role of artificial intelligence in advancing SDGs. Retrieved from https://sdg.iisd.org/news/global-summitfocuses-on-the-role-of-artificial-intelligence-in-advancing-sdgs/ [Online Resource].
Meadows, D. (1999). Leverage points: Places to intervene in a system. Hartland, VT: The Sustainability Institute. Retrieve from http://donellameadows.org/wp-content/userfiles/Leverage_Points.pdf [Online Resource].
Meadows, D. (2008). Thinking in systems: A primer. Hartland, VT: The Sustainability Institute. Retrieved from https://wtf.tw/ref/meadows.pdf [Online Resource].
Microsoft. (n.d.). AI for good. Retrieved from https://www.microsoft.com/en-us/ai/ai-for-good [Online Resource]. Accessed December 23, 2019.
Mims, C. (2019, June 1). Amazon’s plans to move into your next apartment before you do. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/amazons-plan-to-move-in-to-your-next-apartment-before-you-do-11559361605 [Online Resource].
Montes, G., & Goertzel, B. (2019). Distributed, decentralized, and democratized artificial intelligence. Technological Forecasting and Social Change, 141, 354–358. https://doi.org/10.1016/j.techfore.2018.11.010.
Muller, V., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In V. Muller (Ed.), Fundamental issues of artificial intelligence (pp. 553–571). New York, NY: Springer. https://doi.org/10.1007/978-3-319-26485-1_33.
Musikanski, L., Polley, C., Cloutier, S., Berejnoi, E., & Colbert, J. (2017). Happiness in communities: How neighborhoods, cities and states use subjective well-being metrics. Journal of Social Change, 9(1), 32–35. Retrieved from http://scholarworks.waldenu.edu/jsc/vol9/iss1/3/ .
Musikanski, L., Havens, J. & Gunsch, G. (2018). IEEE P7010 well-being metrics standard for autonomous and intelligence systems. IEEE Standards Association. Retrieved from http://sites.ieee.org/sagroups-7010/files/2019/01/IEEEP7010_WellbeingMetricsforA_IS_ShortPaper_December272018For_Submission_reviewedbyIEEELegal-1.pdf [Online Resource].
Orash, A. (2019). Platform governance and the Smart City: Examining citizenship in Alphabet’s ‘sidewalk Toronto.’ Digitalization challenges to democracy, 19/1. Hamilton, CA: McMaster University. Retrieved from https://globalization.mcmaster.ca/research/publications/workingpapers/2019/working-paper-oct-2019.pdf#page=89 [Online Resource].
Organisation for Economic Cooperation and Development (OECD). (2011). Your better life index. Paris, France: OECD Publishing. Retrieved from www.oecd.org/social/yourbetterlifeindex.htm [Online Resource]. Accessed December 23, 2019.
Organisation for Economic Cooperation and Development (OECD). (2019a). Artificial intelligence in society. Paris, France: OECD Publishing. Retrieved from https://ec.europa.eu/jrc/communities/sites/jrccties/files/eedfee77-en.pdf [Online Resource]. Accessed December 23, 2019.
Organisation for Economic Cooperation and Development (OECD). (2019b). How’s life in the digital age? Opportunities and risks of the digital transformation for People’s well-being. Paris, France: OECD Publishing. Retrieved from https://www.oecd-ilibrary.org/sites/9789264311800-en/index.html?itemId=/content/publication/9789264311800-en [Online Resource]. Accessed December 23, 2019.
Organisation for Economic Cooperation and Development (OECD). (2019c). Measuring well-being and progress. OECD Better Life Initiative. Retrieved from https://www.oecd.org/sdd/OECD-Better-Life-Initiative.pdf [Online Resource]. Accessed December 23, 2019.
Pan, Y. (2016). Heading toward artificial intelligence 2.0. Engineering, 2(4), 409–413. https://doi.org/10.1016/J.ENG.2016.04.018.
Pecl, G., Araujo, M., Bell, J., Blanchard, J., Bonebrake, T., Chen, I., et al. (2017). Biodiversity redistribution under climate change: Impacts on ecosystems and human well-being. Science, 355(6332), 1–9. https://doi.org/10.1126/science.aai9214.
Pfeffer, J. & Mayer, K. (n.d.). Critical data and algorithm studies. Retrieved from https://www.frontiersin.org/research-topics/9570/critical-data-andalgorithm-studies.
Phillips, R. (2003). Community indicators. PAS report no. 517. Chicago: American Planning Association.
Phillips, R., & Pittman, R. (Eds.). (2015). An introduction to community development. London: Routledge/Taylor & Francis Group.
Phillips, R., & Wong, C. (Eds.). (2017). Handbook of community well-being research. Dordrecht: Springer.
Rahwan, I. (2017). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14. https://doi.org/10.1007/s10676-017-9430-8. Accessed 23 Dec 2019.
Rahwan, I. & Cebrian, M. (2018, March 29). Machine behavior needs to be an academic discipline. Retrieved from http://nautil.us/issue/58/self/machinebehavior-needs-to-be-an-academic-discipline.
Reinhart, R. (2018, March 6). Most Americans already using artificial intelligence products. Retrieved from news.gallup.com/poll/228497/americansalready-using-artificial-intelligence-products.aspx [Online Resource].
Rittel, H. & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. Retrieved from http://urbanpolicy.net/wpcontent/uploads/2012/11/Rittel+Webber_1973_PolicySciences4-2.pdf. Accessed 23 Dec 2019.
Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104.
Sheridan, T. (2006). Supervisory control. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 1025–1052). Hoboken, NJ: John Wiley & Sons, Inc.
Shirado, H., & Christakis, N. (2017). Locally noisy autonomous agents improve global human coordination in network experiments. Nature, 545, 370–374. https://doi.org/10.1038/nature22332.
Shklovski, I., Palen, L., & Sutton, J. (2008). Finding community through information and communication technology through disaster response, Proceedings of the 2008 ACM conference on computer supported cooperative work (pp. 127–136). San Diego, CA. https://doi.org/10.1145/1460563.1460584.
Sirgy, M., Phillips, R., & Rahtz, D. (2009). Community quality-of-life indicators: Best cases II. Dordrecht, The Netherlands: Springer.
Statista. (2018). The 100 largest companies in the world by market value in 2018 (in billion US dollars). Retrieved from www.statista.com/statistics/263264/top-companies-in-the-world-by-market-value/ [Online Resource]. Accessed December 23, 2019.
Stenfors, S. (2017). You and me in a cyborg society. Retrieved from youtu.be/31M82WBS08k [Online Resource].
Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., ... Teller, A. (2016). Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: Report of the 2015–2016 study panel. Stanford, CA: Stanford University. Retrieved from http://ai100.stanford.edu/2016-report [Online Resource].
Sung, H., & Phillips, R. (2018). Indicators and community well-being: Exploring a relational framework. International Journal of Community Well-Being, 1(1), 63–79. https://doi.org/10.1007/s42413-018-0006-0. Accessed 23 Dec 2019.
Tepper, J., & Hearn, D. (2019). The myth of capitalism. Hoboken, NJ: John Wiley & Sons, Inc.
Topol, E. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 24, 44–56. https://doi.org/10.1038/s41591-018-0300-7.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Retrieved from www.csee.umbc.edu/courses/471/papers/turing.pdf [Online Resource].
Twenge, J. (2019). Chapter 5: The sad state of happiness in the United States and the role of social media. In J. Helliwell, R. Layard, & J. Sachs (Eds.), World happiness report 2019. New York, NY: Sustainable Development Solutions Network. Retrieved from http://worldhappiness.report/ed/2019/the-sad-state-of-happinessin-the-united-states-and-the-role-of-digital-media/ [Online Resource].
Victorian Government. (2018). Gender inequality affects everyone. Retrieved from https://www.vic.gov.au/gender-inequality-affects-everyone [online resource].
Wall, L. (2018). Some financial regulatory implications of artificial intelligence. Journal of Economics and Business, 100, 55–63. https://doi.org/10.1016/j.jeconbus.2018.05.00.
Wang, F. (2012). A big-data perspective on AI: Newton, Merton, and analytics intelligence. IEEE Intelligent Systems, 27(5), 2–4. https://doi.org/10.1109/MIS.2012.9.
West, D. & Allen, J. (2018, April 24). How artificial intelligence is transforming the world. Washington DC: The Brooking Institute. Retrieved from www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/ [Online Resource].
Wilkinson, R., & Pickett, K. (2018). The inner level: How more equal societies reduce stress, restore sanity and improve Everyone’s wellbeing. London, England: Penguin Books.
Yang, Y. & Siau, K. (2018). A qualitative research on marketing and sales in the artificial intelligence age. Paper presented at Midwest United States Association for Information Systems (MWAIS) 2018 proceedings, St. Louis, Missouri. Retrieved from aisel.aisnet.org/mwais2018/41.
Zhu, F., Li, Z., Chen, S., & Xiong, G. (2016). Parallel transportation management and control system and its applications in building smart cities. IEEE Transactions on Intelligent Transportation Systems, 17(6), 1576–1585. https://doi.org/10.1109/TITS.2015.2506156.
Appreciation to Sari Stenfors, Augmented Leadership Institute, for comments.
Conflict of Interest
There are no conflicts of interest. No research involving human or animal participants was involved in the formation of this essay. All relevant ethical standards were observed.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Musikanski, L., Rakova, B., Bradbury, J. et al. Artificial Intelligence and Community Well-being: A Proposal for an Emerging Area of Research. Int. Journal of Com. WB (2020). https://doi.org/10.1007/s42413-019-00054-6
- Artificial intelligence
- Community well-being
- Well-being indicators
- Community indicators