1 Introduction

Technology has historically been created to serve the needs and desires of humans, and almost exclusively to serve a need or desire of a specific human or group of humans first. Artificial Intelligence (AI)—including forms of Artificial Narrow Intelligence (ANI)Footnote 1 designed for tasks that are informed by (or seek to inform) human decisions—is, however, increasingly gaining the capacity to serve much broader ambitions. In discussing his popular book, AI 2041: Ten Visions for Our Future, author Kai-Fu Lee posits that in the future “AI will learn to serve human needs” [2]. Similarly, Human-Centered AI (HCAI)Footnote 2 has “serve human needs” as a primary application goal [4], and it has been suggested that the defining characteristic of all technologies is their capacity to serve human needs [1].

On the surface, serving human needsFootnote 3 appears to be a laudable goal for AI and AI developers, and within reach given the current fast-paced evolution of AI-related technologies. Yet, in this article, we weigh the ethical and pragmatic implications of this ambition—and consider what it would take to make needs-aware AI a reality. After all, currently we do not even have broad agreement(s) across communities, disciplines or cultures on a single definition (or a set of co-existing definitions) of what needs are (and are not) [5, 6], let alone what constitutes high-priority needs for individuals, organizations, or societies. Nor do we know how AI can assist in determining what responses are going to best satisfy needs, or even how needs satisfaction is best measured. From the barriers and technical challenges, to the driving forces that we believe can push societies toward needs-serving technological futures, in this first article (of what we hope will be a series of articles by many contributors with diverse perspectives), we start to reflect on (and then co-create) a future where AI systems have growing capacity to help us meet needs.


The burning platform

We are at a crucial point in the development of “intelligent” systems that (when combined with other emerging technologies and approaches) can, in the not-too-distant future, substantially influence both the well-being of humans (or even the being of humans) and the sustainability of our societies. We are not fully there yet, so now is the time to solidify needs as a measurableFootnote 4 construct and input into decisions that can also be used to evaluate our success, so that we (and our machines) can use needs in defining and creating a future that we [all] desire.

There is an urgency to beginning this journey [7], a “burning platform” [8] of sorts: more and more AI applications are in development, AI is increasingly important in many aspects of peoples’ lives, and AI development won’t necessarily wait while needs scholars and practitioners sit on the fence regarding the issues we outline in this article. AI development is evolving rapidly, and though there is still a great distance to go before artificial general intelligence (AGI), today’s intelligent agents are already changing lives at home, at work, and across societies without adequate systematic, comprehensive, or practical ways to integrate an awareness of needs into their design, implementation, or evaluation.

Similar to “intelligence”, needs are difficult to define in a sense that is acceptable to each and every one of usFootnote 5, especially among scholars from different disciplines and schools of thought. Hence, providing an ultimate one-line definition of need does not seem to be feasible, and may not even be desirable. This is due, among other reasons, to 1) the general difficulty of defining “concepts” using natural languages (as widely discussed in philosophy and cognitive science, see e.g. [10]), 2) the wide usage of need in both common and professional contexts, and 3) the potential complexity and multidimensionality of needs and the knowledge of needs (see e.g. [11]). And yet, if people can’t agree on what needs are, and are not, then how can emerging AI systems be expected to serve needs?

Need(s) in this context is a specific term, just as are the terms “intelligence” and “artificial intelligence”. The word need (especially when used as a noun) is deliberately selected by authors (including us) because it has the connotation of meaning a[n intrinsic] necessity for [the well-being or well-functioning of] a system (e.g., a human, a living agent, an organization, or a society)Footnote 6. This perspective, we hope, can be helpful to distinguish needs from other terms such as “wants” [12], “cravings”, “wishes”, “motivators”, or “desires” in most cases.

We also distinguish needs and satisfiers. For example, an individual may have a need (e.g., improved nutrition in order to maintain well-functioning) that can be satisfied by a specific food (e.g., cauliflower, or carrots) in a specific context (location, time, situation, etc.). Here, the person’s need (more specifically, the difference between the nutrition necessary to maintain their well-functioning and their nutrition level at the time) is not the same as the potential satisfiers of that need in the described context. Clearly, the same or different people in different contexts can satisfy a similar need (e.g. nutrition) through different satisfiers (e.g. bread, pizza, or rice), and in the future both the need and the potential satisfiers may very well change a little or quite substantially. Moreover, a satisfier may not always be an object (such as food), but could also be actions and activities (such as “meeting friends” or “exercising”), or a combination of objects and activities.

Additionally, the mapping between needs and satisfiers (depending on our level of abstraction) can be complex: multiple needs can be satisfied by a single satisfier or by multiple satisfiers, and multiple satisfiers can satisfy a single need or multiple needs. Needs and satisfiers are likewise related to goals (individual, group, organizational, and societal), and these relationships further expand the importance of needs-awareness for AI systems. Here–considering these complexities–we find both challenges and contexts in which needs-aware AI technologies could be potentially quite helpful.
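To make the distinction concrete, the following is a minimal sketch (in Python) of how a needs-aware system might represent needs separately from satisfiers, with a context-dependent, many-to-many mapping between them. The class names, fields, and example entries are our own illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Need:
    """An intrinsic necessity of a system (person, organization, society, ...).

    Following the discrepancy view described above, the need itself is the gap
    between a required level and the currently measured level.
    """
    name: str              # e.g. "nutrition"
    current_level: float   # measured state at assessment time (0..1)
    required_level: float  # level necessary for well-functioning (0..1)

    @property
    def gap(self) -> float:
        return max(0.0, self.required_level - self.current_level)

@dataclass(frozen=True)
class Satisfier:
    """An object, activity, or combination that can satisfy needs in context."""
    name: str
    kind: str  # "object", "activity", or "composite"

# Context-dependent, many-to-many mapping: the same need maps to different
# satisfiers in different contexts, and one satisfier can serve several needs.
satisfier_map: dict[tuple[str, str], list[Satisfier]] = {
    ("nutrition", "home/evening"): [Satisfier("rice", "object"),
                                    Satisfier("bread", "object")],
    ("nutrition", "office/noon"):  [Satisfier("pizza", "object")],
    ("social connection", "weekend"): [Satisfier("meeting friends", "activity")],
}
```

Even this toy model forces explicit design choices (how levels are measured, how contexts are encoded, who decides the required levels) that, as we argue below, the relevant communities have not yet settled.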


Ethical and sustainable AI

In the development of the technologies that power AI (and those that are powered by AI), we contend that AI-driven sociotechnical systems should ideally be sustainable. Here, we will apply Human-centric, Accountable, Lawful, and Ethical AI (Sustainable HALE AI)Footnote 7 as a framework for sustainable AI. We note, however, that even within this framework, needs should find a more applied role than in current approaches to HALE AI–which is one of our motivations for writing this article.

It has been suggested that AI developers are often placed in social dilemmas with societal good on one side and commercial pressures on the other [14]. We submit that part of the solution to resolving these dilemmas, beyond ethical and legal/regulatory frameworks, is the introduction of measurable needs (and measurable needs satisfaction) into ongoing efforts to achieve AI that helps to satisfy (i.e. is aware of) needs. By identifying and measuring needs (e.g., societal, organizational, and individual needs), we have the best chance of finding an appropriate equilibrium that serves them meaningfully and in a balanced manner. We cannot, however, achieve this by ignoring needs, or by incorporating them only superficially, without working definitions or standards for what they are, what they are not, how they relate, and how they can/should be enacted, utilized, satisfied, or measured.

Likewise, attempting to develop AI that responds and/or is responsive to underprivileged communities (whether based on race, gender, ethnicity, economics, or combinations of these and other variables) demands a multi-disciplinary understanding of human needs–one that integrates multiple ‘levels’ (individual, organizational, societal needs) [15]. In other words, needs can fundamentally contribute towards shifting the practice of one-size-fits-all AI to a more human-aware (human-centric), pluralist, and inclusive approach [16].

“Both wants and needs are always tied to value prioritizations—they are not value neutral. Needs evolve within certain historical and cultural contexts.” [17] Needs can, therefore, place decisions in historical and cultural contexts, just as those contexts shape what counts as a need at the time. Situational and historical contexts matter immensely as we look to implement needs-aware AI, throughout design, development, and implementation. For example, during a pandemic an AI in a hospital will require different considerations of needs than in non-pandemic times, and the needs considerations of a hospital AI will always be distinct from those of an AI that manages vending machines. Needs can be informative in all of these contexts, but context determines how and when needs are included in the judgements of both humans and machines (giving needs-aware AI the capacity to be both human-centric and machine-centric, among others).


For AI, through AI, by AI, and with AI

Whereas some of the existing approaches (e.g. various regression and Bayesian tools) have been very successful in creating valuable machine learning tools (such as classifiers) in domain-specific applications, we posit that the future development of sustainable HALE AI requires additional concepts and tools associated with needs–among others. Need (philosophically, sociotechnically, and computationally) is a construct that has the capacity to guide human and societal decisions in creating AI systems, along with guiding machine decisions and behaviours during implementation. This capacity of needs can be applied (along with predictive tools and HALE frameworks) at various phases of AI development and application to create AI that is capable of serving human needs.

Needs can be for AI, through AI, by AI, and with AI. That is to say, for AI refers to understanding (and utilising) needs so that they can be used in AI systems; through AI refers to understanding needs through (during) the process of co-constructing needs-aware AI systems; by AI refers to using AI systems to understand needs and how to satisfy them (e.g., needs-miningFootnote 8, illustrated in the sketch after the question list below, mapping needs to satisfiers, or evaluating needs satisfaction); and with AI refers to the needs (human, animal, machine, environment, etc.) that emerge (are co-created) through the collaboration (or interactions) of humansFootnote 9 and AI. Many interconnected and overlapping questions may be associated with each of these, for example, and among many others:

  • Needs For AI

  • When is AI (or needs-aware AI) useful to achieve the desired result? When is AI (or needs-aware AI) a necessary (and/or sufficient) means to achieve the desired result? How? Why?

  • What is necessary (and/or sufficient) for AI to know about [human] needs in order to make a contribution to satisfying [human] needs? In which case or context?

  • What characteristics are necessary (and/or sufficient) for needs to be a system input (or element) to needs-aware AI (e.g., concise, clear, standardized, measured, cross-cultural)? What can be other required inputs?

  • How can needs be explicitized (become representable), utilized or enactized?

  • How can needs be considered and included in different AI realisation phases, from imagining and design, through development and evaluation, to sustaining and improvement—i.e. each and all co-creation processes and dimensions?

  • What are the different types of needs-aware AI? Which type is intended to be developed in a specific case and context?Footnote 10 Why? How?

  • Needs Through AI

  • What is necessary for humans (or organisations) to know about [their] needs (or satisfiers) to support (or be involved in) [needs-aware] AI development?

  • How can the processes of [co-]creating needs-aware AI improve [human] understanding of needs?

  • How can needs-aware AI help humans better identify, understand, measure, or evaluate their needs?

  • Needs By AI

  • How can needs-aware AI assist humans in identifying and implementing satisfiers that are necessary to meet [human] needs?

  • How can collective, organisational, or societal needs be satisfied by AI?

  • Are there hierarchies and priorities between needs or needs satisfaction? What kind of hierarchies? In which case or context?

  • What patterns of needs and needs satisfaction can needs-aware AI find [in data]?

  • How can data related to needs be collected, analysed, enacted, or utilized?

  • How can we ensure the sustainability, human-awareness (or human-centricity), accountability, lawfulness, and ethicality (Sustainable HALE) of needs by AI?Footnote 11

  • Needs With AI

  • What new (or changed) [human] needs (or satisfiers) emerge from interacting, living, or working with AI?

  • How are new (or changed) needs (or satisfiers) co-created by humans and needs-aware AI?

  • What is the influence of human-AI collaboration on non-human needs (needs of animals, machines, environments, etc.)?

  • What are the individual, collective, organisational, or societal consequences of newly created (or changed) needs (or satisfiers)?

  • How can we actively influence the co-creation of needs (or satisfiers), in particular in a sustainable, human-aware, lawful, and ethical manner?
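As referenced above under needs by AI, needs-mining is one concrete entry point for these questions. The sketch below is a deliberately naive, rule-based Python stub for flagging utterances that may express needs rather than wants; the marker patterns and labels are illustrative assumptions only, and a serious system would require trained models, context, and validation.

```python
import re

# Naive needs-mining stub: flag utterances that *may* express a need rather
# than a want. The marker lists are illustrative assumptions, not a lexicon.
NEED_MARKERS = re.compile(r"\b(need|require|must have|cannot do without)\b", re.I)
WANT_MARKERS = re.compile(r"\b(want|wish|would like|crave)\b", re.I)

def mine_candidates(utterances: list[str]) -> list[tuple[str, str]]:
    """Label each utterance as a candidate need or want expression (if any)."""
    labelled = []
    for u in utterances:
        if NEED_MARKERS.search(u):
            labelled.append((u, "candidate-need"))
        elif WANT_MARKERS.search(u):
            labelled.append((u, "candidate-want"))
    return labelled

print(mine_candidates([
    "I need reliable transport to keep my job.",
    "I want a faster car.",
]))
# [('I need reliable transport to keep my job.', 'candidate-need'),
#  ('I want a faster car.', 'candidate-want')]
```

Even this toy example surfaces the definitional problem discussed throughout this article: surface wording is a poor proxy for whether something is genuinely necessary for well-functioning.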

Needs for AI, through AI, by AI, and with AI are, as the questions above illustrate, not mutually exclusive or independent classes. Nevertheless, the for/through/by/with framing provides a valuable structure for understanding the input, process, and output roles of needs (as both a construct and a variable) in the future of AI. As with other sociotechnical systems, it is important to consider sustainability, human-awareness, accountability, lawfulness, and ethicality in [needs-aware] AI and to develop them based on inclusive, pluralist, and fair approaches that in particular consider underprivileged and marginalised people, as well as environmental and societal concerns. Many questions regarding these aspects and dimensions should also be investigated and carefully answered.


Our goals

A primary goal of this article is to initiate a multi-disciplinary and interdisciplinary professional dialogue about the appropriate roles for needs in the design, development, and application of AI technologies in the coming decades. We do not propose answers, nor are we naive enough to believe that this can be done overnight. Rather, we want to focus attention on the valuable role that a measurable (\(\approx\) explicitizable, accessible, utilizable, or enactable) construct of needs can have, from the design decisions that go into creating a sustainable HALE AI-based sociotechnical system, through to the technical weighing of options the systems must do to make decisions and/or recommendations. Moreover, we will reflect on a set of potential challenges, barriers, gaps, drivers, enablers, and considerations regarding the application of needs in AI and the development of HALE needs-aware AI systems.

2 The necessity of (re-)introducing needs

Needs have played an essential role throughout the history of philosophy and science. From Aristotle to Marx, many philosophers have used both the concept of needs, and the powerful literary tool of the word need, as a part of their philosophical frameworks (see [6] for an overview). More recently, psychologists, cognitive scientists, social scientists, economists, and experts from many disciplines and sectors have also conceptualized and applied needs in practical ways (see [19] for a collection of references). Computer scientists and AI experts, likewise, continue to consider needs in various architectures and systems (e.g., [16, 20, 21]). While such attempts are valuable, in the following we suggest that now is the time to reinvigorate research and professional dialogues on the roles for needs in AI systems (from novel aspects, to multi-dimensional and interdisciplinary approaches, to new measurements).

1. The next level co-production A vast number of concepts and evidence (from the sociology of science and technology, e.g. [22, 23], to cognitive science, e.g. [24, 25]) emphasize that humans and technologies co-produce (i.e., co-create, or co-construct) each other. AI is no exception–that is, humans and AI systems co-produce each other. Consider, for instance, the dynamic relationships of people and social media recommendation engines [26, 27], the shaping of behavior through the Internet of Things (IoT; see [28]), the co-evolution of law and technology [29], and the changes in how people situate their own knowledge in relation to knowledge that is always available to them online [30]. The emergence of ubiquitous [31, 32] and pervasive [33] computing has also led to a global web of ambient intelligence [34,35,36], or everyware [32], making the importance of this specific human-technology co-production more apparent–and more far-reaching.

Beyond ubiquity, there are other aspects that make human-AI co-production an important matter of consideration. For instance, computational systems in general (and AI systems in particular) can embody many capabilities that past technologies could hardly achieve–such as memorization, computation, inference, decision making, and visualization. With such capabilities, AI can/might co-create humans’Footnote 12 needs and the ways they satisfy their needs. Therefore, we argue, needs and needs satisfaction should be well considered (and studied) in relation to AI development and applications–from initial design decisions and [training] data selection, to development, application, evaluation, and beyond.

But this is just one side of the AI–human co-production of needs. The other side is that our understanding of needs and the imaginaries (i.e., shared visions and values) we have about them will also contribute to the co-construction of AI systems in the future. In other words, AIFootnote 13 will fuel our dreams of what AI can do, giving us new ideas about what we might want to accomplish in the future. When needs are considered, AI and humans do, can, and will have multiple intersecting relationships. Humans, for instance, [might] develop AI systems based on perceived or imagined needs; they are also routinely the beneficiaries of actions to address needs; they identify emerging needs and relate needs to wellness [37]; and likewise they are often in the role of assessing current needs and evaluating the extent to which needs have been satisfied. For their part, AI systems are just starting to assist people in identifying and prioritizing activities to satisfy needs, improving the efficiency of solutions to address needs, and at the same time creating new needs that didn’t exist in prior generations (necessities for both humans and AI systems alike)Footnote 14. However, AI might play more roles in the coming years, and the weightings of this continuous co-construction might change.

2. AI vs AIs There is, of course, no single [human] need, just as there is no single concept of AI. Both are complex and contextual, and yet they must consistently interact. One current challenge in these relationships is that many recent advancements in AI are mainly based on machine learning approaches, which routinely rely heavily on models and conceptualizations in which needs do not play a central role. We suggest that AI developers who intend to identify, address, or co-produce needs with/for humans (e.g., through HCAI methods) can benefit from integrating needs into both their design (such as identifying which needs they intend to address) and their architecture (such as complementing regression-based ML techniques with necessity and sufficiency analyses). This is not new to AI either: the AI pioneer Judea Pearl (see e.g., [20]) proposed formulas for calculating the probability of necessity and the probability of sufficiency (restated below), but exploring the role of needs has been overshadowed in recent years by mainstream approaches. AI is an emerging and evolving field; what is meant by AI, and how it is practised, can differ across domains, contexts, application areas, and times. We believe that re-introducing needs to AI can contribute to developing variants of AI that can better meet individual, organizational, and societal needs.
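For reference, Pearl’s two quantities can be written as counterfactual probabilities. The reductions on the right-hand sides to purely observational quantities hold only under the additional assumptions of exogeneity (no confounding) and monotonicity, as developed in the causal inference literature around [20]; we restate them here, with \(x'\) denoting the absence of \(x\):

\[
\mathrm{PN} \equiv P\big(Y_{x'}=0 \mid X=1,\, Y=1\big) = \frac{P(y\mid x)-P(y\mid x')}{P(y\mid x)},
\qquad
\mathrm{PS} \equiv P\big(Y_{x}=1 \mid X=0,\, Y=0\big) = \frac{P(y\mid x)-P(y\mid x')}{1-P(y\mid x')}.
\]

Read in our terms: PN asks how probable it is that a need would have gone unmet without a given satisfier, and PS asks how probable it is that the satisfier would meet the need where it is currently unmet. Without such assumptions, only bounds on these quantities are identifiable from observational data.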

3. Human-awareness and other HALE dimensions In parallel to the increasing application of AI in our personal lives, workspaces, cities, and societies, many concerns regarding its [potential] negative consequences have been raised. Making and keeping AI human-aware, accountable, lawful, and ethical (HALE) is crucial for the future of our societies. Since need is a fundamental construct that is well connected with the different HALE dimensions, it is difficult to imagine advanced HALE AI without considering needs. In particular, human-aware AI has to address different interconnected and overlapping aspects of humans’ sociocognitive dimensions (see [3]), i.e. the cognitive and social (collective and contextual) dimensions represented in Fig. 1. When considering needs for AI, needs through AI, needs by AI, and needs with AI, many questions regarding needs and needs satisfaction in relation to these cognitive, collective, and contextual dimensions need to be answered. Our proposed call for the realisation of needs-aware AI will push us to investigate these dimensions in more detail. This can not only lead to a better understanding of [human] needs but can also provide a solid basis for the development of human-aware AI as a necessity for our societies.

Fig. 1 Interconnected and overlapping cognitive and social (contextual and collective) dimensions of human-aware AI systems (adopted from [3])

4. Recent scientific and technological advancements While need is an old concept and attempts toward considering needs in AI systems are likewise not new (see above), recent advancements in disciplines such as cognitive science, the sociology of science and technology, and computer science can provide novel concepts, methodologies, and approaches for developing innovative needs-aware AI. This, however, demands in many cases a fundamental rethinking of the conceptualizations and implementations of needs for AI, through AI, by AI, and with AI systems. For example, recent advancements regarding the predictive processing account of cognition [38] might provide useful concepts and approaches regarding one way of realizing needs-aware AI systems–among others. Moreover, in conjunction with these advancements in cognitive science, AI (and AI-related technologies) is becoming increasingly commonplace in people’s lives. From IoT devices feeding data to ML algorithms that in turn shape people’s behaviour [28], to self-driving cars and AI-supported medical decision aids, the expansion of AI into the lives of people requires a renewed focus on how AI systems can co-produce and co-address needs of diverse varieties–creating AI that serves human needs. As a consequence, we propose, both need sciences (i.e. disciplines that study needs) and AI have advanced enough in recent years to construct novel enabling spaces [39] for rethinking need-AI relations. This rethinking can bring together human-centric/aware constructs of needs and align those to the machine-implemented tasks of AI.

5. Collective needs and digital sustainability Needs are not limited to individuals. Teams, organizations, communities, and societies all have needs as well (e.g., [40, 41]). These collective needs are each, and together, relevant to the design and implementation of AI systems. Identifying needs at all levels, finding solutions to needs (plural, at different levels), and not sub-optimizing decisions related to one need (at one level) at the expense of another need (possibly at another level) all have to be considered in order to achieve digital sustainability. That is, the needs of the individual must be considered in relation to the needs of the society, and vice versa, across multiple levels. The use of natural resources is just one example where societal needs (for instance, associated with clean energy and climate change) must be considered in relation to the development of AI systems (where training a single large language model can require enormous amounts of electricity, while the outputs of the model may assist many individuals in meeting a personal need).

6. Needs and the AI technoscience Today, from mathematics and cognitive science, to economics and cosmology, AI plays a fundamental role by generating both data and the means for understanding many scientific findings. For instance, in cognitive science the nature of cognition and the interactions of [cognitive] systems are increasingly explored through AI-supported research. In this sense, AI is not only an engineering approach but also a partner in many scientific disciplines. The notion of AI technoscience might capture the integration and co-construction of both AI as a technology and AI as a science. AI as a technoscience can strongly support the current efforts towards understanding the nature of [human] needs and needs satisfaction (see e.g. [38, 42] as basic attempts in this direction). For example, AI-based simulations can not only be used as means for the evaluation of emerging perspectives, but can also inform such perspectives, or even inspire new perspectives towards needs.

3 Drivers and enablers

Current successes (such as with deep learning and very large language models) will likely lead AI developers to continue down the paths they are on—largely, adding more data and harnessing more powerful computing resources to improve results. But there are other aspects and advancements that we also believe can/will push AI developers to look to needs as a tool for building AI systems that are increasingly useful and valuable to people.


1. The emerging sociotechnical imaginaries of needs-aware AI Sociotechnical imaginaries, the visions or values related to a technology that are shared or common among the members of a community or a society, can influence how technologies are realized in practice (see e.g. [43,44,45]). In this respect, sociotechnical imaginaries are important non-human actors of co-production. Generally, we contend, people are growing to expect more out of future AI than they expect today. From a human-centric perspective, many individuals expect AI to satisfy needs, as Kai-Fu Lee and Ben Shneiderman already suggest; including needs at different layers and levels (from individual needs to collective, organizational, and societal needs) as well as needs associated with a variety of physical, psychological, technical, and economic aspects of our lives. These growing expectations can be framed as emerging sociotechnical imaginaries (within different communities and societies) regarding needs-aware AI systems. We believe that these imaginaries are already pushing, and can/will continue to push, for new conversations and demands about the roles of AI–and thereby needs.

Integrating needs into AI (i.e., for AI, through AI, by AI) can empower [46, 47] people in their relationship with the AI systems of the future. From developers to end-users, needs-aware AI could better partner with people to address needs. Likewise, with needs integrated into AI systems, organizations (public and/or private) could in the future better prioritize and target resources based on formalized needs rather than today’s reliance on assumptions (e.g., those living in poverty must “need”...) or ascriptions (e.g., you “need”...). Equally, societies could benefit from the additional insights and guided actions of people and organizations with needs-aware AI systems. This is not a naive position: since AI co-produces needs with people, we suggest that it is especially important for a diverse array of people (including needs scholars, practitioners, and others) to get out ahead of mainstream AI developments in terms of how needs will be defined, prioritized, and measured. This approach empowers humans, organizations, and societies to set the needs agenda for, through, by, and with AI.

Moreover, as discussed above, the AI technoscienceFootnote 15 (and the imaginaries of AI as a technoscience) can be very helpful in the study of needs. For example, AI can potentially help public health researchers and policymakers identify and prioritize among competing needs within the communities they serve. This human-machine partnership can be an important driver for the development of needs-aware AI as a scientific endeavour.


2. Calls for Sustainable HALE AI In recent years, ethical concerns and debates regarding the development of AI systems have been frequent and intense. Besides the call to develop ethical AI systems (however they are defined), there are parallel interdisciplinary attempts regarding the sustainability, human-centricity, accountability, and lawfulness of AI systems (on industrial, academic, and sociopolitical levels). Privacy, fairness, trustworthiness, transparency, understandability, controllability, explainability, and many other aspects of AI systems have been widely discussed. Yet, when it comes to practical means for implementing Sustainable Human-centric Accountable Lawful and Ethical (Sustainable HALE) AI systems, the communities have much less to offer than conceptual ideas or policy documents.

From our cognitive systems to our values, from our responsibilities to our rights, needs are one of the most fundamental aspects of human worlds, playing direct or indirect roles in shaping concepts and constructs that are important for the realization of Sustainable HALE sociotechnical systems. Therefore, we suggest that rethinking needs by, for, and through AI can fundamentally change our basic framings, assumptions, and conceptualizations, and consequently our policies, laws, roadmaps, guidelines, standards, frameworks, approaches, and solutions. This is particularly essential when (and if) we embody pluralist and inclusive positions that take each and every individual into accountFootnote 16. Every individual (person, organization, society, system, etc.) might satisfy their needs differently. The ultimate integrated and well-functioning X-centricities (e.g. joint human-centricity, ecology-centricity, society-centricity, etc.) and a higher level of non-discrete sustainability, accountability, lawfulness, and ethicality can only be achieved if needs, among others, are taken into account.


3. AI won’t [necessarily] let us wait The rapid advancement of AI and computing technologies over the last decade is putting pressure on researchers and practitioners in many other fields (from medicine and education to political science and zoology) to consider whether they are prepared for how AI will influence their work, and how their work may influence AI development. From quantum computing to brain-computer interfaces, the technology is moving fast. From the digital humanities to social work, the capabilities of AI to inform and guide decisions are touching almost every field and discipline.

AI researchers are already actively applying the concept of needs in their work, though almost always without recognition, definition, or clarification of what needs are within their context. As just one example (out of many), [50] suggests “[i]n an ideal world, ConvAI [conversational AI] technology would help us build LUIs [language user interfaces] that allow users to convey their needs as easily as they would with other people.” Yet, as this example illustrates, it is often assumed in AI research that, among others, (i) people can readily distinguish their needs (i.e., what is necessary) from their wants or desires, (ii) people can easily (and in a manner understandable to AI) express their needs, and (iii) individuals’ needs should be treated as paramount in relation to those of other people, organizations/groups, or even societies. Moreover, (iv) it is assumed that AI can easily assess what can satisfy a specific individual’s need(s) in a specific context. Likewise, (v) how ConvAI should, in this example, act differently based on people’s perceived needs versus their other requests is an ignored aspect that requires further consideration by both AI researchers and the communities that will interact with the AI systems in the future. Nevertheless, this lack of complexity in how needs are considered and addressed in these early stages of AI research will, we suggest, set the precedent for how (or if) needs are dealt with in future AI. In other words, if needs are not better defined and addressed soon, then assumptions and ascriptions about the needs of others will dominate in AI development.


4. From digitization to digital transformation Giving humans (i.e. users, customers, citizens, employees, etc.) a central role, and considering their needs and values while digital sociotechnical systems (including AI systems) are co-produced, is a fundamental distinguishing factor between digitization (which refers more to the improvement of processes and efficiency) and digital transformation (DX), which focuses more on [humans’] needs, values, and experiences [51, 52]. The last decades witnessed the impactful waves of digitization. In recent years, going several steps further, governments, companies, organizations, and communities are increasingly investing in digital transformation. As a result, it is commonly accepted that digital transformation is going to fundamentally change our lives, relationships, perspectives, economies, political systems, science, and societies.


Needs–at different levels and dimensions–are among the most important aspects of digital transformation as a sociotechnical program. Needs can inform our digital transformation strategies, provide a basis and assessment criteria for our digital transformation policies, inform our digital transformation ethical frameworks, and play a fundamental role in the real-world and applied co-creation of future sociotechnical systems. It is hard to imagine Sustainable Human-centric, Accountable, Lawful and Ethical digital transformation (Sustainable HALE DX [13]) without rethinking needs into the development of digital sociotechnical systems.

Understanding and meeting diverse and contextual needs and values through the provision of adaptive and personalized servicesFootnote 17 is among the most common expectations of DX outcomes. AI is the most promising candidate technology to fulfil such expectations. Therefore, we argue, re-thinking needs into AI can contribute towards better practices of digital transformation. Looking at it from the other side, the increasing demand for DX, we suggest, is an important enabler for re-thinking needs into AI, since it makes the existing gap more apparent than ever.

Besides these, it is worth emphasizing that the significance and ethical requirements of needs (both subjectively and objectively) in these increasingly impactful uses of AI are important to get right: among others, “[b]ecause infrastructural technologies undergird systems of production, they begin changing societies and a people’s way of life by transforming the nature of work. The change occurs on two fronts: what people do for a living and how people do what they do” [53]. In parallel, AI has the potential to reinforce, change, or even dismantle the structures of “power” (social, political, economic, interpersonal, etc.) in almost any group, organization, or society. Even if only a fraction of AI’s potential impact on our societies, communities, and individual lives comes true in the next decade or two, we believe these possibilities are drivers for deliberate and systematic efforts to establish the foundations for needs-aware AI.


5. Our imaginaries How we imagine needs-serving AI is another likely driver toward needs-aware AI. Just as movies like The Terminator and The Matrix have shaped public perceptions of AI to date, the imaginaries available to people for visualizing AI in the future are also important to what technologies eventually get developed for, through, by, and with AI. What roles do we want for AI? What role should AI have in helping us identify and prioritize our needs? What needs do we want AI to help us satisfy? Should AI help us strike balances between individual, organizational, and societal needs? Answers to these questions should also be part of the conversations that shape the next generation of “imaginaries” of AI–imaginaries that can be positive (e.g., not dystopian) without being naive.

Just as we don’t want those making the laws or regulations also profiting from the policies they are creating, striking an appropriate equilibrium between needs-aware AI (i.e., AI that is informed by human needs in design, implementation, and evaluation) and AI that is trying to satisfy specific human needs is essential. Given that humans and AI now co-produce needs, the boundaries of these relationships must be understood and guidance put in place to reduce the risk of human manipulation (based on needs) or other threats. Additionally, satisfying one need routinely equates to not satisfying some other needs. Making these determinations is about considering the context and priorities of the various needs involved, and those trade-offs are both challenging and potentially lucrative. This is true both in prioritizing needs and in selecting “satisfiers” (i.e. “which needs” through/by “which satisfier”).
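To illustrate how quickly such trade-offs become concrete engineering decisions, the following Python sketch frames “which needs through which satisfier” as a toy budget-constrained selection problem. The satisfiers, utilities, costs, weights, and the greedy rule itself are all illustrative assumptions rather than a proposed mechanism.

```python
# Toy satisfier selection under a budget: "which needs through which satisfier".
# All numbers are invented for illustration; the weights encode contested
# priorities between needs, which is exactly where trade-offs become value-laden.
satisfiers = {
    # name: (cost, {need: contribution to that need's satisfaction, 0..1})
    "community kitchen": (3.0, {"nutrition": 0.6, "social connection": 0.4}),
    "meal delivery":     (2.0, {"nutrition": 0.8}),
    "social club":       (1.5, {"social connection": 0.7}),
}
weights = {"nutrition": 1.0, "social connection": 0.8}
budget = 4.0

chosen: list[str] = []
while True:
    # Greedy rule: pick the affordable satisfier with the best weighted
    # contribution per unit cost. Real systems would need far richer models
    # (diminishing returns, interactions between satisfiers, fairness, ...).
    candidates = [
        (sum(weights[n] * v for n, v in needs.items()) / cost, name, cost)
        for name, (cost, needs) in satisfiers.items()
        if name not in chosen and cost <= budget
    ]
    if not candidates:
        break
    _, best, cost = max(candidates)
    chosen.append(best)
    budget -= cost

print(chosen)  # ['meal delivery', 'social club'] with the numbers above
```

Every quantity in this sketch (costs, contributions, weights) smuggles in a definition and a measurement of needs, which is why we argue those definitions must be made explicit and contestable.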

How is the latter discussion on the complexity of needs-aware AI related to the former reflection on imaginaries? Here, we would like to point to our ignorance about the future and the complexity of the world we are living in. While we advocate an active discussion and assessment of our imaginaries regarding needs-aware AI, we should also be aware that the future world will not be exactly as we imagine it. In other words, sociotechnical imaginaries are important actors that contribute to the co-creation of needs-aware AI; however, we should not be deceived by our imaginaries—forgetting our ignorance. That is why taking HALE considerations into account is so crucial. At times, calling for a slower co-construction of technologies, one that gives us more time to reflect, discuss, and manage our potential mistakes, even seems a wise position. It is hard to manage all—sometimes competing—actors involved in the development of AI, but it is not impossible to at least do our best to be actively and impactfully involved in the co-creation of future AI.

4 Gaps and barriers

One of the first steps towards rethinking needs in AIFootnote 18 is to identify the existing challenges that limit the use of needs by, for, through, and with AI development today, or that make such use difficult. There are, of course, many gaps and barriers; we reflect here on a short list of some of the most impactful ones (from our perspective) that should be considered initially—and weighed regularly against others, as well as against emerging gaps or barriers.


1. Defining need and needs: a historical challenge What a need is, what [human] needs are, what the potential categories or classes of needs are, what relationships (or potential hierarchies) exist among needs, and how needs can be satisfied have been topics of inquiry since the time of the ancient philosophers [54]. In the last century, other disciplines diversified the discourse (see [6, 55]), but no common answers, definitions, or agreement exist within or across disciplines. Here is a small sample of the variety of definitions from across disciplines:

  • Things without which someone will be seriously harmed or else will live a life that is vitally impaired [56]

  • A particular category of goals which are believed to be universalisable [57]

  • Objective deficiencies that actually exist and may or may not be recognized by the person who has the need [58]

  • Measured discrepancy between the current state and the desired one [59]

  • Gaps in results [60]

  • The necessary conditions and aspirations of full human functioning [61]

  • Nutriments that must be procured by a living entity to maintain its growth, integrity, and health (whether physiological or psychological) [62]

From philosophy to economics, and medicine to psychology, needs are examined through diverse lenses, leading to little agreement on what should (and thereby what should not) be classified as a need (or whether we can even know our needs at all; [11]). For example, economists routinely view needs through the lens of income elasticity of demand (see [63]), whereas psychologists typically focus on an individual’s motivation derived from their needs (see [64, 65]). In health care alone, there are at least five interpretations of what needs are [66]. We do not attempt to seek a solution to this barrier here; rather, we simply want to acknowledge that if propositions such as [in the future] “AI will learn to serve human needs” [2] are to become a reality, then we must continuously try to actively construct joint basic understandings or working definitions of what needs are so that AI can assist in meeting them. We also do not suggest that a single universal definition is required, or perhaps even desirable, but rather that the challenge of presenting coherent and valuable [working] definitions for distinct use cases, and ways of dealing with diverse (and even disagreeing [67]) definitions, must become a priority for needs scholars regardless of their discipline. The capacity of AI systems to help us meet our needs is contingent on our ability to first determine how those systems define and measure needs.

We also recognize that need is not an isolated concept; that is to say, needs are usually closely associated with values, rights, desires, wants, preferences, motivations, goals, and other constructs that contribute to shaping our daily perceptions and decisionsFootnote 19. Likewise, need is routinely framed and defined from a human-centric perspective. Nevertheless, other systems have needs as well, including animals, environments, and machines. Even from a human-aware perspective (see Fig. 1), understanding the different cognitive, collective, and contextual dimensions of needs and needs satisfaction is still an open challenge.

As a result, it is challenging to model needs-aware AI systems—in particular cognitivist ones [16]—without considering other related constructs, which likewise adds to the complexity. Moreover, needs are often fluid along multiple dimensions among these concepts as well, which can make the construction of needs-aware systems a matter of challenging interpretation and context-based distinction between needs and the other constructs. For needs-aware AI, we also have to “translate” this human concept of need into machine language (or dynamics), requiring the integration of human-centric and machine-centric processes (a truly interdisciplinary task). This complexity should not, however, dissuade us from the task; rather, we should leverage the capabilities of current technologies to assist us in recognizing and benefiting from the complexity.


2. Needs vs satisfiers Needs are routinely considered an implicit construct, sometimes informally even defined by “I know it when I see it” criteria, although some explicit measurable definitions are also available [41]. Satisfiers, however, are considered explicit; for example, a specific food or liquid that is necessary and sufficient in a specific context (time, location, environment) for a specific person “in need”. Based on this perspective, need satisfaction—as an action—refers to the process of satisfying one or more needs. Clearly, satisfiers can be specific objects (e.g., a specific food), specific actions (e.g., talking with a friend in a specific context), or a combination of objects, environments, and actions (e.g., a specific party).

What satisfies a need should not, however, be confused with the need itself. Humans—as embodied cognitive systems enacting in their environments—normally satisfy a set of needs simultaneously while interacting with (i.e., living in) their environments. Understanding and applying the distinction between needs (implicit and potential), satisfiers (normally explicit and realized), and need satisfaction (normally an explicit, realized process, or ‘explicitizable’) can be a challenging task for the co-development of needs-aware AI systems. Whether needs, satisfiers, and need satisfaction should be modelled or represented (in an AI system) as constants, variables, functions, processes, decisions, actions, or even as an overall system’s dynamics or states (e.g., in line with some of the dynamical models or enactive approaches in cybernetics, systems engineering, or cognitive science) is a challenging philosophical, scientific, and engineering question. Moreover, it is important to consider the potential perspectives, or interpretations, involved. For example, by observing someone’s behaviours, an AI system (from a third-person perspective) might infer that the person is satisfying a specific need, while the person might not have the same opinion or experience (from a first-person perspective). The same holds for the AI systems, scientists, and experts involved, as well as for different individuals, organizations, or societies. To add to this complexity, these relationships of needs and satisfiers are continuously co-produced through human-AI interactions (see above).

Given the conflicting definitions of needs—and the confusion of terms related to needs—an associated barrier to introducing needs at various phases of AI development and implementation is the limited means and metrics for measuring, ‘explicitizing’, utilizing, ‘enactizing’, and evaluating needs and needs satisfaction. Since 1) we are not consistent in defining what is a need (and what is not) and 2) the application domains and contexts vary, we rely on varied measures and utilization mechanisms regarding needs and needs satisfaction, from very subjective to fairly objective measurements and mechanisms. And without measures and utilization mechanisms it is difficult for AI systems to assist with identifying needs, prioritizing needs (within and across multiple levels—such as individual, organizational, and societal needs), defining what is required of potential activities to satisfy needs, or evaluating when needs have been met. Each of these, and others, would represent valuable ways that AI could help serve human needs.
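As a small illustration of what even a crude measurement-and-prioritization mechanism entails, the sketch below operationalizes the discrepancy view of needs (cf. the “measured discrepancy” [59] and “gaps in results” [60] definitions above) as weighted gaps. The normalization, weights, and ranking rule are illustrative assumptions, not proposed metrics.

```python
from dataclasses import dataclass

@dataclass
class NeedAssessment:
    """One measurable need, following the discrepancy view: a gap between
    the current and the desired state, on a normalized 0..1 scale."""
    name: str
    current: float   # measured current result
    desired: float   # desired/required result
    weight: float    # stakeholder-assigned importance (a value judgement)

    @property
    def gap(self) -> float:
        return max(0.0, self.desired - self.current)

def prioritize(assessments: list[NeedAssessment]) -> list[NeedAssessment]:
    """Rank needs by weighted gap -- one simple, contestable rule among many."""
    return sorted(assessments, key=lambda a: a.gap * a.weight, reverse=True)

ranked = prioritize([
    NeedAssessment("nutrition", current=0.5, desired=0.9, weight=1.0),
    NeedAssessment("housing",   current=0.8, desired=0.9, weight=1.0),
    NeedAssessment("literacy",  current=0.4, desired=0.7, weight=0.5),
])
print([a.name for a in ranked])  # ['nutrition', 'literacy', 'housing']
```

Who sets the weights, who measures the levels, and from whose perspective are precisely the open questions raised above.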


3. The dominance of needs-blind AI Today, we do not expect that the AI systems being developed at Meta (formerly Facebook), Google, Microsoft, OpenAI, Baidu, or elsewhere will necessarily have any direct knowledge of our needs. We know that some of these technologies might assist us in meeting our needs, but this is often an unintentional secondary benefit—not a direct result of our needs being an ‘input’ into their computations (or designs). There are, after all, no technological, legal, or even ethical frameworks or guidelines at this time that could rigorously facilitate AI serving human needs if we suddenly decided to expect such direct benefits (i.e., not just as a potential by-product). We recognize that HCAI approaches are a positive move toward AI that helps meet needs (and will be a closely related partner in the development of needs-aware AI). Nevertheless, the HCAI research literature often refers to human needs (see, for example, [21, 68]) without any definitions, deep reflection, or applicable conceptualization of what those needs are, how they would be measured, how they would be satisfied, or how AI developers would know whether needs were met. Rather, HCAI approaches routinely seem to imply that AI developers will consistently and correctly recognize the needs of other people (e.g., future users or beneficiaries) through conversation or observation—which is simply inaccurate and unrealistic (as has been the experience of development economics, where a similar approach has been tried for identifying the needs of those living in the poorest countries of the world). Without any tools for the systematic assessment of needs, at this point in time commercial success dominates what we (as individuals, organizations, and societies) expect from AI systems. This can change, but it will require effort; and we propose that re-introducing needs into those change efforts is essential.

From a societal perspective, we also recognize that in some (if not most) of the existing socioeconomic and power-related structures and institutions, AI development has a tendency to create wants-serving machinery. We are well aware that tinkering with the technology alone is not enough to reverse this trend, and AI itself might even conceal the structural dynamics, power relations, etc. that reinforce these structures. A potential barrier towards the development of needs-aware AI could, therefore, be needs-blind socioeconomic perspectives and structures within our institutions. Some of these perspectives or structures derive from not recognizing the potential of/for needs-aware AI; and some are shaped by a potential conflict of interest with the realization of needs-aware AI. We hope that this article, and future contributions by others, can lead toward the development of imaginaries and perspectives that enable a shift in our socioeconomic perspectives and structures towards needs-awareness (in AI and beyond).


4. Missing needs in guidelines, standards, regulations, and policies Technologies are co-created by different human and non-human actors. Guidelines, standards, regulations, and policies are important non-human actors [69, 70] in the development of any technology, including AI. For many AI-related concepts (e.g., privacy, bias), there are emerging legal requirements, ethical frameworks, or policy mandates that guide AI developers in making decisions that lead to improved (from a societal perspective) AI systems. We propose that similar efforts to provide valuable guidance to AI developers, based on what our societies collectively want from AI, would be beneficial for the further introduction of needs.

Needs cannot, however, just be “window dressing” on policies, frameworks, or the like; needs must be integrated into the fabric of what we want for, through, by, and with AI systems. For instance, we cannot just say that AI developers should assess needs, but go no further into (i) what needs are–and are not, (ii) how needs are prioritized, (iii) how needs will be assessed, measured, and their satisfaction evaluated, or (iv) how societal needs will be balanced with those of individuals. These and other questions must be considered, debated, and revised as we move forward to develop AI systems that have the capacity to help meet our needs. Moreover, considering needs and needs-aware AI systems within the network of actors and constructs is critically important to co-constructing guidelines, standards, regulations, and policies that consider not only needs and needs satisfaction, but also other related (and interrelated) constructs such as privacy, agency, rights, values, etc. (all together).


5. Need-community: a non-existent reality From an actor-network perspective, as well as a co-production point of view, communities play an important role in the co-realization of sociotechnical digital systems (including AI systems). Surrounding needs (in engineering, the sciences, the humanities, the social sciences, etc.), however, no global community exists, and the existing small communities have few connections with one another. Like many topics of research and discussion, needs are often debated within the silos of individual disciplines or speciality areas. Those in philosophy debate needs in isolation from those debating needs in public health or political science. Psychology examines needs in the context of human motivations, whereas the field of human/organizational performance measures needs as gaps between current and desired results. Likewise, while authors like Pearl [20] write about needs (necessity and sufficiency) in computer science, each of these conversations is disconnected from discussions of needs in social work and education. For AI to assist in meeting needs, these (and other) communities of scholars and practitioners must come together to collaborate across silos—which also means collaborating across epistemological divides [71] or disagreements [67].

These initial gaps we have identified are also not isolated systems acting on their own. They are responding to each other, co-creating new gaps, and gaining in complexity. This presents us with unique opportunities, at this particular point in time, to re-introduce needs into AI. While we believe that there is, fortunately, wind in the sails of this effort to re-introduce needs into AI, we also recognize that there are threats that will challenge these efforts.


6. AI manipulation As we discussed earlier, there is no doubt that technologies and humans have been co-producing each other throughout the history of Homo sapiens. While our technologies have been very useful, and might even be used for knowledge generation, they have not had much knowledge about us (if any)Footnote 20. However, digital technologies in general (and AI systems in particular) are increasingly able to construct different types of knowledge about the humans (and other actors or systems) they interact with. Needs-aware AI could, in the future, possess vast knowledge about what we need and how we meet our needs—more than any other technology. And scientia potentia est (i.e. knowledge is power), in particular if we consider the ubiquitous application of ambient intelligence as everyware. As a result, a potential threat of needs-aware AI systems, if they are not co-constructed and managed in a sustainable HALE mannerFootnote 21, is their potential power to manipulate humans—and other systems, from groups and organizations to societies—at their very core, i.e. their perceptions of their needs and need satisfaction.

When referring to individual humans (or individual entities in general), it should also be considered that needs matter deeply to people, and that in efforts to meet their needs people (or entities) can find themselves in vulnerable positions. The threat of humans being manipulated based on their needs (e.g., need X will be met but only if you do Y, or your need is not ‘A’ but rather it is really ‘B’) is real. Today’s social media companies already do something similar (intentionally or unintentionally), with people routinely forfeiting some of their privacy to access content that meets their social “wants” (which many people perceive as “needs”). As we increase our understanding of how humans both identify and satisfy their needs (especially using digital technologies), it will become increasingly important for needs to be integrated into policy, regulatory frameworks, and sociotechnical standards and guidelines that help protect human agency and other rights. Those (people or machines, individuals or institutions) who define needs have both implicit and explicit power—and yet that power is rarely recognized, since it is routinely lost in the common usage of the term. From needs-mining to algorithms that price access to water or health care, numerous new areas with the potential for manipulation or overreach of power (e.g., defining what is necessary and limiting alternatives) are being created all the time.


7. Stalling A final threat is the potential of not doing anything related to AI and needs because we do not know how to do it. It is easy for challenges like those we are discussing here to overwhelm our capacity to plan and act, since none of this is easy. But we posit that deciding not to act (even if acting is just starting conversations with colleagues about issues of needs and AI) would be a regrettable choice. AI is continuing to develop every day, with literally more than a hundred new papers being shared most days on arXiv.org alone. If the professional communities across multiple disciplines don’t come together, or wait too long to begin these conversations, then AI developers will answer many questions about the role of needs in AI on their own (and many of those answers will be hard to change later).

5 What comes next?

In this article, we have attempted to make the case for why we should integrate (conceptually, computationally, and systemically) needs for AI, through AI, by AI, and with AI. There are, of course, other considerations, barriers, and enablers beyond our limited list here; nevertheless, our goal in making this case is a “call to action”. For needs to be assimilated into the future of AI, we (i.e., needs scholars and practitioners, AI researchers and developers, policymakers, and other actors) must begin our efforts to ensure that needs are not left out of AI. In this vein, we propose the following:


1. Reconstruct the concept of "need" Deciding how we [choose to] describe (or define) needs is a necessary step toward clearer conceptualizations and operationalizations of needs (and needs satisfaction). This can help us co-construct novel, transparent, and applied measurementsFootnote 22 of needs and needs satisfaction. We suggest that if AI developers are to utilize needs in the [co-]design and implementation of AI systems (in order for those systems to help meet needs), then we must begin here as our foundation.

A common set of [working] definitions of what needs are, we believe, is essential to creating AI systems that are needs-aware. These definitions may, in the end, vary by context: a health care AI might use one variant and a criminal justice system another. Alternatively, through application, one or two operational definitions of needs may prove most useful and productive for AI systems. In either case, a collaborative interdisciplinary approach may create a continuum from needs-blind systems (i.e., those that disregard needs in design and/or implementation) to needs-based systems (i.e., those that prioritize needs as the most central aspect of design and implementation)—with needs-aware describing all systems on the continuum that are not needs-blind, as sketched below.
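To make the continuum concrete, here is a minimal sketch (in Python) of how a design team might record where a system sits on it and which working definition of "need" it uses. All names, levels, and the example definition are hypothetical illustrations, not constructs proposed by this article:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class NeedsAwareness(Enum):
    # Hypothetical levels along the continuum described above.
    NEEDS_BLIND = 0      # needs disregarded in design and implementation
    NEEDS_SENSITIVE = 1  # needs inform some, but not all, design decisions
    NEEDS_BASED = 2      # needs are the most central design consideration


@dataclass
class AISystemProfile:
    """A record describing how (and whether) a system operationalizes needs."""
    name: str
    context: str                     # e.g. "health care", "criminal justice"
    awareness: NeedsAwareness
    needs_definition: Optional[str]  # the working definition of "need" in use, if any

    def is_needs_aware(self) -> bool:
        # Per the continuum, "needs-aware" covers every system
        # that is not needs-blind.
        return self.awareness is not NeedsAwareness.NEEDS_BLIND


# A context-specific working definition, as the text anticipates:
triage_assistant = AISystemProfile(
    name="triage-assistant",
    context="health care",
    awareness=NeedsAwareness.NEEDS_BASED,
    needs_definition="gap between a patient's current and required condition",
)
assert triage_assistant.is_needs_aware()
```

One virtue of making the position explicit, rather than leaving it implicit, is that a design team would be obliged to state (and document) the operational definition of need it is working from.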


2. Create communities of action Progress on integrating needs for, through, by, and with AI depends on establishing broad interdisciplinary communities that take action. Some of the required actions are traditional academic endeavours, such as writing articles, creating discussion forums, writing blogs, holding conferences (and conferences within conferences), teaching about needs in courses, and of course conducting rigorous research. Others include supporting the development of AI literacyFootnote 23 in the social sciences and humanities so that the next generation of students, scholars, and practitioners is well versed in the technical and social aspects of these conversations. The interdisciplinary communities, however, cannot remain isolated in academia; they must include public and private sector partners who can envision the positive roles of needs-aware AI as well.

The development of commercial AI products is one avenue for engaging broader audiences in the design and implementation of AI systems (e.g., psychologists and Woebot). An ongoing dialogue on the roles of needs in AI is another path that can bring people with diverse perspectives into the AI community: to help guide ethical considerations related to needs, to co-construct the imaginaries (or scenarios) that illustrate potential relationships between humans and AI systems, to examine the policy implications of AI co-production and co-addressing of needs, to develop multi-level measurement of needs and needs satisfaction, and to address many other essential topics. From philosophers to social workers, and medical doctors to educators, the topic of needs can enlarge and diversify the community of AI researchers, designers, and developers.


3. Integrate needs into ethical AI frameworks We propose that a critical step towards needs-aware AI is to introduce needs into AI ethical frameworksFootnote 24. Currently, multiple ethical frameworks are being proposed and debated (including, for example, those from the EU, USA, and China), none of which integrate the construct or measurement of needs into their design. The development of these initial frameworks lays the foundation for future improvements, so missing the current opportunity to introduce needs for, through, and by AI will only make them more challenging to introduce later; thus, this also has to be a priority for needs and AI scholars and practitioners.


4. Rethink needs-awareness in socioeconomic perspectives and structures Lastly, our socioeconomic perspectives and structures (from business models to power structures, from public institutions to government policies) would benefit from needs-awareness [in AI and other sociotechnical systems]. Many have actively avoided introducing needs into their models, dialogues, and decisions because the complexity of needs was beyond their capacity at the time. Others might even have conflicts of interest with the realization of needs-aware sociotechnical systems [including needs-aware AI]. Nevertheless, emerging advanced technologies now enable us to consider needs for, through, by, and with AI—finding and benefiting from patterns in data that were not accessible before.

Needs-aware AI will, nevertheless, also require new approaches to ethics, regulations, and the appropriate use of the "power" that comes with knowledge of needs (i.e., of what is really necessary). Integrated and interdisciplinary rethinking of needs, and of how the construct permeates our individual, group, organizational, and societal decision-making, is vital, and will have to be a substantial component of any movement towards needs-aware AI. These efforts will provide the broad structural supports required for the hard work of developing AI that serves needs.

6 Conclusions

For those of us who study needs (including philosophers, ethicists, educators, social workers, cognitive scientists, etc.), the recent developments in AI are creating pressure to move more quickly in our debates and deliberations; otherwise, we might find that our efforts come too late to influence the future of AI and our society. Market opportunities, scientific inquiry, and practical wants/desires are each pushing some of us (i.e., needs scholars and practitioners) to come to terms with what needs are and how AI might serve human needs. As with raising a child, if ethics (including the concept of needs) is not integrated early on, it is much harder to add it to their knowledge base and character later.

This is not to suggest that a needs community has to arrive at a single universal approach to needs in AI; that is likely neither possible nor desirable. Rather, interdisciplinary communities can (and should) soon create active resources and partner with AI developers, so that AI developers can get guidance on how to bring the construct of needs into their work, and needs scholars and practitioners can learn more about how AI can help advance our understanding of needs. In other words, needs scholars and practitioners are expected to become influential actors in the co-construction of AI systems, before it is too late for these relationships to have substantial influence or impact.

If, as we propose, needs are essential to a future of AI that adds practical value to the lives of people, then a pragmatic approach to integrating needs will most likely be found, perhaps even through brute force: trying many, many different ways to estimate needs until a workable approach is found (see the sketch below). This can, of course, be done with or without the ethical, philosophical, and humanistic qualities that are potentially available through broad interdisciplinary partnerships. This timely push, we believe, can serve as the impetus for interdisciplinary collaborations that put needs into the AI of the future. Constructing wide acceptance that re-thinking needs for, through, by, and with AI is essential for our societies can be seen as an important first step. Clearly, answering the many "how" questions ahead (and finding/constructing many more questions) is a matter for intensive joint interdisciplinary collaborations and co-creations, as next steps.
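As an illustration only, such a brute-force strategy could look something like the following sketch, assuming purely hypothetical candidate estimators and a placeholder scoring function standing in for a (still undefined) measure of needs satisfaction:

```python
from typing import Callable, Iterable, List, Tuple

# Hypothetical types: an "estimator" maps observed behaviour to predicted
# needs; a "scorer" measures how well those predictions match some
# (yet-to-be-agreed) reference judgement of needs.
Estimator = Callable[[List[dict]], List[str]]
Scorer = Callable[[List[str], List[str]], float]


def brute_force_need_estimation(
    candidates: Iterable[Tuple[str, Estimator]],
    observations: List[dict],
    reference: List[str],
    score: Scorer,
) -> Tuple[str, float]:
    """Try every candidate estimator and keep the one whose predicted
    needs best match the reference judgements: the 'try many, many
    different ways' strategy in its crudest form."""
    best_name, best_score = "", float("-inf")
    for name, estimator in candidates:
        s = score(estimator(observations), reference)
        if s > best_score:
            best_name, best_score = name, s
    return best_name, best_score


# Toy usage: two naive candidates and an exact-match scorer.
def exact_match(predicted: List[str], reference: List[str]) -> float:
    return sum(p == r for p, r in zip(predicted, reference)) / max(len(reference), 1)

always_rest: Estimator = lambda obs: ["rest"] * len(obs)
always_food: Estimator = lambda obs: ["food"] * len(obs)

print(brute_force_need_estimation(
    candidates=[("always-rest", always_rest), ("always-food", always_food)],
    observations=[{"hours_awake": 20}, {"hours_awake": 19}],
    reference=["rest", "rest"],
    score=exact_match,
))  # ('always-rest', 1.0)
```

The sketch also makes our caution visible: everything of substance in it (the candidate estimators, the reference judgements, the scoring function) presupposes exactly the definitional and measurement work the preceding sections call for.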

We have outlined above the major gaps, barriers, enablers, and drivers for needs (as a specific construct that can be described, utilized, enacted, measured, and distinguished from other constructs) in the development of sustainable HALE AI. We have done so in hopes of igniting an interdisciplinary professional dialogue on the roles of needs, and of jump-starting real-world actions that can assist and guide the future of AI capable of serving human needs: a goal that can't be achieved without first coming to terms with our current lack of knowledge and understanding of our own needs and the needs of others. We hope that in response to this initial attempt to frame future conversations, others from philosophy, ethics, cognitive science (including psychology, neuroscience, cognitive biology, anthropology, etc.), political science, health, and many other disciplines that have been working on needs and ways to assess needs for decades will share their perspectives in this dialogue.

At the same time, we likewise invite AI researchers and developers to engage with us and this topic, so that our efforts can lead to meaningful and impactful guidance and tools for creating future AI systems that reflect our ideals and help us achieve them. Ultimately, we hope that, similar to rights (e.g., human rights) that have become a fundamental aspect of our imaginaries about technologiesFootnote 25, needs also find their appropriate position in our shared visions: we imagine a world in which AI systems are co-created to satisfy [human] needs; we imagine a world in which AI systems are—among other things—planned, funded, designed, evaluated, and judged based on the needs they satisfy. Please join the conversation by talking with your colleagues about needs, integrating needs into your work, and/or contributing editorials or articles about the roles for needs in AI within your professional communities—and beyond.