Abstract
Crowdsourcing holds great potential: macro-task crowdsourcing can, for example, contribute to work addressing climate change. Macro-task crowdsourcing aims to use the wisdom of a crowd to tackle non-trivial tasks such as wicked problems. However, macro-task crowdsourcing is labor-intensive and complex to facilitate, which limits its efficiency, effectiveness, and use. Technological advancements in artificial intelligence (AI) might overcome these limits by supporting the facilitation of crowdsourcing. Yet, for this to happen, AI’s potential for macro-task crowdsourcing facilitation needs to be better understood. Here, we turn to affordance theory to develop this understanding. Affordances help us describe action possibilities that characterize the relationship between the facilitator and AI within macro-task crowdsourcing. We follow a two-stage, bottom-up approach: the initial development stage is based on a structured analysis of academic literature; the subsequent validation & refinement stage includes two observed macro-task crowdsourcing initiatives and six expert interviews. From our analysis, we derive seven AI affordances that support 17 facilitation activities in macro-task crowdsourcing. We also identify specific manifestations that illustrate the affordances. Our findings increase the scholarly understanding of macro-task crowdsourcing and advance the discourse on facilitation. Further, they help practitioners identify potential ways to integrate AI into crowdsourcing facilitation. These results could improve the efficiency of facilitation activities and the effectiveness of macro-task crowdsourcing.
1 Introduction
Artificial intelligence (AI) holds the potential to transform collaborative activities such as crowdsourcing (Griffith et al. 2019; Introne et al. 2011; Kiruthika et al. 2020; Manyika et al. 2016; Seeber et al. 2020). In crowdsourcing, a crowd collaborates to solve a task in a digital participative environment, such as an online platform (Estellés-Arolas and González-Ladrón-de-Guevara 2012). The crowd may be heterogeneous, including individuals from diverse disciplinary backgrounds (Cullina et al. 2015; Dissanayake et al. 2019). When a crowd is dedicated to tackling complex and interdependent tasks collaboratively, the practice is referred to as macro-task crowdsourcing (Robert 2019; Schmitz and Lykourentzou 2018). Macro-tasks are tasks that are difficult or sometimes impossible to decompose into smaller (interdependent) subtasks (Robert 2019). The use of crowdsourcing to address macro-tasks is rarely straightforward and requires a specific skill set and knowledge of the crowd (Schmitz and Lykourentzou 2018). Prominent examples of macro-tasks are wicked problems, which are highly complex and thus require the involvement of many different stakeholders (Alford and Head 2017; Head and Alford 2015; Ooms and Piepenbrink 2021). Global challenges that are very broad in scope, such as the advancement of the sustainable development goals as defined by the United Nations (2015), may be understood as wicked problems of current relevance (McGahan et al. 2021). In response to these problems, existing macro-task crowdsourcing initiatives such as OpenIDEO or Futures CoLab elaborate on sustainability-related improvements and solution approaches (Gimpel et al. 2020; Kohler and Chesbrough 2019).
For macro-task crowdsourcing to realize its potential and tackle such complex problems, structure, guidance, and support are needed to coordinate the collaborating crowd workers (Adla et al. 2011; Azadegan and Kolfschoten 2014; Shafiei Gol et al. 2019). If this need is satisfied through unbiased (human) observation and intervention, it is known as facilitation (Adla et al. 2011; Bostrom et al. 1993). Although facilitation has already been widely analyzed in other contexts such as group interaction (Bostrom et al. 1993), face-to-face meetings (Azadegan and Kolfschoten 2014), and open innovation (Winkler et al. 2020), it has barely been investigated in macro-task crowdsourcing. AI is seen as a system’s ability to interpret and learn from external data to achieve a predetermined goal (Kaplan and Haenlein 2019). With AI now rivaling human performance on text-understanding benchmarks (Wang et al. 2019), new potential arises, especially for text-based applications like crowdsourcing. The high transformative potential of AI gives rise to the question: Can AI support the facilitation of macro-task crowdsourcing? If it can, the quality of crowdsourcing results might be improved. For example, an AI with semantic text understanding could recognize novel or innovative-yet-unrecognized ideas and highlight these as focal points for further discussion within the crowd (Toubia and Netzer 2017). Furthermore, by relieving the bottleneck of labor- and knowledge-intensive facilitation, macro-task crowdsourcing could be applied to more wicked problems.
AI and facilitation may be closely interwoven in macro-task crowdsourcing. Facilitation, in the specific context of macro-task crowdsourcing, requires human facilitators as well as technological support that relieves them of a large variety of burdensome activities (Briggs et al. 2013; de Vreede and Briggs 2019; Franco and Nielsen 2018; Khalifa et al. 2002; Seeber et al. 2016; Winkler et al. 2020). Among other duties, a facilitator is responsible for understanding the problem to be tackled by the macro-task crowdsourcing, motivating and guiding the crowd and its dialogues, and making sense of the outcome. Lately, AI – as one specific technological advancement – has been investigated for its supportive potential (Rhyn and Blohm 2017; Seeber et al. 2016; Tavanapour and Bittner 2018a). AI offers various functionalities, including text mining and natural language processing, that can support macro-task crowdsourcing facilitation. For instance, intelligent conversational agent systems could guide the crowd through the crowdsourcing process (Derrick et al. 2013; Ito et al. 2021) or issue detailed instructions to crowd workers in the form of specific tasks (Qiao et al. 2018). The evaluation of the workers’ contributions could also be drastically simplified by designing appropriate systems that leverage the potential of text mining and natural language generation to automatically generate reports or summaries (Füller et al. 2021; Rhyn et al. 2020). Such AI-augmented facilitation systems could improve human facilitation (Adla et al. 2011; Siemon 2022), (partially) automate facilitation processes (Gimpel et al. 2020; Jalowski et al. 2019; Kolfschoten et al. 2011), or even wholly replace the facilitator with an AI agent (de Vreede and Briggs 2019).
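To make the idea of semantic text analysis for facilitation concrete, here is a minimal, illustrative sketch (our own, not a system from the literature) of how distinctive contributions could be surfaced: each contribution becomes a bag-of-words vector, and the contribution least similar to all others is flagged as a candidate focal point. A real AI-augmented system would use richer semantic models; the sample posts and function names are purely hypothetical.

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector for one contribution."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_by_novelty(contributions):
    """Indices of contributions ordered from most to least distinctive
    (lowest mean similarity to the rest of the crowd first)."""
    vectors = [tf_vector(c) for c in contributions]
    def mean_similarity(i):
        return sum(cosine(vectors[i], v) for j, v in enumerate(vectors) if j != i) / (len(vectors) - 1)
    return sorted(range(len(contributions)), key=mean_similarity)

posts = [
    "reduce emissions with carbon taxes on industry",
    "carbon taxes could lower industry emissions",
    "teach climate literacy in primary schools",
]
most_novel = posts[rank_by_novelty(posts)[0]]
print(most_novel)  # the school idea shares no terms with the tax posts
```

A facilitator could use such a ranking merely as a pointer toward where human attention is most valuable, rather than as an automated judgment of contribution quality.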
Although AI has considerable potential in macro-task crowdsourcing and assisting human problem solving (Rhyn and Blohm 2017; Schoormann et al. 2021; Seeber et al. 2020), there are only a few AI-related contributions in the literature on macro-task crowdsourcing or crowdsourcing facilitation. A holistic understanding of how AI could be applied to facilitate problem-solving in on- or offline groups is missing. However, such a holistic understanding is necessary to guide further research on crowdsourcing and facilitation and inform practitioners as to how crowdsourcing initiatives might be improved. We set out to investigate how AI can enable macro-task crowdsourcing facilitation. Therefore, we pose the following research questions (RQs), which address both the identified lack of research into macro-task crowdsourcing facilitation and the need for a holistic understanding of AI in this given context:
RQ1: Which activities comprise macro-task crowdsourcing facilitation?
RQ2: What action possibilities does AI afford for macro-task crowdsourcing facilitation?
To answer these research questions, we apply a two-stage, bottom-up approach that establishes a theory-driven understanding and then validates and refines it with practical insights. In our approach, we turn to affordance theory (Volkoff and Strong 2017), which is known to help develop better theories in IT-associated transformational contexts (Ostern and Rosemann 2021). Given AI’s high potential to transform crowdsourcing, affordance theory can be seen as an established, suitable, and meaningful lens to theorize the relationship between the technological artifact of AI and the goal-oriented actor – namely, the facilitator (Lehrer et al. 2018; Markus and Silver 2008; Ostern and Rosemann 2021; Volkoff and Strong 2013). In the first stage, we develop initial sets of macro-task crowdsourcing facilitation activities and AI affordances. Both sets are based on a structured search and review of extant scholarly knowledge. In the second stage, we validate and refine our facilitation activities and AI affordances by observing two real-world macro-task crowdsourcing initiatives and performing six interviews with experts from the crowdsourcing facilitation and AI domains, thus including insights from practice.
For RQ1, our results provide a detailed understanding of macro-task crowdsourcing facilitation comprising 17 facilitation activities. We answer RQ2 by developing a set of seven AI affordances relevant to macro-task crowdsourcing facilitation. We also detail manifestations of the affordances that demonstrate actionable practices of AI-augmented macro-task crowdsourcing facilitation. Our findings increase the scholarly understanding of macro-task crowdsourcing facilitation and the application of AI therein. Furthermore, the results help practitioners to evaluate potential ways of integrating AI in crowdsourcing facilitation. These results could increase the efficiency of facilitation activities and, ultimately, the effectiveness of macro-task crowdsourcing.
The remainder of the paper is structured as follows: Sect. 2 provides theoretical background on macro-task crowdsourcing facilitation, AI-augmented facilitation, and affordance theory. We outline our research process in Sect. 3. Section 4 presents the macro-task crowdsourcing facilitation activities, the AI affordances, and the manifestations of AI in macro-task crowdsourcing facilitation. After discussing the implications and limitations of our results in Sect. 5, we conclude with a brief summary in Sect. 6.
2 Theoretical Background
2.1 Macro-Task Crowdsourcing Facilitation
2.1.1 Macro-Task Crowdsourcing
Crowdsourcing is an umbrella term that can have many meanings. The concept was first introduced in an article in Wired magazine (Howe 2006b). Elsewhere, Howe (2006a) defines crowdsourcing as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.” Since then, understandings of crowdsourcing have evolved. Estellés-Arolas and González-Ladrón-de-Guevara (2012) proposed a holistic definition that we will use in this paper:
“Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task.”
A panoply of different crowdsourcing types exists, ranging from corporate to social or public contexts (Vianna et al. 2019). In a corporate context, open innovation is used to strategically manage knowledge flows between an external crowd and a firm to improve the firm’s innovation processes (Bogers et al. 2018). With crowdfunding, entrepreneurs can receive funding, via an open call, from funders who may receive a private benefit in return (Belleflamme et al. 2014). Organizations can use micro-task crowdsourcing to outsource low-complexity tasks (e.g., image tagging, or phone number verification) completed by independent crowd workers (Hossain and Kauranen 2015; Schenk and Guittard 2011). More complex tasks (e.g., invention or software engineering) require collaboration among crowd workers (Kittur et al. 2013). Flash organizations, for example, are computationally built structures comprised of a crowd automatically arranged into a hierarchy, where participants are assigned to smaller units focused on complex tasks according to their particular skills (Valentine et al. 2017). The structure of the resultant crowd organization can adapt over time, allowing it to efficiently collaborate and achieve open-ended goals relating to complex tasks (Retelny et al. 2014; Valentine et al. 2017). Real-world problems can be approached by using citizen science, a participative way of performing research involving experts and non-experts (Hossain and Kauranen 2015; Wiggins and Crowston 2011). For example, Fritz et al. (2019) underline the scientific value of citizens’ contributions of data which helped to track the progress of the United Nations’ sustainable development goals.
As a step beyond predominant crowdsourcing types, we define macro-task crowdsourcing based on Leimeister (2010), Lykourentzou et al. (2019), Malone et al. (2010), and Vianna et al. (2019):
Macro-task crowdsourcing leverages the collective intelligence of a crowd through facilitated collaboration on a digital platform to address complex or wicked problems.
The problems being addressed with the help of macro-task crowdsourcing may range from open innovation product design, to software development, or to grand social challenges (Kohler and Chesbrough 2019; McGahan et al. 2021) like climate change (Introne et al. 2011). Many of these are rooted in wicked problems characterized by their high complexity and their need to elicit broad stakeholder involvement (Alford and Head 2017; Ooms and Piepenbrink 2021). Macro-task crowdsourcing differs from existing crowdsourcing types in several ways. Although the boundaries between macro-task and, for example, micro-task crowdsourcing are blurred, there are some distinguishing characteristics, which are presented in Table 1.
The fact that the problem cannot easily be broken down into smaller constituent parts means it requires a high level of crowd diversity – i.e., providing multiple perspectives from experts with different levels of expertise and knowledge in various disciplines (Lykourentzou et al. 2019; Robert 2019). Due to the complexity of the underlying problem and the broad stakeholder involvement, a guiding, moderating, and neutral central agent is necessary, which we refer to as a facilitator (Gimpel et al. 2020). It is important to note that the results produced by the crowd will not necessarily be the final solution to the overarching problem. Existing macro-task crowdsourcing initiatives such as Climate CoLab (Introne et al. 2013), Futures CoLab (Gimpel et al. 2020), and OpenIDEO (Kohler and Chesbrough 2019) tend to produce valuable but non-conclusive approaches to addressing a wicked problem from one specific angle. These approaches have evolved and matured during several guided phases (Gimpel et al. 2020; Introne et al. 2013), making macro-task crowdsourcing even more reliant on a facilitator and a clear understanding of its role within the crowdsourcing initiative.
The panoply of different crowdsourcing types has produced a variety of terminologies with synonyms and ambiguities now requiring unification. Figure 1 depicts an abstract view of existing terms and definitions within crowdsourcing. Generally, we use the term macro-task crowdsourcing initiative to refer to an overarching set of online activities that aim to address a problem (Estellés-Arolas and González-Ladrón-de-Guevara 2012). We refer to a crowdsourcing exercise as a whole process of crowdsourcing techniques (Vukovic and Bartolini 2010) that may be applied multiple times or in combination with other exercises as part of a macro-task crowdsourcing initiative.
The context of an exercise is highly relevant. A macro-task crowdsourcing initiative may conduct multiple exercises in various (e.g., geographical) environments, using different strategies or workflows (e.g., to address the problem) with different infrastructural prerequisites (e.g., hard- and software). The nature of the problem being tackled by an exercise influences how a task is designed and, therefore, how contributions are generated (Zuchowski et al. 2016).
Three different groups of people participate in an exercise. The requestor is an organization or an individual that seeks help; the worker is part of a help-offering crowd capable of (partially) addressing or solving the requestor’s problem (Pedersen et al. 2013). The facilitator acts as a crucial intermediary who tries to understand the requestor and facilitates the crowd of workers to reach a predefined goal concerning the problem (Franco and Nielsen 2018; Gimpel et al. 2020; Rippa et al. 2016). From an activity-driven perspective, exercises consist of three major process phases: preparation, execution, and resolution. While preparation refers to “breaking down a problem or a goal into lower level, smaller sub-task” (Vukovic and Bartolini 2010), execution describes the elaboration on the task by a diverse crowd of workers (Zuchowski et al. 2016) supported and guided by one or more facilitators. Evaluating and synthesizing the workers’ contributions finishes the last process phase, termed resolution (Lopez et al. 2010). IT enables all participants to collaborate online in a distributed or decentralized way. Typically, a digital platform is used to capture and store the interactions and communication between individuals (Lopez et al. 2010). Interactions on a digital platform for macro-task crowdsourcing can include rating, creation, solving, and processing (Geiger and Schader 2014). Sometimes tools like video communication software are used to run the exercise more efficiently or effectively. To steer the exercise within the given context, governance, using a dedicated strategy, creates suitable boundary conditions for the people involved (Blohm et al. 2018; Pedersen et al. 2013). While rules can define norms or desired conduct, roles govern responsibilities and accountabilities, and culture creates a desirable and productive atmosphere for collaboration.
Each exercise results in different types of outcomes (Zuchowski et al. 2016). Contributions represent manifestations of work on the communicated and processed task. We see tacit knowledge gained during the exercises as learnings that could, for instance, be achieved by reflections or feedback. We distinguish these from consequences, which represent immutable conclusions that have been caused by performing the exercise (e.g., unsatisfied workers who will not contribute to future exercises). Finally, every stakeholder of the macro-task crowdsourcing initiative can perceive value in the exercise. While the requestor could, for example, see value in the synthesized contributions, a worker could perceive value in social recognition within the crowd.
Behind each of these terms lies a whole range of activities that, taken together, must be carefully aligned with a goal during the macro-task crowdsourcing initiative in order to contribute to addressing the overarching problem. Facilitation is thus important to ensure proper alignment and goal orientation, and facilitators play an essential role, particularly, though not exclusively, during the exercises.
2.1.2 Facilitation in Crowdsourcing
In the crowdsourcing domain, crowdsourcing governance aims to facilitate workers in performing their tasks and steer them toward a solution (Pedersen et al. 2013; Shafiei Gol et al. 2019). According to Shafiei Gol et al. (2019), whether crowdsourcing governance is centralized or decentralized, the task is to control and coordinate workers on the crowdsourcing platform. This involves activities such as defining the task (Zogaj and Bretschneider 2014), providing proper incentives (Vukovic et al. 2010), ensuring the quality of the contributions (Blohm et al. 2018), and managing the community and its culture (Zuchowski et al. 2016). Crowdsourcing governance is often analyzed in environments involving paid work or smaller tasks (Blohm et al. 2018; Shafiei Gol et al. 2019). Hence, activities like controlling costs and standardizing procedures also gain relevance (Shafiei Gol et al. 2019). Despite extensive frameworks (Blohm et al. 2020; Shafiei Gol et al. 2019; Zogaj et al. 2015), crowdsourcing governance is often conceptualized on the organizational and platform level, which could explain why it is also referred to as a management activity (Blohm et al. 2018; Jespersen 2018; Pohlisch 2021; Zogaj and Bretschneider 2014). The increasing complexity of the problems under investigation means increasingly sophisticated governance strategies are required to deliver successful crowdsourcing initiatives (Blohm et al. 2018; Boughzala et al. 2014; Pedersen et al. 2013). Since macro-task crowdsourcing initiatives are known for their complex (sometimes even wicked) underlying problems in a collaborative environment, utilizing facilitation can be a suitable and effective governance strategy. Facilitation is primarily focused on the crowd, enabling workers to collaborate on complex tasks and, ultimately, reach an overarching goal (Gimpel et al. 2020; Kim and Robert 2019; Lykourentzou et al. 2019).
To tackle increasingly complex – often wicked – problems using macro-task crowdsourcing, the facilitation of groups is both highly relevant and very challenging (Khalifa et al. 2002; Shafiei Gol et al. 2019). Following Bostrom et al. (1993), the main aim of facilitation is to ensure unified goal orientation among collaborating workers. This challenging task can require various social and technical skills or abilities to support problem-solving (Antunes and Ho 2001). Researchers have explored several types of facilitation specifically tailored to collaborative settings. Adla et al. (2011) differentiate between four overlapping types: Technical facilitation mainly aims to support participants with technology issues. Group process facilitation strives to ensure all members of a group jointly reach overarching goals, for example through motivation or moderation. Process facilitation assists by coordinating participants or structuring meetings. Finally, content facilitation focuses on, and introduces changes to, the content under discussion. Facilitators serve as experts practicing techniques to support problem-solving processes (Winkler et al. 2020), for example, in face-to-face meetings (Azadegan and Kolfschoten 2014; Bostrom et al. 1993). Besides completing a burdensome amount of work before, during, and after the collaboration (Vivacqua et al. 2011), facilitators must also exhibit particular character and behavioral traits (Dissanayake et al. 2015a). Training and experience (Clawson and Bostrom 1996), appearance and behavior within a group (Franco and Nielsen 2018; Ito et al. 2021; McCardle-Keurentjes and Rouwette 2018), and the handling of feedback and reflection (Azadegan and Kolfschoten 2014; de Vreede et al. 2002) play an essential role here. Facilitators thereby maintain a delicate balance between situations in which they moderate and observe the group and instances in which they intervene – for instance, due to content-related issues (Khalifa et al. 2002) – without compromising the outcome of the exercise (Dissanayake et al. 2015b). To better assist the group and balance the workload, multiple facilitators with different foci may sometimes be involved, making it possible to split the work among the facilitators (Franco and Nielsen 2018) and maintain a good relationship with all the workers (Liu et al. 2016). However, some scholars note that face-to-face facilitation techniques may be less effective when applied in distributed or virtual environments (Adla et al. 2011). Hence, it is difficult for crowdsourcing facilitators to rely on facilitation knowledge established in other contexts. This difficulty could be rooted in the fundamentally different nature of collaboration on a crowdsourcing platform (Gimpel et al. 2020; Nguyen et al. 2013).
Building upon the current, broad understanding of crowdsourcing governance (Blohm et al. 2020; Pedersen et al. 2013; Shafiei Gol et al. 2019) and facilitation (Antunes and Ho 2001; Bruno et al. 2003; Kolfschoten et al. 2011; Maister and Lovelock 1982; Zajonc 1965) offered in the literature – in particular, an existing definition by Bostrom et al. (1993) – we define macro-task crowdsourcing facilitation thus:
Facilitation in macro-task crowdsourcing initiatives comprises all observing and intervening activities used before, during, and after a macro-task crowdsourcing exercise to foster beneficial interactions among crowd workers aimed at making (interim-) outcomes easier to achieve and ultimately align joint actions with predefined goals.
Despite substantial knowledge of crowdsourcing governance and facilitation, an overarching and integrated understanding of relevant facilitation activities in macro-task crowdsourcing is missing. Therefore, it is challenging for facilitators to delimit their competencies in crowdsourcing endeavors involving many participants and perspectives (Zhao and Zhu 2014).
2.2 Advances in AI-Augmented Facilitation
AI uses technologies and algorithms to simulate and replicate human behavior or achieve intelligent capabilities (Alsheibani et al. 2018; Simon 1995; Stone et al. 2016; Te’eni et al. 2019). AI may be defined as a “[…] system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation […]” (Kaplan and Haenlein 2019). Although more general definitions exist – such as those by Rai et al. (2019) and Russell and Norvig (2021) – in this paper, we follow the definition by Kaplan and Haenlein (2019). Their socio-technical system perspective focuses on the interrelationship between humans and AI, which is highly relevant in the context of macro-task crowdsourcing facilitation. While AI has been an established subject in science for over seven decades (Haenlein and Kaplan 2019; Rzepka and Berger 2018; Simon 1995), in recent years, it has received increasing attention in both research and practice (Bawack et al. 2019; de Vreede et al. 2020; Hinsen et al. 2022; Hofmann et al. 2021; Leal Filho et al. 2022; Pumplun et al. 2019; Rai 2020). AI is expected to disrupt the interplay between user, task, and technology (Maedche et al. 2019; Rzepka and Berger 2018) and the nature of work (Brynjolfsson et al. 2017; Iansiti and Lakhani 2020; Nascimento et al. 2018). This attention is accompanied by many unrealistic expectations and the timeless question of “[W]hat can AI do today?” (Brynjolfsson and McAfee 2017). One stream of AI research answers this question using terminology usually related to humans or animals – including intelligence, learning, recognizing, and comprehending (Asatiani et al. 2021; Benbya et al. 2021; Rai et al. 2019) – and explicitly considers human-inspired AI and humanized AI (Kaplan and Haenlein 2019). For example, Hofmann et al. (2020) answer the question of “what can AI do today?” by providing a structured method to create AI use cases applicable to various domains. They distinguish seven abstract functions, defined in Table 2, through which AI can occur as a solution: perceiving, identification, reasoning, predicting, decision-making, generating, and acting (Hofmann et al. 2020). However, such approaches may also lead to the over-humanization of AI and should not obscure the fact that AI systems are human-made artifacts, not humans.
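As a simple illustration, the seven functions can be encoded as a small taxonomy. The facilitation examples mapped to each function below are our own hypothetical annotations for the macro-task crowdsourcing context, not part of Hofmann et al.’s (2020) method.

```python
from enum import Enum

class AIFunction(Enum):
    """The seven abstract AI functions distinguished by Hofmann et al. (2020)."""
    PERCEIVING = "perceiving"
    IDENTIFICATION = "identification"
    REASONING = "reasoning"
    PREDICTING = "predicting"
    DECISION_MAKING = "decision-making"
    GENERATING = "generating"
    ACTING = "acting"

# Hypothetical facilitation uses for each function (our illustrations only)
facilitation_examples = {
    AIFunction.PERCEIVING: "capture and transcribe ongoing crowd discussions",
    AIFunction.IDENTIFICATION: "detect off-topic or duplicate contributions",
    AIFunction.REASONING: "link related arguments across discussion threads",
    AIFunction.PREDICTING: "anticipate discussions that are about to stall",
    AIFunction.DECISION_MAKING: "rank contributions for the facilitator's attention",
    AIFunction.GENERATING: "draft interim summaries of the crowd's progress",
    AIFunction.ACTING: "post prompts that nudge the crowd toward its goal",
}

# Every function should carry at least one illustrative facilitation use
assert set(facilitation_examples) == set(AIFunction)
```

Such a mapping is only a thinking aid: it makes explicit which of the abstract functions a concrete AI-augmented facilitation feature would draw on.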
AI systems differ in the degree to which they exhibit cognitive, emotional, or social intelligence (Kaplan and Haenlein 2019). To best support – or even replace – the facilitator, an AI would need to hold all three types of intelligence, making it a self-conscious and self-aware humanized AI (de Vreede and Briggs 2019; Kaplan and Haenlein 2019). Humanized AIs are not yet available in the facilitation domain, which could be due either to the complexity of collaboration (Kolfschoten et al. 2007) or to the limited capabilities of current AI systems (Briggs et al. 2013; Kaplan and Haenlein 2019; Sousa and Rocha 2020). Hence, scholars from the facilitation domain focus on projects and approaches to building human-inspired AIs (Seeber et al. 2018). We refer to these as AI-augmented facilitation systems, which could have a vast impact on team collaboration (Maedche et al. 2019; Seeber et al. 2018, 2020). For instance, Derrick et al. (2013) and Ito et al. (2021) present first results on conversational AI capable of issuing instructions to team members or responding to workers’ contributions. Further inspired by the widespread application of AI (Dwivedi et al. 2021; Wilson and Daugherty 2018), researchers have also begun to explore AI’s potential use in crowdsourcing facilitation more specifically (de Vreede and Briggs 2018; Rhyn and Blohm 2017; Tavanapour and Bittner 2018a). For instance, some approaches seek to automate facilitation activities and decision-making by integrating AI techniques such as text mining or natural language processing (Gimpel et al. 2020). Most of these AI-augmented approaches are prototypes, suggesting that further investigation of possible AI-augmented facilitation may be warranted (Askay 2017; Ghezzi et al. 2018; Robert 2019).
2.3 Affordance Theory
Within our research, we use affordance theory as a conceptual lens. Affordances are action possibilities that characterize the relationship between a goal-oriented actor and an artifact within a given environment (Burlamaqui and Dong 2015; Gibson 1977; Markus and Silver 2008). The concept of affordances was initially introduced in ecological psychology to describe how animals perceive value and meanings in things within their environment (Gibson 1977). Scholars have translated the concept of affordances to technological contexts (Achmat and Brown 2019; Autio et al. 2018; Bayer et al. 2020; Gaver 1991). Affordance theory now serves as an established lens to investigate socio-technical phenomena emerging from information technology (Dremel et al. 2020; Du et al. 2019; Keller et al. 2019; Lehrer et al. 2018; Malhotra et al. 2021; Markus and Silver 2008). Here, affordances describe the relationship between an actor and an information technology, determining the goal-oriented action possibilities available to the actor using the specific information technology at hand (Faik et al. 2020; Markus and Silver 2008; Volkoff and Strong 2017). Actors can perceive or actualize affordances (Ostern and Rosemann 2021). Perceiving affordances requires that the actor holds a certain level of awareness regarding the information technology and is, hence, able to identify its potential uses (Burlamaqui and Dong 2015; Volkoff and Strong 2017). The information about a perceived affordance can lead actors to an affordance’s actualization, wherein the actor makes efforts to realize the affordance, unleashing the value it holds in relation to the actor’s goal (Ostern and Rosemann 2021).
To analyze the “cues of potential uses” (Burlamaqui and Dong 2015) of AI in a specific environment, researchers often turn to affordance theory (Burlamaqui and Dong 2015; Kampf 2019; Volkoff and Strong 2017). In our endeavor, the particular environment is macro-task crowdsourcing, with the facilitator as the actor and AI as the specific information technology. This confluence of technology and actor in the macro-task crowdsourcing context is a complex socio-technical phenomenon whose interrelationships affordance theory can help us better understand. With RQ2, we aim to exploratively investigate the relationship between the actor and the technology, revealing the action possibilities of AI in macro-task crowdsourcing facilitation. In line with the original definition by Gibson (1977) and following technology-related affordance literature (Faik et al. 2020; Leonardi 2011; Norman 1999; Steffen et al. 2019; Vyas et al. 2006), we focus on perceived affordances throughout our research endeavor. Hence, we define affordances in our context as perceived action possibilities arising from AI in macro-task crowdsourcing facilitation that do not necessarily need to be actualized (Askay 2017). We see these perceived affordances as composing the nucleus of AI’s intersubjective meaning for facilitators (Suthers 2006). The most salient perceived affordances will ultimately support collaboration among the crowdsourcing workers.
3 Research Design
Our research set out to address the lack of knowledge on macro-task crowdsourcing facilitation and the need for a holistic understanding of how AI might augment facilitation in this context. Thereby, we followed a two-stage, bottom-up approach to establish a theory-driven understanding that we then validated and refined from a practical perspective. In our approach, we turned to affordance theory as an established lens to theorize the relationship between the technological artifact, AI, and the goal-oriented actor, the facilitator (Lehrer et al. 2018; Markus and Silver 2008; Ostern and Rosemann 2021; Volkoff and Strong 2013). Our approach served to identify macro-task crowdsourcing facilitation activities and AI affordances in macro-task crowdsourcing facilitation. Firstly, in the initial development stage, we conducted two literature searches. We identified 17 macro-task crowdsourcing facilitation activities and 116 statements about AI in macro-task crowdsourcing, which we further processed into manifestations (i.e., specific action possibilities) that substantiate AI’s potential use for macro-task crowdsourcing facilitation. From this, we identified seven AI affordances for macro-task crowdsourcing. Secondly, we iteratively refined our results in the validation & refinement stage through two observed macro-task crowdsourcing initiatives and six semi-structured interviews with experts from the AI and crowdsourcing facilitation domains. Figure 2 depicts the overarching research design, which yielded seven AI affordances for macro-task crowdsourcing.
3.1 Initial Development Stage
We developed an initial set of AI affordances in three steps. The aim in the first two steps was to gain an understanding of facilitation and AI within macro-task crowdsourcing. Thereby, we developed macro-task crowdsourcing facilitation activities necessary for performing the third step, which served to combine relevant insights from literature into an initial set of AI affordances.
In step I) Facilitation activities list, we conducted a structured literature search to extract activities that describe macro-task crowdsourcing facilitation. In an initial broad search, we identified the journal ‘Group Decision and Negotiation’ as an adequate source of broad, foundational knowledge about facilitation (Laengle et al. 2018). A search for the term ‘facilitation’ in this journal returned a total of 176 papers, which we sequentially screened by title, abstract, and full text to determine whether facilitation was the core subject of each article. In doing so, we identified ten papers, plus one additional relevant paper from another outlet (Appendix A.1), whose full texts we further processed. We extracted 477 statements (i.e., excerpts) about activities or capabilities (i.e., repeatable patterns of action) relevant for facilitation. For each statement, we then decided whether the activity or capability was transferable to macro-task crowdsourcing facilitation. We excluded statements if the underlying activity did not necessarily need to be performed by a facilitator (e.g., recruitment of workers) or if it neither contributed to fostering beneficial interactions among crowd workers nor to aligning joint actions with predefined goals (e.g., communication of the exercises’ results or distributing rewards to workers). We categorized the 317 remaining statements into 17 broader macro-task crowdsourcing facilitation activities that iteratively emerged in the researcher team’s discussions. These 17 activities served as comprehensive, foundational knowledge about macro-task crowdsourcing facilitation in the next two steps.
Step II) AI in macro-task crowdsourcing served to capture manifestations of AI in macro-task crowdsourcing. As highlighted above, the digital nature of crowdsourcing platforms means the application of AI in crowdsourcing is more widespread than in other situations where facilitation plays an essential role (e.g., face-to-face meetings). We conducted a systematic literature review on the topic of macro-task crowdsourcing (vom Brocke et al. 2015; Wolfswinkel et al. 2013). In keeping with our research goal of exploring “cues of potential uses” (Burlamaqui and Dong 2015, p. 305) of AI (i.e., affordances), we identified ‘information systems,’ ‘computer science,’ and ‘social science’ as our fields of research. Hence, we selected four established databases (i.e., AIS eLibrary, ACM Digital Library, IEEE Xplore Digital Library, and Web of Science) that covered this broad disciplinary spectrum. Our search query did not include specific AI terms since the literature includes various definitions and terms to refer to corresponding AI technologies (Bawack et al. 2019). Instead, we iteratively developed our search query and ended up with a more general tripartite version representing a process-driven perspective on online crowdsourcing:
(‘crowd*’ OR ‘collective intelligence’) AND (‘task’ OR ‘activity’ OR ‘action’ OR ‘process’ OR ‘capability’ OR ‘facilitat*’) AND (‘platform’ OR ‘information system’ OR ‘information technology’ OR ‘information and communications technology’).
Applying this search query to the identified databases resulted in a total of 5,808 hits. To refine our sample of papers, we identified and removed 502 duplicates, which led to 5,306 distinct papers. In manually screening the papers, we applied the criteria listed in Table 3 to narrow our search results to macro-task crowdsourcing and ensure high levels of relevance and rigor.
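The duplicate-removal step can be illustrated with a short sketch. This is our own minimal illustration, not the authors' actual tooling: the record structure, field names, and the title-normalization rule are assumptions made for the example.

```python
import re

def normalize_title(title):
    """Lowercase the title and strip punctuation so that duplicates
    differing only in case or punctuation are matched."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(hits):
    """Keep the first occurrence of each normalized title across databases."""
    seen, unique = set(), []
    for hit in hits:
        key = normalize_title(hit["title"])
        if key not in seen:
            seen.add(key)
            unique.append(hit)
    return unique

# Hypothetical search hits from two databases:
hits = [
    {"title": "Crowdsourcing at Scale", "db": "ACM"},
    {"title": "Crowdsourcing at scale.", "db": "IEEE"},  # duplicate
    {"title": "Facilitating Collective Intelligence", "db": "AIS"},
]
print(len(deduplicate(hits)))  # 2
```

In practice, reference managers additionally compare authors, year, and DOI, since titles alone can collide; the sketch shows only the core idea of key-based deduplication.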
Using these criteria, we narrowed the search results by sequentially analyzing title and abstract, which narrowed the total to 283 papers potentially relevant to macro-task crowdsourcing. We read these 283 papers in full text, finally identifying nine papers that name and describe AI in the context of macro-task crowdsourcing. We also included three papers, found elsewhere during our research process, that matched all of our defined criteria. We analyzed these 12 papers (Appendix A.2) in-depth to extract 116 statements about AI manifestations in macro-task crowdsourcing. In the next step, these statements were used together with the previous results to develop AI affordances.
In step III) Initial AI affordances, we developed an initial set of AI affordances by combining and aggregating the results of steps I) and II). Thereby, we assigned 116 manifestations (Appendix A.3) of AI in macro-task crowdsourcing to the 17 activities of macro-task crowdsourcing facilitation. To further distinguish and explain the role of AI in each manifestation, we used AI functions proposed by Hofmann et al. (2020). In doing so, we assigned each manifestation one specific AI function, describing how AI occurs or could occur as a solution in the selected manifestation (Hofmann et al. 2020). This two-dimensional matrix resulted in an AI manifestation mapping for macro-task crowdsourcing facilitation.
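The resulting two-dimensional mapping can be thought of as a matrix indexed by facilitation activity and AI function. The following sketch is purely illustrative; the activity and function labels are hypothetical placeholders, not the paper's actual 17 activities or the functions of Hofmann et al. (2020).

```python
from collections import defaultdict

# Matrix cells: (facilitation activity, AI function) -> list of manifestations
mapping = defaultdict(list)

def assign(manifestation, activity, ai_function):
    """Place a manifestation in the activity x AI-function matrix."""
    mapping[(activity, ai_function)].append(manifestation)

# Hypothetical example entries:
assign("semantic clustering of contributions", "contribution aggregation", "analyzing")
assign("personalized nudging of inactive workers", "worker activation", "generating")

# Cells of the matrix can then be inspected for archetypes:
print(mapping[("contribution aggregation", "analyzing")])
```

Scanning rows (activities) and columns (AI functions) of such a structure for recurring patterns mirrors the archetype identification described in the next step.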
To create the initial set of affordances, the research team held discussions to identify archetypes within the manifestation mapping. We remained open-minded about whether an archetype would be created based on the functioning of AI (horizontal axis of the matrix) or the facilitation actions (vertical axis of the matrix). To support the development of AI affordances, we reached out to three scholars with expertise in affordance theory. They contributed valuable input regarding common pitfalls and best practices during the development stage. We recognized seven archetypes whose manifestations we then analyzed to identify affordances. Every affordance is described and classified in terms of AI functions and facilitation activity (see Sect. 4.1).
3.2 Validation & Refinement Stage
Although we rigorously identified our AI affordances based on scholarly knowledge, a practical validation was necessary to ascertain potential end-users’ perceptions. To this end, we validated and refined our initial set of AI affordances from step III) with two observed macro-task crowdsourcing initiatives as well as six semi-structured interviews (Myers and Newman 2007).
In step IV) Crowdsourcing initiatives, we longitudinally observed two macro-task crowdsourcing initiatives, namely Trust CoLab (TCL) and Pandemic Supermind (PSM). Observing these initiatives not only helped us to validate our facilitation activities but also to gain rare practical insights on macro-task crowdsourcing facilitation and real-world AI manifestations. Table 4 describes the two macro-task crowdsourcing initiatives under consideration.
Our primary sources of data collection were documentation (e.g., mails, final reports, and meeting protocols) and participant observation (e.g., discussion within the facilitation team and analysis of AI tools used) that we gained from both crowdsourcing initiatives. Observation of the facilitators’ actions in the macro-task crowdsourcing initiatives supported the set of 17 facilitation activities from step I). Each of the activities was observed, and no other major activities were found. Additionally, we could refine and enhance the manifestations within our AI manifestation mapping, which was created in step III), by analyzing the application of AI tools and the perceived demand for AI support within both initiatives. Nevertheless, the limited application of AI tools in both initiatives meant that not all affordances could be validated, suggesting an additional validation and refinement step. Hence, in step V) Interviews, we conducted six semi-structured interviews, which we used to uncover potential affordances (Volkoff and Strong 2013). We selected experts from academia and practice with multiple years of experience in the AI or facilitation domain, as listed in Table 5 (Myers and Newman 2007; Schultze and Avital 2011).
Interviews lasted between 37 and 72 min, were held in the native language of the interviewee, and were recorded with the consent of each interviewee. We informed the interviewees about the research topic and sent a detailed interview guide in advance to better allow the interviewees to prepare for the interview. The guide contained definitions and illustrations, the then-current set of affordances, and the intended structure of the interview. Appendix C.1 provides more details about the structure of the interview as well as the prepared questions.
The semi-structured interviews started with a short description of the research project and definitions of crowdsourcing and facilitation necessary to ensure a mutual understanding of crowdsourcing facilitation. After that, we encouraged the interviewees to share their experience of AI within an ideation section (i.e., a less structured and guided part of the interview). Next, we sought open-ended feedback on the affordances by asking questions regarding the completeness, comprehensiveness, meaningfulness, level of detail, and applicability of the criteria in relation to today’s crowdsourcing initiatives (Sonnenberg and vom Brocke 2012). During the interviews, we took notes to highlight the experts’ essential statements and better respond to the interviewee in the course of the conversation. We iteratively adapted and refined our affordances after each interview. The experts’ feedback led us to overhaul one affordance entirely (i.e., workflow enrichment; previously: environment creation) and improve the descriptions of two other affordances (i.e., improvement triggering and worker profiling).
To align all of the practical and theoretical insights gained, we conducted a final reflective refinement after the interviews. Therein, we followed Schreier (2012) to carefully analyze all six experts’ statements regarding our predefined criteria and to enrich our AI manifestation mapping with potential use-cases of AI within macro-task crowdsourcing facilitation named by the experts. Appendix C.3 contains some exemplary expert quotes. Our concept-driven coding frame (Schreier 2012) comprised two categories: (1) feedback regarding artificial intelligence affordances and (2) potential use-cases of AI within macro-task crowdsourcing facilitation. While the feedback is structured in five subcategories according to our defined criteria, the potential use-cases encompass 17 subcategories representing the facilitation activities developed in step I). We extracted transcripts of all relevant statements from the interviewees and mapped these to our coding frame. Finally, we refined the AI affordances and AI manifestation mapping accordingly. The two validation and refinement steps yielded a validated and refined list of seven affordances and 44 manifestations. Appendix B contains a detailed description of the refinement and Appendix C.2 a description of the validation.
4 Results
4.1 Macro-Task Crowdsourcing Facilitation
Based on our literature search, we identified 17 macro-task crowdsourcing facilitation activities. Table 6 comprises an exhaustive list of activities found in the current literature, adapting facilitation to crowdsourcing’s specific conditions. We argue that the distinction between a more straightforward administrative activity (e.g., sending invitation emails to the workers) and a more complex facilitation activity (e.g., writing a motivational text for the workers’ invitation) can depend on each particular exercise. The borders of this distinction can also be fluid. Nevertheless, it is essential to clearly define the facilitator’s role in each exercise to avoid misunderstandings between the facilitator and other stakeholders of the macro-task crowdsourcing initiative (e.g., the platform administrator or the requestor).
In the validation & refinement stage, we observed two macro-task crowdsourcing initiatives (i.e., TCL and PSM). By carefully observing the facilitators within TCL and PSM, we were able to identify action patterns that matched the facilitation activities’ descriptions. Thereby, we confirmed the existence of all 17 facilitation activities, although their scope varied within the two initiatives under consideration. Table 7 depicts exemplary actions in TCL and PSM, mainly performed by the facilitator, that matched the elaborated description of the 17 facilitation activities. Some of these facilitation activities were AI-augmented (i.e., the facilitator was supported by an AI tool), making both initiatives valuable subjects for further analysis regarding our AI affordances.
Throughout the macro-task crowdsourcing initiatives of TCL and PSM, dedicated teams used two different AI tools to support the facilitators in their work. In TCL, the facilitator was mainly supported in aggregating the workers’ contributions between the four phases. All of the workers’ contributions were first exported from the platform before a natural language processing Python script preprocessed the contributions (i.e., performing stemming and lemmatization). The script then created a detailed word cloud to provide the facilitator with a broad overview of the main concepts. Finally, the contributions were semantically clustered by the script using the Universal Sentence Encoder algorithm (Cer et al. 2018). We refer to Appendix D.1 for two interim results of the Python script. The results were discussed by the initiative’s stakeholders and manually refined by the facilitator and the supporting team. In PSM, the two facilitators were supported by a web application written in R. The application could directly access the latest contribution data via an application programming interface provided by the crowdsourcing platform. Therefore, the facilitators could use the web application’s algorithms during internal meetings to discuss the latest contribution data. In TCL, the AI tool used only the codified contributions made by the workers, while the web application in PSM also used metadata such as comments or likes on the contributions. This metadata enabled a broad set of functionalities such as keyword extraction, topic modeling, word co-occurrences, network analysis, and word searches. We refer to Appendix D.2 for two screenshots of the web application.
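The preprocess-then-cluster pipeline used in TCL can be sketched in simplified form. This is not the initiative's actual script: we substitute a crude suffix-stripping rule for proper stemming/lemmatization and bag-of-words cosine similarity for the Universal Sentence Encoder embeddings, so the example only conveys the shape of the pipeline.

```python
import math
from collections import Counter

def preprocess(text):
    """Tokenize and apply a crude suffix-stripping rule
    (a stand-in for real stemming/lemmatization)."""
    tokens = [w.strip(".,;:!?").lower() for w in text.split()]
    return [w[:-1] if w.endswith("s") else w for w in tokens if w]

def vectorize(tokens):
    """Bag-of-words vector (a stand-in for sentence embeddings)."""
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(contributions, threshold=0.3):
    """Greedy semantic clustering: join a contribution to the first
    cluster whose seed is similar enough, else start a new cluster."""
    clusters = []
    for text in contributions:
        vec = vectorize(preprocess(text))
        for seed_vec, members in clusters:
            if cosine(seed_vec, vec) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

contributions = [
    "reduce carbon emissions",
    "reducing carbon emission quickly",
    "improve water quality",
]
print(len(cluster(contributions)))  # 2
```

A production pipeline would replace the bag-of-words step with dense sentence embeddings and a proper clustering algorithm (e.g., k-means or agglomerative clustering), as the TCL script did with the Universal Sentence Encoder.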
4.2 Artificial Intelligence Affordances
Given the extensive knowledge base on macro-task crowdsourcing facilitation, we searched for AI manifestations by conducting a second literature search to create an initial AI manifestation mapping. By analyzing the macro-task crowdsourcing initiatives TCL and PSM regarding potential use-cases of AI-augmented facilitation, and gathering statements about potential use-cases for AI in facilitation from the six expert interviews, we were able to refine and extend our initial AI manifestation mapping. Therein, we searched for archetypes of manifestations that could lead to potential affordances. Table 8 lists the final seven affordances of AI for macro-task crowdsourcing facilitation and one example of how AI-augmented facilitation could be implemented in the case of each affordance.
In the following, we describe each of the affordances in detail. Thereby, we explain the relationship between the facilitator’s goal and AI within macro-task crowdsourcing. To further elaborate on the affordances, we highlight some AI manifestations found in the literature (step II), our two macro-task crowdsourcing initiatives (step IV), or our interviews (step V). These manifestations provide examples of what AI is perceived to afford within macro-task crowdsourcing facilitation.
1) Contribution Assessment. In bringing a macro-task crowdsourcing initiative to fruition, one of the biggest challenges facilitators face is dealing with the number of contributions made by workers (Blohm et al. 2013; Nagar et al. 2016) and “[understanding] all the results from a crowdsourcing exercise in a way that’s empirical and meaningful” (Expert 5). AI affords the analysis of contributions such that the quality can be assessed and valuable ideas or relevant content can be extracted. Facilitators could use AI to analyze the content of a contribution via semantical natural language processing (Gimpel et al. 2020) to determine its novelty or similarity compared to other contributions. (Semi-)automated contribution assessments could decide whether each contribution brings the initiative one step closer to the goal (Haas et al. 2015; Nagar et al. 2016; Rhyn and Blohm 2017). This could involve removing unnecessary information to allow a better assessment by the facilitator further downstream (Expert 4, 6) or to detect outliers by assessing each contribution’s relevance to the topic at hand (Case PSM).
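As one illustration of such a (semi-)automated assessment, a contribution's novelty can be scored against all prior contributions. The sketch below is our own illustration, using token-set Jaccard similarity as a crude stand-in for the semantic natural language processing described in the literature.

```python
def tokens(text):
    """Lowercased word set of a contribution."""
    return {w.strip(".,!?").lower() for w in text.split()}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def novelty_score(contribution, existing):
    """1.0 = entirely novel; 0.0 = identical word set to a prior contribution."""
    toks = tokens(contribution)
    if not existing:
        return 1.0
    return 1.0 - max(jaccard(toks, tokens(e)) for e in existing)

existing = ["plant more urban trees", "subsidize rooftop solar panels"]
print(novelty_score("plant more urban trees", existing))  # 0.0
print(novelty_score("tax aviation fuel", existing))       # 1.0
```

A facilitator could sort incoming contributions by such a score to triage which ones merit a closer manual look, rather than letting the score decide autonomously.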
2) Improvement Triggering. In crowdsourcing, facilitators often face a 90-9-1 distribution, where only 1% of the workers create nearly all of the contributions (Troll et al. 2017). Since macro-task crowdsourcing heavily relies on active knowledge exchange and idea cross-fertilization between various workers (Gimpel et al. 2020), non- or low-active workers need to be triggered to contribute, thereby stimulating a better thematic discourse (Expert 2, 4). Yet, even if all workers contribute, their contributions may sometimes lack quality; ideas may lack originality or readability. AI affords recognition of individual contributions that are unoriginal or add no value (e.g., due to the existence of similar or identical contributions) (Hetmank 2013; Rhyn and Blohm 2017), and of workers who do not actively participate in the exercise (e.g., through lack of time or attention). Intelligent mechanisms such as personalized nudging (Expert 2, 4, 5) can improve behavior or quality (Chiu et al. 2014; Haas et al. 2015; Riedl and Woolley 2017). One approach would be to use natural language understanding to automatically notify workers during the creation of a contribution that theirs is similar to other available contributions or is not sufficiently comprehensive (Case PSM) – for example, by displaying a uniqueness score (Expert 3).
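The nudging mechanism described above can be sketched as a simple rule: warn while drafting if a contribution is too short or overlaps heavily with an existing one. The thresholds, messages, and word-overlap measure below are hypothetical choices for illustration only.

```python
MIN_WORDS = 20          # hypothetical comprehensiveness threshold
SIMILARITY_LIMIT = 0.6  # hypothetical overlap above which we warn

def word_set(text):
    return {w.strip(".,!?").lower() for w in text.split()}

def overlap(a, b):
    """Share of the smaller word set that also appears in the other."""
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

def nudge(draft, existing):
    """Return a message to show the worker while drafting, or None."""
    if len(draft.split()) < MIN_WORDS:
        return "Your contribution is quite short - could you elaborate?"
    ws = word_set(draft)
    if any(overlap(ws, word_set(e)) > SIMILARITY_LIMIT for e in existing):
        return "A similar contribution already exists - consider refining yours."
    return None

print(nudge("We need trees.", []))  # short draft triggers elaboration prompt
```

A real system would run such checks on each keystroke pause or on save, and could surface the overlap as a visible uniqueness score instead of a textual warning.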
3) Operational Assistance. When creating a contribution, workers may experience technical difficulties or develop questions regarding idea formulation (Adla et al. 2011; Hosseini et al. 2015). Usually, workers will either stop working on their contributions or contact the facilitator, who then has to step in and solve the problem (Adla et al. 2011), consuming the worker’s and the facilitator’s precious time. AI could identify the cause of either process- or technical-related problems and offer assistance. Missing information, which could hinder the workflow, could be identified by AI and provided at the appropriate time (Chittilappilly et al. 2016; Seeber et al. 2016). Robotic process automation (also referred to as intelligent automation technologies) could assign workers appropriate tasks based on the worker’s domain knowledge, previous crowdsourcing experience (Expert 4), or a lack of contributions in a specific task (Case PSM). Deep learning algorithms could help translate contributions or overcome language barriers (Expert 1). Pre-trained AI chat assistants could interactively explain the contribution creation process to the workers on a step-by-step basis and answer their questions accordingly (Tavanapour and Bittner 2018a).
4) Workflow Enrichment. To use the workers’ time as effectively as possible, facilitators break down the goal of an exercise into smaller tasks (Vukovic and Bartolini 2010). They efficiently integrate these tasks into an effective workflow supported by a crowdsourcing platform (Hetmank 2013). This is usually accompanied by a reduction in special attention to the needs of individual workers. AI could suggest that the facilitator integrate additional or new information into the workflow (Chittilappilly et al. 2016; Riedl and Woolley 2017) or adjust the proposed next steps (Xiang et al. 2018). This could lead to a modified workflow or improved effectiveness. Natural language understanding could identify mismatches between the facilitator’s proposed task and the workers’ contributions, which may be the result of ambiguous task descriptions (Case PSM). Depending on the extent of the worker’s domain knowledge, the description of the task could be paraphrased or extended using natural language generation (Expert 1, 5). If workers do not find appropriate resources supporting their idea, natural language processing could identify the topic and the facilitator could then refer the worker to relevant data or scientific sources (Expert 6).
5) Collaboration Guidance. Facilitation, in a narrow sense, involves fostering collaboration and interdisciplinary exchange of information (Expert 2). However, such thematic exchange can go astray and move away from the exercise’s actual goal, despite facilitative support. Therefore, facilitators have to decide whether the existing discourse should be maintained or if the worker should be guided in a different direction (Xiang et al. 2018; Zheng et al. 2017). AI affords the evaluation of workers’ moods and the direction of the discussion with reference to the content. This provides the facilitator with a better understanding of the current atmosphere among the workers and the thematic focus of their collaboration. On the one hand, automated text mining, like sentiment detection, could generate semantic embeddings of the contributions (Expert 4) (Nagar et al. 2016), which could help to assess the maturity of the collaboration (Gimpel et al. 2020; Qiao et al. 2018). On the other hand, word2vec algorithms could map the focus of the discussion, uncovering unprocessed areas (Expert 2, 3, 5) and frequently discussed topics (Case PSM), or help the facilitator to detect emerging topics (Case TCL).
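Frequently discussed topics can be surfaced with something as simple as word co-occurrence counts across contributions, which is one of the functionalities the PSM web application offered. The following sketch is our own minimal illustration (the stopword list and tokenization are assumptions, and real tooling would use embeddings or topic models instead).

```python
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "of", "and", "to", "in", "for", "on"}

def co_occurrences(contributions):
    """Count how often two words appear in the same contribution."""
    pairs = Counter()
    for text in contributions:
        words = sorted({w.strip(".,!?").lower() for w in text.split()} - STOPWORDS)
        pairs.update(combinations(words, 2))
    return pairs

posts = [
    "reduce carbon emissions in cities",
    "carbon pricing to cut emissions",
    "improve water quality",
]
print(co_occurrences(posts).most_common(1))  # [(('carbon', 'emissions'), 2)]
```

Ranking the most frequent pairs gives the facilitator a quick signal of which themes dominate the discourse, and rare pairs hint at under-explored areas.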
6) Worker Profiling. Experienced facilitators mobilize the varied skills and expertise of the workers participating in an exercise (Tazzini et al. 2013). However, as the number of workers increases, getting to know one another becomes more difficult, particularly in an online crowdsourcing environment. Hence, facilitators may lack important information about workers, such as their previous experience in crowdsourcing or domain-specific skillsets. AI affords the use of interaction among workers (Dissanayake et al. 2014), as well as information on their backgrounds (Bozzon et al. 2013; Tazzini et al. 2013), to better assess the workers’ activity and the characteristics of their collaboration (Gimpel et al. 2020). Natural language generation could be used to process information from the worker’s publications or the worker’s social media profile to create a summary of the individual’s background (Expert 4). However, the rules of platform governance, as defined by the initiative stakeholders, must be upheld in any such investigations in order to avoid ethical concerns on the part of the workers (Alkharashi and Renaud 2018; Kocsis and Vreede 2016; Schlagwein et al. 2019). Alternatively, activity reports could be generated from the crowdsourcing platform via the use of (social) network algorithms (Case TCL) or natural language processing (Expert 1), leading to fully-automated dashboard generation for tracking the workers’ activity (Expert 6).
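An activity report of the kind mentioned above can be sketched as a simple aggregation over platform events. The event schema, field names, and activity threshold below are hypothetical; a real dashboard would also use network metrics over the interaction graph.

```python
from collections import Counter

def activity_report(events, threshold=2):
    """Aggregate platform events (hypothetical schema: worker, kind)
    into per-worker counts and flag low-activity workers."""
    counts = Counter(e["worker"] for e in events)
    workers = {e["worker"] for e in events}
    flagged = sorted(w for w in workers if counts[w] < threshold)
    return dict(counts), flagged

events = [
    {"worker": "alice", "kind": "post"},
    {"worker": "alice", "kind": "comment"},
    {"worker": "bob", "kind": "like"},
]
counts, flagged = activity_report(events)
print(flagged)  # ['bob']
```

Such per-worker summaries feed naturally into the fully-automated dashboards the experts envisioned, while remaining subject to the platform-governance constraints noted above.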
7) Decision-making Preparation. After one or more exercises, the workers will have provided several contributions. Facilitators then have to aggregate and synthesize these contributions into a meaningful foundation for decision-makers, such as a final report (Chan et al. 2016; Gimpel et al. 2020). AI affords support in decision-making preparation and could provide a synthesis, such as a decision template or recommendation for action to the requestors (Hetmank 2013) (Expert 2). Neural networks that have been specifically trained using vocabulary from the exercise’s domain could cluster the contributions and highlight unique ideas (Case TCL and PSM) (Expert 2, 5). Natural language understanding could be used to perform question answering based on the contributions, which could help a facilitator interact with the contributions and better understand the workers’ ideas, even after the exercise and without contacting the workers (Expert 3). Furthermore, summary generation algorithms could comprehensively synthesize the clustered contributions in ways that are meaningful for the decision-maker (Case PSM) (Expert 1, 2).
Despite the immense potential of AI in macro-task crowdsourcing facilitation, as reflected by the seven affordances, the interviewed experts stressed that researchers and facilitators must carefully consider which facilitation activity should be enabled or performed by AI (Expert 1, 2, 3, 4, 5). AI is prone to biases (Expert 4) and could systematically discriminate against specific workers (e.g., a natural language processing contribution evaluation algorithm could systematically down-rate contributions from workers with dyslexia). On top of that, the unreflected use of AI could lead facilitators to blindly believe in the underlying model and thereby reduce the overall level of goal achievement. Experts also argued that AI has limitations in understanding ethical and cultural factors and cannot fully imitate human interactions as facilitators (Expert 1, 5, 6). “I think it is really nice to have a name and a face to identify with a person who is communicating and asking you to do these things.” (Expert 5). Furthermore, facilitators should also consider the effort and difficulties during the development: “The art of AI is often not to solve the task, but to explain and teach the AI the task.” (Expert 3). Ultimately, the ill-considered use of AI in macro-task crowdsourcing could induce much bias in the outcome of an exercise (Expert 4) or decrease the workers’ participation and performance (Expert 6).
Even after a full review, we could not establish a hierarchy among the affordances. Nonetheless, an interlocking of individual affordances cannot be ruled out. To better illustrate the interdependencies of the affordances, the individual facilitation activities, and AI functions, Table 9 shows the revised version of the AI manifestation mapping. This table records all AI manifestations (i.e., specific action possibilities) that occurred during our research process. Every AI manifestation therein was observed either in literature (L), our observed crowdsourcing initiatives (C), or our interviews (I) and describes a possible shape of the corresponding affordance ((1)–(7)) concerning a facilitation activity or AI function.
We observed that the way AI emerges in macro-task crowdsourcing facilitation is strongly dependent on the nature of the facilitation activity. In particular, we want to highlight two patterns in Table 9: Firstly, we did not find evidence for the AI function perceiving in any of the facilitation activities. We deem this to be due to the very nature of crowdsourcing initiatives since all data relevant to the facilitation of a crowdsourcing exercise has already been processed from the analog world (Hofmann et al. 2020). For instance, a conversation is not performed face-to-face but is stored in codified form on the crowdsourcing platform. Secondly, we argue that culture development is highly human-centered and requires empathy or social and emotional intelligence. Hence, there are very few cases wherein AI would have sufficient capabilities to perform this activity.
5 Discussion
5.1 Theoretical Contribution
In this research, we have addressed two research questions at the intersection of macro-task crowdsourcing, facilitation, and AI. To answer our research questions, our results encompass three novel theoretical contributions for scholars: a more precise understanding of macro-task crowdsourcing, an extensive list of 17 macro-task crowdsourcing facilitation activities, and seven holistic AI affordances in macro-task crowdsourcing.
Our work advances the domain of macro-task crowdsourcing by distinguishing macro-task crowdsourcing from other crowdsourcing types such as micro-task crowdsourcing or flash organizations. In doing so, it highlights the unique features of macro-task crowdsourcing, such as the low level of the problem’s decomposability and the nature of collaboration among workers that, together, form the demand for a facilitating instance. We further provide in-depth insights into two real-world macro-task crowdsourcing initiatives, including their particular AI tools, namely TCL and PSM. Both initiatives are dedicated to tackling wicked problems. Scholars can build upon this extensive understanding of macro-task crowdsourcing and better position their work in this area.
Our research contributes to the facilitation domain by deriving 17 macro-task crowdsourcing facilitation activities that holistically theorize facilitation as a suitable governance strategy for macro-task crowdsourcing. We also introduce a broad definition of macro-task crowdsourcing facilitation to merge the specific collaborative circumstances of macro-task crowdsourcing (Gimpel et al. 2020) with the current understanding of facilitation (Bostrom et al. 1993). This definition, along with the 17 activities, extends existing knowledge of facilitation and particular governance strategies for complex tasks in crowdsourcing, and may apply to other types of crowdsourcing or online collaboration. Our extensive understanding of facilitation in macro-task crowdsourcing differs from traditional knowledge of facilitation in that we consider the digital nature of crowdsourcing’s collaborations. With a closer link to the crowd and increased attention to collaboration levels, our facilitation activities also extend existing crowdsourcing governance concepts that are more focused on the platform or the initiative. Fellow researchers can, for example, harness these activities as a starting point for further investigation in the context of crowdsourcing, which may be expanded over time and with technological advancement.
Finally, we use affordance theory as a socio-technical lens to extend the body of knowledge on AI-augmented facilitation. Our research identifies seven perceived AI affordances in macro-task crowdsourcing and generalized manifestations triangulated from practice, literature, and expertise. These manifestations were structured on seven abstract AI functions (Hofmann et al. 2020). Even though these could be seen to contrast with other (more technical) conceptualizations of AI, they performed well in the socio-technical context of macro-task crowdsourcing, describing how AI occurs or could occur within the 17 facilitation activities. Our affordances further extend these insights and holistically describe how AI can be applied by the facilitator in macro-task crowdsourcing facilitation. The insights from two macro-task crowdsourcing initiatives and six expert interviews clearly demonstrate that AI currently holds only supportive potential for crowdsourcing. Although AI now delivers superhuman performance in some specific tasks, and while the digital starting conditions provided by crowdsourcing are promising, (digital) collaboration as an environment for AI is proving particularly challenging due to the subtle nuances of human interaction. However, we are convinced that AI’s potential will continue to increase as technologies evolve and will soon extend to collaboration and automation. Hence, we presume our AI affordances pave the way for AI scholars to undertake further research, for example, by helping scholars to structure future research projects or identify future research trends.
5.2 Practical Implications
From a practical perspective, we see two major stakeholder groups benefiting from our research findings: AI developers and facilitators.
Firstly, developers of AI-augmented facilitation systems or functionalities can use our seven affordances as a starting point to identify areas for action or improvement and to implement innovative systems, tools, or functionalities that support facilitation in crowdsourcing. Moreover, our two macro-task crowdsourcing initiatives revealed good practices in which AI functionalities could add value to crowdsourcing exercises. AI developers could pick up on these insights and integrate AI functionalities accordingly, for example, into the crowdsourcing platform. Developers could also use the AI manifestation mapping to assess the status quo of AI opportunities.
Secondly, because current research on crowdsourcing facilitation lacks insights about AI, practitioners also stand to benefit from our results. With our seven developed and validated AI affordances, we provide guidance on which functionalities could add value when integrated before, during, or after crowdsourcing exercises. Our observed macro-task crowdsourcing initiatives and expert interviews point to possible ways to include and integrate AI, which could be highly relevant when setting up new crowdsourcing initiatives. Furthermore, the 44 manifestations within our AI manifestation mapping provide initial indications of which AI functionalities or use cases have already been considered and help facilitators assess AI's maturity correctly. However, we recognize that, given the complexity and variety of crowdsourcing exercises, the affordances are not equally relevant to all of them. We argue that the list of affordances can also help communicate the usage and integration of AI tools or functionalities in existing crowdsourcing initiatives. This would also foster the exchange of knowledge in macro-task crowdsourcing, which is essential for finding new approaches to tackling, for example, wicked problems in practice. Thereby, active and newly created crowdsourcing initiatives could increase their effectiveness and the efficiency of their facilitation activities.
5.3 Limitations and Future Research
Despite the comprehensive nature of our results, which are grounded in two literature searches and were subjected to a two-way practical validation with crowdsourcing initiatives and expert interviews, our research has limitations in both the development and the validation of the macro-task crowdsourcing facilitation activities and AI affordances.
The development of our 17 macro-task crowdsourcing facilitation activities was based on a structured literature search solely within the journal Group Decision and Negotiation. We deem its broad understanding of facilitation, developed over many years, sufficient for use in a crowdsourcing context. Nevertheless, we could have enhanced this literature search by including other journals or databases that contribute to the crowdsourcing or facilitation domains.
Regarding the development of our AI research in macro-task crowdsourcing facilitation, we focused on perceived affordances. Due to the dynamic nature of the AI domain in crowdsourcing facilitation, we neither analyzed the actualization of the affordances nor elaborated on their existence (Ostern and Rosemann 2021; Volkoff and Strong 2017). Although we considered different forms of input (i.e., literature, crowdsourcing initiatives, expert interviews), we cannot formally claim that our affordances are complete. We argue that the fast-moving nature of AI research could also impede efforts to compile an exhaustive list of AI affordances. Hence, increasing the scope of our literature searches could have resulted in a broader knowledge base. Furthermore, since we only analyzed macro-task crowdsourcing facilitation, we cannot verify the applicability or direct transferability of our results to other types of crowdsourcing or collaboration. Testing whether our results generalize to other types of crowdsourcing and collaboration remains a task for future research.
Regarding the validation and refinement of our affordances, we observed only two macro-task crowdsourcing initiatives and conducted six interviews, both of which are, by their very nature, prone to bias (Yin 2018), such as response or selection bias. Even though no new insights emerged toward the end of our validation stage, we acknowledge that more initiatives or interviews could further improve or enhance our results. For instance, although we interviewed two experts with implementation knowledge, we can only make limited statements about the feasibility of implementing all affordances. Besides, even though the experts confirmed completeness, we still cannot guarantee that we have developed a complete list of affordances.
We hope that future research will address these limitations, which offer multiple avenues for further investigation. Future research could specifically look into the practicability of the affordances. We anticipate that implementing AI-augmented facilitation prototypes for macro-task crowdsourcing based on our affordances will deliver valuable insights into the feasibility of our affordances and their level of abstraction. Such prototypes could also enhance knowledge of actualized affordances or AI information systems in general and shed more light on how AI-augmented crowdsourcing could tackle macro-tasks and their underlying complex problems more efficiently. Furthermore, scholars could go beyond the descriptive nature of our results and (prescriptively) elaborate on how and why particular affordances enable macro-task crowdsourcing facilitation and how this could improve macro-task crowdsourcing initiatives. They could also use our affordances to extend design knowledge for designing AI-augmented facilitation assistants (Maedche et al. 2019; Volkoff and Strong 2017). For example, researchers could derive design guidelines or principles that ease facilitators' considerable workload (Gimpel et al. 2020).
6 Conclusion
Our research was driven by the opportunities that emerge from AI's widespread application to aid facilitators. We turned to affordance theory to analyze AI's potential applications in macro-task crowdsourcing facilitation. We answered our two research questions by defining macro-task crowdsourcing facilitation, which comprises 17 activities, and by introducing seven perceived AI affordances of macro-task crowdsourcing facilitation. We followed a two-stage, bottom-up approach consisting of an initial development stage comprising two literature searches, and a subsequent validation & refinement stage involving two macro-task crowdsourcing initiatives and six expert interviews. Our results could increase the efficiency of facilitation activities and the effectiveness of macro-task crowdsourcing, ultimately contributing to tackling wicked problems such as those underlying the sustainable development goals.
Data Availability
Not applicable.
Code Availability
Not applicable.
References
Abhinav K, Dubey A, Jain S, Bhatia GK, McCartin B, Bhardwaj N (2018) “Crowdassistant: a virtual buddy for crowd worker,” in Proceedings of the 5th International Workshop on Crowd Sourcing in Software Engineering, pp. 17–20 (doi: https://doi.org/10.1145/3195863.3195865)
Achmat L, Brown I (2019) “Artificial intelligence affordances for business innovation: a systematic review of literature,” in Proceedings of 4th International Conference on the Internet, Cyber Security and Information Systems, pp. 1–12
Adla A, Zarate P, Soubie J-L (2011) A proposal of toolkit for GDSS facilitators. Group Decis Negot 20(1):57–77. doi: https://doi.org/10.1007/s10726-010-9204-8)
Alabduljabbar R, Al-Dossari H (2016) “A Task Ontology-based Model for Quality Control in Crowdsourcing Systems,” in Proceedings of the International Conference on Research in Adaptive and Convergent Systems, pp. 22–28 (doi: https://doi.org/10.1145/2987386.2987413)
Alford J, Head BW (2017) Wicked and less wicked problems: a typology and a contingency framework. Policy and Society 36(3):397–413. doi: https://doi.org/10.1080/14494035.2017.1361634)
Alkharashi A, Renaud K (2018) “Privacy in crowdsourcing: a systematic review,” in ISC 2018: Information Security, pp. 387–400 (doi: https://doi.org/10.1007/978-3-319-99136-8_21)
Alsheibani S, Cheung Y, Messom C (2018) “Artificial intelligence adoption: AI-readiness at firm-level,” in Proceedings of the 22nd Pacific Asia Conference on Information Systems (PACIS 2018), Association for Information Systems
Antunes P, Ho T (2001) The design of a GDSS meeting preparation tool. Group Decis Negot 10(1):5–25. doi: https://doi.org/10.1023/A:1008752727069)
Asatiani A, Malo P, Nagbøl PR, Penttinen E, Rinta-Kahila T, Salovaara A (2021) Sociotechnical Envelopment of Artificial Intelligence: An Approach to Organizational Deployment of Inscrutable Artificial Intelligence Systems. J Association Inform Syst 22(2):8. doi: https://doi.org/10.17705/1jais.00664)
Askay D (2017) “A conceptual framework for investigating organizational control and resistance in crowd-based platforms,” in Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS 2017)
Assis Neto FR, Santos CAS (2018) Understanding crowdsourcing projects: A systematic review of tendencies, workflow, and quality management. Inf Process Manag 54(4):490–506. doi: https://doi.org/10.1016/j.ipm.2018.03.006)
Autio E, Nambisan S, Thomas LDW, Wright M (2018) Digital affordances, spatial affordances, and the genesis of entrepreneurial ecosystems. Strateg Entrepreneurship J 12(1):72–95. doi: https://doi.org/10.1002/sej.1266)
Azadegan A, Kolfschoten G (2014) An assessment framework for practicing facilitator. Group Decis Negot 23(5):1013–1045. doi: https://doi.org/10.1007/s10726-012-9332-4)
Bawack RE, Wamba F, Carillo KDA (2019) “Artificial intelligence in practice: implications for IS research,” in Proceedings of the 25th Americas Conference on Information Systems (AMCIS 2019), Association for Information Systems
Bayer S, Gimpel H, Rau D (2020) IoT-commerce - opportunities for customers through an affordance lens. Electron Markets. doi: https://doi.org/10.1007/s12525-020-00405-8)
Belleflamme P, Lambert T, Schwienbacher A (2014) Crowdfunding: Tapping the right crowd. J Bus Ventur 29(5):585–609. doi: https://doi.org/10.1016/j.jbusvent.2013.07.003)
Benbya H, Pachidi S, Jarvenpaa S (2021) Special Issue Editorial: Artificial Intelligence in Organizations: Implications for Information Systems Research. J Association Inform Syst 22(2):10. doi: https://doi.org/10.17705/1jais.00662)
Blohm I, Leimeister JM, Krcmar H (2013) Crowdsourcing: how to benefit from (too) many great ideas. MIS Q Exec 12:4
Blohm I, Zogaj S, Bretschneider U, Leimeister JM (2018) How to manage crowdsourcing platforms effectively? Calif Manag Rev 60(2):122–149. doi: https://doi.org/10.1177/0008125617738255)
Blohm I, Zogaj S, Bretschneider U, Leimeister JM (2020) How to Manage Crowdsourcing Platforms Effectively. NIM Mark Intell Rev 12(1):18–23. doi: https://doi.org/10.2478/NIMMIR-2020-0003)
Bogers M, Chesbrough H, Moedas C (2018) Open Innovation: Research, Practices, and Policies. Calif Manag Rev 60(2):5–16. doi: https://doi.org/10.1177/0008125617745086)
Bostrom RP, Anson R, Clawson VK (1993) Group facilitation and group support systems. In: Group support systems: New perspectives (8), pp 146–168
Boughzala I, de Vreede T, Nguyen C, de Vreede G-J (2014) “Towards a Maturity Model for the Assessment of Ideation in Crowdsourcing Projects,” in Proceedings of the 47th Hawaii International Conference on System Sciences (HICSS 2014), pp. 483–490 (doi: https://doi.org/10.1109/HICSS.2014.67)
Bozzon A, Brambilla M, Ceri S, Silvestri M, Vesci G (2013) “Choosing the right crowd: expert finding in social networks,” in Proceedings of the 16th International Conference on Extending Database Technology, pp. 637–648 (doi: https://doi.org/10.1145/2452376.2452451)
Briggs RO, Kolfschoten GL, de Vreede G-J, Lukosch S, Albrecht CC (2013) Facilitator-in-a-box: process support applications to help practitioners realize the potential of collaboration technology. J Manage Inform Syst 29(4):159–194. doi: https://doi.org/10.2753/MIS0742-1222290406)
Bruno JF, Stachowicz JJ, Bertness MD (2003) Inclusion of facilitation into ecological theory. Trends Ecol Evol 18(3):119–125. doi: https://doi.org/10.1016/S0169-5347(02)00045-9)
Brynjolfsson E, McAfee A (2017) The business of artificial intelligence. Harvard Business Review, pp 1–20
Brynjolfsson E, Rock D, Syverson C (2017) Artificial intelligence and the modern productivity paradox: a clash of expectations and statistics. National Bureau of Economic Research
Burlamaqui L, Dong A (2015) “The use and misuse of the concept of affordance,” in Design Computing and Cognition, pp. 295–311 (doi: https://doi.org/10.1007/978-3-319-14956-1_17)
Cer D, Yang Y, Kong S, Hua N, Limtiaco N, St. John R, Constant N, Guajardo-Cespedes M, Yuan S, Tar C, Sung Y-H, Strope B, Kurzweil R (2018) Universal sentence encoder. arXiv preprint arXiv:1803.11175
Chan J, Dang S, Dow SP (2016) “Improving Crowd Innovation with Expert Facilitation,” in Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 1223–1235 (doi: https://doi.org/10.1145/2818048.2820023)
Chittilappilly AI, Chen L, Amer-Yahia S (2016) A survey of general-purpose crowdsourcing techniques. IEEE Trans Knowl Data Eng 28(9):2246–2266. doi: https://doi.org/10.1109/TKDE.2016.2555805)
Chiu C-M, Liang T-P, Turban E (2014) What can crowdsourcing do for decision support? Decis Support Syst 65:40–49. doi: https://doi.org/10.1016/j.dss.2014.05.010)
Clawson VK, Bostrom RP (1996) Research-driven facilitation training for computer-supported environments. Group Decis Negot 5(1):7–29. doi: https://doi.org/10.1007/BF02404174)
Cullina E, Conboy K, Morgan L (2015) “Measuring the crowd: a preliminary taxonomy of crowdsourcing metrics,” in Proceedings of the 11th International Symposium on Open Collaboration (doi: https://doi.org/10.1145/2788993.2789841)
de Vreede G-J, Briggs R (2018) “Collaboration engineering: reflections on 15 years of research & practice,” in Proceedings of the 51st Hawaii International Conference on System Sciences (HICSS 2018)
de Vreede G-J, Briggs RO (2019) A program of collaboration engineering research and practice: contributions, insights, and future directions. J Manage Inform Syst 36(1):74–119. doi: https://doi.org/10.1080/07421222.2018.1550552)
de Vreede G-J, Niederman F, Paarlberg I (2002) Towards an instrument to measure participants’ perceptions on facilitation in group support systems meetings. Group Decis Negot 11(2):127–144. doi: https://doi.org/10.1023/A:1015225811547)
de Vreede T, Steele L, de Vreede G-J, Briggs R (2020) “LeadLets: Towards a Pattern Language for Leadership Development of Human and AI Agents,” in Proceedings of the 53rd Hawaii International Conference on System Sciences, T. Bui (ed.), Hawaii International Conference on System Sciences (doi: https://doi.org/10.24251/HICSS.2020.084)
Derrick DC, Read A, Nguyen C, Callens A, de Vreede G-J (2013) “Automated group facilitation for gathering wide audience end-user requirements,” in Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS 2013), pp. 195–204 (doi: https://doi.org/10.1109/HICSS.2013.109)
Dissanayake I, Nerur S, Zhang J (2019) “Team formation and performance in online crowdsourcing competitions: the role of homophily and diversity in solver characteristics,” in Proceedings of the 40th International Conference on Information Systems (ICIS 2019), Association for Information Systems
Dissanayake I, Zhang J, Gu B (2014) “Virtual team performance in crowdsourcing contests: a social network perspective,” in Proceedings of the 35th International Conference on Information Systems (ICIS 2014), Association for Information Systems
Dissanayake I, Zhang J, Gu B (2015a) Task division for team success in crowdsourcing contests: resource allocation and alignment effects. J Manage Inform Syst 32(2):8–39. doi: https://doi.org/10.1080/07421222.2015.1068604)
Dissanayake I, Zhang J, Yuan F, Wang J (2015b) “Peer-recognition and performance in online crowdsourcing communities,” in Proceedings of the 48th Hawaii International Conference on System Sciences (HICSS 2015), pp. 4262–4265 (doi: https://doi.org/10.1109/HICSS.2015.646)
Dremel C, Herterich MM, Wulf J, vom Brocke J (2020) Actualizing big data analytics affordances: a revelatory case study. Inf Manag 57(1):103121. doi: https://doi.org/10.1016/j.im.2018.10.007)
Du W, Pan SL, Leidner DE, Ying W (2019) Affordances, experimentation and actualization of FinTech: a blockchain implementation study. J Strateg Inf Syst 28(1):50–65. doi: https://doi.org/10.1016/j.jsis.2018.10.002)
Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T, Duan Y, Dwivedi R, Edwards J, Eirug A, Galanos V, Ilavarasan PV, Janssen M, Jones P, Kar AK, Kizgin H, Kronemann B, Lal B, Lucini B, Medaglia R, Le Meunier-FitzHugh K, Le Meunier-FitzHugh LC, Misra S, Mogaji E, Sharma SK, Singh JB, Raghavan V, Raman R, Rana NP, Samothrakis S, Spencer J, Tamilmani K, Tubadji A, Walton P, Williams MD (2021) Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int J Inf Manag 57:101994. doi: https://doi.org/10.1016/j.ijinfomgt.2019.08.002)
Erickson LB, Petrick I, Trauth EM (2012) “Organizational Uses of the Crowd: Developing a Framework for the Study of Crowdsourcing,” in Proceedings of the 50th Annual Conference on Computers and People Research, New York, NY, USA: ACM, pp. 155–158 (doi: https://doi.org/10.1145/2214091.2214133)
Estellés-Arolas E, González-Ladrón-de-Guevara F (2012) Towards an integrated crowdsourcing definition. J Inform Sci 38:2. doi: https://doi.org/10.1177/0165551512437638)
Faik I, Barrett M, Oborn E (2020) How Information Technology Matters in Societal Change: An Affordance-Based Institutional Perspective. MIS Q 44:3. doi: https://doi.org/10.25300/MISQ/2020/14193)
Faullant R, Dolfus G (2017) Everything community? Destructive processes in communities of crowdsourcing competitions. Bus Process Manag J 23(6):1108–1128. doi: https://doi.org/10.1108/BPMJ-10-2016-0206
Franco LA, Nielsen MF (2018) Examining group facilitation in situ: the use of formulations in facilitation practice. Group Decis Negot 27:5. doi: https://doi.org/10.1007/s10726-018-9577-7)
Fritz S, See L, Carlson T, Haklay M, Oliver JL, Fraisl D, Mondardini R, Brocklehurst M, Shanley LA, Schade S, Wehn U, Abrate T, Anstee J, Arnold S, Billot M, Campbell J, Espey J, Gold M, Hager G, He S, Hepburn L, Hsu A, Long D, Masó J, McCallum I, Muniafu M, Moorthy I, Obersteiner M, Parker AJ, Weisspflug M, West S (2019) Citizen science and the United Nations Sustainable Development Goals. Nat Sustain 2(10):922–930. doi: https://doi.org/10.1038/s41893-019-0390-3)
Füller J, Hutter K, Kröger N (2021) Crowdsourcing as a service – from pilot projects to sustainable innovation routines. Int J Project Manage 39:2. doi: https://doi.org/10.1016/j.ijproman.2021.01.005)
Gaver WW (1991) “Technology affordances,” in Proceedings of the SIGCHI conference on Human factors in computing systems Reaching through technology, S. P. Robertson, G. M. Olson and J. S. Olson (eds.), pp. 79–84 (doi: https://doi.org/10.1145/108844.108856)
Geiger D, Schader M (2014) Personalized task recommendation in crowdsourcing information systems — Current state of the art. Decis Support Syst 65:3–16. doi: https://doi.org/10.1016/j.dss.2014.05.007)
Geiger D, Seedorf S, Schulze T, Nickerson RC, Schader M (2011) “Managing the Crowd: Towards a Taxonomy of Crowdsourcing Processes,” in Proceedings of the 17th Americas Conference on Information Systems (AMCIS 2011), Association for Information Systems
Ghezzi A, Gabelloni D, Martini A, Natalicchio A (2018) Crowdsourcing: a review and suggestions for future research. Int J Manage Reviews 20(2):343–363. doi: https://doi.org/10.1111/ijmr.12135)
Gibson JJ (1977) The theory of affordances. In: Perceiving, acting, and knowing: toward an ecological psychology. Erlbaum, Hillsdale
Gimpel H, Graf-Drasch V, Laubacher RJ, Wöhl M (2020) Facilitating like Darwin: supporting cross-fertilisation in crowdsourcing. Decis Support Syst. doi: https://doi.org/10.1016/j.dss.2020.113282)
Griffith TL, Sawyer JE, Poole MS (2019) Systems Savvy: Practical Intelligence for Transformation of Sociotechnical Systems. Group Decis Negot 28(3):475–499. doi: https://doi.org/10.1007/s10726-019-09619-4)
Haas D, Ansel J, Gu L, Marcus A (2015) “Argonaut: macrotask crowdsourcing for complex data processing,” Proceedings of the VLDB Endowment (8:12), pp. 1642–1653 (doi: https://doi.org/10.14778/2824032.2824062)
Haenlein M, Kaplan A (2019) A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif Manag Rev 61(4):5–14. doi: https://doi.org/10.1177/0008125619864925)
Head BW, Alford J (2015) Wicked problems: implications for public policy and management. Adm Soc 47(6):711–739. doi: https://doi.org/10.1177/0095399713481601)
Hetmank L (2013) “Components and functions of crowdsourcing systems - a systematic literature review,” in Wirtschaftsinformatik Proceedings (WI 2013), p. 2013
Hinsen S, Hofmann P, Jöhnk J, Urbach N (2022) “How Can Organizations Design Purposeful Human-AI Interactions: A Practical Perspective From Existing Use Cases and Interviews,” in Proceedings of the 55th Hawaii International Conference on System Sciences (HICSS 2022)
Hofmann P, Jöhnk J, Protschky D, Urbach N (2020) “Developing purposeful AI use cases - a structured method and its application in project management,” in Proceedings of the 15th International Conference on Wirtschaftsinformatik (WI 2020), pp. 9–11
Hofmann P, Rückel T, Urbach N (2021) “Innovating with Artificial Intelligence: Capturing the Constructive Functional Capabilities of Deep Generative Learning,” in Proceedings of the 54th Hawaii International Conference on System Sciences (HICSS 2021) (doi: https://doi.org/10.24251/HICSS.2021.669)
Hossain M, Kauranen I (2015) Crowdsourcing: a comprehensive literature review. Strategic Outsourcing: An International Journal 8(1):2–22. doi: https://doi.org/10.1108/SO-12-2014-0029)
Hosseini M, Phalp K, Taylor J, Ali R (2015) On the configuration of crowdsourcing projects. Int J Inf Syst Model Des 6(3):27–45. doi: https://doi.org/10.4018/IJISMD.2015070102
Howe J (2006a) “Crowdsourcing: a definition,” available at https://crowdsourcing.typepad.com/cs/2006/06/crowdsourcing_a.html
Howe J (2006b) The rise of crowdsourcing. Wired magazine 14:6
Iansiti M, Lakhani KR (2020) Competing in the Age of AI. Harvard Business Review Press
Introne J, Laubacher R, Olson G, Malone T (2011) “The Climate CoLab: large scale model-based collaborative planning,” in International Conference on Collaboration Technologies and Systems, pp. 40–47 (doi: https://doi.org/10.1109/CTS.2011.5928663)
Introne J, Laubacher R, Olson G, Malone T (2013) Solving Wicked Social Problems with Socio-computational Systems. KI - Künstliche Intelligenz 27(1):45–52. doi: https://doi.org/10.1007/s13218-012-0231-2)
Ito T, Hadfi R, Suzuki S (2021) An Agent that Facilitates Crowd Discussion. Group Decis Negot. doi: https://doi.org/10.1007/s10726-021-09765-8)
Jalowski M, Fritzsche A, Möslein KM (2019) Facilitating collaborative design: a toolkit for integrating persuasive technologies in design activities. Procedia CIRP 84:61–67. doi: https://doi.org/10.1016/j.procir.2019.04.290)
Jespersen KR (2018) Crowdsourcing design decisions for optimal integration into the company innovation system. Decis Support Syst 115:52–63. doi: https://doi.org/10.1016/j.dss.2018.09.005)
Kamoun F, Alhadidi D, Maamar Z (2015) Weaving Risk Identification into Crowdsourcing Lifecycle. Procedia Comput Sci 56:41–48. doi: https://doi.org/10.1016/j.procs.2015.07.181)
Kampf CE (2019) Intermingling AI and IoT affordances: the expansion of social opportunities for service users and providers. Scandinavian J Inform Syst 31:2
Kaplan A, Haenlein M (2019) Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 62(1):15–25. doi: https://doi.org/10.1016/j.bushor.2018.08.004)
Keller R, Stohr A, Fridgen G, Lockl J, Rieger A (2019) “Affordance-experimentation-actualization theory in artificial intelligence research - a predictive maintenance story,” in Proceedings of the 40th International Conference on Information Systems (ICIS 2019), Association for Information Systems
Khalifa M, Kwok R-W, Davison R (2002) The effects of process and content facilitation restrictiveness on GSS-mediated collaborative learning. Group Decis Negot 11(5):345–361. doi: https://doi.org/10.1023/A:1020449317854)
Kim S, Robert LP (2019) “Crowdsourcing Coordination: A Review and Research Agenda for Crowdsourcing Coordination Used for Macro-tasks,” in Macrotask Crowdsourcing, V.-J. Khan, K. Papangelis, I. Lykourentzou and P. Markopoulos (eds.), pp. 17–43 (doi: https://doi.org/10.1007/978-3-030-12334-5_2)
Kiruthika U, Somasundaram TS, Raja SKS (2020) Lifecycle Model of a Negotiation Agent: A Survey of Automated Negotiation Techniques. Group Decis Negot 29(6):1239–1262. doi: https://doi.org/10.1007/s10726-020-09704-z)
Kittur A, Nickerson JV, Bernstein M, Gerber E, Shaw A, Zimmerman J, Lease M, Horton J (2013) “The future of crowd work,” in Proceedings of the 2013 conference on Computer supported cooperative work, p. 1301 (doi: https://doi.org/10.1145/2441776.2441923)
Kocsis D, de Vreede G-J (2016) “Towards a taxonomy of ethical considerations in crowdsourcing,” in Proceedings of the 22nd Americas Conference on Information Systems (AMCIS 2016), Association for Information Systems
Kohler T, Chesbrough H (2019) From collaborative community to competitive market: the quest to build a crowdsourcing platform for social innovation. R & D Management 49:3. doi: https://doi.org/10.1111/radm.12372)
Kolfschoten GL, den Hengst-Bruggeling M, de Vreede G-J (2007) Issues in the design of facilitated collaboration processes. Group Decis Negot 16(4):347–361. doi: https://doi.org/10.1007/s10726-006-9054-6
Kolfschoten GL, Grünbacher P, Briggs RO (2011) Modifiers for quality assurance in group facilitation. Group Decis Negot 20(5):685–705. doi: https://doi.org/10.1007/s10726-011-9234-x)
Laengle S, Modak NM, Merigo JM, Zurita G (2018) Twenty-five years of group decision and negotiation: a bibliometric overview. Group Decis Negot 27(4):505–542. doi: https://doi.org/10.1007/s10726-018-9582-x)
Leal Filho W, Wall T, Rui Mucova SA, Nagy GJ, Balogun A-L, Luetz JM, Ng AW, Kovaleva M, Azam S, Alves FM, Guevara F, Matandirotya Z, Skouloudis NR, Tzachor A, Malakar A, Gandhi O (2022) Deploying artificial intelligence for climate change adaptation. Technol Forecast Soc Chang 180:121662. doi: https://doi.org/10.1016/j.techfore.2022.121662)
Lehrer C, Wieneke A, vom Brocke J, Jung R, Seidel S (2018) How big data analytics enables service innovation: materiality, affordance, and the individualization of service. J Manage Inform Syst 35(2):424–460. doi: https://doi.org/10.1080/07421222.2018.1451953
Leimeister JM (2010) Collective intelligence. Bus Inform Syst Eng 2(4):245–248. doi: https://doi.org/10.1007/s12599-010-0114-8)
Leonardi PM (2011) When flexible routines meet flexible technologies: affordance, constraint, and the imbrication of human and material agencies. MIS Q 35(1):147–167. doi: https://doi.org/10.2307/23043493
Liu S, Xia F, Zhang J, Pan W, Zhang Y (2016) Exploring the trends, characteristic antecedents, and performance consequences of crowdsourcing project risks. Int J Project Manage 34(8):1625–1637. doi: https://doi.org/10.1016/j.ijproman.2016.09.002)
Lopez M, Vukovic M, Laredo J (2010) “Peoplecloud service for enterprise crowdsourcing,” in IEEE International Conference on Services Computing, pp. 538–545 (doi: https://doi.org/10.1109/SCC.2010.74)
Lykourentzou I, Khan V-J, Papangelis K, Markopoulos P (2019) “Macrotask crowdsourcing: an integrated definition,” in Macrotask Crowdsourcing, V.-J. Khan, K. Papangelis, I. Lykourentzou and P. Markopoulos (eds.), pp. 1–13 (doi: https://doi.org/10.1007/978-3-030-12334-5_1)
Maedche A, Legner C, Benlian A, Berger B, Gimpel H, Hess T, Hinz O, Morana S, Söllner M (2019) AI-based digital assistants. Bus Inform Syst Eng 61(4):535–544. doi: https://doi.org/10.1007/s12599-019-00600-8)
Maister DH, Lovelock CH (1982) Managing facilitator services. Sloan Manag Rev 23(4):19
Malhotra A, Majchrzak A, Lyytinen K (2021) Socio-Technical Affordances for Large-Scale Collaborations: Introduction to a Virtual Special Issue. Organ Sci 32(5):1371–1390. doi: https://doi.org/10.1287/orsc.2021.1457)
Malone TW, Laubacher R, Dellarocas C (2010) The collective intelligence genome. MIT Sloan Management Review 51(3):21
Manyika J, Lund S, Bughin J, Woetzel JR, Stamenov K, Dhingra D (2016) Digital globalization: The new era of global flows. McKinsey Global Institute San Francisco
Markus ML, Silver M (2008) A Foundation for the Study of IT Effects: A New Look at DeSanctis and Poole’s Concepts of Structural Features and Spirit. J Association Inform Syst 9(10):609–632. doi: https://doi.org/10.17705/1jais.00176)
McCardle-Keurentjes M, Rouwette EAJA (2018) Asking questions: a sine qua non of facilitation in decision support? Group Decis Negot 27:5. doi: https://doi.org/10.1007/s10726-018-9573-y)
McGahan AM, Bogers MLAM, Chesbrough H, Holgersson M (2021) Tackling Societal Challenges with Open Innovation. Calif Manag Rev 63(2):49–61. doi: https://doi.org/10.1177/0008125620973713)
Myers MD, Newman M (2007) The qualitative interview in IS research: examining the craft. Inf Organ 17(1):2–26. doi: https://doi.org/10.1016/j.infoandorg.2006.11.001)
Nagar Y, de Boer P, Garcia ACB (2016) “Accelerating the review of complex intellectual artifacts in crowdsourced innovation challenges,” in Proceedings of the 37th International Conference on Information Systems (ICIS 2016), Association for Information Systems
Nascimento AM, da Cunha MAV, Cortez S, de Meirelles F, Scornavacca E, de Melo VV (2018) “A literature analysis of research on artificial intelligence in management information system (MIS),” in Proceedings of the 24th Americas Conference on Information Systems (AMCIS 2018), Association for Information Systems
Nguyen C, Oh O, Kocsis D, de Vreede G-J (2013) “Crowdsourcing as lego: unpacking the building blocks of crowdsourcing collaboration processes,” in Proceedings of the 34th International Conference on Information Systems (ICIS 2013), Association for Information Systems
Nguyen C, Tahmasbi N, de Vreede T, de Vreede G-J, Oh O, Reiter-Palmon R (2015) “Participant Engagement in Community Crowdsourcing,” in Proceedings of the 23th European Conference on Information Systems (ECIS 2015), Association for Information Systems
Norman DA (1999) Affordance, conventions, and design. Interactions 6(3):38–43. doi: https://doi.org/10.1145/301153.301168)
Onuchowska A, de Vreede G-J (2018) “Disruption and Deception in Crowdsourcing: Towards a Crowdsourcing Risk Framework,” in Proceedings of the 51st Hawaii International Conference on System Sciences (HICSS 2018) (doi: https://doi.org/10.24251/HICSS.2018.498)
Ooms W, Piepenbrink R (2021) Open Innovation for Wicked Problems: Using Proximity to Overcome Barriers. Calif Manag Rev 63(2):62–100. doi: https://doi.org/10.1177/0008125620968636)
Ostern N, Rosemann M (2021) “A Framework for Digital Affordances,” in Proceedings of the 29th European Conference on Information Systems (ECIS 2021), Association for Information Systems
Pedersen J, Kocsis D, Tripathi A, Tarrell A, Weerakoon A, Tahmasbi N, Xiong J, Deng W, Oh O, de Vreede G-J (2013) “Conceptual foundations of crowdsourcing: a review of IS research,” in Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS 2013), pp. 579–588 (doi: https://doi.org/10.1109/HICSS.2013.143)
Pohlisch J (2021) “Managing the Crowd: A Literature Review of Empirical Studies on Internal Crowdsourcing,” in Internal Crowdsourcing in Companies, pp. 27–53 (doi: https://doi.org/10.1007/978-3-030-52881-2_3)
Pumplun L, Tauchert C, Heidt M (2019) “A new organizational chassis for artificial intelligence-exploring organizational readiness factors,” in Proceedings of the 27th European Conference on Information Systems (ECIS 2019), Association for Information Systems
Qiao L, Tang F, Liu J (2018) “Feedback based high-quality task assignment in collaborative crowdsourcing,” in IEEE 32nd International Conference on Advanced Information Networking and Applications, pp. 1139–1146 (doi: https://doi.org/10.1109/AINA.2018.00163)
Rai A (2020) Explainable AI: from black box to glass box. J Acad Mark Sci 48(1):137–141. doi: https://doi.org/10.1007/s11747-019-00710-5
Rai A, Constantinides P, Sarker S (2019) “Next Generation Digital Platforms: Toward Human-AI Hybrids,” MIS Quarterly (43:1), pp. iii-ix
Retelny D, Robaszkiewicz S, To A, Lasecki WS, Patel J, Rahmati N, Doshi T, Valentine M, Bernstein MS (2014) “Expert crowdsourcing with flash teams,” in Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 75–85 (doi: https://doi.org/10.1145/2642918.2647409)
Rhyn M, Blohm I (2017) “Combining collective and artificial intelligence: towards a design theory for decision support in crowdsourcing,” in Proceedings of the 25th European Conference on Information Systems (ECIS 2017), Association for Information Systems
Rhyn M, Blohm I, Leimeister JM (2017) “Understanding the emergence and recombination of distant knowledge on crowdsourcing platforms,” in Proceedings of the 38th International Conference on Information Systems (ICIS 2017), Association for Information Systems
Rhyn M, Leicht N, Blohm I, Leimeister JM (2020) “Opening the Black Box: How to Design Intelligent Decision Support Systems for Crowdsourcing,” in Proceedings of the 15th International Conference on Wirtschaftsinformatik (WI 2020), pp. 50–65
Riedl C, Woolley AW (2017) Teams vs. crowds: a field test of the relative contribution of incentives, member ability, and emergent collaboration to crowd-based problem solving performance. Acad Manage Discoveries 3(4):382–403. doi: https://doi.org/10.5465/amd.2015.0097
Rippa P, Quinto I, Lazzarotti V, Pellegrini L (2016) Role of innovation intermediaries in open innovation practices: differences between micro-small and medium-large firms. Int J Bus Innov Res 11(3):377. doi: https://doi.org/10.1504/IJBIR.2016.078872
Robert LP (2019) “Crowdsourcing controls: a review and research agenda for crowdsourcing controls used for macro-tasks,” in Macrotask Crowdsourcing, V.-J. Khan, K. Papangelis, I. Lykourentzou and P. Markopoulos (eds.), pp. 45–126 (doi: https://doi.org/10.1007/978-3-030-12334-5_3)
Russell SJ, Norvig P (2021) Artificial Intelligence: A Modern Approach. Pearson, Hoboken
Rzepka C, Berger B (2018) “User interaction with AI-enabled systems: a systematic review of IS research,” in Proceedings of the 39th International Conference on Information Systems (ICIS 2018), Association for Information Systems
Schenk E, Guittard C (2011) Towards a characterization of crowdsourcing practices. J Innov Econ 7:1. doi: https://doi.org/10.3917/jie.007.0093
Schlagwein D, Cecez-Kecmanovic D, Hanckel B (2019) Ethical norms and issues in crowdsourcing practices: a Habermasian analysis. Inform Syst J 29(4):811–837. doi: https://doi.org/10.1111/isj.12227
Schmitz H, Lykourentzou I (2018) Online sequencing of non-decomposable macrotasks in expert crowdsourcing. ACM Trans Social Comput 1(1):1–33. doi: https://doi.org/10.1145/3140459
Schoormann T, Strobel G, Möller F, Petrik D (2021) “Achieving Sustainability with Artificial Intelligence - A Survey of Information Systems Research,” in Proceedings of the 42nd International Conference on Information Systems (ICIS 2021), Association for Information Systems
Schreier M (2012) Qualitative content analysis in practice. Sage publications
Schultze U, Avital M (2011) Designing interviews to generate rich data for information systems research. Inf Organ 21(1):1–16
Seeber I, Bittner E, Briggs RO, de Vreede G-J, de Vreede T, Druckenmiller D, Maier R, Merz AB, Oeste-Reiß S, Randrup N, et al. (2018) “Machines as teammates: A collaboration research agenda,” in Proceedings of the 51st Hawaii International Conference on System Sciences (HICSS 2018)
Seeber I, Bittner E, Briggs RO, de Vreede T, de Vreede G-J, Elkins A, Maier R, Merz AB, Oeste-Reiß S, Randrup N, Schwabe G, Söllner M (2020) Machines as teammates: a research agenda on AI in team collaboration. Inf Manag 57(2):103174. doi: https://doi.org/10.1016/j.im.2019.103174
Seeber I, Waizenegger L, Demetz L, Merz AB, de Vreede G-J, Maier R, Weber B (2016) “IT-supported formal control: how perceptual (in) congruence affects the convergence of crowd-sourced ideas,” in Proceedings of the 37th International Conference on Information Systems (ICIS 2016), Association for Information Systems
Shafiei Gol E, Stein M-K, Avital M (2019) Crowdwork platform governance toward organizational value creation. J Strateg Inf Syst 28(2):175–195. doi: https://doi.org/10.1016/j.jsis.2019.01.001
Siemon D (2022) Elaborating Team Roles for Artificial Intelligence-based Teammates in Human-AI Collaboration. Group Decis Negot. doi: https://doi.org/10.1007/s10726-022-09792-z
Simon HA (1995) Artificial intelligence: an empirical science. Artif Intell 77:1. doi: https://doi.org/10.1016/0004-3702(95)00039-H
Sonnenberg C, vom Brocke J (2012) “Evaluation patterns for design science research artefacts,” in Practical Aspects of Design Science, M. Helfert and B. Donnellan (eds.), Springer, Berlin, Heidelberg, pp. 71–83 (doi: https://doi.org/10.1007/978-3-642-33681-2_7)
Sousa MJ, Rocha Á (2020) Decision-Making and Negotiation in Innovation & Research in Information Science. Group Decis Negot. doi: https://doi.org/10.1007/s10726-020-09712-z
Steffen JH, Gaskin JE, Meservy TO, Jenkins JL, Wolman I (2019) Framework of Affordances for Virtual Reality and Augmented Reality. J Manage Inform Syst 36(3):683–729. doi: https://doi.org/10.1080/07421222.2019.1628877
Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, Hirschberg J, Kalyanakrishnan S, Kamar E, Kraus S, et al. (2016) “Artificial intelligence and life in 2030,” One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, p. 52
Suthers DD (2006) Technology affordances for intersubjective meaning making: A research agenda for CSCL. Int J Computer-Supported Collaborative Learn 1(3):315–337. doi: https://doi.org/10.1007/s11412-006-9660-y
Tavanapour N, Bittner EAC (2018a) “Automated facilitation for idea platforms: design and evaluation of a chatbot prototype,” in Proceedings of the 39th International Conference on Information Systems (ICIS 2018), Association for Information Systems
Tavanapour N, Bittner EAC (2018b) “The Collaboration of Crowd Workers,” Research-in-Progress Papers
Tazzini G, Montelisciani G, Gabelloni D, Paganucci S, Fantoni G (2013) “A structured team building method for collaborative crowdsourcing,” in 2013 International Conference on Engineering, Technology and Innovation (ICE) & IEEE International Technology Management Conference, IEEE, pp. 1–11 (doi: https://doi.org/10.1109/ITMC.2013.7352708)
Te’eni D, Avital M, Hevner A, Schoop M, Schwartz D (2019) “It Takes Two to Tango: Choreographing the Interactions between Human and Artificial Intelligence,” in Proceedings of the 27th European Conference on Information Systems (ECIS 2019), Association for Information Systems
Toubia O, Netzer O (2017) Idea Generation, Creativity, and Prototypicality. Mark Sci 36(1):1–20. doi: https://doi.org/10.1287/mksc.2016.0994
Troll J, Naef S, Blohm I (2017) A Mixed Method Approach to Understanding Crowdsourcees’ Engagement Behavior, available at https://aisel.aisnet.org/icis2017/HumanBehavior/Presentations/34
United Nations (2015) “Transforming our world: the 2030 agenda for sustainable development”
Valentine MA, Retelny D, To A, Rahmati N, Doshi T, Bernstein MS (2017) “Flash Organizations: Crowdsourcing Complex Work by Structuring Crowds As Organizations,” in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3523–3537 (doi: https://doi.org/10.1145/3025453.3025811)
Vianna F, Peinado J, Graeml AR (2019) “Crowdsourcing platforms: objective, activities and motivation,” in Proceedings of the 25th Americas Conference on Information Systems (AMCIS 2019), Association for Information Systems
Vivacqua AS, Marques LC, Ferreira MS, de Souza JM (2011) Computational indicators to assist meeting facilitation. Group Decis Negot 20(5):667–684. doi: https://doi.org/10.1007/s10726-011-9235-9
Volkoff O, Strong DM (2013) Critical realism and affordances: theorizing IT-associated organizational change processes. MIS Q 37:3. doi: https://doi.org/10.25300/MISQ/2013/37.3.07
Volkoff O, Strong DM (2017) “Affordance theory and how to use it in IS research,” The Routledge Companion to Management Information Systems, pp. 232–245
vom Brocke J, Simons A, Riemer K, Niehaves B, Plattfaut R, Cleven A (2015) “Standing on the shoulders of giants: challenges and recommendations of literature search in information systems research,” Communications of the Association for Information Systems (37) (doi: https://doi.org/10.17705/1CAIS.03709)
Vukicevic A, Vukicevic M, Radovanovic S, Delibasic B (2022) BargCrEx: A System for Bargaining Based Aggregation of Crowd and Expert Opinions in Crowdsourcing. Group Decis Negot 1–30. doi: https://doi.org/10.1007/s10726-022-09783-0
Vukovic M, Bartolini C (2010) “Towards a research agenda for enterprise crowdsourcing,” in Leveraging Applications of Formal Methods, Verification, and Validation, T. Margaria and B. Steffen (eds.), pp. 425–434 (doi: https://doi.org/10.1007/978-3-642-16558-0_36)
Vukovic M, Laredo J, Rajagopal S (2010) “Challenges and Experiences in Deploying Enterprise Crowdsourcing Service,” in Web Engineering, B. Benatallah, F. Casati, G. Kappel and G. Rossi (eds.)
Vyas D, Chisalita CM, van der Veer GC (2006) “Affordance in interaction,” in Proceedings of the 13th European Conference on Cognitive Ergonomics: Trust and Control in Complex Socio-Technical Systems, p. 92 (doi: https://doi.org/10.1145/1274892.1274907)
Wang A, Pruksachatkun Y, Nangia N, Singh A, Michael J, Hill F, Levy O, Bowman S (2019) “SuperGLUE: A stickier benchmark for general-purpose language understanding systems,” Advances in Neural Information Processing Systems (32)
Wedel M, Ulbrich H (2021) “Systematization Approach for the Development and Description of an Internal Crowdsourcing System,” in Internal Crowdsourcing in Companies, pp. 55–78 (doi: https://doi.org/10.1007/978-3-030-52881-2_4)
Wiggins A, Crowston K (2011) “From Conservation to Crowdsourcing: A Typology of Citizen Science,” in Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS 2011), Kauai, HI, 4–7 January 2011, IEEE, pp. 1–10 (doi: https://doi.org/10.1109/HICSS.2011.207)
Wilson HJ, Daugherty PR (2018) Collaborative intelligence: humans and AI are joining forces. Harvard Business Rev 96(4):114–123
Winkler R, Briggs RO, de Vreede G-J, Leimeister JM, Oeste-Reiss S, Sollner M (2020) Modeling Support for Mass Collaboration in Open Innovation Initiatives—The Facilitation Process Model 2.0. IEEE Trans Eng Manage 1–15. doi: https://doi.org/10.1109/TEM.2020.2975938
Wolfswinkel JF, Furtmueller E, Wilderom CPM (2013) Using grounded theory as a method for rigorously reviewing literature. Eur J Inform Syst 22(1):45–55. doi: https://doi.org/10.1057/ejis.2011.51
Xia F, Liu S, Zhang J(2015) “How Social Subsystem and Technical Subsystem Risks Influence Crowdsourcing Performance,” in Proceedings of the 19th Pacific Asia Conference on Information Systems (PACIS 2015), Association for Information Systems
Xiang W, Sun L, You W, Yang C (2018) Crowdsourcing intelligent design. Front Inform Technol Electron Eng 19(1):126–138. doi: https://doi.org/10.1631/FITEE.1700810
Yin RK (2018) Case study research and applications: design and methods. SAGE Publications, Inc, Thousand Oaks, California
Zajonc RB (1965) Social facilitation. Science 149(3681):269–274. doi: https://doi.org/10.1126/science.149.3681.269
Zhao Y, Zhu Q (2014) Evaluation on crowdsourcing research: current status and future direction. Inform Syst Front 16(3):417–434. doi: https://doi.org/10.1007/s10796-012-9350-4
Zhao Y, Zhu Q (2016) Conceptualizing task affordance in online crowdsourcing context. Online Inf Rev 40(7):938–958. doi: https://doi.org/10.1108/OIR-06-2015-0192
Zheng Q, Wang W, Yu Y, Pan M, Shi X (2017) “Crowdsourcing complex task automatically by workflow technology,” in Management of Information, Process and Cooperation, J. Cao and J. Liu (eds.), pp. 17–30 (doi: https://doi.org/10.1007/978-981-10-3996-6_2)
Zogaj S, Bretschneider U (2014) “Analyzing governance mechanisms for crowdsourcing information systems: a multiple case analysis,” in Proceedings of the 22nd European Conference on Information Systems (ECIS 2014), Association for Information Systems
Zogaj S, Leicht N, Blohm I, Bretschneider U (2015) “Towards Successful Crowdsourcing Projects: Evaluating the Implementation of Governance Mechanisms,” in Proceedings of the 36th International Conference on Information Systems (ICIS 2015), Association for Information Systems
Zuchowski O, Posegga O, Schlagwein D, Fischbach K (2016) Internal crowdsourcing: conceptual framework, structured review, and research agenda. J Inform Technol 31(2):166–184. doi: https://doi.org/10.1057/jit.2016.14
Funding
Not applicable.
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of Interest/Competing interests:
The authors declare that they have no conflict of interest or competing interests.
Ethics Approval:
Not applicable.
Consent to Participate:
Not applicable.
Consent for Publication
Not applicable.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A - Literature Searches
1.1 Appendix A.1 - Papers for Macro-Task Crowdsourcing Facilitation Activities
1.2 Appendix A.2 - Papers for Artificial Intelligence Manifestations
1.3 Appendix A.3 - Assignments of the Found Statements About Artificial Intelligence
Appendix B - Refinement of the AI Manifestation Mapping
During the analysis of TCL, PSM, and their corresponding platforms, we derived 29 potential use-cases of AI-augmented facilitation, of which 10 were actualized and 19 were perceived as helpful for facilitating crowdsourcing initiatives. These 29 potential use-cases resulted in 18 distinct manifestations using the same matching approach as in step II). Three manifestations had not been found in our literature search. Thus, we extended our AI manifestation mapping (i.e., recognizing, in crowd coordination; recognizing, in risk management; reasoning, in task communication). These three manifestations could all be assigned to the existing seven affordances; hence, it was not necessary to change the affordances. Besides the potential use-cases and manifestations, we observed that the timing of the decision to use AI affected the potential for integration. In TCL, the decision to include AI was made only after we recognized that the workers had created many contributions during an exercise. Hence, considerations about the use of AI revolved around aggregating the contributions, which can be seen as the affordance 7) decision-making preparation. In PSM, a dedicated AI team had been set up during workflow design, which developed an extensive set of AI functionalities for facilitators, including use-cases of 6) worker profiling and 5) collaboration guidance. We conclude that facilitators can derive more benefit from AI if they are aware of its potential before the start of the exercise.
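To make the aggregation idea behind 7) decision-making preparation more concrete: a facilitator faced with hundreds of free-text contributions could let a simple script group near-duplicate ideas before review. The following Python sketch is a hypothetical stand-in (it is not the TCL script shown in Appendix D.1); it clusters contributions greedily by token-overlap (Jaccard) similarity, and the `threshold` value is chosen for illustration only.

```python
# Hypothetical sketch of contribution aggregation for facilitators.
# Assumption: contributions are short English free-text strings; a simple
# token-overlap measure is enough to surface near-duplicate ideas.

def tokens(text):
    """Lower-cased word set for a contribution."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity of two token sets (0.0 when both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def cluster_contributions(contributions, threshold=0.3):
    """Greedily assign each contribution to the first cluster whose seed
    is at least `threshold` similar; otherwise start a new cluster."""
    clusters = []  # list of (seed_token_set, [member contributions])
    for text in contributions:
        t = tokens(text)
        for seed, members in clusters:
            if jaccard(seed, t) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((t, [text]))
    return [members for _, members in clusters]

contributions = [
    "reduce plastic waste in city parks",
    "cut plastic waste in public parks",
    "install solar panels on school roofs",
]
groups = cluster_contributions(contributions)
print(len(groups))  # → 2: the two plastic-waste ideas merge into one cluster
```

In practice, a facilitator would review each cluster once instead of every contribution, which is exactly the kind of workload reduction the decision-making preparation affordance describes; production systems would use more robust text similarity than this sketch.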
In step V) Interviews, we conducted six expert interviews to validate and refine the AI manifestation mapping and the initial AI affordances. We integrated the input we received into the affordances after each interview and later analyzed the content in a structured manner in line with Schreier (2012). We gathered 72 statements about potential use-cases for AI in facilitation, which led to 30 distinct manifestations. Three manifestations had not been identified in the literature or the case studies (i.e., recognizing in culture development, decision-making in culture development, and recognizing in tool usage & integration). We were able to assign these three new manifestations to our affordances.
Appendix C - Interviews
1.1 Appendix C.1 - Interview Guide
Ideation
1. Can you tell me in three or four sentences about your previous experience with Artificial Intelligence or Crowdsourcing Facilitation?
2. Have you ever experienced the use of Artificial Intelligence in collaboration for facilitation purposes in general?
3. How do you imagine Artificial Intelligence affecting Crowdsourcing Facilitation?
4. Imagine the following situation: We are both organizing a crowdsourcing exercise with about 200 experts. Our capacity regarding facilitators is not limited, so we can let the facilitators do whatever we want. What activities would you let these facilitators do in order to maximize the value of the crowdsourcing exercise?
Detailed Feedback
1. In box "4" of our OnePager (see Figure 3), we depicted a framework of crowdsourcing from a processual perspective.
   a. Where do you see the most significant impact of AI in this figure?
   b. Do you see any of these activities that cannot be supported or enabled by AI?
2. Please take a moment to read through the table on page 2 (reference was made to the respective status of the AI affordances - see Table 6). These are AI opportunities in crowdsourcing that we found in the literature.
   a. What do you think about them?
   b. How comprehensible do you find this list?
   c. To what extent do you believe that these AI opportunities are feasible or applicable in today’s crowdsourcing exercises?
   d. What is missing?
3. (This part was added after the first two interviews.) Please take a look at page 3 (reference was made to the AI manifestations as well as a graphical representation of AI within macro-task crowdsourcing - see Table 7 and Figure 4). These are two different illustrations that try to shed more light on how artificial intelligence influences crowdsourcing.
   a. Which illustration is more appealing to you and why?
   b. What could be improved in the illustration you prefer to make it more informative?
1.2 Appendix C.2 - Validation of the Artificial Intelligence Affordances
We validated our affordances according to five defined criteria: completeness, comprehensiveness, meaningfulness, level of detail, and applicability in today’s crowdsourcing initiatives (Sonnenberg and vom Brocke 2012). We could confirm four affordances without changes to their description or name (i.e., contribution assessment, operational assistance, collaboration guidance, and decision-making preparation). We could also confirm two further affordances with changes to their description (i.e., improvement triggering and worker profiling). For 2) improvement triggering, experts highlighted that low activity does not necessarily mean that a worker should be considered “bad” (Expert 2, 6) and that it is essential to “understand why underperforming is happening” (Expert 5). Since the other experts see AI as capable of identifying and nudging inactive workers or triggering improvements (Expert 1, 3, 4), we enhanced the description of the affordance by including the identification of non-active workers. Experts 2 and 4 specifically noted that including information about workers’ backgrounds could be beneficial for monitoring performance and moderating the crowd. Even though we agree with Expert 1 that facilitators have to consider ethical standards when profiling workers, we see benefits in considering the workers’ skills (e.g., a better understanding of the workers’ communication network or identifying echo chambers) and therefore changed the description of 6) worker profiling. Finally, we could confirm 4) workflow enrichment after substantially and successively revising the name (previously: environment creation) and the affordance description. Experts felt that the previous affordance description was not comprehensive enough (Expert 1, 2), too broad (Expert 4, 6), or not applicable to crowdsourcing initiatives (Expert 2, 3).
We also took up the feedback that the intended affordance was still mainly human-driven (Expert 2) and that AI could enrich the exercise with relevant information regarding the workflow or the discussed content (Expert 1, 3, 4). Hence, we shifted the focus of this affordance from creating and maintaining a highly productive collaboration and communication environment to providing and integrating useful information and knowledge into the predefined workflow, which was perceived as meaningful (Expert 5, 6).
In addition to these suggestions for improvement, the experts stated that no relevant aspect was missing (Expert 2, 3, 4, 5, 6), from which we concluded that our affordances were complete. Five of the experts explicitly confirmed the comprehensiveness of the names and descriptions of the affordances (Expert 2, 3, 4, 5, 6). Expert 5 specifically stated, “I think it is the first time I had a research interview where there had been this comprehensible material for me to look at.” According to all of the interviewed experts, our affordances are meaningful. Even though the affordances are not mutually exclusive and show some differences in the granularity of formulation, the experts did not see any problems regarding the level of detail (Expert 2). The experts also explicitly acknowledged the applicability of specific affordances in today’s crowdsourcing initiatives (Expert 1, 2, 3, 4, 6).
1.3 Appendix C.3 - Empirical Evidence From the Interviews
Appendix D - Observed Macro-Task Crowdsourcing Initiatives
1.1 Appendix D.1 - Interim Results of the TCL Python Script
1.2 Appendix D.2 - Screenshots of the PSM Artificial Intelligence Tool
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Gimpel, H., Graf-Seyfried, V., Laubacher, R. et al. Towards Artificial Intelligence Augmenting Facilitation: AI Affordances in Macro-Task Crowdsourcing. Group Decis Negot 32, 75–124 (2023). https://doi.org/10.1007/s10726-022-09801-1