Large numbers of incomplete, unclear, and unspecific submissions on idea platforms hinder organizations from exploiting the full potential of open innovation initiatives, as idea selection becomes cumbersome. In a design science research project, we develop a design for a conversational agent (CA) based on artificial intelligence to support contributors in generating elaborate ideas on idea platforms where human facilitation does not scale. We derive prescriptive design knowledge in the form of design principles, instantiate it in a CA, and evaluate the CA in two successive evaluation episodes. The design principles contribute to the current research stream on automated facilitation and can guide providers of idea platforms in enhancing idea generation and subsequent idea selection processes. Results indicate that CA-based facilitation is engaging for contributors and yields well-structured and elaborated ideas.
Organizations face challenges in discovering and developing innovations due to limited internal resources (Hansen & Pries-Heje, 2017) and the fact that “when focusing on a limited solution space, companies only apply the most obvious instead of the most efficient of all solutions in order to solve an innovation problem” (Lüttgens et al., 2014, p. 342). In this regard, open innovation approaches have been identified as an effective strategy to improve the efficacy of organizations’ innovation capabilities (Chesbrough, 2003; Lüttgens et al., 2014). Digital platforms, e.g., idea platforms, enable organizations to apply idea sourcing by involving external contributors to access widely dispersed external knowledge and expertise beyond their boundaries (Boudreau & Lakhani, 2013; Cricelli et al., 2021; Di Gangi & Wasko, 2009). However, organizations struggle to harness the potential of idea platforms (Piezunka & Dahlander, 2015), as such idea sourcing initiatives generate highly diverse input whose utilization and valorization remains a key challenge. In particular, the large quantity of contributions poses major challenges in terms of textually unstructured ideas with an insufficient level of detail and indistinct causalities (Barbier et al., 2012; Kipp et al., 2013). As a result, organizations expend considerable human capacity and time during idea selection to organize and evaluate ideas and select those with high potential (Blohm et al., 2013; Kittur et al., 2013; Merz, 2018). Nevertheless, familiar contributions or ideas with detailed information but little implementation potential might be selected over those with a lack of details but great potential (Bansemir & Neyer, 2009; Piezunka & Dahlander, 2015).
Idea selection could be more efficient if ideas followed a defined structure to create a common basis for comparing them with each other and if they delivered a rich description to establish causalities. One possible way to reach this objective is facilitating external contributors’ idea generation process on idea platforms (Briggs et al., 1998; Dennis et al., 1990; Fjermestad, 2000). Previous research has shown that structured facilitation by a human leads to favorable results for collaborative work practices in small teams (Bittner & Leimeister, 2014; Niederman et al., 1996). However, human facilitation reaches its limits for large-scale, distributed idea generation on idea platforms, as humans can hardly deal with many different parallel work streams and are not constantly available in asynchronous collaboration settings. With the rise of the so-called “Facilitator-in-a-Box” paradigm (Briggs et al., 2013), an approach has been established to shift facilitation tasks from humans to system restrictions and prompts implemented in automated scripts. However, the implementation of such concepts for idea generation runs the risk of discouraging contributors. More specifically, filling various fields in a standard submission form might reduce contributors’ enjoyment and cognitive involvement, as they usually do not receive direct rewards via the idea platform (Bretschneider, 2012). Therefore, the user interface and process flow should be designed in such a way that they are engaging for contributors (Attfield et al., 2011) to increase the likelihood of continuing participation while diminishing the detrimental effect of declining motivation levels (Corney et al., 2009; Kim et al., 2013). As idea contributors participate voluntarily, it is therefore paramount to ensure an engaging idea generation process to counteract these adverse effects.
Previous studies have shown increased perceptions of social presence on web-based platforms with virtually embedded social cues (e.g., emotionally rich text, personalized greetings) that approximate face-to-face interactions (e.g., Cyr et al., 2007). In addition, several studies have demonstrated that conversational user interfaces can be rich in social cues (e.g., Pütten et al., 2010). Therefore, contributors’ level of engagement and willingness to invest cognitive effort during idea generation could be fostered by the deployment of automated conversation-based facilitation (Schuetzler et al., 2020). To leverage this conversation-based logic, the design of artificial intelligence (AI) involving machine learning (ML) to process natural language can provide an increased level of reactivity and proactivity in comparison to pre-defined time-based sequences of system prompts and state changes. However, the interaction between humans and AI requires more than intelligent algorithms in order to solve specific problems collaboratively and effectively (Harper, 2019; Seeber et al., 2020). In this vein, scholars have recently pointed out that AI-based agents, i.e., in the form of conversational agents (CAs), can be designed to serve the role of a facilitator to support individuals during task execution (Bittner et al., 2019a; Seeber et al., 2018). Moreover, initial research has shown that CAs can guide contributors on idea platforms to generate and submit their ideas in task-oriented conversations (Tavanapour & Bittner, 2018). However, prescriptive design knowledge on how to develop such a solution is still scarce (Bittner et al., 2019b; Diederich & Brendel, 2019; Seeber et al., 2018). Therefore, the following research question is addressed:
RQ: How should a CA be designed and instantiated to facilitate contributors’ idea generation and foster their engagement on idea platforms?
Consequently, the aim of this study is to enhance organizations’ idea generation via external contributors with a CA as a facilitator and to lay the foundation for improved subsequent organizational idea selection. Therefore, the AI-based facilitation on idea platforms should result in an engaging process to support individuals in voluntarily generating a contribution to an “open call” (Chesbrough & Brunswicker, 2014; Lüttgens et al., 2014) and yield idea submissions with a common structure comprising specific and detailed descriptions. To investigate the potential of the proposed AI-based facilitator for idea generation on idea platforms, the CA concept needs to be instantiated with a software prototype. Thereby, the implementability of the derived design knowledge can be tested with state-of-the-art CA technology. Furthermore, potential effects of facilitation support by CAs during the idea generation process can be explored. Accordingly, in this study, we present a multi-cycle design science research (DSR) project that addresses the stated challenges and research gap with the following structure. First, we present related work about facilitation of idea generation on idea platforms and CAs as facilitators. Second, we outline the research approach by delineating the steps of the DSR project. Third, derived design requirements (DR) and design principles (DP) are described, followed by an instantiation of the CA design with a full-featured CA incorporating insights from previous DSR steps. Subsequently, we present the results of the ex-ante and ex-post evaluation stages. Last, we discuss the findings of the study and its limitations, and present an outlook before closing with a conclusion.
2 Related Work
2.1 Facilitation of Idea Generation on Idea Platforms
By applying the outside-in process, organizations access and utilize external ideas, technologies and/or know-how in one or more of the four phases of open innovation: (1) idea generation, (2) experimentation, (3) manufacturing, and (4) marketing and sales (Lazzarotti & Manzini, 2009). In the early phase of organizational innovation processes, idea generation and selection constitute fundamental steps (Hansen & Birkinshaw, 2007; Kornish & Hutchison-Krupat, 2017). To generate ideas, organizations involve external contributors to source their ideas and knowledge (Hilgers & Ihl, 2010; Poetz & Schreier, 2012). Subsequently, a small number of promising ideas are identified and selected to enhance the quality of organizational innovation initiatives (Chesbrough, 2003; Chesbrough & Bogers, 2014; A. King & Lakhani, 2013). To support and improve organizations’ process of gathering ideas, well-designed and adequately managed information and communication technology (ICT) can be utilized to provide external contributors the means to share their valuable input with organizations (Bogers et al., 2018; Chatterjee et al., 2021; Gassmann, 2006; Kornish & Hutchison-Krupat, 2017). Web-based idea platforms represent an established technology for acquiring ideas across organizational boundaries (Di Gangi & Wasko, 2009; Holle et al., 2016). However, despite the benefit of rapidly gathering and exploiting innovation ideas, organizations face several challenges in managing this ICT to fuel their innovation processes.
First, the lack of knowledge about mechanisms to enhance contributors’ motivation has led to research about user engagement (Füller et al., 2008; Kosonen et al., 2013). This concept is defined as “a quality of user experience (UX) that is characterized by the depth of an actor’s cognitive, temporal, and/or emotional investment in an interaction with a digital system” (O’Brien & McKay, 2018, p. 73). With user engagement, continuing participation can be established by involving and captivating individuals, which produces positive affective reactions, focused attention, and motivation through novel experiences. In this regard, studies have shown that user engagement in open innovation initiatives can be positively influenced by the design of an interface (Attfield et al., 2011), the presentation of a task (Benz et al., 2019), and the clarity of the task goal (T. de Vreede et al., 2013). Second, large amounts of collected ideas and the absence of strategies to systematically converge them have prompted research about the idea selection step of innovation processes (Dellermann et al., 2018; Merz, 2018; Seeber et al., 2017; G.-J. de Vreede et al., 2021). More specifically, research has identified the challenge for organizations with limited absorptive power (e.g., time constraints, limited cognitive resources) to select valuable ideas from a large pool with varying attributes (e.g., specificity, comprehensibility) (Schulze et al., 2012), as an extensive proportion is incomprehensible and unstructured (Bjelland & Wood, 2008; Blohm et al., 2013). In this respect, the investigation of organizational idea selection strategies in open innovation initiatives has shown that several strategies involving different agents are applied (Haller et al., 2017; Merz, 2018).
Ideas can be selected either by (1) an external crowd, (2) a small team comprising different stakeholders, (3) a specialized algorithm, or (4) a hybrid team consisting of an algorithm and crowd or a small team (Merz, 2018). However, regardless of the involved agents, a lack of mechanisms to make the selection process as efficient as possible to select the best idea(s) has been identified (Merz, 2018).
As the structure and richness of ideas in platform-based settings has been shown to be significantly lower compared to those generated in facilitated focus groups (Schweitzer et al., 2012), the structured guidance of individuals’ idea generation could provide more consistent idea attributes. Accordingly, idea selection could be improved, independently of the involved agents, by guiding contributors during idea generation to gather contributions with a pre-defined set of required information. Thereby, contributors’ difficulty in providing relevant information and necessary details to increase the implementation likelihood of their idea can be counteracted (Li et al., 2016). Moreover, contributors could be assisted socio-emotionally, as constructive feedback and emotional support have been shown to positively affect individuals’ idea generation (Perry-Smith & Mannucci, 2017; Schweitzer et al., 2012). Consequently, to leverage these effects, facilitation can be utilized to enable structural guidance while simultaneously considering socio-emotional factors and a systematic documentation of ideas.
The concept of facilitation is defined as interventions in a structured and dynamic process that are executed by a designated person with the main goal of guiding members of a group towards efficiently achieving their common goal (Bostrom et al., 1993; Clawson & Bostrom, 1996; Kelly & Bostrom, 1997). Facilitation has shown the potential to produce high-quality group outcomes in face-to-face meetings (Bittner & Leimeister, 2014; Bowers et al., 2000; Langan-Fox et al., 2004). Furthermore, with the rise of group support systems (GSS), the role of the facilitator has been extensively investigated in the context of ICT-mediated meetings (Clawson & Bostrom, 1996; Clawson et al., 1993; Kelly & Bostrom, 1997). In the “Facilitation Framework” of Bostrom et al. (1993), previous findings have been consolidated to describe necessary actions of a digital facilitator. The framework distinguishes three sets of activities that are executed by a facilitator: (1) process, (2) task, and (3) relationship (Bostrom et al., 1993). Process-related facilitation activities (How?) serve to support the accomplishment of tasks (What?) by individuals. Relationship facilitation (Feel about) influences the relational outcome during this process. As an extension to previous research, the “Facilitator-in-a-Box” paradigm has been developed to automate facilitation processes and substitute a human facilitator with a pre-defined sequence of system prompts and state changes (Briggs et al., 2013). However, this approach neglects the conversational nature of facilitation and socio-emotional dimensions of facilitative activities. In order to cover all facilitation dimensions (process, task, and relationship), evolving ML-based AI technology in the form of CAs represents an applicable solution to automate the facilitation of users’ idea generation (Seeber et al., 2018).
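The “Facilitator-in-a-Box” idea of substituting a human facilitator with a pre-defined sequence of prompts and state changes can be pictured as a scripted state machine. The following minimal sketch is purely illustrative (the state names and prompt texts are hypothetical, not taken from Briggs et al., 2013):

```python
# Illustrative sketch of scripted, non-conversational facilitation:
# a fixed sequence of prompts and state changes, with no reactivity
# to the content of the user's answers. All names are hypothetical.

SCRIPT = [
    ("problem",  "Which problem does your idea address?"),
    ("solution", "How would your idea solve this problem?"),
    ("benefit",  "Who benefits from the idea, and how?"),
]

def run_scripted_facilitation(answer_fn):
    """Walk through the fixed prompt sequence, collecting one answer per state."""
    idea = {}
    for state, prompt in SCRIPT:
        idea[state] = answer_fn(prompt)  # in a real system: wait for user input
    return idea

idea = run_scripted_facilitation(lambda prompt: f"<answer to: {prompt}>")
print(sorted(idea))  # → ['benefit', 'problem', 'solution']
```

The rigidity visible here is exactly what the paper criticizes: the script advances regardless of how elaborate or off-topic an answer is, and no relationship-oriented behavior occurs.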
Overall, an AI-based CA facilitation could meet organizations’ requirement to effectively manage and implement emerging technologies to establish an approach to efficiently source and select external ideas (Kornish & Hutchison-Krupat, 2017) by utilizing a structured and engaging idea generation process.
2.2 Conversational Agents as Facilitators
CAs are software systems that are capable of interacting with humans via natural language in a dialogical fashion (Araujo, 2018; Bittner et al., 2019b; Diederich & Brendel, 2019). The concept of CAs is inspired by the idea of emulating naturalistic text- or speech-based conversations between intelligent machines and humans by analogy to human interaction (Elshan et al., 2022; Laumer et al., 2019; McTear et al., 2016). Different terms have been utilized for CAs (e.g., virtual or cognitive agent, dialogue system, and chatbot or chatterbot), referring to the modes of either spoken or written interaction and interactive or static forms of representation (Gnewuch et al., 2017; Hill et al., 2015; Shawar & Atwell, 2007). The capabilities of CAs have steadily evolved over the years. The initial CA ELIZA responded to requests with questions by analyzing users’ input to find pronouns and invert them (e.g., “I” into “you”) (Weizenbaum, 1966). Since then, technological advancements in the fields of ML and natural language processing (NLP) have led to significantly improved pattern recognition in human language, which has elevated CAs’ capabilities to identify responses matching users’ input (Io & Lee, 2017; Knijnenburg & Willemsen, 2016). This technological progress enables more human-like interactions with CAs (Nguyen et al., 2021). Nevertheless, naturalistic interactions are not yet fully feasible due to the complexity of natural language conversations (Ashktorab et al., 2019; Schuetzler et al., 2021; Shah et al., 2016). Misinterpretation of user input, incorrect responses, and tedious interactions often fail to meet users’ high expectations of conversations with CAs (Luger & Sellen, 2016). To counteract this potential dissatisfaction, dialogs are designed to be engaging in order to encourage users to continue a conversation despite erroneous interactions (Grudin & Jacques, 2019; Schuetzler et al., 2020).
In research, two general streams focus on different types of CAs. On the one hand, studies have concentrated on developing and investigating general CAs that should be capable of reacting to any utterance by a human counterpart with a suitable solution or answer (Gnewuch et al., 2017; Hill et al., 2015). On the other hand, a growing body of literature has evolved on domain-specific CAs. With a limited knowledge base, these CAs are used in specific application domains such as education, customer service, finance, human resources, and health care (Følstad et al., 2019; Janssen et al., 2020). In the latter research stream, domain-specific CAs have already been utilized to provide facilitation toward accomplishing specific goals or to structure conversations for well-defined, recurring tasks. For example, prior studies have shown that triggers in the form of questions posed by a CA induce favorable behavior in terms of reasoning and elaboration in computer-supported cooperative learning (Kumar & Rosé, 2014; Tegos et al., 2014, 2015) and citizen participation (Ito et al., 2021). Furthermore, Wang et al. (2007) demonstrated that a virtual agent could support an individual during idea generation, which resulted in more ideas in comparison to interactions between two humans. Louvet et al. (2017) proposed an interaction process model, where the agent is able to express requests for precision, reformulation, or verbalization in reaction to certain triggers. Complementing and extending these previous studies about automated facilitation, the study at hand focuses on facilitation by a CA that supports external contributors in submitting an elaborated idea to an open call and structures their idea generation process on idea platforms. Therefore, we introduce a definition for a CA facilitator which is based on various related definitions. Lieberman (1997) defines an agent as a program that acts as a facilitator rather than a tool, and Bailenson and Blascovich (2004, p. 65) refer to it as “a perceptible digital representation whose behaviors reflect a computational algorithm designed to accomplish a specific goal or set of goals”. Building on these definitions, a CA facilitator can be defined as an intelligent artificial agent that is capable of guiding through a structured process utilizing natural language to support an individual or group in achieving a common task goal.
With the objective of developing an AI-based CA with a static representation that interacts via written language serving the role of a facilitator, the presented study aims to contribute to the stream of research about domain-specific CAs (Bittner et al., 2019b). To achieve this, the design of a CA facilitator needs to be informed with meaningful insights from research on behavioral aspects that affect its facilitation capabilities. Studies in this field have, inter alia, shown that social cues which mimic human behavior are beneficial to support task- and productivity-related aspects (Medhi Thies et al., 2017; Morrissey & Kirakowski, 2013; Nunamaker et al., 2011). Moreover, recent research derived application-oriented design knowledge to guide research attempts in developing CAs as facilitators for idea generation processes (Strohmann et al., 2018; Tavanapour & Bittner, 2018). Apart from these preliminary investigations, the design and development of CAs in the domain of idea sourcing has not been extensively addressed and needs to be intensified (Diederich & Brendel, 2019).
3 Research Approach
In order to address the research aim of assisting and engaging contributors during idea generation to lay the foundation for a systematized selection of submitted ideas, we conduct a DSR project with multiple consecutive design cycles (see Fig. 1) (Gregor & Hevner, 2013; Vom Brocke et al., 2020). With the design and development of an artifact in the form of a full-featured CA facilitator incorporating insights from previous design cycles, we intend to provide a novel and innovative solution to the prevalent real-world problem of unsystematized and insufficiently engaging idea generation processes that are commonly deployed for open innovation initiatives. To ensure research rigor and generate substantial prescriptive design knowledge, we follow the established iterative six-step approach by Peffers et al. (2007).
Two preceding design cycles were completed to iteratively approach the identified problem. The scope of the first cycle was to gain exploratory knowledge about automated facilitation for idea generation with a CA. Correspondingly, micro and macro scripts were defined to generate tentative design knowledge in the form of interaction scripts (Gregor & Hevner, 2013). The macro script serves to define the process sequence and conversation flow, whereas the micro script specifies relationship-related aspects (e.g., affirmative statements, motivational explanations) for the CA facilitation. To assess the potential of the interaction design for a CA-facilitated idea generation process, a Wizard-of-Oz (WoO) experiment was performed (Kelley, 1983). For this purpose, uninformed participants interacted with an undisclosed human wizard, who used the micro and macro scripts to facilitate the idea generation process. The wizard controlled the system to make the participants believe that they were interacting with a CA. The results of the WoO experiment, on the one hand, served as a proof-of-concept for follow-up investigations. On the other hand, the findings were used to inform the improvement of the conceptual CA design. The first cycle was completed by communicating the derived insights (Bittner et al., 2019a).
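The division of labor between macro script (process sequence) and micro script (relationship-related interjections) can be sketched as two data structures that are interleaved at run time. This is a hypothetical illustration of the concept; the phase names and utterances are invented, not the project's actual scripts:

```python
# Hedged sketch: a macro script fixes the phase sequence, while a micro
# script contributes socio-emotional utterances (affirmations, motivation)
# interleaved between task-oriented turns. All content is illustrative.

MACRO = ["greeting", "problem", "solution", "wrap_up"]  # conversation flow
MICRO = {                                               # relationship cues
    "problem":  "Great, that's a relevant problem!",
    "solution": "Nice, your solution is taking shape.",
}

def facilitation_turns():
    """Produce the ordered list of (turn_type, content) the CA would follow."""
    turns = []
    for phase in MACRO:
        turns.append(("task", phase))
        if phase in MICRO:  # insert the socio-emotional cue after this phase
            turns.append(("relationship", MICRO[phase]))
    return turns
```

Keeping the two scripts separate mirrors the Bostrom et al. (1993) distinction between process/task acts and relationship acts, and lets each be revised independently between design cycles.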
Guided by the validated micro and macro scripts, an initial CA prototype was developed for automated facilitation on idea platforms. Based on the conversation protocols from the WoO experiment, data was derived to train the open-source NLP framework Rasa for the CA prototype. For the design, DRs were identified with a comprehensive literature search according to Webster and Watson (2002), drawing on justificatory knowledge from the fields of AI, NLP, and ML. The prototypical CA was evaluated with a user test. The communication of initial design knowledge and evaluation results completed the second cycle (Tavanapour & Bittner, 2018).
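Deriving training data from WoO transcripts amounts to labeling user utterances with intents and marking entities within them. The following sketch illustrates the kind of labeled examples involved; the intent and entity names are hypothetical, and the snippet deliberately does not reproduce Rasa's actual training-data format:

```python
import re

# Hypothetical intent/entity examples of the kind that could be distilled
# from Wizard-of-Oz conversation protocols (inline [value](type) markup
# is used here only for illustration).
TRAINING_DATA = {
    "submit_problem": [
        "Commuters waste time searching for parking",
        "Rural areas lack public transport connections",
    ],
    "ask_explanation": [
        "What do you mean by target group?",
        "Can you explain this step?",
    ],
    "provide_name": [
        "My name is [Alex](user_name)",
        "I'm [Sam](user_name)",
    ],
}

def extract_entities(example):
    """Return (value, type) pairs from [value](type) annotations."""
    return re.findall(r"\[([^\]]+)\]\(([^)]+)\)", example)

print(extract_entities("My name is [Alex](user_name)"))  # → [('Alex', 'user_name')]
```

In the project, such labeled examples would feed the NLU pipeline so that the CA can map free-text contributions onto the intents its facilitation logic reacts to.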
The third cycle represents the focus of this publication. The objective is to combine and extend insights from the first two cycles to address the joint problem identification of this DSR project. Step one (problem identification) has been addressed in the introduction and related work section. In the second step (objectives of a solution), previous tentative prescriptive knowledge is expanded by developing DPs, which are based on extended and refined preliminary DRs from cycle two. This revision builds on a literature-based derivation of requirements and an analysis of results from the evaluation in the preceding cycle. In the third step (design and development), the DPs are instantiated. Informed by the derived DRs, a revised and full-featured version of the CA facilitator is implemented. To this end, training data from cycle two is updated with refined intents and entities to improve the performance of the NLP module of Rasa. Moreover, micro and macro scripts from cycle two were utilized to construct the facilitation process sequence of the instantiated CA. Regarding steps four and five, an evaluation of the design comprising ex-ante (demonstration) and ex-post (evaluation) stages is conducted (Venable et al., 2016) (see Table 1). With the ex-ante evaluation, the applicability, operationality, and completeness of the designed artifact for the described problem statement of the DSR project is demonstrated (Sonnenberg & Vom Brocke, 2012). In this evaluation activity, exploratory focus groups (EFG) were conducted to obtain valuable input and modify the design and corresponding functionalities of the CA (Nielsen, 1997; Tremblay et al., 2010; Venable et al., 2016). Therefore, the DPs and their instantiation in the CA were presented, tested, and discussed in two focus groups with potential users (four participants, 59 min. duration) and researchers with different contextual knowledge (software developers, CA/AI experts) (five participants, 91 min. duration) (see Sect. 6).
To perform a naturalistic ex-post evaluation, a two-fold strategy is applied to leverage an extensive set of empirical data and gain insights on the efficiency and feasibility of the instantiated DPs (Venable et al., 2016). First, data on characteristics of submitted ideas from real users was gathered. For this purpose, the CA was deployed on a website during a research project involving partners from research and practice in the field of public administration. After initiation of the open call by several project stakeholders, 40 external participants submitted an idea on the topic “Mobility of the Future”. Based on these submissions, the characteristics of the ideas were examined, on the one hand, in semi-structured interviews with four experts from, inter alia, the fields of innovation and product management. On the other hand, a computer-based analysis was conducted to investigate the linguistic attributes of the collected ideas to draw inferences about the affective and cognitive processes of the idea contributors. Second, the CA facilitator and a standard submission form for idea generation were compared to assess the level of engagement and perceived social presence induced in potential idea contributors. Therefore, 221 participants were divided into two conditions to observe one animated mock-up simulating the respective idea generation process. Subsequently, participants completed a questionnaire-based evaluation of the simulation. Step six (communication) will be completed with the publication of this study.
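The computer-based analysis of linguistic attributes mentioned above typically relies on dictionary-based word counting to estimate markers of affective and cognitive processes in text. The sketch below illustrates this general technique under stated assumptions: the word lists and the `linguistic_profile` function are invented for illustration and are not the instruments used in the study:

```python
# Hedged sketch of dictionary-based linguistic analysis (in the spirit of
# word-count approaches such as LIWC): share of tokens matching affective
# and cognitive-process word lists. The lists here are toy examples.
AFFECT = {"great", "exciting", "frustrating", "annoying"}
COGNITION = {"because", "therefore", "if", "think", "cause"}

def linguistic_profile(text):
    """Return the percentage of affect- and cognition-related tokens."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    n = max(len(tokens), 1)
    return {
        "affect_pct": 100 * sum(t in AFFECT for t in tokens) / n,
        "cognition_pct": 100 * sum(t in COGNITION for t in tokens) / n,
    }

profile = linguistic_profile("Parking is frustrating because drivers circle blocks.")
```

Applied to the 40 collected ideas, such profiles would allow comparisons of affective and cognitive engagement across submissions without manual coding.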
Being part of a multi-cyclic DSR project (see Fig. 1), this research aims to make a two-fold contribution by providing prescriptive design knowledge and a corresponding design entity in the form of an instantiated CA facilitator (Drechsler & Hevner, 2018; Gregor & Hevner, 2013). Besides codifying the functioning and construction of the artifact, the utility character of the generated design knowledge is established via the comprehensive evaluation (Kuechler & Vaishnavi, 2012; Venable, 2006). In the following sections, we elaborate on the delineated steps of the third cycle of the DSR project covered in this publication.
4 Objectives of a Solution
4.1 Design Requirements for a CA Facilitator
The development of CAs requires scientifically substantiated design knowledge (Amershi et al., 2019; Diederich & Brendel, 2019). To determine characteristics and behaviors of a CA facilitator in the form of DRs to deduce DPs, extant literature was analyzed, and suitable theoretical insights were incorporated. The principal theoretical basis for the proposed design builds on the Social Response Theory (Nass & Moon, 2000) and Social Presence Theory (Daft & Lengel, 1986; Gefen & Straub, 1997; Short et al., 1976). The Social Response Theory postulates that individuals unconsciously apply social rules to computers if they perceive social cues that are associated with human attributes or behavior, whereas the Social Presence Theory refers to the perception of humanness in a medium determined inter alia by its communication richness. According to these theories, CAs’ anthropomorphic characteristics evoke unconscious social responses in users due to their virtual identity and capability to interact via natural language (Gong, 2008; Pütten et al., 2010). These responses, combined with the application of social rules, fuel users’ expectations of human-like behavior toward CAs. Consequently, a design approach is required that affords CAs’ human-like facilitation behavior supported with current technological capabilities of AI. With regard to these principal theories, the derivation of DRs was structured with the “Facilitation Framework” of Bostrom et al. (1993), as this framework summarizes relevant facilitation skills categorized into several acts that are directed toward the task at hand, the process to achieve the associated goal, or the relationship between facilitator and participants.
With process and task, facilitative acts are addressed which refer to the capabilities of supplying instructions about the task, providing relevant information, and guiding through a process (Clawson & Bostrom, 1996; Clawson et al., 1993). Therefore, a CA facilitator should present the task and associated steps to initiate the process (DR1.1) (S. Kim et al., 2020). In addition, the CA facilitator should ensure that users follow the idea generation process and guide them with goal-oriented behavior to assure the achievement of an idea submission (Clawson & Bostrom, 1996). In this regard, Morrissey and Kirakowski (2013) showed that CAs’ construction of engaging conversations leads to elevated levels of user acceptance and productivity, which leverages substantial input (Tegos et al., 2014). Accordingly, the CA should take initiative to actively direct and lead the conversation to support users in the process (DR1.2) (Jain et al., 2018; Montero & Araki, 2005; Morrissey & Kirakowski, 2013; Nouri et al., 2020). To ensure productivity-oriented behavior that promotes users’ engagement and motivation (Brandtzaeg & Følstad, 2017; Medhi Thies et al., 2017), the CA has to prevent conversations from ending at critical points by asking smart, suitable, and process-relevant questions (DR1.3) (Montero & Araki, 2005). In extension to this, the CA should prompt users to edit initial input or enrich missing input (DR1.4) (Morrissey & Kirakowski, 2013; Tegos et al., 2014). Another relevant characteristic of facilitators referring to process-related acts is their ability to assure an optimal outcome by maintaining the focus on the defined task goal (Clawson & Bostrom, 1996). Thus, the CA should, on the one hand, prevent deviations from the conversation topic to avoid breakdowns in the dialog flow or process phases and be aware of the current task state by tracking users’ progress (DR1.5) (Liao et al., 2018; Nouri et al., 2020; Poser & Bittner, 2020).
On the other hand, CAs should be capable of flexibly reacting to users’ utterances regarding the present phase of the process by providing information and explanations on demand about the current activity and specific terminology to ensure users’ understanding of and engagement with the task (DR1.6) (Schuetzler et al., 2018).
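DR1.5 and DR1.6 jointly require that the CA keeps track of the current process phase while still answering explanation requests without losing its place. A minimal sketch of such a dialog state tracker, under the assumption of hypothetical phase names, intents, and explanation texts (none of which are taken from the actual implementation), could look as follows:

```python
# Illustrative state tracker: advance only on substantive input (DR1.5),
# answer explanation requests on demand without advancing (DR1.6),
# and deflect off-topic utterances. All names are hypothetical.
PHASES = ["problem", "solution", "target_group"]
EXPLANATIONS = {"target_group": "The people who would benefit from your idea."}

class StateTracker:
    def __init__(self):
        self.index = 0  # position in the facilitation process

    @property
    def phase(self):
        return PHASES[self.index]

    def handle(self, intent):
        if intent == "ask_explanation":
            # explain the current step, but do not change the task state
            return EXPLANATIONS.get(self.phase, "Let me explain this step.")
        if intent == "provide_input":
            self.index = min(self.index + 1, len(PHASES) - 1)
            return f"Thanks! Next: {self.phase}"
        return "Let's stay with your idea for now."  # keep focus on the task
```

The key design point is that explanation handling is a side branch: the tracker's `index` only moves when the user delivers input for the current phase.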
Regarding relationship-focused acts, facilitators provide an open and positive atmosphere to engage people in the process and task at hand (Bostrom et al., 1993; Clawson et al., 1993; Kelly & Bostrom, 1997). As the utilization of natural language increases users’ perception of artificial entities’ humanoid characteristics (Nass & Moon, 2000), the CA should emulate human-like and reciprocal conversational behavior that is adjusted to a specific audience to strengthen users’ trust, enjoyment, and perceived usefulness (DR2.1) (Gefen & Straub, 1997; Hassanein & Head, 2007; Johannsen et al., 2018; Knijnenburg & Willemsen, 2016). In doing so, the CA should create a positive dialog environment by following a socio-emotional facilitation style. More specifically, the CA should foster engagement and confidence, and show sensitivity by making approving and motivating statements during the process (DR2.2) (Jenkins et al., 2007; Nimavat & Champaneria, 2017; Portela & Granell-Canut, 2017; Poser & Bittner, 2020). To intensify the positive atmosphere and personalize the relationship with users, the CA’s linguistic cues and style should increase friendliness perceptions (DR2.3) (Adams et al., 2012; Araujo, 2018; Medhi Thies et al., 2017; Verhagen et al., 2014). Accordingly, the CA should use informal language as well as typical dialogical cues such as greeting the user and wishing farewell (DR2.4) (Araujo, 2018). In addition, users’ names should be captured to reference them during the interaction (DR2.5) (Johannsen et al., 2018).
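DR2.4 and DR2.5 translate into simple personalization behavior: capture the user's name once, then weave it into dialogical cues such as greeting and farewell. A toy sketch, with invented phrasings that merely illustrate the requirement:

```python
# Illustrative personalization cues (DR2.4, DR2.5): informal greeting and
# farewell that reference the captured user name when it is available.
def greet(name=None):
    return f"Hi {name}, great to see you!" if name else "Hi, great to see you!"

def farewell(name):
    return f"Thanks for your idea, {name} - goodbye!"
```

Falling back to a name-free greeting matters in practice, since the name entity may not yet be captured (or may fail to be recognized) at the start of the conversation.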
These facilitation-related design aspects need to be enabled by CAs’ general technical capabilities. Therefore, a CA facilitator should be able to construct a conversation, recognize users’ intentions, and deliver adequate reactions to ensure successful task accomplishment (DR3.1) (Ghose & Barua, 2013). As the interaction with users should imitate human conversational behavior, pre-set answers via buttons should not dominate the dialog, and the CA should have a short, human-like response latency (DR3.2) (Acerbi et al., 2010; Diederich et al., 2019; Gnewuch et al., 2018; Loftsson et al., 2010; Zamora, 2017). Furthermore, the CA’s conversation texts should be short, understandable, and characterized by correct grammar and spelling (DR3.3) (Morrissey & Kirakowski, 2013; Salomonson et al., 2013). Moreover, the CA should be equipped with intervention strategies to proactively trigger user actions in adequate situations, such as silent moments (DR3.4) (Morrissey & Kirakowski, 2013; Tavanapour & Bittner, 2018).
4.2 Design Principles for a CA Facilitator
The identified set of DRs was utilized to derive four DPs (see Fig. 2). Following a supportive approach, 15 DRs were elicited from the knowledge base to develop DPs of the type form and function (Chandra et al., 2015; Möller et al., 2015). The resulting DPs are categorized according to the classification of facilitative acts by Bostrom et al. (1993), differentiating between process and task acts on the one hand and relationship acts on the other.
Process and task: To facilitate users during the idea generation and submission process, the CA should be able to initiate a conversation by supplying relevant information about the task and steps, and subsequently direct and lead users in a productivity-oriented and pleasant manner by posing questions and preventing deviations to other topics (DP1). The directed facilitation process should yield elaborated outcomes. Therefore, the CA requires capabilities to react to and motivate the user in different situations or offer support on demand by delivering explanations about the process steps and topic-related terms (DP2). To efficiently facilitate users through the idea submission process, the CA must be equipped with technical capabilities. The CA needs NLP capacity to correctly identify users’ intentions and respond promptly with pre-defined, short, understandable messages with correct grammar and spelling. Moreover, the CA requires a strategy to counteract silent moments by proactively offering support when users are inactive for a certain period of time (DP3).
Relationship: For the provision of a positive dialog environment during the facilitated idea submission process, the CA should offer socio-emotional support by motivating and approving users’ input. In addition, to foster users’ acceptance, the CA should develop a personalized interaction, act friendly, polite, and utilize informal language (DP4).
5 Design and Development
5.1 CA Development
The development of the CA facilitator was guided by the micro and macro scripts from cycle one. The macro script served to determine the process sequence and conversation flow, whereas the micro script defined relationship-related aspects for the CA facilitation. Accordingly, the facilitative acts introduction, generate, build consensus, and closing from the macro script were implemented to develop the logic of the process and conversation flow. The CA follows the depicted sequence of steps in Fig. 3: In the introduction, the user is asked to indicate the desired form of address (name vs. anonymous), the number and content of process steps are explained, and the idea generation process is started, if desired. In generate, the CA poses questions to record the ideas. To reassure the correctness of idea items and allow users to edit content, the CA shows a summary in build consensus. Lastly, in closing, the CA expresses farewell. In line with the micro script, the CA’s utterances across all macro script steps include affirmative feedback (e.g., “Thank you very much!”), motivational explanations (e.g., “For others to understand your idea well, you should describe it as clearly as possible.”), and general reactions (e.g., “I’m sorry. Unfortunately, I do not have a suitable answer to your input.”).
To develop the CA according to the macro script logic, the open-source framework Rasa was used. This allowed us to fulfill research-related constraints such as expandability and sovereignty over data. The Rasa framework is divided into the submodules Rasa Core and Rasa NLU. Rasa Core is responsible for administrating the dialog flow and Rasa NLU for processing natural language. The dialog structure is modeled by a finite set of intents, entities, and slots. Intents are utterances with which the user confronts the CA. Entities represent the information the CA extracts from the conversation. Rasa NLU recognizes the intents and entities from the messages sent by the user. Rasa Core directs the dialog flow and triggers actions that correspond to the intents. The recognized entities are stored in the respective slots. In our case, the intents and corresponding training data were derived from previous studies in cycles one and two. The eight slots (S1-8) of the CA are filled sequentially during introduction and generate from the macro script (see Fig. 3). The first slot refers to the name of the participant, which is registered, if indicated by the user. The remaining slots (S2-S8) correspond to the seven previously identified relevant items of an idea: (S2) idea text, (S3) keywords, (S4) which problem is solved, (S5) novelty of idea, (S6) target audience, and (S7) title (Bittner et al., 2019a). To facilitate users and react to their input in a suitable manner, the CA is designed to identify different intentions in users’ input during generate (see Fig. 3). More specifically, the CA can, corresponding to DP2, differentiate between five different categories of questions posed by the user referring to the topic, task, or process by recognizing terms and vocabulary, and reply with appropriate answers. In accordance with DP3, the CA can detect silent moments and react by offering support.
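The described interplay of recognized intents and sequentially filled slots can be sketched in simplified form. This is a minimal illustration, not the authors' Rasa implementation; the slot names are paraphrased from the items listed above, and the paper mentions eight slots while naming seven:

```python
# Illustrative sketch (not the actual implementation): a simplified dialog
# state mirroring the described Rasa setup, where extracted information is
# stored in slots and the CA asks for the next empty slot in sequence.

# Slot names paraphrased from the paper; the full slot set comprises eight slots.
SLOT_ORDER = [
    "name",             # S1: participant's name (optional)
    "idea_text",        # S2
    "keywords",         # S3
    "problem_solved",   # S4
    "novelty",          # S5
    "target_audience",  # S6
    "title",            # S7
]

class DialogState:
    """Tracks filled slots and determines the next question (Rasa Core's role)."""

    def __init__(self):
        self.slots = {slot: None for slot in SLOT_ORDER}

    def next_empty_slot(self):
        """Return the next slot the CA should ask about, or None when done."""
        for slot in SLOT_ORDER:
            if self.slots[slot] is None:
                return slot
        return None  # all slots filled -> proceed to "build consensus"

    def fill(self, slot, value):
        self.slots[slot] = value
```

Once `next_empty_slot()` returns `None`, the CA would present the editable summary from the build consensus step.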
The silent moment was set to trigger after five minutes of inactivity, which has proven to be a meaningful threshold for activating users (Tavanapour & Bittner, 2018). In addition, the CA can detect users’ intentions of aborting the process and offers to end the process. Most importantly, the CA can actively lead the conversation by posing questions to fill the slots (S2-8) (see Figs. 3 and 4). If the CA is not successful in filling a slot due to a different user intention (e.g., a question referring to specific terms), the CA repeats the question for that slot until it is successfully filled. In case users abort the facilitation process before it is completed, the input for the slots filled up to that point is saved.
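The silent-moment trigger and the slot-retry behavior described above can be sketched as follows. This is an illustrative simplification; the intent names and return values are hypothetical and not taken from the actual implementation:

```python
import time

SILENT_THRESHOLD = 5 * 60  # five minutes of inactivity, as in the paper


def check_silent_moment(last_activity_ts, now=None):
    """Return True if the CA should proactively offer support."""
    now = time.time() if now is None else now
    return (now - last_activity_ts) >= SILENT_THRESHOLD


def handle_user_message(slots, slot, recognized_intent, text):
    """Simplified stand-in for the described behavior: fill the slot if the
    user provides a value, answer interposed questions and then repeat the
    slot question, and save partial input if the user aborts."""
    if recognized_intent == "provide_slot_value":
        slots[slot] = text
        return "slot_filled"
    if recognized_intent == "ask_question":
        return "answer_question_then_repeat"
    if recognized_intent == "abort":
        return "offer_to_end_and_save_partial_input"
    return "repeat_slot_question"
```

A dialog loop would call `handle_user_message` until `"slot_filled"` is returned, then advance to the next slot, checking `check_silent_moment` between turns.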
5.2 Instantiation of the CA Facilitator
For the implementation of the CA facilitator, the defined DPs were matched to artifact features that cover the specifications of the prescriptive knowledge and the depicted architectural settings in Figs. 3 and 4. Once triggered by a user, the CA initiates a conversation in line with the DPs and facilitation acts of the macro script. The implementation of the dialog management was supported with pre-set response options to sustain the process logic of the CA facilitation. Figures 5 and 6 visualize several functionalities of the CA during an exemplary conversation (translated from the original language) with a user. The conversation snippets show the CA a) introducing the process, b) leading the conversation for the idea generation process, and c) reacting to a question regarding a specific term or to a silent moment. Furthermore, in d) the editable summary is depicted.
The ex-ante evaluation episode focuses on the formative assessment of the created and instantiated design in the form of an automated conversation-based CA facilitation to purposefully address the identified real-world problem of unsystematized idea submissions due to limited support for contributors during the idea generation process (Sonnenberg & Vom Brocke, 2012; Venable et al., 2016). To evaluate the applicability, operationality, and completeness of the created and instantiated CA design, we utilized EFGs to leverage a rich qualitative data set.
Following the proposed steps by Tremblay et al. (2010), we conducted two EFGs to obtain profound feedback on design-related aspects as well as technical functionalities of the initial CA version. The first focus group (EFG1) lasted 59 min and consisted of four participants, each of whom had participated at least once as a contributor in an open innovation initiative. Accordingly, the two female and two male participants represent potential CA users. The second focus group (EFG2), which comprised five male participants from research with expert knowledge in software development of CAs and/or AI, lasted 91 min. Each focus group was recorded and followed a pre-defined procedure: (1) presentation, (2) demonstration, and (3) discussion. In the first phase, the context and objective of the study were presented. Subsequently, participants individually conducted a click-through and executed functional tests to evaluate the CA during the idea generation task. After that, a prepared guideline with open-ended questions based on the four DPs was utilized to validate and refine the design. Based on transcripts, a qualitative content analysis according to Mayring (2014) was individually conducted by two researchers, and the resulting codes were continuously harmonized to obtain insights about the DPs and derive opportunities for improvement.
In general, the analysis of qualitative data showed that participants of both focus groups rated the CA facilitator as applicable and operational to record and select ideas in a homogenized format for further processing. The user interaction was described as flawless, intuitive, engaging, and human-like. The participants rated the interactive questioning process as detailed, coherent, and targeted. Furthermore, they reported that the submission of an idea was supported by the transparent progress in the process (e.g., “The process design is designed in such a way that it can be followed very firmly, and it is also developed in such a way that the process can be easily tracked” (EFG2)). With reference to DP1, the participants assessed the requirements to be clearly outlined, the posed questions by the CA to be very goal-oriented, and the process design to be easy to follow. The logical step-by-step approach helped “writing things down, which is good when developing a spontaneous idea” (EFG1). Therefore, participants had “the feeling of being guided toward reaching a goal” (EFG2). Moreover, the participants agreed that “CA’s utterances build upon each other” (EFG2) and are suitable for the task of facilitating the idea generation process. For DP2, participants valued the flexible interaction and possibility to ask questions, although only a few of them used this functionality. The CA’s intention to acquire elaborated input was recognizable, e.g., “when he asked whether I would like to confirm or change specifications” (EFG2). In addition, participants considered the support with optional information about the topic at the beginning of the process to be valuable. The “definitions and further instructions during the process steps were goal-oriented, when asked for” (EFG1). In relation to DP3, the user guidance was judged to be well-managed with clearly formulated, precise, and understandable statements and questions. 
According to the participants, the content of the messages had a suitable length and language level, keeping mental effort at a minimum. In this context, one participant affirmatively stated that “it was easy to follow, it was very clear what was meant and there was little room for misinterpretation” (EFG2). During the click-through demonstration, the CA reacted correctly with prompt responses, which was perceived to sustain the progression through the process. While participants rated the strategy of the CA to counteract silent moments as generally useful, the implementation was rated to require improvement, as one participant experienced a mistakenly triggered reaction by the CA: “I wrote a long text and was already asked before sending it” (EFG2). Regarding DP4, participants stated that they were aware of interacting with a CA. Nevertheless, the text-based and friendly conversation was considered conducive to the atmosphere, as it conveys a sense of humanness. One participant commented: “the natural language equilibrates and pulls it away from a pure technical impression” (EFG1). Furthermore, statements from the CA between process phases were perceived as motivational support. The interactivity of the process was regarded to reduce the initial hesitation of starting to submit an idea. The personal address created sympathy among participants for the CA. In particular, referring to the user by name during the process had a positive influence, as one participant reported: “it gives a personal touch, and I am a person who likes to be called by my first name” (EFG1).
With the tentative CA version, the completeness of relevant design aspects could be demonstrated. In addition to confirming insights, focus group members highlighted potential for improvement related to support behavior and technical features of the CA. To increase the advantage of support upon request, the CA should clearly indicate how and for what purpose support can be requested. The silent moment should not be triggered too early when users are actively making entries, as this unnecessarily interrupts the writing process. From a technical perspective, “the interaction capability is expandable” (EFG2). Accordingly, the language model requires fine-tuning, since intents were sometimes recognized incorrectly and participants were occasionally offered process termination. To encourage users to write extensive and detailed ideas, the entry field should be larger, since “it is better if one has the possibility to see the multiline text” (EFG2).
For the ex-post evaluation, an adapted and improved CA facilitator was implemented based on the findings from the demonstration. To gain insights into the efficiency and feasibility of the instantiated DPs, we conducted a naturalistic evaluation of the final artifact (Creswell et al., 2003; Sonnenberg & Vom Brocke, 2012; Venable et al., 2016). To this end, we completed two field studies and applied various evaluation activities. On the one hand, we deployed the CA facilitator on a website to gather submissions from real users and subsequently analyze the characteristics of the ideas (see Sect. 7.1.). On the other hand, we assessed the feasibility of the proposed solution to engage contributors and provoke a perception of social presence by comparing the levels of engagement and social presence of CA facilitation with a standard submission form.
7.1 Evaluation of Ideas
To gather data on characteristics of CA-facilitated ideas from real users, we initiated an open call during a research project involving partners from research and practice in the field of public administration. The call on the topic of “Mobility of the Future” was distributed via different university and city mailing lists, social media, and student groups to invite a wide group of participants to generate ideas. Guided to a website via a link in the open call, participants were provided with information about the subsequent task and the possibility of winning vouchers. The topic was presented in the form and length of an abstract describing advantages and disadvantages of current mobility solutions. The participants were asked to propose ideas for a change of mobility at the national level. The idea generation process with the CA could be started by clicking a designated button. In total, 40 ideas were collected and served as data for a two-fold idea evaluation, reported in the following. First, interviews with domain experts were conducted to qualitatively assess the collected ideas. Second, computerized text-based analyses of the submitted ideas were performed to examine textual features of the ideas and establish links between idea contributors’ social behaviors and cognitive processes.
7.1.1 Expert Evaluation of Ideas and CA Facilitation
To allow an in-depth evaluation of the subject matter, the ideas and the utilized approach were assessed by four experts with different backgrounds of relevant experience in the domain of open innovation and ideation (see Table 2). Based on established idea evaluation dimensions in the literature (Dean et al., 2006), we conducted semi-structured interviews via video call that lasted between 41 and 53 min. The interviews were conducted with an interview guide comprising open-ended questions. Questions about the general impression of the ideas and the approach of CA facilitation were followed by questions about the completeness, level of detail, comprehensibility, originality, acceptability, and relevance of the submitted ideas. Prior to the interviews, each expert was provided with context information regarding the conceptual approach, process, and topic, as well as a random subset of ten idea submissions. The interviews were recorded and transcribed by paraphrasing and noting verbatim statements.
The experts understood the CA facilitator approach to gather external ideas and considered it useful, even though CA technology is currently applied to different use cases in their organizations (i.e., all experts were familiar with CA technology). In particular, the dialogue-based interaction was judged to be promising to receive ideas from external contributors as part of an open innovation initiative (“It is easier for contributors, because you receive feedback from the CA.”). Regarding the presented ideas, the experts emphasized that the formulated ideas extended their own perspective. In this respect, some ideas particularly stood out, which were considered surprisingly unusual and novel (e.g., “I wouldn't have thought of such a thing.”). However, the experts noted that some ideas might be too radical from their point of view to be generally accepted. Nevertheless, one interviewee mentioned that radical approaches are a good sign, as they show an open process (e.g., “These are good food for thought and you don't want to see them stalled either.”).
The ideas were judged to be well elaborated and understandable. Regarding the level of detail, however, it was noted that even more idea-specific input would have been desirable. This would have allowed the experts to delve even deeper into the minds of the idea providers. It was suggested that the CA could have been even more proactive about specific terms used by the contributors, such as ridesharing, and asked specific questions (e.g., “What exactly do you mean by this?”). This would make it possible to obtain even more contextual knowledge. For example, the CA could also actively, i.e., without being addressed, have provided suitable suggestions from a database as an additional stimulus for the contributors to elaborate their idea (“It would be useful if there was a kick-start to trigger participation”). In relation to the assumed goal of the CA facilitation, i.e., collecting a large number of ideas, the experts mentioned that the ideas were already very well elaborated for a first idea collection step. “More detail is always possible, but it was enough for understanding,” and an even more detailed level of elaboration could also complicate the idea screening and selection (e.g., “Who is meant to read through all that?”). Whether more content would be advantageous for a (partially) automated evaluation could not be conclusively assessed by the experts. The advantage of a more intensive dialogue should be weighed against the possible tendency of idea contributors to abort the process and a declining motivation to finish the idea generation (e.g., “They might get bored despite the engaging conversation at some point.”). Despite this, the experts expressed that the clear structure of ideas is certainly an advantage for the subsequent evaluation and selection, regardless of the method applied. Looking at the entire subset of ideas, the content was judged to be mostly consistent in terms of the different idea attributes. No obvious extreme deviations were noted by the interviewees.
When asked to what extent the provided ideas solve a problem in the context of the subject matter, it was stated that “the ideas address and comprehensively include the problem” and that very promising ideas had been proposed. However, further details would have been desirable and useful in some cases. Nevertheless, these ideas were a suitable starting point to identify one visionary idea among many in order to enter an in-depth exchange with this individual about his or her idea for solving the problem. Regarding the advantages of using a CA facilitator, the overall adaptability, and the possibility of accessing a current and large database that can be incorporated into the process of idea generation were highlighted. In the same context, the need for a large amount of data and its preparation for CA training was considered critical. One advantage that one expert emphasized was that a dialogical CA facilitation is suitable to involve all users regardless of their individual prerequisites, i.e., from a cognitive perspective, who can also have very useful ideas. In this regard, automatic adaptation of CA’s behavior and utterances based on personal characteristics of the idea contributor was considered potentially valuable and could be leveraged with technological advances (e.g., “Especially when you think about the future possibilities that you don't want to miss, this is a great playground.”).
7.1.2 Text-based Evaluation of Ideas
To link idea contributors’ written language style in the gathered texts during the idea generation process to affective (e.g., negative and positive emotions) and cognitive processes (e.g., problem-solving), we conducted linguistic analyses with computerized text analysis. This form of text analysis has been used to study social networking sites, online discussion forums, group dynamics, and interactions between individuals (Kacewicz et al., 2014; Oliver et al., 2021; van Swol & Kane, 2019) and yields reliable psychological insights about individuals’ thought processes, emotional states, intentions, and motivations (Boyd & Pennebaker, 2015; Tausczik & Pennebaker, 2010).
We examined the collected idea texts by applying a dictionary approach. We used the program Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2015a, b). LIWC analyzes and classifies words within over 90 pre-defined categories, which allows for a consistent measurement of words and leads to concurrent validity (Humphreys & Wang, 2018; Moore et al., 2021; Pilny et al., 2019). The fundamental strength of the LIWC dictionary lies in the fact that it was thoroughly developed using established and standardized psychometric procedures that ensure external validity and high internal reliability (Boyd, 2017; Pennebaker et al., 2015a, b). Given the German text corpora, we rely on the translated German LIWC2015 dictionary (Meier et al., 2019), which captures an average of 83 percent of the words people use in written and spoken language.
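In principle, a dictionary approach matches each word of a text against category word lists and reports per-category percentages. The following toy sketch illustrates only this principle; the actual LIWC dictionaries are proprietary, contain thousands of entries including word stems, and the category vocabularies used here are invented:

```python
# Toy dictionary sketch (illustrative only; real LIWC categories are far
# larger and include stem matching). Category names follow LIWC conventions,
# but the word lists are invented for illustration.
TOY_DICT = {
    "posemo": {"good", "great", "useful"},     # positive emotion
    "cogproc": {"think", "because", "solve"},  # cognitive processes
}


def category_percentages(text):
    """Return the percentage of words in `text` matching each category."""
    words = text.lower().split()
    total = len(words)
    percentages = {}
    for category, vocabulary in TOY_DICT.items():
        hits = sum(1 for word in words if word in vocabulary)
        percentages[category] = 100.0 * hits / total if total else 0.0
    return percentages
```

For a sentence such as "great idea because it could solve congestion", the sketch would report one posemo hit and two cogproc hits out of seven words.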
To prepare the linguistic analysis, we followed the guidelines for German text samples to preprocess the texts before analysis (Meier et al., 2019). For the linguistic analysis (see Table 3), we use five general descriptive dimensions: word count (WC), words per idea (WPI), words per sentence (WPS), percent of words in the text that are longer than six letters (Sixltr), and percent of target words captured by the dictionary (Dic). In addition, we utilized four summary variables: analytical thinking (Analytic), clout, authenticity (Authentic), and emotional tone (Tone).
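Simplified versions of the descriptive dimensions WC, WPS, and Sixltr can be computed as follows. This is an illustrative sketch with simplified tokenization; LIWC's actual rules (e.g., for numerals, hyphenation, and punctuation) are more elaborate:

```python
import re


def descriptive_dimensions(text):
    """Compute simplified analogues of LIWC's descriptive measures:
    WC (word count), WPS (words per sentence), and Sixltr (percent of
    words longer than six letters). Tokenization is deliberately simple."""
    # Word pattern includes German umlauts and sharp s for German corpora.
    words = re.findall(r"[A-Za-zÄÖÜäöüß]+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    wc = len(words)
    wps = wc / len(sentences) if sentences else 0.0
    sixltr = 100.0 * sum(1 for w in words if len(w) > 6) / wc if wc else 0.0
    return {"WC": wc, "WPS": wps, "Sixltr": sixltr}
```

Dic, the percentage of target words captured by the dictionary, would additionally require the dictionary itself and is therefore omitted from this sketch.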
The four summary measures are each standardized scores on a 100-point scale ranging from 0 to 100. The underlying complex algorithms are proprietary. The variables are constructed from various LIWC variables based on previous language research (Boyd & Pennebaker, 2015). The scale values reliably reflect the following psychological dimensions (Boyd & Pennebaker, 2015, pp. 21–22):
Analytical thinking: a high number reflects formal, logical, and hierarchical thinking; lower numbers reflect more informal, personal, here-and-now, and narrative thinking.
Clout: a high number suggests that the author is speaking from the perspective of high expertise and is confident; low Clout numbers suggest a more tentative, humble, even anxious style.
Authentic: higher numbers are associated with a more honest, personal, and disclosing text; lower numbers suggest a more guarded, distanced form of discourse.
Emotional tone: a high number is associated with a more positive, upbeat style; a low number reveals greater anxiety, sadness, or hostility. A number around 50 suggests either a lack of emotionality or different levels of ambivalence.
The level of analysis refers to the texts gathered during the idea generation process steps, i.e., idea text, elaboration questions (which problem is solved, idea novelty, target audience), and title, as these are sufficiently self-contained and distinct from each other to allow meaningful intra-process comparison. Keywords were not included, since analyzing individual keywords with the LIWC categories yields little insight. A total of 54 keywords, mostly one to two per idea and compound words (see explanation below), were assigned by the idea contributors for identification purposes. A single idea was submitted without any keyword.
Although idea titles were relatively short on average (3.33 words per idea), they were included in the LIWC analysis because they are potentially informative, covering a wording continuum from concise and descriptive to bold and lurid. 54.14% of words in the title text were longer than six letters, which is notably higher than the respective percentages for the idea texts (35.98%) and question answers (37.50%). The result for the titles is related to the frequent utilization of compound words. Compound words consist of several nouns attached to each other, and their extensive use is a peculiarity of the German language. While some of the most common compound words are included in the German LIWC dictionary, less common compound words are not recognized (Meier et al., 2019). This was reflected in the title texts, with only 54.89% of the target words identified.
The percentage of words longer than six letters was at roughly the same level for the idea texts and answers to the elaboration questions, at 35.98% and 37.50%, indicating less frequent use of long compound words and consistent language across the process steps. Accordingly, the percentage of target words captured by the dictionary was higher for the idea texts and answers than for titles, at 79.31% and 76.30%, respectively. This puts them close to the average German LIWC dictionary capture rate of 83%.
Comparing the idea texts and the answers to the elaboration questions, sentences in the answers were almost half as long as those in the idea texts (8.88 vs. 15.42 words per sentence). This discrepancy is attributable to the mixture of key phrases and rather short sentences in the answers to the questions. Remarkably, though, the answers to the elaboration questions, at 36.40 words per idea, were longer than the idea texts, at an average of 32.38 words. Thus, the elaboration questions contributed substantially to the overall idea generation process.
The idea texts, answers to questions, and titles are characterized by strong analytical thinking (opposed to narrative thinking) with each over 97-scale points. Accordingly, during the idea generation process, the idea contributors predominantly used a formal, categorical style of textual language that is associated with increased abstract thinking and a logical, complex way of cognitive processing. Individuals with such a predisposition in processing information tend to analyze, break down problems and are more likely to weigh facts (Boyd & Pennebaker, 2015; Pennebaker et al., 2014).
The texts of the ideas with 68.78 points and the titles with 69.14 points on the clout scale were almost on par. The answers to the questions were somewhat lower with 60.02 points. Compared to the grand mean clout score of 60.63 (SD = 14.86) points from the German LIWC dictionary (Meier et al., 2019), these scores reflect a somewhat greater level of contributors’ competence and confidence in the text. In addition, individuals who score high on the clout dimension usually use more outward words and are more focused on the people they interact with than on themselves. This type of interaction has been found to be conducive in the context of online discussion forums supporting the type of interaction and engagement required to build knowledge (Adaji & Olakanmi, 2019; Kacewicz et al., 2014; Moore et al., 2021).
Authenticity scores for the text segments ranged from 33.31 (idea texts) and 41.20 (answers to elaboration questions) to 57.71 (titles). Compared to the grand mean value of 48.34 (SD = 24.41) (Meier et al., 2019), the value for the idea texts was relatively low and the value for the titles was relatively high. In order to understand these values, it is helpful to look at the base rates of word usage from which the grand mean was calculated. The data sets of “Expressive writing” (76.73 points) and German-speaking “Reddit” (35.09 points) formed the ends of the authenticity continuum. Reddit is a social media platform where individuals discuss and exchange ideas on various subject matters (e.g., sports, politics, and leisure) in the form of threads and forums. Expressive writing, on the other hand, comprised samples from cross-sectional and longitudinal studies in which individuals wrote about profoundly personal issues in stream-of-consciousness mode (Meier et al., 2019). This puts the idea texts at about the same level as social media, which reflects informal, netspeak language (Meier et al., 2019). Nevertheless, the relatively low values are related to a rather reserved and distanced form of communication.
Looking at the scale for emotional tone, it is noticeable that the answers to the elaboration questions reflected a lack of emotional terms (39.82 points). In comparison, the scores for the idea texts (83.54) and the titles (88.32) showed a rather high occurrence of positive emotional expressions, suggesting that the idea contributors were more emotionally involved during these steps in the idea generation process.
7.2 Evaluation of Idea Generation Process
To explore the phenomena of interest, namely engagement and social presence, we developed two animated mock-ups simulating the process of idea generation. We opted for the simulation of two context-based scenarios, as this allows us to obtain the necessary power for a statistical analysis, i.e., the required number of participants, in a resource-oriented manner. In these two independent simulations, one showed a person generating an idea being facilitated by the developed CA. The other simulation showed a person using a standard submission form. The latter serves as a control condition that corresponds to the conventional method for idea generation on idea platforms. The topic of the idea generation was presented to the participants and was identical to that of the open call (“Mobility of the Future”) to perform the idea evaluation (see Sect. 7.1.). In both simulations the same idea was presented, which was obtained through the open call. Participants were informed about the research-only data processing and comprehensively introduced to the context of the study. Next, participants were randomly shown one mock-up simulation and asked to answer a subsequent questionnaire.
Participants were recruited through two platforms (i.e., poll-pool.com, prolific.co), enabling researchers to identify suitable participants while ensuring a diversified sample. The platforms allow participants to earn points by taking part in studies, which can in turn be spent to recruit participants for their own studies or be redeemed monetarily. The platforms also ensure that surveys are conducted correctly; e.g., respondents who fall short of the median completion time are penalized or even excluded. This makes it more likely that participants provide complete responses rather than rushing through the survey or completing it incorrectly. Moreover, Prolific respondents tend to provide reliable data and prove to be more honest compared to participants from other platforms (Peer et al., 2017).
Nevertheless, we manually checked the data for discrepancies (i.e., very short completion times, identical and extreme answers), but did not have to disregard any subjects. To collect data, we utilized perceptual measures for engagement and social presence (see Appendix 1) in an ex-post survey. The questionnaire items for each construct were adapted from existing studies (i.e., Gefen & Straub, 2003; Webster & Ho, 1997), which had delivered reliable results before and had been modified for different contexts (e.g., Cyr et al., 2009; O’Brien et al., 2018). The original wording was adjusted to reflect the features of the subject of this study. All items were measured on a five-point Likert scale with response options from 1 (strongly disagree) to 5 (strongly agree).
A total of 221 participants answered the questionnaire. Of these, 115 (44.3% female, 55.7% male, mean age 29.24 years, SD = 10.14) responded to the CA condition; 13 of them had relevant experience with idea generation processes, 100 had none, and two did not respond. In the standard submission form condition, 106 participants (48.1% female, 51.9% male, mean age 29.6 years, SD = 10.66) answered the questionnaire; 18 of them had relevant experience with idea generation processes, 85 had none, and three did not answer this question.
After examining the data and frequencies of valid responses, we inspected the descriptive statistics, i.e., inter-item correlations, medians, means, and standard deviations of the scales (see Table 5). The reliability coefficients of the constructs, measured by Cronbach’s α, were greater than 0.8, indicating satisfactory internal consistency (Nunnally & Bernstein, 2008). Graphical analysis and the Shapiro–Wilk test indicated that the data were not normally distributed. The correlation coefficients between variables for both conditions are displayed in Table 4. In the CA condition, negative correlations were found between engagement and gender (r = -0.22, p < 0.05) and between social presence and gender (r = -0.25, p < 0.01). Engagement and social presence were positively correlated in both conditions (r = 0.62, p < 0.01 for the CA; r = 0.59, p < 0.01 for the standard submission form).
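The internal-consistency check described above can be sketched as follows; the function name and toy data are our own, and a real analysis would operate on the full item-level survey responses:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of item score lists, one list per item,
    each holding that item's answers across all respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    respondents = list(zip(*items))            # one row per respondent
    totals = [sum(row) for row in respondents] # respondent scale totals
    item_vars = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_vars / variance(totals))
```

Two perfectly correlated items yield `cronbach_alpha([[1, 2, 3], [1, 2, 3]]) == 1.0`; values above 0.8, as reported for our constructs, indicate that the items of a construct measure the same underlying dimension consistently.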
A Mann–Whitney U test was conducted to determine whether engagement and perceived social presence differed between the conversational agent and standard submission form conditions (see Table 5). The test showed a statistically significant difference between the conditions in engagement (U = 3497.00, Z = -5.47, p < 0.001, r = -0.37) and perceived social presence (U = 2525.50, Z = -7.57, p < 0.001, r = -0.51). The effect sizes of the difference between means can be considered medium (|r| = 0.37) and large (|r| = 0.51) (Cohen, 1992).
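The test statistic and the effect size r = |Z|/√N can be sketched in a few lines; this is a simplified illustration using the normal approximation without tie correction, not the exact procedure of a statistics package:

```python
from math import sqrt

def rankdata(values):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Return (U, z, r) for two independent samples."""
    n1, n2 = len(a), len(b)
    ranks = rankdata(list(a) + list(b))
    r1 = sum(ranks[:n1])
    u1 = r1 - n1 * (n1 + 1) / 2      # U statistic of sample a
    u = min(u1, n1 * n2 - u1)        # conventionally reported U
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # normal approximation, no ties
    z = (u1 - mu) / sigma
    r = abs(z) / sqrt(n1 + n2)       # effect size, judged per Cohen (1992)
    return u, z, r
```

For fully separated toy samples such as `[1, 2, 3]` versus `[4, 5, 6]`, U is 0 and |r| approaches its maximum, whereas overlapping samples yield U near n1·n2/2 and r near 0.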
Organizations strive to leverage external knowledge and expertise by applying open innovation approaches to promote their innovation capability. To manage idea platforms for the outside-in process in such a way that prospective contributors are motivated and supported to voluntarily submit ideas, and that the large number of emerging ideas can be efficiently selected, we propose a design for a CA-facilitated idea generation process. Building on the vast body of theoretical knowledge regarding the concept of facilitation, we derive design knowledge to determine purposeful characteristics and behavior of a CA facilitator. By evaluating the instantiated design knowledge in a dialog-based CA facilitator for idea generation, we provide results regarding the nature of the ideas and the characteristics of the process. The evaluation with knowledgeable experts and a computerized linguistic analysis revealed homogeneous idea contributions with a consistent level of detail, a satisfactory level of comprehensibility, and a high analytical as well as logical character comprising outward-looking words. Furthermore, the questionnaire-based evaluation of the idea generation process showed that CA facilitation induces higher perceived engagement and social presence among contributors compared to conventional form-based interfaces. In the following, we elaborate on the multifaceted implications of these findings.
First, the presented design demonstrates the integration of the facilitation concept into state-of-the-art CA technology. For this purpose, following Bostrom et al. (1993), we considered all facilitative acts to support idea contributors during task processing (task), to accomplish associated goals (process), and to conduct a socio-emotional interaction. Thus, we integrate and extend previous approaches to CA facilitation, as these have so far examined different aspects in isolation, such as supportive behavior for idea generation (Wang et al., 2007) and proactive prompting of desired behavior, e.g., the elaboration and reformulation of input (Louvet et al., 2017; Tegos et al., 2014, 2015). As the results of the evaluation of contributors’ textual language style suggest that CA facilitation is related to a fact-based enrichment of information and that idea contributors are emotionally engaged and apply problem-solving and analytical thinking, we infer that CA facilitation may have a positive influence on idea generation in the context of open innovation. As a supplementary point, it should be noted that, in contrast to emotionality, the dimension of analytical thinking was consistent across all facilitative process steps. Consequently, the linguistic analysis of the text data indicates that the idea contributors used analytical writing for the idea text, albeit with a positive language style that suggests they were emotionally engaged. Under the given conditions, this can be attributed to the CA’s goal- and productivity-oriented behavioral capabilities. Furthermore, the data lend support to the finding that idea contributors were more focused on others than on themselves during the interaction when generating the idea text (Moore et al., 2021). This is a promising finding, as it may indicate that humans focus on their interlocutor in this context even when the partner is deliberately artificial but uses human language patterns.
Second, our results show that the idea generation process can be designed in such a way that idea contributors are more engaged than with conventional form-based interfaces. This is a valuable insight in the context of open innovation processes (e.g., crowdsourcing, idea contests), as organizations struggle to motivate voluntary and unpaid idea contributors to start and complete submissions (Bretschneider, 2012). Besides manipulating the presentation of the task and goal (Benz et al., 2019; T. de Vreede et al., 2013), implementing a task-focused CA that facilitates the idea generation process in a human-like fashion constitutes an additional effective method to increase users’ engagement. This insight is supported by the focus group participants, who stated that CA facilitation reduced their initial hesitation to initiate the idea generation process and provided goal-oriented guidance in the process. In addition, the statistical analysis of the survey following the simulated idea generation process showed significant differences between the CA-facilitated and non-facilitated idea generation process in terms of engagement and perceived social presence. That engagement and social presence correlate is not unexpected, as the concepts are closely linked. Interestingly, however, the two conditions differed with respect to the significance of the correlations of the two constructs with gender, which could be relevant for the further development of individualized CA behavior.
Third, the generated design knowledge and the design entity in the form of the CA facilitator provide a novel approach to enhancing the efficiency of idea selection through an improved idea generation process. More specifically, supporting idea contributors reduces the likelihood of a heterogeneous pool of ideas with little detail, low elaboration, and poor implementation potential. This idea generation approach provides organizations with the option to flexibly adjust the process and the CA’s facilitative acts to tailor the support of contributors to a specific task and to determine a pre-defined set of required information. Accordingly, the facilitation of the idea generation process can serve as a preparatory step for a systematic idea selection process. Thereby, our findings address relevant questions about organizations’ efficient management of large numbers of collected ideas with restricted resources in the context of open innovation initiatives (Blohm et al., 2013; Merz, 2018).
8.1 Contribution to Theory
Our results contribute to the literature on CAs, collaboration, and open innovation. In terms of research on CAs, we provide a blueprint for implementing the facilitation concept in CAs by considering all facilitative acts to achieve effective one-to-one support for individuals working on cognitively demanding tasks. In addition, we present an approach to elevating user engagement by designing dialogues with micro and macro scripts that create a balanced division between task-focused and socio-emotional interaction. The results of our study also have implications for research on open innovation. The presented method for idea generation on idea platforms represents an approach to effectively involve and engage idea contributors. CA facilitation therefore promises to serve as an additional mechanism to leverage user engagement and gather complete and elaborated submissions from voluntary contributors. With reference to collaboration research, this study contributes to investigations addressing the shift from static automated facilitation systems in accordance with the “Facilitator-in-a-Box” paradigm of Briggs et al. (2013) toward more proactive, flexible, and intelligent conversation-based systems. More specifically, our results suggest that increasing the flexibility of support (e.g., answering questions about the task on demand) within a facilitated and structured process yields improved conditions for individuals’ task accomplishment.
By completing and communicating this DSR project, we present a nascent design theory of the type “design and action” (Gregor, 2006) with abstracted design knowledge in the form of four DPs. As this prescriptive design knowledge defines functioning and construction for the class of artifacts “conversational AI facilitation”, it constitutes a mid-range theory that combines theoretical insights related to facilitation with solving a concrete problem through the implementation of an artifact (Kuechler & Vaishnavi, 2012). The abstractness and balanced projectability of the generated design knowledge allow its instantiation for similar artifacts (e.g., intelligent voice-based facilitation systems) in related domains (Vom Brocke et al., 2020). Therefore, the DPs can be reused to implement a related artifact in contexts where companies and institutions also depend on voluntary, substantive, and understandable textual descriptions of individuals’ ideas and/or concerns. Accordingly, our insights can be utilized, inter alia, for customer service and citizen participation, to document customer requests or ideas from citizens and ensure an efficient subsequent processing of contributions.
8.2 Contribution to Practice
Furthermore, we contribute to practice by presenting a feasible and implementable concept for automated facilitation with CAs on idea platforms, as well as in related domains, with the goal of achieving more elaborate and detailed outcomes. The CA presented in this study can be adjusted and applied on idea platforms to facilitate individuals in the idea generation process. This helps overcome the limited scalability of human facilitation on digital platforms. CA facilitation can therefore support platform providers and organizations in managing the process of involving and engaging external idea contributors in their innovation processes. Organizations might thereby increase voluntary participation by idea contributors, as the idea generation process is more appealing than standard submission forms. Moreover, for handling the large pool of submissions in outside-in open innovation processes, e.g., idea competitions or tournaments, the resulting structured and detailed submissions help to efficiently select promising ideas.
8.3 Limitations and Future Research
Despite the promising results, our study is not without limitations. We acknowledge that simulating the idea generation process to evaluate user engagement and perceived social presence limits the conclusiveness of our results. Nevertheless, we chose this technique as part of our iterative DSR approach, since it enabled us to achieve the sample size necessary for inferential statistical analyses. In addition, this approach allowed us to circumvent possible influences of NLP-related flaws on the results. In that regard, despite our efforts to base our CA facilitation on the best possible language model, we discovered in the preceding evaluation phase that in some cases the CA did not respond correctly to users’ utterances, which may be reflected in the results but is not attributable to the facilitation concept as such. We are confident that mock-up-based evaluation, a method widely used in interaction design research, is suitable for drawing statistically substantiated conclusions. To deepen insights into the effectiveness of the generated design knowledge, future studies should implement a CA facilitator in an organization and analyze its impact on the operational efficiency of assessing and selecting external ideas in a longitudinal evaluation setting. In addition, while we were able to measure engagement and social presence through the questionnaire, we did not examine how participants perceived the facilitative support provided by the CA, e.g., their satisfaction with the idea generation process and its outcome. In this context, future research should examine how CA facilitation is subjectively perceived. In particular, it should be investigated to what extent the support provided by the CA is in line with the needs of prospective idea contributors, and what opportunities for adaptation exist.
One promising avenue for future work in this context is to investigate the influence of flexibility during facilitated idea generation by allowing contributors to choose the sequence of steps to produce creative ideas (Amabile et al., 2018). Finally, we based our text analysis on a proprietary algorithm to assess and interpret the characteristics of contributed idea texts. Based on extensive previous language research, we are confident that its stated validity holds and that it is applicable to our research, with the understanding that text analysis is always context dependent. Future studies should employ computer-linguistic text analysis in their evaluations to further validate this strategy in the realm of CA-based facilitation. Additionally, this approach could be adapted to adjust the facilitation behavior of CAs to users. It is conceivable that, based on real-time analysis of input, users could be prompted by the CA to formulate their content differently (e.g., more analytically) to achieve desired outcomes.
As part of a multi-cycle DSR research project, this study presents a solution to elevate organizational idea generation processes on idea platforms with AI-based CA technology. While idea generation facilitation is critical to innovation, organizations struggle to leverage this potential on idea platforms. So far, large amounts of ambiguous, imprecise, and incomplete ideas hamper organizations in selecting ideas with potential for further processing. To address these challenges, we built on the facilitation concept to iteratively design and instantiate a scalable CA that facilitates individuals during their idea generation. Evaluation results suggest that the natural, dialog-based interaction encourages and engages idea contributors to provide clear, detailed, and complete ideas, which deliver a suitable grounding for the essential follow-up selection of textual ideas in organizations.
Acerbi, E., Pérez, G., & Stella, F. (2010). Hybrid Syntactic-Semantic Reranking for Parsing Results of ECAs Interactions Using CRFs. In H. Loftsson, E. Rögnvaldsson, & S. Helgadóttir (Eds.), Advances in Natural Language Processing (pp. 15–26). Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-14770-8_4.
Adaji, I., & Olakanmi, O. (2019). Evolution of Emotions and Sentiments in an Online Learning Community. Proceedings of Artificial Intelligence in Education (AIED).
Adams, S., Arel, I., Bach, J., Coop, R., Furlan, R., Goertzel, B., Hall, J. S., Samsonovich, A., Scheutz, M., Schlesinger, M., Shapiro, S. C., & Sowa, J. (2012). Mapping the Landscape of Human-Level Artificial General Intelligence. AI Magazine, 33(1), 25. https://doi.org/10.1609/aimag.v33i1.2322
Amabile, T. M., Collins, M. A., Conti, R., Phillips, E., Picariello, M., Ruscio, J., & Whitney, D. (2018). Creativity in context: Update to the social psychology of creativity. Routledge.
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for Human-AI Interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300233.
Araujo, T. (2018). Living Up to the Chatbot Hype: The Influence of Anthropomorphic Design Cues and Communicative Agency Framing on Conversational Agent and Company Perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
Ashktorab, Z., Jain, M., Liao, Q. V., & Weisz, J. D. (2019). Resilient Chatbots: Repair Strategy Preferences for Conversational Breakdowns. In 2019 CHI Conference on Human Factors in Computing Systems.
Attfield, S., Kazai, G., Lalmas, M., & Piwowarski, B. (2011). Towards a Science of User Engagement (Position Paper). In WSDM Workshop on User Modelling for Web Applications (WSDM ’11). New York, NY: ACM.
Bailenson, J. N., & Blascovich, J. (2004). Avatars. In W. S. Bainbridge (Ed.), Berkshire Encyclopedia of Human-Computer Interaction (pp. 64–68). Berkshire Publishing Group LLC.
Bansemir, B., & Neyer, A. K. (2009). From Idea Management Systems to Interactive Innovation Management Systems: Designing for Interaction and Knowledge Transfer. In H. R. Hansen, D. Karagiannis, & H. Fill (Eds.), Proceedings of the International Conference of Business Informatics (Tagungsbände der WI 2009), Band 1 (pp. 861–870). Österreichische Computer Gesellschaft, Wien.
Barbier, G., Zafarani, R., Gao, H., Fung, G. P. C., & Liu, H. (2012). Maximizing benefits from crowdsourced data. Computational & Mathematical Organization Theory, 18(3), 257–279. https://doi.org/10.1007/s10588-012-9121-2
Benz, C., Zierau, N., & Satzger, G. (2019). Not all tasks are alike: Exploring the effect of task representation on user engagement in crowd-based idea evaluation. In 27th European Conference on Information Systems (ECIS).
Bittner, E. A. C., Küstermann, G. C., & Tratzky, C. (2019a). The Facilitator Is a Bot: Towards a Conversational Agent for Facilitating Idea Elaboration on Idea Platforms. In Proceedings of the 27th European Conference on Information Systems (ECIS). Stockholm & Uppsala, Sweden.
Bittner, E. A. C., & Leimeister, J. M. (2014). Creating Shared Understanding in Heterogeneous Work Groups: Why It Matters and How to Achieve It. Journal of Management Information Systems, 31(1), 111–144. https://doi.org/10.2753/MIS0742-1222310106
Bittner, E. A. C., Oeste-Reiß, S., & Leimeister, J. M. (2019b). Where is the Bot in our Team? Toward a Taxonomy of Design Option Combinations for Conversational Agents in Collaborative Work. In T. Bui (Ed.), 52nd Hawaii Conference on System Sciences (HICSS). Maui, USA.
Bjelland, O. M., & Wood, R. C. (2008). An inside view of IBM’s “Innovation Jam.” MIT Sloan Management Review, 50(1), 32-40.
Blohm, I., Leimeister, J. M., & Krcmar, H. (2013). Crowdsourcing: How to Benefit from (Too) Many Great Ideas. MIS Quarterly Executive, 12(4), 199–211.
Bogers, M., Chesbrough, H., & Moedas, C. (2018). Open Innovation: Research, Practices and Policies. California Management Review, 60(2), 5–16.
Bostrom, R. P., Anson, R., & Clawson, V. K. (1993). Group Facilitation and Group Support Systems. In L. Jessup & J. Valacich (Eds.), Group Support Systems: New Perspectives (pp. 146–168). Macmillan.
Boudreau, K. J., & Lakhani, K. R. (2013). Using the Crowd as an Innovation Partner. Harvard Business Review, 91(4), 60–69.
Bowers, C. A., Pharmer, J. A., & Salas, E. (2000). When Member Homogeneity is Needed in Work Teams: A Meta-Analysis. Small Group Research, 31(3), 305–327. https://doi.org/10.1177/104649640003100303
Boyd, R. L., & Pennebaker, J. W. (2015). A way with Words: Using Language for Psychological Science in the Modern Era. In C. V. Dimofte, C. P. Haugtvedt, & R. F. Yalch (Eds.), Consumer Psychology in a Social Media World (pp. 222–236). Routledge/Taylor & Francis Group.
Boyd, R. L. (2017). Psychological Text Analysis in the Digital Humanities. In S. Hai-Jew (Ed.), Data Analytics in Digital Humanities (pp. 161–189). Springer International Publishing. https://doi.org/10.1007/978-3-319-54499-1_7.
Brandtzaeg P.B., & Følstad A. (2017). Why People Use Chatbots. In I. Kompatsiaris et al. (Eds), Internet Science. INSCI 2017. Lecture Notes in Computer Science, vol 10673. Springer, Cham. https://doi.org/10.1007/978-3-319-70284-1_30.
Bretschneider, U. (2012). Ideen-Communities. In Die Ideen-Community zur Integration von Kunden in den Innovationsprozess: Empirische Analysen und Implikationen (pp. 33–62). Gabler Verlag. https://doi.org/10.1007/978-3-8349-7173-9_3
Briggs, R. O., Adkins, M., Mittleman, D., Kruse, J., Miller, S., & Nunamaker, J. F. (1998). A Technology Transition Model Derived from Field Investigation of GSS Use aboard the U.S.S. CORONADO. Journal of Management Information Systems, 15(3), 151–195. https://doi.org/10.1080/07421222.1998.11518217
Briggs, R. O., Kolfschoten, G. L., de Vreede, G.-J., Lukosch, S., & Albrecht, C. C. (2013). Facilitator-in-a-Box: Process Support Applications to Help Practitioners Realize the Potential of Collaboration Technology. J. of Management Information Systems, 29(4), 159–194. https://doi.org/10.2753/MIS0742-1222290406
Chandra, L., Seidel, S., & Gregor, S. (2015). Prescriptive Knowledge in IS Research: Conceptualizing Design Principles in Terms of Materiality, Action, and Boundary Conditions. In T. Bui (Ed.), 48th Hawaii International Conference on System Sciences (HICSS). USA: Kauai.
Chatterjee, S., Rana, N. P., & Dwivedi, Y. K. (2021). Assessing Consumers’ Co‐production and Future Participation On Value Co‐creation and Business Benefit: an F-P-C-B Model Perspective. Information Systems Frontiers. Advance online publication. https://doi.org/10.1007/s10796-021-10104-0
Chesbrough, H. W. (2003). Open Innovation: The new imperative for creating and profiting from technology. Harvard Business School Press.
Chesbrough, H. W., & Bogers, M. (2014). Explicating Open Innovation: Clarifying an Emerging Paradigm for Understanding Innovation. In H. W. Chesbrough, W. Vanhaverbeke, & J. West (Eds.), New Frontiers in Open Innovation (pp. 3–28). Oxford University Press.
Chesbrough, H. W., & Brunswicker, S. (2014). A Fad or a Phenomenon? The Adoption of Open Innovation Practices in Large Firms. Research Technology Management, 57(2), 16–25. https://doi.org/10.5437/08956308X5702196
Clawson, V. K., & Bostrom, R. P. (1996). Research-driven facilitation training for computer-supported environments. Group Decision and Negotiation, 5(1), 7–29.
Clawson, V. K., Bostrom, R. P., & Anson, R. (1993). The Role of the Facilitator in Computer-Supported Meetings. Small Group Research, 24(4), 547–565. https://doi.org/10.1177/1046496493244007
Cohen, J. (1992). A Power Primer. Psychological Bulletin, 112(1), 155–159.
Corney, J. R., Sanchez, C. T., Jagadeesan, A. P., & Regli, W. C. (2009). Outsourcing Labour to the Cloud. International Journal of Innovation and Sustainable Development, 4(4), 294–313. https://doi.org/10.1504/IJISD.2009.033083
Creswell, J. W., Plano Clark, V. L., Gutmann, M., & Hanson, W. (2003). Advanced Mixed Methods Research Designs. In Handbook of Mixed Methods in Social and Behavioural Research. Thousand Oaks.
Cricelli, L., Grimaldi, M., & Vermicelli, S. (2021). Crowdsourcing and Open Innovation: A Systematic Literature Review, an Integrated Framework and a Research Agenda. Review of Managerial Science, 1–42. https://doi.org/10.1007/s11846-021-00482-9
Cyr, D., Hassanein, K., Head, M., & Ivanov, A. (2007). The Role of Social Presence in Establishing Loyalty in e-Service Environments. Interacting with Computers, 19(1), 43–56. https://doi.org/10.1016/j.intcom.2006.07.010
Cyr, D., Head, M., Larios, H., & Pan, B. (2009). Exploring Human Images in Website Design: A Multi-Method Approach. MIS Quarterly, 539–566. https://doi.org/10.2307/20650308
Daft, R. L., & Lengel, R. H. (1986). Organizational Information Requirements, Media Richness and Structural Design. Management Science, 32(5), 554–571. https://doi.org/10.1287/mnsc.32.5.554
Dean, D. L., Hender, J. M., Rodgers, T. L., & Santanen, E. L. (2006). Identifying Quality, Novel, and Creative Ideas: Constructs and Scales for Idea Evaluation. Journal of the Association for Information Systems, 7(10), 646–699. https://doi.org/10.17705/1jais.00106
Dellermann, D., Lipusch, N., & Li, M. (2018). Combining Humans and Machine Learning: A Novel Approach for Evaluating Crowdsourcing Contributions in Idea Contests. In Multikonferenz Wirtschaftsinformatik (MKWI). Lüneburg, Germany.
Dennis, A. R., Nunamaker, J. F., & Vogel, D. R. (1990). A Comparison of Laboratory and Field Research in the Study of Electronic Meeting Systems. Journal of Management Information Systems, 7(3), 107–135. https://doi.org/10.1080/07421222.1990.11517899
Di Gangi, P. M., & Wasko, M. (2009). Open Innovation through Online Communities. In W. R. King (Ed.), Knowledge Management and Organizational Learning (pp. 206–213). Springer.
Diederich, S., & Brendel, A. B. (2019). On Conversational Agents in Information Systems Research: Analyzing the Past to Guide Future Work. In Internationale Tagung Wirtschaftsinformatik.
Diederich, S., Brendel, A. B., Lichtenberg S., & Kolbe, L. (2019). Design for Fast Request Fulfillment or Natural Interaction? Insights from an Experiment with a Conversational Agent. In 27th European Conference on Information Systems (ECIS).
Drechsler, A., & Hevner, A. R. (2018). Utilizing, Producing, and Contributing Design Knowledge in DSR Projects. In S. Chatterjee, K. Dutta, & R. P. Sundarraj (Eds.), Lecture Notes in Computer Science. Designing for a Digital and Globalized World (Vol. 10844, pp. 82–97). Springer International Publishing. https://doi.org/10.1007/978-3-319-91800-6_6
Elshan, E., Zierau, N., Engel, C., Janson, A., & Leimeister, J. M. (2022). Understanding the Design Elements Affecting User Acceptance of Intelligent Agents: Past, Present and Future. Information Systems Frontiers. Advance online publication. https://doi.org/10.1007/s10796-021-10230-9
Fjermestad, J., & Hiltz, S. R. (2000). Group Support Systems: A Descriptive Evaluation of Case and Field Studies. Journal of Management Information Systems, 12(3), 115–159. https://doi.org/10.1080/07421222.2000.11045657
Følstad, A., Skjuve, M., & Brandtzaeg, P. B. (2019). Different Chatbots for Different Purposes: Towards a Typology of Chatbots to Understand Interaction Design. In S. S. Bodrunova, O. Koltsova, A. Følstad, H. Halpin, P. Kolozaridi, L. Yuldashev, A. Smoliarova, & H. Niedermayer (Eds.), Lecture Notes in Computer Science. Internet Science (Vol. 11551, pp. 145–156). Springer International Publishing. https://doi.org/10.1007/978-3-030-17705-8_13
Füller, J., Matzler, K., & Hoppe, M. (2008). Brand Community Members as a Source of Innovation. Product Innovation Management, 25(6), 608–619. https://doi.org/10.1111/j.1540-5885.2008.00325.x
Gassmann, O. (2006). Opening up the Innovation Process: Towards an Agenda. R&D Management, 36(3), 223–228.
Gefen, D., & Straub, D. W. (1997). Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model. MIS Quarterly, 21(4), 389. https://doi.org/10.2307/249720
Gefen, D., & Straub, D. (2003). Managing user trust in B2C e-services. E-Service, 2(2), 7–24.
Ghose, S., & Barua, J. J. (2013). Toward the Implementation of a Topic Specific Dialogue Based Natural Language Chatbot as an Undergraduate Advisor. In International Conference on Informatics, Electronics & Vision (ICIEV). Dhaka, Bangladesh.
Gnewuch, U., Morana, S., Adam, M. T. P., & Maedche, A. (2018). Faster Is Not Always Better: Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction. In P. M. Bednar, U. Frank, & K. Kautz (Eds.), 26th European Conference on Information Systems (ECIS). UK: Portsmouth.
Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards Designing Cooperative and Social Conversational Agents for Customer Service. In Y. J. Kim, R. Agarwal, & J. K. Lee (Eds.), 38th International Conference on Information Systems (ICIS). South Korea: Seoul.
Gong, L. (2008). How Social is Social Responses to Computers? The Function of the Degree of Anthropomorphism in Computer Representations. Computers in Human Behavior, 24(4), 1494–1509. https://doi.org/10.1016/j.chb.2007.05.007
Gregor, S. (2006). The Nature of Theory in Information Systems. MIS Quarterly, 30(3), 611–642. https://doi.org/10.2307/25148742
Gregor, S., & Hevner, A. R. (2013). Positioning and Presenting Design Science Research for Maximum Impact. MIS Quarterly, 37(2), 337–355. https://doi.org/10.25300/MISQ/2013/37.2.01
Grudin, J., & Jacques, R. (2019). Chatbots, Humbots, and the Quest for Artificial General Intelligence. In S. A. Brewster, G. Fitzpatrick, A. L. Cox, & V. Kostakos (Eds.), CHI’ 2019 Conference on Human Factors in Computing Systems. Scotland: Glasgow.
Haller, J. B., Velamuri, V. K., Schneckenberg, D., & Möslein, K. M. (2017). Exploring the Design Elements of Open Evaluation. Journal of Strategy and Management, 10(1), 40–65. https://doi.org/10.1108/JSMA-05-2015-0039
Hansen, M. T., & Birkinshaw, J. (2007). The Innovation Value Chain. Harvard Business Review, 85(6), 121–130.
Hansen, M. T., & Pries-Heje, J. (2017). Value Creation in Knowledge Networks. Five design principles. Scandinavian Journal of Information Systems, 29(2), 61–79.
Harper, R. H. R. (2019). The Role of HCI in the Age of AI. International Journal of Human-Computer Interaction, 35(15), 1331–1344. https://doi.org/10.1080/10447318.2019.1631527
Hassanein, K., & Head, M. (2007). Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. International Journal of Human-Computer Studies, 65(8), 689–708. https://doi.org/10.1016/j.ijhcs.2006.11.018
Hilgers, D., & Ihl, C. (2010). Citizensourcing: Applying the Concept of Open Innovation to the Public Sector. International Journal of Public Participation, 4(1), 68–88.
Hill, J., Randolph Ford, W., & Farreras, I. G. (2015). Real Conversations with Artificial Intelligence. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026
Holle, M., Elsesser, L., Schuhmacher, M., & Lindemann, U. (2016). How to Motivate External Open Innovation Partners: Identifying Suitable Measures. In Portland International Conference on Management of Engineering and Technology (PICMET). Honolulu, USA.
Humphreys, A., & Wang, R.J.-H. (2018). Automated Text Analysis for Consumer Research. Journal of Consumer Research, 44(6), 1274–1306. https://doi.org/10.1093/jcr/ucx104.
Io, H. N., & Lee, C. B. (2017). Chatbots and Conversational Agents: A Bibliometric Analysis. In 2017 IEEE International Conference on Industrial Engineering & Engineering Management (pp. 215–219). IEEE. https://doi.org/10.1109/IEEM.2017.8289883
Ito, T., Hadfi, R., & Suzuki, S. (2021). An Agent that Facilitates Crowd Discussion. Group Decision and Negotiation. Advance online publication. https://doi.org/10.1007/s10726-021-09765-8
Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018). Evaluating and Informing the Design of Chatbots. In I. Koskinen, Y.-K. Lim, T. C. Pargman, K. K. N. Chow, & W. Odom (Eds.), Designing Interactive Systems Conference 2018. China: Hong Kong.
Janssen, A., Passlick, J., Rodríguez Cardona, D., & Breitner, M. H. (2020). Virtual Assistance in Any Context: A Taxonomy of Design Elements for Domain-specific Chatbots. Business & Information Systems Engineering, 62(3), 211–225. https://doi.org/10.1007/s12599-020-00644-1
Jenkins, M.-C., Churchill, R., Cox, S., & Smith, D. (2007). Analysis of User Interaction with Service Oriented Chatbot Systems. Lecture Notes in Computer Science (LNCS). Springer.
Johannsen, F., Leist, S., Konadl, D., & Basche, M. (2018). Comparison of Commercial Chatbot Solutions for Supporting Customer Interaction. In P. M. Bednar, U. Frank, & K. Kautz (Eds.), 26th European Conference on Information Systems (ECIS). UK: Portsmouth.
Kacewicz, E., Pennebaker, J. W., Davis, M., Jeon, M., & Graesser, A. C. (2014). Pronoun Use Reflects Standings in Social Hierarchies. Journal of Language and Social Psychology, 33(2), 125–143. https://doi.org/10.1177/0261927X13502654
Kelley, J. F. (1983). An Empirical Methodology for Writing User-friendly Natural Language Computer Applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '83 (pp. 193–196). ACM. https://doi.org/10.1145/800045.801609
Kelly, G. G., & Bostrom, R. P. (1997). Facilitating the Socio-emotional Dimension in Group Support Systems Environments. Journal of Management Information Systems, 14(3), 23–44. https://doi.org/10.1145/212490.212499
Kim, S., Eun, J., Oh, C., Suh, B., & Lee, J. (2020). Bot in the Bunch: Facilitating Group Chat Discussion by Improving Efficiency and Participation with a Chatbot. In R. Bernhaupt, F. Mueller, D. Verweij, J. Andres, J. McGrenere, A. Cockburn, et al. (Eds.), CHI ’20 Conference on Human Factors in Computing Systems (pp. 1–13). USA: Honolulu.
Kim, Y. H., Kim, D. J., & Wachter, K. (2013). A Study of Mobile User Engagement (MoEN): Engagement Motivations, Perceived Value, Satisfaction, and Continued Engagement Intention. Decision Support Systems, 56, 361–370. https://doi.org/10.1016/j.dss.2013.07.002
King, A., & Lakhani, K. R. (2013). Using Open Innovation to Identify the Best Ideas. MIT Sloan Management Review, 55(1), 69–76.
Kipp, P., Wieck, E., Bretschneider, U., & Leimeister, J. M. (2013). 12 Years of GENEX Framework: What Did Practice Learn from Science in Terms of Web-Based Ideation? In Internationale Tagung Wirtschaftsinformatik. Leipzig, Germany.
Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmermann, J., et al. (2013). The Future of Crowd Work. In A. S. Bruckman, S. Counts, C. Lampe, & L. G. Terveen (Eds.), Computer Supported Cooperative Work (pp. 1301–1318). USA: San Antonio.
Knijnenburg, B. P., & Willemsen, M. C. (2016). Inferring Capabilities of Intelligent Agents from Their External Traits. ACM Transactions on Interactive Intelligent Systems, 6(4), 1–25.
Kornish, L. J., & Hutchison-Krupat, J. (2017). Research on Idea Generation and Selection: Implications for Management of Technology. Production and Operations Management, 26(4), 633–651. https://doi.org/10.2139/ssrn.2799432
Kosonen, M., Gan, C., Olander, H., & Blomqvist, K. (2013). My Idea is our Idea! Supporting User-driven Innovation Activities in Crowdsourcing. International Journal of Innovation Management, 3(17), 1–18. https://doi.org/10.1142/S1363919613400100
Kuechler, W., & Vaishnavi, V. (2012). A Framework for Theory Development in Design Science Research: Multiple Perspectives. Journal of the Association of Information Systems, 13(6), 395–423. https://doi.org/10.17705/1jais.00300
Kumar, R., & Rosé, C. P. (2014). Triggering Effective Social Support for Online Groups. ACM Transactions on Interactive Intelligent Systems, 3(4), 1–32.
Langan-Fox, J., Anglim, J., & Wilson, J. R. (2004). Mental Models, Team Mental Models, and Performance: Process, Development, and Future Directions. Human Factors and Ergonomics in Manufacturing, 14(4), 331–352. https://doi.org/10.1002/hfm.v14:4
Laumer, S., Gubler, F., Racheva, A., & Maier, C. (2019). Use Cases for Conversational Agents: An Interview-based Study. In 25th Americas Conference on Information Systems (AMCIS). Cancún, Mexico.
Lazzarotti, V., & Manzini, R. (2009). Different Modes of Open Innovation: A Theoretical Framework and an Empirical Study. International Journal of Innovation Management, 13(4), 615–636. https://doi.org/10.1142/S1363919609002443
Li, M., Kankanhalli, A., & Kim, S. H. (2016). Which Ideas Are More Likely to Be Implemented in Online User Innovation Communities? An Empirical Analysis. Decision Support Systems, 84, 28–40. https://doi.org/10.1016/j.dss.2016.01.004
Liao, L., Ma, Y., He, X., Hong, R., & Chua, T.-S. (2018). Knowledge-aware Multimodal Dialogue Systems. In S. Boll, K. M. Lee, J. Luo, W. Zhu, H. Byun, C. W. Chen, et al. (Eds.), 26th ACM International Conference on Multimedia (pp. 801–809). South Korea: Seoul.
Lieberman, H. (1997). Autonomous Interface Agents. In S. Pemberton (Ed.), CHI ’97 Conference on Human Factors in Computing Systems (pp. 67–74). USA: Atlanta.
Acerbi, E., Pérez, G., & Stella, F. (2010). Hybrid Syntactic-Semantic Reranking for Parsing Results of ECAs Interactions Using CRFs. In H. Loftsson, E. Rögnvaldsson, & S. Helgadóttir (Eds.), 7th International Conference on Natural Language Processing, IceTAL (pp. 15–26). Iceland: Reykjavik.
Louvet, J.-B., Duplessis, G. D., Chaignaud, N., Vercouter, L., & Kotowicz, J.-P. (2017). Modeling a Collaborative Task with Social Commitments. Procedia Computer Science, 112, 377–386.
Luger, E., & Sellen, A. (2016). "Like Having a Really Bad PA" The Gulf between User Expectation and Experience of Conversational Agents. In CHI ’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5286–5297), San Jose, California, USA.
Lüttgens, D., Pollok, P., Antons, D., & Piller, F. (2014). Wisdom of the crowd and capabilities of a few: Internal success factors of crowdsourcing for innovation. Journal of Business Economics, 84(3), 339–374.
Mayring, P. (2014). Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution. Retrieved from http://nbn-resolving.de/urn:nbn:de:0168-ssoar-395173. Accessed 11 Dec 2020.
McTear, M., Callejas, Z., & Griol, D. (2016). The Conversational Interface: Talking to Smart Devices. Springer Publishing Company.
Medhi Thies, I., Menon, N., Magapu, S., Subramony, M., & O’Neill, J. (2017). How Do You Want Your Chatbot? An Exploratory Wizard-of-Oz Study with Young, Urban Indians. In R. Bernhaupt, G. Dalvi, A. Joshi, D. K. Balkrishan, J. O'Neill, & M. Winckler (Eds.), Lecture Notes in Computer Science. Human-Computer Interaction - INTERACT 2017 (Vol.10513, pp. 441–459). Springer, Cham. https://doi.org/10.1007/978-3-319-67744-6_28.
Meier, T., Boyd, R. L., Pennebaker, J. W., Mehl, M. R., Martin, M., Wolf, M., & Horn, A. B. (2019). “LIWC auf Deutsch”: The Development, Psychometrics, and Introduction of DE-LIWC2015. https://doi.org/10.17605/OSF.IO/TFQZC
Merz, A. B. (2018). Mechanisms to Select Ideas in Crowdsourced Innovation Contests - A Systematic Literature Review and Research Agenda. In P. M. Bednar, U. Frank, & K. Kautz (Eds.), 26th European Conference on Information Systems (ECIS). UK: Portsmouth.
Möller, F., Guggenberger, T. M., & Otto, B. (2015). Towards a method for design principle development in information systems. In B. Donnellan, M. Helfert, J. Kenneally, D. E. VanderMeer, M. A. Rothenberger, & R. Winter (Eds.), Lecture Notes in Computer Science: 10th International Conference on Design Science Research in Information Systems and Technology, DESRIST (Vol. 9073). Dublin, Ireland.
Montero, C. A., & Araki, K. (2005). Enhancing computer chat: Toward a smooth user-computer interaction. In International Conference on Knowledge-Based and Intelligent Information and Engineering Systems (pp. 918–924). Springer, Berlin, Heidelberg.
Moore, R. L., Yen, C.-J., & Powers, F. E. (2021). Exploring the Relationship between Clout and Cognitive Processing in MOOC Discussion Forums. British Journal of Educational Technology, 52(1), 482–497. https://doi.org/10.1111/bjet.13033
Morrissey, K., & Kirakowski, J. (2013). ‘Realness’ in Chatbots: Establishing Quantifiable Criteria. In D. Hutchison, T. Kanade, J. Kittler, J. M. Kleinberg, F. Mattern, J. C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M. Y. Vardi, G. Weikum, & M. Kurosu (Eds.), Lecture Notes in Computer Science. Human Computer Interaction. Interaction Modalities and Techniques (Vol. 8007, pp. 87–96). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-39330-3_10.
Nass, C., & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Nguyen, T. H., Waizenegger, L., & Techatassanasoontorn, A. A. (2021). “Don’t Neglect the User!” – Identifying Types of Human-Chatbot Interactions and their Associated Characteristics. Information Systems Frontiers. Advance online publication. https://doi.org/10.1007/s10796-021-10212-x
Niederman, F., Beise, C. M., & Beranek, P. M. (1996). Issues and Concerns about Computer-Supported Meetings: The Facilitator’s Perspective. MIS Quarterly, 20(1), 1–22. https://doi.org/10.2307/249540
Nielsen, J. (1997). The Use and Misuse of Focus Groups. IEEE Software, 14(1), 94–95. https://doi.org/10.1109/52.566434
Nimavat, K., & Champaneria, T. (2017). Chatbots: An Overview. Types, Architecture, Tools and Future Possibilities. International Journal for Scientific Research & Development, 5(7), 1019–1026.
Nouri, E., Sim, R., Fourney, A., & White, R. W. (2020). Step-Wise Recommendation for Complex Task Support. In Proceedings of the 2020 Conference on Human Information Interaction and Retrieval (pp. 203–212). https://doi.org/10.1145/3343413.3377964.
Nunamaker, J., Derrick, D., Elkins, A., Burgoon, J., & Patton, M. (2011). Embodied Conversational Agent-Based Kiosk for Automated Interviewing. Journal of Management Information Systems, 28(1), 17–48. https://doi.org/10.2307/41304605
Nunnally, J. C., & Bernstein, I. H. (2008). Psychometric Theory (3rd ed., reprint). McGraw-Hill Series in Psychology. McGraw-Hill.
O’Brien, H. L., Cairns, P., & Hall, M. (2018). A Practical Approach to Measuring User Engagement with the Refined User Engagement Scale (UES) and New UES Short Form. International Journal of Human-Computer Studies, 112, 28–39. https://doi.org/10.1016/j.ijhcs.2018.01.004
O’Brien, H. L., & McKay, J. (2018). Modeling Antecedents of User Engagement. In The Handbook of Communication Engagement (chap. 6, p. 73 ff.). Wiley. https://doi.org/10.1002/9781119167600.ch6
Oliver, K. M., Houchins, J. K., Moore, R. L., & Wang, C. (2021). Informing Makerspace Outcomes Through a Linguistic Analysis of Written and Video-Recorded Project Assessments. International Journal of Science and Mathematics Education, 19(2), 333–354. https://doi.org/10.1007/s10763-020-10060-2
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research. Journal of Experimental Social Psychology, 70, 153–163. https://doi.org/10.1016/j.jesp.2017.01.006
Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A Design Science Research Methodology for Information Systems Research. Journal of Management Information Systems, 24(3), 45–77. https://doi.org/10.2753/MIS0742-1222240302
Pennebaker, J. W., Booth, R. J., Boyd, R. L., & Francis, M. E. (2015a). Linguistic Inquiry and Word Count: LIWC 2015. Austin, Texas. Pennebaker Conglomerates.
Pennebaker, J. W., Boyd, R. L., Jordan, K., & Blackburn, K. (2015b). The development and psychometric properties of LIWC2015. Austin, Texas. University of Texas at Austin.
Pennebaker, J. W., Chung, C. K., Frazee, J., Lavergne, G. M., & Beaver, D. I. (2014). When Small Words Foretell Academic Success: The Case of College Admissions Essays. PLoS ONE, 9(12), e115844. https://doi.org/10.1371/journal.pone.0115844
Perry-Smith, J. E., & Mannucci, P. V. (2017). From Creativity to Innovation: The Social Network Drivers of the Four Phases of the Idea Journey. The Academy of Management Review, 42(1), 53–79. https://doi.org/10.5465/amr.2014.0462
Piezunka, H., & Dahlander, L. (2015). Distant Search, Narrow Attention: How Crowding Alters Organizations’ Filtering of Suggestions in Crowdsourcing. Academy of Management Journal, 58(3), 856–880. https://doi.org/10.5465/amj.2012.0458
Pilny, A., McAninch, K., Slone, A., & Moore, K. (2019). Using Supervised Machine Learning in Automated Content Analysis: An Example Using Relational Uncertainty. Communication Methods and Measures, 13(4), 287–304. https://doi.org/10.1080/19312458.2019.1650166.
Poetz, M. K., & Schreier, M. (2012). The Value of Crowdsourcing: Can Users Really Compete with Professionals in Generating New Product Ideas? Journal of Product Innovation Management, 29(2), 245–256. https://doi.org/10.1111/j.1540-5885.2011.00893.x
Portela, M., & Granell-Canut, C. (2017). A new friend in our Smartphone? Observing Interactions with Chatbots in the search of emotional engagement. In Proceedings of the XVIII International Conference on Human Computer Interaction (pp. 1–7). Cancun, Mexico.
Poser, M., & Bittner, E. A. C. (2020). Hybrid Teamwork: Consideration of Teamwork Concepts to Reach Naturalistic Interaction between Humans and Conversational Agents. In N. Gronau, M. Heine, H. Krasnova, & K. Poustcchi (Eds.), Proceedings der 15. Internationalen Tagung Wirtschaftsinformatik (pp. 83–98). Potsdam, Germany.
von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S.-H. (2010). “It Doesn’t Matter What You Are!” Explaining Social Effects of Agents and Avatars. Computers in Human Behavior, 26(6), 1641–1650. https://doi.org/10.1016/j.chb.2010.06.012
Salomonson, N., Allwood, J., Lind, M., & Alm, H. (2013). Comparing Human-to-Human and Human-to-AEA Communication in Service Encounters. The Journal of Business Communication, 50(1), 87–116. https://doi.org/10.1177/0021943612465180
Schuetzler, R. M., Grimes, G. M., & Giboney, J. S. (2018). An investigation of conversational agent relevance, presence, and engagement. In Proceedings of the 24th Americas Conference on Information Systems, AMCIS 2018. New Orleans, LA, USA.
Schuetzler, R. M., Grimes, G. M., & Giboney, J. S. (2020). The Impact of Chatbot Conversational Skill on Engagement and Perceived Humanness. Journal of Management Information Systems, 37(3), 875–900. https://doi.org/10.1080/07421222.2020.1790204
Schuetzler, R. M., Grimes, G. M., Giboney, J. S., & Rosser, H. K. (2021). Deciding Whether and How to Deploy Chatbots. MIS Quarterly Executive, 20(1), 1–15.
Schulze, T., Indulska, M., Geiger, D., & Korthaus, A. (2012). Idea assessment in open innovation: A state of practice. In Proceedings of the 20th European Conference on Information Systems, ECIS 2012. Barcelona, Spain.
Schweitzer, F. M., Buchinger, W., Gassmann, O., & Obrist, M. (2012). Crowdsourcing: Leveraging Innovation through Online Idea Competitions. Research-Technology Management, 55(3), 32–38. https://doi.org/10.5437/08956308X5503055
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, G.-J., de Vreede, T., Druckenmiller, D., et al. (2018). Machines as Teammates: A Collaboration Research Agenda. In T. Bui (Ed.), 51st Hawaii International Conference on System Sciences, HICSS 2018. Hilton Waikoloa Village, Hawaii, USA.
Seeber, I., Bittner, E. A. C., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 1–22. https://doi.org/10.1016/j.im.2019.103174
Seeber, I., Merz, A., de Vreede, G.-J., Maier, R., & Weber, B. (2017). Convergence on Self-generated vs. Crowdsourced Ideas in Crisis Response: Comparing Social Exchange Processes and Satisfaction with Process. In T. Bui (Ed.), 50th Hawaii International Conference on System Sciences, HICSS 2017. Hilton Waikoloa Village, Hawaii, USA.
Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can Machines Talk? Comparison of Eliza with Modern Dialogue Systems. Computers in Human Behavior, 58, 278–295. https://doi.org/10.1016/j.chb.2016.01.004
Shawar, B. A., & Atwell, E. (2007). Chatbots: Are They Really Useful? Journal for Language Technology and Computational Linguistics, 22(1), 29–49.
Short, J., Williams, E., & Christie, B. (1976). The Social Psychology of Telecommunications. Wiley.
Sonnenberg, C., & Vom Brocke, J. (2012). Evaluations in the Science of the Artificial – Reconsidering the Build-Evaluate Pattern in Design Science Research. In D. Hutchison, T. Kanade, J. Kittler, J. M. Kleinberg, F. Mattern, J. C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M. Y. Vardi, G. Weikum, K. Peffers, M. Rothenberger, & B. Kuechler (Eds.), Lecture Notes in Computer Science. Design Science Research in Information Systems. Advances in Theory and Practice (Vol. 7286, pp. 381–397). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29863-9_28
Strohmann, T., Fischer, S., Siemon, D., Brachten, F., Lattemann, C., Robra-Bissantz, S., & Stieglitz, S. (2018). Virtual moderation assistance: creating design guidelines for virtual assistants supporting creative workshops. In M. Hirano, M. D. Myers, K. Kijima, M. Tanabu, & D. Senoo (Chairs), 22nd Pacific Asia Conference on Information Systems, PACIS 2018. Yokohama, Japan.
Tausczik, Y. R., & Pennebaker, J. W. (2010). The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. Journal of Language and Social Psychology, 29(1), 24–54. https://doi.org/10.1177/0261927X09351676
Tavanapour, N., & Bittner, E. A. C. (2018). Automated Facilitation for Idea Platforms: Design and Evaluation of a Chatbot Prototype. In J. Pries-Heje, S. Ram, & M. Rosemann (Eds.), Proceedings of the International Conference on Information Systems, ICIS 2018. San Francisco, CA, USA.
Tegos, S., Demetriadis, S., & Karakostas, A. (2014, September). Leveraging conversational agents and concept maps to scaffold students' productive talk. In 2014 International Conference on Intelligent Networking and Collaborative Systems (pp. 176–183). IEEE. https://doi.org/10.1109/INCoS.2014.66.
Tegos, S., Demetriadis, S., & Karakostas, A. (2015). Promoting Academically Productive Talk with Conversational Agent Interventions in Collaborative Learning Settings. Computers & Education, 87, 309–325. https://doi.org/10.1016/j.compedu.2015.07.014
Tremblay, M. C., Hevner, A. R., & Berndt, D. J. (2010). The Use of Focus Groups in Design Science Research. In A. Hevner & S. Chatterjee (Eds.), Integrated Series in Information Systems. Design Research in Information Systems (Vol. 22, pp. 121–143). Springer US. https://doi.org/10.1007/978-1-4419-5653-8_10
van Swol, L. M., & Kane, A. A. (2019). Language and Group Processes: An Integrative, Interdisciplinary Review. Small Group Research, 50(1), 3–38. https://doi.org/10.1177/1046496418785019
Venable, J. (2006). The role of theory and theorising in design science research. In Proceedings of the 1st International Conference on Design Science in Information Systems and Technology (DESRIST 2006), (pp. 1–18). Claremont, California, USA.
Venable, J., Pries-Heje, J., & Baskerville, R. (2016). FEDS: A Framework for Evaluation in Design Science Research. European Journal of Information Systems, 25(1), 77–89. https://doi.org/10.1057/ejis.2014.36
Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual Customer Service Agents: Using Social Presence and Personalization to Shape Online Service Encounters. Journal of Computer-Mediated Communication, 19(3), 529–545. https://doi.org/10.1111/jcc4.12066
Vom Brocke, J., Winter, R., Hevner, A., & Maedche, A. (2020). Accumulation and Evolution of Design Knowledge in Design Science Research: A Journey Through Time and Space. Journal of the Association of Information Systems, 21(3), 520–544. https://doi.org/10.17705/1jais.00611
Vreede, G.‑J. de, Briggs, R. O., & Vreede, T. de (2021). Exploring a Convergence Technique on Ideation Artifacts in Crowdsourcing. Information Systems Frontiers. Advance online publication. https://doi.org/10.1007/s10796-021-10120-0
Vreede, T. de, Nguyen, C., Vreede, G.‑J. de, Boughzala, I., Oh, O., & Reiter-Palmon, R. (2013). A Theoretical Model of User Engagement in Crowdsourcing. In D. Hutchison, T. Kanade, J. Kittler, J. M. Kleinberg, F. Mattern, J. C. Mitchell, M. Naor, O. Nierstrasz, C. Pandu Rangan, B. Steffen, M. Sudan, D. Terzopoulos, D. Tygar, M. Y. Vardi, G. Weikum, P. Antunes, M. A. Gerosa, A. Sylvester, J. Vassileva, & G.-J. de Vreede (Eds.), Lecture Notes in Computer Science. Collaboration and Technology (Vol. 8224, pp. 94–109). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-41347-6_8
Wang, H., Rose, C., Cui, Y., Chang, C., Huang, C., & Li, A. (2007). Thinking Hard Together: The Long and Short of Collaborative Idea Generation in Scientific Inquiry. In C. A. Chinn, G. Erkens, & S. Puntambekar (Eds.), The Computer Supported Collaborative Learning (CSCL) Conference 2007 (Vol. 8, Part 2, pp. 753–762). New Brunswick, NJ, USA: International Society of the Learning Sciences.
Webster, J., & Ho, H. (1997). Audience Engagement in Multimedia Presentations. ACM SIGMIS Database: The DATABASE for Advances in Information Systems, 28(2), 63–77.
Webster, J., & Watson, R. T. (2002). Analyzing the Past to Prepare for the Future: Writing a Literature Review. MIS Quarterly, 26(2), xiii–xxiii. https://doi.org/10.2307/4132319
Weizenbaum, J. (1966). ELIZA - A Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
Zamora, J. (2017). I'm sorry, dave, i'm afraid i can't do that: Chatbot perception and expectations. In Proceedings of the 5th International Conference on Human Agent Interaction (pp. 253–260). https://doi.org/10.1145/3125739.3125766.
Open Access funding enabled and organized by Projekt DEAL.
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Questionnaire Items and Sources.
Note: The questionnaire consisted of the following statements, which were translated into German before administration.
Prior experience, one item, own formulation
Have you ever generated and submitted an idea for an external company or organization, i.e., that was not your own or for which you worked at the time? (Yes/No/I am not sure because…/No answer)
Engagement, six items, adapted from Webster and Ho (1997)
This interface keeps me totally absorbed in the idea generation
This interface holds my attention
This interface excites my curiosity
This interface arouses my imagination
This interface is fun
This interface is intrinsically interesting
Social Presence, five items, adapted from Gefen and Straub (2003)
There is a sense of human contact in the interface
There is a sense of personalness in the interface
There is a sense of sociability in the interface
There is a sense of human warmth in the interface
There is a sense of human sensitivity in the interface
Poser, M., Küstermann, G.C., Tavanapour, N. et al. Design and Evaluation of a Conversational Agent for Facilitating Idea Generation in Organizational Innovation Processes. Inf Syst Front 24, 771–796 (2022). https://doi.org/10.1007/s10796-022-10265-6
- Conversational Agent
- Human-AI Interaction
- Idea Generation
- Open Innovation