1 Definitions of Crowdsourcing

The term crowdsourcing can be traced back to Jeff Howe’s article “The Rise of Crowdsourcing” in Wired Magazine in 2006 (Howe 2006b). Howe defines crowdsourcing as “the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call” (Howe 2006a). The word crowdsourcing is a neologism combining the terms “outsourcing” and “crowd” (Hirth et al. 2011). While outsourcing describes the contracting out of internal activities through bilateral relationships (Grossman and Helpman 2005), crowdsourcing refers to outsourcing to an undefined group of individuals called the crowd (Leimeister 2012). Since Howe’s publication, the topic has garnered tremendous interest in both business and science. To create a common understanding of crowdsourcing for the remainder of this book, this first chapter provides a definition of crowdsourcing.

The academic literature contains a wide variety of attempts to define crowdsourcing. While most of these definitions share many common features, some address different phenomena to some extent. For example, Bücheler et al. (2010) and Huberman et al. (2009) classify Wikipedia and YouTube as examples of crowdsourcing, while other authors explicitly exclude these platforms (Kleemann et al. 2008; Brabham 2012). Perhaps the most notable attempt to resolve the problem of arriving at a generally accepted definition was undertaken by Estellés-Arolas and González-Ladrón-de-Guevara (2012). By conducting a literature review, they identified 209 articles containing 40 different definitions of crowdsourcing. From these they derived an exhaustive and consistent definition:

Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate bringing their work, money, knowledge and/or experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and utilize to their advantage what the user has brought to the venture, whose form will depend on the type of activity undertaken. (Estellés-Arolas and González-Ladrón-de-Guevara 2012, p. 197).

In a recent article, Kietzmann (2017) argues that several technological developments and their rapid diffusion over the last few years have made knowledge accessible much more quickly, easily, and efficiently. From these changes and the progress of research in the field of crowdsourcing, he derives implications that call for a broader definition of crowdsourcing. However, most of his points are already covered by the definition of Estellés-Arolas and González-Ladrón-de-Guevara (2012) and, therefore, need not be discussed again. The essential distinction is that Kietzmann (2017) assumes that the task does not necessarily have to be performed by humans; a combination of humans and machines can also serve as a crowd.

Crowdsourcing is based on the principle of the wisdom of the crowds (Surowiecki 2004). This principle, in turn, is based on the idea of collective intelligence (Lévy 1997), which describes the intelligence of a group of people that emerges from the interaction of its members. Surowiecki (2004) argues that, under certain conditions, a group of individuals can produce better decisions and results than single individuals, even if the latter are in principle better qualified to carry out the respective task. Crowdsourcing can therefore be used as a mechanism to access the wisdom of the crowd in order to solve a given problem.
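The statistical intuition behind the wisdom of the crowds can be illustrated with a minimal simulation (the true value, noise level, and crowd size below are arbitrary assumptions, not taken from the literature): averaging many independent, noisy estimates yields a smaller error than a typical individual estimate.

```python
import random

random.seed(42)                 # for reproducibility

TRUE_VALUE = 100.0              # hypothetical quantity the crowd estimates
CROWD_SIZE = 1_000

# Each member's guess is the true value plus independent, unbiased noise.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(CROWD_SIZE)]

# The crowd's collective estimate is the simple average of all guesses.
crowd_error = abs(sum(guesses) / len(guesses) - TRUE_VALUE)

# Average error a single individual would make on their own.
individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd error:      {crowd_error:.2f}")
print(f"individual error: {individual_error:.2f}")
```

With independent, unbiased errors, the crowd's error shrinks roughly with the square root of the crowd size, which mirrors the conditions Surowiecki (2004) attaches to the principle: diversity and independence of opinions.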

Crowdsourcing platforms constitute so-called information systems. Alter (2008, p. 451) defines these information systems as a “system in which human participants and/or machines perform work (processes and activities) using information, technology, and other resources to produce informational products and/or services for internal or external customers.”

Based on this definition, Geiger and Schader (2014, p. 4) define crowdsourcing-specific information systems as “socio-technical systems that provide informational products and services by harnessing the potential of large groups of people via the Web.”

The parties involved in crowdsourcing can in principle be divided into two roles: the crowdsourcer or content owner and the crowd. The content owner is the principal who searches for a solution to a given problem, while the crowd consists of the agents solving it (Leimeister et al. 2015). The crowdsourcing process itself takes place on an IT-enabled crowdsourcing platform. This allows content owners to create and share tasks and allows the crowd to solve them collaboratively or individually and to submit solutions. If an intermediary operates this platform, a third role is created: that of the crowdsourcing intermediary (Leimeister et al. 2015).

When taking an intraorganizational perspective, additional roles have to be considered when describing the information system of internal crowdsourcing. Ulbrich and Wedel (see chapter “Systematization Approach for the Development and Description of an Internal Crowdsourcing System” of this book) build a complex model describing the primary, secondary, and tertiary roles necessary for a successful implementation of internal crowdsourcing. In total, they describe eight different roles: (1) crowd master, (2) campaign owner, (3) crowd technology master, (4) content owner, (5) secondary counterpart, (6) crowd, (7) executive board, and (8) employee union representation. For a more detailed description of the role model for internal crowdsourcing and the corresponding descriptions of the roles, see chapter “Systematization Approach for the Development and Description of an Internal Crowdsourcing System” in this book.

2 Crowdsourcing Typologies

Over the years different typologies for crowdsourcing have emerged that are intended to help categorize different types of crowdsourcing. Afuah and Tucci (2012), for example, differentiate between tournament-based and collaboration-based crowdsourcing, depending on how results are generated within the crowd. In tournament-based crowdsourcing, each participant submits an independently developed solution, and the content owner ultimately selects the best solution. In collaboration-based crowdsourcing, on the other hand, a joint solution is developed by the entire crowd. Similarly, Boudreau and Lakhani (2013) classify crowdsourcing according to whether the participants work independently or collaboratively on the solution of the task.

Leimeister (2012) additionally distinguishes between crowdfunding, crowdvoting, and crowdcreation, according to the type of task the crowd performs. In crowdfunding, the participants from the crowd are used to achieve a particular financing goal. In crowdvoting, each participant from the crowd provides a ranking of options in the context of a specific question. This can be, for example, the evaluation of a product or a vote on a new product name. Within the scope of crowdcreation, the crowd participants have to invest significantly more work, as this involves the generation of ideas, designs, prototypes, or entirely new business models. The work of the crowd is therefore characterized by considerably higher effort and production costs.

Geiger and Schader (2014) differentiate crowdsourcing initiatives using two dimensions—the homogeneity and aggregation of contributions from the crowd. Accordingly, contributions can be very similar (homogeneous) or very individual (heterogeneous) in their characteristics. Homogeneous contributions are mostly the result of clearly structured and standardized tasks, while heterogeneous contributions are most often a consequence of unstructured and open tasks (Blohm et al. 2017). The aggregation of the contributions is based on whether the added value of crowdsourcing can be derived selectively from individual contributions or integratively from the entirety of contributions (Blohm et al. 2017). This classification of integrative and selective crowdsourcing can also be found in Schenk and Guittard (2011). From the two described dimensions, Geiger and Schader (2014) derive four types of crowdsourcing information systems (see Fig. 1):

  1. Crowd rating: This type of crowdsourcing is based on many homogeneous contributions whose value is not derived from the individual contributions but from their aggregate (e.g., TripAdvisor ratings).

  2. Crowd creation: The value of this crowdsourcing approach results from the aggregation of many heterogeneous contributions. The contributions are complementary and achieve a comprehensive body of work when aggregated (e.g., Wikipedia).

  3. Crowd processing: This type of crowdsourcing is based on a large number of contributions that exhibit a high degree of homogeneity (e.g., reCAPTCHA).

  4. Crowd solving: In this case, a heterogeneous set of contributions is submitted, each of which represents an individual and different solution to a given problem. The solutions can be complements or substitutes.

Fig. 1

Typology of crowdsourcing platforms: a 2 × 2 matrix crossing the homogeneity of contributions (homogeneous vs. heterogeneous) with the way value is derived from them (non-emergent vs. emergent), yielding crowd rating, crowd processing, crowd creation, and crowd solving (reproduced from Geiger and Schader 2014)

Geiger and Schader (2014) describe these types of crowdsourcing as archetypes and state that mixed forms are mostly observed in real-life settings.
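The two dimensions and the four archetypes they span can be expressed as a small lookup; the boolean parameters and the function name below are illustrative conveniences, not part of Geiger and Schader's (2014) formalism.

```python
def classify(homogeneous: bool, emergent: bool) -> str:
    """Map Geiger and Schader's (2014) two dimensions to one of the four
    crowdsourcing archetypes.

    homogeneous: contributions are similar (True) or individual (False)
    emergent:    value derives from the aggregate of contributions (True)
                 or from individual contributions (False)
    """
    if homogeneous:
        return "crowd rating" if emergent else "crowd processing"
    return "crowd creation" if emergent else "crowd solving"

# Examples from the text:
print(classify(homogeneous=True,  emergent=True))    # e.g., TripAdvisor ratings
print(classify(homogeneous=True,  emergent=False))   # e.g., reCAPTCHA
print(classify(homogeneous=False, emergent=True))    # e.g., Wikipedia
print(classify(homogeneous=False, emergent=False))   # e.g., innovation contests
```

In practice, as noted above, real initiatives are mixed forms, so such a crisp classification applies only to the archetypes themselves.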

Similarly, Prpić et al. (2015) categorize different types of crowdsourcing. Like Geiger and Schader (2014), they identify two dimensions from which four different types of crowdsourcing are derived. The first dimension is based on the nature of the contributions, while the second is based on the way in which the contributions are used to derive the solution. The second dimension is quite similar to the aggregation dimension of Geiger and Schader (2014). The first, however, differs from the classification based on the homogeneity of contributions: the focus lies instead on whether the crowd submissions are objective or subjective in nature. Objective contributions represent facts that can be researched and compiled by the crowd, while subjective contributions are, for example, opinions, beliefs, or assessments.

Finally, a distinction between internal and external crowdsourcing can be made based on the location of the crowd. In internal crowdsourcing, the company’s employees form the crowd and can submit solutions, while in external crowdsourcing, the crowd is formed by an undefined number of individuals outside the company (Leimeister et al. 2015). Crowdsourcing can be classified as a coordination model between market and hierarchy due to the possibility of assigning tasks both internally and externally (Leimeister 2012). An illustration of the roles and the location of the crowd can be found in Fig. 2. A more detailed description of internal crowdsourcing is provided in Sect. 4.

Fig. 2

Roles and location of the crowd: principal, intermediary, and agent in outsourcing as well as in internal and external crowdsourcing, with and without an intermediary (reproduced from Leimeister et al. 2015, based on Hoßfeld et al. 2012)

3 The Crowdsourcing Process

As with the definition and typology of crowdsourcing, various descriptions of the crowdsourcing process exist (Lopez et al. 2010; Pedersen et al. 2013; Zhu et al. 2016). However, Geiger and Schader (2014) argue that these are merely variations of a relatively generic process. An agent publishes a task via an open call to an undefined crowd, whose members then decide freely whether they want to engage with and contribute to this task. Afterward, the best solution is selected from all submissions. Pedersen et al. (2013, p. 581) summarize this generic process as follows:

A process is a set of actions undertaken by all actors in a crowdsourcing project to achieve a particular outcome or solve a particular problem. In this context, the process refers to the design of a step-by-step plan of action for solving a crowdsourcing problem.

With the help of a comprehensive literature search, Zuchowski et al. (2016) identify four elemental steps within the internal crowdsourcing process: (1) preparation, (2) execution, (3) evaluation/aggregation, and (4) resolution. The preparatory phase includes tasks such as the description of the actual assignment, the prerequisites, expectations, evaluation criteria, the selection criteria for the crowd, and ultimately the incentive structure. The act of crowdsourcing takes place during the execution phase, in which the task is published and the crowd submits its solution proposals. In the third step, the submissions from the crowd are evaluated and aggregated: either all solutions that meet a certain quality standard (integration) or only the best solution (selection) can be selected (Geiger et al. 2011). During the final phase, the chosen solution is implemented, and the submitter of the solution is rewarded. Zhu et al. (2016) as well as Muhdi et al. (2011) add a fifth step, the deliberation phase, which precedes the four steps introduced by Zuchowski et al. (2016). Although this phase overlaps conceptually with the preparation phase, it captures another critical aspect: during the deliberation phase, the agent decides whether crowdsourcing can be considered at all as a solution strategy for the particular problem at hand (Fig. 3).

Fig. 3

Process phases and their design criteria: deliberation (defining expected outcomes), preparation (defining task and crowd), execution, assessment, and implementation (reproduced from Zhu et al. 2016)
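The five-phase process and the two aggregation modes of the evaluation step can be sketched in a few lines of Python. The phase names follow Zhu et al. (2016) and Zuchowski et al. (2016); the submissions, quality scores, and threshold below are hypothetical examples.

```python
# Phases of the (internal) crowdsourcing process, following Zhu et al. (2016)
# and Zuchowski et al. (2016).
PHASES = ["deliberation", "preparation", "execution",
          "evaluation/aggregation", "resolution"]

def evaluate(submissions, score, threshold=None):
    """Evaluation/aggregation step (Geiger et al. 2011):
    selection keeps only the best submission, while integration keeps
    every submission that meets a quality threshold."""
    if threshold is None:
        return [max(submissions, key=score)]                   # selection
    return [s for s in submissions if score(s) >= threshold]   # integration

# Hypothetical submissions with quality scores assigned by the content owner.
ideas = {"idea A": 0.9, "idea B": 0.6, "idea C": 0.3}

print(evaluate(ideas, ideas.get))                  # selection
print(evaluate(ideas, ideas.get, threshold=0.5))   # integration
```

The choice between the two modes mirrors the selective vs. integrative distinction of Schenk and Guittard (2011): a tournament picks one winner, while an integrative initiative keeps every contribution of sufficient quality.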

But even this conceptualization might be seen as too broad, especially for the complex system of internal crowdsourcing. More recently, Ulbrich and Wedel (see chapter “Systematization Approach for the Development and Description of an Internal Crowdsourcing System” of this book) developed a more granular description of the internal crowdsourcing process. They differentiate between (1) impetus, (2) decision, (3) conceptualization, (4) execution, (5) assessment, (6) exploitation, and (7) feedback. In part, these process steps can be run in parallel or aggregated into fewer individual steps. For a more detailed discussion of the different phases, see chapter “Systematization Approach for the Development and Description of an Internal Crowdsourcing System” by Ulbrich and Wedel in this book.

4 Internal Crowdsourcing

Recently, one type of crowdsourcing in particular—internal crowdsourcing—has attracted considerable interest and initiated a first wave of research (Benbya and Leidner 2018; Smith et al. 2017; Zuchowski et al. 2016). Benbya and van Alstyne (2011) were among the first to highlight the potential of internal knowledge markets to improve the flow of information within companies and to find solutions to problems internally. Internal crowdsourcing can be particularly advantageous for large companies with many geographically dispersed employees with diverse backgrounds. Successful implementations of internal crowdsourcing have already been reported in several case studies, including world-renowned companies like Siemens, McKinsey & Co, Eli Lilly (Benbya and van Alstyne 2011), Allianz (Benbya and Leidner 2018), Deltares (Leung et al. 2014), Deutsche Telekom (Rohrbeck et al. 2015), IBM (Bjelland and Wood 2008), Microsoft (Bailey and Horvitz 2010), and NASA (Davis et al. 2015).

As internal crowdsourcing takes place within a company, resources in terms of the size of the crowd are naturally limited. Hence, the general concept of crowdsourcing is now analyzed within the intraorganizational context. It is no longer about tapping into the knowledge of an undefined crowd but about tapping into the knowledge of a defined crowd of people: the employees of the company. These employees often possess comprehensive knowledge, especially implicit knowledge about customers, products, and services (Henttonen et al. 2017). Thus, internal crowdsourcing opens up the innovation process by enabling the development of ideas and innovations not only by employees of the research and development department but by all employees of the company (Simula and Ahola 2014).

One of the first descriptions of internal crowdsourcing can be found in Villarroel and Reis (2010, p. 2), who define internal crowdsourcing as a “distributed organizational model used by the firm to extend problem-solving to a large and diverse pool of self-selected contributors beyond the formal internal boundaries of a multi-business firm […].”

This definition clearly states that internal crowdsourcing helps to solve problems by overcoming intraorganizational boundaries. However, it does not include further information on how the problems are broadcast, how individuals in the crowd interact, and how problems are solved. An even broader definition can be found in Simula and Vuori (2012), who describe internal crowdsourcing as the introduction of open innovation principles at the intraorganizational level. The most comprehensive definition of internal crowdsourcing to date is provided by Zuchowski et al. (2016). Based on a structured literature review, they propose to define internal crowdsourcing as an “IT-enabled group activity based on an open call for participation in an enterprise” (Zuchowski et al. 2016, p. 168).

Following Zuchowski et al. (2016), internal crowdsourcing is therefore first and foremost a phenomenon that is fundamentally made possible by (social) information and telecommunications technologies. Secondly, internal crowdsourcing is a group activity in which, as mentioned above, collaborative or competitive approaches are possible. Thirdly, internal crowdsourcing is based on an open call, which can be explicit (e.g., a dedicated call for submissions) or implicit (e.g., an open technology such as an Enterprise Social Network that permanently invites participation). In contrast to external crowdsourcing, however, the crowd addressed here is known: the employees of the focal firm. This definition will serve as a basis for the remainder of this book.

For a typology of internal crowdsourcing and an elaborate description of its process, the reader is referred to chapter “Systematization Approach for the Development and Description of an Internal Crowdsourcing System” of this book.

5 Conclusion

In summary, internal crowdsourcing holds the potential to develop new ideas and innovations, find effective solutions to problems, reduce costs, and shorten product development cycles (Brabham 2008; Simula and Vuori 2012; Vukovic 2009), and it can thus make a valuable contribution to existing innovation activities within a firm (Leung et al. 2014).

Compared to external crowdsourcing, internal crowdsourcing has advantages as well as disadvantages. With internal crowdsourcing, the parameters for idea competitions can be set more broadly. Employees can, for example, develop new business areas or contribute incremental innovations (Leung et al. 2014). This observation is again based on the implicit knowledge of the employees mentioned above, in particular about customers, products, and services (Henttonen et al. 2017). Malhotra et al. (2017) call this “local knowledge” and emphasize that the solutions of employees are often better oriented toward the requirements of customers and are feasible given the capabilities of the focal firm. This can facilitate a faster and better implementation of the proposed solutions. On the other hand, restriction to the internal crowd naturally implies a smaller and therefore more homogeneous pool of participants, which can reduce the likelihood of radical innovation (Malhotra et al. 2017).

An enterprise may also use internal idea competitions in order to promote unity and to encourage creativity and entrepreneurial skills among employees (Leung et al. 2014). Internal crowdsourcing enables employees to make their ideas and innovative solutions more visible and accessible. It also encourages employees by giving them the feeling that their ideas are valued and taken seriously by the company (Malhotra et al. 2017) and that anyone can submit and implement an idea (Rao 2016). These characteristics of internal crowdsourcing can ultimately lead to more committed employees (Rao 2016; Malhotra et al. 2017). The degree of support for innovative activities has a positive influence on the innovation behavior of employees (Scott and Bruce 1994), and companies with more committed employees exhibit higher productivity, higher quality of work, and higher revenues (Baldoni 2013). Moreover, Jette et al. (2015) show that the satisfaction and productivity of employees increase when they are entrusted with meaningful and creative tasks. While, in the case of external crowdsourcing, the agent must decide strategically how to handle intellectual property rights in order to capture the optimum benefit from the submitted solutions (Mazzola et al. 2018), this usually only plays a minor role in internal crowdsourcing (Simula and Vuori 2012). Finally, internal crowdsourcing is a good solution for problems for which secrecy and competitive pressures would render external crowdsourcing inappropriate (Zuchowski et al. 2016; Simula and Vuori 2012).