The term ‘crowdsourcing’ was first used by the American journalist Jeff Howe in an article entitled ‘The Rise of Crowdsourcing’, published in the technology magazine Wired (Howe 2006). It is a neologism formed from the words ‘crowd’ and ‘outsourcing’, and Howe describes it as follows:
The technological advances in everything from product design software to digital video cameras are breaking down the cost barriers that once separated amateurs from professionals. Hobbyists, part-timers, and dabblers suddenly have a market for their efforts, as smart companies in industries as disparate as pharmaceuticals and television discover ways to tap the latent talent of the crowd. The labor isn’t always free, but it costs a lot less than paying traditional employees. It’s not outsourcing; it’s crowdsourcing. (Howe 2006)
So, crowdsourcing is a form of participatory online activity involving an individual, an institution, a charity or a company—an undefined group of individuals—through a flexible open call to perform a task voluntarily or in return for some monetary benefit (Geiger et al. 2012).
Brabham builds on Howe’s work and examines crowdsourcing from the company’s perspective:
A company posts a problem online, a large number of individuals offer solutions to the problem, the winning ideas are awarded some form of a bounty, and the company mass-produces the idea for its own gain. (Brabham 2008)
It becomes clear that Brabham (2008) considers the crowdsourcing process primarily as a problem-solving method, thus emphasizing the crowd’s swarm intelligence. The authors Lopez, Vukovic and Laredo, by contrast, limit the crowdsourcing principle to the company’s perspective and see crowdsourcing as an Internet-based production model that enables distributed, Web-based human collaboration (Lopez et al. 2010).
Taking the above definitions into account, three main components of crowdsourcing can be identified: (1) the requester or initiator (crowdsourcer), (2) the crowd or Internet users (crowdworkers) and (3) the Internet-based crowdsourcing platform. Crowdsourcing is distinguished from outsourcing in that it has a mediator (the Internet-based crowdsourcing platform) which enables the communication between an unknown group of people, the crowdworkers, and the requester (Leimeister et al. 2015). In the case of outsourcing, the requester knows exactly who the task executor is and assigns him/her a certain task in return for monetary payment. In contrast, in external crowdsourcing, the crowdsourcers outsource certain tasks to an Internet-based platform for processing. The undefined mass of people, the so-called crowdworkers, take over the processing of the outsourced tasks voluntarily or in return for a monetary benefit (Leimeister and Zogaj 2013; Hirth et al. 2012). The entire process, as well as the interaction between crowdsourcers and crowdworkers, takes place on Internet-based crowdsourcing platforms (Blohm et al. 2014; Hoßfeld et al. 2012). It follows that if the crowdworkers are a defined group of people such as employees, stakeholders or members of an organization, the process is referred to as ‘internal crowdsourcing’ (Leimeister and Zogaj 2013).
In the light of the above definitions, crowdsourcing is defined as a mechanism of task sharing, in particular the outsourcing of tasks or orders by crowdsourcers to a wide crowd (crowdworkers) via an open call, to solve a particular problem as quickly and effectively as possible. On this basis, we divide crowdsourcing into two categories according to the crowdworkers involved: external and internal crowdsourcing. External crowdsourcing, as explained in chapter ‘Introduction to “Internal Crowdsourcing: Theoretical Foundations and Practical Applications”’, deals with an undefined, open, heterogeneous and usually unskilled crowd. The crowd in internal crowdsourcing, by contrast, is a closed group of people with certain skills, usually the employees of the company, defined by the requester (Benbya and Leidner 2018).
2.1 Internal Crowdsourcing
In recent years, the still relatively new strategy of crowdsourcing has become increasingly interesting for companies in various industries as a way to redesign and speed up innovation processes and to benefit from the untapped internal knowledge of employees in an organization (Howe 2006, 2009; Hammon and Hippner 2012; Blohm et al. 2014; Simula and Ahola 2014). This type of crowdsourcing is referred to as ‘internal crowdsourcing’ (IC), which Zuchowski et al. (2016) define as follows:
Internal crowdsourcing is an (a) IT-enabled (b) group activity based on an (c) open call for participation (d) in an enterprise.
By definition, certain work steps and tasks of a production or innovation process are outsourced using internal crowdsourcing via company-owned online platforms or intermediaries to a predefined group of employees who participate voluntarily in the completion of these crowd tasks online (Leimeister and Zogaj 2013). These are then used to produce innovative, marketable products and services, thus contributing towards increasing the company’s efficiency and profit.
Internal crowdsourcing is still a poorly researched practical phenomenon. Although corporate crowdsourcing is a lucrative process from a corporate perspective, there is an imbalance between the burden and the benefits of crowd activity from the employee’s point of view. For this reason, an incentive system should be developed so that employees are motivated and can see benefits in the process as a whole. To ensure that crowdsourcing does not lead to an extra workload for employees, the labour law framework for internal crowdsourcing should also be defined. Moreover, there are no standards or standardized procedures for task design, task decomposition, task typology, technical requirements for an internal crowdsourcing platform or the measurement of the quality of results in internal crowdsourcing. For this purpose, methods that can be used to measure the quality of a creative process also need to be explored. Finally, an ideal process flow (workflow) is needed that takes all aspects of internal crowdsourcing into account in an optimal order, with associated roles and resources.
2.2 Employee Motivation
In the case of internal crowdsourcing, the incentive system is more diversified, as employees are generally financially secure due to their employment in the company. Fundamentally, there are intrinsic and extrinsic motivations for contributing to a crowdsourcing task from the employee perspective. Intrinsic motivation arises when carrying out an action is rewarding in itself for the employee, for example, because it brings satisfaction, while extrinsic motivation comes from the expectation of a reward from the outside world as a consequence of an action (Meffert et al. 2018).
One of the intrinsic motivations for employees is their enjoyment of the activity, for example, having fun while testing internal software, contributing to the ideation of a new product or being intrigued by the competitive nature of crowdsourcing projects (Leimeister and Zogaj 2013). The prospect of helping to shape products in line with their own wishes and ideas and of influencing their development can also be crucial from the employee’s perspective (Sixt 2014). This motivation is particularly evident in crowdsolving and crowdcreation, where the incentive lies not in the material reward but in the positive feeling that arises from being involved in the task. Another intrinsic motivation is social exchange within the crowd, where the desire to exchange ideas and interact with like-minded people serves as an incentive (Leimeister and Zogaj 2013). Brabham (2013) also mentions the following as essential motivations for participating in crowdsourcing initiatives: ‘to network with other creative professionals’ and ‘to socialize and make friends’. A further intrinsic motivation for participating in internal crowdsourcing is the opportunity to learn within the crowdsourcing engagement and thus enhance personal skills, competencies and experiences, as creative skills can be improved by tackling complex tasks and gaining experience.
Extrinsic motivation comes from the outside world; the desire for appreciation by other people can serve as an essential extrinsic motivation in internal crowdsourcing (Leimeister and Zogaj 2013). For employees to feel appreciated by others, e.g. managers and colleagues, crowdsourcing contributions should be visible to other participants, as in competitions where the winner is chosen by the community itself. Participants hoping for recognition from the crowd (their colleagues) and the company (their employer) are usually motivated to generate high-quality contributions to gain prestige in the company (Franke and Klausberger 2010). Another extrinsic motive is the desire for self-expression or self-marketing. In addition to showing their own contributions to other members of the crowd, self-marketing through crowdsourcing may also improve an employee’s career opportunities by opening up the possibility that the employer becomes aware of an employee’s high-quality contributions (Leimeister and Zogaj 2013). Monetary rewards can be mentioned as a final extrinsic motive. These can take the form of monetary compensation, benefits in kind, discounts or certain premiums (Franke and Klausberger 2010).
2.3 Labour Law Framework
As this new way of working sets in motion a fundamental shift in work organization and in the division of labour, minimum standards for fair work in the crowd and a fair digital work environment should be defined. To avoid an additional workload, crowd concepts should be designed in a way that takes into account labour policy/legal requirements such as works constitution law, occupational health and safety, trade association regulations, collective bargaining agreements and data protection regulations. Therefore, it is recommended that an official company agreement that regulates all these aspects be drawn up. One of the latest group-wide company agreements for internal crowdsourcing in Germany, ‘Lebende Konzernbetriebsvereinbarung als soziale Innovation’ (the living group works council agreement as social innovation), establishes the rules for a fair crowdsourcing environment for employees (Otte and Schröter 2018). According to Otte and Schröter (2018), the following rules can be applied to any internal crowdsourcing environment:
- The participation of employees in the internal crowdsourcing is voluntary, and employees face no disadvantages as a result of participation or non-participation.
- The participation of employees in the internal crowdsourcing takes place during working hours. The time spent working on the platform is working time.
- Accessibility to internal crowdsourcing is provided at work via mobile phones or laptops and guaranteed by the company in employees’ home offices as well. No additional IT access points or IT workstations will be provided. Employees without IT access can place IC initiatives directly through the crowd manager.
- The participation of employees in group-public or employer-public point or ranking systems is always voluntary. Participants have the right to use their real name or a pseudonym.
- The company is committed to handling data security and data protection in a highly sensitive manner that goes beyond the regulations of the national data protection law. Any and all platform data will be made available for use only in an aggregated and anonymized form to the corporate bodies as well as department heads and supervisors and, upon request, to the research community. A person-related breakdown of data does not take place. The aim is to strengthen employees’ right to determine themselves what happens to their information and to reinforce their confidence in the company and its careful corporate culture.
- In order to protect employees’ privacy, there will be no online tracking or controlling of employee performance or behaviour.
- Innovation ideas by employees that are brought into an internal crowdsourcing platform are legally transferred to the company. The idea providers are entitled to appropriate remuneration.
2.4 Tasks in Internal Crowdsourcing
Tasks in internal crowdsourcing usually take the form of crowdsolving and crowdcreation, as described in chapter ‘Systematization Approach for the Development and Description of an Internal Crowdsourcing System’. In larger and more segmented enterprises, it is difficult to match people to the right problem. With the help of internal crowdsourcing, however, the unutilized or unnoticed knowledge of employees can be used by the company internally to generate faster innovation processes, improve existing products or solve current problems in the organization (Lopez et al. 2010; Benbya and Van Alstyne 2010; Gaspoz 2011). In this way, a social innovation community can be built, and the quality of social capital in an enterprise can be improved (Bharati et al. 2015).
The content of the internal crowdsourcing tasks is determined by the needs of the company and is therefore highly varied. For example, IBM established a platform called ‘InnovationJam’ to drive innovation and collaboration by providing a platform to discuss innovative ideas (Bjelland and Wood 2008). In the InnovationJam platform, the idea creators must first consider concrete aspects such as costs, quality and deadlines, simulate planned projects and then reach the declared goal. After that, every employee gets a virtual wallet and uses their budget to invest in their colleagues’ ideas and vote for them. If the virtual test run is successful, the company decides to follow through with a real implementation.
Similarly, Allianz UK uses an internal crowdsourcing platform to generate as many different ideas as possible while also encouraging the submission of ‘smaller’ ideas, as these are more common in the financial services sector (Benbya and Leidner 2016). Idea generation takes the form of idea campaigns, and the idea campaigns are geared to the needs of the respective area to increase employee participation. Qualitative feedback mechanisms, in particular, ensure that the ideas submitted are treated with respect and provide for a close exchange between the employees who are involved in the process and those responsible for the process of developing the ideas.
Telekom AG has been conducting so-called forecast markets for product innovation since 2012 (Zuchowski et al. 2016). Their platform is accessible to all employees, and, instead of an individual employee, a department places various topics and problems on the platform. Tasks placed on Telekom’s platform, such as sales channels, target groups, benefits analysis, product design or market potential, would usually be outsourced to external market research agencies. However, because each question is traded for only a short time, that is, within 5 working days, results from the forecast markets are available faster and generally exceed the quality of conventional market analyses. The individual contributions by the employees are aggregated into an overall contribution, whereby the basis for the forecast is calculated as the median of all individual forecasts.
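As an illustration of the aggregation rule described above, the following sketch combines individual employee forecasts by taking their median. The numbers and function name are hypothetical; only Python’s standard library is used.

```python
from statistics import median

def aggregate_forecasts(forecasts):
    """Combine individual employee forecasts into one overall forecast.

    The median is used as the aggregation rule, which makes the
    result robust against individual outlier estimates.
    """
    if not forecasts:
        raise ValueError("at least one forecast is required")
    return median(forecasts)

# Hypothetical market-potential estimates (in thousand units) from
# five employees; the outlier of 400 does not distort the result.
estimates = [120, 95, 150, 110, 400]
print(aggregate_forecasts(estimates))  # -> 120
```

Compared with the arithmetic mean of these estimates (175), the median of 120 stays close to the majority view, which is one reason forecast aggregation often uses median-like rules.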
SAP uses internal crowdsourcing to make a social impact and aims to use the ideas of SAP employees to improve the lives of one billion people on the planet by 2020 (Durward et al. 2019). In doing so, SAP technologies should be used to promote sustainable ideas of social significance that are economically feasible at the same time. For this purpose, SAP APJ invests 1 million € annually to finance social start-ups within SAP. The orientation of the foundations focuses mainly on the business areas of health and disaster management.
2.5 Crowdsourcing Forms
Depending on the task characteristics and skill demands, crowdsourcing can generally be divided into two categories: microtask and macrotask crowdsourcing.
Microtask Crowdsourcing
In the case of microtask crowdsourcing, crowdworkers collectively work on a large number of tasks so that the traditional human resource requirements of requesters can be reduced. To this end, the online crowdsourcing platform splits the requester’s large tasks into subtasks that are as small as possible so as to ensure quick and easy processing. These subtasks are referred to as ‘microtasks’ or ‘micro-jobs’ (Difallah et al. 2015). In the end, all subtasks are brought together again and sent back to the requester. Amazon Mechanical Turk is one of the best-known platforms for this type of activity.
This kind of crowd task is used by requesters to handle less complex, often repetitive, tasks such as image and video tagging or the transcription, translation or digitization of documents, which are easy for humans to process but cannot be processed easily by machines (Geiger et al. 2011). However, the task description should contain all the necessary information about the task execution because crowdworkers only see a small portion of a bigger task whose overall context they do not know, and they usually do not have an opportunity to contact the requester for further information about the task (Deng et al. 2016; Felstiner 2011; Leimeister et al. 2016). Therefore, it is extremely important to formulate the task as concretely and specifically as possible to obtain high-quality solutions (Deng et al. 2016).
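To make the decomposition and merging steps concrete, here is a minimal, hypothetical sketch (the function names are illustrative and not taken from any platform’s API): a large labelling job is split into microtasks, and redundant answers from several crowdworkers for one microtask are merged by a simple majority vote, a common quality-control mechanism on microtask platforms.

```python
from collections import Counter

def split_into_microtasks(items, chunk_size):
    """Split a large job (e.g. a list of images to tag) into small,
    independently processable chunks (microtasks)."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def majority_answer(answers):
    """Merge redundant crowd answers for one microtask by majority vote."""
    return Counter(answers).most_common(1)[0][0]

# A job of five images is split into microtasks of two images each.
tasks = split_into_microtasks(["img1", "img2", "img3", "img4", "img5"], chunk_size=2)
print(tasks)  # -> [['img1', 'img2'], ['img3', 'img4'], ['img5']]

# Three workers tag the same image; majority vote resolves the disagreement.
print(majority_answer(["cat", "cat", "dog"]))  # -> cat
```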
Macrotask Crowdsourcing
In macrotask crowdsourcing, the task is divided into units that are quite large and therefore still relatively complex and require preprocessing. It is an interactive form of service delivery that is organized collaboratively or competitively and involves a large number of extrinsically or intrinsically motivated actors using modern ICT systems. Among other things, this variant of crowdsourcing uses the principle of the ‘wisdom of crowds’, which James Surowiecki described in his book (Surowiecki 2005). It states that the solutions achieved and the decisions made are often better when information is accumulated in a heterogeneous group of individuals than when they come from a single expert. Macrotasks are difficult to decompose, and their solutions require a great deal of sharing of contextual information or depend on intermediary results. Therefore, creative tasks such as design contests or coding challenges are executed in the form of macrotasks (Niu et al. 2019).
In contrast to microtask crowdsourcing, macrotask crowdsourcing promotes collaborative working among crowdworkers because of its complex subtasks. Different abilities and skills meet in the crowd and combine productively into one or more end solutions. In this way, the capacity for innovation, as well as the generation and preservation of innovations, is promoted.
2.6 Process Management
Crowdsourcer
Institutions, e.g. public authorities or universities, non-profit organizations or an individual may act as crowdsourcers (Leimeister and Zogaj 2013). Typically, an organization passes one task on to more than one external crowdworker and uses the results from their work to complete the crowdsourcing process. The ‘crowdsourcer’ (requester, client) usually designs the process by defining the task typology, the required crowdworker profile, the budget and time constraints and by executing the task decomposition. After the task is completed, the crowdsourcer analyses the results based on the predefined evaluation criteria (Difallah et al. 2015) (Fig. 1).
Crowdworker
Generally, the crowdworkers consist of an undefined large group of people who have participated in the completion of a given task voluntarily or in return for monetary compensation (Difallah et al. 2015). The optimal number of crowdworkers depends on the type of crowdsourcing task, the task specifications and the information and skills needed to solve the problem (Leimeister and Zogaj 2013). The crowdworkers’ expertise and abilities can determine the quality of the results; therefore the credentials and experience of crowdworkers might play an important role when the requester is selecting the right crowd (Allahbakhsh et al. 2013).
Platform
Crowdsourcing platforms provide the medium of interaction and thus the (only) point of contact between the requesters and the crowdworkers. These platforms control all processes starting with the registration of a crowdworker on the platform, then continuing with the placing of the crowdsourcing task defined by the requester on the platform, assigning of the task to the given crowdworker profile, providing support with technical issues and collecting the answers from the crowdworkers (Difallah et al. 2015). The crowdsourcing platforms can be divided into the following types:
- Microwork platforms: These platforms are mostly designed for microtask crowdsourcing. These tasks are of low complexity and high granularity. The best-known platforms for microtasks today are Amazon Mechanical Turk, Clickworker and CrowdFlower.
- Development, test and marketing platforms: Tasks are (very) complex and have low granularity. The best-known platforms for this are rapidusertests and Testbirds.
- Innovation platforms: Tasks may have both low and high complexity. The best-known platform for this is InnoCentive.
- Internal crowdsourcing platforms: Tasks are business-driven and may have low or high complexity. There is no generally known platform for internal crowdsourcing; each company uses its own platform or one of the platforms of the external providers.
As explained in chapter ‘Systematization Approach for the Development and Description of an Internal Crowdsourcing System’, the ICU Process and Role Model describes the process of internal crowdsourcing, in particular the individual steps that are necessary to solve an explicit task. To exploit the full potential of the employees, the requester, usually a department in a company or an individual employee, should design the following steps: (1) impetus; (2) decision; (3) conceptualization; (4) execution; (5) assessment; (6) exploitation and (7) feedback (see chapter ‘Systematization Approach for the Development and Description of an Internal Crowdsourcing System’).
Stage 3 in particular plays an important role in the success of internal crowdsourcing and should be defined before the task execution. The definition, decomposition, integration and allocation of tasks are crucial in this process because the task should be explained unambiguously, in a focused manner and precisely so that employees can understand it easily by reading it in an online environment and so that they can complete the task in a short timeframe (Bailey and Horvitz 2010). Additionally, the bigger picture that lies behind the task should be communicated beforehand so that the employees can understand the context better and feel appreciated by contributing to something bigger (Simula and Ahola 2014). After the task is completed, the results evaluation and the quality control can be performed either by employees in the form of a crowdrating or by the requester (the department or an individual employee) or by an expert in the given domain (Thuan et al. 2017). Another important aspect is that the final results and the next steps with the achieved results should be communicated clearly to the employees.
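The seven-step sequence above can be sketched as a simple ordered workflow. The following fragment is illustrative only (the class and function names are our own, not part of the ICU model); it encodes the stages and their order so that the progression from one stage to the next can be checked programmatically.

```python
from enum import Enum

class ICUStage(Enum):
    """The seven stages of the ICU Process and Role Model, in order."""
    IMPETUS = 1
    DECISION = 2
    CONCEPTUALIZATION = 3
    EXECUTION = 4
    ASSESSMENT = 5
    EXPLOITATION = 6
    FEEDBACK = 7

def next_stage(stage):
    """Return the stage that follows `stage`, or None after FEEDBACK."""
    stages = list(ICUStage)  # members in definition order
    idx = stages.index(stage)
    return stages[idx + 1] if idx + 1 < len(stages) else None

# Conceptualization (stage 3) is followed by execution.
print(next_stage(ICUStage.CONCEPTUALIZATION))  # -> ICUStage.EXECUTION
```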
2.7 Role of IT in Internal Crowdsourcing
The internal crowdsourcing platforms can be divided into two categories: generic social IT platforms (i.e. multipurpose tools such as social networking sites or wikis) and specific crowdsourcing IT platforms (i.e. tools developed specifically for crowdsourcing, possibly even for a particular purpose in a particular enterprise) (Zuchowski et al. 2016).
Generic Social IT Platforms
In the case of generic social IT platforms, the company uses an internal social platform such as wikis, the intranet, Yammer or Slack as a tool for internal crowdsourcing (Stocker et al. 2012; Rohrbeck et al. 2015). Since these platforms are not established solely for crowdsourcing, it is easier to reach all of the employees in the company by integrating internal crowdsourcing into existing IT infrastructures. Because the employees are already using these platforms and know how to operate them, the entry barriers are low.
However, it can be challenging if specific IT features are needed to perform the internal crowdsourcing task. Specific IT requirements cannot be implemented in most cases, and the company needs to work with the given IT structure (Rohrbeck et al. 2015). Different user interfaces and new question and data entry types that are not offered by the generic platform cannot be implemented. Also, other postings regarding the company may distract the attention of the employees; e.g. the most recent internal crowdsourcing task might be pushed to the bottom of the webpage by other postings, which leads to lower participation and attention. Furthermore, it is harder to reach a specific group of employees if a generic social platform is used.
In addition, security guidelines and regulations (e.g. privacy, barrier-free access) might be a problem in most cases (Rohrbeck et al. 2015). Large companies especially have strict regulations regarding the security and use of companywide IT platforms that can be accessed by a large number of employees. The data shared on the platform might include sensitive data regarding both the company strategy as well as personal information of employees the sharing of which with third-party organizations is not permitted.
Specific Crowdsourcing IT Platforms
Specific IT platforms enable repeatable and well-defined internal crowdsourcing processes that have the same characteristics (Geiger et al. 2011). These platforms can typically be adjusted to the enterprise’s very particular needs, and new IT features can be developed specifically for the particular problem category crowdsourcing is addressing (intelligence, design, decision) (Zuchowski et al. 2016). Having an expert platform that is specifically designed for the company’s own needs might enhance the motivation of employees to participate, since the user experience on such platforms is usually better than on generic social IT platforms for internal crowdsourcing. Moreover, the security and privacy issues can be addressed and designed in line with the company’s requirements. However, maintaining such a platform, developing new features if needed and providing technical support 24/7 are very time-consuming and costly for an enterprise.
Additionally, the entrance barriers for employees are usually higher than for a generic platform because of the aversion to ‘yet another platform’. Creating another account and using another tool might be seen as a burden, especially in big corporate organizations, since the employees are already overloaded with IT tools (Rohrbeck et al. 2015). Another challenge that organizations might face is that the organization itself is constantly changing. With this change, the IT requirements for the internal crowdsourcing platform also change, while an IT tool cannot be developed as fast as the change requires. Therefore, a flexible and configurable IT implementation is an important aspect of specific crowdsourcing IT platforms.
For our study, we chose to implement a specific internal crowdsourcing platform because it enhances the effectiveness of internal crowdsourcing by avoiding security and privacy issues and offers the opportunity to adjust the platform to the specific needs of the application partner.