In analogy with open-source software (OSS), ‘open-source communities’ in general can be defined as peers producing content together on a voluntary basis, without direction from markets or managerial hierarchy, and posting their created content in a virtual commons accessible to all. This, of course, is the definition coined by Benkler (2006). Such movements develop content collectively, both ‘by the people’ and ‘for the people’. A new mode of production is born. It all started with OSS, which attracted increasing numbers of participants with the advent of the Internet. Subsequently the movement spread from software to other kinds of content: encyclopaedias, journals, books, movies and more came to be produced in an open-source fashion. Note, though, that the epithet ‘open-source’ in these instances just refers to the circumstance that contributed content is readily made available for distribution, refinement and modification—the typical software distinction between source code and object code no longer applies.
A mode of production like this opens the gates to the outside world for everybody. Anybody is invited to contribute inputs that are relevant to the project. But to what extent can these suggestions be relied on to make a valuable contribution, and be taken into account or ultimately integrated into the official, up-to-date version of the project concerned? Put the other way round: to what extent can one be sure that incoming uploads do not disrupt or undermine the collective cause? Some communities are characterized by little interaction between contributors: the project simply amasses all inputs, like photographs, journal entries, or music samples. In such cases, inputs of dubious quality do little harm. However, when contributors interact continuously about interconnected content that is ever-evolving, the quality problem becomes more acute. Exemplary domains with such dense interaction are software and encyclopaedias. With software, one is invited to submit comments, bug reports, code patches or new features in source code; with encyclopaedias, one is invited to submit comments, suggest changes to existing entries, or suggest new entries altogether. In both cases, the contents are in perpetual flux.
As soon as a project grows in size—and monitoring by a single person becomes unfeasible—those in charge have to ask themselves who can be trusted to provide valuable comments and/or content, in a spirit of loyal cooperation and in proportion to their competences. Those—and only those—worthy of one’s trust can then be given permission to introduce their changes directly into the official version as presented to the public. The defining body of content (either the source code repository or the body of textual entries as a whole) is entrusted, as it were, to a collective of dedicated contributors to take care of. In contrast to one’s child in the babysitter example, though, the goods being entrusted are ever-expanding.
That trust is an issue can be shown quite specifically. When OSS hackers create a tree of source code together, inappropriate code may be a nuisance (cf. de Laat 2007: p. 171). For one thing, code may be sloppy, of mediocre quality, or contain bugs. For another, in a more malicious vein, code may contain viruses that have the potential to spread throughout the tree (‘malware’). If the official repository is made accessible to a multitude of trusted persons, considerable damage may result. The risk taken is not insignificant: cleaning a spoiled tree can take many hours of painstaking work. One has to roll back to earlier versions of the repository and start anew without the contested code. Subsequent source code changes then have to be reintroduced one after the other. Although in practice larger OSS projects are split up into modules that run in parallel, thereby reducing this risk, some element of risk still remains.
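To see why such cleaning is laborious, consider the following minimal sketch, which models a repository history as a simple list of revisions. The names and the helper function are invented for illustration; in a real version-control system each replayed change is a merge that may conflict and must be resolved by hand:

    # Hypothetical sketch: rebuilding a repository history without contested code.
    def clean_history(commits, last_clean, contested):
        """Replay history from the last clean revision, skipping contested commits.

        commits    -- ordered list of commit identifiers for the whole history
        last_clean -- identifier of the last revision known to be free of bad code
        contested  -- set of commit identifiers that introduced the bad code
        """
        start = commits.index(last_clean) + 1
        replayed = []
        for commit in commits[start:]:
            if commit in contested:
                continue  # drop the contested code entirely
            # In a real repository each replay may conflict with earlier changes
            # and must be reviewed by hand -- hence the hours of painstaking work.
            replayed.append(commit)
        return commits[:start] + replayed

    history = ['c1', 'c2', 'c3-malware', 'c4', 'c5']
    print(clean_history(history, 'c2', {'c3-malware'}))  # -> ['c1', 'c2', 'c4', 'c5']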
Project leaders in OSS are often acutely aware of the problem. From an online survey it transpired that SourceForge developers do consider interpersonal trust important for the effectiveness of OSS communities (Lane et al. 2004). In particular, they identified obtaining write access to the repository as a matter of trust that has to be gained by adequate performance. Compare the following quotations:
Once a potential project participant has proved his/her interest by submitting relevant code changes and expressing an interest to write more code, this is normally enough for them to gain the trust of existing project members. Once trusted a participant is typically given commit rights to the source repository, and can thus freely change the code base.
Free Software is generally a trusting community. However, it is generally accepted that a new guy is not trusted. This means that a new guy can’t just write an email to the developer’s list and get write access to the project’s CVS.[1] A new guy has to build trust with the project by submitting patches, useful criticism, help, and testing, and so forth. Before someone can have write access to CVS, they generally have to demonstrate programming skills, an ability to take criticism and use it constructively, work with a team, and show that they are willing to work to resolve problems.
The arch-father of Linux, the largest OSS project ever, is also aware that trust is at stake. By nature not a trusting kind of person, Linus Torvalds only extends his trust to a few chosen lieutenants:
(…) I’m afraid that I don’t like the idea of having developers do their own updates in my kernel source tree. (…) I know that’s how others do it, and maybe I’m paranoid, but there really aren’t that many people that I trust enough to give write permissions to the kernel tree. (retrieved from http://lkml.indiana.edu/hypermail/linux/kernel/9602/1096.html)
A similar analysis applies to open-source encyclopaedias. The entries that collectively make up an encyclopaedia can obviously be spoiled by contributors with mala fide intentions and/or poor capabilities. Wikipedia in particular has by now accumulated ample experience on this point and developed an amusing typology of such participants (http://en.wikipedia.org/wiki/Wikipedia:RCO; henceforth for all English Wikipedia references the prefix http://en.wikipedia.org/wiki will be omitted but presumed as the default). ‘Cranks’ insert nonsense, ‘trolls’ and ‘flamers’ stir up trouble, ‘amateurs’ disturb entries with their fake knowledge, ‘partisans’ smuggle their point of view into entries, and ‘advertisers’ find subtle ways to promote their products. As a result, entries become unbalanced at best, unreliable at worst. Having to monitor and redress such disturbances is obviously a major task (more on this below).
So here, too, providing write access to the project’s official body of content is a matter of trust: dependence, vulnerability and risk are all involved. Inside Wikipedia, volunteer ‘officials’ are well aware of this, as shown most clearly by growing concerns about two issues. On the one hand, the reliability of entries has become a source of concern. In response, various quality gradations and procedures for verifying entries have been introduced (‘good’ article, ‘featured’ article). Moreover, software schemes that colour chunks of text according to their reliability are in the making. On the other hand, the trustworthiness of contributors themselves is becoming a critical issue. Should levels of ‘trusted users’ be distinguished—as opposed to ordinary users? Should such ‘trusted users’ specifically be charged with carrying out quality inspection of entries? Should such users police vandalizing contributors? Obviously, the two initiatives, directed towards entries and towards the contributors behind them, are seen as interrelated.
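One real scheme of this kind is the WikiTrust project, which colours the background of text according to the computed reputation of its authors. The sketch below is a loose, invented approximation of that idea; the thresholds, colours, and function names are illustrative only:

    # Toy sketch of reliability colouring (invented thresholds and colours).
    def colour_for(reputation):
        """Map an author-reputation score in [0, 1] to a background colour."""
        if reputation >= 0.8:
            return 'white'        # well-established text
        if reputation >= 0.4:
            return 'lightyellow'  # moderately trusted text
        return 'orange'           # new or low-reputation text: read with care

    def render(chunks):
        """chunks: list of (text, author_reputation) pairs -> crude HTML."""
        return ''.join(
            '<span style="background:{}">{}</span>'.format(colour_for(rep), text)
            for text, rep in chunks
        )

    print(render([('Paris is the capital of France. ', 0.9),
                  ('It was founded by aliens. ', 0.1)]))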
This issue of trust is explored below by a close analysis of developments in both OSS communities (FreeBSD and Mozilla in particular) and encyclopaedic communities (Wikipedia in particular). This selection of cases is meant to cover some typical open-source communities that currently exist. The central argument about the handling of trust—whether by inference, assumption, or substitution—can be briefly summarized as follows. In an initial phase, when projects are still small, they usually rely exclusively on the first two mechanisms (‘informal phase’).[2] In this respect OSS could rely on a culture common to the community as a whole: the ‘hacker ethic’. Contributors could be supposed to adhere to this ethic and therefore be considered trustworthy enough. Wikipedia faced a much harder problem. When it started, no relevant common culture was in existence. As a result, trustworthiness could not be inferred directly in any plausible way; the only option was simply to go ahead and assume contributors were trustworthy. Was this assumption based on any rational underpinnings? The answer is found not so much in the mechanism of seeking esteem (as proposed by Pettit), but rather in the mechanism of substantial hope (as proposed by McGeer). Potential contributors were called upon to develop and apply their encyclopaedic skills. To be sure, in order to fill the cultural vacuum, a ‘wikiquette’ soon enough came to be developed inside Wikipedia as an analogue of the hacker ethic.
In time rules and regulations were introduced, relating in particular to a division of roles and decision making (de Laat 2007). This is a common development as soon as projects grow, both in the number of participants and in the size of the content created. In order to manage the complexities involved, project leaders experience the need to structure their projects. The link with trust is that rules may substitute for trust—and so reduce the trust needed. It is important to emphasize, though, that such governance by rules and regulations may vary across projects. On the one hand, rules may be designed starting from the premise that participants can be fully trusted: a maximum amount of participant discretion is designed in, so to speak. On the other hand, the leading presumption may be the opposite: participants cannot be trusted to deliver reliable content of their own accord, so as little discretion as possible—without stifling voluntary contribution altogether—is granted by the structural design. A low-discretion design signalling low trust is the outcome. Between these extremes, a continuum ranges from high- to low-discretion design. It is argued that the design of open-source software and encyclopaedia production seems to be converging towards a medium level of discretion.
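The contrast between high- and low-discretion designs can be made concrete with a small sketch of access rules. The roles and actions below are invented for illustration; real projects encode such rules in repository permissions, editing policies, and review software:

    # Invented roles and actions illustrating high- versus low-discretion design.
    HIGH_DISCRETION = {                 # high trust: anyone may write directly
        'anonymous': {'read', 'write'},
    }

    LOW_DISCRETION = {                  # low trust: writes pass through gatekeepers
        'anonymous':  {'read', 'propose'},
        'trusted':    {'read', 'propose', 'write'},
        'gatekeeper': {'read', 'propose', 'write', 'approve', 'rollback'},
    }

    def allowed(policy, role, action):
        """Check whether a role may perform an action under a given policy."""
        return action in policy.get(role, set())

    print(allowed(HIGH_DISCRETION, 'anonymous', 'write'))  # True: wiki-style openness
    print(allowed(LOW_DISCRETION, 'anonymous', 'write'))   # False: write access must be earned

Early Wikipedia sat near the first pole, while OSS repositories have always sat closer to the second; the convergence claimed above is a movement of both towards the middle of this continuum.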