Science and Engineering Ethics, Volume 19, Issue 3, pp 1181–1200

Re-skilling the Social Practices: Open Source and Life–Towards a Commons-Based Peer Production in Agro-biotechnology?

Authors

    • Guido Nicolosi, Dipartimento di Scienze Politiche e Sociali, University of Catania
    • Guido Ruivenkamp, CTC Research Unit of Wageningen University
Original Paper

DOI: 10.1007/s11948-012-9405-4

Cite this article as:
Nicolosi, G. & Ruivenkamp, G. Sci Eng Ethics (2013) 19: 1181. doi:10.1007/s11948-012-9405-4

Abstract

Inspired by the thinking of authors such as Andrew Feenberg, Tim Ingold and Richard Sennett, this article sets forth substantial criticism of the ‘social uprooting of technology’ paradigm, which deterministically considers modern technology an autonomous entity, independent of and indifferent to the social world (practices, skills, experiences, cultures, etc.). In particular, the authors focus on demonstrating that the philosophy, methodology and experience linked to open source technological development represent an emblematic case of re-encapsulation of the technical code within social relations (reskilling practices). Open source is discussed as a practice, albeit not a unique one, of community empowerment aimed at the participatory and shared rehabilitation of technological production ex-ante. Furthermore, the article discusses the application of open source processes in the agro-biotechnological field, showing how they may support a more democratic endogenous development, capable of binding technological innovation more closely to the objectives of social sustainability (reducing inequalities) and environmental sustainability.

Keywords

Technology · Democracy · Participation · Social skills

Introduction: Skilling, Deskilling, Reskilling

Many authors (Latour 1991; Ingold 1997; Oyama 1998, etc.) have decisively shown that there is a deeply rooted prejudice in modern western societies that considers technology and society to be two separate and independent entities. In particular, there is a scientific and cultural paradigm (Kuhn 1962) that tends to interpret technology as a neutral entity, freed from the social context within which it has developed. This presumed neutrality of technological development, in reality, constitutes an ideological veil that may support and satisfy consolidated political and economic interests (Noble 1993). Moreover, to view technological development as a process separate from the social dimension implies the substantial renunciation of any effort aimed at conceiving a socially oriented technology (Gallino 2007). On the contrary, technology should be considered the product of a simultaneously techno-scientific and social process of co-creation (Bijker 1995).

It should be emphasized that the interests of the social groups directly or indirectly involved in such a process do not carry equal weight and consideration. Indeed, there are serious imbalances and asymmetries of power that contribute to defining its main dynamics (Feenberg 2001).

With the aim of dismantling this ‘dualist’ prejudice and showing the close relationship between technology and society, the British anthropologist Tim Ingold, developing a complex line of research that moves between developmental biology, ecological psychology (Gibson 1977), phenomenological philosophy (Merleau-Ponty 1945) and the anthropology of practice (Mauss 1936; Bourdieu 1980), has demonstrated that technical action must be interpreted starting from the fundamental concept of skill. For Ingold (1983, 1997, 2000a, b), technique is a skilled practice in which the natural and the social domains are not clearly distinct and separable. This is because technique, corporeity, environment and sociality are, in truth, facets of the same entity. Expert practice (skill), a human action founded on attention and perception, imitation and improvisation, weaves (literally) the relationship between these dimensions together and fuses them into an inextricable interlacing.

With the advent of modernity, this interlacing appeared to many authors as irreparably lost, or relegated to a kind of ideal golden age, by now definitively surpassed. Modern societies, by activating processes of radical mechanization, are said to have dramatically broken the nexus between action and perception. Moreover, the processes of uprooting and abstraction of work and technical action imposed by mechanical automation and linked to the externalisation of the forces of production had already been widely described in the classic socio-anthropological literature (Durkheim, Marx, Weber, etc.).1

Our epistemological position is that these processes, real and often dramatic, have ended up obscuring an anthropological given that remains ineliminable and that François Sigaut has masterfully and synthetically summarised in the expression “law of the irreducibility of skills”. That is, the process of deskilling can never be brought fully to completion because “new skills tend to continually develop around new machines” (Sigaut 1994, 446).2 This “law” has its roots in a fundamental anthropological relationship between the body and the instrument that Sigaut (2007) describes, and whose traces may also be found in the very etymological root of the term organ (of the body): the word ὄργανον, which in ancient Greek means ‘tool’. Sigaut writes of technique as the foundation of human sociability. Indeed, in his perspective, the mutual relationship between the real, the ego and the other, which he calls the “triangle of sense”, is created precisely through technical and suitably equipped (outillée) action. At the root of this action is the sharing of experience, which renders technique a source of pleasure and social in its essence.

We believe it possible to put forward substantial criticisms of the theory of radical uprooting and, in this way, to overcome the pessimism of some apocalyptic3 visions of technology. In particular, we think there is room for manoeuvre to conceive of, and also produce in concrete terms, reskilling practices (Giddens 1991). Namely, we believe it possible to develop practices aimed at the re-encapsulation of technology within social relations; practices aimed at the empowerment of communities and the participatory and shared rehabilitation of technological production ex-ante, with the aim of supporting a more democratic endogenous development (Van der Ploeg and Long 1994) potentially able to bind technological innovation more closely to the goals of social sustainability (reduction of imbalances).

This article sets out to discuss open source as an emblematic example of reskilling practices, one which well illustrates how more room for manoeuvre for the social embedment of technologies, also in the agro-biotechnological field, is emerging.

Open Source as Commons-Based Peer Production

Open Source

Open source is both a concept and a practice founded on the principle of free access to knowledge and information. Historically, it came about in the framework of information technology. Indeed, open source is the fruit of the evolution of a debate and a social practice that, beginning in the 1960s and 1970s, accompanied the development of free software4 and copyleft5 in the context of a hacker ethic (Himanen 2001). The innovative aspect of open source derives from its ability to simultaneously incorporate socio-cultural, political, economic and organizational dimensions within a technological product.

From the socio-cultural point of view, open source re-introduces a dynamic of exchange founded on an obligation based on three actions: giving, receiving and giving back. From this viewpoint, open source reproduces a model very similar to the one Mauss (1923) described when referring to the forms of social bonding typical of pre-trade societies (Berra and Meo 2001), a model later taken up by Karl Polanyi (1974) to formulate his theory of the embeddedness of the economy in society, in order to criticize the classic and neoclassic economic argument of the naturalness of the market and its underlying anthropology of homo oeconomicus.

From the political point of view, the hacker ethic revives a libertarian and socialist vision of society.6 From the economic standpoint, open source represents a radical criticism of a capitalism that founds profit on patents, championing instead an open model of circulating knowledge that we might define as coopetition (cooperation plus competition).

Finally, from the organizational point of view, the contrast between the typical models of proprietary software and those of free software has been widely represented by Eric Raymond’s (2001) metaphors of the “cathedral” and the “bazaar”. On one hand, the cathedral model supports planning driven by the various phases of segmented work, with a consequent loss by the mere executors of any form of ‘local intelligence’. On the other, the bazaar model, self-organized, rooted, acephalous and flexible, is founded on the force of the cooperative motivation of participants. The latter model finds its added value in the logic of reciprocity that constitutes the anthropology of the gift of which Mauss spoke.7 This logic is able to strengthen the social bond, anchoring it to a model of exchange alternative both to the market economy and to state centralism.

Open Source, Open Knowledge, Commons-Based Peer Production

Free software theory (and practice) anchors its discourse within the analysis of ‘knowledge as commons’. Since this analysis has its roots in the field of interdisciplinary studies of shared natural resources (water, forests, fisheries, etc.), we wish to emphasize that there are some challenges in identifying the similarities between knowledge commons and traditional commons. The main difference is that natural resources are usually ‘subtractive’: their consumption by one person reduces the possibility of consumption by another. Knowledge, on the other hand, is relatively non-subtractive, since the more people who share knowledge, the greater the common good.8 Some authors (e.g. Braman 1989) prefer to refer to a ‘dual functionality’ of knowledge (human need and economic good) precisely to highlight the complex and controversial character of its nature. Indeed, as an immaterial entity (ideas, thoughts, etc.) it can be considered a ‘public good’,9 but as an economic good (books, CDs, newspapers, etc.) it can easily be converted into a capitalistic commodity, with serious access and preservation problems. New media and digital technologies have brought both faces of this Janus to the fore, entailing an ambivalent complexity which can reside at the local level, at the global level or somewhere in between.

In all fields in which information, knowledge and information-dense goods play a fundamental role (an increasingly frequent situation in today’s network society; Castells 1996), it is possible to realize practices of open access10 that render such assets ever more usable by those who have traditionally been excluded.11 It is easy to understand how this makes increasingly likely the emergence of practical individual and collective skills based on a non-proprietary logic, grounded in cooperative actions, and escaping the regulation of both the State and the market. Yochai Benkler has defined these actions as ‘commons-based peer production’:

Commons-based peer production is a socio-economic system of production that is emerging in the digitally networked environment. Facilitated by the technical infrastructure of the Internet, the hallmark of this socio-technical system is collaboration between large groups of individuals, sometimes in the order of tens or even hundreds of thousands, who cooperate effectively to provide information, knowledge or cultural goods without relying on either market pricing or managerial hierarchies to coordinate their common enterprise. (Benkler 2006a: 395)

Benkler singles out as their main characteristic precisely the fact that in these forms of production nobody has exclusive control over the use of resources. On the contrary, within them there is a freedom of access reserved to more or less broad groups (up to and including humanity as a whole), and a regulation of such access always characterized by a general reciprocity (or symmetry). The guiding principle is that of sharing knowledge and information. In these forms of production, moreover, no institutionalized system hierarchy is envisaged, production being substantially decentralized.

For these forms of production, moreover, the Internet represents the main material and symbolic infrastructure of action and sharing. The underlying assumption is that even complex tasks, whose execution is generally attributed to professionals with advanced skills and which require a high level of specialization, can in reality be reorganized (by moderators accredited through systems of peer appraisal) and, therefore, carried out by simply interested volunteers with intrinsic motivations bound to social rewards. In the last instance, these forms of production rely mostly on a non-economic capital, namely social capital.

Democracy, Open Codes and Reskilling Practices: The Case of Life Technologies

Benkler rightly asserts that wider accessibility to knowledge and information may have a remarkable impact on reducing the serious imbalances that undermine the basis of world-wide human development. Indeed, such development is closely correlated to real progress in some specific fields, such as food (food security), health, research and education. In all of these fields, a determining role is played today by information-embedded products, goods and tools (Benkler 2006b). The application of commons-based forms of production to these sectors may contribute to a fairer human development by facilitating bottom-up decentralized practices. In this sense, open source may help us avoid what was efficaciously defined ‘the tragedy of the anti-commons’ (Heller 1998). The expression was coined to criticize the unscrupulous use that many liberal economists continue to make of the famous dilemma formulated by the American ecologist Garrett James Hardin (1968) and known as ‘the tragedy of the commons’. This refers to the situation in which a plurality of individuals using a common good (not subject to private property restrictions), acting rationally to pursue their own short-term private interests, end up squandering the (limited) common resources, even though this evidently harms their own interests over the long term.
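Hardin’s dilemma can be rendered as a toy simulation. This is a hypothetical sketch, not part of the original argument, and every name and number in it is an illustrative assumption: each herder captures the full private benefit of adding an animal while the grazing cost falls on the shared pasture, which eventually collapses.

```python
def simulate_commons(herders=10, capacity=100, rounds=20):
    """Toy model of Hardin's 'tragedy of the commons': each herder keeps
    the full benefit of one extra animal, while the grazing cost is borne
    by the shared pasture."""
    pasture = capacity
    herds = [1] * herders
    for _ in range(rounds):
        for i in range(herders):
            herds[i] += 1  # individually rational: the benefit is private
        grazing = sum(herds)
        regrowth = max(0, capacity - grazing)  # overgrazing slows regrowth
        pasture = min(capacity, pasture - grazing + regrowth)
        if pasture <= 0:
            return "collapsed", grazing
    return "sustained", grazing
```

Run with these arbitrary defaults, the pasture collapses within a few rounds, even though every herder would be better off under collective restraint.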

The tragedy of the anti-commons, instead, describes the opposite situation, in which the proliferation of holders of property rights frustrates the possibility of reaching a desirable social result, owing to the collapse of the mechanisms of coordination needed to ensure the development and marketing of products. Indeed, economists recognize that if the ownership rights over the multiple components of a single technology are held by a large number of entities (individuals or companies), the marketing and development of products requires a considerable effort of coordination between the various parties involved. In an ideal world without transaction costs12 this would not create particular negotiation problems. But the reality is very different, and transaction costs are often so high as to hinder negotiation, with the paradoxical effect of preventing the satisfaction of both private interests and those of the public at the same time.
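The role of transaction costs in the anti-commons can be illustrated with some toy arithmetic. All figures below are hypothetical assumptions, not data from the article: modest per-patent royalties alone leave a margin, but adding a fixed negotiation cost for each of many rights holders can turn a valuable product into an unviable one.

```python
def product_viable(value, patents, royalty_share, negotiation_cost):
    """Toy 'anti-commons' arithmetic: each patent holder takes a royalty
    share of the product's value, and every licence negotiation carries
    a fixed transaction cost."""
    royalties = patents * royalty_share * value
    transactions = patents * negotiation_cost
    return value - royalties - transactions > 0

# Hypothetical product worth 1,000,000 built on 40 separately owned patents:
product_viable(1_000_000, 40, 0.01, 0)        # viable with zero transaction costs
product_viable(1_000_000, 40, 0.01, 20_000)   # unviable once each negotiation costs 20,000
```

The point of the sketch is that the royalties themselves are not the obstacle; the coordination overhead of dealing with many fragmented owners is.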

This theory has not been spared criticism. Particularly pungent is the critique elaborated by the neo-liberal jurist Richard A. Epstein. Referring to biomedical research, Epstein claims that Heller overestimated the problem of patent protection at both the theoretical and the empirical level. Empirically, the number of patents filed has continued to grow. Theoretically, according to Epstein and Kuhlik (2004), Heller fails because of a set of ‘faulty analogies’: namely, the analogies between the patent system and natural barriers, and between patents and bureaucratic permits.

Concerning the first analogy, Epstein argues that nature can put up a barrier against innovation,13 but this, as noted, is not the case in the world of biomedical research. Concerning the second analogy, Epstein argues that the parallel is not pertinent because a patent is always a ‘wasting asset’. Indeed, the legal monopoly created by a patent is limited in time and, even during the period of its validity, new and old patents or new technologies can challenge its dominance. He claims: “those who do not deal will not prosper, so that the entire culture works in ways that encourage various forms of cooperation.”14

The Case of Agro-Biotechnologies

Epstein’s authoritative opinion has, in turn, also been contested as being founded on weak theoretical and empirical assumptions. It is important to note that part of this criticism comes from outside the open source movement (see Dreyfuss 2003). This is just to say that the framework is controversial and complex. It is difficult to find final solutions and, probably, the cultural and axiological ground is, in the end, the only one able to justify the choice among the (more than two) alternative normative and practical roadmaps of action.

In our opinion, an emblematic case of the ‘tragedy of the anti-commons’ has been given by biotechnologies, both of the ‘red’ (health and pharmaceutical) and of the ‘green’ (agricultural) kind. In particular, the latter, in spite of the great expectations generated, have been a substantially bankrupt experience. The effort that began in the 1930s to increase crop yields and boost the nutritional quality of food, in order to reduce the impact of famine and malnutrition, prompted processes of techno-scientific innovation that, once they became proprietary, could no longer respond satisfactorily to expectations. Take, as an example, the case of Golden Rice, a rice variety genetically modified with the aim of introducing vitamin A into the poor diet of the populations consuming it. Here, the resort to a politics of proprietary patents has transformed a potentially revolutionary product into a tragic paradox. The vetoes, obstacles and very high costs caused by the combination of dozens of patents (distributed throughout the world) weighing on the several steps that must obligatorily be tackled for its realization have given rise to an inextricable legal and economic predicament that has made Golden Rice an entirely unsustainable product (Hope 2008). This condition of unsustainability can easily be generalized to all the proprietary experiences in biotechnology. A picture emerges in which the levels of social equity, the defence of environmental resources and public health have often been subjected to unacceptable distortions. The case of genetically modified organisms (GMOs) is only the most glaring. If we analyze their impact, we may affirm that they have been a disastrous experience from the socio-economic point of view (impoverishment and mass proletarianization of Third World farmers), from the environmental point of view (reduction of biodiversity), and from the point of view of real effectiveness (Shiva 2004). This is without taking into account the serious symbolic crisis that they have caused also in the rich and economically developed world (Nicolosi 2007).

By means of a patenting policy based on ‘exclusion’, the biotechnological industry has increasingly restricted farmers’ access to phytogenetic resources. Indeed, germplasm is the raw material on which any plant-breeding program is based, and to which breeders have traditionally always had free access in order to make ongoing, selective improvements to plant varieties over the course of several generations. The advent of genetically modified organisms has drastically changed this situation. In an apparently paradoxical way, what has happened is a typical example of antisocial free-riding. The breeder (often a multinational corporation) patents (and therefore renders inaccessible) a germplasm that is, in turn, the result of the crossing of genes and other germplasm developed by other breeders in a system of free access. Often this biological material has been developed from the cultural practices of indigenous communities, or within programs financed with public funds or by international research centres and freely made available to all. One of the most sensational instances of patenting fundamentalism was reached when seeds rendered artificially sterile were patented, in order to exclude the possibility that they might be re-sown by farmers. Conventional biotechnologies, thus, by separating seeds (and the genetic information contained within them) from the harvest and, therefore, from the socio-cultural practices and the natural environment that traditionally generated and ‘safeguarded’ them, have made farmers dependent on specialist, globalized multinationals holding highly expensive patents (Kloppenburg 2005).

The production of food and the development of agricultural techniques are the fruit of an age-old storehouse of social, cultural and symbolic resources. This storehouse is marginalized by processes of scientific production, provoking an uprooting of food and agriculture from local traditions and an unequal enrichment entirely to the gain of the big globalized players.15 In practice, by means of privatizing biodiversity, a gradual erosion of the sovereignty that farmers once had over seeds has been achieved (Kloppenburg 2010). What we have witnessed, therefore, has been the systematic expropriation of symbolic and material resources by opaque and uprooted international knowledge networks (Ruivenkamp 2008).

Democratizing Innovation: Open Codes and Reskilling Practices

We have seen how, according to many authors, the radical enclosure of research characterizing the last thirty years of techno-scientific development has introduced serious inefficiencies that have drastically limited its economic and social potential. In our view, open source can resolve some of these problems, adapting to the real needs of users even in cases where the potential market is not broad, and speeding up innovation processes, making them more economically accessible and more reliable. Moreover, and this is no secondary matter, open source can facilitate a greater democratization of innovation, since it guarantees an effective involvement of the user/consumer (prosumer) in the processes of technological innovation (Von Hippel 2005).

In effect, in open source the one-way, top-down logic of traditional practices of consumption is radically overturned. It is necessary to emphasize that there is also a broad literature in the social sciences and humanities (mainly neo-Marxist) criticizing the actual extent of this process. Some of these authors (e.g. Suarez-Villa 2001) consider prosumerism, and more generally the practical involvement of the user, a new form of exploitation in the age of techno-capitalism, where the reproduction of knowledge has become the most important function of society, since it is replacing the process of reproduction of capital. According to Suarez-Villa, the emergence of the so-called knowledge society has rendered creativity and knowledge the scarcest resources capital needs. Under techno-capitalism, knowledge assumes the properties of a private commodity, much as raw materials or labour power did under industrial capitalism. For this reason, there are pressures to globalise and commodify intellectual property (patenting). This system, he argues, is strongly dominated by big corporations, since the vast majority of new technologies, and particularly the most valuable ones, are spawned within corporate structures. The most radical example of this process, says Suarez-Villa, is the commodification of life itself through the acquisition of patents in the field of genetics, with biotechnology companies obtaining patents on decoded genes that can be used to develop new drugs. In this context, each innovation produced outside the corporate framework is, in the end, appropriated and absorbed by it.

This is the case of open source, he claims, but it is also the case of prosumerism, where the user’s contribution to innovation helps the corporation increase profits through the most sophisticated and pervasive example of alienation. Here, neo-Marxism finds strong support in governmentality,16 inspired by the post-structuralist thought of Foucault. In this perspective, individual freedom, empowerment and technologies of the self express contradictory trends, as they have ambivalent critical points at which their implied liberalism may suddenly turn into its opposite: incorporated (embodied) forms of control that can become a dramatic ‘microphysics of power’. That is, disciplinary techniques of the self can become disciplinary technologies of power.

In this perspective, people believe they are able to express themselves and their autonomy but, on the contrary, they may in effect simply find the ‘right’ way to align themselves with the desiderata of power. Here, “the autonomy of the self is thus not the eternal antithesis of political power but one of the objectives and instruments of modern mentalities for the conduct of conduct” (Rose 1996, 155).

We are aware of these radical, well-founded interpretations, but in this article we support a different position. In our opinion, the skilled involvement of the user can17 go towards a better accomplishment of the ideal ‘democratic rationalization’ of technology of which Andrew Feenberg (2010) writes. There are two reasons to explain this affirmation.

First of all, open source develops to an extreme degree a potential implicit in every technology, one which constructivist sociological analysis (Bloor 1991; Latour 1987; etc.) has widely demonstrated against the ‘essentialist’ vision of the philosophy of technology (for instance Heidegger’s), confirming that technological development is always bound to socio-historical contingency.

It was probably Andrew Feenberg who best expressed this principle from a philosophical perspective, at the same time highlighting all its democratic and emancipatory political potential. Bearing in mind the teachings of the heterodox Marxism of Antonio Gramsci (1975), Feenberg refutes the vision that upholds the presumed neutrality of technology and, on the contrary, recognizes the hegemonic content that is encapsulated within technological objects:

Technical design responds not only to the social meaning of individual technical objects, but also incorporates broader assumptions about social values. The cultural horizon of technology therefore constitutes a second hermeneutic dimension. It is one of the foundations of modern forms of social hegemony. As I will use the term, hegemony is domination so deeply rooted in social life that it seems natural to those it dominates. One might also define it as that aspect of the distribution of social power which has the force of culture behind it. (Feenberg 2001, 86)

Feenberg reminds us that technologies are as they are not because an absolute law (technical or economic) imposes this. There are different possible paths and outcomes for the development of any technology. Among the several possible configurations, the one actually realized is selected by dominant interests. Once introduced, ex post, every technology appears functional precisely because it offers material support to a certain cultural horizon.

For this reason, Feenberg introduces the concept of the technical code: a code that defines ‘the object in strictly technical terms, in accordance with the social meaning it has acquired’ (Feenberg ibidem, 88). These codes prove ‘invisible’ because self-evident,18 but they express hegemonic interests and visions in a given socio-historical context. Yet precisely from the Birmingham School of Cultural Studies he learns how every hegemonic code, in order to have meaning, must also be decoded and domesticated, in the course of time, by the users of that code.19 Equally, turning also to the work of Foucault (1976a, b), of Latour (1993) and of de Certeau (1980), Feenberg shows how resistance tactics (in de Certeau’s terms) aimed at re-encoding the social order hegemonically incorporated in technological objects are possible.

In light of these theoretical contributions, Feenberg proposes setting up a participatory method that renders the code an open entity in which processes of creative appropriation can be fulfilled, aimed literally at re-inventing the used technologies.20 In our view, open source materializes and extols this proposal today, opening real operating margins for its achievement that did not formerly exist.

Secondly, and in a closely correlated way, the processes of participation activated by the open source logic offer an important contribution to the issue (also tackled by Feenberg) of the need to limit the solipsistic power of technocracy: a power that, in modern societies, is often more substantial than that of the political system itself. Technicians, as is known, exercise an influence and control over communities that goes beyond that exercised by the institutions appointed to implement democracy. Winner (1995) speaks of the influence of technocracy in terms of a new form of legislative power lacking representativeness. For this reason, for example, Richard Sclove, one of the most influential authors in this field, upholds the necessity of finding new ways to integrate user participation into the processes of technological design (Sclove 1995, 2010) and points specifically toward non-specialists to limit the autonomy and self-referentiality of technical staff.21 In order to achieve this, Sclove appeals to the need to revitalize local communities so as to favour the growth of their independence and their ability to govern science and technique.

Open source, as we have seen, heads right in the direction indicated by Sclove, through the activation of a re-skilling process that restores part of the design and technical creation process into the hands of non-experts and, therefore, of the communities on which the technology unfolds its effects. In this sense, it fits squarely within the current of reskilling practices (Giddens 1991) oriented toward empowering communities through the reappropriation of knowledge and know-how (savoir-faire) expropriated by techno-scientific ‘abstract systems’ in modern times.

On the value and the epistemological and ontological importance of skills22 we have already spoken in the introduction. Richard Sennett has gone further, developing and elaborating the Jeffersonian ideal according to which democratic competence resides in the skills practiced to modify material conditions. Sennett, that is, points out that citizenship can be lost or earned through the material conditions of labour, and puts forward artisan work as the ideal form in which participation, sharing, personal realization and social bonds are articulated.

With this argument, the American sociologist carries out a critical review of the thought of one of his masters, Hannah Arendt. Indeed, in her analysis of the ‘human condition’, Arendt (1994) distinguishes two figures of the human being at work: the animal laborans and the homo faber. The first echoes a vision of man as a beast of burden, namely the repetitive and laborious dimension of work. The second, instead, recalls a higher world of activity: that of building a communal life. For Arendt, the figure of homo faber goes directly back to the high and noble art of Politics (and therefore of Philosophy). In this distinction there is, however, an implicit hierarchy:

Homo faber is the judge of material labour and practice, not Animal laborans’s colleague but his superior. Thus, in her view, we human beings live in two dimensions. In one we make things; in this condition we are amoral, absorbed in a task. We also harbour another, higher way of life in which we stop producing and start discussing and judging together. Whereas Animal laborans is fixated with the question ‘‘How?’’ Homo faber asks ‘‘Why?’’ (Sennett, 2008, 6–7)

For Sennett, such a distinction is a fundamental philosophical error that leads to serious ethical and political consequences. In particular, it provokes a devaluation of material labour. For Sennett, instead, work, understood as artisan work, has a founding value for a broad democratic citizenship and should not be left in the hands of an expert elite. Working well, indeed, puts people in a condition to govern themselves and educates them toward citizenship. In artisan work, one must learn to reconcile autonomy and authority. For Sennett, moreover, anyone can become a good craftsman, since the required abilities are innate.

A central aspect of Sennett’s argument, lastly, is the principle that the advent of technology does not necessarily imply a marginalization of artisan skills. On the contrary, the advent of the most highly evolved digital technologies is making their revival possible. Indeed, skill may be defined as a “trained practice” (Sennett 2008, 37), and the new information technologies increasingly make possible a dynamic feedback that can “learn” from experience. Thus, all the elements present in Plato’s celebration of Hephaestus, in particular the aspiration to quality, the control over processes, the participatory and shared dimension, and the unity between individual skill and social community, are present in the new forms of work organization employed to develop open source software (see Linux) and, more generally, in all the forms of information technology development that Eric Raymond defines as the “bazaar”.23 Indeed, Sennett shows how the new information and communication technologies (ICT) allow a revival of “craftsmanship” (‘maestria’ in the Italian translation). A significant choice of term, which clearly evokes the author’s wish to overcome the risk of a reductive use of the concept of expert experience (skill). Craftsmanship is a quality of the relationship that the worker establishes with his task and his activity; it cannot be reduced to manual activity alone, being bound to “an enduring, basic human impulse, the desire to do a job well for its own sake” (Sennett 2008; 2009). Moreover, craftsmanship is a quality of work that depends more on the social conditions of the context surrounding the worker than on the tools or machinery used. It is therefore founded on the intimate anthropological relationship (a substantial unity) that exists between the hand24 and the head.

For Sennett, therefore, the dystopia and techno-phobia of Arendt, albeit understandable given the contextual conditions in which her generation lived, go directly back to the Greek myth of Pandora’s casket25 and establish an ideal model based on the ex-post control of technology, in which Politics (the Public, the Discourse) must try to face and resolve the problems of technology after they have been created by experts and technocrats.26 Sennett, instead, invokes an opposite model, based on a new ‘cultural materialism’, in which participation must begin before (ex-ante), namely at the moment in which people produce things through material work.

Conclusions: A Copyleft Germplasm for Reskilling Practices? Problematic Issues and Possible Future Developments

In light of these considerations, although this opinion is not universally acknowledged and, as we have seen, opposing criticisms come from both the right (neo-liberals) and the left (neo-Marxists) of the political spectrum, we believe that open source can promote, in specific circumstances, forms of re-skilling practices oriented toward introducing greater participation and social sharing into the processes of technological production.

In the agro-biotechnological field, there are already important experiences of re-skilling practices with the ambitious objective of uniting advanced genetic research and farmers’ know-how. To illustrate, we may note here the research and intervention projects carried out by the Critical Technology Construction (CTC) research group of Wageningen University and Research (WUR), aimed at developing Tailor-Made Biotechnologies (Ruivenkamp 2005; 2008; 2009; Ruivenkamp et al. 2008), and, similarly, the Participatory Plant Breeding (PPB) developed in Italy and realized throughout the world by Ceccarelli and Grando (Ceccarelli et al. 2007; Ceccarelli 2009). In both cases, there is an attempt at bottom-up technological development that re-encapsulates biotechnologies in the social, cultural and environmental traditions of communities, through the participatory collaboration of technologists, scientists, farmers and citizens in a virtuous mechanism of participation and sharing that recovers all the local know-how and environmental resources.27

In both cases, this concerns commons-based peer production models that help develop a participatory research model in which farmers play a fundamental role.28 Indeed, ‘conventional’ biotechnologies proceed by selecting varieties that often do not respond to the needs of the poorest farmers and of those working in marginal environments with difficult climatic and social conditions, where the paucity of common assets, such as water, and the unavailability of seeds are more often than not critical.29 On the contrary, the decentralization of genetic improvement envisages the use and valorisation of farmers’ local knowledge right from the beginning of the process, when the genetic variability is still large. The process brings farmers and researchers together so that each learns from the other. The participation of farmers in the selection process, under their own agronomic and climatic conditions, is not only effective; it considerably speeds up the adoption of new varieties without involving the complex mechanisms of official variety release and of the production and dissemination of certified seed, and it ensures a higher preservation of biodiversity.

Precisely in order to develop and strengthen these experiences in the agro-biotechnological sector, it will be crucial in the next few years to experiment and to carry out in-depth research on models for incorporating open source licenses of intellectual property into biotechnological innovations. Indeed, open source may promote the emergence or strengthening of commons-based production strategies, stimulating a cooperation model that, taking advantage of the potential offered by new technologies, connects the local with the global. Today, there are already important institutes, foundations and programs engaged in a cooperative way in applying this local/global model (‘glocal’, as Bauman 2005 would say) in order to reduce the superpower of the biotechnological patent system. Some examples are PIPRA (Public Intellectual Property Resource for Agriculture), a coalition of public American universities; CGIAR (the Consultative Group on International Agricultural Research)30; and, above all, BiOS (Biological Innovation for an Open Society), probably the most ambitious commons-based project oriented toward biotechnological innovation.31

Indeed, there are technical and legal difficulties in applying to forms of life and synthetic biology the copyleft or Creative Commons model (Lessig 2005) that has enjoyed such major success in the information technology sector.32 Partly, this is due to the fact that open source in biotechnology is a young field still to be developed. But there are also some important differences between ICT and biotechnology that deserve to be briefly discussed.

Firstly, ICT is a field mainly governed by copyright law, whereas biotechnology is regulated by a patent law system. This difference significantly affects the development opportunities for open source in biotechnology, because several aspects of patent law can create challenges for the application of an open source model.

In particular, there are crucial differences in the commercialization paths. In the life sciences, a huge amount of money is necessary to ‘move’ inventions through development, field testing, manufacturing and distribution, whereas open source software carries no expensive regulatory encumbrances and can be duplicated and distributed at (practically) no marginal cost.

As Boettiger and Wright (2006) claim, when a product has both commercial and humanitarian markets this point can create some economic and ethical paradoxes. Consider the case of an AIDS vaccine. Here, the patent owner, usually a public–private partnership, has the chance to use its patent rights in the licensing agreement to segment the market and thus provide a specific drug to underserved populations (the patent owner can bargain the licensing of the patent rights against the company’s commitment to deliver a low-cost product to developing countries). In these cases, open source licenses may be an encumbrance rather than a solution, as they could discourage private investments. Probably, this is why BiOS decided to focus on enabling technologies, precisely in order to preserve patent rights on application-level technologies.

Another problematic issue is ‘interoperability’. Linux’s success has been mainly linked to its condition of being a complete, functional alternative to proprietary operating systems. Linus Torvalds was able to create a kernel thanks to which a set of open source licenses had the chance to become ‘viral’ tools, ‘infecting’ proprietary software as well. This special quality broadened the field of applications integrating open source code. On the contrary, today the BiOS license mandates often discourage the owners of patents on biotechnologies. In a sense, the problem is that it is still hard to regard TransBacter, as many claim, as the new kernel of a bio-Linux.

Moreover, economic issues can also create, so to speak, ‘cultural’ encumbrances for the wide diffusion of open source in the biotechnological field. A large part of the success of open source in ICTs is owed to the (sub)culture of hackers. In the life sciences, open source is supported mainly by researchers operating in the public sector. Here, scientists are often not free to choose how to arrange patent issues because patent rights, unlike copyright on texts, are in the hands of their employer institutions (universities, research organisms, etc.).

Last but not least, another important cultural issue could become an insurmountable obstacle. In recent years, particularly in Europe, a widespread opposition to genetically modified organisms (GMOs) has grown exponentially in civil society. Irrational and ideological motives, as well as some well-founded biosafety concerns, underlie this lack of public acceptance. What we have called elsewhere the ‘orthorexic society’ (Nicolosi 2007) should not be considered a secondary problem.

Nevertheless, there are various interesting attempts in progress. One particularly significant example, realized by Tom Michaels (1999), is the ‘General Public License for Plant Germplasm’ (GPLPG).33 But, in our view, we are far from being able to say that open source in biology is a reality. We should consider it a ‘work in progress’, experimental and still controversial. Probably, it is still too early to say whether we have the appropriate architecture. The experience of BiOS and its practical implementation will be very important for understanding how far we will be able to go in the future. In this direction there is still much work and much research to be done. Our hope is that, in the next few years, sociologists, philosophers and scientists, together, will prove capable of taking up this challenge.

Footnotes
1

It was probably Karl Marx’s thinking that made the most meaningful contribution to this topic. And critical sociological literature, whether of an orthodox Marxist kind (for example, Braverman 1974) or of a post-Marxist line (for example, Gorz 1988), albeit introducing different and sometimes opposing interpretations and remedies, has focused widely on the tendency of industrial Capitalism to produce the deskilling (Friedmann 1946) of the workforce (which some call proletarianization). Anthony Giddens (1990) has re-articulated the Marxist theme, presenting modernity itself as a long process of uprooting social relations from local contexts of space–time interaction. As is known, for Giddens the main mechanisms of uprooting (disembedding) are symbolic tokens, such as money, and expert systems, namely: “systems of technical accomplishment or professional expertise that organize large areas of the material and social environments in which we live today” (Giddens 1990, 27).

 
2

Probably the great influence exercised by the Marxist theory of alienation (Marx 1970) has had a decisive impact in causing this obscuring. In reality, as Ingold claims, however absurd it may seem, even assembly-line labourers are workers called upon to develop task-oriented skills. It is precisely thanks to this ability of ‘coping with machines’ that workers are able to resist the attempt to reduce their activity to the mere execution of a command generated by the forces of production, and to produce, more than goods for the capitalist, social and personal identity. As Ingold puts it: “It is true that the machinery that workers are required to operate may––on account of its noise, heat, vibration or whatever––strain the human body to its limits of tolerance. However, despite Marx’s claim to the contrary, the worker does not cease to dwell in the workplace. He is ‘at home’ there. But home is often a profoundly uncomfortable place to be” (Ingold 2000b, 332).

 
3

Simplifying crudely, cases of techno-pessimism are: the thought of Heidegger, some emphases in the theoretical production of the Frankfurt School and, more recently, authors like Postman (1993), Nikolas Kompridis (2006), Bernard Stiegler (1998), Hardt and Negri (2003), etc. An extreme and violent version of such a formulation has recently generated a social and political phenomenon of a subversive and terrorist nature called Neo-Luddism. The most famous exponent of Neo-Luddism is surely Theodore J. Kaczynski (2010), better known as the “Unabomber”.

 
4

This formula indicates software programming based on the freedoms to run, study, copy, distribute and improve software (Stallman 2002). It is therefore formulated in complete contrast to proprietary software (the most famous producer being Microsoft), which uses the system of patents to hinder the free circulation of information. Obviously, this refers to the digital information encapsulated in the source code of software. The first experiences of information technology in the 1960s were historically characterized precisely by these freedoms. For a history of free software see also Paccagnella (2004).

 
5

The term is the outcome of a play on words aimed at overturning the concept of copyright and echoes the idea of an “author’s permission” that is formalized, in terms of a license (known as the GNU GPL), as the protection of free software. In brief, this concerns a legal document in which those using and modifying free software commit themselves, in a spirit of reciprocity, to applying and granting the same freedoms to other potential users. The GNU GPL (GNU General Public License) was created with the aim of avoiding the free-riding phenomena typically associated with public assets: I modify an accessible good and render this new good inaccessible. The first version (1.0) of this license was elaborated by Stallman and Eben Moglen for the earliest versions of GNU Emacs. The GNU GPL should not be confused with freeware and shareware, which do not guarantee any of the freedoms of free software.

 
6

In reality, two main currents clashed in the 1990s. One considers free software not simply a methodology of information technology development, but a genuine ideology aimed at subverting the model of capitalist society in a libertarian vein. The other, instead, envisages a system of licenses that allows greater promiscuity between free and proprietary software and does not refute the market economy, but rather aims at using open source as a potential competitive advantage over competitors (Paccagnella 2010). This contrast, represented also by different associations (the Free Software Foundation with Stallman and the Open Source Initiative with Raymond), has today been partially superseded, and it is now preferable to speak of FLOSS (Free/Libre Open Source Software).

 
7

It is important to emphasize that the concept of gift elaborated by Mauss does not have any connection with the modern concept of donation. It goes back, on the contrary, to a form of social bond founded on the obligation of reciprocity.

 
8

For this reason, Hess and Ostrom (2005, 2007) suggest that “the unifying thread in all commons resources is that they are jointly used, managed by groups of varying sizes and interests. Self-organized commons require strong collective-action and self-governing mechanisms, as well as a high degree of social capital on the part of the stakeholders.”

 
9

In economics, a good is defined as public when, owing to a series of natural and/or cultural circumstances (which can also change over time), it is characterized by a low degree of ‘rivalry’ and a low degree of excludability (that is, it is difficult to prevent its use). For the notion of “public good” and its specificity within the theory of the commons, see Ostrom (1990).

 
10

This issue, with its highly important development potential, will require in the next few years an in-depth theoretical and practical reflection on the commons (Hess and Ostrom 2007).

 
11

The French philosopher Michel Serres, in an interview in France-Info, has recently spoken on the revolutionary potential of this aspect in highly significant terms: http://www.fabriquedesens.net/A-propos-de-Wikipedia-Michel-Serre.

 
12

An ideal world in which all the players have a perfect knowledge of all the variables at play and in which there are no associated impediments or costs to negotiation.

 
13

He cites as example the blocking power of multiple owners controlling different segments of a river.

 
14

Epstein believes that patents do not necessarily create economic blockades and that the fear of anticommons is based on a faulty imagination: people imagine that each patent operates alone. On the contrary, he says: “in most instances, an aggressive program of patent pooling changes the overall landscape”.

 
15

According to the International Center for Agricultural Research in the Dry Areas (ICARDA), today approximately 50 % of the world seed market is monopolized by four large biotechnological multinationals: Monsanto (USA), DuPont (USA), Syngenta (CH), Groupe Limagrain (F). A similar statement can be made with reference to pesticide production: Bayer (D), Syngenta (CH), BASF (D), Dow Agrosciences (USA), Monsanto (USA), DuPont (USA).

 
16

That is, “the dramatic expansion in the scope of government, featuring an increase in the number and size of the governmental calculation mechanisms” (Hunt and Wickham 1994, 76).

 
17

If this choice is supported by political and ethical awareness.

 
18

Feenberg gives the example of the assembly-line machinery of the textile industry, which in the 18th century was built with such ‘technical’ features as to make it usable by children. Namely, it incorporated a specific social and ethical code that was considered self-evident.

 
19

We like to recall that one of the prominent exponents of this School, Stuart Hall (1973), applying a Gramscian Marxist perspective and a semiological paradigm of a pragmatic kind to the analysis of communication processes, suggests a theoretical model of media communication, defined encoding/decoding, which has much influenced the theoretical and empirical production of the Birmingham School. For Stuart Hall, the encoding of a message governs its reception, but does not do so in a transparent and foregone way. Indeed, every meaning and every discourse, once codified, must be translated into social practices in order to be completed and effective. Encoding, therefore, tends to propose a particular vision of the world, tendentially conservative and favouring the position of the dominant classes, but whose result is always also the outcome of a negotiation process in which different contextual and social variables play their part. The code must be decoded, and this process is open to the local social conditions of interpretation. In this process, the struggle of subordinate classes also has room to assert its own identity. In particular, Stuart Hall singles out three possible ways of reading the codes: the ‘dominant-hegemonic’ reading (passive acceptance by the receiver of the sender’s hegemonic code); the ‘negotiated’ reading (introduction of partially autonomous interpretations); and the ‘oppositional’ reading (antagonistic interpretation). These different interpretative patterns are closely linked to the economic-social conditions of the receivers, but Stuart Hall speaks not of interpretative pluralism but rather of polysemy. This is to say that the relationships between sender and recipient are asymmetric. The encoded meanings, indeed, are dominant (but not deterministic) because there are patterns of ‘preferential readings’ that bear the political/ideological/institutional order imprinted within.
For this reason, we define these preferential readings in Gramscian terms as hegemonic. They represent an order of ‘common sense’ to which new possible interpretations are often also traced back in order to lend them meaning. This does not diminish the fact, however, that particular social conditions may favour the emergence of antagonistic and autonomous polysemy.

 
20

Feenberg’s studies have been applied to the medical (AIDS) and information technology world (MINITEL in France).

 
21

These themes were keenly and seminally discussed by John Dewey (1927). For this reason, a pragmatist shift has been ascribed to Feenberg (Hickman 2001), from which Feenberg (2003) has disassociated himself.

 
22

For further exploration, see Nicolosi (2012).

 
23

Obviously, this refers to a potential. Indeed, Sennett, in showing the opportunities furnished by new digital technologies, emphasizes how they can also induce an “incorrect” usage (repetitive, static, alienated), that is, one oriented toward a separation between reality and simulation.

 
24

Sennett uses the term hand to refer to the body in its entirety and to the relationship that it establishes with the surrounding context. The head, obviously, is a synecdoche standing for ‘thought’, ‘reasoning’, ‘abstraction’ and ‘planning’, namely intellectual activity.

 
25

According to the Greeks, Pandora, Goddess of invention, was sent to Earth by Zeus as a punishment for Prometheus’ transgression.

 
26

Arendt, for example, was convinced that there should have been a public debate on the atomic bomb at the moment it was being created by scientists and technologists.

 
27

The adopted principle is that of ‘diversity managed in the field’. The choice is to leave behind the paradigm of genetic fundamentalism, which considers the information contained in the genome the only thing able to bring about potential innovation, and to accept the principle that it is, on the contrary, the genotype/environment interaction that has maximum importance.

 
28

In a sense, we could say ‘nothing new under the sun’, since the first modern economist, Adam Smith, described the phenomenon in 1776 with reference to a great part of the machines used in the nascent manufacturing industry. Those machines were originally the inventions of common workmen who, being employed in some very simple operation, naturally turned their thoughts toward finding easier and readier methods of performing it. Long before Smith, farmers were solving biological problems without thought of monetary reward, and sharing their inventions with their peers. Open source agriculture is more a restoration than a revolution.

 
29

The objective of the large biotech corporations is that of creating products able to ‘function’ in artificially standardized environments by means of the massive use of technologies and chemicals (with prohibitive costs for many farmers). Biodiversity, in this context, is neither contemplated nor valorised.

 
30

Its aim is to create a web interface for sharing data and computing resources in order to access databases, distributed throughout the world, that contain the records of the germplasm samples produced by local agricultural know-how. Often, access to these data and tools may prove fundamental for developing useful innovations.

 
31

BiOS is an initiative of CAMBIA (Center for the Application of Molecular Biology to International Agriculture), an Australian research institute founded by Richard Jefferson, a pioneer in the field of biotechnological research. The BiOS Initiative is based on a strong information technology component and on the application to the tools used of a license model very similar to copyleft. A historic example of how CAMBIA acts is given by the TransBacter system, in which bacteria alternative to Agrobacterium tumefaciens (generally used to transfer DNA into the genome of plants) are employed. CAMBIA freely supplies Sinorhizobium meliloti, Mesorhizobium loti and Rhizobium sp. NGR234 to non-profit and for-profit research, but on the condition that any resulting genetic improvements are in turn made freely accessible (Broothaerts et al. 2005).

 
32

For an overview of these legal difficulties, see Rai and Boyle (2007).

 
33

This licensing model can be used by different players: individual cultivators, communities, indigenous peoples, scientists, universities, NGOs, private firms, etc. Michaels emphasises that the proposal is an adaptation of the GPL software licensing model by means of a ‘materials transfer agreement’ (MTA). As recently claimed by Jack Kloppenburg (2010), this licensing model proves highly useful from both the ‘resistance’ and the creative points of view. Indeed, it has the advantage of developing a legal frame for the recognition of the collective sovereignty of farmers, allowing them to exchange, improve, conserve and sell seeds.

 

Acknowledgments

This article is the combined outcome of the first author’s PhD research on ‘On the Traces of Hephaestus. Body, Skills and Technology’ and the second author’s research on ‘Genomics and the production of commons: Open source as a method “to go beyond” public and private knowledge production’, carried out in cooperation with the Centre for Society and Genomics (CSG) of the University of Nijmegen and the Netherlands Genomics Initiative (NGI). Both authors thank Stephen Conway for the English translation. The first author thanks Prof. Bernadette Bensaude-Vincent and Prof. Marina Maestrutti for the opportunity to discuss some ideas contained in this article in a seminar organized by CETCOPRA (Centre d’Étude des Techniques, des Connaissances et des Pratiques) at the Sorbonne University (Paris) on the 19th of March 2012 during his period as a visiting researcher.

Copyright information

© Springer Science+Business Media Dordrecht 2012