
1 Introduction

The French legal framework for research evaluation underwent major changes following the ‘loi relative aux libertés et responsabilités des universités’ (loi LRU). This reform left former evaluative practices in place, whilst bringing in a new evaluation agency, AERES, itself recently replaced by a ‘High Council for the Evaluation of Research and Higher Education’ (HCERES). After a presentation of the French research evaluation landscape, as reshaped by the loi LRU, the paper will concentrate on the criticisms that have been formulated about the actors, tools and methods, as well as the place given to the social sciences and humanities (SSH) in this process. In the last section, we will focus on two projects, DisValHum and IMPRESHS, dedicated, respectively, to a study of dissemination strategies in SSH research and to case studies of the impact of research in the SSH. Because both projects are still under development, we will describe our methodology and present only a few preliminary results.

2 The Need for Evaluation in the Post-‘loi LRU’ Period

During the last decade, the need for evaluation increased in all higher-education systems. This movement did not spare France, in spite of this country’s tendency to stay away from general trends in culture-related mattersFootnote 1 and, more specifically, in education issues, as shown, for example, by France’s non-participation in the European University Data Collection (EUMIDA) surveys (European Commission 2010). Nevertheless, the claims and methods of the so-called new public management did find a favourable echo in France among some politicians and members of the administrative apparatus. In the meantime, the Shanghai rankings came as a shock to the system, and still fuel intense discussion about the low ranking of French universities in the top 50 and top 100 league tables (AEF 2013b, ‘Dépêche no. 186447’). A considerable shift in public policy on the higher-education system was, therefore, made under Nicolas Sarkozy’s presidency (2007–2012). The most conspicuous and explicitly stated goal of this change was to create ten high-performing higher-education and research institutions. These were meant to better represent France in international competition in research and education, as well as to boost academic standards. The latest law on higher education and research (‘loi ESR’, as it is commonly called in France) brought in by the current government did not renounce this objective, nor did it go against the major changes brought in by the 2008 law (loi LRU)—to the disappointment of many left-wing supporters in academia who were pushing for a return to the status quo ante.

Following the changes brought about by this new policy, the need for better-organized and more thorough research evaluation became acute in three key sectors.

2.1 Human Resources

Under the loi LRU, the universities were allotted new duties and competencies regarding the management of their staff. The novelty is that the institutions are now not only allowed, but also invited, to define human resources strategies and policies covering the three major issues of recruitment, promotion and continuous training. Even if this newly acquired freedom is far from complete—as shown by the autonomy dashboard of universities in Europe, in which France scores low (Estermann et al. 2011)—it opened a whole series of possibilities, which in turn prompted a new series of questions to be solved.

Under the previous legal framework, recruitment of research and teaching staff was performed by ‘commissions des spécialistes’ (recruitment panels). Elected for four years, these panels recruited academic staff, sometimes without any assessment of applications by genuine specialists in the field of the position. Now, institutions must put together profile-oriented committees whenever the need arises. These new committees must also justify their ranking of candidates. Thus, both aspects of the hiring process (selection of specialists and of candidates) now require reflection on quality criteria, even if the rationale is, in most cases, quite flimsy or biased by hidden assumptions.Footnote 2 The change towards position-specific recruitment panels was also designed to address the issue of endo-recruitment, an issue closely followed by the Ministry of Education, which actively seeks to limit this practice. Panels now include a significant number of members from outside the recruiting university, whose external point of view is supposed to prevent favouritism and to ensure the homogeneity of standards throughout the French Higher Education (HE) system. By making the selection process less opaque, the loi LRU has opened new vistas for research evaluation in France.

The loi LRU brought changes not only in recruitment, but also in promotion practices. The possibility of promoting staff members is not a new issue for French Higher Education Institutions (HEI),Footnote 3 but the novelty is that institutions must now publish their criteria for any decision. Such a requirement was nonexistent prior to the accession to ‘responsabilités et compétences élargies’ (widened responsibilities and competencies) granted by the loi LRU of 2008. Thus, this can be seen as a first step toward a more thorough evaluation of individual careers at the national level, even if numerous voices are to be heard opposing any form of individual evaluation of researchers (CP-CNU 2012; Sauvons l’université 2012). Certain sections of the Conseil National des Universités (CNU), the body that oversees recruitment and promotion procedures,Footnote 4 proved, in such a context, more sensitive to the weaknesses in the methodology applied for assessing files (Garçon 2012) and opened internal discussions about criteria. The thorny question of individual evaluation has recently come up again, even if those conducting a pilot study on individual evaluation are very careful to avoid pronouncing the word ‘evaluation’, and talk only about a ‘suivi de carrière [monitoring of careers]’ (AEF 2013a, Dépêche no. 187254). ‘Suivi de carrière’ is also the term used by the most recent decree on EC (Décret 2014-997, published on 2 September 2014; see Article 21).Footnote 5

2.2 Funding

Following the 2008 law, the Ministry of Higher Education started to implement a dual financing scheme. Eighty percent of state funding to universities—excluding salaries—is allocated on an ‘activity basis’, calculated by adding a ‘teaching allocation’ to a ‘research allocation’. These are obtained by multiplying the number of students and of tenured academic staff by fixed sums, defined for broad sectors of activity: life sciences, hard sciences and the SSH. The other 20 % rewards efficiency in research and education relative to the rest of the system. However, not all academic staff count in the calculation of the research allocation, whether as activity or as performance; only the ‘EC produisants’, which roughly translates as active researchers, are taken into account. Thus, the assessment of research activity became of paramount importance following the implementation of this scheme, all the more so as an increase in the number of ‘EC produisants’ translates more easily into financial gains than any increase in the number of graduating students.Footnote 6 At the same time, universities received pressing invitations to increase their ‘ressources propres’ (own funding), especially by tapping into competitive research funding. This reinforced the need, for institutional leadership teams, to identify the most active and innovative researchers as well as the less productive areas, either for allocating seed money and administrative support or for designing incentives.
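To make the arithmetic of this scheme concrete, the sketch below reproduces its structure in Python. The per-head sums, the sector names and the way performance scales the 20 % share are hypothetical placeholders of our own, not the Ministry’s actual figures; only the overall logic—teaching and research allocations computed from headcounts and fixed per-sector sums, an 80/20 split between activity and performance, and the restriction of the research count to ‘EC produisants’—follows the description above.

```python
# Hypothetical illustration of the dual financing scheme described above.
# The per-head sums and the performance factor are invented placeholders.

SECTOR_RATES = {  # fixed sums per head, by broad sector of activity (invented values)
    "life_sciences": {"per_student": 120, "per_produisant": 900},
    "hard_sciences": {"per_student": 110, "per_produisant": 850},
    "ssh":           {"per_student":  70, "per_produisant": 500},
}

def activity_allocation(sector: str, students: int, produisants: int) -> float:
    """'Teaching allocation' + 'research allocation' on an activity basis."""
    rates = SECTOR_RATES[sector]
    teaching = students * rates["per_student"]
    research = produisants * rates["per_produisant"]  # only 'EC produisants' count
    return teaching + research

def total_allocation(activity: float, performance_factor: float) -> float:
    """80 % allocated on the activity basis, 20 % rewarding relative efficiency."""
    return 0.8 * activity + 0.2 * activity * performance_factor

act = activity_allocation("ssh", students=12_000, produisants=300)
print(f"activity basis: {act:,.0f}")
print(f"total (performance factor 1.1): {total_allocation(act, 1.1):,.0f}")
```

The example makes visible why, under such a scheme, each additional ‘produisant’ weighs far more heavily in the allocation than an additional student.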

2.3 The National Grant System

The creation of the Agence National de la Recherche (ANR) in 2005 radically modified the research units’ access to funds and introduced a new actor to the evaluation sphere. For decades, in spite of an increasing concentration of researchers in the universities, 23.5 % of the budget for civil research was directed towards the Centre National de la Recherche Scientifique (CNRSFootnote 7), while universities received less than 5.82 % (Giacobino 2005).

With the new funding scheme, discussed previously, and the allocation of substantial funding possibilities on a project basis through ANR programs, this unbalanced situation changed significantly. In terms of evaluation, mixed teamsFootnote 8 (UMR) were no longer automatically recognized as top performers in research, even if, in practice, UMR benefitted from historical prestige when evaluated; at the same time, topics and teams not aligned to the CNRS priorities gained visibility and funding. New forms of evaluation were put into practice, closer to the peer review system used in highly reputable academic journals.

The biggest consequence of the new project-based funding procedure in the ANR grant system is the considerable change in outlook brought about by the shift from a system in which teams had to work with the more or less generous amount allocated on a quadrennial basis, to one in which supplementary resources could be obtained through competitively funded projects. Unfortunately, this revolution affects the SSH only in a limited way, partly because of the long-lived reflexes of managing penury, partly because the available funds are much more limited than the investments in other scientific domains or in technological development. ANR priorities clearly favour scientific domains that are considered to contribute more to industrial leadership and to responding to societal concerns. The situation is much the same at the regional level, where science policy priorities tend to mimic those established at the national level, which copy, in turn, the European ones, as shown by a recent ministerial speech and by the subsequent policy document, significantly entitled ‘France-Europe 2020’.

Consequently, a new need for evaluation has arisen, in particular from SSH researchers themselves. The chronic underfunding of the SSH, and, more specifically, of the humanities, can be linked to an insufficient understanding and assessment of their impact outside academia. Impact does figure among the criteria taken into account by AERESFootnote 9 and by the ANR, both for the evaluation of research units and for that of projects. However, the ANR has no published guidelines for assessing impact, while those of AERES start from a very restricted understanding of the phenomenon. Impact tends to be considered exclusively in the form of patents or spin-offs, two types of result notoriously difficult to obtain when researching SSH topics. In this way, the major contribution of SSH research to the cultural industry is entirely neglected, while the role of SSH research in society is reduced to popularization talks during specific events (‘Fête de la science’ is explicitly mentioned), or to contributions to European laws and regulations. The list of impact types published by AERES is not a closed one, but its contours clearly manifest a lack of thorough examination of the matter. The time is, however, not far off when the question of impact will be in the spotlight, as shown by a recent report released by the ‘Cour des comptes’, the higher administrative court that oversees spending by public bodies and major French NGOs. The report pointed out the considerable budgetary effort made for research since 2005 and questioned whether the nation is getting a sufficient return on its money.

Whether for allocating funds, designing research strategies, supporting teams in their development, or demonstrating value for money, a more objective approach to research evaluation has become a major necessity in France over the last decade.

3 Current Practices and Levels of Evaluation

Unfortunately, in spite of the law and the need for modernized evaluation procedures, many institutions involved in research evaluation remain very vague about their criteria, in general, and about research excellence, in particular. At the same time, the process through which a percentage of the staff of an institution and/or individual persons are labelled as ‘produisant’ has been constantly questioned but still remains opaque. Finally, a great deal of confusion reigns about the peer review process.

The CNU has been repeatedly criticized over the years for its opacity as well as for the weakness of its methodology (Garçon 2012). Because of the large number of applications to be assessed during the qualification or promotion processes, the review in many sections cannot exceed 10 minutes per candidate. Furthermore, the relative weight given to the different elements of a CV varies widely from one section to another, and from one evaluator to another. It should be noted that the way in which CNU members are selected does not require any competency in, or knowledge of, research evaluation, and is indifferent to the scientific merit of the candidates. At the same time, the CNU has no links with entities studying research evaluation, whether these be research laboratories or ministry-related agencies.

The AERES agency, created in 2007 to evaluate French Higher Education and Research Institutions at four levels,Footnote 10 never managed to fully implement individual evaluation of EC, in spite of the importance of this level in the process of evaluating teams and institutions. The notion of ‘EC produisant’ does not appear in the official document presenting the evaluation principles for a research unit (see AERES 2012a), but it does exist in a separate document, which states that ‘[l]’un des indicateurs est une estimation de la proportion des chercheurs et enseignants-chercheurs “produisant en recherche et valorisation”’ [one of the indicators [of the quality and influence of the research unit] is the estimation of the percentage of researchers and EC active in research and development] (AERES 2012b, p. 1).Footnote 11 Depending on his or her status, two to four ‘first-class publications’ (‘productions de rang A’) per four-year period are supposed to earn a researcher the ‘produisant’ label; patents, databases and other similar products are accepted as equivalents. The problem is that there is no clear rationale for the number of publications required (why not one or six, for instance?), while the rigid classification of outputs is inappropriate in many disciplinary fields (see infra).
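The rule, as described, reduces to a simple threshold test; the sketch below spells it out, precisely to underline how arbitrary the cut-off is. The mapping from status to threshold is an assumption made for illustration—the AERES texts only state ‘two to four’ outputs per four-year period, depending on status—and the names used are ours.

```python
# Sketch of the 'produisant' threshold rule described above. The mapping from
# status to required first-class outputs per four-year period is an assumed
# illustration; only the two-to-four range comes from the AERES documents.

REQUIRED_RANG_A = {            # assumed thresholds per status
    "maitre_de_conferences": 2,
    "professeur": 4,
}

def is_produisant(status: str, rang_a_outputs: int, equivalents: int = 0) -> bool:
    """True if first-class outputs plus accepted equivalents (patents,
    databases and similar products) reach the status-dependent threshold
    over the four-year window."""
    return (rang_a_outputs + equivalents) >= REQUIRED_RANG_A[status]

print(is_produisant("maitre_de_conferences", rang_a_outputs=1, equivalents=1))  # True
print(is_produisant("professeur", rang_a_outputs=3))                            # False
```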

Moreover, the thorough characterization of journals and books, recommended initially by AERES to define the channels of first-class publications, proved highly complicated. Even a quick glance at the resulting lists, displayed on the AERES site, reveals tremendous problems. On the one hand, these lists have evolved, following major criticisms from the academic community, from graded league tables (A, B and C, or international, national and limited reputation) to a collection of titles whose very inclusivenessFootnote 12 is at odds with the ‘first-class publications’ claim. On the other hand, such lists do not exist for many SSH domains, French language and literature research being perhaps the most striking example. What constitutes a ‘first-class publication’ therefore depends, in many domains, on the expert’s opinion. This opinion is formed without any reading of the publications themselves—as none were submitted during the assessment process, whether at the individual or the institutional level. To give but one example, the AERES guidelines claim that only collected works presenting a unified critical apparatus and a scientific deepening of the understanding of an original subject can be considered ‘first-class publications’. Unfortunately, how the experts are supposed to verify these requirements on the basis of the simple inclusion of a title (with its references) in the activity report generated by the research unit remains unexplained.

Conscious of these methodological problems, many AERES visiting committees do not release lists of ‘produisants’; nevertheless, the Ministry for Higher Education and Research, through its directorate for higher education, DGES-IP,Footnote 13 still applies very precise numbers per domain when it allocates funds to the universities—a somewhat magical operation if individual evaluation does not yet exist. Universities can propose corrections to these figures by signalling forgotten names. Thus, to a certain extent, higher-education institutions operate as experts in evaluation, conducting their own analysis by applying, or not applying, AERES-based criteria to evaluate their academic staff.

4 DisValHum and IMPRESHS Projects

However unclear the future of institutional research evaluation in France may be,Footnote 14 too many questions arise in the day-to-day life of researchers and institutions, and require clear answers, for the problem to be ignored. Such questions include who is ‘produisant’ and who is not, and what is to be considered performance in research and what is not. Whether in France or throughout Europe, the need for clear responses to key evaluation questions is reflected by the growing popularity of Snowball MetricsFootnote 15 in the UK, with its emphasis on informed decision-making. It is therefore significant that some major French research universities are also looking closely at this methodology in order to carry out foresight analysis. However, such indicators cannot work without critical research into dissemination practices, and this is particularly true in France. The evolution of the French higher-education system in recent years, together with external and internal pressures, has opened the field for initiatives like the DisValHum and IMPRESHS projects.

The starting point for the DisValHum and IMPRESHS projects is the realization that many of the problems observed in research evaluation in France stem from an insufficient—and, in certain cases, nonexistent—observation of the domain to be assessed and a lack of engagement with the stakeholders, principally the researchers themselves. The situation is even more acute for the SSH, where preliminary analyses rarely go further than a few platitudes (‘SSH publish more books than articles’, ‘SSH journals are not included in international databases’, ‘workshops and conferences are important in the SSH’), clumsily taken into account in the various evaluation activities. Both projects seek to contribute to filling this gap. Their intended benefits concern both SSH research, which suffers from a deficit of evaluation, and policymakers, by supporting ambitious research policies at the national or institutional level. In general, and despite declarations to the contrary, French evaluation tends to be of a summative type, and is used primarily to allocate funds. Thus, to be effective, it requires a high degree of transparency, and hence faces the challenge of obtaining support from the academic community (Guthrie et al. 2013). Both transparency and support can only be obtained by improving current methodologies, and by listening to researchers at the ground level, who often understand neither the means nor the need for an evaluation process, and generally find the process ill-adapted to their everyday existence.

Our specific aim is to provide the various performers of evaluation (experts of the national agencies, panels in the universities or research funding institutions, etc.) with objective information about dissemination practices in the SSH, as well as insight into how SSH scholars perceive this dissemination process. We also intend to contribute to the international effort of solving the numerous conundrums implicit in the research assessment of the SSH. These include issues such as the recognition of the specificities of the field, a position that can be seen as somewhat at odds with the claim that the SSH must be treated as an integral part of the scientific effort as a whole.

Both projects are supported by the Human Sciences Institute in Brittany (Maison des Sciences de l’Homme en Bretagne), and must be seen as two sides of the same research effort. For administrative reasons, the two projects were submitted for assessment under two separate calls, hence the different acronyms. They concentrate on the dissemination of research results produced by SSH academics from the four Breton universities. Of the four, two, Brest and Bretagne-Sud, are multidisciplinary institutions. Of the two in Rennes, Rennes 1 is predominantly science based, but with a law and economics school, and Rennes 2 covers exclusively arts, humanities and social sciences. The four belong to a cluster known as the Université Européenne de Bretagne and share common doctoral schools and joint research groups. Each university retains a degree of specialization in each of the fields studied.Footnote 16 For this study, we look only at the output of researchers from the three larger institutions in Brest and Rennes. The initial results described in this paper refer to a language and literature research group in Brest, a history research group in Rennes 2 and two research teams in the law research group in Rennes 1. The latter was selected because it is a large research group with very different research themes; we shall be looking at the output of legal historians and specialists in civil law.

Our aims are:

First: to analyse the forms of dissemination, starting from what researchers do (as reflected in their CVs), and not from various preconceptions based, in most cases, on practices in other fields or on the personal experience of the category designer. The idea is to avoid Procrustean solutions like those imposed by official reporting, which asks all academics, irrespective of their field, to classify their production in fixed categories. Such categories are not necessarily clear: there is, for example, no precise definition of what constitutes an international or a national conference. They are also incomplete. Among the most visible gaps are the lack of a category for critical editions or translations, frequent in the SSH, and the absence of categories such as databases or websites for scientific information. Reporting on forms of engagement with the wider public is also not taken into account, which is somewhat surprising given that this type of impact is supposed to be evaluated. Categories can also be redundant, in that an invited conference paper can also be declared as an article in proceedings, or disparate, when participation in PhD evaluation panels appears alongside the authoring of books, without distinction as to the different nature of the exercise.

Second: to observe productivity curves and averages. As shown previously, an EC is considered to be ‘produisant’ if he or she has generated two pieces of work over a period of four years, but the reason for establishing such a threshold is not made explicit. At the same time, one of the most frequent criticisms of this requirement from French researchers is that a single-authored monograph should not be accorded the same weight as an article of a few pages in a journal, even if it is a highly reputed international publication.

Another aim in analysing productivity curves is to help render more objective the value judgments conveyed in terms such as ‘average researcher’ or ‘impressively productive’. CNU reports on individual applications frequently resort to such qualifications, whilst there is no clear definition of the benchmarks taken into account.

Third: to analyse collaborative research practices, as reflected by the disseminated products. The objective is principally to study frequency and forms of co-authorship in the SSH disciplines. We are particularly interested in the identification of trans-disciplinary and international cooperation of Breton researchers.

Fourth: to observe channels of dissemination, mainly publishing houses and types of journals favoured by SSH scholars in Brittany, but also channels for oral dissemination. The channels will be further characterized by using objective descriptors, such as presence in international databases or not (for journals), and international distribution or not (for publishing houses), etc. Once again, the aim is to start from the bottom and not from top-down defined lists.

Fifth: to understand the reasons motivating the choice of these channels, as well as of the publication formats adopted. On the one hand, we try to understand whether maximising scientific impact is a preoccupation of Breton researchers when they publish; on the other hand, we seek to track their ideas about how and why they interact with the wider public.

To fulfil these aims, our first concern has been to build a database of research products. A preliminary study was conducted on a small number of CVs published online by researchers in French literature, linguistics, history and law, since these are the domains covered as a priority by the projects. The study was meant to identify the types of research products created by SSH researchers, whether written or not. This pilot study was complemented by a study of the categories used by various information systems, such as CRISTIN in Norway, VABB-SSH in Flanders (Belgium), and RIN in the United Kingdom. These categories were then tested on a larger scale with the help of students from the Master of Digital Humanities at the Université de Bretagne-Sud. These students gathered as many CVs of Breton researchers as possible in the domains considered, helped refine the categories and the structure of the database, and provided the first statistical calculations. For all these reasons, the number of categories finally selected is much larger than that found in any of the CVs considered; the differences have proved interesting in themselves, as both the focus groups and the interviews have demonstrated that the non-inclusion of an item in a CV does not necessarily mean that such a product is absent from the activity of the researcher in question. Its absence is often merely a form of self-censorship, sometimes related to the perceived expectations of the external evaluation bodies.Footnote 17 In such situations, top-down criteria imposed without a preliminary study of the ground clearly result in a loss of information and, moreover, of potential arguments for demonstrating the social impact of the SSH.

The database, which is currently under development, is organized into four main sections: books, articles (whether in journals or in collected works), other written material and non-written material. A comparative list in the appendix of this article shows the types of products it covers, compared to those taken into account by the UK RIN analyses. Authors are characterized by their affiliation (institution and research unit) and by domain (CNU section); a CNU section is conventionally attributed to foreign researchers who cooperate with Breton academics. This has the disadvantage that CNU sections are extremely broad, but means that precision can be achieved a posteriori through a study of dissemination types and focus group output, rather than by imposing further subdivision. The characterization of co-authorship allows for social network analysis, which will be compared with a similar analysis conducted on the institutional contacts of research units. Moreover, geographical information is available (city and country of authors, and country of publication), making it possible to produce map visualizations of research contacts.
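As an indication of what such a structure might look like, here is a minimal sketch in Python of the organisation just described. The class and field names are our own assumptions, not the project’s actual (unpublished) schema; only the four sections, the author attributes (affiliation, CNU section, geography) and the possibility of deriving a co-authorship network come from the text, and the records in the usage example are invented.

```python
# Minimal sketch of the database organisation described above; names are
# assumptions, not the project's actual schema.
from dataclasses import dataclass, field
from enum import Enum
from itertools import combinations

class Section(Enum):
    BOOK = "book"
    ARTICLE = "article"              # in journals or in collected works
    OTHER_WRITTEN = "other_written"
    NON_WRITTEN = "non_written"

@dataclass
class Author:
    name: str
    institution: str
    research_unit: str
    cnu_section: str                 # conventionally attributed to foreign co-authors
    city: str = ""
    country: str = ""

@dataclass
class Output:
    title: str
    year: int
    section: Section
    channel: str                     # broad channel class (publisher, journal, venue...)
    output_type: str                 # broad type class within the section
    country_of_publication: str = ""
    authors: list = field(default_factory=list)

def coauthorship_edges(outputs):
    """Edge list (pairs of author names) for social network analysis."""
    edges = []
    for out in outputs:
        names = sorted(a.name for a in out.authors)
        edges.extend(combinations(names, 2))
    return edges

# Illustrative usage with invented records:
a1 = Author("Author A", "Rennes 2", "history unit", "CNU 22")
a2 = Author("Author B", "UBO Brest", "literature unit", "CNU 9")
book = Output("Sample title", 2013, Section.BOOK,
              "Presses Universitaires de Rennes", "monograph", "France", [a1, a2])
print(coauthorship_edges([book]))    # [('Author A', 'Author B')]
```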

The basic information as to who, what, where and when is entered in the database. In each section, broad classes of channel and type are used. These remain sufficiently broad to handle all the data included in an individual CV. Only when the database has reached a reasonable size will work start on trying to classify the input in more detail. This is particularly the case with the ‘other’ section, which contains a rich variety of outputs that probably have a wider social impact than those in a standard CV. As the aim is to get an overall picture of different research groups and different disciplines, we are not concerned with individual researchers, but will look at individual cases when necessary.

The highly time-consuming operation of building a database was necessary because information about the SSH production of the researchers within our scope is incomplete, unusable or inaccessible. The institution in charge of producing indicators for research and innovation in France, the Observatoire des Sciences et des Techniques, covers SSH production only on an exceptional basis (Filiatreau 2010) and, in doing so, relies on the Thomson-Reuters database. While this choice is justified by the benchmarking purpose of the report, it is clearly inadequate for answering the practical questions listed previously.

As a responsible scientific organization, the CNRS is fully aware of the need for quality checks. Consequently, it has put in place its own internal survey, called RIBAC (Dassa and Sidéra 2011). Unfortunately, this information system concerns only the CNRS, and despite talk of imposing it on universities, it is more than probable that the current government will abandon the idea. This is not altogether a bad thing, as it is far from certain that the RIBAC categories are adapted to the EC. The typology of research products also tends to be very restrictive. A full comparison with other databases has not yet been possible, as the CNRS has not made the structure publicly available. It is clear, however, that non-written material, as well as research reports of all types and forms, is underestimated, which handicaps impact studies such as the one envisaged here.

A national database of research output, HAL (Hyper articles en ligne),Footnote 18 which collects research outcomes from French researchers, has existed since 2006 (‘HAL: Accueil’, 2013) as an open repository. HAL SHS, a specific site for the SSH managed by the CNRS, is used by researchers wishing to put their data online. Deposit is not compulsory and, given the extreme lack of user-friendliness, many researchers do not submit; its coverage is therefore only partial. Data can be exported in csv format, but an attempt to populate our database showed that a great deal of the information we needed was missing; this, coupled with the non-compulsory nature of the repository, means that such an operation is not feasible in the immediate future. The imposed categorization also introduces a further difficulty, as researchers either leave out aspects of their work or misinterpret the categories. Technological changes, as well as the policies of major research groups, are rapidly rendering the HAL database redundant.

Lastly, research group activity reports, drawn up for the quadrennial evaluation performed by AERES, have proved unsatisfactory as sources for evaluation research. Not only do many laboratories not publish these reports, but when the reports do exist, the laboratories list only the productions of the previous four years. Within each report, bibliographical references are far from unified, making it impossible to transfer the information automatically into our database.

In parallel with the building of the database, which is still in the long phase of manual data entry, a series of group interviews with SSH scholars from various research units in Brittany is being conducted. Appendix 2 lists the questions asked. Recorded interviews are supplemented with notes taken in parallel, which are also transcribed and coded using Atlas.ti.Footnote 19 These interviews are intended to help refine the types of products included in the database and, above all, to retrieve the ‘natural’ hierarchies made between forms and channels of dissemination, to understand whom Breton scholars have in mind when they disseminate their research (the ‘ideal reader’) and to identify their partners outside academia. A further aim is to build a typology of publishing outlets and to discover what their purpose may be from the scholar’s point of view.

5 Initial Outcomes

Following initial focus groups and observations of the database, one thing is very clear: there is an enormous mismatch between what goes into CVs, what is accepted by AERES and how researchers see the dissemination of their research. The interpretations of the AERES classification codes vary widely, between those researchers who put in all their activities, no matter how trivial, and those who leave out activities such as speaking to the general public—considering that the CV deals only with ‘research’. This is summed up neatly by an English language specialist who asked whether pedagogical dissemination (course material) could be treated as research dissemination: ‘Est-ce que la dissémination pédagogique compte, est-ce que les cours comptent?’ [Does pedagogical dissemination count, does teaching courses count?] This is a delicate question to ask in that many SSH scholars write material for the French competitive exams governing entry into the secondary school system as teachers. This is output, but not necessarily considered research, as it is, essentially, a compilation of material to be absorbed by candidates. Textbooks in law do, however, carry a certain prestige.

Preliminary conclusions show that impact concerns vary greatly among SSH scholars. Representatives of the socioeconomic and psychology disciplines are more attentive to selecting publication channels and forms according to a career plan, or have a genuine expectation of attracting the attention of the best international partners in their disciplines; they are also very attentive to the requirements of AERES. Scholars in literature and languages, however, generally lack a clear dissemination policy. This observation is also supported by the fact that the latter clearly find it difficult to define what can be considered an international publishing house or an internationally reputed journal. Two English-language specialists were very clear about the necessity of publishing in English, while recognizing a certain confusion about the value of certain publishing houses. As one said:

une tendance chez les anglicistes français de publier chez Cambridge Scholars Publishing, la nouvelle maison d’édition à Newcastle, donc on voit bien qu’il y a pas mal de colloques anglicistes qui sont publiés là bas, et autres d’ailleurs, j’ai publié deux là bas donc je trouvais ça très bien, et dernièrement j’ai appris que des chercheurs anglais, eux, considèrent que c’était leur Harmattan, c’est leur Harmattan.

[A tendency among French English researchers is to publish with Cambridge Scholars Publishing, the new publishing house in Newcastle, so we see clearly that quite a few conference (proceedings) of English specialists are published there, and others elsewhere, I published two there, so I found it quite good, but lately I learnt from English researchers that they consider it their Harmattan, it is their Harmattan.]Footnote 20

The interesting fact is that the researcher in question has published books only in the two outlets, but is now doubting whether this is a good thing or not. Whereas in evaluations, the status of publishers is not currently a discriminatory factor, the scholars are clearly sceptical about the pay-to-publish sector.

There was also a tendency to see the English-speaking journals as having higher standards and better review practices, with one scholar very impressed by the facilities offered when asked to review for a major American journal. This researcher insisted on journals being demanding and using the double-blind review, something found in few journals in France in English studies. Her colleague, however, insisted that more local journals should not be written off as ‘un cahier local n’est pas forcément de mauvaise qualité, de qualité inférieure, alors qu’on peut avoir des articles de qualité excellente dans une revue locale.’ [A local journal is not necessarily bad quality, inferior quality, you can have very good articles in a local journal.] He also pointed out that such journals more readily publish the work of junior researchers, allowing them to get recognition.

Best practices are mainly identified, in the humanities group, as being those recommended by the ministry, not so much because these are genuinely considered more effective in developing research as because ‘it is what is expected’ (interviews with historians and with language specialists). The influence of evaluation, however, is present in the socioeconomic and psychology group too. One economics researcher, who professed to having no clear dissemination strategy, found herself classed as non-produisant because of the restrictive list imposed in her field.

Another problem identified by the focus groups as weighing on research and dissemination practices in the SSH is the set of themes a research group in the humanities imposes on itself to meet national evaluation requirements. These themes last only for the four years of a contract, and create a straitjacket for any researcher whose work is thematically or discipline based. This thematic issue is a particularity of certain humanities groups and is imposed to provide a semblance of homogeneity where heterogeneity dominates. Research groups in languages often bring together researchers from different languages and different periods of interest. They are also broadly divided into researchers in literature, cultural studies and linguistics, the third category being largely grammar, since linguists proper belong to a different CNU section and mostly to different research groups. Thus, whereas a scientific research group may be specialized in, for instance, polymers, a language group will give itself a theme, such as ‘great men’, that is supposed to be a focus point for the four-year contract with the state. This, obviously, requires a fair amount of non-productive acrobatics from senior researchers who have carefully developed a particular area of expertise. As one researcher said:

la place des SHS est telle qu’on est la 5ème roue de la carrosse donc on nous demande de nous agréger à des champs de recherche et des thèmes de recherche qu’on a pas choisis, à [name of university] c’est ça, si on veut être un peu visible, et c’est un problème de [name of research group] par rapport aux autres labos, même si c’est un peu pareil, si on veut être visible, il faut, localement, qu’on réponde à des appels qui ne sont pas naturellement dans notre champ. Donc, ce qu’on fait quelquefois avec des déceptions parce qu’il n’y a pas de publication par derrière parce que justement c’est trop large...

[The position of the SSH is such that we are the fifth wheel, so we are asked to attach ourselves to research fields and research themes that we have not chosen; at [name of university] that is how it is, if we want a minimum of visibility, and it is a problem for [name of research group] in relation to the other laboratories, even if it is a bit the same for them. If we want to be visible, we must, locally, answer calls which are not naturally in our field. So that is what we do, sometimes with disappointment because no publications follow, precisely because the theme is too broad...]

Another interesting observation can be made about the contrast between practices and perceptions of engagement with non-academic partners. The discourses present this activity as a one-way process, in which the Researcher transmits Knowledge to a passive Receiver; the idea of a possible influence of stakeholders on one’s own research triggered vivid reactions in some cases. Yet examples cited during the discussions showed that stakeholders outside academia are, at least in certain cases, valuable collaborators as much as passive receivers. We are trying to identify these partners precisely in order to conduct cross-interviews of the kind recommended by the ERiC method.

In quantitative terms, the picture of SSH publication emerging from the database is, for the moment, as shown in Table 1.

Table 1 Output types across four disciplines in percentages

The dominance of books and book chapters is clear in history and literary studies, but these figures must be treated with care. Published chapters may in fact be published proceedings, something that is rarely declared in English studies, but is always noted in law and history. The AERES classification lumps together books and book chapters, and groups papers in proceedings with either national or international conferences. It is possible that the book section is considered more prestigious by English specialists, hence the preference for declaring a chapter rather than a proceedings article. The absence of certain items may simply show that these disciplines do not deem such outputs worthy of mention in a CV. The very high percentage of journal publications in civil law also requires caution, because many of these may be short legal commentaries. While we are attempting to track the length of papers, not all CVs give full references. Obviously, miscellaneous publications and books will require close attention. However, what these statistics do show is that simplistic evaluations based on declared data do not give a genuine picture of the complex dissemination patterns across disciplines.

Some factors are becoming clear. Each discipline has its own publication patterns and its own channels, with no similarity even between legal history and history. To date, there is little sign of interdisciplinarity or internationalization. The rule is single authorship for papers and books, except for proceedings and collected works, which tend to be co-edited. The exception to this rule was a specific case in law, relating to scientific and medical fields, but the co-author was another lawyer and not someone from outside the discipline. Most publications are in French, and in France, although there are also major legal publications in francophone Belgium.

The regional university press, the Presses Universitaires de Rennes, is the main publisher for books in history and, to an extent, in literary studies. This publisher has built a strong reputation in regional history and is an obvious publisher for collected works and proceedings. Civil law tends to have its own highly specialized publishers.

As research groups can be fairly homogeneous, it is interesting to look at the ‘anomalies’. To date, three examples stand out: a researcher in languages publishing in high-impact journals in a research group that tends to remain at the local or national level; a researcher in history whose subject area, piracy, has strong popular appeal and who therefore gives numerous radio broadcasts; and a researcher whose particular interest in one legal field links him to a particular form of local court. Other broad cross-disciplinary tendencies are also beginning to appear, as language researchers closer to the visual arts, notably those studying cinematographic productions, have dissemination patterns different from those more concerned with producing scholarly editions. As one researcher said:

je suis un peu partagé en fait puisque je fais de l’édition de textes, l’édition de textes se prête assez mal à la communication; l’édition de textes a plutôt tendance à la publication directe.

[I am of two minds about this in fact as I have worked on critical editions. Critical edition work is not adapted to popularization; critical editions tend more toward direct publication.]

6 Conclusion

The loi LRU caused a sea change in French research by bringing in internationally certified evaluation procedures. The modification of that law by the loi ESR watered these procedures down, at the demand of trade unions and a vocal section of the research community. As a result, evaluation procedures that might allow for informed decision-making and foresight activities are now far off. The situation has become more, rather than less, confused, leaving opaque recruitment and promotion practices in place and not really providing the tools for better-informed monitoring of research. Existing systems may work more or less well in some disciplines, where internationalization and, therefore, international benchmarking of research are strong, but this is not the case in the SSH.

Despite resistance in some quarters, greater attention to quality criteria is inevitable, as France remains a major player in international research in all fields, including the SSH. Current research is leading to better bibliometrics and a better understanding of research practices and dissemination. However, although a common terminology is developing, the interpretation of that terminology will inevitably remain anchored in national practices, needs and research traditions. Thus, any attempt at benchmarking must be based on an analysis of the situation in each large field and in each country. An overall picture is needed before indicators are imposed. This global picture is what IMPRESHS is setting out to achieve, starting from one region of France with the aim of launching a larger study of university research in the SSH across the country.

There are numerous threads to be followed before a clear picture of French SSH research can be obtained. What is already clear is a very complex situation dominated by national parameters. What this means in practice is that a neutral study based on bottom-up procedures will encourage greater understanding of output types and the motivations of researchers behind their choice of those output channels. Only then will it be possible to equate research outcomes with possible societal impact. SSH research covers a broad spectrum of activities, outcomes and impacts. Understanding this is the key to better quality research evaluation criteria and, therefore, better research. The wealth is in the variety; IMPRESHS aims to help bring about a better understanding of this variety.