
1 Introduction

The Romanian two-part film ‘Tales from the Golden Age’ (2009) portrays several legends from the ‘golden age’ of communism. One of these tales recounts the inspection of a village in preparation for a visit by state officials the next day. The whole village is nervous about the preparations and receives detailed instructions from two inspectors. In a comical sequence of events, the inspectors end up getting drunk with the villagers, and finally both inspectors and villagers find themselves stuck on a merry-go-round for the night. It is said that they were still spinning when the state officials passed by in the morning.

While folk tales should not be confused with reality, it is easy to draw a parallel between this dark comedy and current debates about ‘quality’ in Romanian universities. Concerns about quality are clearly connected to expectations of state officials (both domestic and foreign), while it is hard to untangle the inspectors, academics and other actors involved in the merry-go-round of evaluations. These evaluations were initiated between 2005 and 2011, a period in which Romania joined the European Union and played an important role in the Bologna Process. The broad heading of ‘evaluations’ comprises, inter alia, quality assurance reviews by the quality assurance agency ARACIS, a classification of universities and departments by the Ministry of Education, Science, Youth and Sports, university evaluations organised by the European University Association, as well as various other audits mandated by official legislation.

Many perceive the amount of regulation to be a problem (see other chapters in this volume), but so far little has been done to correct the situation (Geven et al. 2014; Păunescu et al. 2011, 2012; Sursock 2014). This paper aims to present some suggestions on how this merry-go-round can be stopped, hopefully to the benefit of higher education and scientific research in Romania. The guiding question of the paper is “How can evaluations in Romanian universities become more meaningful?” Answers will be provided by analysing a set of interviews with 310 university leaders, faculty members and students in five Romanian universities.

We present five recommendations to the government, the Ministry of Education, Science, Youth and Sports and the various agencies tasked with evaluation (primarily UEFISCDI, ARACIS, CNCS, CNFIS, and CNATDCU). We also present four recommendations to the universities and to interested faculty, students and other stakeholders. While the primary audience of the paper is Romanian, international organisations or individual observers of debates on quality assurance in Romania may also take an interest. The core of our message is that (1) evaluation procedures should be simplified, (2) students and professors should be given more control over the evaluations, and (3) agencies and university leaders should use a more open definition of what ‘quality’ means for different people.

2 Evaluation in Romanian Universities

For-profit education was booming in Romania all through the 1990s and early 2000s. Many were afraid that standards were dropping and thought that regulation was necessary. In 2004, the government appointed a new minister, Mircea Miclea, with plans to improve the quality of higher education. His main obstacle was the Romanian parliament: many parliamentarians had connections with low-quality universities, and some had much to lose from tougher quality controls.

In May 2005, the new minister went to the ministerial meeting of the Bologna Process in Bergen, Norway. At the summit, ministers adopted the ‘European Standards and Guidelines for Quality Assurance in the European Higher Education Area’. This came at a very convenient time for Minister Miclea, who now had an argument to establish higher standards in Romania. Upon return from the ministerial conference, he agreed with the government to pass an emergency ordinance to adopt these new standards in Romania. The parliament, caught off guard, could do little against an ‘emergency’ measure and was practically unable to amend the proposals coming from ‘Europe’.

The ordinance established a quality assurance agency, ARACIS, which was mandated to evaluate all degree programmes and universities in the country. The idea behind the new agency was that it would shut down low-quality universities and degree programmes, while helping to improve the better universities. While these aims were underlined as important in several external evaluations of ARACIS, many questions remained about whether this was being done effectively (Păunescu et al. 2011, 2012). Policy-makers therefore started making new proposals to improve the quality of teaching and research in Romanian universities.

Around late 2010, there was another chance to reform the system. In 2007, Romania had joined the European Union, and it was subsequently elected to host the secretariat of the Bologna Process. The European ministers were now pushing for new initiatives to improve the quality of higher education, mostly under the heading of ‘transparency instruments’. The government had just appointed an ambitious new minister for education, Daniel Funeriu, who promised to modernize the Romanian education system along European lines. The new minister sought to develop an integrated reform package, culminating in a new law on education, passed in January 2011.

Rather than replacing the quality assurance system, Minister Funeriu developed an additional set of policy instruments to modernize the universities. Each of these instruments had a new element of evaluation. Perhaps the most important was an instrument to classify universities into three categories: (A) research universities, (B) research and teaching universities, and (C) teaching universities. Whereas universities previously received comparable amounts of public funding, the classification exercise sought to differentiate funding streams to the three kinds of universities. Additionally, the new law introduced a number of alternative audits and evaluations, including a ranking of departments by scientific discipline and an evaluation of doctoral schools.

Table 1 breaks down the different policy instruments that have been introduced over the years, showing six categories of policies comprising ten different policy instruments that evaluate (aspects of) universities. Some of these instruments have not (yet) been implemented, since policy-makers could not agree upon the method of implementation. For instance, the methodology of the classification was never fully published, leading to a lengthy court battle over whether it was a valid tool to fund the universities.

Table 1 Different instruments of evaluation in Romanian universities

Many commentators have criticized the emerging ‘evaluation culture’ in universities, including in Romania (for an overview of this literature, see Geven and Maricuţ 2015). While critique is important, it raises the question of what should be done about the problem. We decided to analyse the suggestions made by people who interact with these policy instruments on a daily basis, namely university leaders, faculty and students. The next section lays out how we went about our research.

3 Methodological Considerations

Researchers can use many methods to evaluate policies, such as impact analyses, cost-benefit studies, policy process studies or implementation studies, each of these based on qualitative and/or quantitative methods (for an overview, see Moran et al. 2006). Since we are trying to answer a question of meaning in this paper, we have opted for an implementation study, based on interviews and documentary analysis. Indeed, we thought that the most straightforward way to answer questions about the ‘meaning’ of policies was to interview the people who deal with these policies on a daily basis. These people can tell us how policy instruments relate to their professional practice, and what the boundaries of the policies are. Thus, we followed the tradition of ‘interpretive policy research’, which treats the discourse of people and policy documents as sources of ‘data’ (cf. Schatz 2009; Schwartz-Shea and Yanow 2012).

There are two main implications of using ‘discourse’ as a source of information. The first implication is that conceptual boundaries that are clear in theory may not be so clear in practice. For instance, a document such as the ‘European Standards and Guidelines on Quality Assurance’ makes a distinction between ‘internal’ and ‘external’ quality assurance (Dill and Beerkens 2010). ‘Internal’ quality assurance refers to the evaluations initiated by people inside the universities; ‘external’ quality assurance, on the other hand, refers to evaluations undertaken by the government or other actors ‘external’ to the university. For our interviewees, however, both ‘internal’ and ‘external’ evaluations are seen as imposed by ‘others’ (Geven et al. 2014), thus rendering this conceptualisation inadequate to understand the experiences of our interviewees.

In order to avoid getting stuck in this conceptual swamp, we use one general term, ‘evaluation’, to denote the various assessments that take place in the universities. These include accreditation, quality assurance (both internal and external), research assessments, audits, and various other forms of assessing work in universities. In this sense, we follow the literature about the ‘audit culture’ in universities (Power 1997; Shore and Wright 1999). While this conceptual lumping may be confusing for those working in different fields of evaluation, we think that our approach stays close to how people inside universities think about all these forms of evaluations. Indeed, we were cautious about imposing our theoretical preconceptions onto our interviewees.

A second implication is that our results cannot be considered ‘objective’. All our recommendations have been developed from a qualitative interaction between the researchers and the interviewees. Interviewees may confuse certain policy instruments with each other, or talk about seemingly unrelated issues. They may be experts on the subject, or it may be the first time that they are thinking about evaluations. Perhaps this is the closest we can come to an overview of policy implementation, since these are the very people dealing with implementation. We carried out fieldwork in five Romanian universities, representing different institutional types and different geographical regions of Romania. The universities were selected because they are considered to be well-performing universities that take the evaluations seriously. In addition, we made sure to include four different regions (South West, Centre, North East, South) and at least one private university (Romanian American University). Table 2 gives a broad overview of these universities.

Table 2 An overview of the universities in which we carried out fieldwork

Field visits took place between December 2012 and June 2013, gathering the views of 310 interviewees in 186 conversations (some interviews had multiple participants). Interview participants were selected according to their professional roles as decision-makers (e.g. rectors, vice-rectors, deans), faculty (professors), administrators (e.g. secretaries), students, and QA personnel (see Fig. 1 for the distribution of interviewees). Interviews were carried out in English or Romanian, following the preference of the interviewee. Notes were taken in English and analysed using qualitative data analysis software. We developed a coding scheme to identify main themes and problems, as well as possible suggestions.

Fig. 1 Interviewees’ roles, broken down by university

In order to better understand what kind of policies we are talking about, we also carried out an analysis of policy documents (primarily legal texts, policy papers, and quality assurance and evaluation guidelines). These were coded along the same lines as the interview notes, allowing us to map the concerns of interviewees onto the specific policies and procedures. We will present the results of this analysis in the following sections.
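To make the coding step more concrete, the sketch below illustrates, in schematic form, how coded excerpts can be tallied by theme and by professional role. It is a minimal illustration only: the theme labels, role names and data structure are hypothetical, and the actual analysis was carried out with dedicated qualitative data analysis software rather than with a script like this.

```python
from collections import Counter

# Hypothetical coded excerpts: each fragment of the interview or document notes
# is tagged with the interviewee's role, their university, and one or more
# theme codes drawn from the coding scheme (all labels here are invented).
coded_excerpts = [
    {"role": "decision-maker", "university": "U1", "codes": ["bureaucracy", "frequent_change"]},
    {"role": "faculty", "university": "U1", "codes": ["lack_of_ownership", "gaming"]},
    {"role": "student", "university": "U3", "codes": ["informal_feedback"]},
    {"role": "faculty", "university": "U2", "codes": ["inconsistent_criteria", "bureaucracy"]},
]

# Tally how often each theme occurs, overall and per professional role,
# to see which problems dominate the material.
overall = Counter()
by_role = {}
for excerpt in coded_excerpts:
    overall.update(excerpt["codes"])
    by_role.setdefault(excerpt["role"], Counter()).update(excerpt["codes"])

print("Most frequent themes:", overall.most_common(3))
for role, counts in by_role.items():
    print(role, "->", counts.most_common(2))
```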

4 The Policy Problem

The interviewees recognize that there are good reasons why policy-makers have imposed evaluations on Romanian universities. To name just a few prominent issues: there are many universities that provide questionable education to students; teaching methods are often outdated; Romanian scientists publish few articles in international journals; plagiarism is still not combated effectively; and so on. While many interviewees recognize these problems, they question whether ‘evaluations’ are the right tool to address them.

Indeed, the interviewees consistently pointed out that evaluations fail to prompt substantial reflection on higher education and scientific research (see Geven et al. 2014 for an analysis of why this may be so). Three problems were most dominant: evaluations are perceived (1) as bureaucracy that is frequently changed; (2) as failing to create ownership; and (3) as being based on inconsistent evaluation criteria that lead to gaming and compliance behaviour.

Table 3 gives a summary of the problem in a so-called ‘policy-tree’ that breaks down the general problem into specific problems. Below, we will outline each of the abovementioned problems in more detail.

Table 3 An overview of the problem regarding evaluation in Romanian universities in a policy-tree, breaking down the general problem into more specific problems

4.1 Evaluations Are Perceived as Too Bureaucratic

The first problem with the national legislation is that it creates a bureaucratic workload. Evaluations require documents, meetings, specialised staff, and working structures that are seen to distract from academic work. This bureaucracy dominates current evaluation practices; professionals see such practices as being disconnected from their daily activities of teaching and research. This results in a sort of resignation and task avoidance, which is seen as a major reason why evaluations cannot be internalised (see also Păunescu et al. 2012).

This bureaucracy costs both time and money. Many of our interviewees, particularly those we classify as decision-makers (rectors, vice-rectors, deans, vice-deans, senate members), are faced with an enormous amount of paperwork each day. In the words of some of them:

[We need] to stop working twice for the same thing. Why do I need to have a faculty report and a QA report? Are they not the same thing? Why do we need two different reports and formats? (Decision-Maker, Professor, Male, 50503).

Time management needs to become better. We are wasting a lot of time on useless things (Decision-Maker, Lecturer, Male, 20705).

The QA process is characterized by huge quantities of bureaucratic requirements. We are lucky that the Vice-Dean for Quality Management takes care of these documents (Decision-Maker, Professor, Male, 50604).

In terms of financial costs, we estimated that, for a large university, the costs of undergoing quality assurance evaluations amount to around RON 1,160,900 (approx. €258,000) per year. This is a conservative estimate, since it only includes the direct costs of the external agencies and the costs of maintaining the quality assurance unit in the university. It does not include the costs of the involvement of faculty, university administrators, meeting rooms, European projects, evaluations by foreign experts, research evaluations or any other evaluations undertaken by the university. While some may argue that these are legitimate costs for improving the quality of education and scientific research, we have our doubts that they can be justified as such, since the resulting reports are often unrelated to improvements.
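As a rough check on the conversion, the euro figure follows from the RON estimate under an exchange rate of about 4.5 RON per euro, an approximation on our part for the period in question:

\[ \text{RON } 1{,}160{,}900 \;\div\; 4.5\ \text{RON/EUR} \;\approx\; \text{EUR } 258{,}000 \text{ per year.} \]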

To give another example of the bureaucracy, Table 4 presents an overview of the minimal evaluation activities that we found in each university. The table lists the various structures inside each individual university, faculty and department that we visited. The law requires universities to establish each of these structures and activities, and we could indeed find people involved in each of them, as well as trails of documents and reports produced by each of them.

Table 4 The various evaluation structures and practices in the Romanian universities in which fieldwork was carried out. All these structures are prescribed by national legislation and policy documents

One of the main reasons why many of these evaluations are seen as meaningless bureaucracy is that they are changed too often. One interviewee described the situation as follows:

Regulations are constantly changing and it is hard to follow up on them. Some of the regulations are not coherent. We are constantly on stand-by. This creates confusion and we cannot plan for the future. (Decision-Maker, Professor, Female, NS0302).

Since many of the governmental decrees mention some form of evaluation, the word ‘regulation’ has become a synonym for ‘evaluation’. Consider a few major legislative changes. The law on quality assurance has remained more or less in place since 2005. The 2011 law on education, the classification exercise and associated legislation related to the evaluation of research centres added several new layers of evaluation (see Table 1). In turn, the current government has amended these regulations several times. Because the regulations change so often, universities cannot develop a consistent strategy for evaluation (see also problem 3 below). This creates confusion (since it is difficult to keep up to date with the latest legislative modifications) and prevents them from engaging in long-term planning. Each of these changes has led to a build-up of frustration among many academics about evaluation procedures and their supposed remedies.

Another reason why these procedures are perceived as bureaucratic is that they overshadow more informal practices of improvement. Yet discussions at the coffee machine or a simple personal exchange between colleagues are often the most efficient ways to solve a problem. One of our interviewees put it this way:

The contact with people is most important. Collegial visits could help, but please do not try to quantify quality. (Associate Professor, Female, 20602).

When it comes to students, it may be much easier to hear their problems through informal channels. As one student told us:

Face-to-face conversations are better if something needs to be improved. Professors shouldn’t give up on this feature. (Student, Female, 30702).

Taken as a whole, we can perhaps say that these policy instruments try to achieve too many things at the same time: applying minimum standards to curricula, matching curricula to labour market needs, introducing pedagogic innovations, improving the management of the universities and faculties, and lifting Romania’s scientific production up to Western European standards. And as if this were not enough, they also intend to rid the universities of plagiarism and corruption. The combined effect is that these policy instruments achieve very few of their specific intended results; instead they crowd out informal initiatives to improve quality, as we aim to show in the next section.

4.2 Academics and Students Do Not Feel Ownership Over Evaluations

A problem closely related to the frustration over the evaluation procedures is that actors in the university feel little ownership over the criteria on which evaluations are based, or over how evaluations are carried out. Faculty members in the universities express it as follows:

The QA system was only created in response to the law and ARACIS requirements - there is no point to hide this fact (Decision-Maker, Associate Professor, Male, 11201).

We are forced by all these different institutions, ARACIS, EUA, to do such evaluations (Decision-Maker, Professor, Male, 10202).

The invocation of authorities like the ‘law’ and the ‘external agencies’ underlines that evaluation procedures do not exist because faculty and students think they are ‘ideal’. Evaluations are viewed as something imposed from the outside, through procedures meant to artificially create a ‘quality culture’.

While students participating in administrative structures typically felt slightly more involved than academic staff, not all students feel that they are being listened to, even if they are being heard. Indeed, in practice, many barriers exist that hinder students’ active participation in these evaluations.

There is not a lot of freedom of speech. The problem is mostly in our mind, but also we are not asked to speak our mind, not allowed to say what we really think (Student, Postgraduate, Female, 10603).

A big problem is the laziness of the students. About 50 % of the students do not even read their e-mails. Students are also not very involved in the university (Student, Postgraduate, Male, 10802).

While it is hard to give any ‘objective’ measure of this lack of ownership, the end result does produce some unintended consequences. Since the evaluation process is not seen as legitimate, people display strategic behaviour towards the evaluations. This problem is often referred to in the literature as ‘gaming’ the system (cf. Hood 2006). Such behaviour seems to range from trying to avoid the consequences of evaluations (especially with regard to the ranking exercise) to outright plagiarism in order to meet research requirements (or indeed to improve one’s status). The irony is that the evaluations may reinforce the very gaming behaviour they are meant to address. The following quote from a faculty member is instructive:

[In order to fulfil the publication criteria,] I take information from students’ diploma projects. I give them some research to do, and maybe I get some papers from the research. It is maybe not so good, but both the student and I gain from this (Associate Professor, Male, 20503).

Indeed, another unintended consequence is visible in scientific research. Many interviewees mention that the current assessment framework for scientific research is heavily biased towards the sciences for which international journals exist (and that take an interest in Romanian science). Although most interviewees think that it is pointless to reward research in the humanities or legal research in the same way as theoretical physics, this is precisely what is being done. The assessment framework does not acknowledge that publication practices differ widely between disciplines in terms of how often one can publish, whether one has access to international journals, and with whom one collaborates. The unintended consequence is that only a few scientific fields are seen as ‘serious’ sciences that are worthy of funding and public attention.

4.3 Evaluations Are Perceived to Be Based on Inconsistent Criteria

This gaming behaviour is reinforced by the fact that the evaluations are based on different standards and performance indicators. Table 5 repeats Table 1, now displaying the different indicators used in each instrument. In the programme evaluation and accreditation organised by ARACIS, introduced in 2006/7, there are 43 different performance indicators. In the programme ranking introduced in 2011, on the other hand, there are no fewer than 80 variables on which the programmes are evaluated. If we consider these standards as additive, then there are close to 300 formal standards with which the universities have to comply. These indicators do not so much complement each other as diverge from one another. Whereas the quality assurance and accreditation scheme focuses on education and training, as well as the internal quality assurance procedures of the university, the classification emphasises research productivity (scientometrics) and ‘external relations’. This makes it quite hard for academics to figure out what the standards really are.

Table 5 The different criteria used in each policy instrument (see Table 1 for more information about each instrument)

If we take a more detailed look, we can see that the instruments are based on different underlying ideas of ‘quality’. The quality assurance scheme is based on minimum standards for all universities, whereas the classification is based on nominal categories of universities. In other words, quality assurance and accreditation are based on the idea that there are common (minimum) features to all universities, whereas the classification is based on the idea that there are different kinds of universities. The ranking, on the other hand, is based on the idea that universities are inherently ‘better’ or ‘worse’ than each other. Indeed, the ranking instrument places universities and programmes on an ordinal scale.

What is important about these inconsistent criteria is that they lead to a confusing picture for academics, let alone for students and the general public. A university can, in principle, receive high trust in the accreditation process, but be categorised as a ‘C’ university (i.e. teaching only) in the classification, while its departments are ranked in the middle of the distribution. To achieve a higher ranking, it may have to shift resources away from education to scientific production, which may in turn lower its status in the accreditation system. In other words, these different instruments send confusing messages to the universities about what is required of them, and do not help the wider Romanian society to understand what is going on in the field of higher education and research.

At the level of the universities, there is much complaining, but little reflection on these standards. While a few universities have defined their own standards for evaluation, this has not yet trickled down to the faculties and departments. We have not found a single faculty with a systematic plan to improve teaching and learning practices or to experiment with pedagogic innovations. Similarly, we have found very few instances of faculty-level attempts to improve scientific research production. There are many individual initiatives to achieve this, but they are not pursued very systematically. Can these instruments lead to a reflection on education and scientific research?

Perhaps it is important to reiterate that evaluations do not replace action. Evaluations are diagnostic instruments; they are not the medicine to cure the patient. In fact, we (and our interviewees) found it quite hard to attribute follow-up activities to each of the evaluations carried out. We cannot put it better than one of our interviewees:

I do not believe that even 100 laws will increase quality in the system. Most people respond with maximum attention to forms, but the best way to learn on how to have a quality education system is by learning from [other] teachers. That is how we learned before (Decision-Maker, Professor, Male, RS0903).

In line with this statement, we think that the government and the universities should strengthen the common-sense discussions about quality of education and scientific research. We will present our recommendations to achieve this in the next sections.

5 Recommendations for National Policy-Makers

To address the previously outlined general problem and its causes, many interviewees plead for systematic reflection on key dimensions of education and research. Table 6 gives an overview of the recommendations that we drew from the interviews. In this section, we elaborate on the recommendations at national level, recognising that these address the ‘external’ evaluations.

Table 6 An overview of the recommendations to address the policy problem
  • Objective 1: Simplify the procedures

  • Recommendation 1.1: Reduce the number of evaluation instruments and reports.

At the moment, several evaluations that absorb too much time and money are being undertaken across universities in Romania. Therefore, we suggest that policy-makers integrate the existing evaluation instruments (see Tables 1 and 5) into a single, comprehensive evaluation scheme that satisfies the need for quality assurance, quality improvement, and comparative quality analysis across institutions. A single evaluation system would reduce the amount of administrative work and paperwork currently produced by universities, and would make the standards and their assessment more transparent for professionals. Failing to do so (or, worse, increasing the number of evaluations) is likely to further increase the bureaucracy that universities deal with on a daily basis.

  • Recommendation 1.2: Evaluate the evaluation procedures as a whole every five years.

Evaluation procedures can never be perfect instruments for assessing all aspects of the quality of higher education. However, that does not imply that they cannot be improved. As new priorities for higher education emerge, countries should invent new ways to evaluate universities. Consequently, we suggest a holistic assessment of the evaluation practices at national level every five years. Current external evaluations clearly do not do this: they only review the quality assurance agency ARACIS, and hardly ever address other forms of evaluation. This time interval would give enough stability for the evaluation practices to be understood and effectively carried out by institutions, but also provide an opportunity for national-level stakeholders to make small improvements where needed. Moreover, involving university leaders, faculty and students in this process is crucial, since they are the ones who deal with quality assurance regularly.

  • Recommendation 1.3: Take misconduct out of the evaluations.

Misconduct (i.e. bribery, plagiarism, etc.) is recognised as a major problem, but interviewees question whether evaluation instruments are the right tools to address it. The problem is one of effectiveness: evaluation instruments do not respond quickly or directly to individual cases of misconduct. Instruments that would deal with misconduct more effectively should aim at distributing power within the university and increasing transparency (after all, academic misconduct is an abuse of power). Moreover, some innovative tools are now available, such as anti-plagiarism software to review previously published and new scientific publications. Cases of bribery in relation to exams can be dealt with more effectively by providing external reviews of students’ (dissertation) work or by using standardized tests carried out by external examiners.

  • Objective 2: Allow professors and students to influence the standards for evaluation

  • Recommendation 2.1: Focus on organising the evaluations without pre-defining all the standards.

The quality assurance agency ARACIS sets two types of standards for universities: a list of minimum standards, and a set of ‘reference standards’. Other evaluation practices prescribe similar, or even higher, levels of performance on the basis of which institutions and people are assessed. While these standards are often meant as ‘minimum quality’, they in fact crowd out initiatives of universities, departments and faculty to define quality on their own terms and by their own standards. It would be more effective if professors and students themselves set many of the standards on which they want to be assessed; this would encourage organisational actors to conceptualize quality and engage in a search for relevant benchmarks. This is also the direction taken in the revised European Standards and Guidelines that are to be adopted in the Bologna Process. Failing to allow professors and students to define more of the standards themselves will continue to create perverse incentives whereby individuals trick the system, as is currently the case.

  • Objective 3: Apply a more consistent and open concept of ‘quality’

  • Recommendation 3.1: Reduce the number of criteria on which evaluations are to be carried out.

We have shown above that there are ten evaluation instruments, with a combined load of close to 300 standards with which the universities have to comply. Every evaluation is based on an implicit (or explicit) idea of what quality is. This preconception is reflected in the criteria or standards set by the external agency in charge of carrying out the activity. The criteria vary across the evaluation procedures applied in Romania, which results in an unwanted level of confusion among universities and individuals. The more criteria are predefined, the more limited the possibility for universities to supplement the assessment of quality with additional aspects tailored to their own needs. Hence, reducing the number of criteria on which evaluations are carried out can reduce the existing formal inconsistencies, while simultaneously broadening the discussion on the meaning of quality.

6 Recommendations for the Universities

For many in the universities, changes at the national level are uncertain and may take a long time to be realised. In light of the uncertainty of parliamentary processes, it is important to enact changes within the universities as well. These changes can be made even if politicians are slow in responding to the problems identified here. The following changes can improve the ‘internal’ evaluations.

  • Objective 1: Simplify the procedures

  • Recommendation 1.4: Foster informal evaluation practices as well as formal practices.

Current evaluation practices put too much weight on formal assessment methods, such as questionnaires and reports. However, quality is often debated in a less formal environment, without explicit planning or measurement behind it. Such informal practices have been present in universities for a long time, and in some cases continue to be the most important evaluation method. Therefore, we suggest that informal assessments should also be accounted for, by encouraging individuals to constantly assess the quality of their own work and that of their institution, and by providing formal ways to share this knowledge between professors and students.

  • Objective 2: Allow professors and students to influence the standards

  • Recommendation 2.2: Enable a more flexible approach to evaluations within departments.

It is extremely difficult to apply national standards across departments and scientific disciplines. Many fields of knowledge are so specific that the meaning of the criteria gets distorted (a problem of scientific validity). We therefore recommend a more flexible approach at the institutional level. The particularities of the teaching and research traditions of each department should be allowed to influence and change the outcome of the assessment.

  • Objective 3: Apply a more consistent and open concept of ‘quality’

  • Recommendation 3.2: Organise structured discussions about the meaning of quality in faculties and departments.

Individuals tend to define the quality of academic practice differently. Nevertheless, without structured discussions on this topic among academics, the existing practices will likely remain superficial or technical. These discussions can be used to adopt professional standards for faculty and students. Promoting organized deliberation on the quality of work at the university, the quality of teaching and research, the quality of administration and management, and so forth, is essential for developing a shared understanding of what quality is in the context of a particular institution. These events should be initiated on a regular basis by the top management of the institution, and be open to professors, students, employers, and representatives of the wider community.

  • Recommendation 3.3: Develop professional networks between people working on evaluations.

Professors and administrative staff carry out evaluation exercises in most of the universities. Over time, these individuals build up extensive experience in carrying out evaluations, and some develop manuals, reflexive literature, or other new ideas. Organizing professional networks between people working on evaluations will help good ideas travel from one organizational unit to another, and will help the individuals involved to overcome some of the emerging challenges more easily. Certainly, the more isolated these stakeholders remain from each other, the harder it becomes to organize evaluations across the university.

7 Concluding Remarks

In the last few years, Romanian policymakers have done a lot to improve the universities in their country. They used evaluations to achieve this goal. The idea was that people in the universities would follow the guidance of external inspectors, in the form of state officials, foreign evaluators or colleagues from other universities. But do professors change because inspectors tell them to? In Romania, they clearly do not. Nearly everyone in the system is constantly engaged in one form of evaluation or another. And once one round of evaluation has finished, a new one begins immediately. Instead of having the time to change, the academic community is stuck on a ‘merry-go-round’ of evaluations.

Many people in Romanian universities find it difficult to imagine that the evaluation system will be reformed. Some may even be sceptical of any new change to the legislative environment. But perhaps it is important to remember that evaluations are still a relatively new instrument in Romanian higher education. The quality assurance agency ARACIS was created in 2006, while the classification, rankings and other evaluations were introduced only in 2011. Put differently, these instruments are not old enough to have become institutionalized, but they have existed for long enough to be judged on their effectiveness. And we can be optimistic: many national-level policy-makers indicate that they are willing to change the system, and they recognize the problems mentioned in this paper.

The recommendations in this chapter are based on the views of people who are stuck on this merry-go-round. The advantage of this type of analysis is that we are fairly certain that the people who are supposed to implement these recommendations already support them. The interviewees recognize that no single set of recommendations could solve all problems in Romanian higher education; there is no ‘silver bullet’. Thus, we tried to break the problem down into smaller sub-problems that can be productively addressed by policy. We also gave recommendations to different actors: recommendations at national level imply the alteration of the legal framework and national evaluation instruments; those for universities imply changes at the management and departmental levels. The latter changes can be made immediately, without lengthy parliamentary debates.

Our recommendations may be perceived as going in the direction of a completely decentralised evaluation system. While we think that more responsibility should be placed in the hands of faculty, we do not discount the importance of either national legislation, or leadership of the universities. The key here is dosage. Medicine should not be so strong that it kills the patient. Nor should it be so diluted that it doesn’t work at all. Careful recalibration could achieve a lot.

Our main message is that policy-makers should shift focus from the current obsession with process to achieving substantive results in learning and scientific research. They should envisage a bigger role for faculty and students inside the university, and a smaller role for themselves, for external inspectors and for university management. While accountability will continue to be important, it should be based on demonstrated achievement, rather than on process. Put otherwise, Romanian policymakers should try to mobilise the brainpower of faculty and students in the universities. Inspectors and inspected alike should step off the merry-go-round of evaluations and start reflecting on the purpose and scope of existing practices. Only then can we engage in more meaningful evaluations.