1 Introduction

Economists have long recognized investments in human capital and education as fundamental engines of a country’s economic growth (e.g. Becker 1994; Barro 2001). In addition to the quantity of education, scholars have more recently also stressed the importance of its quality in explaining countries’ economic performance (Hanushek and Woessmann 2012). At the micro-level as well, there is evidence of positive labour market returns to university quality (McGuinness 2003; Black & Smith 2004; Di Pietro & Cutillo 2006; Ciani & Mariani 2014; Andrews et al. 2016; Deming et al. 2016; Anelli 2018), with wage premia associated with better university reputations (MacLeod et al. 2017).

Aware of the importance of a highly educated workforce, several countries have attempted to increase the number of university graduates by expanding their higher education supply. Policies such as increasing the geographical diffusion of university branches (Oppedisano 2011) or a complete restructuring of university education—an example being the ‘Bologna process’ (Bondonio & Berton 2018; Di Pietro & Cutillo 2008)—have been implemented to reach this goal. Yet, an important hurdle to increasing the number of university graduates remains the high share of students dropping out of higher education. In OECD countries, for instance, ‘on average, 12% of students who enter a bachelor’s programme full time leave the tertiary system before the beginning of their second year of study. This share increases to 20% by the end of the programme’s theoretical duration and to 24% three years later’ (OECD 2019, p. 208). Chapters “Do Financial Conditions Play a Role in University Dropout? New Evidence from Administrative Data” and “Drop-Out Decisions in a Cohort of Italian Universities” in this book provide an extensive discussion of the determinants of student dropout and new evidence based on Italian universities. It is clear that the number of graduates could be increased by reducing important inefficiencies in higher education systems.

The extant literature has extensively investigated the individual-level determinants of university student progression and academic performance (e.g. school entry qualifications and family background). However, often owing to a lack of data, studies accounting for supply-side (i.e. university) characteristics are very rare, and those investigating degree-programme characteristics are rarer still. In the current chapter, we seek to fill this important gap in the academic literature. Leveraging a new and very rich database, built by merging university performance indicators (PIs) provided by the National Agency for the Evaluation of the University System and Research (ANVUR) with degree-programme-level information gathered within the quality assurance system for higher education (HE hereafter), this study offers, to the best of our knowledge, the first comprehensive analysis of the degree-programme determinants of student performance. Our study spans the complete HE supply in Italy (bachelor’s, master’s and combined bachelor’s/master’s degrees) for the 2013–2018 period.

In addition to researchers in the field of higher education, other stakeholders are likely to be interested in our analysis as well. First are students and their families, who, when making their (or their children’s) enrolment choices, often focus on a given degree programme rather than on higher education institutions or college majors more broadly. This study provides findings on student dropout and progression that may inform their choices. Second, our study is of interest to all stakeholders who are engaged at different levels in the governance of higher education institutions, such as the heads of degree programmes and quality assurance (QA) groups. Indeed, several countries have introduced complex QA systems to improve the quality of their educational systems (see the next section). In the Italian QA system, for instance, heads of degree programmes and QA groups represent the frontline for interventions to improve the quality and effectiveness of tertiary education. A strong stimulus to improve quality in higher education comes from the diffusion of a quasi-market, in which universities increasingly compete for students and must devote more attention than in the past to the quality of the services they provide and to overall student satisfaction. Students’ enrolment choices are indeed affected by university characteristics, including teaching and research quality (Biancardi & Bratti 2019), and some students are willing to travel long distances in search of better educational opportunities (Baryla & Dotterweich 2001; De Angelis et al. 2017; Bratti & Verzillo 2019). Moreover, the analysis developed in chapter “Drop-Out Decisions in a Cohort of Italian Universities” shows that students from outside the town/region of their university have higher abilities than the overall student population in terms of high school grades and that these students drop out significantly less than those who study in their hometowns. Although heads of degree programmes and QA groups are equipped with an extensive set of indicators to monitor degrees, only rarely are these systematically analysed as we do in this chapter. Thus, our analysis can also be of interest to policymakers needing to take actions to improve the quality of higher education.

This chapter unfolds as follows. The next section discusses the evaluation of teaching activities and the introduction of quality assurance for higher education systems, while Sect. 3 describes the Italian system of quality assurance. Section 4 briefly reviews some key findings from the literature on student progression and dropout. The data and the econometric model used in our empirical analysis are presented in Sect. 5. The main results of our analysis are commented on in Sect. 6, while some robustness checks are presented in Sect. 7. Section 8 summarizes the main findings of this chapter and draws conclusions.

2 The Evaluation of Teaching Activities and Higher Education Quality Assurance (QA) Systems

Teaching quality is an important determinant of university student performance. Although teaching and research—and more recently, the so-called ‘third mission’—have traditionally been recognized as equally important missions of universities, the evaluation of these activities has developed quite differently over time in terms of rationales and intensity. Several scholars have recognized that the assessment of university activities has been heavily influenced by a striving for research excellence (Dill & Soo 2005). Performance-based funding mechanisms, international rankings, and even the structure of academic careers have consequently been based almost exclusively on the assessment of research performance at both the individual and the institutional level (Horta et al. 2012). Nowadays, a majority of Western European countries (but also countries in other parts of the world) have indeed adopted evaluation exercises or comparable mechanisms to assess research quality (Hicks 2012).

In contrast, the evaluation of teaching activities is younger and almost entirely expressed in the form of accreditation or QA systems, the main function of which is to verify the existence of qualitative standards and requirements through an evaluation procedure that does not affect—at least directly—the amount of public funding that universities receive from national governments. The introduction and diffusion of QA is the result of three main interrelated policy rationales and processes that have occurred, especially in Western Europe, since the late 1980s (Cheng 2015).

First, a ‘steering at a distance’ conception of HE system governance has developed, according to which national governments grant some form of institutional and organizational autonomy in exchange for external control through various mechanisms such as funding and evaluation systems. QA proved to be an instrument of such policies and clearly emerged in countries such as the UK and the Netherlands (Neave & van Vught 1991).

Second, QA has often been introduced as part of new public management (NPM)-based reforms or, more generally, of ‘market-based’ policies (Agasisti et al. 2019). At the system level, the NPM reforms aimed to steer HE systems vertically through agencies, evaluation exercises and budgetary constraints, increasing the universities’ accountability as well as supporting the overall level of competition for resources (Bleiklie & Michelsen 2013). QA systems can thus be seen as a mechanism through which to make the relationship between public funding and the quality of universities’ activities more transparent. With the decrease in public funding and increasing competition, QA can also be viewed as a way of demonstrating value for money to those who bear the cost of educational services—in other words, it serves as a consumer protection device (Stensaker 2011). At the institutional level, the NPM reforms supported the introduction of a new management style that strengthened the power of the leadership and executive bodies and, at the same time, decreased the power of collegial bodies. QA was seen as a ‘top-down managerial device’ (Vidovich 2002, p. 397) to make either universities or academics more accountable and to some extent limit and control their historical autonomy and self-governance. QA mechanisms indeed help a university’s top management develop clearer lines of responsibility through the definition of minimum-quality standards and their continuous monitoring, as well as the consequent centralization of information (Morley 2003; Stensaker 2008). In this way, QA can support the direction of a university both in terms of resource allocation and in terms of its organizational effort (Jarvis 2014).

Third, it is equally claimed that the spread of QA systems in Europe is also part and parcel of the consequences generated by the Bologna process (Huisman & Westerheijden 2010). In the process of developing a European Higher Education Area (EHEA), the need for a common framework was also translated into the requirement that each country would establish a national system of QA. To this end, the European Network of Quality Assurance Agencies (ENQA) was created, and the European Standards and Guidelines (ESG) were then established to provide general conditions and standards that each national QA system must adopt in relation to both the internal QA of HE providers and the external QA of national agencies (Sin et al. 2017).

Despite the increasing diffusion of QA systems, their effects on teaching and learning performance have not yet been fully investigated and discussed, whereas the literature has stressed some potential unintended consequences. QA practices have been found to be heavily bureaucratic and compliance-oriented processes (Harvey & Newton 2007). Huisman and Westerheijden (2010) claimed that internal QA systems could be considered a good example of Power’s idea of ‘decoupling’ (Power 1997), that is, a buffer complying with the requirements and standards of external evaluation actors by providing verifiable measures that are unrelated to organizational processes. It is thus no coincidence that several recent studies have questioned the actual impact of QA practices on teaching and learning activities. However, these mainly address the issue through the perceptions of academics (Stensaker 2011; Cardoso et al. 2016; Tavares et al. 2017), without examining in depth the actual teaching and learning performance of either students or degree programmes (an exception is, for instance, Andreani et al. 2020).

Finally, a potential unintended consequence of QA mechanisms, and more generally of the evaluation of teaching, is what Kallio et al. (2017, p. 299) call the quantification of quality. In their empirical study on Finnish higher education, they illustrate that ‘the easiest way of meeting targets is by lowering standards, for instance, by letting students pass exams more easily and granting degrees with looser criteria.’ Such ‘gaming’ phenomena have indeed already been observed in other practices diffused in the public sector, as shown by Hood (2006).

3 The Italian Higher Education QA System

Although the Italian HE system is one of the largest in Europe, with over 1.6 million enrolled students, more than 300,000 graduates per year and 90 universities, the first extensive QA system was only introduced in 2013 (Ministerial decree 47/2013). Occasional QA practices could be found among Italian universities before 2013, however (Rebora & Turri 2011). These were the result of either the Conference of Engineering Deans, which promoted QA and accreditation practices for engineering degree programmes, or the Conference of the Rectors of Italian Universities (CRUI), which launched accreditation procedures for degree programmes on a voluntary basis.

With the full establishment of the National Agency for the Evaluation of the University System and Research (ANVUR), the QA policy became much more comprehensive and structured. The NPM-based reform of 2010 (Law 240/2010) clearly identified ANVUR as the body in charge of monitoring the effective operation of internal QA procedures by defining quality standards and verifying that these are applied by universities (Agasisti et al. 2019). The QA model, known as AVA (Autovalutazione, Valutazione Periodica e Accreditamento, i.e. Self-Evaluation, Periodic Evaluation and Accreditation), is clearly inspired by the European Standards and Guidelines and consists of three interrelated stages. The first is a set of internal QA practices and procedures carried out by universities at the level of both the entire organization and individual degree programmes. Each university is indeed required by law to define its objectives and procedures for quality assurance and improvement and to perform an annual review for each degree programme. Since the internal QA procedure is mainly carried out at the level of the individual degree programme, a major part of the QA process is performed by the head of the degree programme. The internal QA process has to comply with the quality standards established by ANVUR, which include specific requirements such as the involvement of student representatives in the internal QA process.

Second, the external process consists of on-site visits by a group of QA experts and students forming a CEV (Evaluation Expert Committee), appointed by ANVUR every 5 years. The main output of these CEV visits is an assessment of the compliance of degree programmes (10% of the total number of degree programmes) with the quality standards defined by ANVUR, in order to assess the effectiveness of the internal QA system. A CEV might also decide whether a degree programme needs to undertake corrective actions (within a time limit) if the final rating is not satisfactory. Third, based on the evaluations obtained through the on-site visits, ANVUR recommends whether the Ministry of Education, Universities and Research (MIUR) should accredit a university. With the launch of the new ESG in 2015, ANVUR started a review and updating process of the AVA system, which resulted in new guidelines issued in December 2016 (Ministerial decree 987/2016). The review of AVA had multiple goals. First, it aimed to reduce the number of quality standards (from 57 to 30) with which universities have to comply. Second, ANVUR aimed to strengthen universities’ internal self-evaluation before the visits by reducing the number of degree programmes under evaluation. These two goals were particularly important in terms of reducing the potential bureaucratic burden associated with QA procedures.

Finally, ANVUR developed and introduced a list of 37 indicators to support a stronger connection between the outcomes of the internal QA system and the performance of either the entire university or particular degree programmes. The introduction of these indicators was also a way to put students, the learning process and outcomes at the centre of the QA process, instead of QA procedures merely complying with the national legislation (Andreani et al. 2020). These indicators are used by the different internal university actors who participate in the QA process and by ANVUR in the assessment phase that precedes the on-site CEV visit. The 37 indicators are structured into 6 areas and can refer to the level of either the entire university or the degree programme, as well as to both levels (Ministerial decree 6/2019):

  I. Teaching: 9 indicators at the level of the entire university and at the degree-programme level

  II. Internationalization: 3 indicators at the level of the entire university and at the degree-programme level

  III. Environment and quality of research: 5 indicators at the level of the entire university

  IV. Economic and financial sustainability: 3 indicators at the level of the entire university

  V. Further indicators for the evaluation of teaching: 8 indicators at the degree-programme level

  VI. Further pilot indicators: 9 indicators at the degree-programme level

The value of each indicator is computed by ANVUR for three consecutive academic years to facilitate the identification of time trends. Moreover, each indicator is reported alongside the average values for the other degree programmes that belong to the same scientific area within the same geographical macro-area, as well as at the national level, in order to enable benchmarking exercises. Among these 37 indicators, 29 are clearly connected to the area of learning and teaching (L&T) and belong to areas I, II, V and VI above.

These 29 indicators can be classified according to the four domains of the L&T process and quality, as recognized by the literature (see Leiber 2019 for a framework and a literature review of this topic), namely (i) L&T environment, that is, a framework of conditions and inputs to L&T in terms of organization, staff and students, (ii) teaching processes and competences of teachers, (iii) learning processes and competences of students and (iv) learning outcomes and gains. As claimed by Leiber (2019, p. 79) and in line with ESG (2015), ‘these four constitutive domains should be considered to generate a comprehensive view on L&T quality issues, because L&T quality of (higher) education is multi-causally determined by the quality of inputs (L&T environment; teaching, learning and assessment competences) as well as the quality of teaching and learning processes and characterized by the quality of outcomes (learning outcomes and learning gain).’

When viewed through the framework proposed by Leiber (2019), the large majority of the AVA teaching indicators turn out to be concentrated in the ‘L&T environment’ (10) and ‘learning outcomes’ (18) domains, whereas indicators concerning the ‘teaching competences’ (only 1) and ‘learning competences’ domains are almost entirely absent. Indicators such as the ‘proportion of teaching staff who participated in pedagogical training’ and the ‘number and duration of students’ interactions with course activities’ might indeed become increasingly important in supporting the shift in paradigm from teaching to learning represented by the student-centred approach of the ESG (2015). Moreover, the set of indicators related to the ‘learning outcomes’ domain is heavily skewed in favour of metrics regarding the student success rate and the regularity of students’ careers, without covering any aspect of the learning gain process, in other words, the proper achievement and assessment of learning outcomes. In the ‘L&T environment’ domain, there are instead no indicators of the quality of incoming students or of the amount of financial investment in L&T.

In the following sections, some of these indicators will be used for our empirical analysis.

4 Literature Review on the Determinants of University Student Performance and Dropout

There is an extensive literature on the determinants of university student progression and dropout, too large to summarize exhaustively here. For the sake of space, in this section we report only some of the key results emerging from it.

Individual-Level Determinants of Student Dropout and Progression

Scholars have especially worked on the individual-level determinants of university student progression and the probability of dropout. Among the demographic characteristics significantly associated with student dropout are age, i.e. older students are more likely to drop out (Montmarquette et al. 2001; Smith & Naylor 2001; Stratton et al. 2008)—although in the Italian context this may simply be due to the lower ability of older students, i.e. those who experienced grade retention—and gender, i.e. female students are less likely to drop out of university (McNabb et al. 2002; Arulampalam et al. 2004a, b; Gury 2011; Cappellari & Lucifora 2009) due to their greater study effort (Stinebrickner & Stinebrickner 2012) and higher returns from education (Goldin et al. 2006). Gender differences also exist in terms of the probability of on-time graduation, although in this case the advantage of women is not ubiquitous in the literature (Häkkinen & Uusitalo 2003; Aina et al. 2011; Lassibille & Navarro Gomez 2011). The results are less clear-cut with regard to ethnicity, with some scholars finding higher dropout rates for students from minority groups (Harvey & Anderson 2005) and others finding lower dropout rates, especially for those enrolled in selective institutions (Alon & Tienda 2005).

In most studies, dropout is negatively associated with the level of student entry qualifications and ability (Smith & Naylor 2001; Arulampalam et al. 2004a, b; Stratton et al. 2008). Good student achievement in secondary school is also associated with a shorter time to graduation (Aina et al. 2011; Lassibille & Navarro Gomez 2011), although this does not necessarily reflect a causal relation (Bound et al. 2012). Yet, in some studies, students with better entry qualifications are found to be more likely to drop out (DesJardins et al. 1999; Belloc et al. 2010). This reflects the complex nature of student dropout, which is sometimes motivated not by unsatisfactory student performance but by the availability of good opportunities in the labour market or by the higher expectations of better students, which may not be met by the study programme they originally chose. Yet, early academic performance—that is, performance in the first years of enrolment—is a powerful determinant of student dropout (Montmarquette et al. 2001; Bennet 2009; Belloc et al. 2010), as students learn about their abilities during the courses and while taking exams (Stinebrickner & Stinebrickner 2014).

The network of relations cultivated during their studies also affects university student dropout behaviour. Indeed, dropout is lower when students have more interactions with professors (Tinto 1975; Pascarella & Terenzini 1978) and peers (Stinebrickner & Stinebrickner 2006), for instance, through study and learning groups (Tinto 1997). Thus, students attending more selective programmes have the additional advantage of benefiting from more able peers (Sacerdote 2011).

A student’s family background is an important determinant of his/her probability of dropping out of higher education. Dropout is generally higher for students with a lower socio-economic status (Di Pietro 2004; Johnes & McNabb 2004; Cappellari & Lucifora 2009; Trivellato & Triventi 2009; Aina 2013). In these types of studies, it is generally not possible to disentangle the effect of family income from that of other family characteristics, although this distinction would be highly relevant for policy. An exception is Stinebrickner and Stinebrickner (2008), who report that differences between students with different socio-economic statuses persist even in the absence of credit constraints.

According to the human capital model of Gary Becker (Becker 1994), student dropout depends on the opportunity costs of studying, which in turn depend on labour market conditions. Thus, student dropout should decrease in poor labour market conditions, such as during recessions. Results consistent with this prediction are found, for instance, by Di Pietro (2006) and Adamopoulou and Tanzi (2017).

Institutional Determinants of Student Dropout and Progression

Interestingly, much less work exists on the institutional determinants of student dropout. Scholars have often focused on system-wide higher education characteristics. Student aid generally contributes to increasing the participation of low-income students in higher education (Dynarski & Scott-Clayton 2013). By relaxing cash constraints, increases in student aid contribute to reducing student dropout (Singell 2004; Arendt 2013) and boost the probability of graduation for disadvantaged students (Alon 2007). However, although the introduction of a strong merit component in student aid (i.e. cut-offs in grade point average (GPA) or university credits to be achieved to maintain aid eligibility) speeds up graduation, on average (Glocker 2011; Scott-Clayton 2011; Gunnes et al. 2013; Denning 2019), it also raises educational inequalities, increasing the probability of dropout for low socio-economic background students and creating an equity–efficiency trade-off (Schudde & Scott-Clayton 2016; Scott-Clayton & Schudde 2020). Financial incentives for good performance are more effective if they are combined with support services for students (Page et al. 2019; Andrews et al. 2020), especially for women (Angrist et al. 2009).

Another important institutional feature of higher education is student fees, i.e. the balance of private vs public funding devoted to higher education. Studies credibly identifying the causal effect of fees on student dropout are in short supply, and scholars have reported mixed results. Bradley and Migali (2019) investigate the effect of the 2006 reform that increased university fees in England and, using a difference-in-differences (DiD) strategy, report opposite effects for high-income and low-income students, whose dropout probabilities fell and increased after the reform, respectively. Conversely, Montalvo (2018) exploits the discontinuity in student fees by student income in a regression discontinuity design (RDD) and finds no adverse effect on student dropout, irrespective of student socio-economic status. A similar strategy has been used by Garibaldi et al. (2012), who show that an increase in tuition fees reduces the probability of late graduation, increasing the efficiency of the educational system.

Other non-monetary institutional features are also likely to affect student dropout and performance, such as the quality of facilities and services (tutoring, support etc.) provided to students, the structuring of teaching activities, the type of admission criteria and the characteristics of the teaching body. Ryan (2004) shows that dropout is lower in large universities thanks to the greater availability of services and support that they can provide by exploiting economies of scale. As for admission criteria, although some scholars have reported lower student dropout rates and a shorter time-to-graduation in systems characterized by stricter admission criteria (Bowen et al. 2009; Bound et al. 2010), this result does not seem to apply to all contexts. Francesconi et al. (2011) leverage a reform introducing selective admissions in a large private university in northern Italy and do not find any improvement in student performance, a result that they relate to the existence of several enrolment alternatives available to students. This interpretation seems to be confirmed by Carrieri et al. (2015), who report positive effects of a similar reform implemented in a public university in southern Italy, in an area where students had very few alternatives for pursuing university studies. In addition, the way teaching activities, exams and graduation sessions are organized during the academic year affects student performance. On the one hand, Di Pietro and Cutillo (2008) find that the greater flexibility introduced by the Bologna reform (2001) reduced dropout in Italy. On the other hand, a study from Sweden shows that in universities that reduced the number of thesis defence sessions, i.e. reduced flexibility, the time to degree completion fell (Löfgren & Ohlsson 1999).

Following the literature on school class size, researchers have also focused on the impact of student–teacher ratios or other measures of resource intensity at the university level, generally finding positive associations with student performance (Bound & Turner 2007; Bound et al. 2010; Aina et al. 2011; Gitto et al. 2016). Chapter “Teaching Efficiency of Italian Universities: A Conditional Frontier Analysis” in this book provides a test of university teaching efficiency using a similar indicator.

Specifically concerning the working conditions and quality of the teaching staff, Herzog (2006) reports that a higher share of tenure-track (vs temporary) professors is associated with lower student dropout. An important supply-side factor to be taken into account is the quality of teachers (Hanushek & Rivkin 2006; Hanushek et al. 2019). The impact of teaching in higher education institutions (HEIs) on student and graduate performance has been the subject of a recent strand of literature (Laureti et al. 2014; Braga et al. 2016; Brownback & Sadoff 2020) showing positive returns to teaching quality. An interesting finding is that the quality of teachers measured in terms of value added is not always reflected correctly in student teaching evaluations (Braga et al. 2014).

In the current chapter, we seek to contribute to the extant literature by moving the focus to the degree-level determinants of student progression, student dropout and levels of student satisfaction. Expanding our knowledge of these issues is key because the first actors called on to implement policies to reduce dropout and improve student progression are degree directors and the QA groups that support them in degree governance. These actors cannot operate on features of the higher education system, which are determined at higher hierarchical levels—for instance, at the level of the higher education institution or even at the more aggregated regional or national level (e.g. the amount and forms of student aid, the amount of student fees etc.)—and which often require very long periods of time to be changed. Heads of study programmes and QA groups have much more limited policy levers and can often change only small organizational features of the degrees they manage. Thus, assessing the impact of these features on student progression and satisfaction becomes key for effective policymaking, especially in the short run.

5 Data and Empirical Model

In what follows, we describe our empirical model and the main variables used in our empirical analysis, along with the data sources.

5.1 Model

We estimate linear regression models specified as follows:

$$ y_{it}=\beta_0 + \boldsymbol{\beta}_C \mathbf{C}_{it} + \boldsymbol{\beta}_D \mathbf{D}_{it} + \boldsymbol{\beta}_T \mathbf{T}_{it} + e_{it}, \qquad (1) $$

where $y_{it}$ are the measures of university outputs provided by the ANVUR indicators. The explanatory variables are collected in three distinct vectors. The first is a vector $\mathbf{C}_{it}$ of contextual factors, i.e. factors that are beyond the control of each degree programme. The key regressors in this vector are geographic macro-area and detailed degree subject field (i.e. class of degree) fixed effects. An example of a class of degree is ‘LM-56’, i.e. ‘Master’s Degree in Economic Sciences’. For each degree class, the Ministry of University specifies how the syllabus must be articulated in terms of subject groups covered (SSD, ‘scientific and disciplinary sectors’) and the corresponding number of university ECTS credits. Degree programmes in the same degree class and geographic macro-area are indeed the benchmark against which heads of degree programmes and the QA group are called on to compare the performance of their degrees. These fixed effects capture the average differences in PIs that are geographic- and subject-group-specific. A second vector ($\mathbf{D}_{it}$) collects degree-programme features that do not pertain to the teaching body. They include the type of student admissions, the teaching language, the multidisciplinary character of the degree, the size of the QA group and the intensity of spatial competition. A final vector ($\mathbf{T}_{it}$) of regressors collects characteristics of the teaching body, such as the percentage of teachers by academic position, the number of students per teacher-tutor, the percentage of teachers in ‘core’ subjects (SSD) and the research evaluation of the teaching body measured by the most recent Research Assessment Exercise (Valutazione della Qualità della Ricerca, VQR). Finally, $e_{it}$ is a degree-specific error term.

To allow for degree-level specificities, the models are estimated separately for bachelor’s degrees, master’s degrees and combined bachelor’s/master’s degrees.

In the following sections, we explain the main dependent and explanatory variables included in Eq. (1) and the rationale for their inclusion.
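To make the specification in Eq. (1) concrete, the following minimal sketch shows how such models could be estimated in Python with statsmodels. All file, variable and column names (e.g. ects40, degree_class_group) are hypothetical placeholders for the ANVUR and SUA-CdS fields described in the next sections, and the clustering choice is our assumption rather than a feature of the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical degree-programme panel: one row per programme-year.
df = pd.read_csv("anvur_sua_cds.csv")

# Eq. (1): outcome on contextual fixed effects (C_it), degree-programme
# features (D_it) and teaching-body characteristics (T_it).
formula = (
    "ects40 ~ C(degree_class_group):C(macro_area)"          # C_it: class-by-area FEs
    " + numerus_clausus + entry_test + english + interclass"
    " + qa_group_size + n_competitors"                       # D_it: 0/1 dummies and counts
    " + pct_full_prof + pct_assoc_prof + pct_researchers"
    " + students_per_tutor + pct_core_ssd + vqr_score"       # T_it: teaching body
)

# Separate models by degree level, as in the chapter.
for level in ["BA", "MA", "BA+MA"]:
    sub = df[df["degree_level"] == level].dropna()
    res = smf.ols(formula, data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["degree_id"]}
    )  # cluster-robust SEs by programme (an assumption)
    print(level, res.params.get("english"))
```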

5.2 Dependent Variables

In the regression models, the following ANVUR indicators are used as dependent variables; the types of degrees for which each is available are indicated in parentheses (BA = bachelor’s degree; MA = master’s degree; BA+MA = combined bachelor’s/master’s degree). For each indicator, we report the original ANVUR name and the name by which we will refer to it in the analysis (e.g. in tables):

Student Progression

  • iC01 (ECTS40): percentage of regularly attending students who have earned at least 40 ECTS credits during the academic year (BA, MA, BA+MA)

  • iC13 (ECTS1): percentage of ECTS credits achieved in the first year over the total ECTS credits to be achieved in the first year (BA, MA, BA+MA)

  • iC15 (ECTS201): percentage of students who continue on to the second year in the same degree programme having earned at least 20 ECTS credits in the first year (BA, MA, BA+MA)

  • iC16 (ECTS401): percentage of students who continue on to the second year in the same degree programme having earned at least 40 ECTS credits in the first year (BA, MA, BA+MA)

  • iC02 (GRAD): percentage of graduates within the legal programme duration (BA, MA, BA+MA)

Student Satisfaction

  • iC18 (ENR): percentage of graduates who would enrol in the same degree programme again (BA, MA, BA+MA)

  • iC25 (SATI): percentage of students who are generally satisfied with their degree programme (BA, MA, BA+MA)

In our opinion, these are the ANVUR indicators that can be most strictly considered as degree-programme outputs related to student dropout and progression and to overall levels of student satisfaction. Other ANVUR teaching indicators relate to features of the teaching body engaged in each degree programme, such as the percentage of teaching hours taught by personnel with open-ended contracts, and are included as explanatory variables in the econometric models (see the next section). The source of these data is ANVUR, which provided us with the indicators for the 2013–2018 period for the purpose of the current research.
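For reference, the code-to-name mapping above translates directly into a small Python dictionary that can be used to rename the raw ANVUR columns; the dictionary itself is our construction, but the code–name pairs come from the list above.

```python
# Mapping from ANVUR indicator codes to the names used in our tables.
RENAME = {
    "iC01": "ECTS40",   # regular students earning >= 40 ECTS in the year
    "iC13": "ECTS1",    # share of first-year ECTS achieved
    "iC15": "ECTS201",  # continue to 2nd year with >= 20 first-year ECTS
    "iC16": "ECTS401",  # continue to 2nd year with >= 40 first-year ECTS
    "iC02": "GRAD",     # graduates within the legal programme duration
    "iC18": "ENR",      # would enrol in the same programme again
    "iC25": "SATI",     # generally satisfied with the programme
}
# Usage (df being the raw ANVUR indicator table): df = df.rename(columns=RENAME)
```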

In Fig. 1, we plot the raw geographical differences for the first indicator of progression—the percentage of regularly attending students who have earned at least 40 ECTS credits during the academic year. A clear North–South divide emerges for progression. Students enrolled in higher education institutions located in the North are much more likely to have completed at least 40 ECTS credits during the academic year (see Table B1 in Appendix B for the means and standard deviations of all degree performance indicators by macro-area). A different picture emerges from Fig. 2, which shows the geographical distribution of student satisfaction with the chosen degree—measured as the percentage of graduates who would enrol again in the same degree programme at their university. We cannot identify a clear pattern across regions in different macro-areas of the country in this case, but we observe large differences across regions by degree level. For example, students enrolled in bachelor’s degrees in Trentino-Alto Adige are much more satisfied with their choice than students enrolled in combined bachelor’s/master’s degrees in the same region. The only region that is always ranked at the top for student satisfaction is Emilia-Romagna.

Fig. 1 Student progression

Fig. 2 Student satisfaction

5.3 Explanatory Variables

The explanatory variables used in the empirical analysis come from two main sources. The first is ANVUR, for the indicators that are used as degree-programme-level inputs. The second source is the degree-programme cards (namely, Scheda SUA-CdS), the completion of which is made mandatory by the national system of QA and which gather a wealth of information on degree programmes.

Contextual Factors

As anticipated above, since comparisons of ANVUR indicators should be made with degrees in the same degree class and geographic macro-area, we include interaction terms defined at this level in the models (i.e. degree class group by geographic macro-area). The four macro-areas are North-West, North-East, Centre, and South and Islands (the area4 variable). Including these fixed effects purges the ANVUR output indicators of factors that depend on the degree subject or on geography, namely the geographical location of the university branch supplying the degree. The degree class is provided by the SUA-CdS cards and the macro-area by ANVUR. Degree classes are aggregated into the following 14 groups using the classification provided by the National University Council (CUN): Mathematics and Informatics; Physics; Chemistry; Earth Sciences; Biology; Medicine; Agricultural and Veterinary Sciences; Civil Engineering and Architecture; Industrial and Information Engineering; Antiquities, Philology, Literary Studies, Art History; History, Philosophy, Pedagogy and Psychology; Law Studies; Economics and Statistics; Political and Social Sciences.

We have computed a measure of the potential level of spatial competition for each degree programme, namely the number of programmes in the same broad degree subject and geographic macro-area. Previous research by Cattaneo et al. (2017) has shown that competition among universities, measured through geographical proximity and similarity of educational supply (in terms of subject groups), affects the number of student enrolments. Similarly, we might expect better incentives towards improvement in highly competitive geographical contexts.
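As an illustration, and assuming a table with one row per degree programme and hypothetical column names, this competition measure reduces to a simple group count:

```python
import pandas as pd

# Toy data; the real analysis uses SUA-CdS records (column names hypothetical).
df = pd.DataFrame({
    "degree_id":     [1, 2, 3, 4],
    "subject_group": ["Economics", "Economics", "Economics", "Physics"],
    "degree_level":  ["BA", "BA", "BA", "BA"],
    "macro_area":    ["North-West", "North-West", "Centre", "North-West"],
})

# Competition = number of programmes in the same CUN subject group, degree
# level and macro-area (here the count includes the programme itself; whether
# to exclude it is a design choice the chapter does not specify).
df["n_competitors"] = (
    df.groupby(["subject_group", "degree_level", "macro_area"])["degree_id"]
      .transform("count")
)
print(df)
```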

Non-teaching-Personnel Degree-Programme Features

Degrees with selective admissions are likely to perform better than non-selective degrees along several dimensions. Academic preparedness is indeed a key determinant of both student dropout and academic progression (Arulampalam et al. 2004a, b; Jia and Maloney 2015). On top of ‘cream-skimming effects’, the concentration of more able individuals induced by selective admission policies may also spur positive peer group effects (see the literature review above). For this reason, we include dichotomous indicators for the type of admission, namely programmed number degrees (or numerus clausus for brevity) and entry requirement assessment, respectively, while the control group is open admission degrees. In the first case, there is a fixed maximum number of students who can access a given degree. In the second case, the number is not fixed, but entry requirements are assessed through a test or an application package and an interview. For bachelor’s degrees, these entry requirements (e.g. the score on an entry test) are in many cases not binding for students: those who do not meet them can still enrol in the degree with some academic debits. For master’s degrees, by contrast, entry requirements are binding, entailing very different policies for the two levels of degrees.

A second proxy of degree-programme selectivity is the official teaching language. We include in the model an indicator for degrees completely or partially taught in English. This is a feature that we expect to potentially affect not only student progression—since degrees using English as the language of instruction attract better students, on average—but potentially also the satisfaction associated with the degree. International degrees may indeed attract more foreign students and enhance the university experience.

Another dimension we consider is the degree of multidisciplinarity of degree programmes. This degree feature is captured by an indicator for inter-class degrees, that is, degrees spanning different degree classes. It captures the potential advantages/disadvantages of knowledge and curriculum specialization vs diversification. These degrees generally require student proficiency in quite different subjects.

Aspects related to the degree programme’s governance may also affect performance. Agasisti et al. (2019), for instance, demonstrate that the composition and the role of the quality assurance committee (QAC) instituted at the university level affect the success of higher education institutions in pursuing effective quality assurance policies. Since we do not have variables measuring the volume of activity of the QA group of each degree programme, or its composition, we use the size of the QA group (i.e. the number of participating teachers and students) as a proxy. The QA group includes the members of the Review Team (Gruppo del Riesame).

These data come from SUA-CdS.

Teaching Personnel Degree-Programme Features

These are variables capturing characteristics of the teaching body that can potentially affect teaching quality.

We include the percentage of personnel in each academic role, more specifically the percentages of full professors, associate professors, open-ended researchers, temporary researchers (both in tenure track and not), other teaching personnel and external teachers. At first glance, it is not clear what to expect. On the one hand, more experienced teachers (typically full and associate professors) may be better teachers thanks to learning by doing, and this could positively affect all output indicators. Moreover, junior personnel are rarely formally assessed on teaching quality but more often on research performance, and for this reason as well we might expect them to focus little on teaching (De Philippis 2021). On the other hand, full professors in particular have fewer career concerns and are often more engaged in paid consultancy outside the university than junior personnel, and the time and effort they devote to teaching and research activities may be lower (Muscio et al. 2017). Furthermore, junior personnel may be more aware of recent developments in the profession/subject, which can be incorporated into their teaching, while more senior teachers may use outdated syllabi. For these reasons, the sign of the relation between seniority (or academic qualification, in our case) and teaching quality is ambiguous and must be empirically assessed.

Another proxy of teaching quality is the consistency between the scientific sectors (SSD) in which teachers are recruited (mainly corresponding to their research field) and the scientific sector of the course they teach. We might expect a positive effect by virtue of the strong specialization of academic knowledge and the potential complementarity between research and teaching activities, especially in master’s and combined bachelor’s/master’s degrees. ANVUR provides a useful indicator for this purpose: the percentage of structured (i.e. non-temporary) teaching personnel who belong to the core or characterizing scientific sectors of the degree programme for which they are ‘reference teachers’. Indeed, the Ministry of University requires a given number of teachers to be considered as reference teachers in each degree programme. The main rationale is to prevent an excessive expansion and fragmentation of the higher education supply. Having reference teachers that are not in the main subject fields of a degree may imply a bad match between the teaching staff and the content they have to teach.

For master’s degrees, ANVUR provides an interesting indicator that allows for a direct test of potential research–teaching complementarity (see, for instance, De Philippis 2021; Rodríguez & Rubio 2016; Artés et al. 2017; Palali et al. 2018). This is the sum of the university’s research quality indicators for the SSDs assessed in the last Research Evaluation Exercise (VQR 2011–2014), where each SSD is weighted by the ECTS credits of the corresponding courses included in the degree-programme syllabus. It is worth mentioning that this is not an indicator of the research performance of the personnel providing teaching services in the given degree programme, which is not provided by ANVUR, but rather the average research performance of teachers in the scientific sectors prevailing in that programme.
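In stylized form, and assuming the ECTS weights are normalized within each syllabus (the exact normalization used by ANVUR is not detailed here), the indicator for degree programme $d$ can be written as

$$ \mathit{VQR}_d = \sum_{s \in S_d} \frac{\mathit{ECTS}_{s,d}}{\sum_{s' \in S_d} \mathit{ECTS}_{s',d}} \, R_s, $$

where $S_d$ is the set of scientific sectors (SSD) in the syllabus of degree programme $d$, $\mathit{ECTS}_{s,d}$ the ECTS credits of courses in sector $s$, and $R_s$ the university’s VQR 2011–2014 research quality score in that sector.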

5.4 Creation of the Linked ANVUR-SUA CdS Database

The dataset used in the empirical analysis was built starting from a dataset of ANVUR indicators (provided by MIUR) and containing information on 37 indicators at the level of the degree programme (corso di studi, CdS), from which we selected our variables related to degree-programme performance in terms of student progression and time-to-degree. In the original dataset, ANVUR indicators are available for 2290 degree programmes in 92 Italian universities (public, private and online) over the 2013–2018 period.

These data have been merged with those extracted from the degree-programme cards (SUA-CdS), a set of information at the degree-programme level regarding the structure and characteristics of each degree (e.g. duration, procedure for admission, language of teaching, academic staff, tutors, persons in charge of the quality process etc.) used to create the main explanatory variables. Unfortunately, the two sets of data identify degree programmes using different coding systems, and thus we use the triplet name of university–name of degree programme–duration to match the observations. In a few cases, both in the ANVUR and in the SUA-CdS data, we found multiple observations for the same triplet, which did not allow for a one-to-one match: 17.5% in the ANVUR data, mainly due to several university branches being observed for some degree programmes (CdS), especially in the field of medicine and nursing, and 2.3% in SUA-CdS, in most cases due to a ‘double’ version of the same CdS, e.g. in Italian and in English. We handled this problem by identifying, for the ANVUR indicators, the CdS code associated with the largest number of enrolled students (i.e. the ‘head branch’), that is, the highest value of the ANVUR indicator ‘iC00d’, and then keeping only the observations identified by the selected code. To the now univocal triplet name of university–name of CdS–duration in the ANVUR data we linked the data from SUA-CdS, keeping all of the (few) multiple observations. Overall, we managed to find correspondences for almost 96% of the univocal observations from the ANVUR dataset. Unmatched data mainly refer to degree programmes that are not present in all of the years spanned by the dataset and are probably affected by changes in the educational supply of universities over time. Finally, we merged into this dataset variables related to the type of high school attended (school track) by newly enrolled students and their final secondary school grade, to be used as control variables. The latter were provided by the Ministry of University and are built using data from the National University Student Registry (ANS, Anagrafe Nazionale degli Studenti e dei Laureati).
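The following sketch illustrates this matching logic under hypothetical file and column names; it is a simplified rendering of the procedure described above, not the authors’ actual code.

```python
import pandas as pd

# The match key is the triplet (university name, degree-programme name, duration).
key = ["university", "degree_name", "duration"]

anvur = pd.read_csv("anvur_indicators.csv")   # hypothetical file names
sua = pd.read_csv("sua_cds.csv")

# Resolve multiple branches per triplet in the ANVUR data by keeping the
# 'head branch', i.e. the CdS code with the most enrolled students (iC00d).
head = (
    anvur.sort_values("iC00d", ascending=False)
         .drop_duplicates(subset=key + ["year"], keep="first")
)

# Link the SUA-CdS cards to the now-univocal ANVUR triplets; a left join keeps
# all ANVUR observations (and any SUA-CdS multiples, as described in the text).
merged = head.merge(sua, on=key + ["year"], how="left", indicator=True)
print(merged["_merge"].value_counts())  # ~96% matched in the actual data
```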

This merged database is used in our empirical analysis. Sample summary statistics for the merged ANVUR-SUA CdS database are shown in Table 1. The summary statistics do not necessarily correspond to those in the estimation samples, the composition of which varies according to the dependent variables.

Table 1 Sample summary statistics
Table 2 Student progression—bachelor’s degrees

6 Results

In this section, we comment on the main results of our regression analysis by level of degree (bachelor’s, master’s and combined bachelor’s/master’s).

6.1 Student Progression and Time-to-Degree

The estimates of the models of student progression for bachelor’s degrees are reported in Table 2. Our main finding is that lower shares of junior personnel and more tutors are associated with faster student progression. Furthermore, research quality is positively correlated with student progression in master’s degrees.

Table 3 Student progression—master’s degrees

Degree programmes with selective admissions generally display better progression indicators. The percentage of regular (i.e. regularly attending) students achieving at least 40 ECTS per year is 25 percentage points higher in degrees with a numerus clausus. However, no other advantage emerges for the other performance indicators, which may be partly due to the low number of degrees with this type of admission. A possible caveat is that the number of students admitted may be below the numerus clausus, which is therefore non-binding. In such a case, there would be no difference with respect to courses assessing the entry requirements. Unfortunately, the SUA-CdS data do not provide additional information on the number of places available with which to measure whether and to what extent it is binding. In contrast, in degree programmes with access subject to a non-binding entry test or entry requirements, regular students are 7.3 percentage points (pp hereafter) more likely to pass at least 40 ECTS during the academic year than students in open-access degrees, an advantage that also appears when focusing on first-year students only (6.6 pp). Similarly, the probability of passing at least 20 ECTS in the first year is 6.4 pp higher, and the average percentage of ECTS passed over the total number of ECTS to be passed in the first year is 6.9 pp higher. Thus, our analysis shows that entry tests assessing student preparedness—even when they are not binding for enrolment—may convey valuable information by signalling to prospective students whether they are making the right choice, and they are indeed always positively associated with better student progression indicators. Yet, quite surprisingly, students in degrees with entry tests are less likely to graduate within the normal duration: the percentage of graduates within the legal duration is 1.8 pp lower. A similar penalty in graduation time is found for degrees with a numerus clausus, although it is statistically non-significant. A possible explanation is that degrees with entry tests are also more academically demanding than open-access degrees. As a consequence, students may repeat exams to increase their GPA (indeed, in Italy, there are several exam sessions—generally 5–6 per year—and students can refuse grades and retake exams if they are not happy with their results). Alternatively, they can devote more time to their final thesis in an attempt to increase their GPA. In both cases, this would lead to an increase in graduation time. Unfortunately, our data do not allow us to test these hypotheses.

As expected, degree programmes taught in English perform better in all student progression indicators. The advantages with respect to degrees using Italian as the language of instruction are sizable: 16.7 pp for the percentage of regular students achieving at least 40 ECTS, 17.8 pp for the percentage of ECTS achieved in the first year over the total number achievable, 13 (18.8) pp in the percentage of students achieving at least 20 (40) ECTS in the first year and a 18.7 pp higher probability of graduating on time. As already mentioned, this may be related to the better academic preparedness and motivation of students enrolled in degrees taught in English.

The composition of the teaching body is also significantly related to student progression. Relative to the current composition, increasing the share of full professors by 10 pp (at the expense of external personnel, the reference category) is associated with a decrease of 0.73 pp in the percentage of regular students achieving at least 40 ECTS. A similar significant penalty is observed for only one other indicator of student progression, namely the percentage of students graduating on time, which is 1.19 pp lower. Negative gaps are also associated with the percentages of associate professors and of both temporary and open-ended researchers. The only differences between the latter two groups are a smaller negative coefficient for the percentage of students obtaining at least 20 ECTS (in the first year) for open-ended researchers and a larger gap for on-time graduation, for which the coefficient of temporary researchers (compared to external personnel) becomes positive. Quite remarkable are the performance penalties suffered by degree programmes that employ more personnel classified in the residual ‘other’ category, which includes, inter alia, junior personnel such as PhD students and postdocs. Increasing the percentage accounted for by this last group by 10 pp is associated with a 4.11, 3.29, 4.22 and 3.59 pp decrease in the percentage of regular students obtaining at least 40 ECTS, the number of ECTS over the total achievable in the year, and the percentages of students obtaining at least 20 ECTS and 40 ECTS in the first year, respectively. Without any further supporting information, it is difficult to interpret these coefficients. As for the penalty associated with junior personnel, a potential explanation could be the lack of teaching experience and of adequate incentives or motivation to teach, since at least in the first stages of their careers, tenure and promotion mainly depend on research performance. Often, these junior staff are employed full-time on research and teach only to supplement their income or because more senior staff ask them to carry out teaching support activities. On average, it would not be too surprising to find that the motivation to teach of this specific group is rather low (especially for PhD students who still have to complete their studies). A similar argument holds for temporary researchers, who are still under ‘probation’ and whose likelihood of entering tenure tracks and achieving tenure mainly depends on their research activities. As for the better student progression associated with external personnel, it is difficult to find a clear-cut interpretation, since this category includes both teaching personnel from other universities and professionals, who may possess very different levels of experience and motivation for teaching. A possible interpretation of their positive association with student progression is that, since they teach on a voluntary basis and receive paid teaching contracts, they may be more motivated, perhaps also because of the extra money they receive for teaching. Good teaching performance may also be a pre-condition for the renewal of their contracts. On a more negative note, external personnel may be less interested in how much students learn and may apply lower standards to reduce their workload (e.g. the time needed to mark exams), thereby increasing the pace of student progression.

The size of the QA group is negatively associated with the percentage of regular students achieving at least 40 ECTS (–0.2 pp for a one-unit increase), the percentage of first-year credits achieved (–0.1 pp for a one-unit increase) and the percentage of on-time graduates (–0.5 pp for a one-unit increase). A possible interpretation is that making the group larger dilutes individual effort and responsibility, creating incentives to free ride, or that greater group heterogeneity makes it more difficult to maintain a clear orientation of governance (coordination problems). Other interpretations are possible, however. Given individual time constraints, the time that members of the QA group devote to administration is subtracted from research and teaching, potentially penalizing student results. Moreover, the negative association may also capture reverse causation, because heads of degree programmes that are not performing well may devote more staff to QA activities.

Tutor teachers seem to be a useful resource for improving student progression. The number of students per tutor teacher is negatively associated with all indicators except on-time graduation. Penalties associated with a 10-student increase range from –0.2 to –0.1 pp depending on the indicator, with an unexpected, marginally significant positive association of 0.1 pp for the percentage of students graduating on time. The negative coefficients suggest that student support is effective and advisable.

Quite surprisingly, the percentage of ‘reference’ teachers in the core or characterizing SSD of a degree programme turns out to be negatively associated with on-time graduation. This may reflect the higher teaching standards (e.g. higher exam failure rates) applied in these courses relative to ancillary ones, which would not affect first-year progression but would lengthen the time needed for degree completion.

The intensity of spatial competition seems to be positively associated with student progression: a 10-unit increase in the number of courses in the same CUN subject group, of the same duration (i.e. degree level) and in the same geographic macro-area is associated with gains of 2.7, 2.5, 3.6 and 3.5 pp in the percentage of regular students obtaining at least 40 ECTS in the academic year, the percentage of ECTS achieved over the total, and the percentages of students obtaining at least 40 and at least 20 ECTS in the first year, respectively.

Table 3 Student progression—master’s degrees

Table 3 reports the estimates for master’s degrees. Many effects are consistent with those found for bachelor’s degrees and are not commented on here.

Similar to what we found for bachelor’s degrees, degree programmes with an assessment of entry requirements seem to perform better on all progression indicators except graduation time, compared to degrees with a numerus clausus (the comparison group). The premia are 4.2, 4, 2.5 and 4.9 pp on the percentage of regular students achieving at least 40 ECTS in the academic year, the percentage of ECTS achieved over the total, and the percentages of students obtaining at least 20 and at least 40 ECTS in the first year, respectively. In contrast, degrees with a programmed number have a 1.4 pp higher percentage of students graduating on time. However, unlike for bachelor’s degrees, entry requirements are generally binding for master’s degrees, so a numerus clausus implies higher selectivity only insofar as the number of students willing to enrol exceeds the programmed number. Another possible difference is that admission to selective degrees is generally based on standardized tests: programmed numbers are typically introduced because demand far exceeds the number of places available, and standardized tests make the selection process less cumbersome. In contrast, in courses featuring an assessment of entry requirements, selection is often based on the evaluation of application packages and interviews. Thus, the differences in outcomes between the two types of degree programmes may simply signal that selection based on standardized tests is less able to screen for the best potential students. An equally plausible explanation, however, is that more selective degrees are tougher.

In master’s degrees as well, degree programmes taught in English fare much better than programmes taught in Italian on all performance indicators. Unsurprisingly, the magnitudes are smaller than those observed for bachelor’s degrees, since students have already undergone a process of selection during their undergraduate education, but they remain remarkable. To give just two examples, degrees taught in English have a 7.7 pp higher percentage of students obtaining at least 40 ECTS in the first year and a 9 pp higher percentage of on-time graduates.

Multidisciplinarity seems to pay off in terms of student progression. Inter-class degrees have an advantage of 3, 2.5 and 3.2 pp in the percentage of regular students with at least 40 ECTS, the percentage of ECTS achieved over the total in the academic year, and the percentage of students with at least 40 ECTS in the first year, respectively. A possible explanation is that, given the peculiar nature of these degrees, which require heterogeneous interests and abilities, the students who enrol in them may be highly motivated.

As we observed for bachelor’s degrees, the composition of the teaching body matters for student progression in master’s degrees as well, with significant penalties for some categories of structured personnel compared to external personnel. The largest penalties are associated with the ‘other’ category, which, however, displays a positive premium on on-time graduation (4 pp for a 10-pp increase in the percentage of ‘other’ personnel), a positive association shared with temporary researchers (0.76 pp for a 10-pp increase in their percentage). We have already commented on possible explanations for these effects when discussing bachelor’s degrees.

The negative association between the size of the QA group and student progression observed for bachelor’s degrees is confirmed for master’s degrees, at least for the percentage of ECTS achieved (–0.1 pp for a one-unit increase in the size of the QA group) and for on-time graduation (–0.7 pp).

The analysis for master’s degrees also confirms the valuable role of tutor teachers. A higher student–tutor teacher ratio is associated with slower student progression, with significant effects on the percentage of regular students obtaining at least 40 ECTS, obtaining at least 20 ECTS in the first year and graduating on time.

We find ambiguous results for the percentage of ‘reference’ teachers in core or characterizing SSD: although it is positively associated with the percentage of students achieving at least 20 ECTS in the first year, it is negatively associated with the percentage of students completing their studies within the normal duration (–0.38 pp associated with a 10-pp increase in the explanatory variable).

Our results point to some form of complementarity between teaching quality and research quality, as measured by the research assessment (VQR) score of the personnel teaching in a degree programme. A one-point increase in the VQR score (which ranges between zero and 1.81 in our estimation sample) is associated with increases of 9.4, 6.1, 6, 10.3 and 4.7 pp in the percentage of regular students achieving at least 40 ECTS per year, the percentage of ECTS achieved over the total in the first year, the percentages of students achieving at least 20 and at least 40 ECTS in the first year, and the percentage of on-time graduates, respectively. This association is further explored in chapter “The Relationship Between Teaching and Research in the Italian University System” of this book. These are intriguing results that deserve further investigation, possibly using individual-level data. Indeed, as stressed in the literature review, evidence on the complementarity between university teaching and research is quite mixed and research on this issue is still sparse.
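To put these magnitudes on the scale of the VQR variation actually observed (zero to 1.81 in our sample), note that the implied association scales linearly; for the first indicator, for example:

$$
\Delta\bigl(\%\ \text{regular students with} \geq 40\ \text{ECTS}\bigr) = 9.4 \times \Delta \text{VQR}, \qquad \Delta \text{VQR} = 0.5 \;\Rightarrow\; \Delta \approx 4.7\ \text{pp}.
$$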

Unlike for bachelor’s degrees, we find very limited scope for positive returns from spatial competition on student progression in master’s degrees.

Finally, the results for combined bachelor’s/master’s degreesFootnote 11 are reported in Table 4.

Table 4 Student progression—combined bachelor’s/master’s degrees

We find results that are generally consistent with those for master’s degrees, for instance in terms of entry requirements, the composition of the teaching personnel and the size of the QA group. A striking difference is the penalty on almost all student progression indicators suffered by degree programmes using English as the language of instruction: the percentage of students achieving at least 40 ECTS in the first year, for instance, is 11.8 pp lower. A possible explanation is that the academic preparedness of foreign students, whose share is likely to be larger in degrees taught in English, may on average be below that of Italian students. In other words, the selection criteria applied by universities may be less effective in screening the best foreign students in combined bachelor’s/master’s degrees.

A higher percentage of teachers in the ‘core’ subjects of a degree programme is positively associated with both the percentage of regular students achieving at least 40 ECTS and the percentage of on-time graduates. Finally, unlike for master’s degrees, the intensity of spatial competition turns out to be positively related to student progression.

6.2 Student Satisfaction

Owing to the progressive establishment of quasi-markets in education, universities compete for students. With increasing competition, student satisfaction becomes key to the success of degree programmes, for instance in attracting students in general and highly qualified students in particular. Although we expect student satisfaction to partly reflect the speed of students’ careers, i.e. their progression, many more elements related to the overall ‘university experience’ enter into this judgement.

In Table 5, we explore the correlates of student satisfaction for all degree levels. Specifically, columns (1) and (2) refer to bachelor’s degrees, columns (3) and (4) to master’s degrees and the remaining columns to combined bachelor’s/master’s degrees. We find that degree selectivity is not necessarily synonymous with higher student satisfaction in bachelor’s degrees (quite the opposite, in fact). Bachelor’s degrees with entry requirements have a 1.2 pp lower percentage of graduates who state that they would re-enrol in the same programme, compared to open-access programmes. The estimated coefficient for numerus clausus programmes is non-significant, presumably owing to the low number of bachelor’s degrees with this admission type, but it is negative and sizeable in the first column. For master’s degrees, in contrast, programmes with entry requirements show a gap of –2.2 pp with respect to selective degrees (in this case, numerus clausus is the omitted category, since open admission is not allowed in master’s degrees). Master’s degrees with entry requirements also score 1.3 pp lower on the percentage of graduates who declare being generally satisfied with their degree programme compared to selective degrees.

Table 5 Student satisfaction

Quite surprisingly, degree programmes in which lecturing is in English score worse in terms of student satisfaction, irrespective of the degree level. The penalties are very large in bachelor’s degrees, where the percentage of students who declare that they would re-enrol in the same programme is 15.1 pp lower and the percentage who report being generally satisfied is 17.2 pp lower. Besides students in degrees taught in English having higher expectations (being more able, on average), another possible reading of this result is that highly internationalized degrees attract more foreign students who, bearing higher educational costs, demand higher educational standards. Finally, a certain degree of dissatisfaction, especially among international students, may be caused by teaching staff who are not always able to speak English properly. Indeed, although universities face pressure to increase their teaching supply in English in order to attract foreign students, they may lack the personnel able to do so, given the low level of internationalization of the teaching staff.

Both the results on admission criteria and those on the language of instruction are quite interesting, and they point towards a potential tension for universities between selecting top-level applicants (i.e. ‘cream skimming’, which improves their performance indicators related to student progression) and the need to provide them with a top-quality education, at the risk of leaving students unsatisfied. Students in more selective programmes are likely to develop higher expectations that their degree programmes may not meet.

As for the composition of the teaching body, the results are not consistent across degree levels. The prevalence of more senior teaching staff, namely full and associate professors, is generally negatively associated with the satisfaction indicators in bachelor’s degrees, whereas the relationship is positive for master’s degrees. A possible explanation is that more experienced teachers and researchers prefer to teach more advanced material, so their motivation and effort may be higher when they teach in master’s degrees. Alternatively, master’s students may appreciate advanced and more difficult teaching material more than their bachelor’s peers do. It is worth noting that, at least in bachelor’s degrees, we do not observe the same penalty noted for student progression in programmes employing a larger share of junior personnel: student satisfaction, measured by the percentage of those who would re-enrol, turns out to be higher (+1.2 pp for a 10-pp increase in the percentage of ‘other’ personnel).

We find no significant associations for the size of the QA group or for tutor teachers.

The percentage of teachers in ‘core’ subjects is instead strongly associated with student satisfaction, especially in bachelor’s degrees. A 10-pp increase in the percentage of teachers in core SSD is associated with premia of 2.11 pp in the percentage of students who would re-enrol and 2.14 pp in the percentage of students who declare being generally satisfied. This suggests that greater degree specialization, or a better match between teachers and the subject fields in which they teach, may increase student satisfaction.

Degree programmes employing teaching staff that perform well in research generally display higher student satisfaction. Master’s degrees with a one-point higher VQR score have 9.3 and 13.3 pp higher percentages of students who would re-enrol and who are satisfied with the degree, respectively.

Finally, spatial competition appears to be positively associated with student satisfaction only in bachelor’s degrees. A possible explanation is that master’s degrees may be quite specialized and thus subject to less spatial competition, giving them weaker incentives to improve compared to bachelor’s degrees.

7 Robustness Checks: Controlling for the Quality of Student Intake

Up to now, we have excluded measures of the ‘quality’ of student intake from the regression model. However, as we have argued, variables such as the type of access or the language of instruction partly proxy for this. In Tables C1–C4 in Appendix C, we re-estimate all models in the main text including two additional variables provided by the Ministry of Education and computed on National Student Registry data: the percentage of students coming from the academic secondary school track and the percentage of newly enrolled students with a final secondary school mark of 90 or greater (on a scale of 60–100). These are indicators of student ability and are partly correlated with family background: the Italian upper secondary system comprises three broad tracks, and students of high socio-economic status are more likely to enrol in the academic track, which is generally chosen by those planning to enter tertiary education, than in the technical or vocational tracks.
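To illustrate the mechanics of this check, the following is a minimal sketch in Python (statsmodels) of re-estimating one progression regression with and without the two intake-quality proxies. The data file, all variable names and the choice of clustering standard errors at the degree-programme level are illustrative assumptions and do not correspond to the exact specifications behind the tables:

```python
# Minimal sketch of the robustness check described above: re-estimate a
# progression regression with and without the two intake-quality proxies.
# File name, variable names and clustering choice are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("degree_programme_panel.csv").dropna()  # one row per programme-year

baseline_rhs = (
    "C(admission_type) + taught_in_english"
    " + share_full_prof + share_assoc_prof + share_other_staff"
    " + students_per_tutor + qa_group_size + C(year)"
)

# Baseline specification, as in the main text
baseline = smf.ols(f"pct_regular_40ects ~ {baseline_rhs}", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["degree_id"]}
)

# Robustness: add the share of students from the academic track and the
# share of new entrants with a final secondary school mark of 90+ (scale 60-100)
robust = smf.ols(
    f"pct_regular_40ects ~ {baseline_rhs} + pct_academic_track + pct_mark_90plus",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["degree_id"]})

# Compare the degree-programme coefficients across the two specifications
print(baseline.params.round(3))
print(robust.params.round(3))
```

If the degree-programme coefficients are stable across the two fits, as we find in Tables C1–C4, the associations in the main text are unlikely to be driven by intake quality alone.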

Consistent with the past literature, the estimates show that the two proxies of students’ academic preparedness at entry into HE are strong positive predictors of student progression at all degree levels. Yet, the coefficients on the degree-programme variables are generally unaffected. Quite interestingly, more able students, namely those coming from the academic track, also appear to be ‘pickier’ or to have higher expectations: a higher concentration of such students is associated with lower student satisfaction at the bachelor’s degree level. Interestingly, after controlling for students’ entry qualifications, the coefficient on entry test admission ceases to be statistically significant for bachelor’s degrees, while we still find a satisfaction penalty for master’s degrees. Conversely, lower average student satisfaction is still observed for degrees taught in English, at both the bachelor’s and master’s levels.

8 Concluding Remarks

The expansion of the number of university graduates in the population is one of the key objectives set by the EU. The ‘Education and Training 2020’ (ET 2020) work programme set an ambitious target: ‘The share of 30–34 year-olds with tertiary educational attainment should be at least 40%.’Footnote 12 One way of achieving this goal is to lower student dropout from higher education and to ensure satisfactory student progression. To this end, many countries have established quality assurance systems for higher education.

The existence of QA systems, coupled with greater competition for students (the quasi-market in higher education), has led Italian universities to devote increasing attention to the quality of the degree programmes they offer. Yet, a systematic analysis of the degree-programme correlates of student dropout and progression is still lacking. In this chapter, we leverage the very rich set of indicators built by the National Agency for the Evaluation of the University System and Research (ANVUR) at the degree level and seek to fill this gap by merging degree-programme-level information gathered through the programme cards (Scheda SUA-CdS) with ANVUR degree-programme performance indicators. To the best of our knowledge, this is the first analysis using degree programmes as the unit of observation and covering the complete Italian university supply (for the years 2013–2018).

Our empirical analysis identifies several degree-programme characteristics associated with student dropout and progression.

Bachelor’s degree programmes with entry requirements generally have better student progression indicators than those with open admission policies, except for graduation times. Interestingly, at the master’s level, programmes with this type of admission also exhibit better progression indicators than the (on paper) more selective programmes with numerus clausus policies. Higher selectivity is often negatively associated with student satisfaction in bachelor’s degrees, however, presumably owing to the higher expectations of enrolled students, while it is positively associated with student satisfaction in master’s degrees. A positive association with student progression is also observed for programmes taught in English (except in combined bachelor’s/master’s degrees), while a satisfaction penalty generally emerges for degrees not taught in Italian, irrespective of the degree level. We put forward that this may be partly due to part of the teaching body lacking adequate proficiency in English.

Degree-programme performance is affected by the composition of the teaching body, with programmes employing external teachers generally showing better progression indicators but not necessarily higher average student satisfaction. Programmes in which junior personnel (e.g. PhD students or postdocs) account for a larger proportion of the teaching body display slower student progression at all degree levels but higher on-time graduation rates in master’s and combined bachelor’s/master’s degrees. We argue that this can be explained by differences in teaching incentives and motivation between external and internal staff and between junior and senior staff.

Tutor teachers appear to be a valuable resource to support students’ academic careers and are generally associated with better progression indicators. Yet, those premia are not reflected in average student satisfaction.

A higher proportion of teachers in the ‘core’ subject groups of degree programmes is associated with a higher percentage of students graduating on time in combined bachelor’s/master’s degrees, whereas the association turns out to be negative for bachelor’s and master’s degrees. Counterintuitively, the same variable is associated with higher student satisfaction in bachelor’s and master’s degrees.

Our analysis points to some complementarity between research quality and teaching quality at advanced levels of tertiary education. Master’s degree programmes whose teaching body performed well in the last Italian research evaluation exercise (2011–2014) perform better in terms of both student progression and student satisfaction.

Finally, the geographical concentration of degree programmes in the same broadly defined subject groups, used as a proxy for spatial competition, is positively correlated with student progression in bachelor’s and combined bachelor’s/master’s degrees and with student satisfaction in bachelor’s degrees, suggesting that greater competitive pressure may push higher education institutions to improve the quality of the educational services they provide.

Although the richness of our data allows us to uncover many interesting associations between degree-programme characteristics and student progression and dropout, this study is descriptive in nature, and without further research these associations cannot be given a causal interpretation. Our work nonetheless provides insights that could represent a starting point for future research.