Curricular fit perspective on motivation in higher education

Abstract

In this article, we present a curricular perspective that can be used to understand students’ focus on assessment in higher education. We propose that the degree of alignment between the objectives and assessment of the curriculum plays a crucial role in students’ motivation. In case of perfect alignment, all objectives have an equitable probability of being assessed. Thus, all learning contributes to performance equitably. Consequently, the motivation to perform and the motivation to learn should result in the same learning behaviour and performance. However, in reality, a certain degree of cognitive and operant misalignment of the assessment with the objectives is present. Hence, some objectives will not need to be mastered in order to pass certain assessments. Consequently, a distinction arises between assessed and unassessed learning, and only the assessed learning contributes to performance. Thus, the probability of performing well on assessments is higher when students focus their effort on the assessed learning only, instead of dividing their effort between the assessed and unassessed learning. Therefore, students who are motivated to perform have a motivation that fits in a misaligned curriculum. The article concludes with implications of this curricular fit perspective for assessment practices, as well as for motivational research.

Most motivational researchers will agree that students learn best when learning tasks are considered enjoyable or interesting, as students who consider learning enjoyable or interesting will have mastery goals (Ames, 1992) and/or will be autonomously motivated (Ryan & Deci, 2000). Thereby, current motivational theories such as goal orientation theory (Ames, 1992; Dweck, 1986; Nicholls, 1984) and self-determination theory (Ryan & Deci, 2000) prescribe the optimal learning situation: educators should strive to make their curricula enjoyable or interesting in order to optimally motivate students.

However, most higher education students are more focused on their assessment performance than on their learning enjoyment or interest (Becker et al., 1968; Cilliers et al., 2010). This focus on assessment performance can take different forms: all students have the goal to pass assessments, while a subsample of students aims for higher grades (Kickert et al., 2019). We suggest that the explanation for students’ focus on assessment should not be sought in individual students, but in the way the curriculum determines, through grades, which learning is rewarded. More specifically, we posit that students with a focus on assessment performance have a motivation that fits in a misaligned curriculum. The benefit of our proposed curricular fit perspective is that the focus shifts towards the way curricula shape student motivation, directing the responsibility for student motivation towards educators. An innovative aspect of this article is that we explicitly describe under which conditions assessment harms motivation, thereby leading to concrete suggestions for how to improve student motivation through adaptation of the curriculum. Furthermore, we delineate pragmatic ways to measure student motivation.

In the following, we first describe how misalignment of assessment with the objectives of the curriculum comes to occur. Secondly, we present our conceptualisation of motivation. Thirdly, we substantiate two mechanisms through which misalignment may affect student motivation. Fourthly, we present several curricular adaptations that can help to motivate students to learn the full curriculum, instead of only the assessed curriculum. Finally, we present implications of this curricular fit perspective for motivational research.

Curricula and alignment

Conceptions of curricula range from teacher-determined to co-created with students, product-oriented to process-oriented, and pre-determined and structured to indeterminate, non-linear and contingent (Bovill & Woolmer, 2019; Fraser & Bosanquet, 2006; Knight, 2001). Furthermore, there are competing views on the nature of curricular knowledge, ranging from positivist absolutism to relativism, with social realism in between (Morgan et al., 2017). For the current perspective we adopt a curricular alignment framework (Anderson, 2002; Biggs, 1996; Cohen, 1987) that has been associated with the teacher-determined, product-oriented, pre-determined, structured and positivist absolutist view (Bovill & Woolmer, 2019). Nevertheless, our reasoning can be extrapolated to co-created, process-oriented, indeterminate, non-linear, contingent and relativist or social realist views on curricula as well. We chose this definition of curricula because we believe it offers the most straightforward way to understand how student motivation fits in a curriculum.

For our curricular fit perspective on motivation we assume that curricula consist of three primary elements: objectives, instruction (including both instructional activities and materials), and assessments (Anderson, 2002). The objectives determine the intended outcomes of learning, instruction is the means through which these objectives should be achieved, and assessments serve to determine whether the objectives are achieved. These three elements of the curriculum are the educators’ means to affect student motivation.

Importantly, in order for education to be effective there should be alignment between the objectives, instruction, and assessment of the curriculum (Anderson, 2002; Biggs, 1996; Cohen, 1987). Alignment means that there is congruence between the objectives, instruction and assessments. In layman’s terms: instruct what you intend to teach, and test what you have taught. This means that the assessment is a random sample from the population of objectives (see Fig. 1a). As a result, in case of alignment, all objectives have an equitable probability of being assessed. Hence, all learning behaviour that was intended by the curriculum has an equitable probability to aid students’ assessment performance.

Fig. 1

A schematic representation of the relationship between alignment and the distribution of effort for students who are motivated to perform. The square represents the curricular objectives; the stars represent the assessment items. a Alignment of objectives and assessment, as the items are evenly spread across the area. b Cognitive misalignment, as the stars are not evenly spread across the objectives square. c Cognitive and operant misalignment: answering the black stars (i.e. assessment items) correctly is sufficient to pass the test. d A student who manages to focus efforts on the area within the dotted square will have better chances of performing well on the assessment items than when that student spreads efforts over the whole square. The larger dotted square represents a student who wants to get a perfect score on the assessment; the smaller dotted square represents a student who wants to pass with a sufficient grade

However, since most learning is not directly observable, in reality there will be a certain degree of misalignment of the assessment with the objectives. We will now explain why misalignment occurs, using the distinction that Cohen-Schotanus (1999) has made between cognitive and operant aspects of learning that are affected by assessment. Cognitive aspects concern the content of learning (i.e. what and how), and thus include the knowledge covered, as well as the required level of processing; operant aspects of learning refer to the amount of required learning (i.e. when and how much). We suggest that this distinction between cognitive and operant aspects can be extended to misalignment as well.

In case of cognitive misalignment, some objectives’ content will be relatively underrepresented in the assessment. Krathwohl (2002) describes Bloom’s revised taxonomy, in which the content of educational objectives can be represented in a knowledge dimension and a cognitive process dimension. The combination of these two dimensions results in an educational objective, wherein the knowledge dimension embodies the noun and the cognitive process embodies the verb. For instance, an objective for a social sciences curriculum can be that graduates can ‘apply advanced statistical designs and methods’, wherein ‘advanced statistical designs and methods’ is the knowledge, and ‘apply’ is the cognitive process. Cognitive misalignment occurs when certain knowledge or cognitive processing aspects of the objectives are inequitably represented in the assessment.

We identified several sources of cognitive misalignment based on the literature. As the whole is often greater than the sum of its parts, a first source of cognitive misalignment lies in the fragmentation of learning into smaller assessable elements (Lindquist, 1951; Sadler, 2007). Firstly, this fragmentation occurs because the curriculum is divided into separate subjects, and assessment normally takes place at the subject level. Consequently, assessments concern the subject objectives, but not the curricular objectives. Therefore, the ultimate learning objectives of the curriculum remain unassessed: ‘…the recognized ultimate objectives of instruction of individual subjects do not collectively constitute or account for the recognized ultimate objectives of the whole program of general education’ (Lindquist, 1951, p. 135). Secondly, within each subject, the fragmentation continues by deconstructing the subject objectives into smaller assessable elements, thereby further losing track of the greater whole (Sadler, 2007).

Besides fragmentation of learning into smaller assessable elements, other sources of cognitive misalignment are that some knowledge and skills are more likely to be assessed (Biggs, 1996; UNESCO, 2015), and that deep learning is often harder to assess than superficial learning (Frederiksen, 1984; Krathwohl, 2002). For instance, objectives often concern integration and forming a substantiated opinion about the subject matter. However, in many cases, multiple choice assessments are used for efficiency considerations, for example when large groups of students need to be assessed. These multiple choice assessments cannot assess whether the student can make innovative integrative connections or form their own substantiated opinion. Consequently, compared with an aligned curriculum, some aspects of learning will have an inequitable probability of being assessed in a cognitively misaligned curriculum. Because of this bias, the assessment will not be a random sample of the curricular objectives (see Fig. 1b).

In case of operant misalignment, the amount of required learning for the objectives is larger than the amount of required learning for the assessment. Although the objective is for students to fully master a certain topic, a passing grade does not require fully mastering the topic. For instance, a passing grade often requires 50 to 60% correct answers on the assessment. Consequently, on the assessment, students can afford not to have mastered certain aspects of learning, and still obtain a passing grade (see Fig. 1c).
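The operant gap can be made concrete with a back-of-the-envelope calculation. The item count and pass mark below are hypothetical illustrations, not figures from the article; integer arithmetic is used so the ceiling is exact:

```python
def min_correct_to_pass(n_items: int, pass_percent: int) -> int:
    """Smallest number of correct answers that reaches the pass mark."""
    # Integer ceiling division avoids floating-point rounding surprises.
    return -(-n_items * pass_percent // 100)

n_items = 40       # hypothetical number of items on the assessment
pass_percent = 55  # hypothetical pass mark (55% correct)

needed = min_correct_to_pass(n_items, pass_percent)
print(f"Correct answers needed to pass: {needed} of {n_items}")  # 22 of 40
print(f"Items a student can afford to get wrong: {n_items - needed}")  # 18
```

Under these assumed numbers, nearly half of the assessed material can remain unmastered while the student still obtains a passing grade.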

In sum, due to cognitive and operant misalignment, some learning that is intended by the curriculum will not need to be mastered in order to pass the assessments. Thus, within misaligned curricula, a distinction arises between assessed objectives and unassessed objectives. Before we elucidate how this distinction may affect students’ motivation, we will first discuss our operationalisation of student motivation.

Motivation to learn and motivation to perform

Studying can serve many different ends for students, such as to get an interesting job, to get a high-paying job, to impress others or themselves, to become an expert, to feel the pleasure of learning, or to feel smart. However, within each individual subject, students have only two means to achieve these ends: through learning the subject materials, and/or through performing well on the subject assessment. Thus, we posit that within each subject, students can have two motivations for studying. The motivation to learn concerns the extent to which students aim to master curricular knowledge and skills, i.e. how much students want to learn and their perseverance in achieving this learning. The motivation to perform concerns the extent to which students aim to perform on the assessment, i.e. the grades students aim for and their perseverance in attaining that grade.

The motivation to learn and motivation to perform resemble self-determination theory’s distinction between intrinsic and extrinsic motivation, respectively (Ryan & Deci, 2000). Intrinsic motivation refers to performing an activity for the inherent satisfaction of the activity itself, whereas extrinsic motivation concerns performing an activity for some separable outcome. However, within self-determination theory, the focus is on the reasons students have to learn and perform, whereas we solely focus on the extent to which students want to learn and perform. In the following, we will use our distinction between the motivation to learn and the motivation to perform to elucidate how curricula shape students’ motivation.

Motivation in a misaligned curriculum

We assume that all students aim to graduate. In order to graduate, students need to pass assessments. Accordingly, it has been observed that the lowest grade that students would be satisfied with is never below the passing grade, regardless of what that passing grade is (Kickert et al., 2019). In other words, although students differ in which grade they are satisfied with, all students are motivated to perform.

In an aligned curriculum, all learning has an equitable probability to benefit performance on the assessments. As a result, in terms of performance, whether students are motivated to learn, and/or motivated to perform, will not matter as learning is a prerequisite to perform. In other words, in a perfectly aligned curriculum, all learning contributes to performance. Thus, the motivation to perform should essentially result in the same learning behaviour as the motivation to learn, and vice versa. Conversely, in a misaligned curriculum, learning assessed objectives (we will refer to this as assessed learning) is profitable for assessment performance, but learning unassessed objectives (we will refer to this as unassessed learning) is not. There are two ways in which misalignment can affect student motivation, an active and a passive mechanism.

Active mechanism: adapting to misalignment

First, there is an active mechanism through which assessments may influence motivation: assessments determine the reward structure of the curriculum, and students may adapt their motivation in order to achieve rewards on misaligned assessments. A student who is able to focus his or her effort on assessed learning will have better chances of performing well on assessments than a student who spreads his or her efforts evenly among assessed and unassessed learning. Since, in higher education, grades are students’ only formal and institutionalized reward for learning (Becker et al., 1968), students are only rewarded for putting effort into assessed learning.

Furthermore, given that students’ time and energy are limited resources, putting effort into unassessed learning reduces the effort available for assessed learning, and thus should reduce assessment performance. Therefore, in terms of performance, a student is discouraged from putting effort into unassessed learning, because doing so lowers the chances of performing well on assessments. Indeed, Senko and Miles (2008) have reported that students who focus on personally interesting materials instead of on what the teachers find important achieve lower grades than students who do focus on what the teachers find important.
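The advantage of concentrating a limited effort budget on assessed learning can be illustrated with a minimal Monte Carlo sketch. All numbers (curriculum size, assessed pool, study budget, test length) are hypothetical assumptions chosen for illustration, not data from the literature:

```python
import random

random.seed(0)

N_OBJECTIVES = 100  # objectives in the curriculum (assumed)
N_ASSESSED = 60     # objectives that can actually appear on the test (misalignment)
BUDGET = 50         # objectives a student has the time and energy to master
N_ITEMS = 20        # items per assessment, sampled from the assessed pool
TRIALS = 5000

assessed_pool = list(range(N_ASSESSED))  # objectives 0..59 are assessable

def mean_score(study_pool: list) -> float:
    """Average fraction of items answered correctly across simulated tests."""
    total = 0.0
    for _ in range(TRIALS):
        mastered = set(random.sample(study_pool, BUDGET))
        items = random.sample(assessed_pool, N_ITEMS)
        total += sum(obj in mastered for obj in items) / N_ITEMS
    return total / TRIALS

# Motivated to perform: all effort goes to objectives that can be assessed
focused = mean_score(assessed_pool)
# Motivated to learn: effort is spread over assessed and unassessed objectives
spread = mean_score(list(range(N_OBJECTIVES)))

print(f"Focused effort: {focused:.2f}")  # close to 50/60 ≈ 0.83
print(f"Spread effort:  {spread:.2f}")   # close to 50/100 = 0.50
```

Under these assumptions, the expected score simply equals the fraction of the assessed pool the student has mastered, so the focused strategy dominates, and the gap grows as the misalignment (the share of unassessed objectives) grows.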

A necessary condition for students to be able to shape their motivation to match the misalignment is that the students have expectations of misalignment. Indeed, many students are aware that there is a conflict between learning and meeting the assessment demands (Becker et al., 1968; Cilliers et al., 2010; Öhrstedt & Scheja, 2018). In a seminal study, Snyder (1971) observed that students differentiate between the formal curriculum and what he termed the hidden curriculum. The former contains the formal requirements of the curriculum, whereas the latter denotes what is actually expected in order to perform academically (Snyder, 1971). The crucial element in the hidden curriculum is assessment (Sambell & McDowell, 1998). In addition, improving alignment is associated with improved satisfaction among students, and with an increase of the desired learning activities (Driessen & Van Der Vleuten, 2000; Newble & Jaeger, 1983). In sum, students seem to expect misalignment (Becker et al., 1968; Cilliers et al., 2010; Öhrstedt & Scheja, 2018; Snyder, 1971), and respond to it.

We can conceive of two sources of information that shape students’ expectations of misalignment. A first source of expectations of misalignment can be students’ previous experiences, both in preceding, misaligned subjects of the students’ current curriculum, as well as earlier in a student’s educational career (Boud, 1995; Sambell & McDowell, 1998). A recent study has shown that although students are not able to accurately predict their first grade at the university, predictive ability already improves considerably for the second grade (Kickert, 2020). Apparently, the first assessment helps to properly manage expectations of the assessments in the course programme. Additionally, previous experiences with assessments, also outside the curriculum, may have made the student aware that deep learning is difficult to assess. Therefore, the student can know that deep learning has an inequitable probability of being assessed.

A second source of expectations of misalignment is the implicit and explicit cues given about the assessment by the teacher. Research has shown that many students seek cues about what is more likely to feature in assessments (Becker et al., 1968; Cilliers et al., 2010; Miller & Parlett, 1974). Providing information on the assessments is often advised (Baartman et al., 2007; Broekkamp & Van Hout-Wolters, 2006), or even compulsory for teachers due to educational policy. Teachers may (be required to) communicate the assessment format during the subject, and knowing the expected demands of assessments may affect students’ learning (Baeten et al., 2010; Cilliers et al., 2010). For instance, students show differences in learning on multiple choice assessments versus essay assessments (Scouller, 1998; Stanger-Hall, 2012; Struyven et al., 2005), or on open-book versus closed-book assessments (Heijne-Penninga et al., 2008), and the type of assessment questions is associated with whether students aim for surface or deep learning while studying (Entwistle & Entwistle, 1991; Öhrstedt & Scheja, 2018; Struyven et al., 2005). For instance, when students know that the assessment will consist of questions that require reproduction of knowledge, students will aim for reproduction instead of transformation of knowledge while studying (Entwistle & Entwistle, 1991). As a consequence, the amount of effort put into learning is related to the type of assessment; students invest more effort when the assessment is deemed relevant (Preston et al., 2020). Finley and Benjamin (2012) have even shown experimentally that students adapt their memory encoding strategy to the expected demands of an upcoming assessment: students performed better when the assessment type was as expected, regardless of what that type was.
As a likely consequence of these adapted learning behaviours, students who expect to be assessed through assessments that require higher-order thinking skills have a deeper understanding of the subject matter (Jensen et al., 2014), and test performance is best when students receive the kind of assessment they expect (Lundeberg & Fox, 1991; McDaniel et al., 1994; Thiede et al., 2011).

In addition to cues about test format, students will often be aware that less than 100% mastery is sufficient to pass an assessment, as the passing grade is another cue that is generally known prior to the assessment. Teachers can also provide practice exams, or make past exams public (Öhrstedt & Scheja, 2018). Reviewing past exams has been identified as an important cue-seeking strategy that is associated with higher performance (Sebesta & Bray Speth, 2017). Additionally, material that is discussed in the lectures is deemed more likely to be assessed, especially when the frequency and intensity with which the material is discussed are high (Cilliers et al., 2010; Öhrstedt & Scheja, 2018). In summary, students have a host of informational sources to form expectations of misalignment.

We posit that the accuracy of expectations of assessment is a crucial determinant of academic performance, as students whose expectations are correct have a strong advantage over students with misguided expectations: a correct expectation of what will be assessed can help in distributing effort towards the assessed learning, and will therefore positively impact performance. Better performance increases students’ chances of ‘survival’, which brings us to the second mechanism.

Passive mechanism: misaligned selection

The second mechanism through which misalignment affects motivation is passive: assessment performance determines who is allowed to progress, and failing assessments can lead to academic dismissal (Stegers-Jager et al., 2011), which means that students with insufficient assessment performance are selected against. Thus, students have better chances of progressing and/or avoiding academic dismissal in higher education when they are motivated to perform than when they are motivated to learn. Thereby, in misaligned curricula, assessments are the motivational bottleneck: if a student does not pass the assessments, all other goals (e.g. learning how to become a good doctor or psychologist) are rendered useless as well. And indeed, students are aware that they need to survive in the short term by passing assessments, in order to reap the long-term benefits of their education (Cilliers et al., 2010). As assessments often serve to eliminate poor performers, many students are in survival mode (Backer & Lewis, 2015). And students who do not perform well enough do not ‘survive’ in the curriculum.

In addition, students who were motivated to perform on misaligned assessments in previous phases of their educational career have better chances of ever reaching higher education at all, compared to when these students would have been motivated to learn. Therefore, each time students encounter misaligned assessments in their educational careers, those students who are motivated to perform have better chances of performing well, and thus of continuing to higher levels of education. Consequently, higher education students have been subjected to a long line of motivational bottlenecks by the time they are in higher education. Thus, in addition to students actively shaping their motivation, students have also undergone a selection process that favours students who were motivated to perform.

Curricular fit

In summary, the curriculum is the educator’s tool to motivate students to learn and to perform. In case of alignment, the motivation to learn and motivation to perform have the same adaptive value for students who aim to graduate; both motivations will lead to the same performance. However, for students in a misaligned curriculum, regarding assessment performance, it is maladaptive to distribute efforts towards unassessed learning, and adaptive to focus effort on assessed learning (see Fig. 1d). We use the term adaptive to underscore the fact that students who are motivated to perform have a motivation that fits in a misaligned curriculum. This motivation should positively impact higher education students’ only form of formal rewards: grades. In addition, being motivated to perform is adaptive to increase the students’ chances of ‘survival’, i.e. passing assessments. Conversely, students who are motivated to learn unassessed objectives are more likely to fail assessments and face academic dismissal, and thus are ‘selected against’. Therefore, the larger the misalignment, the more adaptive it will be for students to be motivated to perform. Students who are motivated to perform thus have a motivation with better fit in a misaligned curriculum than students who are motivated to learn. Hence, a misaligned curriculum is implicitly encouraging students to refrain from putting effort in unassessed learning. Figure 2 depicts a visual summary of our perspective of student motivation.

Fig. 2

A conceptual model of student motivation in higher education. The degree of misalignment is represented by the surface of the unassessed objectives; in case of perfect alignment, this surface is non-existent (scenario 1). Hence, all objectives are assessed, and motivation to learn equals the motivation to perform. In case of misalignment (scenario 2), the unassessed learning does not contribute to academic performance. Consequently, the motivation to perform no longer equals the motivation to learn, and it becomes adaptive for a student to focus on the assessed learning only

Fortunately, not all students will behave in the way that the curriculum pressures them to. Although we have explained that we believe all students are motivated to perform, we are not postulating that all students only want to pass. Many students wish to perform better than satisfactory (Kickert et al., 2019), and some students will want to put effort in unassessed learning, despite the curricular pressure to refrain from doing so. In fact, unassessed learning should be highly salient for students who are mindful of long-term benefits of learning. However, these long-term benefits can only be achieved in addition to the short-term goal to perform, because poor performance can lead to academic dismissal (Stegers-Jager et al., 2011). In other words, not all students may have the luxury to invest in long-term benefits, because these students are merely trying to survive. Nevertheless, some students do have that luxury.

In addition, the curricular pressures will affect different students in diverse ways. Students will have different experiences and perceptions, i.e. a different understood curriculum, within the same created curriculum (Knight, 2001). For instance, not all students have the same perception of and reaction to assessments, to grades or to the threat of academic dismissal (Struyven et al., 2005). Students with different learning styles or approaches to learning may also respond differently to the same created curriculum. Student behaviour is thus not a consequence of the created curriculum, but of the understood curriculum. However, because we want to know what educators can do to optimally motivate students, we have focused here on the created curriculum. In other words, our aim was to present a model of curricular pressure, rather than a model of student behaviour. Hopefully, many students will withstand that pressure, and determine their own behaviour.

An analogy: training for a marathon

As an analogy, suppose Sarah is motivated to perform well on a marathon. This is an example of a situation that should have excellent alignment, as the objective (i.e. running a marathon) is congruent with the assessment performance (i.e. the finish time on a marathon). Now suppose Sarah knows the assessment will be misaligned; her marathon performance will only be assessed by measuring her time on the first half marathon. If Sarah wanted to perform as well as possible on this assessment, she would probably adapt her training to this shorter distance. The shorter this assessed distance becomes, the larger the misalignment, and the more this would affect her preparation. In an extreme case of misalignment, suppose her marathon performance would only be assessed over a hundred-meter interval. Her training would likely feature an excessive amount of explosive sprinting, and Sarah would perform much better on this assessment than if she had actually trained for the full marathon. Her expectations of the assessment would have changed her preparation, and thus given her an advantage on the assessment. This change in preparation is an adaptive response to the misaligned assessment.

Implications for education

If our curricula are indeed implicitly encouraging students not to invest effort in unassessed learning, the consequences for both students and society will be dire. Due to the focus on assessment, learning that is not (as easily) assessable runs the risk of not being done (UNESCO, 2015). As a consequence, students will graduate, but lack crucial knowledge and skills. We can conceive of a number of options to remedy this problem. A first route would run through the students: making students aware of the consequences that misalignment has for them could help students to focus on the long-term positive consequences of learning unassessed materials. However, as all students still need to pass assessments in the short term, increasing students’ awareness of misalignment may also increase students’ allocation of effort towards the assessed learning, and thereby aggravate the adverse effects of misaligned curricula. Therefore, solutions need to be sought in the curriculum.

First solution: abandoning assessment or grades

A drastic option is to abandon assessment altogether (Becker et al., 1968). However, this would lead to an educational situation in which there is no standardized information available about the level of students’ knowledge and skills. In addition, assessments can of course also be motivating for many students. Therefore, many educators will not find abandoning assessment a realistic option. However, as we have tried to substantiate above, a poorly aligned assessment can have adverse effects: the assessment not only gives distorted information about students’ knowledge and skill levels, it discourages students from performing the unassessed learning. Thereby, the assessment corrupts the learning process it was intended to monitor. Hence, the adverse effects of misaligned assessments are not to be underestimated, and abandoning these assessments should be considered.

Instead of abandoning assessment altogether, we could reconsider the attachment of grades to students’ performance (Tannock, 2017). Assessment is not equivalent to giving grades. And just as many assessments only measure a proportion of learning, grades do not fully capture assessment performance. In fact, Sadler (2014) asserted that codification of learning into the form of grading is impossible, even for pass/fail grading. Therefore, educators could give qualitative judgements, such as a verbal description of students’ understanding of different topics, instead of grades. For an explanation of the reasons behind and method for qualitative judgements in workplace learning, see Govaerts and Van der Vleuten (2013).

Alternatively, we could strive to lower the importance of grades. According to Campbell’s law, ‘the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor’ (Campbell, 1976, p. 49). Thus, reducing the consequences of grades could prove beneficial to our educational system. A practical way to lower the stakes for individual assessments is to assess more often, with resulting lower stakes attached to each individual assessment (Van der Vleuten et al., 2012). This does not mean ‘assess more’, but ‘assess smaller portions, more often’.

Second solution: improving assessment

In addition to (partly) abandoning assessment or grades, assessment practices can be improved. First and foremost, this means we should strive to optimize alignment in our curricula. In essence, aligning assessments with curricular objectives means that the learning behaviour that was intended by the curricular objectives is rewarded by the assessments. A practical tool that can be used to assess cognitive alignment is Bloom’s revised taxonomy (Krathwohl, 2002). Both the educational objectives and the assessments can be placed in a table that consists of the knowledge dimension and cognitive process dimension of Bloom’s taxonomy (Anderson, 2002). Then, the tables for the objectives and assessments can be compared in order to see which objectives are underrepresented in the assessment. Regarding operant alignment, educators need to assess whether the performance standards (i.e. grade required to pass) on the assessments are appropriate to determine whether the objectives have sufficiently been mastered. A necessary condition to improve alignment would be that educators receive the appropriate training, and are granted enough time to invest in improving their assessment practices.
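The table comparison described above can be sketched in a few lines. The objectives and items below are invented for illustration only; each is coded as a cell of Bloom’s revised taxonomy, i.e. a (knowledge dimension, cognitive process) pair, after which objective cells that are absent from the assessment reveal cognitive misalignment:

```python
from collections import Counter

# Hypothetical codings: (knowledge dimension, cognitive process)
objectives = [
    ("factual", "remember"),
    ("conceptual", "understand"),
    ("procedural", "apply"),
    ("conceptual", "evaluate"),
]
assessment_items = [
    ("factual", "remember"),
    ("factual", "remember"),
    ("conceptual", "understand"),
    ("procedural", "apply"),
]

objective_cells = Counter(objectives)
item_cells = Counter(assessment_items)

# Cells that appear in the objectives but never in the assessment
# indicate cognitive misalignment.
unassessed = [cell for cell in objective_cells if cell not in item_cells]
print("Underrepresented cells:", unassessed)
# → [('conceptual', 'evaluate')]
```

In practice the same tallies can also be compared proportionally (e.g. a cell that holds 25% of the objectives but only 5% of the items), rather than only checking for empty cells as this sketch does.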

Second, we should raise educators’ awareness of the fact that assessments are a fundamental part of our curricula, and thus serve more purposes than measurement alone (Boud et al., 2018; Schuwirth & Van Der Vleuten, 2004). In particular, despite the strong traditional focus on assessment’s reliability and validity (Boud, 1995), educators should also be aware of the motivational consequences of assessments. If the assessment solely rewards superficial learning, students are implicitly discouraged from engaging in deep learning. Gibbs and Simpson (2005) have even argued that ‘…we should design assessment, first, to support worthwhile learning, and worry about reliability later’ (p. 3). One way to support variation in learning is to increase the variation in types of assessment (Broekkamp & Van Hout-Wolters, 2006). For instance, assessments of individual subjects which only concern subject objectives could be supplemented by assessments of curricular objectives, such as progress testing (Van der Vleuten et al., 1996).

Third, the prevailing view on assessment is one of damage control, in which assessments serve to exclude poor performers (Backer & Lewis, 2015). However, assessments also have the potential to inform, to make students push their boundaries and to be a force of positive change. In other words, educators need to reflect on whether they are assessing to find out what students do not know, or in order to elucidate what students do know. Again, this entails a shift from seeing assessments merely as evaluative tools, towards seeing assessments as educational tools.

Fourth, one particular way to improve assessment in cases where aligned assessment of outcomes is impossible is to set process learning goals instead of outcome learning goals. Consequently, the process needs to be assessed, rather than the outcomes. For instance, instead of setting the outcome goal for students to learn to apply critical thinking, the goal could be to actively practice applying critical thinking. The assessment of the process of practicing is often more feasible than the assessment of the outcome. And as long as the process is right, the outcomes will follow (Knight, 2001). Or in this instance, as long as students practice critical thinking, they will improve at it.

Fifth, instead of viewing assessment as the ‘finish line’ of a subject, the importance of the ‘cooling down’ could be reconsidered. Less cryptically, this means that exam reviews could be made a more fundamental part of the curriculum. Then, instead of students just knowing their grades, students could regularly reflect on which content was or was not mastered, based on the assessment performance. Which questions were answered correctly, which were not, and why? Making this reflection a customary part of the curriculum could aid all stakeholders in realizing that each assessment is not the endpoint of the learning experience, but a checkpoint somewhere along the way. Consequently, the distinction between formative and summative assessment, i.e. assessment for learning and assessment of learning, would cease to exist (Taras, 2005). In essence, all the above-mentioned ways to increase the quality of assessment require an increased self-reflection among educators on the possible influences of their assessments. This reflection requires time and energy.

Third solution: counter strategic effort

Given that perfect alignment is often an overly optimistic goal, a final resort may be to make it harder for students to be strategic in allocating their effort towards assessed learning, and not towards unassessed learning. Although transparency is often considered a quality criterion for assessments (Baartman et al., 2007), explicating detailed and transparent criteria of assessment can lead to assessment completely dominating the learning experience: assessment as learning (Torrance, 2007). As a possible remedy, students’ expectations of cognitive misalignment can be obstructed, simply by not telling them how they will be assessed. If students know as little as possible about the assessment, they cannot adjust their preparation and assessment behaviour to those expectations either. Cilliers et al. (2010) observed that students were less likely to neglect certain learning tasks when the assessors were perceived as less predictable.

For instance, suppose a course programme has eight courses, but it is unfeasible to give oral or essay assessments to all students for each course; at least six courses need to be assessed through multiple choice assessments. Educators can then randomly pick a sample of students for oral assessments and essays at the end of each course, so that by the end of the year, each student has had at least one oral examination, at least one essay, and at most six multiple choice assessments. More importantly, in each course, students will have had to study well enough to pass all three kinds of assessment.
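Such a sampling scheme can be sketched as follows. This is a hypothetical illustration only (the course names, student identifiers, and the function `assign_formats` are our own); it assigns each student one randomly chosen oral course and one essay course, with the remaining six assessed by multiple choice:

```python
import random

COURSES = [f"course_{i}" for i in range(1, 9)]  # the eight courses in the programme

def assign_formats(student_ids, seed=None):
    """For each student, randomly pick which course is assessed orally and
    which by essay; the remaining six courses use multiple choice.
    Because the draw is per student, the format of any given course is
    unpredictable, so students must prepare for all three formats."""
    rng = random.Random(seed)
    plan = {}
    for student in student_ids:
        oral, essay = rng.sample(COURSES, 2)  # two distinct courses
        plan[student] = {
            course: ("oral" if course == oral else
                     "essay" if course == essay else
                     "multiple choice")
            for course in COURSES
        }
    return plan

plan = assign_formats(["s1", "s2", "s3"], seed=42)
```

A refinement in practice might balance the draw so that each course receives roughly equal numbers of oral and essay candidates, keeping assessors’ workload even across courses.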

The expectations of operant misalignment can be obstructed as well, by not setting the performance standard before the assessment. If students know that 60% of the assessment items need to be correct to pass an assessment, preparation for the assessment can be adapted to this standard. For instance, deep learning can be omitted because superficial learning will suffice for a passing grade. Although it seems fair to give students all available information, not setting the performance standard may actually stimulate students to unleash their full potential, instead of unleashing their potential only up to the point that the educator deems sufficient. Educators may even consider letting go of quantitative strategies to summarize assessment data, and using expert judgement instead (Van der Vleuten et al., 2012).

So, in terms of the marathon analogy, a first option would be to just let students run the marathon, without measuring the finish time. The second option is to measure someone’s marathon aptitude by assessing the full marathon. However, if for some reason only an interval can be assessed, many of the adverse effects of this misalignment could be circumvented by the third option: not informing the runners about which interval will be assessed, or what time is considered to be sufficient.

Implications for motivational research

A first implication of our perspective for research on motivation is that we expect that the adverse effects of assessment on motivation are a consequence of misalignment. A well-known observation in motivational research is that extrinsic motivators such as assessments seem to have detrimental effects on students’ intrinsic motivation (Deci et al., 1999; Harlen & Crick, 2003). We have presented a possible mechanism through which these effects can occur, and thus hypothesize that the reason for these detrimental effects lies in the misalignment of assessment with the objectives. Thus, if the assessment is perfectly aligned, we predict that assessment will not damage motivation. This prediction can be empirically investigated by measuring students’ motivation under various degrees of expected misalignment.

A second implication concerns the measurement of motivation. The two concepts motivation to learn and motivation to perform are highly similar to the concepts intrinsic and extrinsic motivation in self-determination theory (Ryan & Deci, 2000) and mastery and performance goals in goal orientation theory (Ames, 1992; Dweck, 1986; Nicholls, 1984). However, within these theories, motivation is measured by asking for the reasons students have to learn and perform. For instance, in the Academic Motivation Scale, an example item for extrinsic motivation is ‘Why do you go to college? In order to obtain a more prestigious job later on’ (Vallerand et al., 1992). An example item for a performance goal from the Patterns of Adaptive Learning Scales is ‘One of my goals is to look smart in comparison to the other students in my class’ (Midgley et al., 2000). Although in different ways, both scales focus on the reasons for pursuing certain educational activities. Instead, we suggest that what essentially matters is not why students are motivated to learn or perform, but how much effort students are willing to invest. We believe our stance is supported by the fact that the two motivational factors that show the strongest association with academic performance are students’ performance self-efficacy and grade goals (Richardson et al., 2012; Schneider & Preckel, 2017). Performance self-efficacy refers to the grades students expect to obtain, and grade goals are the grades students want to obtain (Richardson et al., 2012). These two factors both concern the ‘how much’ of motivation, instead of the ‘why’.

The third implication for research also concerns the measurement of motivation. In a (hypothetical) perfectly aligned curriculum, the assessment is a perfect reflection of learning. Thus, in order to measure motivation, researchers only need to measure the motivation to perform or the motivation to learn, as both motivations will result in the same learning behaviour. However, in a (realistic) misaligned curriculum, researchers need to differentiate between the motivation to perform and the motivation to learn. The motivation to perform concerns the answer to the question ‘to what extent do you want to do the assessed learning?’. The motivation to learn is essentially about answering the additional question ‘to what extent do you want to do the unassessed learning?’ (see Fig. 2 for a visual illustration). However, asking this second question means we would assume that students are perfectly aware of misalignment. Therefore, an essential question that needs answering first is ‘how well is the student able to predict which learning will be assessed and which will not?’.

Conclusion

In conclusion, we have presented a curricular fit perspective on motivation in higher education, by which we explain why it is more adaptive for students to be motivated to perform than to be motivated to learn in a misaligned curriculum. Thereby, we do not advocate an educational system that ignores students’ interest, enjoyment or enthusiasm for learning. On the contrary, the tremendous benefits of enjoying an activity (Woolley & Fishbach, 2017), or of being intrinsically motivated (Cerasoli et al., 2014), are not under dispute. The point we have tried to make is that many contemporary curricula only reward the learning of assessed materials, and thereby implicitly discourage students from learning unassessed materials. Our assessment-minded educational system is pressuring students to be primarily motivated to perform.

This curricular fit perspective contributes to the literature on motivation in several ways. Firstly, we described the distinction between cognitive and operant misalignment, and presented concrete sources of both. Secondly, we delineated two processes through which misaligned assessments may harm student motivation: adaptation and selection. Thirdly, we offered suggestions to alter assessment practices and counter these harmful effects. Fourthly, we proposed how motivation can be measured in a pragmatic way. In our view, the most important benefit of conceptualizing motivation from a curricular perspective is that this puts the focus on those aspects of motivation that we can improve through our curricula. In other words, students’ motivation is a reflection of the curriculum. If many students are not motivated to master all objectives, think critically, or show deep processing, the most likely explanation is that the curriculum is not motivating students to do so. Analogously, when scoring an exam, students’ mistakes can be seen as a sign of what students need to learn better; however, if many students make a certain mistake, this should be seen as a sign of what the teacher needs to teach better. Educators have the privilege to shape curricula, and thereby create their students’ motivational context. Consequently, there are no good or bad kinds of motivation, just good or bad curricula.

Notes

  1. This is not a binary dichotomy, but rather a continuum of objectives with a very high probability of being assessed on the one end, and objectives with a very low probability of being assessed on the other end.

References

  1. Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84(3), 261–271.

  2. Anderson, L. W. (2002). Curricular alignment: A re-examination. Theory Into Practice, 41(4), 255–260. https://doi.org/10.1207/s15430421tip4104_9

  3. Baartman, L. K. J., Bastiaens, T. J., Kirschner, P. A., & Van der Vleuten, C. P. M. (2007). Evaluating assessment quality in competence-based education: A qualitative comparison of two frameworks. Educational Research Review, 2(2), 114–129. https://doi.org/10.1016/j.edurev.2007.06.001

  4. Backer, D. I., & Lewis, T. E. (2015). Retaking the test. Educational Studies, 51(3), 193–208. https://doi.org/10.1080/00131946.2015.1033524

  5. Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning environments to stimulate deep approaches to learning: Factors encouraging or discouraging their effectiveness. Educational Research Review, 5(3), 243–260. https://doi.org/10.1016/j.edurev.2010.06.001

  6. Becker, H. S., Geer, B., & Hughes, E. C. (1968). Making the grade: The academic side of college life. Wiley.

  7. Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. https://doi.org/10.1007/BF00138871

  8. Boud, D. (1995). Assessment and learning: Contradictory or complementary? In P. Knight (Ed.), Assessment for Learning in Higher Education (pp. 35–48). Kogan Page.

  9. Boud, D., Dawson, P., Bearman, M., Bennett, S., Joughin, G., & Molloy, E. (2018). Reframing assessment research: Through a practice perspective. Studies in Higher Education, 43(7), 1107–1118. https://doi.org/10.1080/03075079.2016.1202913

  10. Bovill, C., & Woolmer, C. (2019). How conceptualisations of curriculum in higher education influence student-staff co-creation in and of the curriculum. Higher Education, 78(3), 407–422. https://doi.org/10.1007/s10734-018-0349-8

  11. Broekkamp, H., & Van Hout-Wolters, B. H. A. M. (2006). Students’ adaptation of study strategies when preparing for classroom tests. Educational Psychology Review, 19(4), 401. https://doi.org/10.1007/s10648-006-9025-0

  12. Campbell, D. T. (1976). Assessing the impact of planned social change. Retrieved April 1, 2019, from http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=8083EC66C38CAA01FC82574D7D06C37C?doi=10.1.1.170.6988&rep=rep1&type=pdf.

  13. Cerasoli, C. P., Nicklin, J. M., & Ford, M. T. (2014). Intrinsic motivation and extrinsic incentives jointly predict performance: A 40-year meta-analysis. Psychological Bulletin, 140(4), 980–1008. https://doi.org/10.1037/a0035661

  14. Cilliers, F. J., Schuwirth, L. W., Adendorff, H. J., Herman, N., & van der Vleuten, C. P. M. (2010). The mechanism of impact of summative assessment on medical students’ learning. Advances in Health Sciences Education, 15(5), 695–715. https://doi.org/10.1007/s10459-010-9232-9

  15. Cohen, S. A. (1987). Instructional alignment: Searching for a magic bullet. Educational Researcher, 16(8), 16–20. https://doi.org/10.3102/0013189X016008016

  16. Cohen-Schotanus, J. (1999). Student assessment and examination rules. Medical Teacher, 21(3), 318–321. https://doi.org/10.1080/01421599979626

  17. Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125(6), 627–668. https://doi.org/10.1037/0033-2909.125.6.627

  18. Driessen, E., & Van Der Vleuten, C. P. M. (2000). Matching student assessment to problem-based learning: Lessons from experience in a law faculty. Studies in Continuing Education, 22(2). https://doi.org/10.1080/713695731.

  19. Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41(10), 1040.

  20. Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher Education, 22(3), 205–227. https://doi.org/10.1007/BF00132288

  21. Finley, J. R., & Benjamin, A. S. (2012). Adaptive and qualitative changes in encoding strategy with experience: Evidence from the test-expectancy paradigm. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(3), 632–652.

  22. Fraser, S. P., & Bosanquet, A. M. (2006). The curriculum? That’s just a unit outline, isn’t it? Studies in Higher Education, 31(3), 269–284. https://doi.org/10.1080/03075070600680521

  23. Frederiksen, N. (1984). The real test bias: Influences of testing on teaching and learning. American Psychologist, 39(3), 193–202. https://doi.org/10.1037/0003-066X.39.3.193

  24. Gibbs, G., & Simpson, C. (2005). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1, 3–31.

  25. Govaerts, M., & Van der Vleuten, C. P. M. (2013). Validity in work-based assessment: Expanding our horizons. Medical Education, 47(12), 1164–1174. https://doi.org/10.1111/medu.12289

  26. Harlen, W., & Crick, R. D. (2003). Testing and motivation for learning. Assessment in Education: Principles, Policy & Practice, 10(2), 169–207. https://doi.org/10.1080/0969594032000121270

  27. Heijne-Penninga, M., Kuks, J. B. M., Hofman, W. H. A., & Cohen-Schotanus, J. (2008). Influence of open- and closed-book tests on medical students’ learning approaches. Medical Education, 42(10), 967–974. https://doi.org/10.1111/j.1365-2923.2008.03125.x

  28. Jensen, J. L., McDaniel, M. A., Woodard, S. M., & Kummer, T. A. (2014). Teaching to the test…or testing to teach: Exams requiring higher order thinking skills encourage greater conceptual understanding. Educational Psychology Review, 26(2), 307–329. https://doi.org/10.1007/s10648-013-9248-9

  29. Kickert, R. (2020). Raising the bar: Higher education students’ sensitivity to the assessment policy. [Doctoral dissertation, Erasmus University Rotterdam]. EUR repository. https://repub.eur.nl/pub/134032. Accessed 27 Jan 2021.

  30. Kickert, R., Meeuwisse, M., Stegers-Jager, K. M., Koppenol-Gonzalez, G. V., Arends, L. R., & Prinzie, P. (2019). Assessment policies and academic performance within a single course: The role of motivation and self-regulation. Assessment & Evaluation in Higher Education, 44(8), 1177–1190. https://doi.org/10.1080/02602938.2019.1580674

  31. Knight, P. T. (2001). Complexity and Curriculum: A process approach to curriculum-making. Teaching in Higher Education, 6(3), 369–381. https://doi.org/10.1080/13562510120061223

  32. Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218.

  33. Lindquist, E. F. (1951). Preliminary considerations in objective test construction. Retrieved October 7, 2019, from https://canvas.harvard.edu/courses/33644/files/5027562.

  34. Lundeberg, M. A., & Fox, P. W. (1991). Do laboratory findings on test expectancy generalize to classroom outcomes? Review of Educational Research, 61(1), 94–106. https://doi.org/10.3102/00346543061001094

  35. McDaniel, M. A., Blischak, D. M., & Challis, B. (1994). The effects of test expectancy on processing and memory of prose. Contemporary Educational Psychology, 19(2), 230–248. https://doi.org/10.1006/ceps.1994.1019

  36. Midgley, C., Maehr, M. L., Hruda, L. Z., Anderman, E., Anderman, L., Freeman, K. E., & Urdan, T. (2000). Manual for the Patterns of Adaptive Learning Scales (PALS). University of Michigan.

  37. Miller, C., & Parlett, M. (1974). Up to the mark: A study of the examination game. Society for Research into Higher Education.

  38. Morgan, J., Hoadley, U., & Barrett, B. (2017). Chapter 1—Introduction: Social realist perspectives on knowledge, curriculum and equity. In B. Barrett, U. Hoadley, & J. Morgan (Eds.), Knowledge, Curriculum and Equity: Social Realist Perspectives. Routledge. https://doi.org/10.4324/9781315111360

  39. Newble, D. I., & Jaeger, K. (1983). The effect of assessments and examinations on the learning of medical students. Medical Education, 17(3), 165–171. https://doi.org/10.1111/j.1365-2923.1983.tb00657.x

  40. Nicholls, J. G. (1984). Achievement motivation: Conceptions of ability, subjective experience, task choice, and performance. Psychological Review, 91(3), 328–346. https://doi.org/10.1037/0033-295X.91.3.328

  41. Öhrstedt, M., & Scheja, M. (2018). Targeting efficient studying – First-semester psychology students’ experiences. Educational Research, 60(1), 80–96. https://doi.org/10.1080/00131881.2017.1406314

  42. Preston, R., Gratani, M., Owens, K., Roche, P., Zimanyi, M., & Malau-Aduli, B. (2020). Exploring the impact of assessment on medical students’ learning. Assessment & Evaluation in Higher Education, 45(1), 109–124. https://doi.org/10.1080/02602938.2019.1614145

  43. Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353–387. https://doi.org/10.1037/a0026838

  44. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68

  45. Sadler, D. R. (2007). Perils in the meticulous specification of goals and assessment criteria. Assessment in Education: Principles, Policy & Practice, 14(3), 387–392. https://doi.org/10.1080/09695940701592097

  46. Sadler, D. R. (2014). The futility of attempting to codify academic achievement standards. Higher Education, 67(3), 273–288. https://doi.org/10.1007/s10734-013-9649-1

  47. Sambell, K., & McDowell, L. (1998). The construction of the hidden curriculum: Messages and meanings in the assessment of student learning. Assessment & Evaluation in Higher Education, 23(4), 391–402. https://doi.org/10.1080/0260293980230406

  48. Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600. https://doi.org/10.1037/bul0000098

  49. Schuwirth, L., & Van Der Vleuten, C. (2004). Merging views on assessment. Medical Education, 38(12), 1208–1210. https://doi.org/10.1111/j.1365-2929.2004.02055.x

  50. Scouller, K. (1998). The influence of assessment method on students’ learning approaches: Multiple choice question examination versus assignment essay. Higher Education, 35(4), 453–472. https://doi.org/10.1023/A:1003196224280

  51. Sebesta, A. J., & Bray Speth, E. (2017). How should I study for the exam? Self-regulated learning strategies and achievement in introductory biology. CBE—Life Sciences Education, 16(2), ar30. https://doi.org/10.1187/cbe.16-09-0269

  52. Senko, C., & Miles, K. M. (2008). Pursuing their own learning agenda: How mastery-oriented students jeopardize their class performance. Contemporary Educational Psychology, 33(4), 561–583. https://doi.org/10.1016/j.cedpsych.2007.12.001

  53. Snyder, B. R. (1971). The hidden curriculum. Knopf.

  54. Stanger-Hall, K. F. (2012). Multiple-choice exams: An obstacle for higher-level thinking in introductory science classes. CBE—Life Sciences Education, 11(3), 294–306. https://doi.org/10.1187/cbe.11-11-0100

  55. Stegers-Jager, K. M., Cohen-Schotanus, J., Splinter, T. A. W., & Themmen, A. P. N. (2011). Academic dismissal policy for medical students: Effect on study progress and help-seeking behaviour. Medical Education, 45(10), 987–994. https://doi.org/10.1111/j.1365-2923.2011.04004.x

  56. Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 325–341. https://doi.org/10.1080/02602930500099102

  57. Tannock, S. (2017). No grades in higher education now! Revisiting the place of graded assessment in the reimagination of the public university. Studies in Higher Education, 42(8), 1345–1357. https://doi.org/10.1080/03075079.2015.1092131

  58. Taras, M. (2005). Assessment—Summative and formative—Some theoretical reflections. British Journal of Educational Studies, 466–478.

  59. Thiede, K. W., Wiley, J., & Griffin, T. D. (2011). Test expectancy affects metacomprehension accuracy. British Journal of Educational Psychology, 81(2), 264–273. https://doi.org/10.1348/135910710X510494

  60. Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post‐secondary education and training can come to dominate learning. Assessment in Education: Principles, Policy & Practice, 14(3). https://www.tandfonline.com/doi/full/10.1080/09695940701591867.

  61. UNESCO, International Bureau of Education. (2015). Student learning assessment and the curriculum: Issues and implications for policy, design and implementation. Current and Critical Issues in the Curriculum and Learning Series. Retrieved September 6, 2017, from: http://www.ibe.unesco.org/en/document/student-learning-assessment-and-curriculum-issues-and-implications-policy-design-and.

  62. Vallerand, R. J., Pelletier, L. G., Blais, M. R., Briere, N. M., Senecal, C., & Vallieres, E. F. (1992). The academic motivation scale: A measure of intrinsic, extrinsic, and amotivation in education. Educational and Psychological Measurement, 52(4), 1003–1017. https://doi.org/10.1177/0013164492052004025

  63. Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Dijkstra, J., Tigelaar, D., Baartman, L. K. J., & van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34(3), 205–214. https://doi.org/10.3109/0142159X.2012.652239

  64. Van der Vleuten, C. P. M., Verwijnen, G. M., & Wijnen, W. H. F. W. (1996). Fifteen years of experience with progress testing in a problem-based learning curriculum. Medical Teacher, 18(2), 103–109. https://doi.org/10.3109/01421599609034142

  65. Woolley, K., & Fishbach, A. (2017). Immediate rewards predict adherence to long-term goals. Personality and Social Psychology Bulletin, 43(2), 151–162. https://doi.org/10.1177/0146167216676480

Acknowledgements

We would like to thank Patrick Kickert, Brian P. Godor, Loïs Schenk and Işıl Sincer for their constructive feedback.

Author information

Corresponding author

Correspondence to R. Kickert.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Kickert, R., Meeuwisse, M., Stegers-Jager, K.M. et al. Curricular fit perspective on motivation in higher education. High Educ (2021). https://doi.org/10.1007/s10734-021-00699-3

Keywords

  • Motivation
  • Higher education
  • Alignment
  • Curricular fit
  • Motivation to learn
  • Motivation to perform