Introduction

Digital transformation of higher education (HE) comes with several different agendas and thus creates different expectations of what new technology should address. In debates, promises connected to digitalisation include the opening up of HE, improvements in administrative effectiveness, and innovation in teaching and learning. Digital technology (DT) can have transformational effects on the velocity, scope and impact of HE assessment practices, similar to those van Veldhoven and Vanthienen (2019) describe regarding the effects of DT on the business world. Digitalisation is often seen as the enabler of ‘making things better’, but as set out in several chapters in this book (for example, by Tømte and Lazareva, and by Scholkmann), it also creates new and unexpected dilemmas and changes faculty roles and responsibilities (Kirkwood & Price, 2014). One example of how digitalisation changes academics’ responsibilities is that learning platforms enable HE to open up and reach learners other than the students admitted through formal admission processes. To some extent, the opening up of universities broadens the HE mission. Outreach in the form of courses offering lifelong learning, for example MOOCs (Massive Open Online Courses), changes the way teaching and learning are planned, delivered and evaluated (Barman et al., 2019a; Tømte, 2019). Teachers in HE may find themselves in unusual situations as they are increasingly asked to popularise advanced topics and create shorter courses with condensed messages, which can be challenging (Barman et al., 2019b). The opening up of HE as a result of digital platforms also involves current initiatives that aim to create joint education offerings between universities across countries in new ways, such as The European Civic University (CIVIC) and the University Network for Innovation, Technology and Engineering (UNITE). Furthermore, students can participate in hybrid on- and off-campus lectures simultaneously and, via digital tools, collaborate with each other and with external stakeholders situated on other continents (Barman, 2021).

University teachers are known to have heavy workloads, and IT applications provide an attractive way to make everyday work faster and easier when data is automatically transferred between systems. The idea is that digitalisation offers administrative effectiveness, and thus saves time, for example regarding teachers’ work during assessments of students’ performances (Mimirinis, 2019). Increased effectiveness includes the shift from having to spend time copying exam papers to obtaining students’ answers in digital form and grading their performances based on automated assessments in IT systems. In addition, DT offers new ways of assessing students’ knowledge. The transformative potential of using digital tools in teaching and as a support in students’ learning processes is one area where the literature makes promises but empirical findings remain modest (Sweeney et al., 2017). In particular, teachers seem to maintain old habits and their views of how to assess students’ performances even though digital tools are available (Bennett et al., 2017; Deneen & Boud, 2014). Formative (ungraded) and summative (graded) assessments have major impacts on what and how students learn (Weurlander et al., 2012) and are therefore central to HE. The literature reports on efforts to innovate and, for example, change teachers’ and students’ roles in knowledge-creation (Bearman et al., 2020; Bygstad et al., 2022; Kirkwood & Price, 2014), and on a broad implementation of emergency remote teaching during the pandemic (see Chapter 12 by Wollscheid et al.). However, one important question remains: what is it that really transforms? The overall aim of this chapter is to contribute to the conversation regarding what kinds of transformations occur as a result of digitalisation of teaching and learning in Swedish higher education. The specific purpose is to illuminate digital transformation of assessment practices by exploring teachers’ experiences of using digital technology to assess students’ performances, including the planning, implementation, grading and provision of feedback.

Assessment of Students’ Performances Using Digital Technology

One major promise of digitalised assessments is to enable multimodal ways of presenting and representing knowing and knowledge, for example using sound and moving images (Selander & Kress, 2010; Timmis et al., 2016). The change in design and figuration of tasks influences what students get to experience in assessment tasks, such as being exposed to three-dimensional digital models of environments in architecture, being able to rotate mechanical constructions, or seeing films that illustrate authentic situations from business. The use of digital tools also increases students’ opportunities to present their abilities in different ways, which could fundamentally change what kind of knowledge, abilities and approaches are required and graded (Sweeney et al., 2017; Tan et al., 2020). Digitalisation may also facilitate a shift in how assessments and grading are traditionally viewed and conducted in HE (Boud et al., 2018). Research on HE learning emphasises students’ involvement in the assessment process, for example during the development of standards or when they practise their abilities to make judgements through self- and peer-assessments (Barman et al., 2022; O’Donovan et al., 2008). Such processes can be facilitated with digital platforms offering flexibility, for example with the use of quizzes or automated distribution of peer-learning tasks. In sum, digitalisation may increase the authenticity of assessment tasks and broaden students’ opportunities to make their knowing and knowledge visible, thereby increasing the ecological validity of assessments and grading in HE. However, development and transformation of assessments seem to be slow, and researchers argue that old ideas are being locked in by current digitalisation, instead of benefitting from the potential that the new era may offer (Bearman et al., 2020).

Research on HE assessments of student learning also addresses some recurrent challenges faced by teachers. These include how to assess and grade students fairly while at the same time allowing for open-ended assignments where students demonstrate their ability to integrate basic facts or science with more elaborate reasoning or problem-solving that resembles abilities required in working life (Barman et al., 2022; Epstein & Hundert, 2002; van der Vleuten et al., 2010). In general, teachers’ different epistemological views and understandings of what assessment should enable, in combination with locally embedded traditions, influence their choices of what and how to assess in students’ performances (Boud et al., 2018; Mimirinis, 2019). Examples include measurement of factual knowledge versus assessment of integrated competencies, and/or providing feedback and thus creating learning opportunities (Hodges, 2010; van der Vleuten et al., 2010).

Assessment in Swedish Higher Education

In this chapter, we studied HE assessment practices in Sweden. Swedish HE adopts a course-based system in which students’ completion of each course needs to be summatively assessed and graded (UKÄ, 2020). One course generally requires 5–10 weeks of full-time studies, but at some universities several part-time courses are offered in parallel. Obligatory course requirements such as graded assessments must be stipulated in course syllabuses, whereas additional assignments that aim to provide formative assessment of student performance are not part of the formal requirements. Despite these formal regulations, teachers sometimes include bonus systems so that students gain credits from formative assessments which are then included in course grades. Each course has a formal examiner who is responsible for the assessment, including the design of assignments, student grading and feedback. The examiner often has responsibility for the overall course design as well. In some cases, several teachers are involved in the assessment process: they provide exam questions, conduct assessments and give feedback, and provide information on student performances for grading purposes.

Theoretical Framework

We based this study on the underlying assumption that assessment constitutes social practices embedded in local contexts (Boud et al., 2018). Practice theories view practice as consisting of ‘the relations among the everyday interactions, routines and material arrangements in particular environments and forms of knowing generated from these’ (Hager et al., 2012, p. 3). In line with this, we view assessment practices as purposeful, influenced by local routines, by the available technologies and other material artefacts used, and by the views, ‘sayings and doings’ regarding assessment matters of the various people involved (teachers, students, administrators and others). Assessment of student performance in HE requires a number of decisions regarding what knowledge and knowing students should demonstrate, and regarding standards for judgement and grading. These decisions affect the format, mode and design of assignments and exams, such as the question/problem type. Furthermore, choices regarding the assessment situation are also necessary, including what resources students are allowed to use, such as literature, calculators or the internet, and the time allocated for accessing and completing assignments and tests (e.g. hours or weeks). Bearman et al. (2016) outline a practice framework for assessing students’ performances and define assessment design decisions ‘as the corpus of choices regarding assessment, made by university educators who take responsibility for the module or unit or overall program at a curricular level’ (Bearman et al., 2016, p. 548). These decisions are central aspects of assessment practices. Here we are concerned with possible changes in teachers’ design decisions, including their intentions with, and implementation of, assignments and exams.

Furthermore, we apply the concepts of convergent and divergent assessment (Torrance & Pryor, 2001) to discuss the informants’ descriptions of the results of their design decisions, namely the format, mode and character of graded and ungraded assignments and assessment tasks associated with the use of DT. Convergent assessment refers to assignments and tasks which aim to ‘find out if the learner knows, understands or can do a predetermined thing’ and is ‘characterised by detailed planning, and it is generally accomplished by closed or pseudo-open questioning and tasks’ (Torrance & Pryor, 2001, pp. 616–617). Such a perspective is associated with behaviourist views of learning and, in our view, similar to the rationales behind the psychometric tradition concerned with the reliability and validity of tests (Hodges, 2010; Torrance & Pryor, 2001). Divergent assessment involves more open questioning and tasks that are complex to perform, and aims to discover what the learner knows, understands and is capable of. In addition, divergent assessment tends to be oriented towards future development and is associated with social constructivist views of learning (Torrance & Pryor, 2001).

Method

The empirical material consisted of 12 interviews with teachers from two universities in Sweden. The teachers were recruited based on their involvement in various strategic education initiatives concerning either pedagogical development by their own choice, or digitalisation of study programmes initiated by their respective university. All teachers had experience of assessing students using digital systems, and two of the informants were involved in initiatives explicitly aimed at digitalising assessments. The sampling was designed to gain access to a broad variation of experiences, and thus the informants consisted of women (7) and men (5) who teach various subjects such as mathematics, physics, chemistry, law, language, language education, and social science research methods. Their experience as teachers ranged from 18 months to more than 20 years, and some could be considered ‘early adopters’ of educational DT, while others employed digital tools in their teaching due to the pandemic. Both universities provide various digital solutions, such as learning management systems (LMS) and specific IT systems useful for digital exams and/or automated assessment of students’ performances. The informants had experience of applying these technologies in various ways, both before and during the pandemic. The majority included automated assessments such as quizzes, open responses or online peer-assessments in their courses, and some had used on-site digital assessments as well.

Interviews were conducted face-to-face or online via Zoom and lasted between 35 and 54 minutes. Questions addressed informants’ experiences of assessing students’ performances using digital systems, for what purposes they used DT in assessments, what type of knowledge and knowing students were asked to demonstrate, and the design of assessment tasks in the digital environment. All informants consented to participate in the study prior to any data being collected. To protect the privacy of informants, quotes in the findings section of this chapter are attributed to fictitious names.

We performed a thematic analysis (Braun & Clarke, 2006) to uncover changes in the teachers’ assessment design decisions that were associated with their application of DT. During the analysis, we focused on the kinds of changes the informants described and reasoned about (manifest content). Furthermore, in accordance with thematic analysis (Braun & Clarke, 2006), we interpreted the latent meaning of these changes, which resulted in three overarching themes regarding the nature of change. These in turn relate to possible transformations in either teachers’ work processes or their design decisions.

Findings

In this section, we introduce three themes that present changes of different kinds resulting from the teachers’ use of DT when they designed, implemented and performed assessments of students’ learning: (I) Transformation of assessment processes, (II) Rethinking student competencies and learning requirements, and (III) Redesign of courses and assignments. These changes relate to either how the teachers worked (their processes) or what the teachers designed and created (the ‘products’). This section starts with an overview of the areas and nature of the changes, presented in Table 7.1. Each theme is then presented, followed by a discussion of how these changes may, or may not, be regarded as transformations.

Table 7.1 Changes of different kinds resulting from the teachers’ use of digital applications when they designed, implemented and performed assessments of students’ learning

Transformation of the Assessment Process

Teachers experienced that the use of digital systems for assessments significantly changed their work processes in different ways. What stood out was how teachers needed to carry out the planning and implementation of assignments earlier in the work process, and how each task or assignment required a greater level of detail regarding instructions and possible student solutions. The shift from paper-and-pen written exams to digitally accessible student assignments and tasks significantly streamlined the distribution of exam questions and results between teachers and students, and among teachers. Marc, a teacher in social science research methods, shared his positive view of how DT made the distribution of students’ results more efficient.

All you have to do is report it to the students. Fully automated. Then they get access to it, immediately when we hand out everything, all students get their results. (Interview 10)

At the same time, digital applications often required numerous settings or even programming skills, and in those cases the teachers had to spend considerably more time than they were used to, or had anticipated, preparing each task. Peter, a maths teacher with programming skills, believed that one of the applications used for automated assessments would help save time when assessing larger student groups, but only after working with the application for a number of years.

You have to struggle with a certain user interface. Initially it’s not very intuitive. But on the other hand, there are several tutorials you can watch, but sure, it’s not so easy in the beginning. It’s a pretty big threshold to start with. (Interview 4)

With similar experiences, Mona, who teaches language education, reasoned that automated assessment saves time for the continuous assessment of larger student groups, but that it also takes time and careful planning.

Well, if you create a quiz, I think it saves time since it’s assessed automatically. But it’s crucial to get all the settings right. We have experimented a bit with using automated assessment of open responses, using keywords. That backfired quite a bit. We had to go in and try to fix it manually. (Interview 9)

Digital systems often required additional decision-making, and teachers needed to carefully think through each possible interpretation of their instructions and problem descriptions. In cases where automated assessment of open responses was performed in the IT system, the teachers had to consider which typos students might enter, such as an extra space between words or numbers. Sally, who had experience of implementing automated assessments in maths and physics courses, explained:

In some cases, they contacted me and protested. The students had not written exactly as the system requires, and because it’s automatically assessed one must write exactly in accordance with the way the system is programmed. (Interview 1)
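To make this kind of pre-emptive decision-making concrete, the sketch below shows how an exact-match auto-grader might normalise free-text answers before comparison. It is a minimal, hypothetical illustration (not the system the informants used), and every normalisation rule in it represents an assumption about which deviations a teacher chooses to tolerate.

```python
import re

def normalise(answer: str) -> str:
    """Collapse whitespace, lowercase, and unify decimal separators so that
    trivial typing differences do not fail an exact-match comparison."""
    answer = answer.strip().lower()
    answer = re.sub(r"\s+", " ", answer)             # collapse repeated spaces
    answer = re.sub(r"(\d),(\d)", r"\1.\2", answer)  # '3,14' -> '3.14'
    return answer

def exact_match(student_answer: str, accepted_answers: list[str]) -> bool:
    """A response passes only if, after normalisation, it matches one of the
    accepted strings; anything else is automatically marked as wrong."""
    return normalise(student_answer) in {normalise(a) for a in accepted_answers}

print(exact_match("9,81  m/s^2", ["9.81 m/s^2"]))  # True once normalised
```

Each rule in such a grader (tolerating extra spaces, accepting comma decimals) is itself a design decision that must be made before the assessment takes place, which is precisely the shift in workload the teachers described.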

Mona realised that her idea of using keywords that the IT system should recognise as correct student answers did not always match how students demonstrated their knowledge.

I took the author’s name as a keyword, but not all students used this name in their answers. Two students wrote really good answers but did not mention the author by name, and their answers were not approved by the system. And I couldn’t change this manually. So, I had to e-mail them and tell them that their report says fail, but they did pass. (Interview 9)
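Mona’s keyword strategy can be sketched in the same hypothetical terms: the grader approves an answer only if it contains the pre-defined keywords, so a well-reasoned answer that paraphrases rather than names the expected term is rejected. The function and the example data below are illustrative assumptions, not her actual system.

```python
def keyword_grade(answer: str, required_keywords: list[str]) -> bool:
    """Approve an answer only if every required keyword occurs somewhere in it."""
    text = answer.lower()
    return all(keyword.lower() in text for keyword in required_keywords)

# A good answer that paraphrases instead of naming the (hypothetical) author
# fails, which is the situation Mona then had to repair by e-mail:
good_answer = "The text argues that assessment shapes how students study."
print(keyword_grade(good_answer, ["smith"]))  # False -> marked as fail
```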

Several teachers needed support during remote assessments to ensure that students who experienced problems with the technology received help. Such expertise was not always provided by the university IT support and had to be arranged locally by involving e-learning expertise. During the assessment occasions in several courses, e-learning support (2–4 persons) was available for half a day every second week during the first half of the semester. In addition, administrators and IT staff created advanced settings that enabled students with special needs to obtain the support they were entitled to. Additional support was also needed when the DT required programming skills to enter assignments into the system. In some cases, the design of assignments and questions had to be edited and adapted to the DT, in which case the programmer was involved in making design decisions regarding the problem tasks and assignments. Hence, the need to involve additional expertise in different ways changed the teachers’ role and responsibilities.

Rethinking Student Competencies and Learning Requirements

As the teachers created assignments and exam questions, they considered what forms of knowing their students had to demonstrate: for example, factual and declarative knowledge as opposed to the ability to perform procedures such as mathematical calculations, or whether students could demonstrate their skills in a different modality. Due to limitations imposed by the pandemic, the teachers needed to find new ways to assess students remotely. Mona took the opportunity to assess her students using uploaded videos in which they orally presented their skills in language education. She reflected on the importance of offering various modes of assessment to enable different ways for students to demonstrate their knowledge and abilities.

Oral examination has a greater meaning than just being a safety-enhancing measure. It’s spontaneous and under time pressure, which makes oral examination contribute other kinds of validity. […] So, it’s good with variation so there are different ways to demonstrate your knowledge. Then, the assessment might be fairer. I think the flexibility part is important, and technology can help us with that. (Interview 9)

For Marc and his colleague, who teach scientific methods, offering remote online exams forced them to rethink the requirements of learning, since students could access course literature and the internet during the exam.

There was more emphasis on providing examples […] But we’ve also increased the time students can spend when taking the exam because it also means increased demands compared to a three-hour exam taken at campus. (Interview 10)

The teachers reported that they had some scope to choose IT tools other than their respective university’s LMS or the on-campus digital exam system. The various digital tools enabled students to express their answers in different ways, such as using mathematical language with symbols and signs. Several maths and physics teachers implemented such DT in their courses. While they appreciated the opportunities for students to provide answers in correct disciplinary language (signs, symbols), they redefined what kind of knowledge the assessment should capture based on platform affordances and available functions. Several teachers reported that the available DT created limitations as to what kind of knowledge and knowing it was possible to assess. For example, no available system made it possible for students to draw graphs, or for teachers to assess students’ understanding of correct units of measurement expressed in Swedish, as Sally explained:

Since the system language is English, adjustments need to be done. For example, in cases where units of measurement are requested, “min” is not correct. In these cases, we have already written out the units and the students only need to answer with numbers. (Interview 1)

All physics and maths teachers reported that their aim was to assess students’ abilities to perform calculations, which required students to demonstrate every step of their calculations. According to the teachers, such transparency made students’ thinking visible and created opportunities to provide feedback and adjust teaching. The use of automated assessments in different IT systems meant that students were instead asked to report only the results of their calculations and problem-solving. George and Peter, who created assignments in maths and physics courses, expressed their views regarding some of the limitations of the automated assessments and how they adapted assignments accordingly.

We have tried to make them [digital quizzes] equivalent to the E-level [minimum requirements] on the final exam. But we haven’t been able to assess their abilities to solve problems or present solutions. (Interview 3)

It would be optimal if they could write a proper solution so we can assess and check that they use the correct language, refer to the right things, which theorems they refer to, and draw correct figures, that they define everything used in their calculations. The things that are missing right now in the assessment, simply. We rarely think that the numerical final value is interesting, it’s how they got there that is of interest. (Interview 4)

The teachers recognised that students’ digital competence and previous experiences of using various IT systems affected how well students performed. Margret and her colleagues, who teach physics and maths, experienced frustration when the system nomenclature differed from how signs are normally written in Swedish, for example requiring students to write a decimal point instead of a comma in numerical answers.

[System X] has many annoying features that both teachers and students are bothered with. It’s so super petty with format and how to enter numbers, it’s almost like half a programming task to answer correctly in [system X]. So you start thinking that it doesn’t entirely test the things you consider important to assess. (Interview 2)
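The friction Margret and Sally describe is essentially a parsing problem: the system grades the surface form of the answer rather than its numerical value. A hypothetical sketch of a more tolerant grader, accepting both Swedish comma decimals and dot decimals and comparing values within a tolerance, might look as follows (the function names and the tolerance are assumptions for illustration).

```python
def parse_number(raw: str) -> float | None:
    """Accept both '3.14' and the Swedish '3,14'; return None if unparsable."""
    try:
        return float(raw.strip().replace(",", "."))
    except ValueError:
        return None

def numeric_match(raw: str, correct: float, rel_tol: float = 1e-3) -> bool:
    """Grade the numerical value itself rather than its surface form."""
    value = parse_number(raw)
    return value is not None and abs(value - correct) <= rel_tol * abs(correct)

print(numeric_match("9,82", 9.82))      # True: comma decimal accepted
print(numeric_match("9.82 m/s", 9.82))  # False: units still break parsing
```

The last case shows why Sally’s team pre-wrote the units in the question text: the grader then only has to parse a bare number.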

In contrast, Mona reasoned that HE should train students’ digital literacy, and that students are expected to apply such competence in the exam or test situation; therefore, if students made mistakes due to unfamiliarity with IT, this would affect their grades. Sophia, a language teacher, designed several quizzes in her course with the dual purpose of helping students test their language knowledge and learn how to complete quizzes in the LMS. Hence, IT was not only a means to an end but also part of the intended learning.

Redesign of Courses and Assignments

The use of various digital applications for the assessment of students’ performances meant that the teachers adapted the design of their graded and non-graded assignments. This in turn encouraged teachers to consider and change the overall design of their courses. Several teachers changed the assessment mode, for example by replacing laboratory reports with automatically assessed multiple-choice questions. They also reflected upon which aspects of students’ expected competencies the available DT was more suitable for assessing. Teachers reported that it was not possible to make all kinds of knowledge, skills and approaches visible in the digital applications they used. Automated assessments, for example, were considered useful for assessing factual, non-disputable basic knowledge. To this end, the teachers created assignments such as quizzes useful for students’ continuous self-assessment, something that was implemented in the majority of the courses these teachers referred to. Here, Dina, who teaches law, explained the advantages of using DT.

This kind of formative elements… firstly, it only works using digital environments, at least with these student volumes. […] From the students’ perspective, that they can get automated feedback. They can do it anytime; they can do it several times. (Interview 11)

Most teachers reported that the application of digital assessment made them redesign course activities and assignments, for example by creating home assignments requiring deeper understanding alongside several multiple-choice questions assessing limited parts of the students’ knowing. Several teachers reported using DT to assess ‘easy-to-learn’ simpler skills continuously throughout the course while creating home assignments to capture the students’ abilities to apply knowledge. This way of combining different assessment modes was implemented by the majority of the informants in response to the different opportunities and limitations that the digital tools offered. Formative multiple-choice questions were regarded as a way to motivate students’ engagement and continuous studying throughout courses, and digital tools created an opportunity that, for reasons of time, could not have been justified without automated assessment.

Teachers were aware that students sometimes collaborated during remote assessments, or worked out maths problems using digital tools available on the internet instead of doing the calculations themselves. This made test scores unreliable. Teachers therefore introduced several measures to prevent students from sharing information in individual tasks. Such measures included randomly assigning different numerical values to the same maths problem for different students, or mixing the order in which tasks were presented. In addition, some teachers created libraries with several problem tasks so that students performed different assignments requiring similar knowledge. In these ways, teachers made additional decisions and assignments compared to conducting assessments with pen and paper.
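To illustrate the kind of additional decisions such measures entail, here is a minimal, hypothetical sketch of per-student randomisation: hashing the student and task identifiers deterministically selects one variant of the ‘same’ problem, so a student always sees the same numbers on reload while neighbouring students see different ones. All identifiers and values are invented for illustration.

```python
import hashlib

def student_variant(student_id: str, task_id: str, values: list[int]) -> int:
    """Deterministically pick one parameter value per student and task;
    hashing the pair gives a stable choice without storing per-student state."""
    digest = hashlib.sha256(f"{student_id}:{task_id}".encode()).hexdigest()
    return values[int(digest, 16) % len(values)]

# The 'same' kinematics problem rendered with different numbers per student:
speeds = [12, 15, 18, 21]  # candidate parameter values
v = student_variant("student-042", "task-3", speeds)
question = f"A car travels at {v} m/s for 8 s. How far does it travel?"
expected_answer = v * 8  # the answer key must be derived per variant too
```

Note that the answer key has to be computed for every variant, one concrete example of the extra up-front design work the teachers reported.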

Discussion

In this chapter, we illuminate transformations of assessment practices by exploring teachers’ experiences of assessing students’ performances using DT. Given the time of data collection, the redesign of courses and assessments was also influenced by the pandemic, including the necessity of assessing students’ learning remotely. Thus, DT was a condition for making remote assessment possible. The use of DT made teachers redesign assignments and courses, and they assessed other forms of knowledge and knowing than when students previously used pen and paper. Unsurprisingly, teachers’ work processes also changed. DT affected teachers’ assessment design decisions in several ways: not only regarding who was involved in making decisions, but decisions also had to be brought forward and further detailed and thought through before implementing assignments that students performed in digital systems. It can be difficult for teachers to imagine and predict exactly how students will demonstrate their knowledge, including their choice of words and possible spelling errors. Open responses or word recognition of student-made texts could enable divergent assessments where students demonstrate knowledge beyond what can be tested via multiple-choice, such as performing calculations or elaborate reasoning. Teachers who used systems for automated assessment of open responses experienced friction between their intentions and the default system settings, which did not always correspond to their ideas of how to judge student performances. Automatically assessed answers, although efficient and timesaving, do not allow for small typos and partial mastery (cf. Lesage et al., 2013), and require additional decisions, made beforehand, to adjust system settings regarding which possible errors to allow.

The idea that a digitalised assessment process saves teachers’ time often drives implementation (Bennett et al., 2017), and was also a reason why teachers in this study chose to use DT. On the one hand, the teachers experienced that the digitised parts of the process (e.g. submitting essays via the LMS) significantly changed their work, made it easier, and increased opportunities to assess large student groups. On the other hand, the digitalisation of assessments, such as automated feedback and grading based on student answers, required additional and unforeseen work, and even advanced programming skills. This contradiction between promised timesaving and workload reduction and the unexpected consequence of more time being required for planning and set-up was also described in a recent review (Brady et al., 2019). In other words, the digitalisation of assessment seems to change the work process but not necessarily into an overall more effective and efficient one. As one teacher’s reasoning in this study reveals, the initial work to implement digital assessment may pay off if the same assignments are re-used in later courses, but programming and system adjustments may not be worthwhile if the digital assessment is a one-off situation.

According to the findings of this study, when digital assessments were implemented, the final course exam was complemented with continuous assessments throughout the course, increasing the number and frequency of assignments. If DT drives assessment practices towards continuous formative and summative assessment of student learning, as in the current study, and even replaces the typical end-of-course exam, this would be a significant transformation. Students could be given continuous feedback, which we know is important for learning, and the ‘exam stress’ associated with the one-time snapshot constituted by a single test could be reduced. However, such feedback would have to be of high quality to support student understanding. One risk is that the high frequency of digital assessments will be limited to ‘pieces of knowledge’, which might signal to students that they should focus only on factual and declarative knowledge while studying. Overall, based on the teachers’ design decisions in this study, the assessments via digital tools became more convergent, making it harder for teachers to allow for variation in student responses. In contrast to the intentions reported by several teachers, the assessments became about measuring students’ fulfilment of specified and detailed outcomes of learning (‘the correct answer’) and limited teachers’ opportunities to provide supporting feedback (feedforward). As previous in-class assessment research indicates, the way teachers design assessments, i.e. the type of questions asked, sends signals to students about what counts as knowledge in a particular course (Weurlander et al., 2012). Consequently, the convergent change of assessment identified here may influence students’ views of knowledge. Also, if formative elements mainly assess students’ ‘lower levels of understanding’, or pieces of knowing according to pre-defined and static tasks, the transformation due to DT in HE may foster assignments that epistemologically move in a direction contrary to ambitions of furthering student learning. Such ambitions include equipping graduates with twenty-first-century skills and capabilities to solve multifaceted societal challenges (Barman, 2021; Barnett, 2012; Griffin & Care, 2014).

From a pedagogical perspective, the design of systems for digital assessment can be criticised for facilitating the assessment of ‘easy to measure’ pieces of knowledge (convergent assessment), for example with multiple-choice questions, at the expense of enabling assignments requiring students’ integrated and holistic knowing (divergent assessment). Bearman and colleagues argue that assessments ‘too often require a high degree of recall and offer little opportunity for student input or choice. Our overall impression is, in higher education, the digital has locked in an old set of ideas about assessment’ (Bearman et al., 2020, p. 8). Digital assessments would thereby tend to conserve existing views about the assessment of student learning rather than transform them into new ways of capturing student capabilities. Thus, for digital transformation of assessment practices to occur, we need to re-imagine what and how we assess students’ knowledge. In this study, several teachers were frustrated that students were unable to demonstrate their thinking in STEM subjects and, hence, that they themselves were unable to assess important knowledge. This implies that some of the current transformations due to the use of DT are, in some ways, moving in the wrong direction.

In addition to assessing the intended subject-area learning, it became evident that, according to the teachers, students’ familiarity with the DT, that is, their digital competence, influenced how well they performed. Several teachers found this unfair, messy and an unnecessary demand on students, while a few argued that DT should be an integrated part of learning and a requirement of what students in HE should be capable of. Like other general competencies, such as writing or presentation skills, supporting students’ digital competence will certainly be part of what teachers need to attend to if assessments are increasingly performed digitally. According to a review of the research literature, however, many university teachers experience shortcomings in handling more complex issues in the digital environment and, in general, need to improve their own digital competence (Zhao et al., 2021).

The teachers’ experiences in this study show that the use of DT during the assessment process may require increased support and collaboration in new ways. This implies that the autonomy associated with assessment decisions may decrease due to default settings in IT systems and the necessity of involving non-teaching staff. This raises the question of who should be the decision-makers and how much influence IT support, ICT staff and educational developers should have on content. Scholkmann (Chapter 6) discusses this with similar, more elaborated reasoning regarding frontline workers during HE digital transformation. Pursuing critical perspectives on edtech-driven developments in the education system as a whole, Facer and Selwyn (2021) acknowledge that teachers’ roles will undoubtedly change due to DT. However, they warn about the deprofessionalisation of teachers if technology assistants start to replace professional decision-making. From one perspective, the implementation of DT facilitates pre-defined and standardised ways of providing education and can serve as an important guarantee of quality in processes and output. In contrast, assessment design of, for example, mode and modality is more likely to address important knowing when it is based on an understanding of context, including subject-specific expertise, and is varied to meet learners with different needs (Barman et al., 2019b, 2022; Facer & Selwyn, 2021). In this study, teachers seemed grateful for help in setting up, redesigning and adapting tasks in digital environments, and some even sought support during assessments. However, the consequences associated with the transformation of academics’ roles and responsibilities due to the distribution of assessment design decisions in HE certainly need further exploration in coming years.

Digital transformation may be seen as a buzz term, and several efforts have been made to define it. Advancements due to DT refer to innovative IT, or to its effects on people’s everyday lives as well as on organisational offerings and internal operations (van Veldhoven & Vanthienen, 2019). Assessment of learning using DT may fundamentally change how HE institutions interact with society by enabling universities to provide credentials to learners other than the enrolled students. However, the current study illuminates transformational processes regarding teachers’ roles and work, in particular the assessment design decisions as defined by Bearman et al. (2016) and described in this chapter. In addition, the use of DT, partly due to the need for remote assessment, resulted in epistemic changes as to what kind of knowledge and knowing were assessed. While conducting remote assessments facilitated the redesign and implementation of divergent assignments where students were asked to apply and integrate knowledge, which pedagogically may indicate a step forward, currently available technology also enabled continuous but increasingly convergent assessments. The latter implies a transformation towards reduced transparency of students’ learning processes, hiding students’ learning issues and misunderstandings from teachers, which should be considered a step backwards. In this sense, it seems that using digital technology led to adaptation rather than innovation of assessment practices.