Introduction

Vocational education (VE) is a complex and varied domain that encompasses many occupational fields and a diverse range of systems across different countries (De Bruijn et al., 2017). One of the core aims these fields and systems share is to educate students for occupations by developing their capacities to become or remain successful practitioners (Billett, 2011). This article focuses specifically on VE that prepares students for working life through formal education, encompassing any educational programme that qualifies students for their future as practitioners, from ISCED level 3 (e.g. childcare and carpentry) up to ISCED level 7 (e.g. dentistry and law). During their education, students in these programmes are typically afforded work experience in the form of work placements or apprenticeships, and thus learn both in the context of school and in the context of the workplace (Billett, 2014; Schaap et al., 2012). The contribution of work placements or apprenticeships to the complete curriculum can vary in VE, depending on the educational system or the occupational field. Even though workplace learning is essential to VE, it often accounts for a relatively small part of the curriculum. At the workplace, students are usually assigned a workplace professional who takes on the role of educator and assessor. In different fields and systems this role has different names, such as supervisor, preceptor, coach or mentor (Ceelen et al., 2021). In this paper we refer to the workplace professional in their role as educator and assessor as the workplace educator.

When students in vocational education complete their programme successfully, they receive a diploma and are considered ready to start working life. To complete their education successfully, students need to pass assessments that evaluate the outcomes of their learning in school, but also the outcomes of their learning at the workplace. However, assessing performance at the workplace differs from assessing what students learn and do at school. Compared to a school setting, the workplace is a much less controlled context for assessment, since workplace circumstances are likely to be unpredictable (Gulikers et al., 2015). Furthermore, day-to-day work determines what students do and learn at the workplace, much more so than contents and activities prescribed by school (Billett, 2011). To accommodate both the unpredictability of the workplace as a context for assessment and the significance of day-to-day work, we choose to apply a sociocultural perspective. This perspective allows us to characterise student learning at the workplace as participatory and situated, implying that learning is intricately connected to the everyday activity of the workplace (Lave & Wenger, 1991). In line with this, we consider student participation in the workplace as crucial to learning for future occupations (Billett, 2004; Guile & Griffiths, 2001; Tynjälä, 2008). Students thus learn through participation in the unpredictable context of day-to-day work, and their learning is social and relational. Extending this view on workplace learning, we can also assert that as students learn, they become part of a vocational community of practice through a process of belonging, becoming and ultimately being (Chan, 2019; Colley et al., 2003). Their learning therefore reaches beyond achieving the goals set by school. In sum, when we take a sociocultural perspective on learning at the workplace, we can assert that where students learn, how they learn and what they learn differ from the school context. This raises the question of what this means for the assessment of student performance at the workplace.

At the workplace, workplace educators play an essential role in the assessment of workplace performance. These educators are primarily professionals at the workplace who take on the additional role of educator and assessor, and are tasked with guiding and evaluating student performance. They are instrumental in assessing whether or not a student has performed adequately at the workplace (Trede & Smith, 2014). Research into the assessment of workplace learning that focuses on the workplace educator is not new, but it is often concerned with assessment instruments such as portfolios and structured observations, or it takes a distinctly cognitive approach by focussing on mental processes. Research into the assessment of workplace performance thus often bypasses the situated nature of the assessment as embodied by the workplace educator (e.g. Heeneman & Driessen, 2017; Kogan et al., 2011; Paravattil & Wilby, 2019; Qi et al., 2018; Rogers et al., 2022; Schutz & Moss, 2004; Yeates et al., 2015). For example, Kogan et al. (2011) have studied how workplace educators in a medical education context form judgements about students and show that educators use multiple frames of reference. These findings are interesting, but the study is based on workplace educators assessing videotaped performances of students they do not know, thereby excluding the social and relational nature of workplace learning entirely. Qi et al. (2018) show how educators apply varying approaches to assessment in different contexts, which again seems a promising result. However, that study is ultimately concerned mainly with the effects of assessment training and does not take into account the situated and participatory nature of workplace learning. We thus conclude that insights from current research into the assessment of workplace learning are only partially relevant to furthering our understanding of how educators at the workplace assess students when we approach assessment from a sociocultural perspective. To date, no overview exists that brings together what we already know about how workplace educators assess the outcomes of workplace learning from a sociocultural perspective on assessment.

Purpose

To further our understanding of how workplace educators assess student performance from a sociocultural perspective, we conducted a qualitative systematic review aimed at answering the question: How does the assessment of students’ workplace performance in vocational education operate when it is enacted by professionals at the workplace who take on the role of educator and assessor? Our purpose is to integrate and compare insights from relevant studies in order to identify themes or constructs across these studies that will enable us to describe how workplace educators embody their assessment practices at the workplace (Grant & Booth, 2009). We want to tell the story of what assessment looks like and what educators do when assessing student performance.

Method

Search and selection

We conducted a qualitative systematic review (Booth et al., 2016; Grant & Booth, 2009) in which we applied a systematic search strategy based on keywords (Table 1) to seven databases (ERIC, PubMed, Web of Science, Academic Search Premier, Business Source Elite, Science Direct, and PsycINFO) to ensure a comprehensive body of relevant articles. The search was limited to peer-reviewed articles written in English. We conducted our first search in May 2018, followed by a supplementary search using the same keywords and databases in November 2020.

Table 1 Keywords Literature Search

Table 2 below presents the different steps in the selection process and the corresponding numbers of included and excluded articles. During step 1 (Entries), incomplete and duplicate entries were removed, as were any entries not based on English-language publications of scientific research. During step 2 (Titles), it became clear that the word ‘assessment’ also has non-educational connotations (e.g. the assessment of sedentary behaviour amongst office workers or the assessment of workplace influence on smoking cessation); all titles that clearly referred to non-educational publications were thus excluded.

Table 2 Selection process in steps and numbers

For step 3 (Abstracts) and step 4 (Full text), we assessed the relevance of the remaining 822 articles using inclusion questions (see Table 3). These questions followed from our research question and focused on how assessment works and what the workplace educator does. An article was included in the next step of the selection process as long as each question was answered ‘yes’ or ‘unsure’; once a question yielded a ‘no’ based on the more comprehensive screening of a subsequent step, the article was excluded. This resulted in 22 articles included for initial analysis. Table 3 shows an overview of the inclusion questions, the reasons for exclusion encountered in steps 3 and 4, and the number of excluded articles per reason in each step. Additional file 1 provides an overview of the occupational fields and systems represented in the included research. During the first phase of the analysis (step 5, Initial analysis), four more articles were found insufficiently relevant to the research question and were excluded from analysis. The final selection of studies included in our analysis comprised 18 articles (step 6).

Table 3 Inclusion questions, reasons for exclusion and number of excluded articles for step 3 and 4

Analysis

Since our aim is to identify potential themes and constructs across a variety of studies, this review takes an aggregative approach directed at narrative synthesis. Narrative synthesis allows us to summarise the insights from various studies and subsequently explore their heterogeneity through descriptive themes (Booth et al., 2016; Snilstveit et al., 2012). To summarise the insights from a variety of studies and develop relevant themes, we used iterative thematic analysis in four distinct phases (see Phases 1–4 below). The analysis was conducted using software suitable for qualitative data analysis (Atlas.ti 22).

Phase 1: Identifying relevant fragments

During the first phase we identified fragments that were relevant to answering our research question in the light of our sociocultural approach. We formulated three topics with nine preliminary codes to guide this phase (see Table 4). These topics and preliminary codes were based on a combination of our research question, our sociocultural perspective and the context of vocational education. After identifying relevant fragments in the first set of three articles, we coded these fragments sentence by sentence using open codes in preparation for the next phase.

Table 4 Topics and codes phase 1 data analysis

Phase 2: Developing descriptive themes

Using the open codes created at the end of phase 1, the first author created networks to organise these codes into preliminary descriptive themes representing the content of the first set of three articles (Booth et al., 2016; Thomas & Harden, 2008). The resulting descriptive themes were then organised hierarchically under the topics and codes that guided phase 1. Table 5 gives an example of what this looked like at this stage of the analysis. The preliminary themes were discussed with the second, third and fourth authors in a group session to establish shared understanding and clarity.

Table 5 Example of developing descriptive themes

Phase 3: Refining descriptive themes

The themes developed in phase 2 were applied to three more articles, during which the first author extended the thematic analysis by elaborating existing themes, adding new themes or merging themes. This phase was repeated a total of five times to analyse the full set of 18 included articles, resulting in the final set of themes. This final set was again the subject of a group discussion involving all four authors to establish shared understanding and clarity.

Phase 4: Exploring descriptive themes

To complete our analysis towards a narrative synthesis, the first author developed memos for each question from phase 1, exploring the descriptive themes by comparing and contrasting the input each article contributed. These memos formed the basis of the results presented in this article.

Quality appraisal

In line with our qualitative approach, we chose not to exclude any articles prior to analysis based on their quality (Carroll & Booth, 2015). Instead, we conducted a sensitivity analysis in which we appraised the quality of each article after our analysis (Carroll et al., 2012). We used established critical appraisal tools developed by the Joanna Briggs Institute (JBI) that were suitable for the different types of articles included in our review (see Table 6).

Table 6 JBI critical appraisal tools

Our critical appraisal identified two low-quality empirical studies, mainly owing to a lack of clarity with regard to data analysis and interpretation (for the full quality appraisal, see Additional file 2). Any result based solely on either study was reconsidered and, if not substantiated by other studies, discarded. This did not significantly influence the essence of our results.

During the selection process and our analysis we also took measures to ensure the quality of our work. In the selection process, the second author applied the inclusion questions to 200 titles, 50 abstracts, and 10 full-text articles concurrently with the first author. During each phase of the selection process, the first and second authors compared and discussed their choices, further calibrated their application of the inclusion questions and established shared reasons for excluding articles. As part of the analysis, we addressed the quality of our work with two measures. First of all, two other members of the research team, the third and fourth authors, independently coded two articles identified as core studies by the first author in the first phase of the analysis. The complete research team used their coding as input for a group discussion on the clarity and relevance of the emerging descriptive themes. Secondly, the second author coded two articles selected by the first author using the descriptive themes as developed in the first phase of analysis. The first and second authors compared and discussed the second author’s coding to further refine the descriptive themes and the focus of the analysis.

Results

The aim of our narrative synthesis was to tell the story of what assessment looks like and what educators do when assessing student performance. We present the themes that emerged from our analysis in narrative form. The story begins with an exploration of what assessment looks like across the included studies, after which we elaborate on what workplace educators do when assessing student performance. The final segment addresses what assessment practices at the workplace might look like according to the included articles. Before telling this story, we provide a descriptive overview of the included studies and how they position the workplace educator in the role of assessor. In the conclusion and discussion we return to our sociocultural perspective and discuss our findings in its light.

Characteristics of included studies

We included eighteen articles: thirteen empirical articles, four theoretical articles and one review. Additional file 3 shows the different types of data encountered in the empirical articles. Twelve articles stem from the health professions domain; other domains include art, engineering and education (De Vos et al., 2019*; Peach et al., 2014*), business (Richardson et al., 2013*) and the food production industry (Timma, 2005*). A total of fifteen articles focus on higher professional education, whereas the remaining three concern senior secondary vocational education (De Vos et al., 2019*), secondary education (Berg et al., 2007*) and workplace qualifications (Timma, 2005*). All articles position workplace learning and its assessment entirely at the workplace, but several publications stress either the integration of school and work or the necessity of collaboration between school and work. Twelve articles approach assessment as a practice that both stimulates learning and involves making pass/fail decisions. Peach et al. (2014*) and Richardson et al. (2013*) are dedicated to assessment as learning and focus specifically on the feedback process. Duijn et al. (2018*), Gauthier et al. (2018*), O’Connor et al. (2019*) and Toohey et al. (1996*) focus almost exclusively on making assessment decisions. The included studies indicate that giving workplace educators a role as assessors can create issues. First of all, an educator’s judgement process is inherently imperfect (Lee et al., 2019*; Timma, 2005*). Secondly, educators feel they do not have enough time for assessment (Burch, 2019*; Daelmans et al., 2006*; Gauthier et al., 2018*; Immonen et al., 2019*; McSharry & Lathlean, 2017*; Peach et al., 2014*; Timma, 2005*; Tomiak et al., 2020*; Young et al., 2020*). Thirdly, educators might lack adequate training or experience (Burch, 2019*; Duijn et al., 2018*; O’Connor et al., 2019*; Peach et al., 2014*; Richardson et al., 2013*; Timma, 2005*), potentially resulting in problems such as inadequate feedback (Burch, 2019*; Daelmans et al., 2006*) and too strong a focus on making assessment decisions (Burch, 2019*; Castanelli et al., 2020*). In spite of these issues, workplace educators in their role as assessors are framed as important gatekeepers for their occupation (Castanelli et al., 2020*; Hauer et al., 2014*; O’Connor et al., 2019*).

What does assessment at the workplace look like?

According to Burch (2019*) and Castanelli et al. (2020*), we need to understand the assessment of student performance at the workplace within its authentic context of work. It is the workplace culture, and educators’ familiarity with its norms and expectations, that informs their understanding of what is considered adequate behaviour and thus shapes their assessment practice (Hauer et al., 2014*). Daelmans et al. (2006*), Burch (2019*) and Gauthier et al. (2018*) maintain that assessment should be part of everyday work, reflecting what students and educators actually do at the workplace. The empirical work of De Vos et al. (2019*) and Gauthier et al. (2018*) further indicates that misalignment between assessment practices as intended by school and the reality of the workplace can lead to a mismatch between formal procedures and what educators really do when assessing students. It seems logical, therefore, that an often-proposed approach to the design of assessment practices is to integrate them into everyday activities at the workplace (Daelmans et al., 2006*; Gauthier et al., 2018*; Richardson et al., 2013*; Timma, 2005*; Tomiak et al., 2020*).

Several of the included studies argue that the social context of the workplace constitutes the assessment practice and that social interactions irrevocably influence and shape assessment (Berg et al., 2007*; Castanelli et al., 2020*; De Vos et al., 2019*; Gauthier et al., 2018*; Hauer et al., 2014*; Peach et al., 2014*; Timma, 2005*). Educators build relationships with students as they would with new colleagues, and collaboration and shared understanding between co-workers are part of their assessment practice (De Vos et al., 2019*; Toohey et al., 1996*; Toohey & Ryan, 1996). The included studies show that social relationships between students and educators are a vital part of assessment practices. Several studies indicate that educators need to spend time with students to be able to judge them or provide them with adequate feedback (Castanelli et al., 2020*; Daelmans et al., 2006*; Duijn et al., 2018*; McSharry & Lathlean, 2017*; Young et al., 2020*). This point is further strengthened by multiple authors who argue that longitudinal relationships and continuity are beneficial to the quality of the assessment practice (Daelmans et al., 2006*; Hauer et al., 2014*; Immonen et al., 2019*; Peach et al., 2014*; Toohey et al., 1996*; Young et al., 2020*). Continuity has the potential to ensure meaningful assessment (De Vos et al., 2019*), can improve feedback practices and is essential in achieving mutual understanding (Immonen et al., 2019*). Stable and safe longitudinal student-educator relationships can give educators a more comprehensive view of student development, as they see their students progress over a longer period of time (De Vos et al., 2019*). These relationships also enable educators to deliver negative feedback constructively (Castanelli et al., 2020*; Lee et al., 2019*). Conversely, student-educator relationships are also described as impacting assessment negatively, as the relationship can create tension, influence decision making and impede negative judgement (Castanelli et al., 2020*; Daelmans et al., 2006*; Lee et al., 2019*; Timma, 2005*). This potential negative impact of social relationships can interfere with assessment (Burch, 2019*; Hauer et al., 2014*; Immonen et al., 2019*).

What do workplace educators do when assessing student performance?

The included articles suggest that what workplace educators do is engage in an assessment process consisting of continuous assessment-related activity, during which educators observe students at work, discuss their tasks with them or give them feedback (Berg et al., 2007*; De Vos et al., 2019*; Gauthier et al., 2018*; Timma, 2005*). These continuous assessment-based interactions help normalise observation and feedback for both students and educators as part of day-to-day work (Tomiak et al., 2020*; Young et al., 2020*), which in turn can further strengthen both the assessment practice and student learning (Burch, 2019*; Hauer et al., 2014*; Immonen et al., 2019*; Richardson et al., 2013*). The included studies suggest that guidance and the continuous process of assessment are intertwined and positively reinforce each other when educators use assessment information to tailor their guidance and assessment practices (McSharry & Lathlean, 2017*). Educators see assessment as an opportunity to adapt their guidance to students’ individual learning needs, as it provides them with information about who students are, what they need and what they are capable of (Berg et al., 2007*; Castanelli et al., 2020*; De Vos et al., 2019*; Timma, 2005*). Educators, for example, adjust the amount of guidance in line with their judgement of a student’s level of performance (McSharry & Lathlean, 2017*). Immonen et al. (2019*) describe a process in which educators gather information about student performance, assess the achieved outcomes and provide students with tailored feedback and suitable learning situations to increase their capacity for independent practice. This can lead to underperforming or struggling students receiving more supervision or more elaborate feedback than others (Berg et al., 2007*; Daelmans et al., 2006*), whereas high-performing students can be given more complex cases to handle to better match their ability level (Lee et al., 2019*).

Educators also provide students with feedback as part of the continuous process of assessment, which further intertwines assessment and guidance (Burch, 2019*; Daelmans et al., 2006*; Duijn et al., 2018*; Immonen et al., 2019*; Lee et al., 2019*; Peach et al., 2014*; Richardson et al., 2013*). Educators consider providing students with feedback an important part of their assessor task and link feedback to opportunities for revision (Berg et al., 2007*; Castanelli et al., 2020*; Richardson et al., 2013*; Young et al., 2020*). Using structured assessment tools when giving feedback can be a stimulus to engage in dialogue with students and discuss their progress (Tomiak et al., 2020*). Educators are, however, concerned that the quality of their feedback diminishes when they delay its delivery (Lee et al., 2019*). Furthermore, educators also perceive making pass/fail decisions about students as an opportunity to stimulate learning. They sometimes choose to grade students for their final assessment rather than giving them a pass or a fail, as they see this as a way to motivate students to perform better (O’Connor et al., 2019*). Other authors, however, indicate that mixing assessment as a stimulus for learning with making pass/fail decisions can undermine the learning potential of assessment (Burch, 2019*; Castanelli et al., 2020*; Richardson et al., 2013*; Young et al., 2020*).

According to Hauer et al. (2014*), workplace educators continuously diagnose or monitor students’ level through assessment in order to provide appropriately challenging tasks. This resonates with Burch’s (2019*) proposition that assessment at the workplace should enable educators to monitor student development towards independent practice. Using assessment to monitor development can aid workplace educators in forming a longitudinal picture of student capacities and in recognising patterns in performance (Castanelli et al., 2020*; Gauthier et al., 2018*). This process can take on a more formal character when educators record feedback and accomplishments (Berg et al., 2007*; Tomiak et al., 2020*). Assessment practices can be purposefully designed for monitoring and consequently support educators in confidently determining the competence of students (Timma, 2005*; Tomiak et al., 2020*). In this vein, several studies indicate that goal setting prior to placement, as part of the assessment practice, can be beneficial to educators’ monitoring of student development (Immonen et al., 2019*; Peach et al., 2014*; Toohey et al., 1996*).

Timma (2005*) and Hauer et al. (2014*) indicate that workplace educators prefer using their own standards and expectations during assessment. The criteria provided in school assessment frameworks are a poor match for the workplace, as they do not reflect the specialised nature of workplace learning (Peach et al., 2014*; Richardson et al., 2013*), and educators feel that the provided instruments and criteria bear little relevance to the workplace (De Vos et al., 2019*; Lee et al., 2019*). Gauthier et al. (2018*) point out that members of a community of practice observe what they value rather than what they are told to observe. Educators are inclined to use practical criteria or to favour a more specific set of competencies than prescribed in the provided instruments (Castanelli et al., 2020*; Daelmans et al., 2006*; De Vos et al., 2019*). Evidence suggests that what educators find important is informed by what is valued in their day-to-day work and by their individual views on what it means to be a successful practitioner in their community of practice (Castanelli et al., 2020*; Daelmans et al., 2006*; De Vos et al., 2019*; Lee et al., 2019*; O’Connor et al., 2019*; Richardson et al., 2013*; Timma, 2005*; Tomiak et al., 2020*). Several included articles argue that what educators find important when assessing is variable, which makes educators idiosyncratic in their role as assessor (Burch, 2019*; Castanelli et al., 2020*; Duijn et al., 2018*; Immonen et al., 2019*; Lee et al., 2019*; Timma, 2005*; Toohey et al., 1996*). The included studies revealed several of these variable standards: comparing students to their peers (Duijn et al., 2018*), using personal standards such as ‘would I let this student treat a family member’ (O’Connor et al., 2019*), taking one’s own professional performance as a benchmark for desired student behaviour (Burch, 2019*; Hauer et al., 2014*; Lee et al., 2019*), or comparing students’ current performance to their past performance (Hauer et al., 2014*). Untangling the plethora of varying criteria applied by workplace educators seems a tall order, as several authors indicate that educator idiosyncrasy cannot be standardised away with training or tools (Burch, 2019*; Immonen et al., 2019*; Lee et al., 2019*; Timma, 2005*; Toohey et al., 1996*). However, numerous included articles make a case for tackling this issue through negotiated assessment criteria. This could be achieved by working towards criteria that use language matching the workplace (Burch, 2019*; Immonen et al., 2019*) and that align with how educators at the workplace think (Duijn et al., 2018*). Negotiated criteria should focus on what educators at the workplace find important and allow for flexibility in their application (Castanelli et al., 2020*; De Vos et al., 2019*). Peach et al. (2014*) propose that key stakeholders from school and work jointly formulate criteria based on top-ranked competencies during placement, whereas Toohey et al. (1996*) favour an approach in which representatives from school and work negotiate criteria at the start of the placement. Our analysis also indicates that assessment can be viewed as a collaborative practice that relies on a recognisable teacher presence at the workplace and on enabling representatives from school and work to share insights with each other (Berg et al., 2007*; De Vos et al., 2019*; Richardson et al., 2013*). Lee et al. (2019*) and Timma (2005*) identify collaboration as a process of enculturation in which workplace educators build understanding of assessment processes through engagement and activities such as formal moderation. This increased understanding can support educators in providing better feedback and adapting learning opportunities to students’ needs (Hauer et al., 2014*; Peach et al., 2014*). Collaboration between school and work can further focus on achieving agreement on both the assessment process and the content of the assessment, jointly establishing criteria for assessment, and establishing clear roles in the assessment process (Immonen et al., 2019*; Peach et al., 2014*; Richardson et al., 2013*; Toohey et al., 1996*). Immonen et al. (2019*) warn that unsuccessful collaboration or a lack of clear communication can result in a variety of assessment approaches that could further muddy the waters.

Making pass/fail decisions as part of the assessment practice is a particular point of interest where collaboration can play a significant role in what workplace educators do. The included studies frame assessment decisions made by workplace educators as likely based on rich information gathered over time during day-to-day work or in earlier assessment activities throughout placement (Burch, 2019*; Castanelli et al., 2020*; Daelmans et al., 2006*; De Vos et al., 2019*; Duijn et al., 2018*; Gauthier et al., 2018*; Hauer et al., 2014*; Tomiak et al., 2020*). In spite of their potential, assessment decisions can be difficult for workplace educators to justify, since these are more global decisions that encompass many smaller judgements throughout the placement and involve weighing a variety of information (Castanelli et al., 2020*; De Vos et al., 2019*). Furthermore, the multiple (and sometimes conflicting) roles that educators have in the assessment practice come to a head when assessment decisions are made. Educators have a triple role that requires them to shift between being a professional who does their job well, an educator who provides students with guidance, and an assessor who judges performance and makes decisions about passing or failing (Castanelli et al., 2020*; Lee et al., 2019*; O’Connor et al., 2019*). Here educators experience tension because they are responsible for both assessment and guidance. The included articles suggest that educators seek ways to relieve this tension. Castanelli et al. (2020*) show that educators create alternative informal assessment systems that circumvent their formal roles to mitigate the perceived pressure. We also found that collaboration plays a role in relieving this tension, as the studies show that educators incorporate judgements and feedback from colleagues when making decisions about students (Burch, 2019*; Castanelli et al., 2020*; Duijn et al., 2018*; Hauer et al., 2014*), or turn to colleagues to seek concurrence or to review their decision (Castanelli et al., 2020*; De Vos et al., 2019*; Tomiak et al., 2020*). The educators in Castanelli et al.’s (2020*) work consider collectivity particularly important when failing students and giving negative feedback. Another possible way to alleviate the tension specific to deciding to fail a student is recording judgements both throughout and at the end of placement as part of the assessment process (Berg et al., 2007*; Castanelli et al., 2020*). Formal records make it possible for educators to demonstrate a pattern of problems for a struggling student (Tomiak et al., 2020*). However, recording negative judgements is a cause for concern, as other authors point out educators’ apparent reluctance to document negative comments (Hauer et al., 2014*; Peach et al., 2014*; Richardson et al., 2013*; Timma, 2005*; Toohey et al., 1996*).

What might assessment at the workplace look like?

All stakeholders involved in assessment practices at the workplace (the workplace educator, the student and the school) rely on clear communication about assessment aims and procedures prior to and during placement (Immonen et al., 2019*; O’Connor et al., 2019*; Peach et al., 2014*; Richardson et al., 2013*). The included studies indicate that the diversity of placements and the experiential nature of workplace learning require clarity about procedures and roles (Daelmans et al., 2006*; Immonen et al., 2019*; Richardson et al., 2013*; Tomiak et al., 2020*; Young et al., 2020*). The included articles suggest a variety of design choices when it comes to (co)designing the assessment of workplace performance, and many designs presuppose two parties in the assessment practice: school and work. The design choices a school might make can be practical, such as selecting suitable tools or instruments, making scheduling decisions, or using assessment as an instrument to match expectations between school and work (Berg et al., 2007*; Young et al., 2020*). Several authors also propose more comprehensive designs as suitable for the assessment of workplace learning (Castanelli et al., 2020*; Daelmans et al., 2006*; Duijn et al., 2018*; Gauthier et al., 2018*; Hauer et al., 2014*; O’Connor et al., 2019*; Richardson et al., 2013*; Tomiak et al., 2020*). These designs often propose an order in which students’ development is intended to progress (for example using developmentally sequenced milestones or tasks) and define roles and procedures for providing guidance and making decisions. However, assessment practices that focus too narrowly on procedures, or that do not allow for workplace educator concerns such as limited time and engagement, do not adequately match assessment practices at the workplace (Berg et al., 2007*; Timma, 2005*; Tomiak et al., 2020*; Young et al., 2020*).

Discussion

With our narrative synthesis we aimed to answer the question: how does the assessment of students’ workplace performance in vocational education operate when it is enacted by professionals at the workplace who take on the role of educator and assessor? The results of this review tell the story of assessment of workplace performance as manifested in day-to-day work and shaped by the interactions and relationships at work. Workplace educators are likely to use criteria and standards that are informed by day-to-day practice and embedded in the norms and values of the vocational community, rather than criteria prescribed by school. The included studies thus represent assessment as embedded in the workplace, enacted through participation, and therefore inherently social and relational. For the purpose of our discussion we will first zoom in on how the embeddedness of assessment is seemingly problematised in the included articles, and then discuss available perspectives that could provide a starting point for approaching assessment as embedded.

At first glance, most of the included studies embrace the embedded and relational character of assessment at the workplace. A large number of studies emphasise the importance of relationships between educators and students at the workplace and how these relationships positively shape the assessment of workplace performance (Castanelli et al., 2020*; Daelmans et al., 2006*; De Vos et al., 2019*; Duijn et al., 2018*; Hauer et al., 2014*; Immonen et al., 2019*; Lee et al., 2019*; McSharry & Lathlean, 2017*; Peach et al., 2014*; Toohey et al., 1996*; Young et al., 2020*). We also find several authors suggesting that the embedded nature of assessment be capitalised on by integrating it into everyday work (Burch, 2019*; Daelmans et al., 2006*; Gauthier et al., 2018*). However, several of the included studies also seem to suggest that overreliance on the context of work is problematic. For example, Berg et al. (2007*), Castanelli et al. (2020*) and Tomiak et al. (2020*) make a case for formally recording feedback and judgements to better justify negative assessment decisions. We also find authors proposing structured tools to enable better feedback provision (Tomiak et al., 2020*) and several suggestions on how best to approach formulating assessment criteria that better match workplace needs (e.g. Burch, 2019*; Duijn et al., 2018*; Immonen et al., 2019*; Peach et al., 2014*). These examples from the included articles seem underpinned by the view that assessment at the workplace is flawed and needs to be fixed through interventions. We argue that this view rests on a discrepancy between embracing, on the one hand, the embedded and relational nature of workplace learning and its assessment, while, on the other, questioning the suitability of assessment by workplace educators.

This discrepancy stems from a mismatch between the embedded and relational nature of workplace learning and a more traditional view on assessment that is often aligned with school-based concerns and focuses on measuring outcomes, determining achievement and quality assurance (Boud, 2007). In our introduction we posited that where students learn, what they learn and how they learn at the workplace might differ from the situation in school, and we raised the question of what this might mean for assessment. We want to explore that question further, focusing on what is assessed by the workplace educator, and discuss the potential quality of their decisions from the perspective of validity. One of the aspects validity is concerned with is the evidence that supports or opposes assessment decisions, and this depends on both the intention of the assessment and when the assessment occurs (Govaerts, 2015; Kane, 2016). This means that validity might look different depending on the perspective we take. Our results show that assessment plays a role throughout placement and is intertwined with guidance. Assessment decisions are thus made continuously and can entail ‘small’ or low-stake decisions about guidance choices and ‘large’ or high-stake decisions about passing or failing the placement at the end. Considering the situated nature of assessment of workplace performance, workplace educators use highly variable evidence to substantiate their decisions. This leads us to propose that the validity of performance assessment by the workplace educator depends on the context of the assessment. Validity resides in the relevance of the evidence used to substantiate a decision within that context. The suitability of evidence follows from what is considered successful professional performance within that specific context. We thus argue that the quality of assessment is governed by the workplace in which it is embedded, as validity is connected with what specific workplaces consider suitable evidence for assessment decisions.

The discrepancies that come into play when students learn in the two different contexts of school and work have been studied and discussed from a sociocultural perspective by many scholars (e.g. Billett, 2013, 2014; Bouw et al., 2019; Tynjälä, 2008), but little work sheds light on what this means for assessment. A useful perspective on the perceived discrepancy between school and work is that of boundary crossing. In line with Engeström (2001), we consider school and work to be two distinct activity systems, each with its own objects, tools, rules and community. Subsequently, we presume that these two activity systems also value different aspects in the assessment of performance at the workplace and organise assessment differently. Akkerman and Bakker (2011) use activity systems to conceptualise how students learn across the different contexts of school and work. They postulate that students experience boundaries between school and work in their learning process. We believe that workplace educators in their role as assessors also experience boundaries between the two activity systems. Akkerman and Bakker (2011) delineate four mechanisms that can help students in their learning process across boundaries: identification, coordination, reflection and transformation. The results of the present review suggest that a communicative connection and efforts of translation might be at play in the assessment of workplace learning. For example, several studies indicate that successful assessment relies on clear communication about assessment aims and procedures prior to and during placement (Immonen et al., 2019*; O’Connor et al., 2019*; Peach et al., 2014*; Richardson et al., 2013*). The efforts of translation, or the need for them, seem apparent when we consider the discrepancy between what educators consider important about student performance and what they are expected to assess (e.g. Hauer et al., 2014*; Timma, 2005*). However, it is as yet unclear whether these mechanisms have a counterpart for workplace educators and their assessment process, as research that approaches assessment as a practice at the boundary of school and work is scarce.

Another concept that offers a perspective on assessment as a practice at the boundary is the idea of a third space between school and work. In teacher education, this third space is a hybrid space in which educators take multiple perspectives: that of the learner, the teacher and the workplace educator (Williams, 2013). In their role as assessors, workplace educators occupy that same third space at the boundary, and the third space might thus be a useful perspective. Additionally, the terms border zone and borderland are used in healthcare literature to describe where learning at the boundary happens through interrupting and reconstructing boundaries (Kerosuo, 2001). These different perspectives on boundary crossing offer a valuable lens through which to further explore what the results of the present review might mean for approaching and understanding assessment at the boundary of school and work as an integral part of vocational education.

In short, we propose that a more traditional perspective on what constitutes ‘good’ assessment feeds into research that is otherwise more aligned with a sociocultural perspective, such as the articles included in the present review. This results in the discrepancy between school and work that we have presented and discussed above. We argue that the small number of included articles in this review shows that a sociocultural perspective is not common in assessment research, while simultaneously demonstrating that assessment is underrepresented in research into vocational education, which is more likely to be embedded in a sociocultural paradigm. We suggest that what we have found might not represent a discrepancy, but rather a paradox. How workplace educators assess student performance is not incompatible with assessment quality. It does, however, require a different view on assessment.

Limitations

As our theoretical lens makes clear, we started from a sociocultural perspective for the present review. This perspective was further amplified by the selected literature, as we focused on articles that would reveal the ‘how’ of assessment. Research that yields insight into the process of assessment and how educators engage with it often takes a sociocultural perspective. We may therefore have overlooked other relevant articles on the assessment of performance at the workplace that did not share this perspective and were excluded because of our focus. Our sociocultural focus also implies that learning is considered an activity that involves (significant) others, not only the student-educator dyad that takes centre stage in our results. The included studies seem to suggest that assessment also involves others at the workplace. However, we have chosen to focus on the workplace educator as the most important actor in the assessment practice, and because of that focus we might not do full justice to assessment practices. A third possible limitation is the overrepresentation of literature from the health professions domain in our review. The assessment of workplace performance is an often-researched topic in the many journals dedicated to health professions education. This is not surprising, since inadequate performance as a medical professional can have devastating consequences.

Conclusion

The purpose of this review was to understand what assessment practices look like at the workplace and what educators do when assessing student performance. With our narrative synthesis we aimed to answer the question: how does the assessment of students’ workplace performance in vocational education operate when it is enacted by professionals at the workplace who take on the role of educator and assessor? We have taken a distinctly sociocultural perspective in both our theoretical positioning and our analysis. Our results depict assessment as a practice that is manifested in day-to-day work and shaped by the interactions and relationships at work. They also show that workplace educators are engaged in an assessment process that consists of continuous assessment-related interactions and that intertwines assessment with guidance. Workplace educators tailor their guidance to what students need for further development based on their continuous monitoring of progress. In the assessment process, educators prefer using criteria and standards that are informed by day-to-day practice and embedded in the norms and values of the vocational community, rather than criteria prescribed by school. Formulating criteria that are negotiated between school and work in a truly collaborative assessment practice might be a way to facilitate what happens at the workplace, rather than attempting to standardise assessment according to school standards. Collaboration also involves co-workers, whose judgements about students are valued and who function as a sounding board when educators have to make pass/fail decisions. The reviewed literature further revealed that assessment practices at the workplace might be purposefully co-designed and that any design requires close communication between school and work.

Implications and future research

With this review, we present a different, socioculturally grounded perspective on assessment. Our results show that the assessment of workplace performance in vocational education can be conceptualised as a practice that is shaped by the activity system in which it is embedded: the specific workplace. We have shown that from this perspective assessment can be explicated and acknowledged, and consequently further conceptualised and researched in both assessment research and vocational education research. This is necessary to work towards a different view on assessment at the workplace and its quality. We would like to raise the following questions, aimed at pushing collective thinking about assessment in this direction: How can we approach and research the quality of assessment at the workplace from a perspective that acknowledges and values its embeddedness, rather than correcting for it? And how can we approach assessment as a practice at the boundary of school and work, and what mechanisms are at play for workplace educators in this borderland or third space?