Introduction

Technology is the newest subject in the Swedish compulsory school curriculum, introduced in classrooms in 1982. Although it was originally connected to science and also had roots in a vocational and crafts tradition, technology education today is broad in scope, including elements of technical skills and design, engineering components, and the relationship of technology to society and the environment. Related to this latter, contextual component, technological systems also constitute a distinct curriculum content. Students are thus not only supposed to learn about the form, function and design of artefacts, but also about technological systems in which artefacts are viewed as components of a greater whole. Systems are related to students’ everyday lives, in local as well as global contexts (Schooner, Klasander & Hallström, 2018b; Skolverket 2019a).

Assessment is a complex educational practice that teachers generally find difficult, especially in relation to technological systems (Hattie 2012; Schooner, Klasander & Hallström, 2018a; Wiliam, 2006; 2017). Different subject traditions support the teacher in his or her assessment practices to varying degrees (Kimbell, 2007), but in technology education teachers do not have a long and well-defined subject tradition to lean on when assessing students’ knowledge and skills. Investigating how teachers experience their teaching practices in technology could therefore be an important step in developing the subject, especially in relation to assessment (Black & Wiliam, 1998; McLaren, 2012). Research exploring teachers’ beliefs could thus contribute to ameliorating uncertainties about, for example, assessment, because how teachers perceive their own knowledge and ability to carry out elements of teaching also influences the ways in which they, and even their students, perform in the classroom (e.g. van Aalderen-Smeets, Walma van der Molen, & Asma, 2012).

The aim of this study is thus to explore Swedish secondary technology teachers’ cognitive beliefs about assessing students’ learning of technological systems, in relation to the assessment tools they use. In doing so, the research responds to the following three research questions: (1) What are teachers’ cognitive beliefs about assessment, assessment tools and the need for professional development? (2) What are the associations between different dimensions of teachers’ cognitive beliefs? (3) Are there differences in cognitive beliefs between teachers with different teaching experience and educational backgrounds?

Theoretical considerations and literature review

A technological system can be defined as a collection of components and the relationships between them, together with a system boundary that delimits its scope (Ingelstam, 2012); the components can be artefacts, knowledge, humans, and more (Bijker et al., 2012). The complexity and wide scope of the term system is reflected in previous technology education research, which has shown that teachers express uncertainties regarding what should be defined as a technological system as well as what is essential for students to know about such systems (e.g., Hallström, 2022; Schooner, Klasander & Hallström, 2018a).

Research exploring teachers’ attitudes and beliefs could address such uncertainties and thus contribute to improving teaching and assessment (Tschannen-Moran et al., 1998; Tschannen-Moran & McMaster, 2009). Thus, studying teachers’ attitudes to assessment could shed new light on technology education in general, and teachers’ perceptions and evaluations of complex subject matter such as technological systems in particular (cf. Hartell et al., 2015). In this regard, we explore teachers’ attitudes in terms of the construct cognitive belief, which we operationalize as teachers’ psychologically held perceptions of the appropriate level and extent of their technological knowledge for teaching and assessment, their perceptions of the effectiveness of different assessment tools, and their perceptions of the need for professional development (Hatisaru, 2018; van Aalderen-Smeets, Walma van der Molen, & Asma, 2012).

Modelling has been suggested as an effective way of teaching and learning about technological systems that might also be a tool to support assessment of students’ knowledge (Hallström, 2022). Models of technological systems can be created in different modes of representation – physical/concrete, visual, symbolic, etc. – and having a teacher who knows when and how to transition between these modes increases students’ potential for learning (Gilbert, 2004; Hallström & Schönborn, 2019). Research on assessment in relation to models per se is limited in technology education, although there has been a good deal of research focusing on design and the use of physical models (e.g. Rönnebeck et al., 2018). Physical models thus constitute one way of representing a technological system, but there are also other modes such as the visual, for example, drawings or system/block diagrams (Svensson, 2011). The use of programmed digital representations or simulations further broadens the opportunities for expression of a student’s knowledge of a technological system in the visual and/or symbolic mode, because the perceived system can be represented by action-orientated animations or simulations (Denning & Tedre, 2019; Slangen et al., 2011). Assessment could thereby focus on the essential components that constitute a system because, in an animation or simulation, every part has to play a role and has to be explained. Conversely, the students could also design an overview that provides a blueprint for assessing their learning of the system as a whole (Blomkvist & Kaijser 1998; Ingelstam, 2012; Klasander, 2010; Löfström, 2008).

Common to these different modes and types of modelling is that they can promote formative assessment. Summative assessment serves an important diagnostic function in teaching – often symbolized by oral and written assignments in the form of tests or homework – whereas formative assessment is more procedural in character. Formative assessment is not a single method but a range of different methods with the common denominator that they are used to support and give feedback to students while they are learning, not primarily to evaluate them (e.g. Andrade et al., 2019; Hattie, 2012). In technology education, various assessment tools are common, and, regarding technological systems, physical or visual models and digital simulations have formative potential, as mentioned above. Furthermore, a portfolio also provides the teacher with diverse student-assembled material for assessment, for example, models, sketches or diagrams (Kimbell, 2007, 2012; Kimbell et al., 2009). However, since a portfolio is assembled over a stretch of time, it can also include essays and other written material. Portfolios provide a formative assessment opportunity whereby a student’s progression can be discussed and evaluated together with the student. The same applies to log books, although they focus on written data and do not primarily include actual models or prototypes.

Method

Development and validation of a questionnaire

A questionnaire was designed to probe respondents’ cognitive beliefs concerning assessing different aspects of learning about technological systems, effectiveness of different assessment tools, and perceived need for professional development. In addition, the questionnaire contained items on background information about the respondents.

Cognitive beliefs about assessing students’ learning of technological systems were probed in relation to nine dimensions of technological systems that were derived from both theory and previous research (e.g. Bijker et al., 2012; Churchman, 1979; Ingelstam, 2012; Klasander, 2010; Schooner, Klasander & Hallström 2018a, b):

  1. purpose of a technological system (Purpose),
  2. input and output (Input/output),
  3. system structure, including components/sub-systems (Structure),
  4. connections and flows within a system (Process),
  5. roles of humans within a system (Roles),
  6. system boundaries (Boundary),
  7. system impact on its surroundings (Influence),
  8. comparison of systems (Differences),
  9. driving forces behind system change (Driving forces).

These nine dimensions were also probed from the point of view of three system types, with increasing distance from the individual:

  A. individual and household-related systems (such as a smartphone, dishwasher, and TV),
  B. local, regional and national systems (for example, water and sewerage, recycling and energy, and production), and
  C. global systems (for instance, the Internet, aviation, and international production systems).

These nine dimensions and three system types also relate to the Swedish technology curriculum and the related commentaries (Skolverket, 2019a, b), which promote learning about technological systems in relation to individuals, society, and the environment. The ubiquity of technology is also a central part of the curriculum because students should understand the advantages, risks and limitations of both local and global systems. The three system types, with the examples provided, were also meant to function as scaffolding support when filling in the questionnaire. One item was constructed for each combination of system dimension (e.g. the purpose of the system) and system type (e.g. individual and household-related systems), yielding a total of 27 items. Respondents rated their agreement with statements of the form (in this case for dimension 1 – the purpose of the technological system) “I feel that I have enough knowledge to assess students’ learning about the purpose of the technological system when I teach about the following types of technological systems” for each of the three system types.
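
The 9 × 3 item structure can be made explicit with a short sketch. The study’s questionnaire was not generated programmatically; the following Python snippet, with shorthand labels of our own rather than the verbatim questionnaire wording, simply illustrates how crossing the nine dimensions with the three system types yields the 27 items.

```python
from itertools import product

# Shorthand labels for the nine dimensions and three system types
# (not the verbatim questionnaire wording).
dimensions = ["Purpose", "Input/output", "Structure", "Process", "Roles",
              "Boundary", "Influence", "Differences", "Driving forces"]
system_types = ["individual/household", "local/regional/national", "global"]

# Crossing the two lists yields the full 9 x 3 = 27 item grid.
items = [f"Assess students' learning about {d} for {t} systems"
         for d, t in product(dimensions, system_types)]

assert len(items) == 27
```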

Cognitive beliefs about the perceived effectiveness of assessment tools were probed in relation to eight tools. Potentially common tools used in the assessment of students’ knowledge about technological systems were identified based on literature on assessment tools in education generally, and technology education specifically (Hartell, 2015; Kimbell, 2007; Korp, 2011; Stiggins & Conklin, 1992). Teachers’ preferences for using the different assessment tools were probed with eight items, on which teachers rated their agreement with the statement “I assess students’ learning of technological systems best when the assessment instrument consists of” in relation to each of the following assessment instruments: portfolio, short reflections (“log book”), short oral or written assignments (e.g. homework), extensive oral or written assignments (e.g. tests), active classroom discussions, written reports, the construction of physical or conceptual models depicting technological systems, and programming or digital simulation of technological systems.

Cognitive beliefs about the perceived need for professional development (three items) were measured by asking teachers to rate their agreement with the statements: “I need more professional development in the subject area of technological systems”, “I need more professional development in planning and executing teaching about technological systems”, and “I need more professional development in assessing students’ learning about technological systems”.

On each of the above items, teachers responded using a 6-point scale ranging from “Is not correct” to “Is completely correct”. A “Do not know” option was also provided to avoid a forced choice for teachers who were uncertain.

The resulting questionnaire consisted of four parts: background information, such as the respondents’ age, sex, teaching experience, academic education or professional development in technology, and information about the schools where they teach; cognitive beliefs regarding assessment related to technological systems; cognitive beliefs about assessment tools; and cognitive beliefs about need for professional development.

The questionnaire development was structured so that pursuit of internal validity was integrated into the process. This was achieved through expert groups (Borsboom, Mellenbergh, & van Heerden, 2004) that gave input on three separate occasions: (1) review by a small group of subject experts at the authors’ home university, (2) review by teacher educators from different universities in Sweden who belong to a network within a national centre for technology teachers in Sweden (CETIS – the Centre for School Technology Education), and (3) seminar presentation and peer review by experts in technology and science education at the authors’ home institution. Iterative changes to the questionnaire were made between each of the steps in the validation procedure. The changes included revising the content, such as removing “summative assessment” and “formative assessment” as individual items in the questions about cognitive beliefs about assessment, since those terms are more abstract and partly overlap with the concrete assessment tools. Other changes were made to clarify the wording of items based on the feedback, to avoid potential misunderstanding.

Participants and data collection

The population for the present study consisted of currently teaching certified technology teachers for grades 7 to 9 in the Swedish compulsory school system. The Swedish compulsory school consists of grades 1–6 (primary education) and grades 7–9 (lower secondary education); technology education is mandatory throughout compulsory school. In grades 7–9, teachers are subject teachers teaching two to four separate subjects, and becoming qualified as a technology teacher requires a teacher education diploma in technology education. However, becoming a qualified subject teacher in grades 7–9 is also possible by validating credits from previous education, for example, engineering studies, and complementing them with basic teacher education. Qualified teachers can subsequently apply for certification in their subjects after one year of service, but it is also possible to become certified without technology teacher education by being validated after eight years of service.

A public database for educational statistics (SiRiS), provided by the Swedish National Agency for Education, indicated that 2,326 certified teachers were actively engaged in technology education in grades 7 to 9 during 2018. In order to reach these teachers, we invited potential respondents to a web-based questionnaire in three different ways: (1) by email to all certified Swedish technology teachers from the public records found in the database at the Swedish National Agency for Education, (2) by email to all municipalities in Sweden that have provided a public contact email address (potentially to be forwarded to teachers), and (3) by word of mouth and social media.

Electronic, online surveys are commonly used (Wright, 2005) and present a direct, cost-effective and environmentally friendly approach to sending and receiving questionnaires. We deemed this to be more effective than obtaining postal addresses and sending invitations by traditional mail as the principal method for data collection. The benefits of an online survey compared to a traditional paper survey also include the fact that an email can be received by a respondent immediately after it is sent (Baruch & Holtom, 2008; Nulty, 2008) and that web-based questionnaires are convenient for the respondent, given that most teachers work with computers on a day-to-day basis.

Data analysis

The responses to the survey were analyzed in four main steps. Firstly, teachers’ cognitive beliefs with regard to assessing students’ learning of the nine dimensions of technological systems for different types of systems were explored. Secondly, underlying dimensions in teachers’ responses were identified using factor analysis. Thirdly, the relations between the identified underlying dimensions were explored. Fourthly, differences between groups of teachers were analyzed. Given that many of the variables were not normally distributed, non-parametric methods were used in analyzing comparisons and correlations (Abbott, 2011; Corder & Foreman, 2014; Norman, 2010). Throughout the analysis, the “Do not know” option was treated as a neutral response. Hence, each rating scale ranged from 1 to 7, where 1 corresponds to disagreement with a statement and 7 corresponds to agreement.
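
As an illustration of this recoding, the sketch below maps responses onto the 1–7 scale. The exact coding scheme is not reported above, so the mapping shown (the six scale points spread to 1, 2, 3, 5, 6, 7, with “Do not know” placed at the neutral midpoint of 4) is our assumption.

```python
import pandas as pd

# Hypothetical raw responses: the six scale points coded 1-6, plus "DK"
# for the "Do not know" option.
raw = pd.Series([1, 3, "DK", 6, 4, "DK", 2])

# Assumed recoding: spread the six scale points onto 1-7 so that the
# "Do not know" option can sit at the neutral midpoint (4).
recode = {1: 1, 2: 2, 3: 3, 4: 5, 5: 6, 6: 7, "DK": 4}
scores = raw.map(recode)

print(scores.tolist())  # [1, 3, 4, 7, 5, 4, 2]
```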

Testing for statistical significance involves deciding on an acceptable risk (alpha level) of a false positive result (type I error); that is, the risk of accepting a difference as representative of the population while it is only a result of random variation in the sample. All analyses were performed using an alpha level of 0.05. In addition, since the type I error risk is inflated when running multiple statistical analyses on the same group, Bonferroni corrections were applied to multiple comparisons in order to maintain an overall alpha level of 0.05 for significance (Bender & Lange, 2001). All statistical analyses were performed using SPSS Statistics v.27.

Teachers’ cognitive beliefs about assessing students’ learning of technological systems

A Friedman test was conducted to determine if there were differences in respondents’ reported cognitive beliefs concerning assessing students’ learning of the nine different dimensions of technological systems (i.e. Purpose, Input/output, Structure, Process, Roles, Boundary, Influence, Differences, Driving forces). For each dimension, values for the three types of system were collapsed into one overall mean value. Significant results were followed up by conducting post-hoc Wilcoxon signed-rank tests for pairwise comparisons, using a Bonferroni correction.
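
A minimal sketch of this procedure, using SciPy in place of the SPSS routines actually used, might look as follows; the data are random placeholders, and the Bonferroni threshold of 0.05/36 follows from the 36 pairwise comparisons among nine dimensions.

```python
from itertools import combinations

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Placeholder data: 511 respondents x 9 dimension scores on the 1-7 scale,
# each already collapsed over the three system types.
dimensions = ["Purpose", "Input/output", "Structure", "Process", "Roles",
              "Boundary", "Influence", "Differences", "Driving forces"]
scores = {d: rng.integers(1, 8, size=511) for d in dimensions}

# Omnibus Friedman test across the nine related samples.
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi2 = {stat:.1f}, p = {p:.4f}")

# Post-hoc pairwise Wilcoxon signed-rank tests with Bonferroni correction:
# nine dimensions give 9 * 8 / 2 = 36 comparisons, so each pair is tested
# against 0.05 / 36 (about 0.0014).
pairs = list(combinations(dimensions, 2))
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    _, p_pair = wilcoxon(scores[a], scores[b])
    if p_pair < alpha_corrected:
        print(f"{a} vs {b}: p = {p_pair:.5f} (significant after correction)")
```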

A Friedman test was run to determine if there were differences in respondents’ reported cognitive beliefs about assessing students’ learning of different types of system (i.e. individual, national and global systems). For each type of system, values for the different dimensions were collapsed into one overall mean value. Statistically significant results were followed up by post-hoc Wilcoxon signed-rank tests for pairwise comparisons, using a Bonferroni correction for multiple comparisons.

Factor analysis

Exploratory factor analysis was conducted to find possible underlying dimensions in teachers’ responses to the survey questions on different aspects related to assessment of students’ learning about technological systems. Three separate analyses were performed, concerning teachers’ cognitive beliefs about (i) their knowledge for assessing students’ learning of technological systems (27 questions), (ii) the effectiveness of assessment instruments for assessing students’ learning about technological systems (8 questions), and (iii) their need for professional development to better teach about technological systems (3 questions). In each case, principal component analysis (PCA) of the correlation matrix was performed.

For teachers’ cognitive beliefs about assessing students’ learning about technological systems, one variable for each system dimension was used, wherein a mean value was calculated across the three different system types. Thus, nine variables were included in the factor analysis, one for each system dimension. The suitability of performing a factor analysis on the data was supported in three ways. All variables had at least one correlation coefficient greater than 0.3. The Kaiser-Meyer-Olkin measure of sampling adequacy (KMO) was 0.945, and thus larger than the suggested threshold value of 0.6 (Kaiser, 1974). Bartlett’s test of sphericity was statistically significant (p < .0005). Together, these preliminary analyses indicated that the data were likely to be factorizable. Parallel analysis indicated that a single-component solution was suitable.
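
The suitability checks and parallel analysis could be reproduced along the following lines. The study used SPSS; this sketch instead assumes the Python factor_analyzer package for KMO and Bartlett’s test, and hand-rolls Horn’s parallel analysis using random-data eigenvalues.

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

rng = np.random.default_rng(1)
# Placeholder data: 511 respondents x 9 dimension variables on the 1-7 scale.
df = pd.DataFrame(rng.integers(1, 8, size=(511, 9)).astype(float),
                  columns=[f"dim{i}" for i in range(1, 10)])

# Suitability checks reported above: Bartlett's test of sphericity and the
# Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy.
chi2, p = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.5f}; overall KMO = {kmo_total:.3f}")

# Horn's parallel analysis: retain components whose eigenvalues exceed the
# mean eigenvalues from uncorrelated random data of the same shape.
observed = np.sort(np.linalg.eigvalsh(
    np.corrcoef(df.to_numpy(), rowvar=False)))[::-1]
random_eigs = np.array([
    np.sort(np.linalg.eigvalsh(
        np.corrcoef(rng.normal(size=df.shape), rowvar=False)))[::-1]
    for _ in range(100)])
n_retain = int((observed > random_eigs.mean(axis=0)).sum())
print(f"Components to retain: {n_retain}")
```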

For the eight variables about teachers’ cognitive beliefs about the effectiveness of assessment instruments for assessing students’ learning about technological systems, all variables had at least one correlation coefficient greater than 0.3. The initial KMO was 0.661, and Bartlett’s test of sphericity was statistically significant (p < .0005). The analysis indicated that two components should be retained, based on parallel analysis. One item (concerning assessment based on written reports) loaded onto both components. Therefore, the analysis was repeated without this variable. Oblique rotation using Oblimin indicated a low correlation between the components (< 0.1), and therefore an orthogonal rotation was performed using Varimax.
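
The rotation logic described here (try an oblique rotation, inspect the component correlation, and fall back to an orthogonal rotation when it is negligible) can be sketched as follows; using factor_analyzer’s Rotator on manually computed PCA loadings is our assumption, not the study’s actual SPSS workflow.

```python
import numpy as np
from factor_analyzer.rotator import Rotator

rng = np.random.default_rng(2)
# Placeholder data: 511 respondents x 7 assessment-tool items (the
# written-report item already removed, as described above).
X = rng.integers(1, 8, size=(511, 7)).astype(float)
R = np.corrcoef(X, rowvar=False)

# Unrotated PCA loadings for the two retained components: eigenvectors
# scaled by the square roots of their eigenvalues.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Oblique rotation first; phi_ holds the component correlation matrix.
oblimin = Rotator(method="oblimin")
loadings_oblique = oblimin.fit_transform(loadings)
print(oblimin.phi_)  # off-diagonal below 0.1 motivates an orthogonal rotation

# With a negligible correlation, rerotate orthogonally with Varimax.
varimax = Rotator(method="varimax")
print(varimax.fit_transform(loadings).round(2))
```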

The three variables that targeted teachers’ cognitive beliefs about the perceived need for professional development to better teach about technological systems also had at least one correlation coefficient greater than 0.3. Furthermore, KMO was 0.754, and Bartlett’s test of sphericity was statistically significant (p < .0005). Parallel analysis indicated a single-component solution.

Construction and characterization of new variables

Based on the factor analyses, four new cognitive belief variables were constructed by calculating mean values for the variables that corresponded to each revealed underlying dimension. Cronbach alpha was calculated as a measure of reliability for each new variable. A cut-off value of 0.5 was used; any variable with lower reliability was discarded. Associations between the new variables were assessed using Spearman’s rho.
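
A sketch of the composite construction and reliability check, with placeholder data, is given below; Cronbach alpha is computed from its standard definition, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
# Placeholder: the nine dimension variables behind a composite such as CB-K.
cbk_items = rng.integers(1, 8, size=(511, 9)).astype(float)

alpha = cronbach_alpha(cbk_items)  # composites with alpha < 0.5 discarded
cbk = cbk_items.mean(axis=1)       # composite = mean of its variables
print(f"alpha = {alpha:.3f}")

# Association between two composites via Spearman's rho (placeholder CB-PD).
cbpd = rng.integers(1, 8, size=511).astype(float)
rho, p = spearmanr(cbk, cbpd)
print(f"rho = {rho:.3f}, p = {p:.3f}")
```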

Differences based on background variables

Mann-Whitney U tests were used to evaluate whether teachers’ experience and educational background were important for their cognitive beliefs about assessment of students’ learning of technological systems. Three different groupings of teachers were constructed, based on educational background (traditional teacher education vs. professionals, e.g. engineers, retrained to work as teachers), experience of professional development in technology (teachers with no additional technology education vs. teachers who had taken additional courses in technology), and teaching experience (14 years or less vs. more than 14 years). For each grouping, differences were analyzed for the new cognitive belief variables constructed above. Effect sizes were calculated to assess whether any differences between groups were meaningful within the research context. The appropriate effect size measure (r-value) for this analysis was calculated by dividing the z-score by the square root of the total number of respondents (n = 511) (Field, 2017).
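
The effect size computation can be illustrated as follows. SciPy’s mannwhitneyu does not report a z-score directly, so this sketch derives z from the normal approximation of U (without the tie correction that SPSS applies); the grouping shown is a placeholder.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# Placeholder CB-K scores for two groups (sizes taken from the Results).
group1 = rng.integers(1, 8, size=244).astype(float)  # e.g. 14 years or less
group2 = rng.integers(1, 8, size=267).astype(float)  # e.g. more than 14 years

u, p = mannwhitneyu(group1, group2, alternative="two-sided")

# z from the normal approximation of U (tie correction omitted here,
# although SPSS applies one automatically).
n1, n2 = len(group1), len(group2)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - mu_u) / sigma_u

# Effect size r = z / sqrt(N), as in Field (2017).
r = z / np.sqrt(n1 + n2)
print(f"U = {u:.0f}, p = {p:.3f}, z = {z:.2f}, r = {r:.2f}")
```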

Research ethics

Throughout the research process, the ethical principles for research were followed in the customary way by informing the participants about the purpose of the research project and the questionnaires, and about their right to discontinue their participation should they wish to do so. Furthermore, the teachers were informed that their participation would be anonymous, and that the data would not be used for anything other than research purposes (Swedish Research Council, 2017). The handling of the data adhered strictly to the EU General Data Protection Regulation (GDPR).

Results

In total, 511 teachers belonging to the population completed the questionnaire, representing 22% of the population. This also sets a lower limit of 22% for the response rate; an exact value cannot be determined, since the sampling procedure did not offer a way to establish how many of the targeted study population of 2,326 certified and active technology teachers were actually reached by the invitation to participate. All responding individuals completed the questionnaire in full. Of the respondents, 249 were male (48.7%), 257 were female (50.3%) and 5 (1%) identified as neither male nor female.

A Friedman test indicated statistically significant differences (χ2(8) = 411.0, p < .001) between teachers’ cognitive beliefs in relation to different dimensions of technological systems (e.g. structure, boundary, etc.). Post hoc analysis revealed multiple statistically significant differences between dimensions of technological systems. The differences suggest a pattern wherein teachers’ cognitive beliefs are lower for Boundary than for any of the other dimensions. In addition, Process, Differences, Input/output and Structure do not differ significantly from each other, and are all lower than Influence, Roles, Purpose and Driving forces. Thus, the findings suggest three clusters of dimensions of technological systems that differ in terms of the cognitive beliefs that technology teachers have in assessing students’ learning (see Table 1).

A Friedman test indicated statistically significant differences (χ2(2) = 232.5, p < .001) between teachers’ cognitive beliefs in relation to different types of system (i.e. individual, national and global systems). Post hoc analysis revealed statistically significant differences between individual and national systems (Mdn = 4.33 and Mdn = 4.67; p < .001), between global (Mdn = 4.11) and national systems (p < .001) and between global and individual systems (p = .001). Thus, the findings indicate that teachers have the strongest cognitive beliefs in assessing systems at the national level and the weakest in assessing global systems. Individual systems are in between (see Table 1).

Table 1 Differences between teachers’ cognitive beliefs in relation to nine system dimensions and three types of technological systems

Factor analysis revealed that one factor was suitable to describe teachers’ cognitive beliefs in assessing students’ learning about technological systems (Table 2). This factor explained 74.0% of the total variance. Given that this construct was found to be one-dimensional, a composite score was calculated by taking the average across all nine dimensions of technological systems knowledge. The resulting variable was termed “Cognitive beliefs – Knowledge” (CB-K) and was used as an overall indicator of each teacher’s beliefs. Cronbach alpha was 0.956, indicating high reliability. The mean CB-K value for the participants was 5.22 on a scale from 1 to 7 (n = 511, SD = 1.21). A Shapiro-Wilk test indicated a non-normal distribution for this variable (W = 0.963, p < .001).

Factor analysis of teachers’ cognitive beliefs about the effectiveness of assessment instruments for assessing students’ learning about technological systems yielded two components (Table 3), which explained 25.5% and 19.8% of the total variance, respectively. The first component included assessment instruments where students construct a product. As argued below, the assessment tools included in this variable are generally compatible with a formative tradition, and the resulting composite variable is therefore termed “Cognitive beliefs – Formative assessment” (CB-FA). Cronbach alpha was 0.537. While this is a somewhat low value compared to the typical benchmark of 0.7, values above 0.5 may be considered acceptable for research purposes (e.g. Field 2017). The mean CB-FA value for the participants was 4.02 on a scale from 1 to 7 (n = 511, SD = 1.27). A Shapiro-Wilk test indicated a non-normal distribution for this variable (W = 0.986, p < .001). The second component included instruments where students demonstrate their knowledge through their performance in a defined situation (i.e. a test or group discussion), and the resulting composite variable was therefore termed “Cognitive beliefs – Summative assessment” (CB-SA). However, a Cronbach alpha of 0.386 indicated that this variable lacked reliability, and it was therefore excluded from further analysis.

Factor analysis of teachers’ cognitive beliefs about their need for professional development to better teach about technological systems indicated that one component was suitable (Table 4), which explained 87% of the total variance. A composite variable termed “Cognitive beliefs – Professional development” (CB-PD) was constructed, and had high reliability (Cronbach alpha = 0.925). The mean CB-PD value for the participants was 4.90 on a scale from 1 to 7 (n = 511, SD = 1.85). A Shapiro-Wilk test indicated a non-normal distribution for this variable (W = 0.903, p < .001).

Table 2 Component loadings and communalities of the rotated solutions for PCA of items related to teachers’ cognitive beliefs in assessing students’ learning about technological systems
Table 3 Component loadings and communalities of the rotated solutions for PCA of items related to teachers’ cognitive beliefs about the effectiveness of assessment instruments for assessing students’ learning about technological systems
Table 4 Component loadings and communalities of the rotated solutions for PCA of items related to teachers’ cognitive beliefs about their need for professional development to better teach about technological systems

Associations between cognitive belief variables

To further describe the participating teachers, associations between the newly constructed composite variables were examined using Spearman’s rho. As shown in Table 5, a significant positive correlation was found between CB-FA and CB-K, indicating that a strong cognitive belief in assessing students’ learning of technological systems is associated with stronger beliefs in the use of assessment instruments where students construct products. A significant negative correlation between CB-PD and CB-K indicates that teachers with a strong cognitive belief about their knowledge for assessment see less of a need for further professional development. There was no significant correlation between CB-FA and CB-PD.

Table 5 Correlation between cognitive belief variables

Differences in cognitive beliefs between groups of teachers

Mann-Whitney U tests were performed to investigate differences between teachers who were professionals that had retrained to become teachers (group 1, n = 175) and teachers who had taken formal teacher education (group 2, n = 336). No significant differences were found in CB-K scores (mean rank group 1 = 265.7; mean rank group 2 = 251.0; U = 27,705, z = -1.070, p = .284) or CB-FA (mean rank group 1 = 267.7, mean rank group 2 = 249.9; U = 27,357, z = -1.293, p = .196). A significant difference was found for CB-PD (mean rank group 1 = 222.2; mean rank group 2 = 273.6; U = 23,481, z = -3.770, p < .001). Thus, the findings indicate that teachers who had taken formal teacher education felt a higher need for professional development than professionals (e.g. engineers) who had been retrained to work as technology teachers.

Mann-Whitney U tests were performed to investigate differences between teachers who had not taken additional courses in technology and engineering besides those given within teacher education (group 1, n = 239), and those who had taken such additional courses (group 2, n = 272). Significant differences between the two groups were found for CB-K scores (mean rank group 1 = 224.7; mean rank group 2 = 283.5; U = 25,025, z = -4.492, p < .001), and for CB-PD (mean rank group 1 = 292.4; mean rank group 2 = 224.0; U = 23,808, z = -5.268, p < .001), but not for CB-FA (mean rank group 1 = 246.5; mean rank group 2 = 264.4; U = 30,222, z = -1.373, p = .170). Thus, the findings indicate that teachers who had not taken additional technology courses had weaker cognitive beliefs about their knowledge for assessment and felt more need for further professional development than teachers who had taken courses in technology beyond those included in teacher education.

Mann-Whitney U tests were performed to investigate differences between teachers with 14 years or less of teaching experience (group 1, n = 244), and those with more than 14 years of teaching experience (group 2, n = 267). No significant differences were found for CB-K scores (mean rank group 1 = 256.4; mean rank group 2 = 255.6; U = 32,475, z = -0.059, p = .953) or for CB-PD (mean rank group 1 = 256.9; mean rank group 2 = 255.2; U = 32,355, z = -0.132, p = .895). A significant difference was found for CB-FA (mean rank group 1 = 277.2; mean rank group 2 = 236.6; U = 27,394, z = -3.113, p = .002). Thus, the findings indicate that teachers with 14 years or less of experience tended to view assessment instruments that involve students creating products as more effective than did more experienced teachers.

Discussion

There has been some previous research concerning technology teachers’ self-efficacy in general (e.g. Rohaan et al., 2012) and on assessment specifically (e.g. Hartell et al., 2015), but this is the first study of technology teachers’ cognitive beliefs (van Aalderen-Smeets, Walma van der Molen, & Asma, 2012) about assessment of technological systems. The study is thus novel in that it contributes new scientific knowledge about technology teachers’ cognitive beliefs concerning assessing students’ learning of technological systems in relation to several background variables: educational background, number of additional technology courses, and teaching experience. We further show that the complex task of assessment is influenced not only by these background variables but also by several other underlying factors, such as the system dimension being assessed, the type of system, and the assessment tool used. In the following, the results are discussed for each of the research questions under a separate heading.

Teachers’ cognitive beliefs about assessment of technological systems

Regarding the first research question, there are three clusters of dimensions that differ in teachers’ cognitive beliefs about assessment. The weakest cognitive beliefs concern knowledge of the system boundary, followed by internal system dimensions (Process, Input/output, Structure), with the strongest concerning human-oriented, socio-technical dimensions (Influence, Purpose, Roles, Driving forces). The system boundary is a difficult concept to understand for people of any age (e.g. Koski & de Vries 2013), so it is not surprising that it was related to weak cognitive beliefs. Conversely, it is rather surprising that process, input/output and structure were not related to stronger cognitive beliefs, since previous research suggests that they are well-known aspects of systems among technology teachers (e.g. Hallström, 2022). It is also surprising that the strongest cognitive beliefs were connected to the socio-technical dimensions, since previous research shows that these are difficult dimensions of technological systems (e.g. Klasander, 2010; Kroes et al., 2006; Schooner, Klasander & Hallström, 2018b). There are also differences in cognitive beliefs about assessing learning about different types of systems. The strongest cognitive beliefs were found regarding local, regional and national systems, followed by household systems, with the weakest for global systems. This finding resonates with previous research in that local, regional and national systems are often quite well-known infrastructural systems, such as electric grids and water supply, with many visible and obvious components in the form of power stations, power networks, and water towers (Ingelstam, 2012). It is possible, however, that household systems, such as a home heating system, are seen more as artefacts than as systems, hence the relatively low score.

Furthermore, teachers’ preferences for four assessment tools together formed an underlying dimension in the responses, namely portfolio, use of a log book, the construction of physical or conceptual models of technological systems, and students’ programming or digital simulation of technological systems. The identified positive correlation between this dimension and teachers’ cognitive beliefs about their knowledge for assessing learning indicates that confident teachers tend to prefer, or be more open to using, formative assessment tools. It may be that formative assessment tools such as portfolios and log books come naturally in technology education, since it deals largely with procedural knowledge, for example, in projects where students design and construct products over a longer period of time (e.g. Schut, Klapwijk, Gielen & de Vries, 2019). However, the fact that modelling is such a central aspect of the design and understanding of technological systems may also explain the positive correlation between these tools and the teachers’ cognitive beliefs.

Even though the tools for assessing students’ knowledge of technological systems that are associated with strong cognitive beliefs among teachers follow the formative tradition (Bennett, 2011), models and simulations are specific to technology and the other STEM (science, technology, engineering, mathematics) disciplines and are therefore not generally described in the assessment literature. The formative approach provides students with opportunities to self-evaluate their own learning progression and thus promotes life-long learning (Stables, 2018), something which also seems suitable in relation to models and simulations. All four kinds of assessment tools associated with strong cognitive beliefs among the studied technology teachers consequently display useful features in relation to learning about technological systems.

The relationships between cognitive beliefs and reported use of tools for assessment

The results from the correlation analysis indicate that teachers with stronger cognitive beliefs about their knowledge for assessment tended to see less need for professional development. This could indicate a reinforcing cycle: teachers with strong cognitive beliefs perceive themselves as able to master technology teaching, which in turn strengthens their beliefs in their own capacities and consequently minimises the perceived need for further teacher education or professional development (Hoy & Spero, 2005).

In addition, teachers’ cognitive beliefs about their knowledge for assessment were positively correlated with a preference for assessment tools in the formative tradition. It is possible that assessment using such tools is more demanding in terms of knowledge for assessment, and that teachers who feel more confident in their knowledge are also more positive towards using such tools. However, the association was weak, so no strong conclusions can be drawn from the data in this study.

Differences in cognitive beliefs between teachers of different backgrounds

Previous studies have shown that, in technology education in Sweden, teachers do not perceive the curriculum as being clear about assessment in general (e.g. Bjurulf 2008; Hartell, 2015) or about technological systems in particular (Klasander, 2010; Schooner, Klasander & Hallström, 2018a). Thus, teachers are forced to be self-reliant and self-efficacious in their interpretation of the curriculum and its subsequent application in teaching.

The findings indicate that teachers who had completed a full teacher education programme felt a higher need for professional development than professionals who had been retrained to work as technology teachers, that is, those who had a professional background with, for example, an engineering degree and had complemented it with a shorter teacher education programme. Thus, both categories held a full teacher degree but had obtained it in different ways, with the latter category having a much more profound technological education. Some respondents had also taken additional technology or engineering courses after their teacher education degree, placing them in this latter category.

Differences in cognitive beliefs were also found between teachers who had taken additional courses in technology and engineering and those who had not. This indicates that additional education in the fields of technology and engineering relates to stronger cognitive beliefs among teachers about their knowledge for assessing students’ learning of technological systems (cf. Rohaan et al., 2012). The teachers who had taken additional courses also saw less need for professional development. It is conceivable that those teachers who had taken additional engineering courses were more confident about the physical components of systems – the technological, material core of the systems (Hughes, 2012) – because the engineering view of technology is generally quite materialistic (Kroes et al., 2006).

Our analysis thus suggests that expanding and deepening teachers’ content knowledge within the technology subject area by studying further courses in technology and engineering (Williams & Gumbo, 2011) may be an efficient way of boosting teaching competence in, for example, understanding and carrying out modelling, and thereby strengthening cognitive beliefs regarding teaching about technological systems (Barak 2018; Hallström, 2022).

Studies have indicated that longer teaching experience increases the confidence to teach, which leads to teachers rating themselves more highly in self-efficacy and ability to instil knowledge about technology over time (e.g. Nordlöf, Höst & Hallström, 2017; Rohaan et al., 2012). Perhaps surprisingly, this study did not find any difference in cognitive beliefs about knowledge for assessment between technology teachers with more than 14 years of experience in teaching technology and those with less. This may indicate that the attitude components self-efficacy and cognitive beliefs evolve along different trajectories. It may also be because assessment is such a specific – and often neglected – activity, at least in technology education, that it does not follow the same pattern as teaching in general. Another possible interpretation is that our definition of an experienced teacher (more than 14 years) is poorly matched to the time frame over which cognitive beliefs evolve.

Teachers with more experience were found to have lower cognitive beliefs than less experienced teachers about the effectiveness of formative assessment instruments that involve students’ construction of products, such as physical models, or documentation in log books. This finding may be explained by the fact that the most recent developments in technology teacher education have included formative assessment components. However, the finding probably reflects a more general feature of technology teacher confidence in relation to experience: Holroyd and Harlen (1996) also found among Scottish primary teachers that “the more recently qualified were more confident than the more experienced” (p. 323). Although we did not study what teachers do in their classrooms, our finding could also be consistent with a recent study indicating that even the actual quality of teaching may in some contexts be lower for more experienced teachers (Graham et al., 2020).

Overall, therefore, our findings suggest that further professional training, particularly in technology and engineering courses, may serve to develop teachers’ content knowledge and provide the insights needed to strengthen their cognitive beliefs, and thereby potentially improve technology teaching and assessment (cf. Hultén & Björkholm, 2015). In particular, professional development courses would need to address the role of modelling in technological practice in general, and in relation to technological systems in particular (Hallström, 2022). Consequently, based on our findings, we make the case for continued professional development in the form of, for instance, engineering courses for all technology teachers, particularly those who have a teacher degree without a prior technology or engineering background.

Limitations

The response rate of at least 22% is relatively high compared to what can typically be expected from internet-based surveys, which can often only muster around 10% when addressed to a wide range of respondents without previous information (Denscombe, 2011). Nevertheless, the response rate warrants caution in generalising the findings. In particular, it seems likely that teachers with little interest in technological systems are less likely to respond than teachers with greater interest, positive attitudes, and a more developed understanding of the topic. Therefore, the distributions of the responses may not hold for the entire population. However, the observed differences between teachers with different backgrounds and relationships between variables may be less sensitive to a potential self-selection bias.

Conclusions

In conclusion, we find no difference in cognitive beliefs about assessment between more and less experienced technology teachers. On the other hand, additional education in the fields of technology and engineering does relate to stronger cognitive beliefs concerning the ability to assess students’ learning of technological systems. Unsurprisingly, teachers with strong cognitive beliefs did not consider professional development to be as necessary as did those with weaker beliefs.

Understanding what influences cognitive beliefs about assessing students’ learning provides insights into how technology teachers view their teaching practices and, ultimately, the challenges that teacher educators and policymakers need to address in order to strengthen teachers’ beliefs in their own ability to teach and evaluate students’ learning (Koloi-Keaikitse, 2017). Further studies should investigate more factors that contribute to technology teachers’ cognitive beliefs about teaching and assessing students within the subject of technology, especially concerning the formative tools that were highlighted in this study: the generally widespread portfolio and log book as well as the technology-specific models and simulations. Further investigation is also needed into the actual assessment practices in compulsory school technology education.