Exploring formative assessment in primary school classrooms: Developing a framework of actions and strategies


Abstract

The importance of formative assessment in facilitating student learning has been well established in the literature. However, defining and implementing formative assessment in classroom settings is a rather complicated task. The aim of this study is to explore formative assessment, as implemented in primary classrooms in Cyprus, and to develop a framework of action for analysing and understanding formative assessment processes. The research was qualitative, interpretive, collaborative, and guided by the ethics of care. Four primary school teachers of the third and fourth grade participated in the study. The teachers differed in their teaching experience and gender. Data collection was based on non-participant classroom observations, teacher interviews and documentary analysis of children’s work for written feedback. The analysis of the data was carried out using the constant comparative method and revealed five distinctive processes of formative assessment: (a) Articulation/communication of expectancies and success criteria, (b) Elicitation and collection of information, (c) Interpretation of information/judgement, (d) Providing feedback, and (e) Taking action/regulation of learning. The analysis also pointed to the confusions arising from the various interpretations of the concept and the difficulties in implementing formative assessment effectively in classroom settings. Finally, implications of the findings for policy and practice are drawn and suggestions for further research are provided.

Keywords

Pupil assessment · Formative assessment · Assessment for learning · Assessment framework · Assessment practices

1 Introduction

Over the last decade, policy makers, teachers and educational researchers have had a growing interest in the development of formative assessment practices that promote and reflect student learning (Bell and Cowie 2001; Torrance 2001; Wiliam 2011). Formative assessment occurs during a teaching unit with the intent that the gathered information will be used to adjust future learning scenarios (Earl 2003). Consequently, formative assessment is distinguished by the fact that its main purpose is to aid or improve learning rather than simply to attribute a grade (Marshall and Drummond 2006). The idea of linking assessment to instruction and learning is not new, yet teaching and assessment have been viewed as separate entities for a very long time (Richard and Godbout 2000). This is because assessment is a concept that subsumes many varieties of activities and functions that work toward often disparate goals (Brookhart 2001). The debate currently seems to fall into two broad camps, both of which view assessment as a positive phenomenon, albeit from different perspectives. On the one hand, national policies oriented towards a particular view of raising academic standards and rendering the school system more accountable are focused on summative assessment; on the other hand, assessment theorists and educators largely focus on the potentially formative impact of assessment on pupil learning: on how careful observation, judgement and feedback about pupil strengths and weaknesses can assist the process of learning (Torrance 2001). The argument is that teachers are in the best position both to collect good quality data about students over an extended period of time and to make best use of it in their feedback (Harlen 2007).

Despite research findings suggesting that Cypriot teachers hold positive attitudes towards formative assessment (Kyriakides 1997), only a limited number of teachers actually implement such practices in their teaching (Creemers et al. 2012; Christoforidou et al. 2013). This finding is in line with international research suggesting that classroom assessment practice still appears to be outcome-oriented (Earl and Katz 2000; Lock and Munby 2000). In addition, although educational policy usually acknowledges the value and significance of formative assessment, student assessment prioritises summative assessment, which is politically more powerful and influential. In Cyprus in particular, although the Cyprus Ministry of Education and Culture (Ministry of Education and Culture 2004) suggests that teachers should make use of all kinds of assessment, summative and formative, teachers are not provided with any kind of guidance or training on how to assess primary school students formatively, and there are no instruments or a clear policy regarding how to assess pupils in order to enhance their learning (Kyriakides 2004). In this context, this paper aims to explore further the difficulties and complexities of implementing formative assessment in classroom settings in Cyprus, in an environment that is so heavily weighted toward summative assessment.

Although the distinction between formative and summative assessment has been familiar for 45 years (Scriven 1967), the meaning of these two terms has not always been well understood (James and Pedder 2006). Moreover, apart from the two main functions of assessment, other expressions related to the functions and purposes of student assessment, such as ‘assessment for learning’ and ‘diagnostic assessment’, further complicate teachers’ understanding of the concept. Thus, the next section elaborates on the different meanings and understandings of formative assessment that have been proposed in the relevant literature.

2 Definition and characteristics of formative assessment

The term formative assessment is not used consistently in the literature (Bennett 2011). This has resulted in a number of definitions of formative assessment. The way in which these definitions are understood, interpreted and manifested in practice often reveals misunderstanding of the principles that the original ideals sought to promote (Klenowski 2009). In particular, some authors see all classroom assessment as formative and discuss summative assessments primarily in terms of external assessments. Other authors agree that all classroom assessment can be formative, but only if teachers and students use the information for formative purposes, while others recognise that some classroom assessment can serve summative purposes too. Moreover, some authors claim that formative assessment refers to an instrument (e.g., Pearson 2005), as in a diagnostic test or an item bank from which teachers might create those tests (Wiliam and Thompson 2008), whereas other educators and researchers argue that formative assessment is not an instrument but a process (Popham 2008). In this view, the process produces not so much a score as a qualitative insight into student understanding (Shepard 2008). Taken together, as Bennett (2011) argues, formative assessment might be best conceived as neither a test nor a process, but some thoughtful integration of process and purposefully designed methodology.

Another term used almost interchangeably with formative assessment is assessment for learning. Black et al. (2003) make a distinction between these two terms by arguing that assessment intended to promote learning only becomes formative when evidence is actually used to adapt teaching work to meet learning needs. Another term that is often confused or used interchangeably with formative assessment is diagnostic assessment. An assessment could be considered diagnostic when it provides information about what is going amiss and formative when it provides guidance about what action to take (Wiliam and Thompson 2008). It is also important to note that not all diagnostic assessments are instructionally actionable. Black (1998, p.26) offers a somewhat different view, stating that: ‘… diagnostic assessment is an expert and detailed enquiry into underlying difficulties, and can lead to a radical re-appraisal of a pupil’s needs, whereas formative assessment is more superficial in assessing problems with particular classwork, and can lead to short-term and local changes in the learning work of a pupil.’

Such problems of definition are often further confounded by external policy changes (see, for example, Pollard et al. 1994; Torrance and Pryor 2001). Thus, it is important to distinguish formative assessment from other current interpretations of classroom assessment. Most authors agree that assessment can be considered formative only if it results in action by both teachers and students to enhance learning (Black and Wiliam 2006). For the purposes of this study, we adopted the definition of formative assessment provided by the Assessment Reform Group in the UK (2002, pp.1–2) as “the process of seeking and interpreting evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there”. In order to make the differences clear, it is useful to summarize the basic characteristics of formative assessment. According to Black and Wiliam (2009), formative assessment can be conceptualized as consisting of five key strategies: (1) clarifying and sharing learning intentions and criteria for success; (2) engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding; (3) providing feedback that moves learners forward; (4) activating students as instructional resources for one another; and (5) activating students as the owners of their own learning. Additionally, other characteristics found in the literature describe formative assessment as an ongoing, multi-faceted process, integrated into teaching and learning, that is carried out on a daily basis through teacher-pupil interactions (Earl 2003). In this process, teachers modify their instruction and activities, according to the assessment information, in order to improve learning processes and student outcomes. As Black et al. (2003) argue, formative assessment applies not to the assessments themselves, but to the functions they serve in supporting students’ learning and providing evidence that is used to adapt the teaching to meet learning needs. Taking this functional view, successful implementation of formative assessment depends on the learning approach and on the knowledge, skills and strategies that teachers use to carry out complex pedagogical processes (Webb and Jones 2009). From this perspective, several studies have demonstrated that, while formative assessment is desirable, it is not easy for teachers to achieve (e.g. Torrance and Pryor 2001; Marshall and Drummond 2006).

3 Problems in effective implementation

Although there is a growing literature reporting positive effects of formative assessment upon teaching practice and students’ outcomes, there is also a growing literature on the difficulties of introducing formative assessment in ordinary classroom settings. For example, Hall and Burke (2003) found that although teachers perceived formative assessment positively and acknowledged its importance, they faced several difficulties in implementing formative assessment practices effectively in their classrooms. In response to such findings, both the National Research Council (1996) and the National Council of Teachers of Mathematics (2000) in the USA have recommended that teachers develop and use formative assessment practices on a systematic basis. As the Office for Standards in Education (OFSTED) in England notes, “Although the quality of formative assessment has improved perceptibly, it continues to be a weakness in many schools” (1998, Section 5.6). Black (1996) also argues that formative assessment is still undervalued and underdeveloped. He claims that in Britain and in the rest of the world teachers do not use effective assessment practices while teaching. There are several reasons why successful implementation of formative assessment is still problematic, as elaborated below.

As discussed in the previous section, the various definitions and the consequent conceptual understandings of the concept have created confusion about what formative assessment really implies in terms of classroom practices (Klenowski 2009). Most teachers are familiar with summative assessment, and only a few implement formative assessment effectively in their classrooms (Black and Wiliam 1998). Several studies (e.g. Morgan 1996; Preece and Skinner 1999; Shen 2002) have shown how summative assessment requirements dominate the assessment practice of many teachers. In the context of USA primary classrooms in particular, it was found that teachers do not distinguish between formative and summative purposes (Bachor and Anderson 1994), and based on such findings, Shepard (2000) calls for a transformation of classroom assessment practices to support and enhance learning.

Moreover, effective implementation of formative assessment requires the development of new tools and changes to classroom practices (Black and Wiliam 2003). Such changes may be related to practical issues such as the increase in record keeping required by some formative assessment practices (MacPhail and Halbert 2005; Brookhart 2010). In addition, formative assessment is difficult to achieve because empirically derived models of learning are not generally available, and the shift in teacher practice required is large and may also involve changing teacher beliefs and values related to effective teaching and learning (Webb and Jones 2009).

In addition, the increasing policy emphasis on measuring academic standards and the need for evidence-based policy development have created a pervasive emphasis on summative evaluation for high-stakes purposes. Although summative assessment has been subject to severe challenge and its ability to improve the teaching and learning process has been questioned (Black and Wiliam 2009), educational accountability today is synonymous with student achievement outcome testing and the sanctions that accompany the results (Darling-Hammond 2004). National educational policies internationally have moved to create approved-level assessments and targets requiring schools and students to make adequate yearly progress. In turn, such national policies have pressed individual schools to meet student achievement targets on summative large-scale evaluations (Militello et al. 2010).

As many Organisation for Economic Co-operation and Development (OECD) countries are beginning to develop commonalities of understanding and practice in relation to formative assessment (Sebba 2006), the difficulties in effective implementation need to be identified and tackled by researchers and policy makers if formative assessment is to fulfill its promise (Baird 2010). This is important, as much of what is claimed for formative assessment is based on rhetoric rather than any real understanding of the processes involved (Gattullo 2000). As Torrance and Pryor (2001) argue, it is necessary to explore teachers’ daily practice systematically in order to facilitate the firm grounding of future programmes of change. Without systematic analyses of formative assessment, based on empirical research in classrooms, research evidence can only provide us with a limited understanding of the nature and process of formative assessment. From this perspective, this study aims to analyse the particular ways in which teachers understand and implement formative assessment, since this could make an important contribution to the development of a conceptually based framework of actions and strategies encompassing the relationship between teaching, assessment and learning.

4 Research aims

This study aims to contribute to the discussions related to teachers’ perceptions, understanding and actions in relation to formative assessment by providing an in-depth understanding, a thick description, of what primary school teachers in Cyprus do that could count as formative assessment practices and by explaining their underlying rationale. The intention is to engage critically with the theory of formative assessment (Black and Wiliam 2006) by investigating the emerging issues from a more practical and applied perspective. Within this main question, important aspects of formative assessment, such as teachers’ perceptions of formative assessment, their attitudes to feedback to students, and opportunities for student peer- and self-assessment, are also basic elements of the study. In particular, the objectives of this study are to:
  1. Explore what the teachers do that could count as formative assessment practices.
  2. Understand the teachers’ rationale for their actions and their attitudes towards formative assessment.
  3. Develop a framework of processes for the analysis of formative assessment.

The study took place in the Cyprus educational system. The Cyprus Ministry of Education and Culture (Ministry of Education and Culture 2004) suggests that teachers should make use of all kinds of assessment: diagnostic, summative and formative. The ministry particularly acknowledges the important role that formative assessment can play in learning. Teachers are expected to use a variety of techniques such as written tests, observation, communication and pupils’ self-evaluation. However, the ministry has not provided guidance for teachers on how to assess primary school students formatively, and there are no instruments or a clear policy regarding how to assess pupils in order to enhance their learning (Kyriakides 2004). Since the study reported in this paper was conducted in Cyprus, information about the context of the educational system of Cyprus is provided in the Appendix in order to enable an international readership to interpret the findings of the study. In the following section, the methods employed in the study, and particularly the participants, the data collection methods and the data analysis procedure, are described.

5 Methods

The study was based on a qualitative, interpretive and collaborative exploration of teachers’ practices and understanding of formative assessment. Four teachers of the third and fourth grade participated in the study, working in two public primary schools within the two biggest school districts of Cyprus. The teachers differed in their teaching experience and gender. Teacher 1, teaching a fourth grade class, was a man with 18 years of teaching experience. Teacher 2, teaching a third grade class, was a woman with 8 years of teaching experience. Teacher 3 was also a woman, with 11 years of experience, teaching a third grade class, and Teacher 4 was a man with 12 years of experience, teaching a fourth grade class. All four teachers had a university degree in Primary Education and a master’s degree in an education-related field, though not directly related to educational assessment. All four volunteered to participate in the study. The number of students in each class ranged from 21 to 25. Once access had been authorized, initial meetings introduced the four teachers to the project. In addition, initial observations were made in each class to note general instructional practices and classroom routines, and to familiarize the researchers, teachers and students with each other. Informed consent letters were also sent to the parents of the students of each class.

The research employed various methods for data collection: (a) individual semi-structured, in-depth interviews with each teacher shortly before the beginning and after the end of their lessons, (b) non-participant classroom observations and (c) documentary analysis of children’s work for written feedback. Classroom observations aimed to provide an understanding of the complex reality surrounding formative assessment in classroom settings and to ascertain what teachers actually did in the classroom that could count as formative assessment practices. The focus of the classroom observations was whole-class student-teacher interactions and, sometimes in small group situations, pupil-pupil interactions during which an assessment was conducted. A total of 24 lessons in various subjects of the primary school curriculum were observed at different times of the day, generating 16 h of systematic audio tape recordings and extensive field notes of classroom interactions. Note-taking was also used as a subsidiary activity to support the tape-recording and provided important contextual details (e.g. the nature of the task, grouping arrangements, timing).

Alongside the classroom observations, interviews with the four teachers were also conducted. An initial meeting at the beginning of the fieldwork introduced the teachers to the project and its general aims. In addition, semi-structured, tape-recorded interviews took place to document teachers’ views of the purpose, usefulness, relevance and importance of formative assessment. We were seeking to ascertain teachers’ perceptions of what they thought was implied by the term ‘formative assessment’ and how they were attempting to put it into practice in their classrooms. We were also interested in their understanding of the relationship between teaching, learning and assessment. Moreover, the interviews were structured around specific formative classroom assessment incidents, as these had been observed in their classrooms. Thus, the interview questions aimed at eliciting information about each assessment’s perceived task characteristics, the teacher’s perceived self-efficacy to meet the challenge posed by the task, the amount of effort expended and the reasons for that effort, the expected level of success and how the teacher felt about it on an overall basis. A total of 48 interviews were conducted with the four teachers. Finally, examination of children’s work for written feedback was carried out and photocopies of examples were taken.

In order to identify the different formative assessment processes, we focused on particular assessment incidents as the unit of analysis. Torrance and Pryor (1998) used the terms ‘assessment event’ or ‘assessment incident’ to refer to the teacher-pupil interactions during formative assessment in the classroom. The classroom assessment event as defined in this study includes the in-class assessment, the students’ and teachers’ preparation for it, and the feedback and action arising from it, considered as a whole unit. Thus, the focus here is on the assessment event as a unit of learning more than as a unit of inter-personal interaction. All recordings and interviews were transcribed in full and NVivo software was employed for the data analysis. In particular, the data were analysed using the constant comparative method, while two external researchers were also consulted on a regular basis when questions arose. In this way, the development of the conceptual framework of formative assessment was grounded in the data, with the data leading to the development of the theory. The next section describes the findings of the study and elaborates on each category of the theoretical framework.

6 Research findings

First, the findings related to teachers’ understanding and perceptions of formative assessment and their preparation for carrying out formative assessment activities are presented. Then, the formative assessment framework is described and elaborated.

6.1 Teachers’ understanding and perceptions of formative assessment

Initial meetings revealed that the teachers had positive perceptions of and values towards using formative assessment practices, but had a fairly narrow view of what actually constitutes formative assessment and the teacher’s role in it. They also had complex conceptions of assessment and claimed to use different forms of assessment to achieve different purposes. The term ‘formative’ was open to a variety of interpretations and often meant no more than assessment carried out frequently by the teacher. The student’s role in formative assessment was not mentioned, and formative assessment was conceived as assessment helping the teacher to identify areas where more explanation or practice is needed. The teachers were also unable to explicitly describe what they did in the classroom that could count as formative assessment.

Formative assessment has merit and can benefit both students and teachers… I cannot really describe all my practices which fall into formative assessment, I do several things …for example, I regularly assign application tasks to my students.

I believe that assessment is really important in everyday teaching, and especially formative assessment, which can provide evidence about student learning… I check on my students’ work regularly…

When asked to explain what they do in relation to formative assessment, the teachers referred to issues such as reporting to parents, extrinsically motivating students, organising group instruction, and raising many questions during their teaching. The underlying assumption was that, because formative assessment has to be carried out by teachers, all assessment by teachers is formative. This adds to the blurring of the distinction between formative and summative purposes and to teachers turning their own ongoing assessment into a series of ‘mini’ assessments, each of which is essentially summative in character.

6.2 Unanticipated formative assessment events

The teachers’ interviews provide evidence that formative assessments could have two forms: planned or anticipated and interactive or unanticipated formative assessment. A key difference between the two was that of planning and purpose. In planned formative assessment (e.g. a prepared oral test of short questions at the beginning of the lesson), the teacher has planned the assessment before the lesson begins. In interactive formative assessment, the teacher is responsive to events that arise during his or her interactions with students in the lesson. While the teachers are prepared for interactive formative assessment, e.g. by increasing the opportunities for interaction with students during the lesson, they do not know the details of when and what assessment will occur. As the teachers stated in their interviews:

Officially we have to plan assessment activities just like we plan for our teaching. But to be honest very few teachers are doing so and follow that in a strict manner. I use mainly oral questioning to assess my students.

I was planning to administer a small quiz at the end of the lesson, to evaluate the extent to which they master the lesson objectives, but from their questions I realize that they didn’t, well before the end of the lesson…I learned much more about their understanding from engaging into dialogue.

As Harlen and James (1997) argue, it is important to recognise that the reality of formative assessment is that it is bound to be incomplete, since even the best plans for observing activities or setting certain tasks can be torpedoed by unanticipated events. The above insights are important, as many people think that formative assessment is related to planned activities rather than to how one treats ongoing interactions and dialogue. A comparable study by Cowie and Bell (1999) proposed a model which also distinguishes between planned and interactive formative assessment. Based on the teachers’ responses, the latter is more demanding, and its practice is more fragile under stress.

6.3 The analytical framework

The framework, which emerged from the observational data and is to a certain extent supported by the data from the teacher interviews, is based on a sequence of five interrelated processes, which are common to every completed formative assessment incident: (1) communication of expectancies and success criteria, (2) elicitation and collection of information, (3) interpretation of information/judgement, (4) providing feedback and (5) taking action/regulation of learning. The third process, i.e., interpreting the information/judging, is a mental activity that could not be observed during classroom observations. Formative assessment, like all educational measurement, is an inferential process because teachers cannot know with certainty what understanding exists inside a student’s head (Pellegrino et al. 2001). However, the outcomes of this interpretation are observable through the provision of feedback (process 4) and/or the initiation of action to regulate teaching and learning (process 5). Although the development of the framework was grounded in the data, with the data leading to the development of the theory (Glaser and Strauss 1967), the resulting framework presents important similarities with other models proposed in the literature (e.g. Black and Wiliam 2009), which have been described earlier. In addition, the model is similar to an inquiry cycle model and, to some extent, an action research design, developed in the 1960s by John Elliott’s Action Research Network. Nevertheless, the purpose of the framework developed in this study is to understand and promote better implementation in classroom settings; thus it is a framework of action. It is not only about thinking of formative assessment, but aims to assist action and improvements in teaching practices. The analytical framework is presented in Fig. 1. A description of the analytical framework follows, complemented with teachers’ views explaining their intentions and actions.
Fig. 1

Theoretical framework of formative assessment

6.3.1 Process 1: Communication of expectancies and success criteria

The findings revealed a lack of specific criteria defining the minimum acceptable level of attainment against which a lesson’s or a task’s objectives might be judged. The teachers underlined this, stating that they often had no clear idea of the criteria by which they assess, and thus could not easily spell them out. Some teachers blamed the educational system for not making the criteria in each unit explicit, adopting an external locus of control. Referring to criteria consistently was not considered easy and was seen as a time-consuming activity.

I am not sure whether I have the time to make the success criteria explicit to children. They are doing the best they can…

…I rarely spend time on commenting quality criteria. Firstly, the ministry should have made those criteria explicit for different levels of students and secondly, when you have 40 min to teach a new topic, you don’t have many options.

One of the teachers mentioned the problem of over-emphasizing simply completing a task rather than doing it well:

I usually push my students, while working in the classroom to do more, instead of doing better…also you always have the time pressure…I believe that the lack of specific and predetermined criteria is the main reason for paying more attention to the quantity rather than on the quality of student work.

Another teacher saw as problematic the possibility that teachers might be specifying aspects of quality ‘beyond the attainment of certain children’. Thus, he argued that the expression of quality criteria in particular had to be accomplished more by interaction with individual pupils through questioning and feedback, rather than by articulation to the whole class.

The main reason I don’t usually comment on the specific success criteria of a task in the classroom is that students in my class come from various backgrounds, with various abilities and skills. So, if you set the criteria for the best students, then I don’t know how many of the rest of the students would be able to reach that level of success or even get close to the predetermined criteria.

Indeed, as Torrance (2001) argues, an important insight into the setting of criteria is that the process is not just confined to the start of a lesson, but rather is achieved through dialogue during a process of literal or metaphorical ‘drafting’. Providing opportunities to improve an initial attempt at a task both extends the learning event and creates the conditions for continuous clarification of criteria.

6.3.2 Process 2: Eliciting and collecting information

The second element of the formative assessment framework relates to eliciting and collecting information on student understanding. The teachers used a wide variety of practices to collect evidence, depending on the subject, the particular classroom circumstances and the intended purposes. These practices included oral questioning; class or individual discussions; informal observation and commenting on children’s performance; and student interaction with the teacher or peers. They also included a variety of written exercises, such as worksheets, textbook assignments, text-embedded tasks and teacher-made tests. In relation to classroom organization and the actual conduct of teacher formative assessment, most assessment took place in ‘focus groups’, largely because all classes were organized in student groups. Typically, new work was introduced to the whole class, and children then pursued particular tasks as individuals and/or collaboratively in their groups. Teachers reported (and were indeed subsequently observed to do so) that they would often sit or stand with a selected group (their focus group for teaching and assessment purposes) while the rest of the class got on with their work. In these focus groups, the teacher often appeared to be using a Vygotskian guided discovery approach to learning: designing a flexible task with reference to the attainment targets, observing and questioning the group as they worked, but also intervening to support learning when appropriate.

Overall, unstructured observation was the foundation of the teachers’ formative assessment and a fundamental way in which they obtained information about what children know, understand and can do. Teachers observed the children working, listened to them, gave explanations and made statements. They also kept an eye on children’s work in progress, as well as on their behaviour and interactions. Based on the interviews, it was important for the teachers to avoid rushing into a judgement. One of the teachers described the process as taking “time to actually observe, not just see, what’s going on and then intervene”. Interestingly, however, when asked at the beginning of the fieldwork, the teachers did not mention observation among their approaches for collecting assessment evidence.

The ability to observe a student was enhanced by the use of questioning techniques. The content and targets of the questions shifted as the lesson progressed. Questions at the beginning of instruction were aimed mainly at linking the previous lesson with the new one. During instruction, teachers asked questions mainly to check whether children were following and understanding. At the end of the lesson, questions were used to review, to check understanding and, rarely, to facilitate the transfer of the new knowledge to different circumstances; in short, to check whether the pupils had attained the lesson’s objectives. All teachers noted that asking different forms of questions was very important. One of the teachers noted that many questions regarded by teachers as just routine checking “might seem threatening to students”. He talked of needing ‘to develop the ability to stand back and let the children talk and clarify without interrupting’. He reported that he had adopted an approach where questions were designed to ‘point the way’, contributing over time to developing ‘the confidence in children to see questioning as helpful and non-threatening’. Similarly, another teacher reported that children in her class distinguished between ‘helping’ and ‘testing’ questions, and articulated a need for greater transparency of classroom processes, whereby the differing intentions behind questions are made more explicit:

I prefer to ask open-ended questions, with the intention of having the student demonstrate their understanding by expanding their critical thinking and relating the specific knowledge to everyday situations.

The same teacher also reported that she had set up specific tasks that involved children in questioning each other, so that their talk was criteria-oriented but not dominated by the need to provide the answer they thought the teacher wanted to hear. This, however, was not supported by the observational data, which revealed that only one of the teachers, when questioning students, used phrases such as “I am interested in your thinking…”, “Please help me understand. Suppose you are the teacher and I am your pupil.”, “Sometimes when I have difficulties with a problem, I break it down into small steps. Let’s do that here and find out…” and “I like it when you take the time to think…”. The other three teachers regularly asked closed, low-level questions, such as those recalling facts, rules, simple principles and dates. Very few of the recorded questions were open-ended ones asking students to demonstrate their understanding, which could have further promoted student learning. Also, too often the teachers asked rhetorical questions, e.g., “Did everyone get that?”, with little potential to provide an actual understanding of the students’ attainment.

Apart from questioning, teachers made extensive use of written assessments such as tests, exercises and other tasks embedded in pupils’ textbooks. As well as the daily textbook tasks, in some subjects there was a revision test on completion of a course of teaching units, published by the Ministry of Education and Culture, to gauge the extent to which pupils had mastered the material taught so far. Although teachers conceived this to be part of their formative assessment practices, it was actually more of a summative assessment taking place at the end of the teaching sequence. An important issue during the observations was the expectation that all students would work on the same tasks and attain the same objectives, regardless of differences in their ability. Since no provision for differentiation was made according to pupils’ individual strengths and weaknesses, the material targeted the average pupil. Thus, high achievers often felt bored, having finished their tasks quickly but having to wait for the rest of the class to complete them, while the less academically advanced children were constantly striving to finish their work in time. As the teachers pointed out, the need for differentiation in the assessment tasks placed additional pressure on them whenever they wished to develop written tasks or exercises of their own, and raised doubts about whether those tasks were appropriate, especially for pupils who regularly perceived them as either too difficult or insufficiently challenging.

As time is very limited, I usually assign whole class multiple choice quizzes. By having students raise their hands to indicate which answer they picked from a multiple choice quiz, I can immediately see how well we have conveyed the material.

The teachers usually developed simple exercises, worksheets and quizzes, which typically included multiple-choice, fill-in-the-blank and short-answer questions. As one of the teachers reported, ‘formative quizzes allow us to check immediately students’ understanding of key points or concepts’. Another teacher also stated that:

…to gain a better understanding of student thinking, I ask students who chose different answers on a multiple choice exercise to explain why they selected those answers. A variation of this approach would be to have students choose answers individually and then share and explain their answers with their partner.

The same teacher also argued for letting the children reflect on the new knowledge:

…In addition to using formative quizzes, right after the lesson, I sometimes ask students to take five minutes to summarize or list the main points they believe we were trying to make. Their responses can serve as the basis for class discussion, allowing me to evaluate the extent of student understanding.

6.3.3 Processes 3 and 4: Interpretation of information and provision of feedback

For the purposes of this study, both oral and written feedback by the teachers was explored. Teachers’ oral reactions to children’s efforts and products were the most overt aspect of formative assessment. Such comments varied from praise to criticism, and sometimes took a neutral stance without specific comments or indications. Typically, teachers’ verbal comments on academic aspects were positive, aiming to encourage children’s learning effort, whereas comments in response to behavioural aspects were negative, intended to maintain order and avoid disruptiveness. For example, one of the teachers publicly criticized a girl who was not paying attention to the application tasks and publicly congratulated another girl for solving a problem in mathematics. Teachers saw such comments as conveying a positive message, intended to encourage a desirable outcome or to discourage an undesirable one.

Numerous para-linguistic/non-verbal expressions of feedback were also observed, ranging from a glance or a frown to a nod or the mere movement of a finger. These often accompanied verbal expressions, conveying a particular meaning to actions. As one of the teachers said, such non-verbal responses are significant, ‘since they are used, often unconsciously, to keep the flow of teaching smooth and without interruptions’. However, all teachers agreed that the contribution of non-verbal feedback to formative assessment aimed at assisting learning was limited.

It was a common practice for all teachers to draw faces or stars on their students’ work, something which pupils seemed to understand and enjoy. Such symbols, i.e. faces, signatures, stars, and ticks, were found to be unique codes for each class; they were part of the individual assessment practice in each class and sometimes the same symbol could have a different meaning in another classroom. For example, one of the teachers drew a smiling face under a girl’s work. When asked to elaborate on this action, the teacher said:

This comes under a well-established policy in my class. It means that you’ve got all answers right, without a single mistake. That’s why the face is happy and smiles at you.

For the most part, written feedback was characterized almost entirely in terms of short-term rewards (praise, ‘smiley face’ stickers etc.) rather than detailed comments on how to develop an idea further or help with particular problems. Exactly why school policies and practice on feedback in the classroom have focused on rewards, rather than on extended dialogue about the quality of the work produced, is an issue that our teachers attributed to the ‘pragmatic’ constraints of time and class size. It also seems to derive in part from teacher perceptions that take for granted the efficacy of behaviourist reinforcement systems. According to one teacher:

If a student gets a star, for example, he/she will try to do equally well in future assignments so as to receive praise in a similar manner. In addition, other students will also try to imitate him/her and try harder to get a star as well.

Apart from symbols and marks, one teacher also made use of grades. As she explained, she used grades because she believed that “children and parents understand them better”. Her most regular grading method was to express the number of correct answers as a fraction of the total. She argued that “a short comment wouldn’t say anything about the student’s progress. However, a number can show how much the student has improved since his/her last work, so that he/she can see at once whether they have got better or not”. The same teacher argued that she used marks and grades mainly to ‘motivate and to convey to children and their parents how much progress has been made’.

Parents are kept informed…without any marking how will they be made aware of their children’s progress? …also marking prepares students for the marking system established in gymnasium and lyceum (i.e., secondary school).

On the other hand, another teacher advocated description of the children’s achievements and individual abilities in line with the ideas of child-centered education:

No marks, we must explain in detail to the children their deficiencies, and then help them to overcome them. We have to emphasize the good points of children’s work.

The above extract indicates the teachers’ confusion in relation to the different functions of assessment. Summative and formative assessment were not explicitly distinguished, and teachers seemed to neglect the importance of formative assessment in promoting learning. Typically, the comments of all teachers were too general and short, providing no explanation of the strengths and weaknesses of the work done or of how improvements could be made or maintained. Sometimes, marks or grades were accompanied by general and brief comments such as: ‘Well done’, ‘You need to improve it’, ‘Good’, ‘This is better’, ‘Very Good’, ‘You have to try harder’. Simply ticking or signing the pupils’ work was also common. However, as the teachers themselves reported, children want and need comments to be specific. This specificity was not found in any of the four classrooms observed. The following comments from students’ workbooks illustrate this finding:

Well done!

You still need to work harder.

I was expecting more from you!

This is poor, you must pay more attention

Could you be more careful next time? Same mistakes repeated

(written comments on students’ work)

Sometimes teachers considered the pupil’s own past progress as a point of reference and ‘interpreted’ the evidence of the new work against it. A child was reported as doing better or worse than before. From the examination of children’s workbooks, we observed in some cases the following comments:

Well done Costa! Your composition is much better than the previous one. Keep on the good work.

You only made three mistakes Ioanna. Much better than before. Keep it going!

Last time you did better and it was neater also.

(Teachers’ comments on students’ exercise books)

As one of the teachers commented, “such kind of feedback aims to help individual students understand the difference between their present and past achievements, to identify their weaknesses”. All teachers reported that this approach helped them to avoid competition between children. However, it appeared to be rather rare since, as all the teachers said, lack of time and the high teacher–pupil ratio prohibited its frequent use.

I know it does yield some positive effects on student learning. I only wish that I had some more time at my disposal…we have to take care of so many issues….

The findings described above highlight the ambivalent attitude of the teachers towards making and communicating overall judgements about work. Judgements were seen to place emphasis on products rather than process, but, as one of the teachers pointed out, perhaps paradoxically, teachers’ judgements and the provision of feedback are very important in indicating the level of achievement needed to “build the reference framework for self-assessment”.

6.3.4 Processes 3 and 5: Interpretation of information and regulation of learning

The last processes included in the formative assessment framework of actions and strategies refer to teacher decisions and the consequent actions taken to moderate teaching and learning. Regulation of learning is the process of altering future learning activities or strategies based on collected information (Allal and Pelgrims Ducrey 2000). The information collected through formative assessment practices should then be fed back into the teaching–learning process. The teachers in our study used the collected information to reach diverse pedagogical decisions, which included leaving the student in his/her regular learning activities when everything was going well, providing the student with feedback and asking him/her to repeat the task, or modifying the task: making it harder if everybody succeeded, or more accessible in cases of persisting difficulties. Nevertheless, in the last case, it was not just a question of more, harder or easier. Sometimes the teacher needed to replace the task with a totally different, new one, so as to fit within the student’s zone of proximal development. In such cases, teachers were faced with decisions about possible adjustments to future learning activities based on students’ performance in relation to the pursued objective. The observation findings indicated major differences in the behaviour of the four teachers. One of the teachers was implicitly evaluating her instruction and modifying it to a certain extent according to the reactions, hints and signals she received from the children. This teacher, unlike the others, seemed to have captured the spirit of formative assessment (c.f. Marshall and Drummond 2006), whereas the others were still working at the level of the letter of formative assessment. As she commented:

Whenever you see blank eyes, something’s gone wrong and you have to react immediately, either by stopping and repeating what you’ve taught, or by shifting the activity to attract their attention and interest again.

…one may think that they (the students) understood but when they try to put something on paper, it becomes obvious that they haven’t. At which point I intervene again…if necessary I modify what I want them to do so that they’re more able to cope with it.

From the above extracts, the informal ways in which this teacher collected evidence and, perhaps unconsciously, assessed her own teaching effectiveness become evident, as well as the need to make decisions in order to maintain pupils’ interest. In this class, pupils’ raised hands, or the ‘light in their eyes’ (Shipman 1983) in response to questions about something they had been taught, signalled to the teacher how well the instruction was going. The teacher also gained constant feedback from the children by ‘observing their reactions, their body language, as well as from what they said, in order to take immediate decisions on how to proceed’ in her instruction. Another teacher commented on the same issue:

… Whenever I realize that they haven’t understood something, I try to alter the task in order to make it more comprehensible or more feasible for them to tackle.

However, the observational data did not support similar claims made by the other three teachers. Most of the time, these teachers responded to children’s needs very cursorily and seemed more concerned about time constraints, wanting to get on with their instruction and cover the expected school-year curriculum. For the most part, they adhered to their initial planning during teaching, moving quickly through the various tasks without paying much attention to evaluating students’ level of understanding. For example, one of the teachers, in her interview right after her lesson, highlighted the fact that her expectations had not been realized. She had expected the students to be familiar with some basic ideas and was surprised that they were not:

… I was quite convinced they knew the answers and they ought to, since we have done those things recently. I am sure that when they go over those things again, they will remember everything.

I have to finish teaching the content of the book by the end of the school year, and we are already quite behind.

From the above interview extracts, one could conclude that the emphasis is placed on covering the planned curriculum rather than on student learning. However, teaching is about what students learn, not what the teacher presents, and the main purpose of education is to promote understanding, not short-term remembering. Thus, if understanding is to be assessed, methods are required that involve learners in using their knowledge and linking it to real contexts. This view of assessment eschews the model in which the teacher collects the evidence, makes judgements on the basis of that evidence, and then determines the events that follow; such an alternative view, however, was not observed to any great extent in our study.

7 Discussion

This study has utilized a qualitative-interpretive approach to explore how teachers perceive, interpret and implement formative assessment in Cyprus primary classrooms. Based on the research findings, a formative assessment framework of actions and strategies has been generated, consisting of a sequence of five interrelated processes. In addition, important insights into teachers’ understanding and practice have been highlighted. In particular, it was found that the teachers had positive perceptions of formative assessment and acknowledged that it is an important element that could promote effective teaching and learning. At the same time, however, the observational data revealed a number of weaknesses in teachers’ formative assessment practices. This is in line with previous research findings noting that, while formative assessment is desirable, it is not easy for teachers to achieve (Torrance and Pryor 2001; Hall and Burke 2003; James et al. 2007). The difficulty in effective implementation could partly be attributed to the fact that, for the teachers in our study, formative assessment does not yet represent a well-defined set of practices. As Bennett (2011) argues, some of the misunderstandings, difficulties and practical challenges that teachers face derive from residual ambiguity in the definitions of formative assessment. The teachers tended to embrace the concept while in reality implementing a set of practices that were rather mechanical and lacked the active engagement of their students.

Based on the framework developed in the study, a number of weaknesses in teachers’ formative assessment practices and understanding have been identified. Firstly, teachers demonstrated a lack of quality criteria and could not make explicit what the purpose of certain activities was and what would count as doing them well. Even where criteria existed, the teachers failed in most cases to make them explicit. Teachers underlined this finding by saying that they often had no clear idea of the criteria by which they assessed, and could not easily spell them out. A possible explanation is that most of the assignments and tasks were taken from the school textbooks in a rather mechanistic way. However, according to Torrance (2001), time spent articulating criteria at the beginning of an activity can mean less ‘trouble-shooting’ later, as the criteria become a focal point of feedback on which the teacher can build, underlining their importance. In line with the lack of explicit quality criteria, no techniques were used by the teachers to encourage students to engage in self-assessment. All four teachers were unwilling to pass some of their assessment-control power over to students by engaging them in self-assessment practices. The assessment process was highly teacher-centered, with the teacher being the only agent providing feedback and judgement.

The alternative approach is for students to develop skills in evaluating the quality of their own work, especially during the process of production. This, however, also implies changes in what pupils do and how they might become more involved in assessment and in reflecting on their own learning. Indeed, questioning, giving appropriate feedback and reflecting on criteria of quality can all be rolled up in peer and self-assessment (Kyriakides 1999). For example, in a study by Fontana and Fernandes (1994), primary school pupils were progressively trained to carry out self-assessment that involved setting their own learning objectives, constructing relevant problems to test their learning, selecting appropriate tasks and carrying out self-assessments. Over the period of the experiment, the learning gains of this group were twice as great as those of a matched ‘control’ group. Other findings also demonstrate the importance of pupils understanding the success criteria in different curriculum areas and at different stages in their development as learners (James 2006). Thus, the instructional system must make explicit provision for students themselves to acquire evaluative expertise. Although the teachers who participated in this study believed strongly that formative assessment is crucial in promoting learning, they were not prepared to persist and overcome difficulties, for example in establishing the appropriate classroom culture for student self-evaluation.

Secondly, it was found that teachers used a variety of practices to collect information on student attainment, with unstructured observation and questioning being the most frequently used methods. According to Airasian (1991), through unplanned observations most teachers make note of idiosyncratic, unsystematic happenings in the primary classroom, which they see, mentally record and interpret. Likewise, Torrance (2001) underlines the importance of the teacher being a trained classroom observer. However, teachers seemed to make little or no use of certain types of questioning and negotiation that could feed into formative assessment and enhance the learning process. As James et al. (2007) argue, questions phrased simply to establish whether pupils know the correct answer are of little value for formative purposes. The findings revealed that some formative assessment actions (i.e. whole-class questioning, correcting, judging) were more common than others, at the expense of those that could be considered more beneficial for learning, e.g. observing processes, metacognitive questioning, and observing and questioning individual students (James and Pollard 2011).

It must be noted here that extensive reflection on the detailed progress of each individual pupil was not considered feasible, due to the number of students in each class. Although the teachers described formative assessment as essentially very important for student learning, they also perceived it as a time-consuming process that could result in a heavy record-keeping burden. Thus, the focus of attention tended to be the class or, in most cases, student groups, rather than the individual child. Teachers did indeed focus on individuals in certain circumstances, and saw the benefits of doing so, but this could not be sustained at such a level of detail over a significant period of time. In whole-class teaching situations, however, pupil misinterpretations cannot be explored in detail since, as one teacher stated, “to spend time focusing on one child’s responses would risk losing the attention of the majority of the class”. This, in turn, may lead to significant misjudgements about pupil understandings and achievements. If learning is to be secure, superficially ‘correct’ answers need to be probed and misconceptions explored. In this way, pupils’ learning needs can be diagnosed (James and Pedder 2005, 2006).

Thirdly, the data reveal a tension between what the teachers claimed to do and what they actually did regarding the provision of descriptive feedback to students. This is important, as feedback is an essential component of formative assessment when the intention is to support learning (Bell and Cowie 2001). The rather general and vague comments made, without specific guidance as to how the student could improve, indicate that the teachers had only a vague impression of the quality criteria, as previously discussed. Feedback was usually provided through short-term rewards rather than detailed analysis of problems and suggestions for improvement, with little explanation of what the strengths and mistakes were and how improvements could be made or maintained. Such vague comments do not provide children with beneficial information about where they went wrong and do not encourage them to improve. The use of grades by one of the teachers was a notable and rather unexpected finding, given that the school inspectorates of the Ministry of Education and Culture in Cyprus do not encourage the use of marks in primary schools. As Black et al. (2003) argue, feedback is always important, but it needs to be approached cautiously because research draws attention to potential negative effects. These arise when feedback focuses on pupils’ self-esteem or self-image, as is the case when marks are given, or when praise focuses on the person rather than the learning. Praise can make pupils feel good, but it does not help their learning unless it is explicit about what the pupil has done well (Swaffield 2008).

The data related to the final process of the assessment framework, i.e. the regulation of learning according to the collected and interpreted information, demonstrate interesting differences between teachers’ perceptions and actions. Although all four teachers had fairly interesting ideas and good intentions about taking action to regulate their teaching, only one teacher was able to operationalize these ideas. This is important, as a characteristic of effective teachers is their ability to analyse the learning requirements and class conditions in order to identify alternative activities and procedures that might remediate student difficulties, and to design supplemental, remedial or enrichment activities (Kyriakides et al. 2009). The other three teachers acknowledged that assessment information should be obtained from pupils and then used for the planning and regulation of teaching. However, how they might consistently do this, i.e. actually use assessment data to regulate their teaching according to the needs of the children in their class, was not clear to them. It seems that teachers organized and implemented instructional programmes based on their knowledge of the subject matter, the availability of equipment and time constraints, rather than the needs and abilities of their students. Indeed, the data highlight that teachers often moved on to the next topic without taking much account of the information on students’ attainment. Little opportunity was given to altering or changing instruction on the basis of assessment findings. Although in many cases the collected information on student learning elicited immediate feedback, such information was not fully exploited for the potential insight it might have provided into the learning process.

In all classes, learning seemed to be predicated much more on what might be characterized as a Piagetian model, overlain with elements of behaviourism (reinforcement). Work was provided, with only vague reference to the attainment targets, at what the teacher thought was an appropriate level of difficulty for the children’s stage of development. However, there was much less positive intervention, and feedback was often of a ‘classroom management’ variety, offering clarification or further instruction on how to proceed rather than engaging in extended dialogue about the learning intentions underlying the task.

Taking into consideration the findings of this study, we could argue that, in order to realize the full potential of formative assessment as “a teaching strategy of very high leverage” (Hargreaves 2004, p. 24), it is essential to provide training, developmental opportunities and support to teachers, so as to enable them to use assessment in a genuinely formative way. This is essential because teachers will not take up attractive-sounding ideas, albeit based on extensive research, if these are presented as general principles that leave entirely to them the task of translating them into everyday practice (Black and Wiliam 1998, 2006). This, of course, is not an easy task. As Shavelson (2008, p. 294) argues, referring to his experience of creating, implementing and studying the effects of formative assessment, “…formative assessment, like so many other education reforms, has a long way to go before it can be wielded masterfully by a majority of teachers to positive ends.” The challenge is to support and facilitate teachers in capturing not only the letter but, most importantly, the spirit of formative assessment (c.f. Marshall and Drummond 2006) and in improving their understanding and practice during the teaching and learning process accordingly.

The findings of the study also have implications for future research. In particular, future studies may utilize the analytical framework that emerged in this study in three different ways, corresponding to the three functions of assessment in education. Diagnostically, the framework can be used for exploring formative assessment practices and identifying weaknesses and priorities for improvement, something that this research has performed to some extent. Summatively, it can be used to evaluate formative assessment processes in the Cyprus educational context as compared with other OECD countries. Most importantly, it can be used formatively, for developing and improving teachers’ formative assessment practices through appropriate intervention and professional development programmes (Antoniou and Kyriakides 2011, 2013). In this way the present study could be seen as the ‘reconnaissance’ stage of future studies aiming to improve teachers’ formative assessment practices. In addition, the impact of formative assessment practices on student learning could be further explored, and the connections between formative assessment and student learning further analysed. Future research could also explore not only teachers’ but also pupils’ perceptions and understanding of the particular formative assessment practices in which they were involved. This might reveal the extent to which teachers and pupils share a common understanding of the nature and purpose of formative assessment processes, and the way in which the outcomes of assessment are constructed through teacher-pupil and pupil-pupil interactions.

It must be noted that it is not the intent of this study to deliver any kind of final statement on teachers’ practices and rationales of formative assessment in every educational setting. To attempt to do so would be to fall prey to several fallacies, not the least of which is the failure to recognise the context-dependent nature of every classroom and every teacher-student relationship. The four participating teachers are particular to their time and place and certainly do not represent all teachers in Cyprus. Although no legitimate attempt can be made to generalize from these data, overall, the findings of this study may suggest trends and approaches that could lend themselves to a more precise definition and improvement of formative assessment in the future. It is in this exploratory vein that the report of the findings and their interpretations is made.

References

  1. Airasian, P. W. (1991). Classroom assessment. New York: McGraw-Hill.
  2. Allal, L., & Pelgrims Ducrey, G. (2000). Assessment of—or in—the zone of proximal development. Learning and Instruction, 10(2), 137–152.
  3. Antoniou, P., & Kyriakides, L. (2011). The impact of a dynamic approach to professional development on teacher instruction and student learning: results from an experimental study. School Effectiveness and School Improvement, 22(3), 291–311.
  4. Antoniou, P., & Kyriakides, L. (2013). A dynamic integrated approach to teacher professional development: impact and sustainability of the effects on improving teacher behavior and student outcomes. Teaching and Teacher Education, 29(1), 1–12.
  5. Assessment Reform Group (2002). Assessment for learning: 10 principles. Available on the Assessment Reform Group website: www.assessment-reform-group.org.uk.
  6. Bachor, D. G., & Anderson, J. O. (1994). Elementary teachers’ assessment practices as observed in the province of British Columbia, Canada. Assessment in Education, 1(1), 63–94.
  7. Baird, J.-A. (2010). Beliefs and practice in teacher assessment. Assessment in Education: Principles, Policy and Practice, 17(1), 1–5.
  8. Bell, B., & Cowie, B. (2001). Formative assessment and science education. Dordrecht: Kluwer.
  9. Bennett, R. E. (2011). Formative assessment: a critical review. Assessment in Education: Principles, Policy and Practice, 18(1), 5–25.
  10. Black, P. (1996). Formative assessment and the improvement of learning. British Journal of Special Education, 23(2), 51–56.
  11. Black, P. (1998). Formative assessment: raising standards inside the classroom. School Science Review, 80(291), 39–46.
  12. Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
  13. Black, P., & Wiliam, D. (2006). Developing a theory of formative assessment. In J. Gardner (Ed.), Assessment and learning (pp. 81–100). London: Sage.
  14. Black, P., & Wiliam, D. (2009). Developing a theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.
  15. Black, P., & Wiliam, D. (2003). In praise of educational research: formative assessment. British Educational Research Journal, 29(5), 623–637.
  16. Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: putting it into practice. Buckingham: Open University Press.
  17. Brookhart, S. M. (2001). Successful students’ formative and summative uses of assessment information. Assessment in Education, 8(2), 153–170.
  18. Brookhart, S. M. (2010). Formative assessment strategies for every classroom (2nd ed.). Alexandria: ASCD.
  19. Christoforidou, M., Kyriakides, L., Antoniou, P., & Creemers, B. P. M. (2013). Searching for stages of teacher’s skills in assessment. Studies in Educational Evaluation. doi: 10.1016/j.stueduc.2013.11.006.
  20. National Research Council. (1996). National science education standards. Washington: National Academy Press.
  21. Cowie, B., & Bell, B. (1999). A model of formative assessment in science education. Assessment in Education, 6(1), 101–116.
  22. Creemers, B. P. M., Kyriakides, L., & Antoniou, P. (2012). Teacher professional development for improving quality of teaching. New York: Springer.
  23. Darling-Hammond, L. (2004). Standards, accountability, and school reform. Teachers College Record, 106(6), 1047–1085.
  24. Earl, L. (2003). Assessment as learning. Thousand Oaks: Corwin.
  25. Earl, L., & Katz, S. (2000). Changing classroom assessment: teachers’ struggles. In N. Bascia & A. Hargreaves (Eds.), The sharp edge of educational change: teaching, leading and the realities of reform. London: Falmer.
  26. Fontana, D., & Fernandes, M. (1994). Improvements in mathematics performance as a consequence of self-assessment in Portuguese primary school pupils. British Journal of Educational Psychology, 64, 407–417.
  27. Gattullo, F. (2000). Formative assessment in ELT primary (elementary) classrooms: an Italian case study. Language Testing, 17(2), 278–288.
  28. Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: strategies for qualitative research. Chicago: Aldine.
  29. Hall, K., & Burke, W. (2003). Making formative assessment work: effective practice in the primary classroom. Maidenhead: Open University Press.
  30. Hargreaves, D. (2004). Personalizing learning 2: student voice and assessment for learning. London: Specialist Schools Trust.
  31. Harlen, W. (2007). Assessment of learning. London: Sage.
  32. Harlen, W., & James, M. (1997). Assessment and learning: differences and relationships between formative and summative assessment. Assessment in Education, 4(3), 365–380.
  33. James, M. (2006). Assessment, teaching and theories of learning. In J. Gardner (Ed.), Assessment and learning (pp. 45–60). London: Sage.
  34. James, M., & Pedder, D. (2005). Professional learning as a condition for assessment for learning. In J. Gardner (Ed.), Assessment and learning (pp. 27–44). London: Sage.
  35. James, M., & Pedder, D. (2006). Beyond method: assessment and learning practices and values. The Curriculum Journal, 17(2), 109–138.
  36. James, M., & Pollard, A. (2011). TLRP’s ten principles for effective pedagogy: rationale, development, evidence, argument and impact. Research Papers in Education, 26(3), 275–328.
  37. James, M., McCormick, R., Black, P., Carmichael, P., Drummond, M.-J., Fox, A., MacBeath, J., et al. (2007). Improving learning how to learn: classrooms, schools and networks (3rd ed.). London: Routledge.
  38. Klenowski, V. (2009). Assessment for learning revisited: an Asia-Pacific perspective. Assessment in Education: Principles, Policy and Practice, 16(3), 263–268.
  39. Kyriakides, L. (1997). Influences on primary teachers’ practice: some problems for curriculum change theory. British Educational Research Journal, 23(1), 39–46.
  40. Kyriakides, L. (1999). Research on baseline assessment in mathematics at school entry. Assessment in Education: Principles, Policy and Practice, 6(3), 357–375.
  41. Kyriakides, L. (2004). Investigating validity from teachers’ perspective through their engagement in large-scale assessment: the emergent literacy baseline assessment project. Assessment in Education: Principles, Policy and Practice, 11(2), 143–165.
  42. Kyriakides, L., Creemers, B. P. M., & Antoniou, P. (2009). Teacher behaviour and student outcomes: suggestions for research on teacher training and professional development. Teaching and Teacher Education, 25(1), 12–23.
  43. Lock, C., & Munby, H. (2000). Changing assessment practices in the classroom: a study of one teacher’s challenge. Alberta Journal of Educational Research, 46(3), 267–279.
  44. MacPhail, A., & Halbert, J. (2005). The implementation of a revised physical education syllabus in Ireland: circumstances, rewards and costs. European Physical Education Review, 11, 287–308.
  45. Marshall, B., & Drummond, M. J. (2006). How teachers engage with assessment for learning: lessons from the classroom. Research Papers in Education, 21(2), 133–149.
  46. Militello, M., Schweid, J., & Sireci, G. S. (2010). Formative assessment systems: evaluating the fit between school districts’ needs and assessment systems’ characteristics. Educational Assessment, Evaluation and Accountability, 22, 29–52.
  47. Ministry of Education and Culture. (2004). The new curriculum. Nicosia: Ministry of Education and Culture.
  48. Morgan, C. (1996). The teacher as examiner: the case of mathematics coursework. Assessment in Education, 3(3), 353–376.
  49. National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston: National Council of Teachers of Mathematics.
  50. Office for Standards in Education. (1998). School evaluation matters (raising standards series). London: OFSTED.
  51. Pearson (2005). Achieving student progress with scientifically based formative assessment: a white paper from Pearson. http://www.pearsoned.com/RESRPTS_FOR_POSTING/PASeries_RESEARCH/PA1.%20Scientific_Basis_PASeries%206.05.pdf (accessed October 28, 2012).
  52. Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: the science and design of educational assessment. Washington: National Research Council.
  53. Pollard, A., Broadfoot, P., Croll, P., Osborn, M., & Abott, D. (1994). Changing English primary schools: the impact of the education reform act at key stage one. London: Cassell.
  54. Popham, W. J. (2008). Transformative assessment. Alexandria: ASCD.
  55. Preece, P. F. W., & Skinner, M. C. (1999). The national assessment in science at Key Stage 3 in England and Wales and its impact on teaching and learning. Assessment in Education, 6(1), 11–26.
  56. Richard, J. F., & Godbout, P. (2000). Formative assessment as an integral part of the teaching-learning process. Physical and Health Education Journal, 66(3), 4–13.
  57. Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagne, & M. Scriven (Eds.), Perspectives of curriculum evaluation. Chicago: Rand McNally.
  58. Sebba, J. (2006). Policy and practice in assessment for learning: the experience of selected OECD countries. In J. Gardner (Ed.), Assessment for learning: policy and practice. London: Sage.
  59. Shavelson, R. J. (2008). Guest editor’s introduction. Applied Measurement in Education, 21(4), 293–294.
  60. Shen, C. (2002). Revisiting the relationship between students’ achievement and their self-perceptions: a cross-national analysis based on TIMSS 1999 data. Assessment in Education, 9(2), 161–184.
  61. Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.
  62. Shepard, L. A. (2008). Formative assessment: caveat emptor. In C. A. Dwyer (Ed.), The future of assessment: shaping teaching and learning (pp. 279–303). New York: Erlbaum.
  63. Shipman, M. (1983). Assessment in primary and middle schools. London: Croom Helm.
  64. Swaffield, S. (Ed.). (2008). Unlocking assessment: understanding for reflection and application. London: Routledge.
  65. Torrance, H. (2001). Assessment for learning: developing formative assessment in the classroom. Education 3–13, 29(3), 26–32.
  66. Torrance, H., & Pryor, J. (1998). Investigating formative assessment, teaching, learning and assessment in the classroom. Buckingham: Open University Press.
  67. Torrance, H., & Pryor, J. (2001). Developing formative assessment in the classroom: using action research to explore and modify theory. British Educational Research Journal, 27(5), 615–631.
  68. Webb, M. E., & Jones, J. (2009). Exploring tensions in developing assessment for learning. Assessment in Education: Principles, Policy and Practice, 16(2), 165–184.
  69. Wiliam, D., & Thompson, M. (2008). Integrating assessment with learning: what will it take to make it work? In C. A. Dwyer (Ed.), The future of assessment: shaping teaching and learning (pp. 53–82). New York: Erlbaum.
  70. Williams, D. (2011). Embedded formative assessment. Bloomington: Solution Tree.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Faculty of Education, University of Cambridge, Cambridge, UK