Teacher or Examiner? The Tensions between Formative and Summative Assessment in the Case of Science Coursework
Cite this article as: Gioka, O. Res Sci Educ (2009) 39: 411. doi:10.1007/s11165-008-9086-9
The aim of the reported study was to explore how science teachers made sense of their roles and responsibilities in teaching and assessing science coursework. The focus was on teacher assessment, the feedback that teachers gave to students, and how teachers perceived their role when they taught and assessed students’ science coursework reports. The research methodology included observation of science lessons, collection of marked students’ reports and two interviews with each of the nine participant teachers. Two teachers are considered as representative of the participants, and their perceptions and practices are compared and contrasted. Teachers either adopted the role of the examiner or combined the role of the teacher with that of the examiner. They distinguished marking of science theory exercises from marking of coursework, and teaching theory from teaching investigations, on the basis that the grade they assigned to coursework contributed to the total grade for external exams. A key conclusion is that teaching and assessment of science coursework need to re-focus on learning. The study calls, firstly, for changes in public policy on summative assessment to place more reliance on teachers’ assessments and, secondly, for changes in school practices in formative assessment so that teachers support students to learn in the case of science coursework.
Keywords: Assessment criteria · Feedback · Formative assessment · Science coursework · Summative assessment · Teacher assessment
The present study is motivated by the formative assessment perspective of teaching and learning, as discussed in the literature review by Black and Wiliam (1998), which provided strong evidence of the utility of formative assessment for enhancing the attainment of students and raising standards of achievement. According to Black and Wiliam (1998), the term formative assessment, or ‘assessment for learning’, can be used to describe all those activities undertaken by learners and teachers for the purpose of assisting the learners in finding out where they are in their learning, where they are going, and how to get there. ‘Assessment for learning’ serves its formative function when the information fed back to the learners, and the subsequent activities in which they engage, lead directly to learning. A key role for the teacher in formative assessment is the elicitation of evidence of students’ current attainment, and the provision of feedback (Cowie and Bell 1999) to students in task-involving, rather than ego-involving, terms (Butler 1987, 1988). A key role for the learner is understanding the criteria for quality that will be applied to their work, and being able to assess their progress towards the learning goals. According to the implications of the same review (Black and Wiliam 1998), the essential elements of any strategy to improve learning through implementation of formative assessment would be the setting of clear goals, the design of appropriate learning and assessment tasks, the communication of assessment criteria, the provision of good-quality feedback (oral and written) and opportunities for self- and peer-assessment. This set of guiding principles underlying teachers’ classroom strategies stems from an attention to the social discourse in the classroom and the socio-cultural theory of learning (Gipps 1999, 2002).
The research study undertaken by Cowie and Bell (1999) found that the participant teachers employed two kinds of formative assessment: planned and interactive. Planned formative assessment involved the teachers eliciting and interpreting assessment information and then taking action; it tended to be carried out with the whole class. Interactive assessment involved the teachers in noticing, recognising and responding, and tended to be carried out with individual students or small groups.
Two further reviews published more recently are relevant to the current study. The review of the French literature by Allal and Mottier Lopez (2005) had a particular focus on the concept of “regulation” (how teachers orchestrate learning for and with students). Allal and Mottier Lopez (2005) emphasised not only the importance of feedback, but also how to tailor instruction to the needs of different students and the importance of providing them with skills and tools for self-assessment.
The review by Köller (2005) explored the German literature in educational psychology. This review put emphasis on how students respond to various forms of feedback, a key element in formative assessment. The findings pointed to the greater impact of feedback focused on individual progress towards the learning goals, rather than on comparison with other students.
There is also international evidence that the tensions between ‘assessment for learning’ and assessment for summative purposes are very real and thus, if not reconciled, can have a negative impact on learning and teaching (Harlen and Deakin Crick 2002). The difference between formative and summative assessment lies in the way in which evidence is interpreted and used, not in the nature or mode of collection of that data (Wiliam and Black 1996; Wiliam 2003). Whilst formative assessment must be pursued for its main purpose of feeding back into the learning process, it can also produce information which can be drawn upon for summative purposes (Black 1998). Black and his team suggested the use of summative tests for formative purposes (Black and Wiliam 2003; Black et al. 2003, 2004). For example, external assessments can be used formatively when curriculum revisions are made on the basis of assessment results. Along the same lines, Shavelson et al. argued in favour of an alignment between formative and summative assessment (Shavelson et al. 2002; Shavelson 2003).
From this perspective, the study focused on four formative assessment practices:
communication of learning goals,
communication of assessment criteria,
provision of feedback (oral in classroom dialogue and written in marking) and,
provision of opportunities for self- and peer-assessment.
The focus of this paper will be mainly on the feedback that teachers gave to students—oral (in classroom dialogue) and written (in their science coursework reports)—and on teachers’ perceptions of their roles and responsibilities when they taught and assessed science coursework. This study illuminated and enriched the theoretical considerations by giving a detailed insight into formative assessment practices in science education in the specific area of science investigations.
The Context of the Study and Research Questions
In the English secondary school system, the most common component of practical work in secondary science classes is investigations. An investigation is a practical activity of a fair-test kind that engages students in obtaining data, drawing and interpreting graphs and using them to develop an explanation, reach some conclusions and evaluate the whole activity. Accordingly, the term ‘science coursework’ refers to the report that students write and submit after having carried out a science investigation. Science coursework is intended to assess students’ science process skills: the skills of predicting, planning, carrying out the investigation, taking measurements, analysing, concluding and evaluating the whole activity.
Teacher assessment of coursework represents 20% of the total mark in the national exams (GCSE, the General Certificate of Secondary Education, and A-level, the University entrance exams). The grade that students obtain for science coursework contributes to the final grade for the subject. The grades are used not only for reporting to parents but also to allow the Government to construct ‘league tables’, listing schools in each Local Education Authority (LEA) in order of their results.
However, science coursework is assessed by the teachers themselves rather than by external examiners. This makes the role of the teacher challenging. On the one hand, the teacher has to teach and support the learning process. On the other hand, she has to assess, examine and award a grade to each of her own students.
The choice of the investigation topics was not under the researcher’s control: they were chosen by the teachers. Common investigation topics were: photosynthesis, rates of reactions, osmosis, the effect of temperature and enzyme concentration on enzyme activity, anaerobic respiration and so on. From all the observed GCSE and A-level lessons, the researcher was most interested in the investigation activity, and particularly the lesson units in which students collect, analyse and interpret experimental data. Coursework counted towards the final external exams. The study addressed three research questions:
How did science teachers perceive their role when they taught and assessed science coursework?
What kind of feedback did they give to their students when they taught and assessed science coursework?
Did they use formative assessment practices when they taught and assessed GCSE and A-level science coursework?
The study was conducted over the period of one full school year. It was carried out in four schools in the greater London area, in three different local education authorities. The nine science teachers who participated in the study taught GCSE and A-level science classes. They were well-qualified in the sciences, had substantial experience in teaching and assessing GCSE and A-level coursework and had attended in-service courses related to it.
The four participant schools, in which the study was carried out, were representative of a range of types of schools. Students in the observed classes came from a wide variety of social and ethnic backgrounds and were mainly from middle- and low-income families. Pseudonyms for teachers, students and schools are used throughout the paper.
The study was interpretive; the content of interest was the perspectives and actions of the participants within a socio-cultural context (Erickson 1998; Lincoln and Guba 1985; Miles and Huberman 1994). It was a small-scale classroom-based piece of research and exploratory in nature. The qualitative nature of the study with its ethnographic aspects involved the documentation of what actually went on in the teaching process, through first-hand observation and in-depth interviews with the participant teachers in a classroom situation.
A total of 210 one-hour science lessons that included investigations were observed throughout one year, evenly distributed across the teachers. The aim was to obtain information about how teachers addressed students’ weaknesses and difficulties in science coursework. The study was conducted as part of regular classroom instruction. Teachers were not asked to change their usual teaching or adopt any intervention methods; instead, they were observed over the course of their ordinary science classes teaching scientific theory, investigations and coursework. This piece of research was therefore a naturalistic study, set in the natural setting of everyday classroom teaching. The researcher did not interact with the teacher or the students; she was simply a non-participant observer. Audio recording was carried out on a regular basis in each class: each teacher wore a tape-recorder and a clip-on microphone that picked up the teacher’s voice and any student talking nearby. All recordings were transcribed in full. The teachers validated the observation notes and read the narrative describing their teaching.
Collection of Marked Students’ Science Reports
Marked students’ coursework reports were collected to look at teachers’ marking and the feedback teachers gave to their students.
Pre-observation and Post-observation Interviews with Teachers
Two in-depth interviews were carried out with each teacher to examine how they perceived their role in teaching investigations, assessing science reports and helping students improve their reports; one interview took place before or around the beginning of the classroom observations, and one on completion of the observations. The aim in both interviews was to get teachers to talk about their perceptions of their role, their practices and the reasons behind those practices in their own terms. Teachers’ perceptions of how they should help their students improve science coursework, as reported in the interviews, enabled interpretation of teachers’ actual goals and the specific strategies that occurred in the teaching process.
In particular, in the pre-observation interview teachers were asked how they taught, assessed and marked students’ achievement (the strategies they used) and, specifically, science coursework. Some questions from the two interview protocols are presented in the Appendix. Here, for the specific purpose of this paper, I want to emphasise that teachers were asked whether they supported and encouraged students to talk to them, whether they communicated assessment criteria to the class, and whether they let students talk to and help each other while working in the laboratory.
On completion of the observation period, in the post-observation interview, teachers were presented with ‘concrete situations’: samples of students’ coursework, in which pseudonyms were used to preserve students’ anonymity. These were samples of GCSE and A-level coursework of low and average attainment, and they served as the basis for discussion during the interviews. The samples were selected from the four investigation topics most commonly taught in GCSE and A-level lessons. Students’ mistakes were retained, even spelling mistakes.
The post-observation interviews were structured. The aim was to present teachers with real situations, get them to comment on the quality of the work and reflect on what they would do with the individual students and their classes where major problems arose. Teachers were also asked what they would do in the case that students did not make the progress that they would wish or expect. This question is of great importance for this research study since, from the ‘assessment for learning’ perspective, assessment is an on-going process that informs teachers about students’ learning and should then inform subsequent planning and teaching.
Teachers’ strategies were considered at two levels:
the individual level (how teachers supported students individually to improve their work) and,
the classroom level (how teachers addressed problems and difficulties that the majority of students in a class experienced) (Cowie and Bell 1999).
What sort of advice would you give to Margaret to help her improve the analysis and explanation of her report?
What would you do to help Paul improve the evaluation section?
(Margaret and Paul were the pseudonyms of students whose reports were used in the interview).
If most students did this analysis (like Emma or Margaret) how would you follow this up?
How would you plan to teach a lesson to help children like Emma and Margaret improve their analysis of results?
If most students did this (as Paul did) how would you deal with this?
Again, Emma, Margaret and Paul were students’ pseudonyms. The intention here was to get teachers to suggest how, or better, with what specific strategies and tasks, they would support students to improve. Very often, discussion in the post-observation interview related these strategies back to incidents in the observed lessons in order to illustrate and thus better understand the teachers’ responses.
All the interviews were audio taped and transcribed verbatim. The participant teachers checked the interview transcript for accuracy; they made corrections or added some points prior to analysis. The reliability of data from interviews increases when the informants are asked to check the interview transcripts (Guba and Lincoln 1989).
The process of analysis of the teacher interviews and the transcripts of observed lessons was characterised by action at different levels of detail, beginning with broad groupings, then identifying finer aspects of the data and sorting them into more specific categories. Thus, categories for analysis were not set a priori, but emerged from and were grounded in the data to identify potential themes for each participant (Miles and Huberman 1994). The qualitative data in the lesson observation transcripts were analysed using a grounded theory approach (Glaser and Strauss 1967). The field notes and the interview transcripts were analysed through the process of analytical induction to find patterns in the teachers’ responses to the interview questions. To uncover these patterns, an iterative procedure was used: the continual examination and re-examination of the data to identify themes or patterns, which were then verified. The analytical process involved extensive listening to the audio-tapes and reading and re-reading of the transcripts to gain a better sense of teachers’ practices. From the transcripts, an initial set of categories was generated. This involved taking teacher responses in the interview transcripts and grouping similar statements together by describing what they had in common. The initial categories were then reviewed and revised, with the teachers’ responses categorised accordingly. When teachers’ responses did not fit into the emergent themes, a new category was defined. In addition, a second researcher reviewed the transcripts, discussed the comments and agreed upon the developed categories. The whole process included looking within and across teachers. To avoid misinterpretation of the results, it should be noted that the findings represent general trends for which the amount of supporting data substantially exceeds the amount of refuting data.
As themes and patterns began to emerge, data from the lesson observations, the two interviews and the collection of students’ marked reports were compared and contrasted to support, further elaborate or suggest contradictions between what teachers said and what they actually did in the classroom. Triangulation was used throughout the analysis across data sources for each of the teachers (Lincoln and Guba 1985). Triangulation is not seen as generating a unique knowledge of pure facts and truths, but as ensuring that the constructed knowledge constitutes a co-ordination of data and interpretation (Muralidhar 1993). The fact that data were provided by different methods (interviews, observations and collection of marked students’ work) facilitated a more valid and reliable interpretation of the collected research evidence. These multiple data sources and methods of data collection allowed alternative perspectives to build and shape the researcher’s interpretations. Excerpts from the lesson transcripts and quotations from the interviews have been presented to illustrate the points made. Underlined words or sentences stand for words emphasised by teachers in the interviews or in the observed lessons. The interviews elicited statements that clarified what was happening in these classrooms and provided further evidence supporting or diverging from the patterns noted during classroom observations.
The following section reports the research findings. Because of the limited length of the paper, it is not possible to report on each teacher separately and in the same detail. Two teachers—Michael and John—are reported as representative of the participants, and their perceptions are compared and contrasted. This comparison highlights strong differences between the practices of the two teachers. The strategies identified were also evident among the other participants, and the two case studies below may be seen to illustrate both the types of strategies and the extent of the differences found in the sample as a whole. More cases can be found in Gioka (2005).
Michael: “I help them but I cannot for ever be giving them feedback [...] because that would affect the way I am marking their coursework, then, if I give them excessive amount of help... it is not fair!...”
Perceptions of the Role of the Teacher
You show them the mark scheme written by the examiner. ‘This is what you need for perfect marks’. Because, of course in this, you are actually limited in the amount of help you can give them. You’ve got to be a lot less helpful than you are usually prepared to be. I am saying to them only a few words (from the first interview).
I would talk to them in general terms, first of all, about the sort of level of work they would need to do in order for them to achieve the highest possible marks. And, the aim is your students to do well in the exams and get a good mark (first interview).
The focus of teaching was on completing the coursework in a way that allowed students to succeed in their A-level exams and obtain a high grade. To achieve this goal, students needed to become familiar with the exam board requirements and criteria. On the other hand, there was always the need and pressure to cover the syllabus in a limited period of time. The teacher did not show students any exemplar material, as he suspected they might copy from each other. When students worked on a piece of coursework individually or in pairs, Michael did not give them formative oral feedback; rather, he would tell students if they were right or wrong and what they needed to correct. The common question he raised with students was: “How could you fit in with the criteria that the exam board wants?”
Marking and Feedback
When Michael talked about marking, he distinguished two different strands: marking class- and homework, and marking coursework. In the first case, marking of class and homework, including test and exam questions, was simple and straightforward. When a single-word answer was required, it was most likely that there was only one “right” answer, and he corrected answers by putting a tick or a cross. When longer answers were required, there was usually a maximum mark for each question, and it was the teacher who allocated it. In the case of coursework, he said that he marked it against the exam board criteria and gave students only a grade. As far as feedback was concerned, Michael said that he did not give any feedback on coursework because he was not allowed to: “It is not fair to help students improve their coursework”. Instead, he gave comments on homework by writing: “Well done”, “Very good”, “Good”, “Be careful, you do this way”, “Do a few more of these problems and then you’ll be OK”, or “Go to your book and do another two problems which I’ve not given you”. However, Michael emphasised that it was not always possible for him to mark homework very often and give extensive comments, because “marking is time-consuming and much workload”.
He reported that he did not let students mark their own or their peers’ coursework. Instead, he gave students the opportunity to mark end-of-unit tests or simpler exercises, which had blanks to be filled in and usually asked for one “right” answer. The main reason he gave for this practice was that students undervalued self- and peer-assessment and gave more importance and value to marks given by the teacher. In addition, marking GCSE and A-level coursework was a “demanding task” and, secondly, coursework “counted” in the overall mark in the exams. What Michael did, therefore, was to remove the marks from previous years’ students’ coursework and let his current students mark it using the exam board criteria. He commented that his students were very good at it.
Strategies to Help Students Improve
If I did get back coursework from my students, which were like Paul’s, I mean, I cannot really advise them so much. I am not supposed to tell them what to do. Obviously, if they have problems I would try and help them but at the end of the day cannot for ever, be giving them feedback in terms of how to improve their coursework, because that would affect the way I am marking their coursework, then, if I give them excessive amount of help... it is not fair!... (post-observation interview).
With two further questions, Michael was asked how he would have supported Emma, Victoria and Margaret (students) on a personal basis to help them improve their analysis and conclusions sections. He responded by indicating what was missing in each analysis and what each student had to “correct”, “change”, “add” or even “do” all over again from the beginning. Thus, Michael did not refer to the instructional strategies and methods that he would have used as a teacher. He only commented on how good or poor each analysis was and suggested what had to be done by each of the three students. With regard to his strategies in response to unexpected or poor student progress, Michael said that he would have discussed any problems that arose with parents, communicated problems in the report for parents, or had an informal discussion with the particular students.
Michael, along with six other teachers, was not specific when asked how they would have helped particular students. Or, at least, they were not happy to talk about how they would have planned and delivered lessons to support students on a whole-class basis or individually. They restricted themselves to talking on the basis of the exam board criteria and commenting on what needed to be “included” and “added” in each section of coursework in order to fulfil the criteria and follow the exam requirements. They did not talk about their instructional strategies in relation to the wide range of attainment in some classes, or as it emerged from their marking. Nor did they talk about guidance they would have provided on how a particular piece of work could improve. What is more worrying is that some teachers said that unanticipated learning or performance would never have occurred, so they did not talk about unexpected outcomes. Perhaps this was simply because individual support was missing from their classroom practices.
John: “The Way I Teach, to be Honest with You, I Play Like a Game”
Perceptions of the Role of the Teacher
‘I would like your attention. I am going to talk about enzymes. The examiners are fussy about enzymes.
You need to follow this in order to get a high mark (A, B grades) in GCSE’.
And, ‘You must use this term. This is what they want in the exams. The examiners are fan of this’.
Despite the above emphasis on exam preparation, John created a supportive environment to provide the necessary preparation for students to succeed in their exams. In the following paragraphs, some of his most common strategies are presented.
Strategies to Help Students Improve
When John started a new topic, he introduced his students to the relevant background theory. Through a whole-class introductory discussion, students prepared to start the preliminary experiment: they decided on their plan and made a prediction supported by the relevant theory. They then carried out the preliminary experiment, did the write-up and handed in a draft version to the teacher. In this way, John had the opportunity to identify major difficulties that had to be dealt with before students carried on with the main investigation. John gave back the preliminary piece of work with specific comments and guidance for changes or improvement. More interestingly, he wrote questions in the margins on both sides of the coursework in order to encourage students to think about how to make corrections. Students had to make use of the comments, work on the teacher’s feedback and then carry on with the main experiment. The teacher also assisted students by discussing, step by step, parts of the investigation and the report. When John gave feedback in between the different parts of coursework, he did not give any marks. The final piece of work was the coursework report, and it was assessed against the exam board criteria; for this final piece, students were given only marks. There was not much opportunity for students to receive help from the teacher when they carried out the main investigation, as the coursework was going to “count” in their final mark in the exams. It is worth noting that John, quite similarly to Michael, distinguished teaching coursework from teaching theory: when he taught coursework, his support as a teacher was limited, since the report was going to “count” in the final grade in the GCSE or A-level exams. However, in contrast to Michael, John gave much importance to the preliminary work, and he gave students time during lunch breaks in order to help them with the difficulties they experienced.
At lunch breaks he had the opportunity to talk to students about their work individually. When students did the main investigation, the teacher moved around in order to provide limited help and support, but also to assess students by using an observation checklist.
In contrast to Michael, John said that he liked to, and did, talk a lot during coursework teaching because students needed guidance. He emphasised the need to talk in order to show students certain ways to carry out an investigation task. By using questioning strategies, he aimed to guide them in the right direction and get them to think about how to improve. Lesson observations showed that his oral feedback to students was formative to a great extent. His strategy was to provide oral feedback on both an individual and a whole-class basis. John gave oral feedback to students in the form of questions in order to support them with particular difficulties. It was always focused on the task, showing students what they had to do to improve a piece of work. For example, on an individual basis, he asked two students the following questions: “Can you explain the anomalous results?” and “How can you use this graph in your analysis?” (from a lesson observation transcript) to help them use graphs and analyse their data. On other occasions, John addressed the whole class and asked the following questions to help his students with the explanation and conclusions: “OK. All right. You’ve got these data, as a class. What kind of things can we do with it? What’s the most important graph to plot?” With these questions the teacher tried, firstly, to get students to think about how to improve their answers and, secondly, to get them to relate their work to the intended quality and assessment criteria. Overall, John’s oral feedback and the particular questions he posed to students were based on the cognitive demands of an investigation activity. In addition, with his oral feedback in the form of questions, he tried to ask for a full explanation in order to explore students’ thinking or help them improve their reasoning.
John said that self- and peer-assessment were not used extensively because they were very time-consuming and led to disputes among students. An additional problem he mentioned was that students did not concentrate appropriately on them, and it was difficult for the teacher to keep all students moving through at the same pace. However, the teacher said that students assessed their partner’s work when it was a simple test of questions with straightforward answers, like multiple-choice questions. This was done after the teacher had already allocated the maximum marks to each question.
In the post-observation interview, when John was presented with a piece of coursework and asked with which strategies he would have supported students with attainment similar to Paul’s, he talked extensively about his instructional strategies. He said that if the majority of students’ reports were of low attainment, he would have decided to teach a lesson on how to write a piece of coursework. In such lessons, the teacher modelled the process of writing a report section by section. After the modelling, and while students practised, the teacher guided and helped them with their writing. The teacher withdrew his support as their reports improved and students’ confidence increased. However, once more, John underlined that he was not allowed to give much help to students in coursework teaching. John also pointed out that, when he taught A-level coursework, time was always limited for planning additional lessons and practice.
With two further questions, John was asked how he would support three particular students (Emma, Victoria and Margaret), whose coursework he was shown, to improve certain sections of their reports. He then talked about his instructional methods and the judgements he had made, based on his knowledge of the students and their abilities, as well as the demands of the particular investigation activity. For example, John said that he would prompt Victoria by using two main strategies: good questioning and oral feedback. Talking about Margaret, whose report was much poorer, he reported that he would support and guide such students with questions specific to the task (i.e. what needed to improve). With his questions, John would guide students to improve some sections or the whole piece of coursework. He would talk to students individually; he would sit with them and discuss the different coursework sections sentence by sentence. It is worth noting that when John assisted students on an individual basis, he always took into consideration the particular student’s “potential”: how far each student could improve and achieve. More importantly, with regard to low-achieving students, John said that they would need guidance from an earlier stage with the investigations, the structure and the writing of the report. He said that he would encourage his students to develop writing skills and support them to improve. John talked about a wide range of strategies he had employed. Firstly, he modelled the quality of the write-up of coursework, showing his students examples of good analysis and conclusions: what a good analysis of evidence had to include. Secondly, he organised whole-class discussions in which students had to comment on a piece of analysis and judge whether it was good or not and why; for this purpose the teacher read out analyses from previous years’ students’ coursework.
Discussions about the criteria and the required attainment helped students internalise what they were expected to achieve. John argued that teachers should explicitly teach and model how the analysis needed to be done. He explicitly instructed students in the exact order in which their ideas should be presented in the analysis. Explicit teaching also had to address the quality of each section of the report.
John claimed that the whole issue of assisting students to improve the analysis section, or the whole report, required directed teaching in small steps. He placed much importance on the development of good writing skills, so that students would become able to communicate their thoughts as clearly as possible. In his teaching, he emphasised that the lab report should have a clear structure; a clear structure, in turn, helps students check both the quality of their coursework and their own progress. He paid personal attention to students’ difficulties and learning needs by asking them questions, and students wrote down notes and thoughts during these individual discussions with the teacher. Furthermore, when John talked about the sort of support he would give to Margaret and students with similar achievement, he underlined the bigger problem of making judgements as a teacher, in an attempt to explain the reasons behind certain students’ attainment and their difficulties with certain sections of the coursework reports. In doing so, the teacher would usually end up with two groups of different attainment within the same class: students who did well in coursework and students who experienced difficulties. In making decisions about how to deal with the two groups, John emphasised that there had to be flexibility and much thought in deciding how he, as the teacher, would proceed and what he would do “next”: “... the judgement and the balance between each child about this standard of work. Why is it like this?” (second interview).
So, there is the awkward decision about: “Do I devote more lesson time to covering a point that the vast majority of the class has not had opportunities to learn or move on to the next topic?” (post-observation interview).
Marking and Feedback
For the analysis of results: “What did you expect to have in the rate?” “How can you use this graph?”
For the evaluation of the investigation activity: “How could you do it better? How could you fit in with the criteria that the exam board want?”
Thus, the teacher’s feedback was specific to the task, giving clear guidance on what was missing and on what each student needed to do to improve.
Discussion and Conclusions
The two teachers were seen to differ in their perceptions of their role, in their marking and feedback practices, and in the strategies they employed to help students improve. Similar varying positions and contrasting strategies were also found among the other participant teachers. All the participant teachers distinguished coursework teaching from teaching theory (in their own words, “usual” science teaching). They also distinguished coursework marking from marking of theory (in their own words, “usual” science work). This distinction was made on the basis that the grade they assigned to coursework “counts” towards the total grade in external exams. The evidence we found for teachers’ perceptions of their role, as well as their strategies related to the teaching and assessment of coursework, suggests very strongly that two underlying approaches can be discerned. Teachers either adopted the role of the examiner, or combined the roles of teacher and examiner. In the latter case, they tried to reconcile their responsibilities as teachers (to facilitate and support the learning process) with their responsibilities as examiners and assessors of their own students’ coursework.
Thus, for Michael and six more teachers (out of the total of nine), the focus of teaching and assessing coursework was on the preparation of students for the national exams. Michael rejected the role of the teacher in favour of that of an examiner. Because he strictly followed the exam board regulations and criteria, he believed he was not allowed to give feedback or help students improve their coursework. Central to his teaching and assessment practices was the notion of “fairness”: the grade for coursework contributes to the total science grade and, therefore, his help and feedback had to be limited so that he was fair to all students. Perhaps the most striking aspect of the evidence from the current study was that Michael took the role of examiner for granted, without questioning himself or experiencing any conflict between the two roles of teacher and examiner. When he taught or examined “usual science” (his own term) he adopted the role of the teacher, whereas in the case of coursework, his role was to examine and assess students’ work. He took up the role of the examiner unproblematically by following the rules, regulations and constraints related to the teaching and assessment of science investigations. Thus, there was no room in his teaching for any formative assessment practices. Believing himself not allowed to do so, he gave no feedback, no opportunities for self- and peer-assessment, and no chances to improve the coursework report. As a result, his students missed out on many learning experiences during the actual investigation and the write-up of the report, and the benefits of good formative assessment in terms of improving students’ learning were lost.
In contrast, John and one more teacher (Scott) clearly took on the role of the teacher, according to which the teacher had to teach and support the learning process. They employed a wide variety of strategies when teaching and assessing coursework. More crucially, John gave formative feedback and asked questions (orally, or in the form of written comments on students’ reports) showing students what they needed to do to improve. For John, the tension and dilemma between good teaching and “fair” assessment existed, but he reconciled the two in an attempt to play the “game” (his own term) and help students both learn and succeed in the exams. As a result, these two teachers (John and Scott) perceived a different role for themselves and took on different responsibilities. However, we do not have enough evidence to say whether this approach stemmed from their own initiative or from school or science department policy. Another study, looking at the effects of teacher assessment in the subject of Design and Technology in England and Wales, reported on the frustrations and dilemmas teachers experienced with regard to their conflicting roles as assessors and instructors (Paechter 1995).
One can see here the impact of the externally imposed curriculum and testing: the external summative exams dominate and distort the teaching and assessment of science coursework. Only a few of the participant teachers implemented some elements of formative assessment practice. All teachers reported that they would not give students the opportunity to assess their own or their peers’ coursework. Most teachers were unable to articulate strategies with which they, as teachers, would help students improve coursework or address problems at the whole-class level. Rather, for most of them, teaching and assessment were two separate ‘events’. There was a belief, grounded in the regulations, that since this was an examination, students could not be taught anything while working on the task; talk, help and feedback were limited.
Also, they did not see coursework as a piece of work under development, on a route of progression and improvement. Instead, they considered it a ‘final product’—an end in itself—to which they assigned a grade. In this way, assessment seemed to be separated from further planning and teaching. This approach is mainly due to the fact that coursework “counts” in the final grade for GCSE and A-levels, and thus teachers took for granted that they were not allowed to help, intervene or suggest ways for improvement. Only two teachers—John and Scott—talked about the possibility of improving draft reports. These two teachers, while not rejecting the examiner role, also took on the role of the teacher. While keeping to the exam board regulations and requirements, they saw their role and responsibilities as supporting and promoting student learning. For these two, as clearly demonstrated in the observed lessons and articulated in the interviews, ‘assessment for learning’ and summative assessment did go together to support learning. That is, the same piece of students’ work, here the coursework report, was made to serve both formative and summative assessment purposes. They gave more constructive feedback, that is, feedback showing how work can improve, informed by the learning goals, the assessment criteria and the intended quality. John’s marking was constructive since he started by providing students with information about what they were doing well, followed by suggestions for improvement and specific guidance on where corrections were needed and how to go about them. In the analysis section, he aimed to guide students and support their writing by giving very specific comments on the required content and components of the analysis. The two teachers also gave students time to read and act upon comments on marked work, by re-working it and handing it in again.
The two teachers were formative in their approach because they would find out what students knew or did not know and then allocate time and effort to the areas that required improvement, rather than simply ensuring coverage of a topic. Formative assessment was implemented by providing lesson time for students to improve their first attempt in the supportive environment of the class, with the teacher at hand to check their progress. In these ways, coursework teaching and assessment situations can become opportunities for learning, rather than activities divorced from learning (Black et al. 2003, 2004). They can provide considerable scope for formative assessment and the promotion of student learning.
A key conclusion of this study is that the teaching and assessment of science coursework need to re-focus on learning. Teaching is about helping students to learn and improve. This conclusion is in line with Torrance (2007) and his team’s conclusion (Torrance et al. 2005) that the practice of assessment has moved from assessment of learning, through assessment for learning, to assessment as learning, with assessment procedures and practices coming completely to dominate the learning experience. Furthermore, teaching through formative assessment practices is about good interactions, mainly through good quality feedback and opportunities for self- and peer-assessment. The balance between ‘being fair’ and providing the kind of feedback that leads students to improve should be a focus for considered attention and an issue of professional judgement for teachers. However, this piece of research shows that this was often not the case. One reason for this may relate to teachers’ individual professional competence. It can be argued that making judgements about students’ achievement and, subsequently, about how to support students to improve is a matter of professional judgement that has to be informed by teachers’ ‘pedagogical content knowledge’ (Shulman 1986). To make effective use of formative assessment, the teacher needs the flexibility to respond to student needs as they emerge from a particular investigation or piece of work. Experience is crucial to having both the knowledge and the confidence to work in this way (Torrance and Pryor 1995, 1998; Marshall and Drummond 2006). This responsiveness makes teachers’ decision-making very demanding: “To pitch feedback at the right level to stimulate and help is a delicate skill” (Black and Harrison 2000, p. 38). We need teachers who are competent and confident in their own professional judgements.
The Centre for Educational Research and Innovation (CERI 2005) at the Organisation for Economic Co-operation and Development (OECD) is currently engaged in a study of exemplary formative assessment practice (Looney 2007). In addition, the King’s Medway Oxfordshire Formative Assessment Project (KMOFAP) was designed to support secondary teachers in developing formative assessment practices in the UK (Wiliam et al. 2004). Moreland and her team reported on the development of formative assessment practices among primary teachers in the subject of technology (Moreland et al. 2001).
... it is ironic [that] the GCSE and its predecessors seem to command public and political confidence where newer approaches do not, and amazing that external SATs are seen to command trust and support while teacher assessment has to fight for comfortable recognition (Black 1990, p. 26).
Black argued that teacher assessment demonstrates greater validity than any external test can achieve. Along the same lines, a recent publication by the National Research Council on testing and assessment in the US reported: “The balance of mandates and resources should be shifted from an emphasis on external forms of assessment to an increased emphasis on classroom formative assessment designed to assist learning” (Pellegrino et al. 2001, p. 310). If one considers the very nature of practical work in science, it can be argued that only teacher assessment can secure the validity of the examination. Achievement in experimental science cannot be effectively assessed by large-scale external assessment. To assess students’ investigative and writing skills, and how coherently students present ideas in coursework, one has to give more importance and value to teacher assessment.
I am grateful to the students, teachers and their departments for participating in the study. I would like to thank the two anonymous reviewers for their suggested revisions to the first manuscript. I am particularly grateful to Professor Stephen Lerman for supervising the study and to the Greek State Scholarships Foundation (IKY) for funding it.