This sequential mixed methods study took place during the last year of a five-year grant awarded to our institution to validate the effectiveness of a cognitive-strategies approach to writing instruction in partnership with Norwalk-La Mirada Unified School District (NLMUSD) and three other school districts in California. The previous four years were spent designing and conducting a randomized control trial involving the districts’ grade 7 to 12 students. During the year we conducted this study (2017–2018), NLMUSD alone requested that its grade 6 teachers be provided with the same professional development in an effort to institutionalize and scale up the intervention (Olson et al., 2019). NLMUSD is a large, urban school district that serves 80% Hispanic students, 8% White students, 7% Asian students, 3% African American students, and 2% students of other ethnicities. Additionally, 61% of its students are English Only students, 17% are English Learners, 16% are Reclassified Fluent English Proficient students, and 6% are Initially English Proficient. About 75% of the district’s students participate in the federal Reduced Price Lunch program. Participating teachers and students were recruited from NLMUSD specifically because the other three school districts institutionalized the intervention in other ways. Teachers in this grade 6 cohort all received the same intervention as previous cohorts of teachers. However, in addition to testing the efficacy of the teacher intervention with all teachers and students, we were also interested in testing a student intervention that we hypothesized would influence students’ self-efficacy as writers. The focus of our student intervention was thus at a different level, and with a different grade level, than that of the larger RCT study.
We first collected quantitative data on students and then followed with a qualitative component to understand what may have contributed to students’ self-efficacy while revising.
Teacher and student participants
This cohort consisted of 13 participating grade 6 teachers, each with one focal class; approximately 401 students were part of this cohort. All teachers participated in our professional development intervention; the student intervention component differed between randomly assigned groups. One teacher declined to participate in the random assignment, an 8% teacher attrition rate that left 12 teachers to be randomized. Of the students, 131 were in the treatment group and 83 were in the comparison group. Across both groups, 52% of the students were female, 76% were Hispanic, and 62% were Redesignated English Learners, a percentage much higher than the overall district demographics because focal classes with higher percentages of ELs and RFEPs were selected for all teachers (treatment and comparison). The Self-Efficacy in Writing (SEW) means at baseline for the two groups were not statistically different (Mtx = 3.61; Mc = 3.56).
Professional development intervention for teachers
To distinguish between the grade 7 to 12 study and this sub-study of grade 6 teachers, we first describe the professional development program, since all teachers in this study attended the same PD and were trained together. We then explain what the “treatment” teachers in our self-efficacy intervention did above and beyond the PD all teachers attended, to account for differences in student outcomes. Participating teachers attended 46 h of professional development throughout the school year, consisting of six full-day meetings and five after-school meetings.
The professional development intervention is informed by cognitive, sociocognitive, and sociocultural theory. In their cognitive process theory of writing, Flower and Hayes (1981) posit that writing is best understood “as a set of distinct thinking processes which writers orchestrate and organize during the act of composing” (p. 375), including planning, organizing, goal setting, translating, monitoring, reviewing, evaluating, and revising. They liken these processes to a “writer’s tool kit” (p. 385), which is not constrained by any fixed order or series of stages.
In describing the difficulty of composing written texts, Flower and Hayes (1980) aptly conceptualized writers as simultaneously juggling “a number of demands being made on conscious attention” (p. 32). While all learners face similar cognitive, linguistic, communicative, contextual, and textual constraints when learning to write (Frederiksen & Dominic, 1981), the difficulties younger, inexperienced, and underprepared students face are magnified. For these students, juggling constraints can cause cognitive overload. For example, ELs are often cognitively overloaded, especially in mainstreamed classrooms where they are held to the same performance standards as native English speakers (Short & Fitzsimmons, 2007).
Graham (2018) has pointed out that “available cognitive models mostly ignore cultural, social, political, and historical influences on writing development” (p. 272). He asserts that writing is “inherently a social activity, situated within a specific context” (p. 273). This view echoes Langer (1991) who, drawing on Vygotsky (1986), suggests that literacy is the ability to think and reason like a literate person within a particular society. In other words, literacy is culture specific and meaning is socially constructed. From a sociocognitive perspective, teachers should pay more attention to the social purposes to which literacy skills are applied, and should go beyond delivering lessons on content to impart strategies for thinking necessary to complete literacy tasks, first with guidance and, ultimately, independently.
Finally, sociocultural theory views meaning as being “negotiated at the intersection of individuals, culture, and activity” (Englert et al., 2006, p. 208). Three tenets of sociocultural theory are applicable to the intervention (Adapted from Englert et al., 2006): (1) Cognitive apprenticeships: in which novices learn literate behaviors through the repeated modeling of more mature, experienced adults or peers to provide access to strategies and tools demonstrated by successful readers and writers (Vygotsky, 1986). (2) Procedural facilitators and tools: where teachers are most effective when they lead cognitive development in advance of what students can accomplish alone by presenting challenging material along with procedural and facilitative tools to help readers and writers address those cognitive challenges. (3) Community of practice: the establishment of communities of practice in which teachers actively encourage students to collaborate and provide ongoing opportunities and thoughtful activities that invite students to engage in shared inquiry.
The central core of the PD is the use of cognitive strategies to support all students in reading and writing about complex text. Cognitive strategies are conceptual tools and processes that can help students become more meta-cognitive about their work. The following are the cognitive strategies introduced in the PD:
Planning and Goal Setting, Tapping Prior Knowledge, Asking Questions and Making Predictions, Constructing the Gist, Monitoring, Revising Meaning, Reflecting and Relating, and Evaluating. Some sub-components are: Visualizing, Making Connections, Summarizing, Adopting an Alignment, Forming Interpretations, Analyzing Author’s Craft, and Clarifying Understanding (Olson, 2011, p. 23)
The primary intent of the professional development is to provide teachers with lessons and materials to introduce the cognitive strategies to students toward the intended goal of improving students’ analytical essays about either fiction or non-fiction texts.
Teachers also learned specific writing strategies to help students revise their writing. To avoid “teaching to the test,” teachers use a different text, similar in topic to the one used for the writing assessment, as a training tool to model how to revise the pre-test into a multiple-draft essay. Through a series of mini-lessons, students are taught a variety of skills by examining a mentor text/essay based on the training text. Students first read the training text using the aforementioned 15 cognitive strategies. Then they are given a writing prompt similar to the one they used on the writing assessment. Students dissect this prompt by filling out a Do/What Chart, which instructs them to circle all of the verbs (Do) and underline all of the task words (What) in the prompt and transfer the verbs and task words onto a T-chart to help them understand what they are being asked to do (for example, “Select one important theme and create a theme statement.”). Then students are given a mentor text/essay addressing the prompt they just dissected. This mentor text/essay is analyzed for the moves the writer makes, particularly in how he or she constructed the introduction, body paragraphs, and conclusion.
When working with the introduction, students are taught the HoT S-C Team (Hook/TAG/Story-Conflict/Thesis) acronym. Students learn that a writer often starts with an engaging hook (a quote, a question or statement to make people think, a fact, or even an anecdote); then identifies the title, author, and genre (TAG) of the text being written about to set the context for writing; adds purposeful summary of the story or conflict; and includes a thesis (claim).
Each component of the mentor text is color-coded using yellow (for summary sentences), green (for textual evidence), and blue (for student commentary) to help students understand that a balance of purposeful summary, textual evidence, and commentary is important when constructing an analytical essay. Students are also taught about grammar brushstrokes (Noden, 2011), such as adding adjectives out of order, appositives, or active verbs, and are encouraged to revise some of their sentences with these brushstrokes to enhance sentence variety.
One of the essential activities in this intervention is to have teachers help students revise their on-demand writing samples into a more polished analytical essay after these writing samples have been read and commented on by trained readers. It is during this part of the main study that our team decided to conduct the sub-study on student self-efficacy. Given that all teachers received the same professional development and, in turn, taught the same revision strategies to their students, the only difference we tested was asking the treatment students to use the Pre-Test Essay Revision Planner and Revised Pre-Test Reflection form. This self-assessment, planning and goal setting, and reflection strategy is aligned with Hayes’ (2012) control level of writing, which involves student self-efficacy. A more detailed explanation of this new intervention strategy follows.
Student intervention: pre-test revision planner and revised pre-test reflection form
In prior studies of our intervention, we have routinely asked teachers to analyze their students’ pre-tests as a formative assessment and to fill out their own reflection planner regarding their students’ strengths and areas needing growth as a tool to inform instruction. After reading about how much student self-efficacy influences writing outcomes (Bruning et al., 2013), we wondered whether having students assess their own strengths and areas for improvement as writers, and fill out a reflection similar to the one their teachers created, would lead to better writing outcomes. With the consent of teachers participating in the intervention, we randomized the teachers’ classes into two groups. The comparison group received instruction from their teacher on how to revise their pre-test essays and were provided with comments from a trained reader. The treatment group not only received instruction from the teacher and comments from a trained reader, but also conducted a self-assessment of their work and filled out the Pre-Test Essay Revision Planner and Revised Pre-Test Reflection form to describe their process and assess the quality of their product after revising. Since all students, treatment and comparison, participated in the same intervention and were taught the same strategies, this study tests the impact of the Pre-Test Essay Revision Planner and Revised Pre-Test Reflection form on students’ self-efficacy and writing quality. To promote treatment fidelity, all teachers, treatment and comparison, were required to submit their students’ revised pre-tests to the intervention developers in order to receive their stipend for participating in the year-long study. Treatment teachers were also required to submit the Pre-Test Essay Revision Planner.
To elaborate, the process we took treatment students through involved two steps. The first part of the Pre-Test Essay Revision Planner (see Appendix 1) asked students to self-assess what they did effectively as writers on their essay and what they might have struggled with on the writing task. They were then asked to set goals for revision in bulleted form, weighing suggestions by the trained readers who commented on their papers. These were action steps the student proposed to take when revising his or her essay. After they completed their revisions, students reflected on what changes they made, what they were most proud of, and what their teacher did to help them reach their revision goals, using the Revised Pre-Test Reflection form. In the comparison condition, students revised their pretests without the use of a planner, keeping everything else equal.
Sample student pre-test, revision planner, and revised pre-test
This section illustrates the multi-faceted components of the intervention. We start by examining a student’s pre-test with commentary from an experienced reader, then her revision planner, next her revised pre-test, and finally her self-assessment and reflection and consider how these components affect a student’s self-efficacy in writing.
The prompt asked the student to analyze Virginia Driving Hawk Sneve’s short story “The Medicine Bag” for its theme as exhibited through the evolving relationship between the narrator and his great-grandfather, who visits him unexpectedly, and through the symbolism of the gift the great-grandfather leaves the narrator prior to his passing (Fig. 1):
The student’s attempt at the on-demand essay consists almost exclusively of summary, indicating that her command of analytical writing is still developing. The student starts the analysis with “In the beginning of the story…” followed by a long summary of the plot and puts forth the claim “this proves that Martin is embaress [sic] of his grandpa…” While this is not a theme statement, it does indicate the writer’s understanding of the text. The trained reader also notes the writer’s recognition that the character changes over time and encourages her to focus on the author’s message or lesson when she revises (Fig. 2).
The comments the student received from the trained reader focused revision on connecting commentary to textual evidence, developing a theme statement, and the role symbols play in the story. These types of comments are quite typical of the responses many students in this study received from our trained readers. After teachers received and reviewed these comments, they passed the papers back to their students, and treatment teachers had students fill out the Pre-Test Essay Revision Planner (see Fig. 3). We conjecture that this opportunity to self-assess may have helped her persist through the revision process more than her comparison peers, who relied only on the feedback they were given, with no reflection or goal setting (Bruning & Kauffman, 2016).
In her Pre-Test Essay Revision Planner, the student first focused on the strengths of her essay—what she did well. Then she addressed what she struggled with or didn’t do as well in her essay. Next, she set a goal to revise the introduction by including a hook and a TAG (Title, Author, Genre) and, especially, to “talk more about the message.” Much of what the student plans to do is quite specific to revising the introduction; revising an introduction and knowing what is expected can help students produce more focused papers that are organized with a clear direction in terms of analysis. Below is her revision of the writing assessment (Fig. 4):
The student’s revision is a noticeable improvement over her original pre-test. The revision includes a hook (e.g., an anecdote about traditions), attempts a theme statement (e.g., the importance of traditions), addresses the changing relationship between the narrator and his grandfather, and also focuses on the medicine bag as a symbol. Notice how the student meets her revision goals but also takes up the suggestion to focus on symbolism. The moves the student makes from pre-test to revision are akin to a transition from knowledge-telling (e.g., summary) to knowledge-transforming (e.g., commentary) in writing (Bereiter & Scardamalia, 1987). For example, in her pre-test, the student summarized how Martin exaggerates about his grandfather but did not explain how this exaggeration relates to his embarrassment. In the revision, she explains, in detail, why Martin was embarrassed by his grandfather and why he felt compelled to make him seem more “glamorous” and larger than life. Moreover, her reflection on the revisions she made (below) demonstrates ownership over her revision process, with her teacher’s help (Fig. 5):
The student recognizes the changes she made from her pre-test to the revised version, particularly the inclusion of a message or theme statement, and the improvements she made. She also emphasized how helpful her teacher was in revising her body paragraphs, a goal that was not particularly emphasized on her revision planner but proved to be a writing move that was successfully executed. The student exhibited a strong sense of self-efficacy; note her expression of pride in working on and completing the assignment.
Data collection and measures
Self-efficacy for writing scale
To examine student growth in self-efficacy in writing, we adapted a pre-existing measure, the Self-Efficacy for Writing Scale (SEWS), developed and validated by another research team (Bruning et al., 2013), by adding questions regarding revision practices. After cleaning the data for complete entries at pre- and post-survey, our sample consisted of 214 students who had completely filled out both surveys. The SEW survey had 22 Likert-scale questions rated from 1 to 5 according to how much students agreed with each statement. To further analyze the SEW survey, and to simplify the analytical process, we conducted a factor analysis to reduce the number of components, creating four composites for specific areas of self-efficacy. The four composites used in our analysis, the questions that pertained to each one, and the factor loadings after applying orthogonal varimax rotations (Abdi, 2003) are in Table 1 below:
Ideation groups together questions regarding students’ ideas and content in their essays; Syntax pertains to students’ focus on grammar, spelling, and paragraph formation; Volition pertains to students’ ability to follow through with their assignment and complete it; and Revision questions pertain to students’ ability to revise their papers for specific skills.
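The dimension-reduction step described above can be sketched as follows. The data here are simulated placeholders (the actual survey responses and item wording are not reproduced), and the use of scikit-learn is illustrative; any factor-analysis routine supporting an orthogonal varimax rotation would do:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical stand-in for the survey data: 214 students x 22 Likert items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(214, 22)).astype(float)

# Extract four factors with an orthogonal varimax rotation, mirroring the
# four composites (ideation, syntax, volition, revision).
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
scores = fa.fit_transform(responses)   # one row of factor scores per student
loadings = fa.components_.T            # 22 items x 4 factors (a Table 1 analogue)

# Assign each item to the factor on which it loads most strongly.
item_to_factor = np.abs(loadings).argmax(axis=1)
print(scores.shape, loadings.shape)
```

In practice, items are grouped into a composite by inspecting which factor each item loads on most strongly, as the final line illustrates.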
Academic writing assessment
To test the impact of an increase in self-efficacy in writing on students’ analytical writing, we used students’ scores on the Academic Writing Assessment, a writing assessment created for our intervention and administered to students prior to the intervention and after revision of the pre-test. Two prompts (one on “The Medicine Bag” and one on “Ribbons”) were created for two texts in which the main character’s relationship with a grandparent changes throughout the story. Students stated a claim or theme statement about relationships and used textual evidence to support it. To control for prompt effects, half of the students wrote to one prompt at pre-test and the other at post-test, and vice versa.
Approximately twenty papers per teacher were randomly selected for scoring. Assessments were scored in a double-blind process: the scorer knew neither whether the paper was written by a treatment or comparison student nor whether it was a pre-test or a post-test. Each paper was read twice and given a score from 1 to 6 on each reading, for possible total score points from 2 to 12. If the two readers’ scores differed by two or more points (e.g., a 2 and a 4), a third, more experienced reader also scored the paper. If the third reader’s score matched either the first or the second reader’s score, the third reader’s score was added to the score it matched. If the third reader’s score fell in between the first and second readers’ scores, the third reader’s score was kept and the average of the first and second readers’ scores was added to it. All papers were scored in this manner during a scoring event held over four hours. Raters agreed within a score point for 95% of the papers; 5% required a third reading, and 49% of the papers had exact agreement between the two scorers.
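The adjudication rule can be summarized in a short function. This is a sketch of our reading of the rules above (scoring was done by hand, not by software); the function name and the exact discrepancy threshold are our interpretation:

```python
def combine_scores(first, second, third=None):
    """Combine reader scores (each 1-6) into a total on the 2-12 scale."""
    if abs(first - second) < 2:
        return first + second              # readers agree within one point
    if third is None:
        raise ValueError("discrepant papers require a third reading")
    if third in (first, second):
        return third * 2                   # third reader matched one score
    # Third score fell between the first two: keep it and add the
    # average of the first and second readers' scores.
    return third + (first + second) / 2

# e.g. combine_scores(3, 4) -> 7; combine_scores(2, 4, 4) -> 8;
# combine_scores(2, 5, 4) -> 7.5
```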
Pre-test essay revision planner and revised pre-test reflection form
To reiterate, the form asked students to self-assess, to plan and set goals during revision, and to reflect on the process after finishing their revisions. The reflection side of the planner was inspired by Daniel et al. (2015), who found that students who wrote a cover letter to their instructors detailing the changes they made to a revised paper, based on instructor feedback, submitted higher-quality revised papers than their control peers. The theory of change behind this planner is that it encourages students to identify problem areas, set goals, and be reminded of those goals as they revise their pretests, encouraging them to accomplish these goals (see Daniel et al., 2015).
A sub-set of students from both comparison and treatment classrooms was selected for interviews. Without knowing students’ AWA scores, teachers were asked to nominate one developing writer and one more proficient writer for the interviews. Selected students were provided with their pre-test, their revised pre-test, and their post-test; treatment students were also provided with their Pre-Test Essay Revision Planner and Revised Pre-Test Reflection form. Students were interviewed in the same room but sat far enough away from each other that ambient noise from the other interview would not be captured. Students were asked a series of open-ended questions (see Appendix 2) about their identities as writers (e.g., On a scale of 1 to 10, how would you rate yourself as a writer?), their revision process, and what helped them revise their papers and meet their goals.
Randomly selected teachers chose one focal class with which to conduct these research activities:
Students in the selected classes were asked to take two timed on-demand writing assessments–one at the beginning of the school year and one at the end of the school year. These essays were scored during a double-blind session based on the Academic Writing Assessment (AWA) rubric that we created and validated in other studies (see Olson et al., 2017).
The students also took two self-efficacy in writing (SEW) surveys, one at the beginning of the school year and one at the end of the school year.
In between the two SEW surveys, students’ teachers were randomly assigned either to have students reflect on their writing using the Pre-Test Essay Revision Planner and Revised Pre-Test Reflection form while revising their pretests, or to have students revise without reflecting.
Afterwards, two students from each class were interviewed about their writing process, with questions that focused on their identity as writers and what helped them as writers.
To analyze growth on our SEW measure, we ran t-tests measuring change from pre to post on each of the aforementioned components from our factor analysis (ideation, syntax, volition, and revision). We then ran t-tests measuring change from pre to post on the AWA, differentiating between the treatment and comparison groups, in order to test the impact of self-efficacy in writing on timed on-demand writing tasks.
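The two analyses can be sketched with SciPy. The arrays below are simulated placeholders for the actual pre/post composite scores; only the group sizes follow the sample described earlier:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated pre/post scores on one SEW composite (1-5 scale), treatment group.
pre_tx = rng.uniform(2.5, 4.5, size=131)
post_tx = np.clip(pre_tx + rng.normal(0.2, 0.4, size=131), 1, 5)

# Paired t-test: pre-to-post change within the treatment group.
t_within, p_within = stats.ttest_rel(pre_tx, post_tx)

# Independent-samples t-test: treatment vs. comparison gain scores
# (Welch's variant, which does not assume equal variances).
gain_tx = post_tx - pre_tx
gain_c = rng.normal(0.05, 0.4, size=83)    # simulated comparison-group gains
t_between, p_between = stats.ttest_ind(gain_tx, gain_c, equal_var=False)
```

The same pattern repeats for each of the four SEW composites and for the AWA scores.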
Students’ revision planners and post-revision reflections were analyzed for the types of goals students set for themselves by identifying idea units. Student interviews were transcribed by the first and second authors, divided into idea units, and coded for students’ revision processes and the strategies/resources that might have assisted them. Codes were independently generated and then verified between the two coders until agreement was reached (Miles & Huberman, 2008).