Introduction

The writing teacher’s main task is to provide students with feedback on their texts (Ferris et al., 2011). While providing feedback on students’ writing is time-consuming, it is a prominent and non-negotiable component of writing instruction (Ferris and Hedgcock, 2013). Feedback provision is a much more complicated task in second-language (L2) writing environments than in first-language (L1) ones. Because of the greater writing difficulties encountered in L2 learning environments, teachers try to meet a wider range of students’ needs and expectations and to help them overcome different types of writing problems. Feedback provision is theoretically grounded in Vygotsky’s (1978) socio-cultural theory. According to this theory, learners benefit most when teachers encourage them to self-correct and scaffold their language production by adjusting their choice of corrective strategies (Lantolf, 2000). Teachers’ feedback beliefs and practices are thus important contextual factors in L2 writing learning and development.

Feedback in writing classes entails using different modes, strategies, and activities. In writing classes, we can distinguish between ‘teacher feedback provision practices’ and ‘teacher uses of feedback activities’. In the writing instruction literature, these two terms are alternatively labeled ‘teacher feedback forms’ and ‘non-teacher feedback forms’ (see, for example, Abdel Latif et al., 2024; Saito, 1994). The former term, ‘teacher feedback provision practices’, refers to the feedback modes and types/forms the teacher uses when evaluating students’ texts. ‘Teacher uses of feedback activities’, by contrast, refers to the learner-centered feedback activities the teacher makes use of in the writing classroom; these normally include writing self-evaluation, peer feedback, and automated writing evaluation activities. The main role of the writing teacher in such activities is not to provide students with feedback but to guide and organize their participation, to monitor and evaluate their responses to the activities, and to assess their writing outcomes. The ultimate goal of both ‘teacher feedback provision practices’ and ‘teacher uses of feedback activities’ (i.e., teacher and non-teacher feedback forms) is to raise students’ awareness of their writing strengths and weaknesses and, in turn, to bring about desired changes in their performance.

Feedback provision and activity use represent an indicator of writing instruction effectiveness and students’ successful learning (Wang et al., 2023). In their attempts to meet L2 students’ writing needs, teachers have to optimize feedback provision practices and activities in their classes taking some contextual factors into account, such as time constraints, students’ levels, and class size. Besides, L2 writing difficulties can negatively influence the successful implementation of classroom learner-centered feedback activities (e.g., Özkanal and Gezen, 2023; Tian and Zhou, 2020). Accordingly, understanding writing teacher feedback provision practices and their uses of feedback activities is key to improving both forms of feedback in writing classes.

The last three decades have witnessed the publication of an increasing number of studies on feedback in L2 writing environments. Some studies have been concerned with teacher feedback (e.g., Mao and Crosthwaite, 2019; Wei and Cao, 2020; Zacharias, 2007). Other studies have focused mainly on learner-centered feedback activities (e.g., Dikli and Bleyle, 2014; Özkanal and Gezen, 2023; Sari and Han, 2024; Zhao, 2010). While these empirical attempts have widened our understanding of feedback practices in L2 writing classes, some research issues remain unaddressed. For example, previous studies have concentrated on certain international settings while neglecting others. Besides, previous studies on learner-centered feedback activities have evaluated these activities only in light of students’ perceptions and experiences, neglecting those of teachers. In an attempt to address these two research gaps, the study reported in this paper has tried to profile Saudi university English writing teachers’ feedback provision practices and their perspectives on learner-centered feedback activities. Before presenting the study, in the following section, we review and discuss pertinent research on L2 writing teacher feedback, learner-centered writing feedback, and English writing feedback practices in the Saudi context, respectively.

Literature review

L2 writing teacher feedback research

Using various research approaches, previous relevant works have focused on exploring L2 writing teachers’ feedback provision strategies and beliefs in different international educational settings. For example, Diab (2006) surveyed the feedback provision strategies of 14 university L2 teachers in Lebanon. The teachers were found to focus on word choice, text organization, writing style, and ideational content in their feedback. In a similar vein, using focus group interviews with L2 writing teachers at a Singaporean university, Lee, Leong, and Song (2016) found that these teachers’ feedback provision beliefs were shaped by students’ needs and that in their feedback provision, these teachers paid particular attention to grammar, text purpose, and text organization.

Wei and Cao (2020) also surveyed the feedback practices of 245 EFL lecturers working at Chinese, Thai, and Vietnamese universities. The results revealed that the teacher participants preferred high-demand feedback (i.e., requiring students to respond to feedback) and indirect correction. Their preferences for such strategies were explained in light of training experiences, as well as contextual factors such as limited available resources. In the Chinese EFL context, Mao and Crosthwaite (2019) also analyzed questionnaire and interview data from five English writing teachers and found that the teachers frequently used indirect feedback predominantly targeting local instead of global errors. Additionally, their study showed a mismatch between the teachers’ beliefs and practices due to some contextual factors, including time limitations, heavy workloads, and perceptions of students’ attitudes toward feedback. In contrast, Lee (2011), who collected textual feedback data from 26 English teachers in Hong Kong and interviewed six of them, found that the teachers’ most commonly used feedback strategy was direct error correction with a focus on content and accuracy.

Two main limitations can be noted in the above writing teacher feedback studies. First, most of them involved small numbers of participant teachers, which raises questions about the generalizability of their findings. Second, the feedback issues they covered were not comprehensive, as they focused on a limited set of feedback practices. A main research gap particularly concerns investigating teachers’ preferred or commonly used feedback modes: the vast majority of available studies on writing teachers’ use of feedback modes are experimental in nature (e.g., Alsahil et al., 2024; Bakla, 2020; Cunningham and Link, 2021). Thus, there is a need for research that profiles the feedback practices of a larger number of teachers and approaches these practices from a more comprehensive angle.

Learner-centered writing feedback research

The writing feedback generated in learner-centered activities also represents an important information source learners can use to improve the quality of their texts. Literature indicates that the three most popular learner-centered feedback activities in writing classes are: self-evaluation, peer feedback, and automated writing evaluation. Increasing research has been published on these non-teacher feedback forms in the last few years.

The studies comparing teacher feedback with non-teacher forms represent a relatively recent research strand, and they have notably increased since the early 2010s. Some studies have only compared the perceived usefulness of teacher feedback with that of peer feedback. For example, Miao et al. (2006) analyzed questionnaire and interview data from two groups of Chinese EFL students and found that they preferred teacher feedback over peer feedback. Meanwhile, the meta-analysis conducted by Thirakunkovit and Chamcharatsri (2019) revealed that teacher feedback has a larger effect size than peer feedback. Yet their further analysis of peer feedback research showed a notable difference between peer feedback with and without training.

Some other studies have compared teacher feedback with self-evaluation and automated writing evaluation. For example, Dikli and Bleyle (2014) reported L2 students’ positive perceptions of automated writing evaluation, though the students still preferred teacher feedback. Similarly, Özkanal and Gezen (2023) found that Turkish university students valued and integrated teacher feedback more than automated writing evaluation and peer feedback, respectively. In a more recent study, Sari and Han (2024) used a focus group interview with eight EFL students and found that they had positive attitudes toward using combined teacher and automated feedback.

There is a paucity of research on teacher evaluation of non-teacher forms or learner-centered feedback activities. The relevant research available has dealt only with teacher perceptions and experiences of automated writing evaluation tools, and even such research is scarce (Koltovskaia, 2023). For example, Wilson et al. (2021) used focus group interviews to explore elementary writing teachers’ attitudes towards and experiences of using an automated writing evaluation tool in U.S. elementary schools. The teachers viewed automated writing evaluation tools as potentially fostering students’ motivation and autonomy, but raised concerns that the functionality of such tools causes some instructional challenges. In a classroom-based study with three L2 writing teachers in U.S. university classes, Li (2021) found that while the teachers had positive perceptions of an automated writing evaluation tool, they reported limitations for it as a feedback alternative and concerns regarding the ecological changes it requires in the learning and instructional environment. In Koltovskaia’s (2023) interview-based study, six U.S. university teachers had positive attitudes towards relying on automated writing evaluation as a feedback tool, but they viewed that its potential could only be realized through practical instructional experience.

In addition to the scarcity of research on teacher evaluation of non-teacher feedback forms, some contextual gaps also pertain to the studies on teacher evaluation of learner-centered feedback activities. As noted, all three studies reviewed above were conducted in non-Arab contexts. Accordingly, the issue of teacher evaluation of non-teacher feedback forms also needs to be addressed in under-explored contexts such as the Saudi one.

English writing feedback in the Saudi context

Not much research has been reported on English writing feedback practices in the Saudi context. Among the few studies of teacher feedback are those reported by Alshahrani and Storch (2014) and Alkhatib (2015). Alkhatib (2015), for example, investigated how the feedback practices of ten English writing teachers at a Saudi university matched their instructional beliefs and their students’ feedback preferences. Her data revealed matches and mismatches between teachers’ writing feedback beliefs and practices, and also between feedback practices and students’ preferences, particularly with regard to the explicitness of written corrective feedback. The teachers tended to give comprehensive direct feedback on language-related aspects and indirect feedback on content and organization. Some factors, such as time constraints and students’ levels, were found to influence the teachers’ feedback practices.

As for research on non-teacher writing feedback forms in Saudi Arabia, the case is very similar to the international contexts highlighted above. On the one hand, the few studies reported addressed the perceptions of students, rather than teachers, of learner-centered feedback activities. On the other hand, most of these studies explored automated writing evaluation. Aldukhail (2023), for instance, combined a questionnaire with semi-structured interview data to probe Saudi university students’ evaluation of self-directed, teacher, and peer feedback types. Her study revealed that teacher feedback was the most preferred feedback type, while peer feedback was the least preferred one. With regard to automated writing evaluation, the two studies reported by Alnasser (2022) and Alhabib and Alghammas (2023) generally indicate that Saudi university students had positive attitudes towards it, but they were dissatisfied with the tools’ technical difficulties and with their feedback on text content and organization, respectively.

In light of the above, the picture of English writing feedback practices and feedback activity uses in the Saudi context is still unclear. This is due to some noted limitations in previous studies. First, these studies involved small numbers of participant teachers, which may limit the generalizability of their findings. Second, they examined a limited range of teachers’ feedback practices and activity uses, leaving gaps in our understanding of their wider scope. Finally, while research indicates that context-related factors shape teachers’ feedback practices and activity uses, relevant studies in the Saudi context remain scarce. Thus, there is a need to investigate the feedback practices of a larger number of Saudi university English writing teachers and to cover these practices more comprehensively in order to profile them in a clearer and more generalizable way.

The present study

To fill the above research gaps, the present study explored multiple dimensions of English writing teachers’ perspectives on feedback provision practices and learner-centered feedback activity uses at Saudi universities. The study approached this topic from three original angles. First, it explored various dimensions of teacher feedback provision practices (feedback modes, and error correction explicitness, scope, and focus strategies) and of teacher uses of learner-centered or non-teacher writing feedback forms. Second, it sought to profile teachers’ feedback practices and the beliefs rationalizing them by collecting open-ended qualitative data from a much larger number of participants than in previous relevant studies conducted in both international educational settings and the Saudi context. Third, the study collected data from female and male teachers working at a number of Saudi universities rather than at one university only. Accordingly, the study is guided by the following three questions:

RQ1: Which feedback provision modes do Saudi university English writing teachers rely upon more frequently, and what reasons do they give for the modes used?

RQ2: Which error correction explicitness, scope, and focus strategies do Saudi university English writing teachers use more frequently, and what reasons do they give for the error correction strategies used?

RQ3: To what extent do Saudi university English writing teachers use learner-centered feedback activities in their classes, and what reasons do they give for their reported uses of these activities?

By answering these questions, the present study could provide important implications for improving feedback practices and the implementation of learner-centered activities in English writing classes at Saudi universities. Such instructional improvements could in turn lead to bringing about the desired changes in students’ writing performance.

To answer the research questions, the study drew upon the qualitative approach. The literature indicates that the qualitative research approach is helpful in exploring a central educational phenomenon and developing a detailed understanding of it (Creswell, 2012). It is mainly used to inductively explore how people experience the target educational phenomena and how they interpret these experiences, without any hypotheses to confirm or reject (Bogdan and Biklen, 1997; Lodico et al., 2006). In line with the qualitative approach, we decided to collect data using a questionnaire part with open-ended questions instead of interviews, in order to access the largest possible number of research participants and, in turn, to profile the participants’ feedback practices more reliably. According to Creswell (2012), open-ended questions help research participants voice their experiences without restrictions and allow researchers to explore participants’ reasons for the reported practices.

Participants

The data of this study were collected from a sample of English writing teachers working in English language departments and programs at several Saudi universities. They were faculty members teaching English writing along with other language areas and/or linguistics courses in their workplaces. They worked primarily in English teacher and/or translator education programs in which students have to take 4–6 English writing courses, each taught for 2–3 hours a week over one academic term. In these programs, the number of students attending writing courses normally ranges from 15 to 25.

The questionnaire responses analyzed in this paper were gathered from 74 teachers: 48 females and 26 males, working at eight Saudi universities during the data collection stage. Most of these teachers were Saudi; the other teachers’ nationalities included Egyptian, Indian, Jordanian, Sudanese, and Yemeni. They held varied academic ranks: 45 were assistant professors and the remaining 29 participants held other ranks (lecturers = 16, associate professors = 8, professors = 5). Their teaching experience also varied, ranging mostly from five to 20 years, as did their writing instruction experience and the number of writing courses they had taught. During the data collection stage, most participants had taught more than five writing courses. All the teachers participated in the study on the basis of informed consent indicated in the questionnaire introduction, which explained the purpose of the study, confirmed the protection of their privacy and personal data, and stated that submitting their questionnaire responses constituted voluntary consent to take part in the study.

The open-ended questionnaire

As indicated above, in this study we drew upon open-ended questionnaire responses to access the largest possible number of teachers, to allow participant teachers to report their practices and reasons freely without restrictions, and to minimize potential social desirability bias, which may increase with other similar data sources such as interviews. We specifically used a questionnaire part with eight open-ended questions to collect the data for this study. These eight open-ended questions were part of a whole questionnaire, with other closed-ended parts, used in a larger research project. The open-ended questions were developed based on the purpose of the study and the relevant literature on writing feedback modes, error correction strategy types, and non-teacher feedback forms (e.g., Abdel Latif et al., 2024; Ferris, 2007; Ferris and Hedgcock, 2023). Thus, with the 74 teacher responses obtained, the open-ended questionnaire questions are assumed to have enabled us to collect detailed and objective data on feedback practices and activity uses in the target research context.

We worked collaboratively on developing the open-ended questions, which were written in English, and revised their phrasing based on our mutual discussion of their suitability. In its final version, the questionnaire starts with a bio part about the respondent’s workplace, nationality, and teaching and writing instruction experience. This bio section is followed by the main questionnaire with its closed-ended and open-ended parts. The open-ended questionnaire part used in this study includes eight questions about the teachers’ uses of feedback delivery modes, error correction strategies, and learner-centered feedback activities, and the factors or beliefs accounting for these uses. Each question about feedback provision modes/strategies or a learner-centered activity was followed by a further part asking for the reason behind the reported response. For example: ‘When correcting students’ English writing errors, do you correct them directly (by giving the correct alternative) or indirectly (by highlighting them or indicating their types)? Please explain in detail and give reasons.’; ‘To what extent do you use peer feedback or evaluation activities in the English writing classes you teach? Please explain in detail and give reasons.’

Data collection and analysis

The data of this study were collected over eight weeks. The open-ended questionnaire was created using Google Forms, and its URL was circulated to faculty members working at Saudi universities, both individually and in groups, through WhatsApp or email. We also asked some faculty members to circulate the questionnaire URL to their workplace colleagues. Only faculty members with experience in teaching English writing were asked to complete the questionnaire. Eighty-two responses were initially obtained from the target sample, but only 74 teachers completed all the open-ended questions reliably. The remaining eight teachers responded to most open-ended questions by merely adding a dot or a letter to skip answering them after completing the closed-ended questionnaire parts; their open-ended responses were therefore excluded from the analysis. Accordingly, this qualitative data collection process yielded 74 reliable questionnaire responses.

The data analysis process took several stages. We worked independently and then collaboratively on analyzing the qualitative data, drawing upon the following guidelines proposed by Lodico et al. (2006): sorting the data by organizing the respondents’ answers to each question in one section; exploring the data by initially reading and comparing the respondents’ answers to each questionnaire question; identifying the related descriptions of the participants’ reported feedback provision and classroom practices and their reported reasons; and confirming the evidence emerging from the data. After independently exploring potential emerging themes in the data, we met online to discuss the main themes each of us had identified, and through this discussion we resolved the slight differences noted in our analyses. To verify the credibility and trustworthiness of the qualitative data analysis at this early stage, we asked an expert applied linguist to read our agreed-upon data analysis framework and to indicate whether or not he agreed with the themes and categories identified. An inter-coding agreement rate of 92% was found between our data analysis and the expert applied linguist’s evaluation and analysis. We used the finally agreed-upon framework to analyze the whole data set and to look for further details supporting each theme. The last stage involved revising the sub-themes relevant to the finally agreed-upon main data analysis categories and quantifying some of them when needed (Guest et al., 2012).
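An inter-coding agreement rate of this kind is typically computed as the proportion of coded items on which both coders assigned the same category. The sketch below is purely illustrative and assumes hypothetical theme labels and counts, not the study's actual coding data or procedure:

```python
# Illustrative sketch only: simple percent agreement between two coders.
# The theme labels and counts below are hypothetical, not the study's data.

def percent_agreement(coder_a, coder_b):
    """Proportion of items to which both coders assigned the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("Both coders must label the same number of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders label 25 answers with feedback-mode themes; they disagree on 2.
coder_a = ["handwritten"] * 12 + ["oral"] * 8 + ["electronic"] * 5
coder_b = ["handwritten"] * 10 + ["oral"] * 10 + ["electronic"] * 5
print(f"{percent_agreement(coder_a, coder_b):.0%}")  # prints "92%"
```

Percent agreement is the simplest such index; chance-corrected measures (e.g., Cohen's kappa) are sometimes preferred when categories are unevenly distributed.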

Results of the study

The results of the data analysis are provided in the following subsections. The presentation of these results is guided by the research questions, as it incorporates the teachers’ reported uses of feedback practices or activities along with their reasons: the teachers’ feedback delivery modes section answers RQ1, the error correction strategies section answers RQ2, and the teachers’ use of learner-centered feedback activities section answers RQ3.

The teachers’ feedback delivery modes

Dominant feedback modes

In their answers to the first two open-ended questionnaire questions, which concerned feedback delivery modes, the respondent teachers reported using different approaches to feedback provision. Overall, the teachers’ answers showed that handwritten feedback is their most preferred and commonly used mode: 47 teachers reported depending on handwritten feedback either as their only delivery mode or in combination with another one. Collectively, they view providing students with handwritten comments as (a) a more effective, clearer, and more desirable mode for students; (b) more helpful for students in their future writing tasks; (c) more focused than oral feedback; and (d) easier, faster, and more convenient for themselves as teachers than the electronic or audio modes. The following exemplary answers indicate these opinions:

  • Handwritten feedback is easier and faster. Also, I can write down quickly the comments that pop into my mind. The large number of students’ essays makes it difficult to evaluate using electronic comments. Moreover, electronic feedback is a technique that I did not learn how to use.

  • Written comments are more effective as students learn from their mistakes and they can be discussed orally if the student needs more help.

  • I am a paper-and-pencil person. Plus, I feel students pay more attention to written comments. They are more memorable and handwriting itself carries meaning.

As noted in the above answers, some participant teachers have developed a longstanding reliance on handwritten feedback over their professional careers, which is why they do not like using technology in feedback provision.

The teachers’ second most commonly used mode is oral feedback. According to the teachers reporting it as their preferred mode (n = 37), oral feedback is personalized and immediate, and it is also an easier and less time-consuming feedback option that meets students’ psychological and learning needs. Two main issues are worth noting with regard to the oral feedback practices the teachers reported. First, it is used in most cases in combination with handwritten comments or error correction. Second, the teachers mentioned using the following three forms of oral feedback:

  • One-to-one conferencing in which they discuss their writing with students.

  • Individual, normally brief, comments given to each student in the classroom.

  • Oral comments given to the whole class on sample texts written by the students.

Conferencing is the least used form of the three oral ones; only five teachers reported using it. Conversely, the individual comments given to each student in the classroom represent the teachers’ most used oral feedback form (n = 24 teachers). The following answer summarizes how this oral feedback takes place in the classroom:

  • I believe individual oral comments are the most useful for students as they understand the source of their errors through the face-to-face discussion inside the class where I walk to each student and point out her writing errors and explain their causes.

On the other hand, 12 teachers mentioned giving oral comments to the whole class on sample texts written by students. The following two answers indicate the different approaches some of these teachers use in providing oral group feedback:

  • Sometimes, I show one of my student’s written work after taking his permission. I ask all students to find the mistakes and we discuss them together.

  • After individual oral comments and once all students have submitted their work, I collect sample anonymous essays (collection of mistakes) and present them asking students to correct the mistakes. From my experience, this is an excellent way for students to notice their mistakes.

As noted in the above answers, some teachers give students oral group feedback on anonymous texts, whereas others discuss students’ texts before their classmates non-anonymously. This latter technique could cause students some writing anxiety and apprehension.

Regarding electronic feedback, 17 teachers reported using it with their students. Compared with other feedback modes, these teachers find electronic comments easier to manage and more comfortable to deliver. They also view that with electronic feedback, students have the advantage of obtaining comments with clearer features that are easier to read, understand, and change. The following answers clearly indicate these reasons:

  • For me, I prefer to make comments in the email because this is easy. Handwriting is tiring, and nowadays we rarely write on paper.

  • I like to write comments electronically because I feel it is easier for students to read and understand. I can edit directly, or write comments on the side, or write elaborate comments at the end. I have more space than in a hard copy.

The first answer above implies that some teachers provide students with electronic feedback via email, while other teachers mentioned that they deliver their comments through specific platforms. It is also noteworthy that four teachers mentioned using electronic feedback because their students submit their texts electronically via emails or platforms.

The teachers’ answers showed that audio-recorded feedback was their least used feedback mode. Only five teachers reported using it. For example:

  • I like also to give audio-recorded comments in the first draft because this allows me to elaborate more on their written texts in terms of organization and ideas.

  • Recorded notes were useful when we were teaching online during COVID-19.

As noted in the above answers, some teachers use audio-recorded feedback for a functional reason (e.g., to give more detailed comments) or due to exceptional circumstances such as emergency online teaching.

Using combinations of feedback modes

In addition to depending primarily on one particular feedback delivery mode, some teachers were found to use two or more modes in their feedback provision. The teachers’ answers revealed they use the following three different combinations of feedback modes:

  • Combining handwritten and oral feedback (n = 9);

  • Combining electronic and oral feedback (n = 4);

  • Combining three feedback modes (n = 5).

The teachers who mentioned combining handwritten and oral feedback were found to ask students to submit their texts in a non-electronic format. Their answers suggest that with this combined handwritten-oral mode, they provide students with only a minimal level of handwritten comments, perhaps due to the large number of students and classroom time limitations. The following two answers support this conclusion:

  • I usually use handwritten comments because I ask students to write in class using pen and paper. I use oral comments to the whole class about the repeated errors and techniques that would help them to overcome them.

  • I use handwritten comments on the paper while correcting the first draft, and oral comments for the whole class so they all can learn from them.

On the other hand, a richer feedback level is noted in the answers given by two teachers combining electronic and oral feedback:

  • I give students both electronic and oral comments. After correcting their assignments, I compile them anonymously in a PDF file, send them on the general chat box on Teams, and ask students to specify an online extra class so that I explain each student’s mistakes. There is no time in classes to point out all students’ mistakes. Therefore, the online meeting helps me a lot and students’ levels also improve notably.

  • What I do is receive students’ essays through emails. Then I evaluate them and add my comments on them. Following this, I add all the evaluated essays in one Word file and discuss all the essays anonymously with all students in the classroom using the projector to show them their writing weaknesses and strengths. After the class, I send students this file through email. I think my students always like this.

In the above answers, we can sense the two teachers’ satisfaction with this combined electronic-oral feedback mode; a satisfaction reinforced by students’ positive responses to their feedback practices (for example: “I think my students always like this”).

In addition to the above two feedback mode combinations, five teachers said they use three feedback modes. The following answer, for instance, exemplifies an oral, audio-recorded, and handwritten feedback mode combination:

  • I like to use oral comments. I do it in two ways: the face-to-face comments in the classroom, and sending students private audio messages on Telegram so that each student can ask me about something on her essay. I provide my feedback by replying to my students in an audio message on Telegram. Besides, I also like to give students written feedback either through Teams or on paper.

The five teachers’ answers generally indicate their feedback mode combinations take two patterns: (a) handwritten, electronic, and oral comments (n = 3); and (b) oral, handwritten, and audio-recorded comments (n = 2).

Error correction strategies

The teachers’ use of direct versus indirect error correction

The teachers’ answers to the third open-ended question were analyzed to understand the explicitness of their error correction of students’ writing, and their reasons for it. Thirty-three teachers mentioned using direct error correction because it takes into account students’ language maturity levels and helps them understand their errors and thus avoid them in future writing tasks. The following answers exemplify these reasons:

  • Without giving students the correct alternative, they may not realize the error and cannot understand why it is incorrect. Giving the correct alternative will help them compare and know why it is more suitable than the other and will show them a model they can learn from.

  • I assume that when errors are not corrected, they might be made again. That is why I correct them in their texts, and explain the most common ones in class orally to give other students an overall idea of the most commonly committed mistakes.

The last exemplary answer above shows that in addition to written corrective feedback, some teachers correct students’ errors orally.

On the other hand, a relatively smaller number of teachers (n = 29) said they mainly use indirect error correction. They believe it could help students think actively about their errors and how to correct them. They also think that time constraints and large class sizes make it difficult for them to correct errors directly. For example:

  • It is hard to rewrite corrections for all students. They need to figure out their mistakes and correct them.

  • I would like to make students try to find the alternatives themselves. In my opinion, this will have a positive impact on their writing.

Regarding the nature of the teachers’ indirect error correction, some teachers said they highlight the errors only, others said they highlight them and indicate their types, whereas two teachers mentioned they correct some errors directly and then highlight other ones for students to correct.

For a third group of teachers (n = 12), using direct or indirect error correction depends on a number of variables. These include students’ numbers and levels, draft type (first versus final draft), course stage, and assignment type. Notably, the phrase “it depends” recurs throughout these 12 teachers’ answers. For example:

  • It depends on how far we are in the course and whether direct correction has already been given to them on previous essays or not.

  • It depends on students’ level; high-proficiency students benefit more from highlighting the errors and identifying their types in contrast to low-proficiency students.

  • I correct errors directly in the early assignments to help students recognize the right alternatives. After that, I only specify the type of error for students to help them think about it.

The above answers imply that the teachers’ error correction strategies are mediated by different contextual factors and are not driven solely by their preferences or beliefs.

The teachers’ use of comprehensive versus selective error correction

The fourth open-ended questionnaire question concerns the teachers’ use of comprehensive versus selective error correction. In their answers to this question, the larger group of teachers (n = 43) reported adopting a comprehensive approach to correcting students’ writing errors. Like the previously discussed teachers interested in direct error correction, the teachers with the comprehensive error correction orientation believe it helps students become aware of their English writing errors. As one of them summarizes it:

  • Students need to see all the errors corrected; otherwise, they would regard what is left uncorrected as correct.

It is worth mentioning that there are some slight differences in this group of teachers’ comprehensive approaches to error correction; while most teachers mentioned correcting all writing errors in students’ written texts, five teachers in the group referred to correcting a large number of students’ writing errors. Two other teachers in the group used the phrase “I try…” to describe their attempts to correct all writing errors. These two observations generally suggest that some teachers in this group aim to correct all errors in students’ texts but find it a challenging task.

The second group of teachers (n = 23) reported using a selective or focused approach to correcting students’ writing errors. According to these teachers, such a focused error correction approach is a better alternative as it prioritizes major writing problems and raises students’ error awareness. Some teachers in this group also regard this approach as non-detrimental to students’ writing motivation and as meeting their optimal error correction expectations, whereas other teachers believe that in some cases students’ writing errors are too numerous to be dealt with either explicitly or implicitly. The following answers further explain these views:

  • Students usually don’t pay attention to the detailed comments.

  • From my experience, students can feel demotivated if they see that they need to fix many errors. To avoid such pressure and to keep them motivated, I prefer to focus on specific and attainable goals suitable for their current level.

  • Sometimes you don’t even know where to start with a sentence because learners often have more than one level of error in one sentence.

With such issues in mind, this group of teachers prefers to focus only on correcting selected errors deemed important for their students to avoid.

In a way similar to the teachers’ perspectives on direct versus indirect error correction, a third group of teachers (n = 8) reported that covering students’ errors comprehensively or selectively depends primarily on factors such as the error type, students’ levels and numbers, the time available, and the writing course stage. For example:

  • It differs based on the types of errors (simple versus complicated errors).

  • I usually start any writing course I teach by correcting a large number of errors, particularly with the initial drafts. Then my error correction becomes gradually less comprehensive on the final drafts and also towards the end of the course because it will be very time-consuming if I keep providing feedback in this way.

As noted in the answers, the factors influencing the teachers’ error correction selectivity or comprehensiveness are almost identical to the ones partially accounting for their error correction explicitness or implicitness (see the above subsection). This was also noted in the influential factors a few teachers mentioned regarding their error correction focus. In other words, some common factors influence some teachers’ varied approaches to error correction explicitness, comprehensiveness, and focus.

The teachers’ error correction focus

The main dimensions determining the quality of written texts include content, organization, grammar or language use, vocabulary, and mechanics (spelling and punctuation) (see Jacobs et al., 1981). Broadly speaking, we can group these dimensions into two main categories: text ideational and organizational aspects versus language-related ones. The teachers’ answers to the fifth open-ended questionnaire question revealed their error correction focus orientations. Overall, the teachers were divided into three groups in this regard: a group focusing on text ideational and organizational aspects only (n = 35); a group focusing on language-related aspects only (n = 28); and a third one addressing both types of aspects (n = 11). The following exemplary answers illustrate these error correction focus orientations:

  • I try to cover all the aspects (organization, ideas, and mechanics) to help students improve their writing while focusing more on the skills provided by the course.

  • I focus on what they would need to learn the most. Sometimes, I work on a particular aspect, such as (punctuation or capitalization issues, only) if I feel that students are having apparent issues with the correct usage of these elements.

  • Generally in my feedback, I focus on different aspects of writing (coherence, organization, and sentence structure).

The feedback focus areas the teachers most frequently referred to in their answers are, in order of frequency, text ideas, organization, text coherence, punctuation, and grammar. The teachers’ answers generally suggest that many of them do not provide their students with a deep level of language-related feedback; their brief descriptions only concern correcting students’ punctuation and grammar errors but not their vocabulary.

Seven teachers’ answers revealed that the text draft plays an important role in their feedback focus. They reported tending to focus on different aspects when providing feedback on successive text drafts. However, their focus areas vary. For example, the following two answers show the focus areas of two teachers:

  • In any required assignment, I like to give students feedback on two drafts if time allows. In the first draft, I direct students to the overall organization of the essay and idea coherence. In the last draft, I give detailed feedback on many aspects of their writing such as grammar and punctuation.

  • When reading the student’s first draft, I like to comment on students’ written text in terms of organization and ideas. The second draft is the one that will be graded; therefore I have to comment on all aspects particularly students’ language.

The first answer shows that the teacher evaluates text organizational and ideational aspects in the earlier draft but focuses on grammar and punctuation in the last one; the second teacher’s answer reflects a similar sequence, although the graded draft there receives feedback on all aspects, particularly language.

The teachers’ use of learner-centered feedback activities

Using student self-evaluation activities

The teachers’ responses to the last three open-ended questionnaire questions were used to profile their use of learner-centered feedback activities and their beliefs about them. Regarding self-evaluation activities, more than two-thirds of the teachers (n = 53) said they do not use them in their writing classes. According to these teachers, a number of factors make it difficult to use this kind of activity, such as students’ unfamiliarity with them, students’ inability to detect their own errors, and time constraints. The following sample answers clarify these reasons:

  • Students are not familiar with them.

  • I believe students have difficulty doing that due to their level.

  • I am afraid, that students don’t notice their errors.

Three teachers gave two other unique reasons for not using self-evaluation activities in the classroom. For one teacher, students already self-evaluate their texts indirectly; for the two other teachers, students need a time gap between writing a text and self-reviewing it to effectively realize their errors:

  • I don’t use this kind of activity because students already do it indirectly before they submit their essays.

  • There is no need. But I always ask students to leave their first essay draft for some time and then go back and check it. In this way, they will spot some naive writing errors they were not aware of after immediately writing the essay.

As may be inferred from the second answer, the time gap needed to help students effectively realize their errors is unlikely to be available in a one- or two-hour writing class.

The 21 teachers who reported using student self-evaluation activities in their writing classes believe that these activities help students discover their errors, reflect upon them, become more motivated, and develop into autonomous learners. For example:

  • I use self-evaluation activities to allow students to realize their mistakes and understand why they are incorrect and how to improve their writing.

  • I always encourage my students to assess their own writing since it is very effective in finding out the gaps in their work and this helps them to be more independent.

A few teachers described the ways they engage students in evaluating their own texts. Their descriptions, however, are brief and do not reveal many details about the implementation of self-evaluation activities in their classes (for example, their frequency, duration, the teacher’s role, and the nature of the text drafts used). Two teachers, for instance, said they use text quality rubrics to guide these activities, and two others referred to using model texts against which students compare their own. Overall, the teachers’ reported attitudes and uses indicate that self-evaluation activities are unpopular in their writing classes.

Using peer feedback activities

Unlike their reported uses of self-evaluation activities, about three-quarters of the teachers (n = 57) mentioned employing peer feedback in their English writing classes. It is worth noting, however, that five of these teachers used the word “sometimes” in describing their use of peer feedback. The teachers gave multiple reasons for this use. Collectively, these include: varying teaching methods and the classroom atmosphere, motivating students, helping students learn from each other, and improving their essay assessment ability and communication skills. For example:

  • I use it to help students get an idea of other students’ writings that help them to evaluate their own writing as well. At the same time, they benefit from giving and providing feedback.

  • Students can see others’ mistakes in a better way than seeing their own mistakes; this will eventually make them aware of how to write properly.

The teachers with this positive attitude, however, did not report many details about the way they implement peer feedback activities in the classroom. Their descriptions include phrases such as “encourage students”, “ask students”, “to share essays”, “to exchange essays”, “to share thoughts”, “to collaborate”, and “to discuss”. Two teachers talked about dividing peer groups into students of varied writing abilities, and one teacher mentioned guiding these activities by using model texts.

The 17 teachers with a negative attitude toward using peer feedback also have their reasons. For these teachers, there are no real gains from peer feedback activities because students do not take their classmates’ evaluations seriously, do not regard them as reliable sources, and feel apprehensive about peer evaluation. Besides, students’ similarly low levels do not help in effectively implementing peer feedback activities. The following answers illustrate these concerns:

  • Students do not take peer feedback activities seriously; sometimes they talk about irrelevant life issues during them.

  • It is less likable to me. I think many students do not like to criticize each other’s writing.

  • Students don’t like their mistakes to be seen by friends.

For these reasons, these teachers, or at least a number of them, view peer feedback as a time-consuming activity in writing classes.

Using automated writing evaluation activities

Compared to self-evaluation and peer feedback activities, the teachers reported the least positive attitudes towards using automated writing evaluation. Sixty teachers answered the relevant questionnaire question negatively. Some of these teachers do not trust automated writing evaluation applications and feel they are not helpful enough, as they merely provide students with grammar and mechanics correction suggestions without making them aware of the errors’ nature or causes. Other teachers believe that students will not obtain significant learning gains from using them. They therefore regard such activities as time-consuming, particularly given that students can use these applications on their own. For example:

  • Don’t use it. Never thought about it. I think students will naturally use it at home.

  • I don’t like to use it because students will add the suggested changes without understanding why their writing is wrong.

The 14 teachers who reported a positive attitude towards using automated writing evaluation as a classroom-based tool feel it can guide students properly and help them write correctly. They also believe it can save time, and perceive its classroom use as a way of keeping pace with technological advances and of reinforcing students’ out-of-classroom writing learning experiences. As one teacher summarizes this last reason:

  • I use automated writing evaluation because students already use it; so, I teach them how to use it and the best ways of making use of online apps.

In some of these teachers’ answers, we can also note the words “advise” and “recommend”. In other words, some of them also recommend particular, more reliable automated writing evaluation applications for their students to use independently. Taking all the positive answers into account, we may generally conclude that a few teachers in this context occasionally integrate automated writing evaluation applications into their classes and/or advise students to use them outside the classroom.

Discussion and conclusions

The results above show the complexity of the teachers’ feedback provision process. The teachers in this study reported depending on some feedback delivery modes and error correction strategies more than others. Both handwritten and oral feedback modes are more dominant in their feedback practices than the other modes. Meanwhile, a smaller group of teachers reported combining two or three feedback modes. With regard to direct versus indirect error correction, a relatively larger group of teachers mentioned using direct error correction. While these results concur with Lee’s (2011) findings, they contradict those reported by Alshahrani and Storch (2014), Liu and Wu (2019), Mao and Crosthwaite (2019), and Wei and Cao (2020), whose participant teachers preferred indirect feedback. Regarding error correction scope (i.e., comprehensive versus selective), a relatively larger group of teachers reported correcting students’ writing errors comprehensively. These results are congruent with those of Alshahrani and Storch (2014) and Alkhatib (2015) in indicating that comprehensive error correction is more common in Saudi university English writing classes.

The study showed the teachers are somewhat divided in their focus on text ideational and organizational versus language-related aspects, with slightly more teachers attending to text ideational and organizational aspects. Specifically, the textual areas the teachers reported focusing on in their feedback are, in order of frequency, text ideas, organization, text coherence, punctuation, and grammar. These results contradict those of Diab (2006), Lee (2011), Lee et al. (2016), and Mao and Crosthwaite (2019), whose participant teachers were found to prioritize language accuracy error correction. In their language-related error correction focus, the teachers were found to be concerned with grammar and mechanics in particular. Some previous studies (e.g., Alshahrani and Storch, 2014; Cheng et al., 2021) strongly support this point. The conclusion drawn here is that in many L2 writing contexts, teachers focus far more on grammar and mechanics than vocabulary. The study also corroborates previous research findings (e.g., Alkhatib, 2015; Chen, 2023; Lee et al., 2016) that many L2 writing teachers view considering students’ needs as a decisive factor in prioritizing their feedback provision practices. What may seem unique in the results of the present study is that many teachers use combinations of feedback modes and of direct-indirect and comprehensive-selective error correction. This may be inconsistent with previous conceptualizations proposing that writing teachers normally use one particular feedback mode or error correction type rather than another (e.g., Ferris, 2007; Ferris and Hedgcock, 2023). Finally, the teachers considered some factors potentially influencing their use of particular feedback modes and error correction explicitness, scope, or focus strategies. Collectively, these factors are text draft type (first versus final draft), writing error type, assignment type, students’ levels and numbers, available time, and the writing course stage. Similar contextual factors, such as time limitations and workload, were also found in previous studies (e.g., Mao and Crosthwaite, 2019; Wei and Cao, 2020).

As for the teachers’ use of learner-centered feedback activities, it was found that a larger number of them use only peer feedback activities in their classes, neglecting student self-evaluation and automated writing evaluation. The teachers’ reasons varied depending on the activity type, but it seems that the perceived value of each activity influenced their attitude towards using it. Overall, these results confirm previous learner and teacher research findings (e.g., Aldukhail, 2023; Alhabib and Alghammas, 2023; Alnasser, 2022; Koltovskaia, 2023; Li, 2021; Wilson et al., 2021) about teachers’ concerns over using student self-evaluation and automated writing evaluation activities in writing classes.

The results of this study indicate the need to foster some particular dimensions of Saudi university teachers’ L2 writing feedback literacy. As noted in the results section, there seem to be shortcomings in the teachers’ practices regarding, for instance, making more effective use of electronic feedback, providing more comprehensive language-related feedback, and optimizing peer feedback activities. Teachers need to receive adequate training in these dimensions. In-service teacher training should also focus on raising teachers’ awareness of how to effectively use self-evaluation and automated writing evaluation activities in their writing classes. Given that large class sizes hinder effective feedback practices, there is also a need to reduce class sizes in Saudi university English writing courses. With an average of about 15 students per class, feedback practices in English writing classes would be expected to improve significantly. These recommendations are also generalizable to L2 writing instruction contexts with characteristics similar to the Saudi one.

The qualitative approach used in this study has enabled us to provide a detailed profile of teacher feedback practices and feedback activity uses in Saudi university English writing classes. Yet, further research is still needed to complete this profile. Future studies could combine open-ended questionnaire data on teacher feedback with large samples of teachers’ actual feedback comments. This would add significantly to profiling feedback practices in the Saudi context. It is also important to compare writing teachers’ and learners’ feedback perspectives using qualitative and quantitative data. Such methodological approaches may also be adopted in examining writing feedback practices in other international contexts. By profiling feedback perspectives in this way, we could identify writing teachers’ feedback literacy needs and learners’ feedback expectations.