As Crisp (2007) observed, the importance of feedback has been emphasized in policy documents and standards (e.g., the Next Generation Science Standards [NGSS Lead States, 2013]) beyond the level discussed in the assessment literature (e.g., Evans, 2013; Evans & Waring, 2011; Hattie & Timperley, 2007; Li & De Luca, 2014; Shute, 2008; Winstone & Boud, 2022). Feedback is an essential aspect of formative assessment and strongly influences learning and achievement (Black & Wiliam, 1998; Clark, 2012; Hattie & Timperley, 2007; Havnes et al., 2012; Ruiz-Primo & Li, 2013; Yan et al., 2021). It is a vital step that allows students and instructors to communicate, and this communication helps identify student needs and improve learning. No general agreement has been reached regarding the definition of effective feedback (Evans, 2013; Shute, 2008): feedback can be implemented in a variety of ways, and its effectiveness varies with the student, the context, and the purpose of the feedback (Evans, 2013; Hattie & Timperley, 2007). Nevertheless, certain general attributes of feedback can help match feedback to student needs. In this study, we expand the feedback dimensions identified by Hatzipanagos and Warburton (2009) to analyze effective feedback.

Research has indicated that technology can assist teachers during the feedback process, helping them meet students’ needs (Maeng, 2017). Technology can support feedback in a variety of ways: immediate feedback (e.g., Buckley et al., 2010; Zhang & Yu, 2021), personalized feedback (e.g., Penuel & Yarnall, 2005), collaborative learning communities (e.g., Lai & Ng, 2011), and feedback to the instructor (e.g., Feldman & Capobianco, 2008). The anytime-anywhere access that technology affords also improves communication between teachers and students (Evans, 2013), thereby promoting the feedback process.

Since feedback is an essential and widely discussed phenomenon in the literature and relevant standards, this empirical study explored the role of technology in the process of providing feedback. Technology-based feedback can be provided through a variety of media: internet applications, interactive multimedia, electronic games, and mobile devices (Evans, 2013). In this study, feedback was provided via mobile devices, specifically through the use of different applications (i.e., computer programs, also known as “apps”) on an iPad. We explored the potential of iPad apps to support the feedback dimensions within a high school physics course. Specifically, we compared the affordances of the apps to the teacher’s practices.

Feedback

A major aim of feedback is to improve students’ learning (Black & Wiliam, 1998; Jones & Blankenship, 2014; Ruiz-Primo & Li, 2013; Siegel et al., 2006; Winstone & Boud, 2022). Researchers have identified a gap between students’ current performance and the desired learning goal; accordingly, feedback should facilitate the narrowing of this gap (Lizzio & Wilson, 2008; Nicol & Macfarlane-Dick, 2006; Sadler, 1989). Although researchers have agreed that feedback is an important part of assessment, definitions of feedback have varied widely (Black & Wiliam, 2009; Evans, 2013; Li & De Luca, 2014; Shute, 2008). On one hand, Kepner defined feedback as “any procedure used to inform a learner whether an instructional response is right or wrong” (as cited in Jones & Blankenship, 2014, p. 2). On the other hand, Li and De Luca (2014) used the term “assessment feedback” to refer to the comments or grades that instructors use to improve student learning. While the effectiveness of feedback has been debated, Hatzipanagos and Warburton (2009) identified certain common attributes.

Feedback Attributes

Hatzipanagos and Warburton (2009) summarized the attributes of feedback based on the existing feedback literature, grouping them into eight categories. We expanded upon these categories by referring to recent literature on feedback. Moreover, we organized these dimensions into two meta-categories: “Strategies of feedback” and “Impact of feedback.” The dimensions included in the “Strategies of Feedback” meta-category focus on strategies and approaches related to providing feedback. In contrast, the dimensions included in the “Impact of Feedback” meta-category emphasize the outcomes and effects of feedback on students and the educational process.

Table 1 is based on Hatzipanagos and Warburton’s (2009) view of “feedback as a dialogue.” This viewpoint is grounded in the idea that feedback is an active and participative process. According to these authors, “In formative feedback, dialogue forms the mechanism by which the learner monitors, identifies, and then is able to ‘bridge’ the gap in the learning process” (p. 46). In alignment with their views, we believe that learning is a social activity and that assessment, because it cannot be separated from the learning process, is inherently social as well. Because participation is pivotal in social activities, feedback must support communication among students and the teacher. Hatzipanagos and Warburton (2009) underlined the importance of communication by stating, “Communication is part of the mechanism by which the learner identifies and then bridges the gap between the current learning achievements and the goals by the tutor” (p. 47). Feedback enables students to understand their own learning progress within the community. We believe that feedback should foster the growth of learning communities and empower students to take responsibility for their own learning; it should enable students to reflect on the feedback they receive and take corresponding action.

Table 1 Dimensions of feedback.

Although feedback has generally been defined in the literature as referring to situations in which a teacher provides feedback to students, other directions also exist: students can provide feedback to teachers, to their peers, or to themselves. The impacts of self and peer feedback on students’ learning should not be underestimated (Hatzipanagos & Warburton, 2009). These forms of feedback play a pivotal role in fostering student responsibility and increasing engagement in learning (Hatzipanagos & Warburton, 2009; McConnell, 2006; Sadler, 1989; To, 2022), and they also contribute to the development of students’ self-assessment skills. When students receive feedback from their peers, the process improves their dialogue and promotes the exchange of diverse perspectives. It thus empowers students by enabling them to take more responsibility and to draw on sources of feedback beyond the student–teacher relationship.

For feedback to reach students, it must be appropriate to their needs. According to Kluger and DeNisi (1996), feedback has the greatest effect when the corresponding goals are specific and challenging and when the level of task complexity is low. While these attributes related to active participation should be emphasized, the importance of the timing and visibility dimensions of feedback should not be underestimated. Some researchers have claimed that providing immediate feedback has a significant effect on student learning (Black & Wiliam, 1998; Hattie & Timperley, 2007; Zhang & Yu, 2021). However, Mathan and Koedinger (2002) argued that the appropriate timing of feedback depends on the nature of the assessment task and students’ capacities. The visibility dimension focuses on monitoring students to identify their dynamic understanding and learning progress. Through such monitoring, the teacher can facilitate the creation of a shared understanding among community members (Radinsky et al., 2010). Thus, this dimension is essential for effective feedback and serves as an initial step in fostering communication between students and the teacher.

Feedback and Technology

Technology can help a teacher during the feedback process in a variety of ways (Maeng, 2017). Research on technology-based feedback (also known as e-assessment feedback) has been increasing (Evans, 2013). Such feedback can be provided through a variety of media, including mobile devices and internet platforms. Technology-based feedback is diverse: it can be synchronous or asynchronous, can be generated by the teacher or a computer, and can support either individual or group learning.

Technology-based feedback can provide opportunities that would otherwise be impossible due to various factors, including time constraints, geographical limitations, and large numbers of students (Gilbert et al., 2011). Technology facilitates the establishment of an environment that can support a learning community (Lai & Ng, 2011), helps teachers collect data (e.g., Feldman & Capobianco, 2008), provides immediate feedback (Balta & Tzafilkou, 2019; Buckley et al., 2010; Zhang & Yu, 2021), provides personalized feedback (e.g., Buckley et al., 2010; Penuel & Yarnall, 2005), and facilitates self-assessment and peer assessment (Foo, 2021; Hickey et al., 2009; Ng & Lai, 2012; Yarnall et al., 2006).

Technology-based feedback impacts student motivation and engagement (DeNisi & Kluger, 2000; Zhang & Yu, 2021), although the degree of such impact varies (Evans, 2013). Gilbert et al. (2011), in their Synthesis Report of Assessment and Feedback with Technology Enhancement (SRAFTE), reported that the success of technology depends on how it is implemented rather than on the specific technology itself. Thus, engagement and the improvement of student learning depend on how specific technologies are implemented. Therefore, in this study, we explored both the affordances of apps and teacher practices.

This study offers a unique perspective on technology-enhanced feedback by examining both the affordances of feedback apps and teachers’ feedback practices. It also extends the work of Hatzipanagos and Warburton (2009) by incorporating recent feedback literature to enhance our understanding of feedback attributes. The purpose of this study is to explore the potential of application affordances in promoting feedback attributes.

The specific research questions guiding our study are as follows:

How are the defined feedback dimensions fulfilled by the iPad applications used in the classroom? Namely,

(a) to what extent do iPad apps fulfill the feedback dimensions?

(b) to what extent does the teacher’s use of the iPad fulfill the feedback dimensions?

Methods

In this qualitative case study, we examined the potential of technology to support feedback; specifically, we investigated whether iPad applications (“apps”) could enhance feedback attributes. Throughout the study, we maintained detailed records of our research procedures. Additionally, we conducted weekly meetings to discuss methodological decisions related to the process of data collection and analysis. We also addressed emerging issues in the field and validated our coding to ensure trustworthiness (Guba & Lincoln, 1989).

Research Participants

Data were collected from the classroom of a high school physics teacher. The district Science Coordinator recommended this teacher due to her reputation as an innovative educator who actively incorporated iPads into her teaching practices. We employed a purposeful sampling approach to account for “the key constituencies relevant to the subject matter” (Ritchie et al., 2003, p. 79). This approach allowed us to gain in-depth insights into phenomena by selecting samples that could provide the most information (Creswell & Poth, 2016; Merriam, 1998).

The participant teacher, referred to here by the pseudonym Amy, has been teaching since 1997. She has taught courses in physical science, physics, and honors physics. During her career, she has achieved National Board Certification and earned the title of Professional Development Classroom Teacher. Amy has received several local and statewide awards and has also been honored at the national level with the prestigious Presidential Award for Excellence in Mathematics and Science Teaching. She holds both a master’s and a bachelor’s degree in science education.

Before joining the high school in which this study took place, Amy taught at a junior high school. Although she had previously used some technology in her teaching, she began incorporating iPads and technology more extensively in the high school setting. Teachers were hired one year before the school opened. During this preparatory period, referred to as “year zero,” the school provided each teacher with an iPad, and over the course of that year, the teachers began acquiring technology skills and worked to prepare the school for its opening. Amy was appointed department chair; thus, she attended additional workshops and conferences to extend her knowledge of the use of technology in the classroom.

This study was conducted at a public high school in the midwestern United States with a diverse student population and a student–teacher ratio of 18:1. The school was founded as a technology-immersed school. Before the school admitted students, teachers underwent training in both general technology and iPad use; they met quarterly during this transition year and were encouraged to use iPads in class.

For this study, the first author participated in two of Amy’s classes during the spring and fall semesters. Both classes were honors physics. In the first class, during the spring semester, Amy taught units on Newton’s Laws and Waves, while in the second class, she taught a unit on Uniform Motion. The classes were representative of the school’s student population in terms of gender ratio, socioeconomic status, and racial-ethnic composition.

Researchers’ Role

During the fall and spring semesters, the first author participated in Amy’s classes as she taught the Newton’s Laws, Waves, and Uniform Motion units. Throughout this period, the first author conducted classroom observations and video-recorded all of the class sessions. This study was conducted as part of the first author’s dissertation. As part of another study in her dissertation, she interviewed both students and the teacher and examined students’ work. Consequently, she became very familiar with the students and the classroom environment, and this familiarity may have influenced her interpretation of the teacher’s practices.

The second author is an experienced researcher in science education and teachers’ assessment practices. She provided support throughout the study. The first and second authors engaged in weekly meetings throughout the study to discuss the study design, data collection, data analysis, and results. The second author provided valuable insights, reviewed the coding, and validated the results.

Data Sources

The data sources included classroom video recordings and the websites associated with the relevant apps. In this study, the apps that Amy preferred to use in her classroom were evaluated; all of these apps were used in the classroom for teaching and assessment purposes. The apps were QR Code Reader, Schoology, Kahoot!, Nearpod, Socrative, ZipGrade, and The Physics Classroom (Appendix 1). The apps and their associated websites were used to understand the affordances of the apps: each app was downloaded and then tested with the available data, and each associated website was visited and analyzed.

Eighteen classes were recorded as data sources. Normal classes were 85 min long, while two short classes were 45 min long (approximately 24 h of recordings in total). The researcher took pictures and field notes during the classroom observations. Scholars have recommended capturing a comprehensive picture of the classroom in order to understand participants, their behaviors, and the corresponding context in depth (Glesne, 2006; Yin, 2018). In this study, the classroom observations (videotapes and field notes) provided information regarding the teacher’s feedback practices.

Data Analysis

Typological data analysis was used for this study. This type of analysis is appropriate when a study has a narrow focus, data are collected for specific purposes, and the categories for analysis are predetermined (Hatch, 2002).

We used an enhanced version of the feedback dimensions identified by Hatzipanagos and Warburton (2009) for the analysis. Our primary goal was to determine which apps aligned with the feedback dimensions. To assess the affordances of each app, the first author visited its website to understand its features, then installed and personally tested the app and generated detailed memos. These memos were used to code the affordances of the apps.

To analyze teacher practices, the classroom video recordings were reviewed and categorized by app. After categorization, the videos pertaining to each app were analyzed to determine the level of alignment between the teacher’s practices and the feedback attributes associated with each dimension. We established specific categorization criteria (Table 2): “not applicable” (0), “poor” (1), “potential” (2), and “good” (3). While the affordances of the apps were coded based on their support for the attributes associated with each dimension, teacher practices were assessed based on any teacher activities involving app usage. For example, since students could not send questions to the teacher (or to each other) via QR Code Reader, this app was coded as not supporting the “questioning” attribute associated with the dialogue dimension. In contrast, the teacher encouraged students to share information with their peers while using Kahoot!, even though this app did not support “questioning”; as a result, we coded the teacher’s practice as supporting peer assessment, which is associated with the community dimension.

Table 2 Categorization criteria for the feedback dimensions
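To make this tabulation concrete, the following minimal sketch (in Python; an illustration we added, not part of the study’s procedure) shows how ratings on the 0–3 scale in Table 2 can be tallied into the “X of 7 apps” counts reported in the Findings. The app names are taken from the study, but the rating values shown are placeholders rather than the study’s actual data.

```python
from collections import Counter

# Rating scale from Table 2.
SCALE = {0: "not applicable", 1: "poor", 2: "potential", 3: "good"}

# Hypothetical ratings: ratings[app][dimension] -> 0..3.
# Only three apps and two dimensions are shown for brevity.
ratings = {
    "Kahoot!":   {"visibility": 3, "dialogue": 1},
    "Nearpod":   {"visibility": 3, "dialogue": 2},
    "Socrative": {"visibility": 3, "dialogue": 1},
}

def tally(dimension: str) -> dict:
    """Count how many apps received each rating label on one dimension."""
    return dict(Counter(SCALE[r[dimension]] for r in ratings.values()))

print(tally("visibility"))  # {'good': 3}
print(tally("dialogue"))    # {'poor': 2, 'potential': 1}
```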

Trustworthiness

To establish the trustworthiness of this study, several strategies were used. To establish credibility, the strategies of prolonged engagement, triangulation, peer debriefing, and member checking were utilized. Prolonged engagement in the research environment allowed the researchers to understand the context in depth. A thorough examination of the data was facilitated by the triangulation of data collected from several sources. Additionally, member checking was employed, which involved sharing and confirming the initial results of the study with the teacher. Peer debriefing with a colleague who had experience in classroom assessment and in-service teacher education also enhanced the dependability of the study. Furthermore, providing thick and rich descriptions of the teacher’s practices and the researchers’ roles alongside detailed examples contributed to the confirmability and transferability of the study (Creswell & Poth, 2016; Lincoln & Guba, 1985). Finally, the study employed an expanded version of the feedback dimensions identified by Hatzipanagos and Warburton (2009). This approach facilitated logical inferences and clear reasoning throughout the data analysis process (Brantlinger et al., 2005).

Findings

Our findings regarding assessment feedback in relation to the affordances of feedback apps and teacher feedback practices are presented below. First, we present the findings regarding app affordances; next, teacher practices with the apps; and finally, a comparison of these two sets of findings. The results are depicted in figures created using Microsoft Excel.

Figure 1 displays two meta-categories and eleven feedback dimensions related to the affordances of apps. An examination revealed variations in the affordances of apps, which were rated as “good,” “potential,” “poor,” and “not applicable.”

Fig. 1 Feedback dimensions for affordances of apps

In the strategies of feedback meta-category, visibility received the most “good” ratings (5 of 7), followed by timeliness (4 of 7) and learning (4 of 7). Dialogue and learning were the only two dimensions that received a “potential” rating, and only 1 of 7 apps received this rating. Notably, complexity was the only dimension that did not receive either a “good” or “potential” rating. The dialogue dimension received the most “poor” ratings (5 of 7), followed closely by complexity (4 of 7). Most apps were rated as “not applicable” in the dimensions of clearness (6 of 7) and appropriateness (5 of 7).

In the impact of feedback meta-category, power received the most “good” ratings (3 of 7), while the other dimensions received “good” ratings for only 1 of 7 apps. A total of 4 of 7 apps were rated as “potential” in the action and reflection dimensions, while only 1 of 7 apps received “potential” ratings in the community and power dimensions. The community dimension received the most “poor” ratings (5 of 7), followed by power (3 of 7). None of the dimensions included in the impact of feedback meta-category were rated as “not applicable.”

Teacher practices related to the use of apps to provide feedback were also analyzed (Fig. 2).

Fig. 2 Feedback dimensions for teacher practices

Figure 2 presents the two meta-categories and eleven dimensions of feedback for teacher practices. In the strategies of feedback meta-category, all apps (7 of 7) were rated as “good” in the appropriateness and clearness dimensions, but only 1 of 7 apps received a “good” rating in the complexity dimension. In each of the remaining dimensions (dialogue, visibility, learning, and timeliness), 5 of 7 apps were rated as “good.” For three dimensions (dialogue, learning, and timeliness), 2 of 7 apps were rated as “potential,” while only 1 of 7 apps was rated as “potential” in the visibility and complexity dimensions. The complexity dimension received the most “poor” ratings (5 of 7), followed by visibility (1 of 7). None of the dimensions associated with the strategies of feedback meta-category was rated as “not applicable.”

In the impact of feedback meta-category, 6 of 7 apps were rated as “good” in the reflection and action dimensions. Following those, 4 of 7 apps received “good” ratings in the power dimension, while the community dimension received the fewest “good” ratings (2 of 7). The dimension that received the most “potential” ratings was community (3 of 7), followed by power (2 of 7). In each of the remaining dimensions (reflection and action), 1 of 7 apps was rated as “potential.” The community dimension received the most “poor” ratings (2 of 7), followed by power (1 of 7). None of the dimensions associated with the impact of feedback meta-category was rated as “not applicable.”

Subsequently, we compared the affordances of the apps (Fig. 1) to teacher practices (Fig. 2). While the affordances of the apps exhibited diversity, teacher practices were predominantly rated as “good” across all feedback dimensions. Using our meta-categories, we explored the differences in each dimension between the affordances of the apps and teacher practices.

In the strategies of feedback meta-category, increased “good” ratings were observed for dialogue (1 to 5), appropriateness (0 to 7), learning (4 to 5), timeliness (4 to 5), clearness (1 to 7), and complexity (0 to 1). The number of apps receiving “good” ratings remained the same (5) only in the visibility dimension. It is also important to highlight that while clearness and appropriateness frequently received ratings of “not applicable” for the affordances of apps (i.e., 6 of 7 for clearness and 5 of 7 for appropriateness), these dimensions were rated as “good” for teacher practices with all apps. Dialogue was frequently rated as “poor” for the affordances of apps (5 of 7); however, it was frequently rated as “good” for teacher practices (5 of 7). Complexity was frequently rated as “poor” for both the affordances of apps (4 of 7) and teacher practices (5 of 7).

In the impact of feedback meta-category, increased “good” ratings were observed for community (1 to 2), power (3 to 4), reflection (1 to 6), and action (1 to 6). It is also important to highlight that while reflection and action were frequently rated as “potential” (4 of 7) for the affordances of apps, these dimensions were rated as “good” (6 of 7) for teacher practices. Community received ratings of “poor” (5 of 7), “potential” (1 of 7), and “good” (1 of 7) for the affordances of apps; its ratings changed to “poor” (2 of 7), “potential” (3 of 7), and “good” (2 of 7) for teacher practices.

In addition to the previous analysis, we evaluated each app across all feedback dimensions for both app affordances (Fig. 3) and teacher practices (Fig. 4). This process enabled us to understand which apps performed well in terms of the two meta-categories and the eleven feedback dimensions.

Fig. 3 Feedback dimensions of app affordances with regard to each app

Fig. 4 Feedback dimensions of teacher practices with regard to each app

Figure 3 illustrates the affordance ratings of the seven apps across the two meta-categories and the eleven feedback dimensions. An examination of each app revealed that only The Physics Classroom was applicable to every dimension, while all other apps were rated as “not applicable” in at least one dimension. To summarize briefly, QR Code Reader received “good” ratings in the learning dimension of the strategies of feedback meta-category and in the power dimension of the impact of feedback meta-category. Schoology received “good” ratings in the dialogue, visibility, and learning dimensions of the strategies of feedback meta-category and in all dimensions of the impact of feedback meta-category. Kahoot! achieved “good” ratings in the visibility and timeliness dimensions of the strategies of feedback meta-category but was not rated as “good” in any dimension of the impact of feedback meta-category. Nearpod received “good” ratings in the visibility, learning, and timeliness dimensions of the strategies of feedback meta-category and “potential” ratings in all dimensions of the impact of feedback meta-category. Socrative received “good” ratings in the visibility and timeliness dimensions and a “potential” rating in the learning dimension of the strategies of feedback meta-category, along with “potential” ratings in the reflection and action dimensions of the impact of feedback meta-category. ZipGrade received a “good” rating in the visibility dimension of the strategies of feedback meta-category and “potential” ratings in the reflection and action dimensions of the impact of feedback meta-category. Finally, The Physics Classroom received “good” ratings in the appropriateness, learning, timeliness, and clearness dimensions of the strategies of feedback meta-category as well as a “good” rating in the power dimension and “potential” ratings in the reflection and action dimensions of the impact of feedback meta-category.

Figure 4 presents the ratings of teacher practices with the seven apps across the two meta-categories and the eleven dimensions of feedback. Notably, all the apps were applicable to every feedback dimension in the context of teacher practices. To summarize the ratings for each app, QR Code Reader received “good” ratings in the dialogue, appropriateness, learning, and clearness dimensions, alongside “potential” ratings in the visibility and timeliness dimensions of the strategies of feedback meta-category; the power, reflection, and action dimensions of the impact of feedback meta-category were all rated as “good.” Schoology received “good” ratings across all dimensions of the strategies of feedback meta-category while also earning “good” ratings in the power, reflection, and action dimensions of the impact of feedback meta-category. Kahoot! received “good” ratings in the visibility, appropriateness, timeliness, and clearness dimensions of the strategies of feedback meta-category, whereas its dialogue and learning dimensions received “potential” ratings; it also received a “good” rating in the community dimension and “potential” ratings in the reflection and action dimensions of the impact of feedback meta-category. Nearpod received “good” ratings in all dimensions of the strategies of feedback meta-category, with the exception of the complexity dimension, which was rated as “potential”; it also received “good” ratings in all dimensions of the impact of feedback meta-category, with the exception of the power dimension, which was rated as “potential.” Socrative received “good” ratings in all dimensions of the strategies of feedback meta-category with the exception of complexity, which received a “poor” rating; it also received “good” ratings in all dimensions of the impact of feedback meta-category, with the exception of the community dimension, which was rated as “potential.” ZipGrade received “good” ratings in the visibility, appropriateness, and clearness dimensions as well as “potential” ratings in the dialogue, learning, and timeliness dimensions of the strategies of feedback meta-category; it also received “good” ratings in the reflection and action dimensions and “potential” ratings in the community and power dimensions of the impact of feedback meta-category. Finally, The Physics Classroom received “good” ratings in the dialogue, appropriateness, learning, timeliness, and clearness dimensions of the strategies of feedback meta-category; in the impact of feedback meta-category, its power, reflection, and action dimensions were rated as “good,” while its community dimension was rated as “potential.”

Finally, we compared the affordances of the apps (Fig. 3) to teacher practices (Fig. 4). Our analysis revealed that the changes in the strategies of feedback meta-category from app affordances to teacher practices took two forms: either the ratings stayed the same or they improved. For example, for Kahoot!, the complexity dimension was rated as “poor” for both the app affordances and teacher practices. Similarly, for Socrative, the visibility dimension was rated as “good” for both the app affordances and teacher practices. Some improvements were dramatic, such as the rating of the clearness dimension for Nearpod, which improved from “not applicable (0)” to “good (3).” Less pronounced improvements were also observed, such as for QR Code Reader, for which the visibility dimension rating increased from “poor (1)” to “potential (2).”

The findings regarding the impact of feedback meta-category indicated that the shifts in ratings from app affordances to teacher practices were largely consistent with those observed in the previous meta-category: teacher practices either received the same rating as the app affordances or the ratings improved. However, Schoology was an exception, since its community dimension received a “good” rating (3) for the app affordances but a “poor” rating (1) for teacher practices. One notable difference in this meta-category was that the magnitude of improvement varied: for ZipGrade, the rating of the power dimension improved only from “poor (1)” to “potential (2),” while for QR Code Reader, the rating of the action dimension improved from “poor (1)” to “good (3).”

Examples of the Feedback Dimensions

In the previous section, we quantitatively examined the differences between the affordances of apps and teacher practices for addressing the feedback dimensions. In this section, we provide examples along with detailed and vivid explanations (see Table 3).

Table 3 Examples of the feedback dimensions

Discussion and Conclusion

This study provides a nuanced analysis of technology-enhanced feedback, focusing on both the functionalities of feedback apps and teachers’ feedback methods. By drawing on the contemporary feedback literature and extending the foundational work of Hatzipanagos and Warburton (2009), this research aims to deepen our understanding of feedback attributes. Through empirical exploration, this research contributes valuable knowledge regarding the intersection of technology and pedagogy, offering implications for both educators and future research in the field of educational technology. The findings shed light on which feedback dimensions were supported by the apps and how teacher practices can enhance them. We discuss our findings in two sections: strategies for providing feedback and the guiding impact of feedback (Table 4).

One central emphasis of this study lies in the importance of investigating strategies for providing feedback. Our findings reveal that the visibility, timeliness, and learning dimensions were well supported by most of the apps in terms of both app affordances and teacher practices.

In particular, “visibility” received high ratings for most of the apps. Specifically, Schoology, Kahoot!, Socrative, Nearpod, and ZipGrade were effective in addressing the visibility dimension. This dimension highlights the need for teachers to closely monitor their students. Teachers can use these apps to monitor both individual students and the entire class, thereby contributing to the enhanced visibility of student progress.

Additionally, a majority of the apps facilitated the delivery of information to teachers and the provision of timely feedback to students. The “timeliness” dimension, which stresses the importance of providing feedback promptly if it is to be valuable, was also addressed effectively by most of the apps. Specifically, Kahoot!, Socrative, Nearpod, and The Physics Classroom were effective with respect to the timeliness dimension. These apps can help teachers provide immediate or frequent feedback to individual students or groups, thus satisfying the requirement for timely responses.

Regarding the “learning” dimension, which emphasizes fostering learning rather than simply assigning grades, most apps did not prioritize providing grades to students. Specifically, QR Code Reader, Schoology, Nearpod, and The Physics Classroom were effective with regard to the learning dimension. These findings align with previous research indicating that technology can indeed facilitate immediate feedback (Buckley et al., 2010; West et al., 2021; Zhang & Yu, 2021), which has been widely recognized as having a positive impact on student learning (Black & Wiliam, 1998; Hattie & Timperley, 2007; Zhang & Yu, 2021).

However, the study revealed some significant challenges pertaining to app affordances in the context of feedback strategies. To provide effective feedback, it is critical to use clear and consistent language, to provide context and detail, to challenge students to think critically, and to align learning objectives with assessment criteria with the goal of obtaining a broader perspective (Fu et al., 2022; Hatzipanagos & Warburton, 2009; Izci et al., 2020; Khajeloo et al., 2022; Nicol & Macfarlane‐Dick, 2006). The apps, however, had limitations in these areas. For example, they exhibited only limited performance in the complexity and dialogue dimensions, and they could not be evaluated in the clearness and appropriateness dimensions. Nevertheless, teacher practices improved the performance of the apps in these aspects. For instance, the teacher overcame the apps’ limited support for meaningful interactions and discussions between students and the teacher by providing opportunities for students to interact orally. However, it is worth noting that “complexity” remained a challenge in terms of both app affordances and teacher practices, with only Nearpod performing well with respect to teacher practice. This finding could be due to the fact that these seven apps may not have been tailored to students’ specific levels, as complexity emphasizes feedback pertaining to appropriate challenges to support students’ thinking processes (Izci et al., 2020).

In terms of the guiding impact of feedback, the findings showed that most apps had the potential to support the reflection and action dimensions. Specifically, Nearpod, Socrative, ZipGrade, and The Physics Classroom had potential in these dimensions. Namely, these apps have the potential to assist students in the process of developing self-awareness skills and to encourage them to reflect on their work and make modifications (Hatzipanagos & Warburton, 2009; McConnell, 2006; Shute, 2008) as well as to allow teachers to modify their teaching (Feldman & Capobianco, 2008). Technology provides opportunities to promote active and continuous formative assessment (Conejo et al., 2016) as students reflect on their work and modify their future work. Teacher practices improved the performance of all the apps in these dimensions. For example, modifying instruction cannot be supported solely by an app because it involves a decision-making process. However, apps can help teachers make decisions by providing them with information regarding students’ learning processes.

The main challenge in this context pertained to the community dimension, suggesting difficulties in fostering a sense of community or collaboration using these apps. Although the community dimension improved slightly for most apps, thereby highlighting the positive impact of teacher involvement, Schoology’s community dimension was an exception. While this app supported the community dimension well in terms of its affordances, this support decreased in teacher practices. This divergence might indicate that although the app exhibited strong community-building features on paper, these features were not effectively leveraged in the classroom.

In this study, we explored the potential of iPad app affordances with regard to providing effective feedback. Our data highlight the importance of recognizing variability in terms of app affordances and the pivotal role played by educators in shaping the feedback process. Teacher practices play a crucial role in enhancing the feedback experience, with most dimensions showing improvements in the context of teacher practices as compared to app affordances. Educators have the potential to significantly improve the feedback process when engaging with educational apps, thus highlighting their role in optimizing feedback for students. Gilbert et al. (2011) asserted that in the context of technology-enhanced feedback, technology is merely an enabler; furthermore, these authors claimed that success lies in pedagogy. Evans (2013) highlighted the pivotal role of teachers in designing and implementing feedback. Although more research is needed to confirm this claim, our study provides evidence that is consistent with previous research. This finding highlights the fact that teacher practice plays a crucial role in enhancing the affordances of iPad apps with regard to providing effective feedback.

Our findings are in alignment with those reported by Mimouni (2022), who also asserted that supporting multimedia tools with an instructional approach can increase their effect on students’ learning. Therefore, teachers should be supported in their attempts to introduce these apps into teaching and to emphasize the proper use of apps to provide effective feedback. Our study provided data regarding the potential of apps to address the effective feedback dimensions, alongside detailed examples. Teachers’ knowledge of the feedback process in their classroom practices, as well as their skills and experience in presenting this feedback to their students, are crucial. This proficiency is particularly essential in a technology-supported environment and serves as an impetus for supporting students’ learning outcomes.

These findings highlight the significance of considering not only the perceived potential of apps but also their tangible utility within the classroom. The effectiveness of apps may vary based on their practical implementation. Educators are encouraged to explore and experiment with different apps to identify those aligning best with their specific teaching objectives and student needs. Additionally, schools can establish guidelines or committees to evaluate and select apps that align with their educational objectives and student population. Collaborating with educational technology specialists or consulting reputable sources for app recommendations can also facilitate the selection process. Ultimately, a thorough understanding of students’ learning goals, instructional needs, and technological capabilities is essential for maximizing the benefits of app integration for feedback in the classroom. This study emphasizes the importance of continuous professional development and training. This approach ensures that educators can leverage the full potential of educational technology, thereby maximizing its impact on feedback and learning outcomes.

Another pillar that can support this process is for app developers to be informed about feedback and teachers’ feedback needs so that they can develop apps capable of meeting those needs. This analysis highlights areas where further improvements in app design are needed, especially concerning complexity and the cultivation of a sense of community among users. It underscores the need for app developers to focus on making complex concepts more accessible and understandable through their platforms. Furthermore, the prevalence of “not applicable” ratings for the “clearness” and “appropriateness” dimensions suggests that these aspects may not be adequately addressed by current apps. These dimensions are critical for creating a conducive learning environment and should be areas of improvement for app developers. Additionally, teachers can benefit when app providers furnish specific information about the strengths and specifics of their apps in relation to the dimensions of feedback. Such information can help teachers select the most suitable app for providing feedback to their students.

In summary, the study encourages a balanced approach, where apps are viewed as complementary tools that can lead to significant improvements and innovations in pedagogy. As we progress in the realm of educational technology, it is crucial to recognize that apps can enhance learning experiences. However, their effectiveness is most pronounced when integrated with thoughtful pedagogy. The study communicates a clear message: apps, when they are employed by skilled and dedicated educators, have the potential to transform education and empower students to reach their full potential. This synergy between technology and teaching is the future of education, and this future is filled with promise and possibilities.

Limitations

In this study, we investigated the potential of iPad app affordances for feedback as well as the teacher’s practices when utilizing these apps for feedback. It is critical to emphasize that Amy is an experienced teacher who has received a national award and works in a school that actively supports the integration of technology into the classroom. These contextual factors should be kept in mind when considering how other teachers might implement technology effectively in their classrooms.

Another limitation is that our study focused on the seven apps that the teacher used in her classroom. Different apps can be used for feedback purposes and may work equally well. Therefore, teachers must pay attention to the dimensions while selecting apps. We hope that the examples we provided in Table 3 can help visualize ways of using these dimensions. It is not necessary for apps to address all the dimensions; teachers may choose an app based on their specific needs. For instance, if a teacher wants to provide timely feedback, they might use apps such as Kahoot!. On the other hand, a teacher may choose to use more than one app to provide feedback and strengthen it. We did not explore this issue since it was not the focus of our research.

Directions for Future Research

While our study exhibits only limited generalizability due to its case study design, we believe that it represents an important step toward understanding teacher practices and app affordances in the context of feedback. Our investigation highlighted the practices of an experienced teacher with a supportive institutional backdrop. Future studies should examine the experiences of teachers with varying levels of comfort and experience with technology. A longitudinal study comparing the learning outcomes of students receiving feedback from novice and veteran teachers using the same technology could offer insights into the training and support needed at different stages of a teacher’s career. In addition, the success of technology integration in our study was partially attributed to substantial technical and institutional support. Future research should explore the spectrum of technology adoption and implementation success in environments where such support is minimal or absent. Such research could yield valuable information for identifying the challenges that arise in less supportive contexts and strategies for overcoming them.

Furthermore, future investigations should aim to conduct comparative studies across various education levels as well as across diverse cultural landscapes. Such a broader approach could reveal the common principles underlying technology-enhanced feedback as well as context-specific practices that are effective in unique educational ecosystems. Our study did not focus on students; thus, we did not collect information regarding them. Future studies could explore the effect of apps on students’ success and understanding. Moreover, exploring how students of different ages and learning preferences respond to various feedback mechanisms can guide the development of more adaptive and inclusive feedback tools.

Despite the limitations of our study, we believe that our findings contribute to the literature on technology-enhanced feedback and highlight the importance of teachers’ active involvement in the design and implementation of effective feedback practices using app affordances.