Interaction in asynchronous discussion boards: a campus-wide analysis to better understand regular and substantive interaction

Abstract

Discussion boards can provide a glimpse into the regular and substantive interaction required in online courses. Advances in technology and an increased interest in learning analytics now provide researchers with billions of data points about instructor and student interaction within a learning management system (LMS). This study used LMS data to explore the frequency of interaction between instructors and students in discussion boards in online courses at one institution. Overall, 415 courses spanning two semesters were analyzed. The average number of posts by an instructor was 32.9, and the average instructor interaction rate was 1.49 instructor posts per student; 23% of courses had no instructor posts. Student posts averaged 470 per course, with an average of 19.9 posts per student. Based on the discussion board activity, the most discussion interaction occurred during the first two weeks of the semester. Results also suggested that there is no relationship between student satisfaction and the number of total posts in a course. The paper concludes with implications for research and practice.

Introduction

The number of online students in higher education has increased steadily during the last decade (Seaman et al., 2018). Traditional and non-traditional students are choosing online courses, among other reasons, to fit within their busy schedules (Ortagus, 2017). Seaman et al. (2018) reported that 52.8% of online students also took at least one course on campus, which suggests that an increasing number of online students live close enough to attend face-to-face classes on campus. Additionally, even before COVID-19, institutions identified online education as being critical to their long-term institutional strategy (Allen & Seaman, 2016). In other words, institutions see online learning not only as a way to reach more geographic areas, but also as a way to meet the demands of residential students, to address space shortages of classrooms, and to address budget issues (Allen & Seaman, 2016). For these reasons, institutions are increasingly investing in online courses and online programs.

Despite the popularity and increased investments, online courses have been criticized for being inferior to face-to-face courses (Allen & Seaman, 2016; Singh & Hurley, 2017). For example, in 2015, 25% of academic leaders reported they believed online learning outcomes were “somewhat inferior” or “inferior” to face-to-face instruction (Allen & Seaman, 2016). Public opinion of online education appears to be similarly mixed. In 2013, a Gallup poll suggested that most people felt that traditional face-to-face education was better than online education. However, it is important to note that instructors who have experience teaching online generally believe that online learning is equivalent to face-to-face instruction (Jaschik & Lederman, 2018). This suggests that as more instructors are exposed to teaching online, perceptions may improve. Nevertheless, the perception that online education is inadequate or of lower quality has institutions seeking ways to validate online education and improve the quality of online courses.

Research suggests that one critical variable that influences students’ perception about online courses is the interaction that takes place between instructors and students (Battalio, 2007; Richardson & Swan, 2003; Stein et al., 2005). Interactions between an instructor and students have been linked to learner satisfaction and student achievement (Lee et al., 2011; Sher, 2009). This information has led to the development of several lists of “best practices” to guide online teaching (Chickering & Ehrmann, 1996; Pina & Bohn, 2014), each of which includes learner-instructor interaction as a key component. In fact, federal policy in the U.S. requires institutions that participate in student financial assistance programs, authorized by Title IV of the Higher Education Act (HEA), to demonstrate that online courses “support regular and substantive interaction between the students and the instructor, synchronously or asynchronously” (Legal Information Institute, n.d., 7Aii). The U.S. Department of Education’s position is that courses without regular and substantive interaction between instructors and students are a type of correspondence course and therefore not eligible for financial assistance (U.S. Department of Education, 2014). However, despite this position, there is currently no standard definition of regular and substantive interaction (RSI) (Protopsaltis & Baum, 2019). And while researchers have created best practices highlighting the need for learner-instructor interaction, they are not explicit about how or how often this interaction should occur. This further complicates any quality assurance or retention efforts universities try to implement at the institution, college, or department level. Given this problem, the purpose of this study was to explore and better understand how instructors and students interact in discussion boards in online courses and how this interaction is related to student satisfaction.
In the following paper, we present the results of our inquiry and implications for future research and practice.

Background

Moore (1989) identified three types of interaction: learner-content interaction, learner-instructor interaction, and learner-learner interaction. Learner-content interaction refers to the interaction of the learner with the subject matter. Moore (1989) described student-content interaction as “… the process of intellectually interacting with the content that results in changes in the learner’s understanding, the learner’s perspective, or the cognitive structures of the learner’s mind” (p. 2). Learner-instructor interaction references the dialogue between the instructor and student, but also includes how the instructor motivates the learners, presents or demonstrates information, provides feedback, and supports and encourages the learners (Moore, 1989). The separation of instructor and student in online courses creates gaps in communication between the student and instructor, but also creates psychological challenges for the student (Moore, 1997). To address the challenges of separation, Moore (1997) suggested an increase of dialogue between student and instructor could create a decreased sense of transactional distance. Finally, the third type of interaction is learner-learner interaction. According to Moore (1989), learner-learner interaction is important in the learning process and challenges traditional ideas of teaching and learning. Together, the three types of interaction provide a framework that can enable educators to be more thoughtful and purposeful about how they teach online (Falloon, 2011).

Although all three types of interaction are important in online learning, learner-instructor interaction has been found to be the most important type of interaction for predicting satisfaction (Hong, 2002; Jung et al., 2002; Kuo et al., 2014; Swan, 2004). Hong (2002) concluded that “interaction with the instructor was the most significant contributor to satisfaction and learning in web-based courses” (p. 278) and that active participation by the instructor could increase both student participation and learning. Similarly, Dennen et al. (2007) found that “posting to discussion board” was ranked by students as the second most important action by an instructor, below checking email (p. 74). Therefore, Dennen et al. (2007) recommended that instructors prioritize interactions and focus on maintaining frequency of contact, having a regular presence in class discussion spaces, and making expectations clear to learners.

Moore’s theory offers a lens which can be used to identify ways in which students and instructors interact. Interactions can occur synchronously or asynchronously, and instructors can facilitate these interactions with a variety of technologies, such as web conferencing, chat, discussion boards, and email (Lowenthal & Moore, 2020; Lowenthal et al., 2021; Sher, 2009). Discussion boards are widely used in online teaching, allowing interaction to occur without being limited by time or space (Hew et al., 2010). In discussion boards, participants can see discussion posts of others, organized by author, topic, and date/time, and respond to them on their own time (Brown & Green, 2009). Research suggests that when instructors participate in discussion boards students are more motivated (Xie et al., 2006), students are more satisfied (Sher, 2009), and instructor participation is highly valued by students (Nandi et al., 2012; Lowenthal & Dunlap, 2020). This increase in dialogue between student and instructor can not only reduce transactional distance but serve to meet regular and substantive interaction requirements for online courses.

Method

We used a quantitative exploratory research design to answer the following research questions:

  1. How do instructors interact in asynchronous discussion boards in online courses?

  2. How do students interact in asynchronous discussion boards in online courses?

  3. How do students and instructors interact each week in asynchronous discussion boards in online courses?

  4. Is there a relationship between interaction in asynchronous discussion boards and student satisfaction?

Data collection

This study utilized archival data from two sources: Canvas Data and end-of-course evaluation data. Data from Canvas was exported from the Amazon cloud and imported into Exasol, a high-performance, in-memory database. End-of-course evaluation data was downloaded from a publicly accessible database. A query was run in Exasol to create a comprehensive list of online courses offered during the period of the study. We first identified all courses in a single academic year (N = 6152). Next, we filtered the list to include only online courses (N = 675). Then we removed courses with multiple sections, courses with multiple instructors or teaching assistants, and any courses with fewer than five students. We ended up with 415 courses in the initial dataset representing six schools or colleges (see Table 1).

Table 1 Courses in the Initial Dataset by School or College

We then pulled specific data for each course. Course information from Canvas Data was combined with end-of-course evaluation data (see Tables 2 and 3) to create the data set for this study. The end-of-course evaluation had eight questions, answered on a scale of 1 (low) to 6 (high). Since no single question asked about student satisfaction, the scores on the eight questions were combined and averaged to create a student satisfaction score; previous research suggests that while end-of-course student evaluations might not be good measures of teaching quality, they are an adequate indicator of student satisfaction (see Lowenthal & Davidson-Shivers, 2019).
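As a sketch, the construction of this satisfaction score amounts to a mean of means; the response values below are illustrative placeholders, not actual evaluation data:

```python
from statistics import mean

# Illustrative evaluation responses for one course: each inner list is one
# student's answers to the eight questions, scored 1 (low) to 6 (high)
responses = [
    [5, 5, 6, 4, 5, 6, 5, 5],
    [6, 5, 6, 5, 6, 5, 6, 5],
]

# Combine and average the eight items per response, then average across
# respondents to obtain a course-level student satisfaction score
per_response = [mean(r) for r in responses]  # [5.125, 5.5]
course_satisfaction = mean(per_response)     # 5.3125
```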

Table 2 Canvas Variable Description and Type
Table 3 End-of-Course Evaluation Data Variables

Data analysis

As is common in the field of data science, preliminary data analysis was performed using Tableau to explore the variables in the dataset through frequencies, descriptive statistics, and cross-tabulations. Once the initial analysis was complete, the dataset was exported to an Excel file and imported into IBM SPSS Statistics for the statistical analysis. Descriptive statistics were used to answer the first three research questions about how instructors and students interact in discussion boards. Then correlation testing was used to determine if a relationship existed between discussion board interaction measures and student satisfaction. After exploring the variables, it was determined that a Spearman’s Rho test would be used to determine if a relationship existed. A Spearman’s Rho test was selected because the assumptions regarding normality were not met. Table 4 illustrates the data source and type of analysis used to answer each research question.
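The preliminary descriptive portion of this analysis can be sketched in a few lines of Python (the per-course totals below are illustrative placeholders, not values from the dataset):

```python
from statistics import mean, stdev

# Illustrative per-course totals standing in for the Canvas variables
instructor_posts = [0, 12, 33, 80]
student_posts = [40, 250, 470, 900]

# Descriptive statistics of the kind produced in the preliminary analysis
# before moving on to correlation testing
for label, series in [("instructor posts", instructor_posts),
                      ("student posts", student_posts)]:
    print(f"{label}: M = {mean(series):.2f}, SD = {stdev(series):.1f}, "
          f"min = {min(series)}, max = {max(series)}")
```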

Table 4 Alignment of Research Questions to Data Analysis

Results

The purpose of this study was to explore the frequency of interaction between instructors and students in discussion boards in online courses and if this interaction is related to student satisfaction to better understand and set a baseline for this institution and others for understanding regular and substantive interaction. We report the results of our inquiry in the following section.

Demographics of the courses

A total of 415 online courses, taught over a single academic year across six schools and colleges, were identified for the study (see Table 5). Most of the courses were taught in the College of Liberal Arts and Sciences (44.6%), which not only serves a diverse student population but also offers a diverse number of online programs at the university. The other schools and colleges made up the remaining 55.4% of the courses in the study.

Table 5 Distribution of Courses by Level and School/College

Tenure-track vs non-tenure-track instructors

In the study, 82% of the instructors were non-tenure-track; thus, less than 20% of the instructors were tenure-track. However, the Business School (27%), the College of Arts and Media (23%), and the School of Education and Human Development (26%) had slightly higher percentages of tenure-track faculty teaching online courses compared to the other schools and colleges.

Course levels

The distribution of course levels is shown in Table 5. Courses were categorized as lower division, upper division, and graduate. For this study, 27.71% (N = 115) of the courses were lower-level undergraduate courses, 38.07% (N = 158) were upper-level undergraduate courses, and 34.22% (N = 142) were graduate-level courses.

Instructors, TAs, and students

Courses in the study had only one instructor and no teaching assistants (TAs); courses with multiple instructors were excluded, and courses with TAs were removed since a TA can have a combination of roles in a course, from designer to facilitator to teacher. The number of students in a course ranged from five to 79 (N = 415, M = 25.43, SD = 11.3).

Number of discussion boards

To better understand how instructors and students interact in discussion boards, it was important to analyze the number of discussions in a course. The total number of discussion boards in a course ranged from 0 to 140. There were 23 courses with no discussions. These courses were removed from further analysis since these courses did not use discussion boards. Therefore, 392 courses were included in the remaining analysis.

Total posts

Total posts refer to the total number of posts per course to any discussion board in the course. A post is a reply to the discussion topic or another post. A post can be made by the instructor or a student. This number is used to describe the amount of interaction in a course because a post in a discussion board is like a face-to-face discussion where students and instructors exchange ideas through taking turns speaking. The minimum number of posts in a course was two and the maximum number of posts was 2468 with an average of 503.21 (SD = 447.2) posts per course.

Research question 1: Instructor interaction

Research question one was, “How do instructors interact in asynchronous discussions in online courses?” Results answering this question provide baseline data regarding the frequency of discussion board posts as well as the rate of interaction for instructors in online courses. It is not possible to determine from the data set whether the instructor or students created the initial discussion board. However, regardless of who created the discussion, interaction occurs through a series of posts, or replies, between the instructor and students. The number of posts by an instructor ranged from 0 to 347, with the average instructor posting 32.90 times throughout a course.

An instructor post is a response either to the initial discussion board or to a student in the course. We found that in 63.7% of courses (250 out of 392), the instructor posted fewer than 32 times (the mean in this sample) during the semester. Of those 250 courses, 28.8% had no instructor posts at all.

It is important to note that the total number of posts an instructor makes in an online course provides only a glimpse into their interactions with students. While it is helpful to know whether an instructor is posting below the average number of posts for the institution (e.g., to identify absentee instructors), the raw number does not take into account situational factors, such as class size. For instance, we contend that 32 posts by an instructor are more impactful in a course with 25 students than in a course with 75 students. Thus, researchers and practitioners need a way to better understand how active instructors are in a course. One method was created by Bliss and Lawrence (2009a, b), in which instructor participation is calculated as the total number of instructor posts divided by the number of students in the course. By this measure, a course with five students and an instructor who posted 80 times during the semester would have an average interaction rate of 16 posts per student, while a course with 25 students and an instructor who posted 80 times would have an average interaction rate of 3.2 posts per student.
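Bliss and Lawrence’s (2009a, b) measure reduces to a simple ratio; a minimal sketch reproducing the worked examples above (the function name is ours):

```python
def instructor_interaction_rate(instructor_posts: int, n_students: int) -> float:
    """Bliss and Lawrence's participation measure: total instructor posts
    divided by the number of students enrolled in the course."""
    return instructor_posts / n_students

# The worked examples from the text: the same 80 instructor posts spread
# over different class sizes yield very different interaction rates
small_course = instructor_interaction_rate(80, 5)   # 16.0 posts per student
large_course = instructor_interaction_rate(80, 25)  # 3.2 posts per student
```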

Instructor interaction rate was calculated for each course in the study. Instructor interaction ranged from 0 to 18.9 with a mean of 1.49 and a standard deviation of 2.33. A closer look at the distribution (see Fig. 1) shows that although most courses had an average instructor interaction rate of less than one post per student, there was a large spread, with some instructors having an interaction rate of over ten posts per student. This spread could indicate varied approaches by instructors. For instance, some instructors may post less frequently in discussions but use other strategies or methods of communication, like summarizing discussions after each week, sending individual emails, or using synchronous forms of communication. The wide variety of tools available within and outside the learning management system means that interaction is not limited to discussion boards. With this in mind, though, based on the data from Canvas, instructors in this study posted an average of 1.49 times a semester for every student in their class.

Fig. 1.
figure 1

Distribution of Instructor Interaction Rate Scores

Research question 2: Student interaction

Research question two was, “How do students interact in asynchronous discussions in online courses?” Results answering this question provide baseline data about student use of discussion boards in online courses. In an online course, discussion boards serve as a primary opportunity for person-to-person interaction (Lieberman, 2019). When a student posts to a discussion board or replies to another person’s post, it is meant to simulate a conversation in a face-to-face classroom. Descriptive statistics were used to analyze the number of posts by students. The total number of student posts per course ranged from 0 to 2438 (N = 392, M = 470.31, SD = 432.8).

When assessing the shape of the distribution (see Fig. 2), almost half of the courses in the study (N = 194) had over 350 student posts throughout the semester. However, 48 courses (12.2%) had comparatively few student posts.

Fig. 2.
figure 2

Number of Courses by Student Post Frequency

Since each course has a variable number of students, it is difficult to determine from total posts alone whether a course has a lot of interaction. Therefore, it was important to look at the average number of posts per student, in addition to total numbers. Our analysis revealed that the average number of posts per student was 19.9 per course (SD = 18.1). This means that, on average, a student posted in the discussion boards approximately 20 times per semester. Given that the semester is 15 weeks, plus finals week, this averages out to each student posting a little more than once a week.

We also found that 25% of courses (N = 98) had an average of fewer than five posts per student. Based on these results, students who post more than 20 times per semester have an above-average number of posts. This information could be used by instructors or administrators looking to identify students who may need additional support or encouragement in order to fulfill the requirement of regular interaction. In this case, an instructor may identify students who have posted only a few times during the first two weeks of the semester. Then, the instructor could reach out to those students regarding the expectation of regular interaction.
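As an illustration of how such a baseline might be operationalized, the sketch below flags students whose early post counts fall below a chosen threshold; the names, counts, and threshold are hypothetical, not from the study:

```python
# Hypothetical post counts per student for the first two weeks of a course
early_posts = {"Student A": 0, "Student B": 5, "Student C": 1, "Student D": 3}

# The threshold is illustrative; the study reports averages, not a required
# minimum. Students below it could be contacted about regular interaction.
THRESHOLD = 2
needs_outreach = sorted(name for name, n in early_posts.items() if n < THRESHOLD)
# → ["Student A", "Student C"]
```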

Research question 3: Weekly interaction

Research question three was, “How do students and instructors interact each week in asynchronous discussions in online courses?” Results answering this question provide baseline data for discussion board activity in online courses. This data could be used to identify courses early in the semester that have low levels of discussion board interaction. An instructor or administrator may wish to identify students or instructors with low levels of interaction in order to promote regular learner-instructor interaction. To answer this research question, weekly totals of discussion posts were calculated. For each week, the number of student posts and instructor posts was reported for each of the courses. The courses in the data set were offered over fall or spring semester; the courses were assumed to have followed the university’s traditional 15-week schedule, plus finals week. All courses are expected to take part in finals week, either by giving an exam or by fulfilling two contact hours of instruction. Table 6 shows the weekly totals of posts for all courses as well as the totals for instructors and for students. Additionally, the average number of posts per course was calculated along with the percentage of overall posts for each week.

Table 6 Weekly Total Discussion Board Posts

Based on the data set, the most interaction in the discussion boards happened during the first two weeks of a semester. This was true for both students and instructors. After that, there was a steady decrease in the number of discussion board posts. The least interaction in the discussion boards happened during finals week and spring or winter break (depending on the semester). Further, it is worth pointing out that the last few weeks of the semester had about a third of the interaction of the first week (see Table 6).

As discussed previously, class size can influence interaction. Therefore, using the average class size of the courses in the study (M = 25.43), average instructor interaction rate and average posts per student were calculated each week. These numbers provide a baseline measure which could be used to identify courses with low interaction rates. Since this data could be particularly helpful during the first few weeks of the semester to encourage participation from students and ensure that instructors are practicing regular interaction, Table 7 shows the average instructor interaction rate and average posts per student for the first four weeks of the semester. After that, average interaction drops off.

Table 7 Average Interactions for Instructors and Students by Week

Based on the average instructor interaction rate and average posts per student, during the first week of the semester an instructor could aim to post an average of once per every three students in their class, and each student should post at least twice. During week two, an instructor should post an average of once per every seven students, and each student should post at least once. These averages could assist instructors in setting target numbers to help ensure they are maintaining regular interaction with their students.

The two semesters used in the study showed similar results for interaction. Term 1 had 207 courses and term 2 had 185 courses. Fig. 3 shows the total posts by term: posts for both students and instructors decreased from the first week of the semester to the last. This decrease in posts may indicate a reduction in interaction throughout the semester. However, additional research would be needed to determine whether interaction was occurring in different ways at different points in the semester.

Fig. 3.
figure 3

Total Discussion Posts Based on Enrollment Type

Research question 4: Correlation testing

Research question four was, “Is there a relationship between asynchronous discussion interaction measures and student satisfaction?” This research question focuses on whether there is a correlation between total posts (i.e., interaction) in a course and student satisfaction. It is important to understand whether the total posts in an online course are associated with student satisfaction; if a correlation were found, course design and delivery methods could be modified to increase student satisfaction. For the 392 courses with discussions, the total number of posts ranged from two to 2468, with a mean of 503.21 (SD = 447.2). For the same courses, student satisfaction ranged from 2.625 to 6.0 on the evaluation’s one-to-six scale, with a mean of 4.96 (SD = 0.5).

To determine the appropriate statistical technique, a test of normality was used to assess the distribution of the scores (Pallant, 2013). The Kolmogorov-Smirnov and Shapiro-Wilk tests both returned significance values of .000 for total posts and for student satisfaction, suggesting a violation of the assumption of normality. An inspection of the normal probability plots confirmed a non-normal distribution for both variables. Several attempts were made to normalize the data, including removing outliers and transforming the variables. Since student satisfaction was already a derived variable, created by averaging the scores of eight questions from the end-of-course evaluation, transforming it further seemed excessive; in addition, there is “considerable controversy” concerning transforming variables (Pallant, 2013, p. 96). Removing outliers produced correlation results similar to those obtained without removing them. Therefore, a non-parametric technique was selected; non-parametric tests are useful in cases where the assumptions required for parametric tests are not met (Pallant, 2013, p. 221). A Spearman’s rank-order correlation was run on the 392 courses to assess the relationship between student satisfaction score and total posts in a course. Preliminary analysis, based on visual inspection of a scatterplot, showed the relationship to be non-monotonic. There was no statistically significant correlation between student satisfaction scores and total posts, rs = −.060, p = .240 (see Table 8).
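Spearman’s rho is Pearson’s correlation computed on the ranks of the two variables. The following pure-Python sketch illustrates the statistic the study obtained from SPSS (the data shown are illustrative, not the study’s):

```python
def average_ranks(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values (ties are adjacent after sorting)
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank-order correlation: Pearson's r on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Illustrative data: a perfectly monotonic relationship yields rho = 1.0
total_posts = [2, 150, 503, 900, 2468]
satisfaction = [3.0, 3.5, 4.2, 5.0, 6.0]
rho = spearman_rho(total_posts, satisfaction)  # 1.0
```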

Table 8 Results from the Spearman’s Rho Correlation

Discussion

Findings from this study are intended to provide insight into how instructors and students interact in discussion boards. The exploratory nature of this research was meant to provide baseline data that can help instructors, department chairs, and administrators to better understand how instructors and students interact in online courses.

Research question 1

Research question 1 explored how instructors interact in discussion boards in online courses. Research suggests that instructors should play an active role in online discussions and that regular interaction between students and instructors encourages discussion and improves learner satisfaction (Darabi et al., 2013; Dennen, 2005; Moller, 1998; Nandi et al., 2012). Results from this study showed that instructor interaction varies greatly from course to course. In some courses, instructors did not post at all in discussions, while in others, instructors posted over 200 times. On average, an instructor posted 33 times during the semester.

In addition, instructor interaction rate was calculated for each course. The calculation was determined by taking the total number of instructor posts and dividing it by the number of students in the course. Instructor interaction ranged from 0 to 18.9 with a mean of 1.49 (SD = 2.3). Since there was a wide range of instructor interaction, it is possible that instructors used different approaches to discussion boards or perhaps instructors used other tools, beyond the discussion boards, for facilitating interaction; at the same time, it is also possible that instructors were simply absent from the course.

Although there is no magic number for how many posts an instructor should make in a course, research indicates, and regulation requires, that regular interaction from the instructor has an impact on student perceived learning, student satisfaction, and student engagement (U.S. Department of Education, 2014; Hrastinski, 2008; Jung et al., 2002; Swan, 2004). Online discussions create opportunities for collaboration, knowledge sharing, and social interaction (Fleming, 2008; Rovai, 2002; Thompson, 2006). Specifically, when it comes to instructor interaction, Ringler et al. (2015) found that “there is a positive relationship between the number of instructor posts and the number of posts per student” (p. 23); that is, the more often instructors participated, the more discussion occurred. The thought is that more discussion means greater learning and a stronger sense of community. However, depending on one’s teaching style, an instructor may post more or less often (Quitadamo & Brown, 2001). An instructor who posted infrequently may have been writing (or recording) longer posts of higher quality or choosing to summarize discussions at the end of the week (Rovai, 2007). Or perhaps an instructor found that when posting too frequently, students shut down or merely waited for the instructor to respond instead of responding to a fellow student’s post, and therefore believed that posting less frequently actually stimulated student-student discussion (Mazzolini & Maddison, 2003). This variety of facilitation strategies makes it difficult to judge the quality of a course on the number of instructor posts alone.

In addition to instructor posts, the number of discussion boards also varied greatly from course to course. Some courses had no discussion boards, while others had over 100. The average number of discussion boards in a course was 14, roughly one a week. The design of the course and the beliefs of the instructor likely influenced how many discussion boards a course contained. According to Covelli (2017), there are several techniques that the course design or the instructor can apply to encourage effective discussions. Research suggests that facilitating discussions may not come naturally to instructors; therefore, instructors should engage in professional development on facilitating effective discussions (Covelli, 2017). For example, learning how to incorporate audio and video into discussions can add texture and personality to them (Covelli, 2017). Additionally, the course design may offer opportunities for small group or whole class discussions, which can assist in building community within the course (Covelli, 2017).

The institution at which this study was conducted has a faculty-driven development and delivery model, meaning that courses are designed and taught by instructors with little or no assistance from an instructional designer. This was common practice during the early years of online learning to increase production of online courses (Oblinger & Hawkins, 2006). Faculty were provided release time or a stipend in exchange for developing and delivering online courses (Oblinger & Hawkins, 2006). However, this decentralized approach to course design also means that some instructors may have received no training or limited support, which can lead to different approaches to course design and specifically to the design and facilitation of online discussions.

Research question 2

Research suggests there are many factors that influence student contribution in online discussions (Hew et al., 2010; Xie et al., 2006). Results from this study found that the frequency of student posts varied from zero to over two thousand in a course during the semester, with a mean of 470 posts per course. Due to differences in class size, the average number of posts per student was calculated by dividing the total number of student posts by the number of students in the course. The average was roughly 19 posts per student per semester, or just barely more than one per week. One challenge with this measure is that it assumes that every student participated in the discussions (Bliss & Lawrence, 2009a).
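The per-course metric described above is a simple ratio. A minimal sketch of that calculation, using invented course records (the field names and numbers are illustrative, not the study's data):

```python
# Hypothetical course records; field names and values are illustrative only.
courses = [
    {"student_posts": 470, "instructor_posts": 33, "enrolled": 24},
    {"student_posts": 0,   "instructor_posts": 0,  "enrolled": 18},
]

def posts_per_student(course):
    """Total student posts divided by enrollment (0.0 if no students).

    Note the caveat from the text: this average assumes every enrolled
    student participated, so non-participants inflate the denominator.
    """
    if course["enrolled"] == 0:
        return 0.0
    return course["student_posts"] / course["enrolled"]

for c in courses:
    print(round(posts_per_student(c), 1))
```

The same division yields the instructor-interaction ratio (instructor posts per student) reported in the abstract.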

Like instructor postings, the total number of student posts tells only part of the story. Other factors, such as instructor expectations, the design of the discussion, and extrinsic motivation, can influence the number of posts or level of engagement of students in online discussions (Rovai, 2007). These factors are reflected in popular online learning standards. For example, Chickering and Gamson’s (1987) seven principles of good teaching include communicating high expectations. Specifically related to online discussions, Rovai (2007) suggests clearly communicating to students the requirements for active participation in discussions; a discussion rubric can assist in setting those expectations (Rovai, 2007). Popular online learning standards include the design of learning activities, and specifically online discussions, as an important component of effective online courses. Maddix (2012) argued that discussion questions should be open-ended and encourage critical and creative thinking. Related to design, the size of the discussion board can also affect participation. For example, Reonieri (2006) found that 10–15 students was the ideal size for an effective online discussion. In addition, Bliss and Lawrence (2009b) found that students participated more frequently in small group discussions than in whole class discussions. Finally, extrinsic motivation can affect discussion participation. All the popular online learning standards include assessment, and best practices for discussion boards recommend evaluating and grading discussion board interactions in online classes (Maddix, 2012; Rovai, 2007). The use of rubrics can assist not only in the grading process, but also in providing expectations for participation (Ringler et al., 2015).

Research question 3

Research question 3 examined weekly interaction between students and instructors in discussion boards. Understanding how instructors and students interact across an entire semester is one thing; understanding how those interactions unfold each week is another. Total posts, average posts per course, and the percentage of overall posts were calculated for each week. Results showed that the most interaction occurred during the first two weeks of the semester. After the first week, interaction dropped nearly every week for both instructors and students: interaction fell 25% during week two and another 20% during week three. After the first three weeks, interaction dropped about 4% each week on average. The lowest number of interactions occurred during semester break and finals week.

Best practices for online learning often recommend an “introductory discussion” where students and the instructor can introduce themselves and become acquainted (Gunawardena & Zittle, 1997; Rovai, 2007). These introductory discussions are meant to spark a sense of community (Gunawardena & Zittle, 1997). However, as in other studies (Pham et al., 2014), interactions in this data set dropped over the semester. Pham et al. (2014) found that after a high level of engagement at the beginning of the course, momentum faded as the semester continued. Research, though, has highlighted the importance of online instructors creating motivation throughout the semester to increase student engagement in discussions (Rovai, 2007). This means that without extrinsic motivation, even the most motivated student may have a hard time staying engaged in an online course. One strategy identified by researchers to increase extrinsic motivation is to assign a grade for discussion participation ranging from 10 to 35% of the overall course grade (Rovai, 2007). Rovai (2007) points out that students should be clear on what is being graded and how. Some instructors use discussion board rubrics to help students self-assess their participation and to provide clear expectations, while others simply require a minimum number of posts each week. Other strategies for maintaining motivation and increasing interaction throughout the semester include making sure the discussions are directly tied to the course objectives, using small group discussions to encourage participation from students who may be reluctant to post in larger discussions, and providing tutorials or detailed instructions for those who may not be familiar with discussion board technology (Suler, 2004). Finally, many researchers believe that the instructor should actively participate in discussions, but without taking over or responding too quickly (Bliss & Lawrence, 2009a).

Research question 4

Research suggests that learner-instructor interaction plays an important role in student satisfaction; therefore, research question 4 examined the possible relationship between asynchronous discussion interaction measures and student satisfaction scores based on end-of-course evaluations. Although a large body of research suggests that classroom participation and engagement are positively associated with student satisfaction (Hrastinski, 2008; Jung et al., 2002; Sher, 2009; Swan, 2004), this study found no such association.
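The kind of association test described above can be sketched with a Pearson correlation between per-course post totals and mean satisfaction scores. The paired values below are invented for illustration; this is not the study's analysis code or data:

```python
import math

# Hypothetical per-course totals and mean end-of-course satisfaction scores.
total_posts = [120, 480, 900, 50, 300]
satisfaction = [4.1, 3.8, 4.0, 4.2, 3.9]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# An |r| near zero (with a non-significant p-value from a follow-up test)
# would be consistent with the study's finding of no association.
r = pearson_r(total_posts, satisfaction)
```

In practice a significance test would accompany the coefficient; the sketch shows only the correlation itself.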

However, there are several possible explanations for this. First, there could be issues with using the average of end-of-course evaluations as a measure of student satisfaction.

Another possible explanation is that the discussions were not the only place instructors and students interacted. Huang and Hsiao (2012) identified seven different communication tools that facilitated online interaction between learners and instructors: email, discussion boards, announcements, blogs, streaming audio/video, chat, and web-conferencing (Huang & Hsiao, 2012). It could be that a variety of communication tools were being used in online courses, and in order to fully understand the effects of interaction on student satisfaction, additional research needs to be done.

Conclusion

The U.S. Department of Education has identified regular and substantive interaction between the instructor and students as a standard and required practice for online education to be considered for federal funding (U.S. Department of Education, 2014). Best practices for online education also acknowledge the importance of interaction (Lowenthal & Davison-Shivers, 2019; Richardson & Swan, 2003; Swan, 2004). And although there are an increasing number of ways to facilitate this interaction, asynchronous discussion boards are still the most popular (Lieberman, 2019). Therefore, this study sought to explore and better understand the frequency of interaction between instructors and students in discussion boards in online courses.

The first major finding was that numbers alone do not tell the entire story. Although LMS data has become more readily available and accessible for analysis, the differences in course design and course facilitation made it difficult to generalize across all courses. Courses in this study had wildly different practices when it came to discussions. For example, some courses had no discussions while others had over 100. Due to the decentralized development model for online courses at this institution, the differences in the number of discussions are unexplained. However, perhaps courses with many discussions break students into discussion groups or even pairs; that is, for every discussion, there are duplicates of that discussion so that groups or pairs respond to one another rather than to the entire class. Although there are other ways of accomplishing this in an LMS, depending on training, the instructor may be unaware of them. Additionally, there may be pedagogical reasons for making group discussions available to other groups in the course. Without a deeper analysis of course design and course facilitation, the numbers from the LMS data tell only part of the story. Therefore, if department chairs or administrators want to use discussion board activity to inform evaluation or any type of student intervention, it is suggested that they not only identify appropriate levels of interaction for the courses offered in their programs but also consider adding other data points.

Another major finding was that the total number of posts in a course was not correlated with student satisfaction. However, additional research would need to be conducted to confirm these results in other contexts as well as with other instruments for measuring student satisfaction. Given this, it would be logical to continue to follow best practices, which include making efforts to participate regularly in discussions, setting expectations, and assigning grades for participation in discussions.

In addition, this research makes use of LMS data, which historically has been difficult to obtain. With a growing interest in using student data to improve teaching and learning, this research serves as an example of how advances in technology and reduced data storage costs have allowed institutions to take advantage of the tremendous amount of data available in the LMS (Viberg et al., 2018). However, this research also raises a number of concerns about whether Canvas data should be used. Viberg et al. (2018) suggest that concerns about data privacy, security, and informed consent should be considered as institutions scale research efforts using learning data. Although the data from this study were anonymized and the study was exploratory in nature, it raises questions about how institutions should ensure ethical practices as future research is conducted.

Specifically, the results from this study could be used to inform department chairs and administrators of the general practices of discussion board use. Using this information, department chairs or administrators could target courses with a low number of discussions, or instructors and students with a fewer-than-average number of discussion posts, during the first few weeks of class. By catching low levels of interaction early, support and guidance can be provided to instructors or students to increase interaction throughout the semester. These results could also be used by instructional designers to guide recommendations for future training and support. This research could also be helpful to share with instructors as a baseline for the minimum interaction that should be occurring in their online classes.
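An early-warning check of the kind suggested above could be sketched as a simple threshold filter over early-semester activity. The field names and threshold values below are assumptions for illustration; appropriate levels would need to be set locally, per program, as the text recommends:

```python
# Hypothetical early-semester activity records; ids, fields, and
# thresholds are invented for illustration only.
courses = [
    {"id": "BIO101", "instructor_posts_wk1_2": 0, "student_posts_wk1_2": 12},
    {"id": "ENG210", "instructor_posts_wk1_2": 9, "student_posts_wk1_2": 85},
]

def flag_low_interaction(courses, min_instructor=1, min_student=20):
    """Return ids of courses whose weeks 1-2 activity falls below
    either threshold, as candidates for early support or outreach."""
    return [
        c["id"]
        for c in courses
        if c["instructor_posts_wk1_2"] < min_instructor
        or c["student_posts_wk1_2"] < min_student
    ]

print(flag_low_interaction(courses))  # prints ['BIO101']
```

Consistent with the study's first major finding, a flag like this would identify candidates for follow-up, not a judgment of course quality on its own.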

The results of this study are limited by its size and scope. The courses, instructors, and students in this study were from a single university with a common LMS, Canvas, and the actual teaching methods used varied. Additionally, this research took a campus-wide view of discussion interactions; it did not consider situational variables (e.g., class size, subject matter, faculty experience). We also did not have access to other datasets, such as course grades or retention rates, which would be worthwhile to investigate beyond student satisfaction. Another limitation is that this study focused only on the quantity, and not the quality, of discussion posts, which in turn addresses only the “regular” and not necessarily the “substantive” component of interaction. Analyzing the quality of posts would be needed to truly assess the substance of discussion board interactions; however, such metrics would require significant resources and are not readily available from current learning analytics data. Finally, due to the exploratory nature of this study, additional research would need to be completed to more fully understand how students and instructors are interacting in online courses.

Future research could expand to include the quality of posts, the length of posts, and the extent of threading. Bliss and Lawrence (2009a) recommend using multi-factor metrics to provide a more complete view of how interactions occur in online discussion boards. Additionally, with an array of best practices for discussion boards, it would be valuable to explore whether the use of best practices, such as providing clear guidelines for discussions or grading discussions, has any effect on the quantity or quality of posts. Although not examined in this research, the impact of faculty training on the quantity of interactions may provide guidance or direction for faculty development organizations. With access to Canvas data, there are many possibilities to explore. Discussion boards are just one tool for interacting in online courses, and this single metric is not adequate for measuring or ensuring that online courses meet the “regular and substantive” interaction requirement set by the U.S. Department of Education. Future research could look more broadly at the toolset used for communication in online courses to establish metrics that could be used to measure interaction.

The findings of this study build upon current research and theory related to the importance of interaction between students and instructors in online courses. Discussion board activity of students and instructors varied greatly depending on the course, and there was no relationship between the number of discussion board interactions and student satisfaction as tested in this study. The results, though, still contribute to research and practice for online education by extending the research related to asynchronous discussion boards. In addition, this research serves as a proof of concept for additional research using data available from the LMS to continue the work of improving online education.

Availability of data and material

Not able to share.

Code availability

Not applicable.


Corresponding author

Correspondence to Crystal Gasell.

Ethics declarations

Ethics approval

Approved Protocol Number: 101-SB19–175.

Consent to participate

Not applicable.

Consent for publication

Not applicable.




Cite this article

Gasell, C., Lowenthal, P.R., Uribe-Flórez, L.J. et al. Interaction in asynchronous discussion boards: a campus-wide analysis to better understand regular and substantive interaction. Educ Inf Technol 27, 3421–3445 (2022). https://doi.org/10.1007/s10639-021-10745-3


Keywords

  • Distance education and online learning
  • Evaluation methodologies
  • Teaching/learning strategies
  • Post-secondary education