Online learning is more prevalent in education than ever, particularly given the COVID-19 pandemic, which has shifted most educational services to online platforms. In 2015, close to six million students—29.7% of all postsecondary students—were taking at least one online course (U.S. Department of Education, National Center for Education Statistics 2018). In 2017, the Online Learning Consortium reported that the number of online learning students in 2015 had increased by almost 4% over the previous two years. Although online learning is becoming more prevalent, there has been little to no research to determine what makes it most effective. The studies that have tried either did not compare modalities (i.e., they tested only one format) (Sella et al. 2014; Walker and Rehfeldt 2012) or focused on another aspect of learning (e.g., whether grading anonymously affects performance) (Liu et al. 2018). Determining the components of online learning that lead to better student outcomes will add to the current literature and improve online learning as a whole. The primary purpose of this experiment was to determine which form of discussion (synchronous vs. asynchronous) is most effective in an asynchronous online master's-level applied behavior analysis course.
Although there are no accurate statistics capturing current levels of online learning, in 2020 the COVID-19 pandemic shifted the vast majority of educational services to online platforms. It is therefore more important than ever to evaluate the efficacy of various online learning methodologies.
In 2015, almost 30% of all postsecondary students were taking at least one distance education course, which equates to close to six million students (U.S. Department of Education, National Center for Education Statistics 2018). For many students, online education is significantly more accessible than traditional classroom-based education. Because online education is not constrained by location for the teacher or the students, students (and teachers) can participate from all over the world (Bartley and Golek 2004; Ragusa and Crampton 2017). This allows for an increased student pool, greater geographic reach, and higher levels of convenience and feasibility. In fact, 78% of surveyed students chose online learning for its time flexibility rather than for its quality of education (Ragusa and Crampton 2017). Essentially, online learning makes education available to working caretakers, those with full-time jobs, and anyone with other commitments that prevent them from attending a traditional college class (Ragusa and Crampton 2017). Quality of educational experience and accessibility are variables that must be considered and evaluated across formats.
The Department of Education published a meta-analysis of evidence-based learning in 2010. The report covered 50 studies from 2004 to 2010; of those, 11 used K-12 participants and 39 used college or professional participants. Several findings were consistent across these studies: students in online programs performed slightly better than comparable students receiving traditional face-to-face instruction; instruction that included both online and traditional face-to-face components resulted in improved student outcomes; and effect sizes were larger when online instruction was instructor-directed or collaborative rather than student-directed. These results provide some evidence of a possible difference between synchronous and asynchronous formats of online education. Synchronous formats tend to require more instructor-directed instruction because their discussion formats require an instructor to be present, whereas asynchronous formats tend to include more independent work and fewer instructor-directed assignments. The meta-analysis also acknowledges that instructional practices in online education vary widely and that the effectiveness of such instruction varies widely across content and learners.
In general, the current literature has focused on student preference of modality (i.e., online learning or traditional classroom learning) (Buxton 2014; Kemp and Grieve 2014; Mallin et al. 2014; Nguyen and Zhang 2011), and few studies have focused on the effects of learning modality on student performance (de Jong et al. 2013; Jordan et al. 2013). Additionally, the studies that compared modalities have focused on a traditional classroom format versus an asynchronous online format (de Jong et al. 2013; Jordan et al. 2013). There has been little to no experimental research conducted to determine what makes online learning most effective. Studies that have attempted to determine teaching method efficacy either have not compared modalities (i.e., they tested only one format) (Sella et al. 2014; Walker and Rehfeldt 2012) or have focused on another aspect of learning (e.g., whether grading anonymously affects performance) (Liu et al. 2018). Determining which components of online learning lead to better student outcomes will add to the current literature and improve online learning as a whole. Most teaching components can be implemented in both traditional and online learning formats (e.g., tests, quizzes, PowerPoint slides); however, the discussion type is more variable. The purpose of the current study is to evaluate the effects of synchronous discussion sessions in an asynchronous online course.
Participants were one male and three female master's-level ABA students enrolled in an asynchronous online course—Measurement and Experimental Evaluation—at a college in New England. Students were assigned (based on availability for synchronous discussion sessions) to either an asynchronous discussion session or a synchronous discussion session for each course module.
The study occurred completely online via the course software Canvas, with the exception of work that students needed to complete offline (i.e., supplemental assignments, study guides, readings, etc.). Synchronous discussion sessions utilized Big Blue Button conferences, which were accessible only through the Canvas course website.
The asynchronous course was slightly modified from the original course requirements. Original course requirements included the completion of study guide questions, weekly quizzes, weekly fluency drills on key terms (e.g., students recorded themselves completing the fluency drills), weekly projects addressing the topic of the given unit (i.e., taking data on sample videos, creating graphs, describing measurement systems, etc.), participation in a weekly asynchronous discussion forum in which students were required to post a designated number of times (e.g., a minimum of three posts—one of which was required to be a response to another student), and a final exam. All requirements remained the same, except that instead of participating in an asynchronous discussion for each module of course content, students were assigned to participate in either a synchronous (live) discussion session or an asynchronous discussion session for each module. The course teacher, an independent doctoral-level ABA student who was not aware of the purpose of the study beyond the fact that some asynchronous discussion sessions were being replaced by synchronous ones, led the synchronous discussion sessions and moderated the asynchronous discussion sessions. The asynchronous discussion sessions were available to students for the duration of the module (2–3 days), and the synchronous discussion sessions lasted up to 60 min. As noted above, students participated in either an asynchronous session or a synchronous session.
Measurement and Design
The synchronous discussion sessions were led in a question-and-answer format rather than in a typical lecture format. The discussion sessions were designed to stimulate conversations among students and the teacher, as well as to mimic conversations they could have in face-to-face classrooms. These conversations served to clarify content, deepen understanding, and improve learning. Prior to the synchronous discussion sessions, the students were instructed to review the module objectives and be prepared to ask and answer questions regarding the current module content. If the students did not come prepared with questions and/or comments, the teacher reviewed the module objectives (i.e., study guide questions) that had been given to them prior to the session. In a typical interaction, a student presented a question to the teacher for clarification, and the teacher posed the question to the other students present. After another student attempted to answer the question, the teacher provided feedback and clarification, when appropriate. If there were no questions, the teacher began with the module objectives and posed a question to the students. Again, once a student attempted to answer the question, the teacher provided feedback and clarification, when appropriate. This continued until the hour-long session ended and/or the objectives had been thoroughly reviewed.
Additionally, teacher statements and questions were yoked across each type of discussion. For example, if the teacher made 10 statements in the synchronous discussion session for module 1, they also made 10 statements in the asynchronous discussion session for module 1. Although students were assigned to attend an equal number of synchronous and asynchronous discussion sessions, due to some technical issues, Group A participated in four synchronous sessions and two asynchronous sessions, whereas Group B participated in two synchronous sessions and four asynchronous sessions. For their course grades, the full-credit criteria remained constant across both discussion types and were consistent with the current course syllabus (i.e., 5 course credit points based on a minimum of 4 relevant comments/questions within the synchronous session and/or 4 relevant posts in the asynchronous forum).
The synchronous and asynchronous discussion sessions were designed to be as similar as possible; however, there were slight differences due to format. The synchronous discussion sessions were initiated in two different ways—either with a student asking a question and/or making a comment or with the instructor initiating the discussion by reviewing the objectives provided to the students for each module of content. The asynchronous discussion sessions were initiated by a prompt question relevant to the given module, but the students were encouraged to discuss any content relevant to the module. Lastly, the teacher responded immediately to students in the synchronous sessions, but in the asynchronous sessions, responses were slightly delayed. All teacher responses in asynchronous sessions were within 24 h.
The experimental design was an alternating treatments design. The alternating conditions were Synchronous Discussion Sessions and Asynchronous Discussion Sessions. The strength of the design depends on the differentiation of responding across Synchronous Discussion Sessions and Asynchronous Discussion Sessions for each individual student. However, statistical analyses were also conducted.
The quantitative measures collected were level of participation in each format of discussion session and grades on module quizzes and supplemental assignments. Level of participation was measured by tallying the number of relevant statements and/or questions for each student. Within both discussion sessions, statements were counted separately from questions, but a single statement could span multiple utterances. An utterance was only considered a second statement if another person spoke between utterances and/or if the student made another statement on a different topic. For example, if a student responded to a teacher's question and made a statement about a different topic in a single utterance, it was considered two statements. If they made a statement about a single topic and then asked a follow-up question, it was recorded as one statement and one question. Affirmation statements (e.g., "I agree," "that's important," "I thought that, too") did not count as statements. A statement and/or question was considered relevant if it addressed the current unit objectives (e.g., if the session was for module 2, the statement/question needed to address module 2; a statement/question that addressed module 1 was not recorded as relevant). Components that required a human grader (i.e., those not scored electronically) were evaluated by the course teacher and by an additional master's- or doctoral-level student to assess interobserver agreement.
Individual Module Performance
There were some observable differences between the modules in which students participated in synchronous discussion sessions and those in which they participated in asynchronous discussion sessions (Figs. 1, 2 and 3); however, there were no consistent differences across discussion types.
Overall Quiz Performance
On average, quiz scores were slightly higher for modules in which students participated in asynchronous discussion sessions (12.69 out of 15) than for modules in which they participated in synchronous discussion sessions (12.44 out of 15) (Figs. 4, 5 and 6). However, this difference was only 0.25 points out of 15 and was not statistically significant (p = 0.826).
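The size of this difference can be restated from the two reported means; the following sketch simply reproduces that arithmetic (all values are taken from the text, and the variable names are illustrative).

```python
# Mean quiz scores (out of 15) reported for each discussion type.
async_mean = 12.69
sync_mean = 12.44

difference = async_mean - sync_mean     # about 0.25 points
share_of_total = difference / 15 * 100  # roughly 1.7% of the available points
```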
Discussion Session Participation
Overall, participation was higher during the synchronous discussion sessions than the asynchronous discussion sessions (Fig. 7). On average, students made 3.81 statements during asynchronous discussion sessions in comparison to 9.94 statements during the synchronous discussion sessions. On average, students asked 0.25 questions during asynchronous discussion sessions in comparison to 1.94 questions during the synchronous discussion sessions. For teacher comments and questions during each module see Table 1.
Total count IOA was used to determine score agreement for student statements and questions during the synchronous and asynchronous discussion sessions. IOA was recorded for 33% of sessions. Student and teacher statements and student questions were recorded separately. For synchronous discussion sessions, student statement agreement was 97% and student question agreement was 100%. Teacher statement agreement was 95% and teacher question agreement was 95%. For asynchronous discussion sessions, student statement agreement was 81% and student question agreement was 100%. Teacher statement agreement was 86% and teacher question agreement was 95%.
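The text does not spell out how total count IOA is computed; as a reference, the standard calculation (the smaller observer's total divided by the larger observer's total, multiplied by 100) can be sketched as follows. The function name and example counts are illustrative, not data from the study.

```python
def total_count_ioa(count_a: int, count_b: int) -> float:
    """Total count interobserver agreement: the smaller observer's
    total divided by the larger observer's total, times 100."""
    if count_a == count_b:  # equal totals (including 0 and 0) are full agreement
        return 100.0
    return min(count_a, count_b) / max(count_a, count_b) * 100

# Hypothetical example: one observer tallies 3 student questions,
# the other tallies 4, yielding 75% agreement.
agreement = total_count_ioa(3, 4)
```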
Overall, there were no consistent differences between participating in synchronous discussion sessions and asynchronous discussion sessions. To control for student–teacher interactions, teacher statements and questions in the synchronous discussion sessions were yoked with those in the asynchronous discussion sessions; that is, the teacher made the same number of statements and asked the same number of questions in both session types. After students consented to participate in the study, they were assigned to participate in either a synchronous or an asynchronous discussion for each module. Students were not randomly assigned; they were assigned based on availability, because they had signed up for an asynchronous course and their availability was not consistent throughout the week. This arrangement allowed for a direct comparison of performance when a student participated in a synchronous discussion session without the asynchronous discussion session and when they participated in an asynchronous discussion session without the synchronous discussion session.
There were some individual differences in performance for the students, but nothing consistent across all students. Student MA_1 scored slightly higher on her assignments when she participated in asynchronous discussion sessions rather than synchronous discussion sessions. Student MA18_2 scored slightly higher on her projects when she participated in synchronous discussion sessions rather than the asynchronous discussion sessions, however, her quiz scores were slightly higher when she participated in asynchronous discussion sessions than the synchronous discussion sessions. Student MA18_4 scored higher on her quizzes when she participated in the synchronous discussion sessions rather than the asynchronous discussion sessions. Her projects were undifferentiated.
Overall quiz performance was undifferentiated across students. However, it should be noted that student participation was higher during the synchronous discussion sessions than the asynchronous discussion sessions. Students made 2.6 times more statements and asked 7.76 times more questions during the synchronous discussion sessions than during the asynchronous discussion sessions, even though the course-required minimums were the same for both session types. Additionally, teacher statements and questions were the same for both session types. Therefore, although performance outcomes were not consistently affected by one discussion type over the other, synchronous discussion sessions did result in increased student participation, which has been shown to improve student performance (Bost and Riccomini 2006; Drevno et al. 1994).
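These participation multiples follow directly from the per-student means reported in the Results (3.81 vs. 9.94 statements and 0.25 vs. 1.94 questions); the derivation can be sketched as:

```python
# Mean per-student counts reported for each discussion type.
sync_statements, async_statements = 9.94, 3.81
sync_questions, async_questions = 1.94, 0.25

statement_multiple = sync_statements / async_statements  # about 2.6
question_multiple = sync_questions / async_questions     # about 7.76
```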
The current experiment has several limitations. First, there was a potential ceiling effect: the students performed at a high level regardless of the variation in discussion format. It is hypothesized that this result was due to the course content being foundational and focused on memorization rather than application. Second, the synchronous discussion sessions were not necessarily limited to the current module content. The teacher led the discussion on the current module of content; however, they did not ignore student statements and comments that were not relevant to the current module. Although the discussion session data only reflect relevant statements and questions, the actual discussion sessions may have included content from other modules, which may have varied across each module. These conversations may have introduced a confound into the alternating treatments design. Third, there were a limited number of participants in this experiment. Although single-case research does not require a large number of participants to demonstrate experimental control, increasing the overall number of participants would increase the generality of these results. Fourth, due to technical difficulties, each participant experienced a different number of synchronous and asynchronous discussion sessions. Lastly, it is understood that not all universities may be able to offer synchronous discussion in asynchronous courses. The inherent flexibility of asynchronous courses is appealing to many students, and if a university offers an asynchronous course, it may not be feasible to require synchronous discussion. However, the purpose of this experiment was to determine whether there were differences in performance between the two types of discussion to help inform decisions about education.
Future research should focus on utilizing a wide variety of courses, introductory and advanced, to determine whether discussion type affects student performance. Additionally, future research should implement a group design with students randomly assigned to either a synchronous or an asynchronous discussion group. Alternatively, a more systematic alternating treatments design that controls for learning opportunities could be used to examine this question from a single-case perspective. This would determine whether any level of participation in synchronous or asynchronous discussion differentially affects student outcomes.
The current data provide cursory evidence that synchronous and asynchronous discussion do not differentially affect student performance in asynchronous courses. Although no differences were found between the two types of discussion, the current data provide valuable information for instructors. Synchronous discussion tends to require more time and resources on the part of the instructor, and if there are no improvements in learning outcomes, it may be preferable for an instructor to design asynchronous courses. However, given the limitations of the current experiment, more experimental research is needed to determine which instructional components of online learning are most effective. To ensure a comprehensive analysis of instructional components, future research may consider analyzing the content and complexity of the statements and questions made by the students and instructors, the level of education/experience of the instructor leading the discussion session, and how the discussion sessions are initiated. Lastly, it is also important to consider what content is being taught in the courses, as introductory courses may demonstrate a ceiling effect. However, this is an empirical question that needs further investigation. More investigation is needed of the components of instruction that boost performance and mastery, across and within instructional formats.
Bartley, S. J., & Golek, J. H. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Educational Technology and Society, 7(4), 167–175.
Bost, L. W., & Riccomini, P. J. (2006). Effective instruction: An inconspicuous strategy for dropout prevention. Remedial and Special Education, 27(5), 301–311.
Buxton, E. C. (2014). Pharmacists’ perception of synchronous versus asynchronous distance learning for continuing education programs. American Journal of Pharmaceutical Education, 78(1), 8.
de Jong, N., Verstegen, D. M. L., Tan, F. E. S., & O’Connor, S. J. (2013). A comparison of classroom and online asynchronous problem-based learning for students undertaking statistics training as part of a public health master’s degree. Advances in Health Sciences Education, 18, 245–264.
Drevno, G. E., Kimball, J. W., Possi, M. K., Heward, W. L., Gardner, R., III, & Barbetta, P. M. (1994). Effects of active student response during error correction on the acquisition, maintenance, and generalization of science vocabulary by elementary students: A systematic replication. Journal of Applied Behavior Analysis, 27(1), 179–180.
Jordan, J., Jalali, A., Clarke, S., Dyne, P., Spector, T., & Coates, W. (2013). Asynchronous vs didactic education: It’s too early to throw in the towel on tradition. BMC Medical Education, 13, 105.
Kemp, N., & Grieve, R. (2014). Face-to-face or face-to-screen? Undergraduates’ opinions and test performance in classroom vs online learning. Frontiers in Psychology, 5, 1278.
Liu, X., Li, L., & Zhang, Z. (2018). Small group discussion as a key component in online assessment training for enhanced student learning in web-based peer assessment. Assessment and Evaluation in Higher Education, 43(2), 207–222.
Mallin, M., Schlein, S., Doctor, S., Stroud, S., Dawson, M., & Fix, M. (2014). A survey of the current utilization of asynchronous education among emergency medicine residents in the United States. Academic Medicine, 89(4), 598–601.
Nguyen, D., & Zhang, Y. J. (2011). College students’ attitudes toward learning process and outcome of online instruction and distance learning across learning styles. Journal of College Teaching and Learning, 8(12), 35–42.
Nguyen, T. (2015). The effectiveness of online learning: Beyond no significant difference and future horizons. MERLOT Journal of Online Learning and Teaching, 11(2), 309–319.
Online Learning Consortium (2017). Digital learning compass: Distance education enrollment report 2017. Retrieved from: https://onlinelearningconsortium.org/read/digital-learning-compass-distance-education-enrollment-report-2017/
Ragusa, A. T., & Crampton, A. (2017). Online learning: Cheap degrees or educational pluralization? British Journal of Educational Technology, 48(6), 1208–1216.
Sella, A. C., Mendonça Ribeiro, D., & White, G. W. (2014). Effects of an online stimulus equivalence teaching procedure on research design open-ended question performances of international undergraduate students. The Psychological Record, 64(1), 89–103.
Todd, E. M., Watts, L. L., Mulhearn, T. J., Torrence, B. S., Turner, M. R., Connelly, S., & Mumford, M. D. (2017). A meta-analytic comparison of face-to-face and online delivery in ethics instruction: The case for a hybrid approach. Science and Engineering Ethics, 23(6), 1719–1754.
U.S. Department of Education (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies.
U.S. Department of Education, National Center for Education Statistics (2018). Digest of Education Statistics, 2016. Retrieved from: https://nces.ed.gov/fastfacts/display.asp?id=80
University of the Potomac (2017). Online vs. traditional learning. Retrieved from: https://potomac.edu/learning/online-learning-vs-traditional-learning/
Walker, B. D., & Rehfeldt, R. A. (2012). An evaluation of the stimulus equivalence paradigm to teach single-subject design to distance education students via blackboard. Journal of Applied Behavior Analysis, 45, 329–344.
Walker, B., Rehfeldt, R., & Ninness, C. (2010). Using the stimulus equivalence paradigm to teach course material in an undergraduate rehabilitation course. Journal of Applied Behavior Analysis, 43(4), 615.
Conflict of interest
The author(s) declare that they have no conflict of interest.
Human or Animal Rights
Human participants were involved in this project.
Informed consent and IRB approval are attached.
Farros, J.N., Shawler, L.A., Gatzunis, K.S. et al. The Effect of Synchronous Discussion Sessions in an Asynchronous Course. J Behav Educ 31, 718–730 (2022). https://doi.org/10.1007/s10864-020-09421-2
Keywords: Online learning; Distance education