Impact of LMS-mediated readiness assurance testing on performance in pharmacy calculations

Readiness assurance testing has enjoyed increased use as a tool for the delivery and reinforcement of pharmacy concepts. This study was conducted to determine the influence of the readiness assurance test (RAT) on major examination outcomes in pharmacy calculations. The Blackboard Learning Management System was identified as an efficient platform for RAT implementation. The outcomes of three consecutive offerings of pharmacy calculations were compared. Cohorts 1 and 2 were exposed to recitation and standard assessments in lecture and laboratory sessions. Cohort 3 was additionally administered weekly individual (IRAT) and team (TRAT) readiness assurance tests for concept areas. All cohorts were exposed to a common comprehensive final exam. Significant differences were observed in major lecture exam scores among cohorts. Cohort 3’s mean final exam score was significantly greater than that of Cohort 1, which received the most conventional method of subject delivery. Student feedback was generally positive regarding use of the RAT. Use of RAT assessments was positively associated with final exam score outcomes in pharmacy calculations, including a positive shift in final exam score distribution in Cohort 3. Use of RAT promotes increased student exposure to conceptual material and instruction in pharmacy calculations.


Introduction
Learning assessments are associated with student success, both within program curricula and in the passage of licensure examinations. Use of well-designed team-based learning (TBL) assessments can improve students' academic performance and critical thinking skills [1]; thus, TBL has become a mainstay of higher educational instruction in recent years. TBL involves small groups, typically of four to seven students, who work together to process and retain course information [2]. In its classic form, TBL requires substantial student preparation beyond the classroom; the strategy therefore fits well within the framework of flipped classrooms and technology-driven learning modalities [3].
Whereas problem-based learning and case-based learning have been used liberally in the instruction of the allied health sciences for decades, TBL has made significant inroads as a useful technique for improving student engagement and achievement of learning outcomes. For instance, TBL has been reported to correlate well with students' final exam performance in comparison to case-based teaching methodology [4], and TBL has shown effectiveness when used to establish conceptual understanding prior to the implementation of case-based learning [5]. Students have been noted to value TBL for its collaborative nature and for its potential to motivate through the promotion of competition [2].
A useful addition to the arsenal of TBL performance measurements is the readiness assurance test (RAT). The RAT is an assessment technique that has gained favor in pharmacy education in recent years. It is a versatile technique that may be administered to individuals and to groups of students. The technique often features facilitator encouragement of student remediation through rapid assessment and feedback. Assurance testing is reported to be an effective motivator in student preparation [6] and is applicable to technology-driven delivery methods [7].
The RAT possesses the added advantage of allowing weaker-prepared students to be identified, and it provides a primer prior to higher-stakes assessment [8]. Readiness assurance tests are often presented as active learning exercises in the form of individual (IRAT) or team (TRAT) assessments, which are often administered in tandem. The informal nature of the RAT makes it an attractive vehicle for concept introduction and/or concept reinforcement in higher and professional education.
Learning Management Systems (LMS) have become commonplace in higher education. Platforms such as Blackboard™, ExamSoft™, Moodle™ and Nearpod™ are teaching and assessment mainstays, as they function both as conduits for dissemination of information and as convenient tools for data collection and management [9][10][11]. The Blackboard Learning Management System enjoys use in many professional schools, including schools of pharmacy. Blackboard's strength lies in its versatility. It can be used for direct online instruction through its Collaborate function. It can also be used to post announcements and written materials in support of course activities. Individual or group communication can be facilitated through platform emails and through internal message boards. In addition, Blackboard LMS possesses assessment capabilities. Question banks can be created for subsequent test compilation. The system can also compile and maintain records of testing statistics to assist in assessment reporting. Despite its advantages, Blackboard may be underutilized in conjunction with face-to-face instruction [12], as many of the platform's features receive limited use from faculty.
Higher educational use of Blackboard LMS has yielded positive outcomes. For instance, Blackboard-facilitated TBL was well-received and was successfully used in the instruction of anatomy in allied health courses [13]. A positive correlation was observed between time spent using Blackboard LMS and grade outcomes in materials science and engineering students, as greater active engagement was associated with greater synthesis of knowledge [14]. Use of Blackboard in flipped and blended classroom contexts has proven useful as well. Students exhibited better emotional and behavioral engagement as a result of course delivery mediated through the platform [15]. In addition, students have found use of online assessments via Blackboard to be impactful, as observed in final exam performances [16].
Pharmacy calculations is a pharmaceutics course that provides an important foundational contribution to pharmacy curricula. As is the case with any mathematics-based course, success in pharmacy calculations is often proportional to the extent to which students are exposed to conceptual material. A challenge to student success in pharmacy calculations is a lack of student motivation to practice calculations regularly. Novel assessment techniques may aid in the reinforcement of conceptual material and serve as a remedy to lack of student motivation and classroom engagement in pharmacy calculations. Despite the popularity of RAT interventions in higher education, there is very little information regarding the use of the technique in pharmacy calculations. This study was conducted to determine the impact of LMS-mediated RAT use on formal quiz and examination outcomes in pharmacy calculations. It is hypothesized that such methods of student engagement are an effective means by which students can improve in pharmacy calculations through team-based learning and through repetitive exposure to conceptual material.

Methods
This observational cohort study was approved by the institutional review board at a school of pharmacy in the United States of America. Data were collected from three consecutive offerings of the pharmacy calculations lecture and its adjoining laboratory course. Pharmacy calculations and its lab are first-year courses in the professional pharmacy program. The lecture and lab courses spanned a single semester during each administration. To ensure student confidentiality, all data were de-identified prior to analysis. All student cohorts encountered lecture course assessments in an identical manner. Written assessments common to all lecture course administrations included 5 quizzes, 3 major (hour) exams, and a standard comprehensive final exam.
The topics of prescription interpretation, the metric system, and intra- and inter-system unit conversion were assessed on Quiz 1 and Exam 1. Whereas prescription interpretation and metric system conversion were points of emphasis on Quiz 1, the distribution of topics was more even on Exam 1. Dosage calculations, reducing and enlarging formulas, and specific gravity were the focus of Exam 2. Quiz 2 emphasized dose calculations, with reducing and enlarging formulas covered to a lesser extent, whereas Quiz 3 emphasized reducing and enlarging formulas, with specific gravity covered to a lesser extent. Exam 3 assessed the topics of expressions of strength, dilution and concentration, and intravenous admixtures/infusions. Quiz 4 emphasized expressions of strength, with dilution and concentration covered to a lesser extent. One remaining topic, chemical equivalency, was assessed on Quiz 5. All topics were assessed on the comprehensive final exam. The comprehensive final exam consisted of identical questions during all course administrations and served as a comparative measure of student performance.

Lab procedures
The pharmacy calculations laboratory course provides a significant block of time for concept reinforcement. The calculations lab has traditionally served as a recitation period, during which time students are provided additional question-and-answer sessions, worksheets, and supplemental activities. Cohorts 1 (n = 80) and 2 (n = 76), the control groups, experienced lab in traditional fashion, with the lone exception that periodic handwritten pop quizzes were incorporated into session proceedings for Cohort 2. Cohort 3 (n = 83), the experimental group, experienced readiness assurance testing during lab sessions. The lab was identified as an ideal setting to conduct RAT exercises, particularly in the case of TBL applications. The facility's layout allowed for groups of 3 to 4 students to meet at large tables. Cohort 3 performed IRATs and TRATs for a series of calculations topics that mirrored those discussed during lecture sessions. Twenty minutes were allotted for IRAT encounters, which were immediately followed by 20-min TRAT sessions involving question stems identical to the IRAT questions. A brief topic review was provided to students shortly after each RAT session. Students' IRAT and TRAT scores were revealed upon submission of each assessment.

Data collection and analysis
The Blackboard learning management system served as the platform for Cohort 3's lab assessments. Students were allowed to use their personal laptops or tablets to access the system. The Blackboard system offered conveniences such as randomization of numerical values used in question stems, as well as randomization of question order. Blackboard also allowed for long-term record keeping following assessments. Blackboard assessments were delivered using the Respondus Lockdown Browser™, through which assessments were secured. Faculty and/or graduate students at the College proctored all IRAT and TRAT assessments. Students were provided written versions of IRAT and/or TRAT assessments if they forgot to bring their computers to the lab session. Written alternatives were also provided during instances of technical difficulty.
A confidential end-of-course survey was distributed to students to assess their reception of the RAT exercises. Completion of the Likert-styled survey was optional. Statistical analyses of assessment performances included summary statistics, correlation, and significance testing. Shapiro-Wilk, Anderson-Darling, and Lilliefors tests of normality were applied to formal assessment scores and RAT scores. Spearman rank correlation coefficients were calculated among assessment scores and RAT scores. Analysis of variance and Tukey's honestly significant difference test were used to determine mean differences in assessment scores among cohorts. Statistical analyses were performed using Excel™ (Microsoft Corporation) software in conjunction with XLSTAT™ (Addinsoft Inc.) software. A threshold of α = 0.05 was selected for tests of significance.
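Because the assessment scores were non-normally distributed, rank-based correlation was the appropriate measure. The analyses in this study were performed in Excel with XLSTAT; as a dependency-free illustration of the core calculation only, the following Python sketch computes a Spearman rank correlation on hypothetical, de-identified score vectors. The `ranks` and `spearman` helpers and all score values are illustrative assumptions, not the study's actual data or tooling.

```python
# Illustrative sketch of the Spearman rank correlation used to compare RAT
# scores with quiz/exam scores. The study itself used Excel with XLSTAT; this
# stand-alone implementation only demonstrates the calculation. All score
# values below are hypothetical.

def ranks(values):
    """Assign 1-based ranks to values, averaging the ranks of ties."""
    indexed = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(indexed):
        # Find the end of the current tie group in the sorted order.
        j = i
        while j + 1 < len(indexed) and values[indexed[j + 1]] == values[indexed[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[indexed[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical de-identified scores: IRAT vs. final exam for ten students.
irat  = [55, 60, 70, 78, 62, 68, 74, 82, 58, 65]
final = [62, 70, 75, 81, 68, 74, 79, 85, 66, 72]
rho = spearman(irat, final)  # ≈ 0.988 for these near-monotone vectors
```

With no ties, this is equivalent to the shortcut formula rho = 1 − 6Σd²/(n(n² − 1)), where d is the difference between paired ranks; the rank-averaging step generalizes it to tied scores.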

Results
Non-normal distributions were indicated among lecture assessment and RAT score data for all cohorts, regardless of the normality test applied. Spearman rank correlation coefficient results for Cohort 3's RAT and quiz scores are provided in Table 1. With the lone exception of the correlation between the topic of dilution and concentration and Quiz 4, all IRAT assessment scores exhibited moderate (r < 0.5) but significant correlations with quiz performances. In contrast, TRAT score correlations with quiz scores were all low with only one correlation-that associated with dose calculations-yielding statistical significance.
Spearman rank correlation coefficient results for Cohort 3's RAT and exam scores are provided in Table 2. IRAT scores exhibited low correlations with hour exam scores. In contrast, IRAT scores exhibited moderate yet significant correlations with several final exam scores, the exceptions being the topics of prescription interpretation and dose calculations. TRAT scores yielded low correlations with both hour exam and final exam scores, except for moderate and significant correlations in the categories of expressions of strength and dilution and concentration. Correlations were higher between IRAT scores and exam scores than between TRAT scores and exam scores.
Pharmacy calculations quiz results were compared among cohorts, as referenced in Table 3. The results exhibit significant differences in quiz performance. Similarly, comparisons of pharmacy calculations final exam results for all cohorts revealed significant differences in performance. However, no discernible trends were observed in the mean comparisons. Cohort 3 outperformed Cohorts 1 and 2 on one quiz, Quiz 5, though the performance differences on Quiz 5 were not significant. Cohort 3 outperformed the other cohorts on Exam 1 and on the final exam. Whereas Cohort 3's Exam 1 performance was significantly different from that of both other cohorts, its final exam performance was significantly different only from that of Cohort 1.
Likert survey responses are provided in Table 4. Statements referencing RAT provided insight into the perceived usefulness of its application. Responses were predominantly positive: "Strongly Agree" or "Agree" was selected by 89%, 94% and 72% of respondents for the statements about RAT. The highest percentages of response indicated on the survey correspond to the option "Agree" for statements regarding reception to IRAT exercises and TRAT exercises.
The histograms featured in Figs. 1, 2, 3 depict cohort standardized final exam performances, in which Cohort 3's final exam score distribution is distinguished from those of Cohorts 1 and 2. The histograms indicate a shift in score achievement in Cohort 3 compared to the other cohorts.

Correlation interpretations and cohort comparisons
Moderate and significant correlations between IRAT scores and quiz scores (Table 1) allude to similarities in level of difficulty. They might also reflect similarities in topic emphasis and assessment length. IRAT comparisons differ sharply from TRAT comparisons. The comparatively weaker correlations observed between TRAT and quiz scores may be attributed to the consistently higher scores often associated with TRAT because of its communal construct. TRAT scores and exam scores (Table 2) exhibit low correlations that mimic the TRAT/quiz comparisons. This is an intuitive result. However, IRAT and hour exam score correlations are in stark contrast with the quiz results, whereas comprehensive final exam comparisons mostly exhibit moderate and significant correlations reminiscent of the IRAT/quiz results. These contrasts are enigmatic. Hour exams are historically more rigorous than the comprehensive final exam, which might explain the dichotomy. IRAT assessments are measures of an individual's mastery of conceptual material. Students scored higher on TRAT assessments than on the corresponding IRATs, which suggests an insulating effect of group interaction on performance. It is not uncommon for students to achieve perfect scores when taking assessments as teams. Weaker-prepared students benefit from substantial gains during TRAT sessions due to insight attained from stronger students in their peer groups, in addition to the advantage of repeat exposure to questions. Exam performances do not reflect the gains observed during TRAT sessions; class score results revert to a scattered distribution typical of individual effort. The lower TRAT-to-quiz and TRAT-to-exam correlations reflect this phenomenon.
TRAT assessments provided opportunities for peer interaction and peer instruction. TRATs assisted students with concept reinforcement through communication and teamwork. Team-based learning has been associated with improvement in student assessment performance, particularly in the case of students who are comparatively weaker performers in conventional lecture environments [17,18]. It is possible that the incorporation of non-traditional assessment techniques in pharmacy calculations can enhance students' course experiences by providing them an alternative way of practicing course content and achieving content mastery. In addition, such methods have been associated with increased peer connection and improved learning outcomes [19]. The improvement in comprehensive final exam performance in Cohort 3 is especially encouraging and bodes well for use of RAT reinforcement in the future.
The statistically significant but moderate correlation results are typical of those associated with small effect sizes in educational research [20]. Correlation coefficients have been shown to vary in meaning from one discipline to the next [21]. Reports of pharmacy calculations correlation data are limited, so the extent to which such moderate results are meaningful remains to be seen. Cohort comparisons reveal no remarkable trends in hour exam performance. This was not unexpected. Although efforts are made to maintain consistent rigor on hour exams from semester to semester, it is possible that variations in question difficulty impacted hour exam results. The standard final exam, on the other hand, is consistent regarding question composition across semesters. Larger standard deviations in hour exam scores in comparison to final exam scores allude to the discrepancy between hour exam rigor and final exam rigor. Additionally, the histograms featured in Figs. 1, 2, 3 are impactful because the standard final exam allows for direct cohort comparisons.

Implications of RAT and LMS use in pharmacy calculations
Although mastery of pharmacy calculations course concepts is possible through student engagement during lectures and recitation periods, long-term retention of concepts is facilitated through repetitive application of said concepts. The results of this comparison suggest that readiness assurance test exposure aids in the retention of conceptual material in pharmacy calculations. Increased assessment opportunities were associated with improvements in standard final examination performance in Cohort 3. Improvements in learning outcomes following increased use of assessment activities in health-related disciplines have been reported by several investigators. Larsen et al. [22], for instance, observed that repeated assessment improved long-term retention of concepts pertaining to medical phenomena in pediatric and emergency medicine. These gains were evidenced by final examination score improvement more than 6 months after students' initial exposure to conceptual materials. Specific to pharmacy disciplines, student performance improvements have been reported as a result of additional exposure to pharmacy calculations and associated assessments [23]. The notion that increased exposure to assessments improves learning outcomes is supported by personalized system of instruction (PSI) methodology [24]. It has been posited that the positive association between the number of question attempts and student performance during PSI reflects the increased exposure promoted by the technique. It is possible that the increase in Cohort 3's assessment opportunities through RAT provided an analogous benefit for pharmacy calculations students. At minimum, the additional exposure to assessment questions in the lab course appears to assist in student mastery of course materials.
Blackboard LMS proved a powerful tool for readiness assurance testing. Online facilitation of IRAT and TRAT administrations promoted efficient collection of data.
Assessment information such as submission records, attempt numbers, and timestamps was automatically stored by Blackboard LMS. The stored information was efficiently transferred onto spreadsheet programs for analysis. This is consistent with the assertions of Koh and colleagues [25], who noted that LMS facilitation of TBL offers efficient compilation of and access to data that can rapidly inform instruction. In addition, Blackboard LMS features such as randomization of question order, algorithm capabilities, and lockdown browser capabilities created an environment that limited opportunities for academic dishonesty. RAT assessments were programmed for automatic submission to promote a well-controlled testing environment. Although the feature was not used in this instance, Blackboard LMS also possessed exam monitoring capabilities through the Respondus Lockdown Browser. Promotion of academic integrity has been identified as an advantage of LMS use when compared with assessment platforms such as automated response systems [26]. Although investigations associating RAT use with learning outcomes have been largely limited to other subjects, LMS-mediated RAT has been reported in pharmacy calculations [27,28]. Mastropietro et al., for instance, used Examsoft™ to deliver RAT to great effect. The inherent compatibility of LMS technology with TBL through RAT presents an opportunity for expansion of this assessment technique in pharmacy calculations.

Limitations
There were limitations to the study. The study accounts for three semesters of student performance in pharmacy calculations. An evaluation of data from more offerings of the pharmacy calculations series could provide more compelling information regarding the impact of RAT on calculations performance. In addition, baseline indicators of preparation such as Pharmacy College Admission Test (PCAT) scores, incoming grade point averages, or degree status were not evaluated for influence on student performance. Alternatively, a comparison of performance in which a single cohort is exposed to all laboratory course treatments could reveal more conclusive results regarding the usefulness of RAT. Because RAT was performed using the LMS, it would have been useful to compare LMS-mediated performances with RAT delivered in a handwritten format. In addition, although lecture activities were structurally identical for all cohorts, there was no duplication of quiz or hour exam questions between course offerings. Historically, apart from the comprehensive final exam, all assessment questions are returned to students for the purpose of long-term self-reflection. No formal method was implemented to ensure consistent rigor across semesters. Another limitation was that survey information was limited to feedback about RAT and TBL rather than the technology associated with its delivery. Lastly, the extent to which use of RAT might produce diminishing returns was not evaluated. The notion of fine-tuning RAT for better effectiveness is intriguing.

Conclusions
This investigation evaluated the extent to which LMS-mediated readiness assurance testing influenced student performance in pharmacy calculations. The calculations lab course was leveraged as a block of time during which IRAT and TRAT opportunities could be explored. Although little association was observed between TRAT performance and lecture course quiz or exam performances, moderate associations were observed between IRAT performance and quiz performance and between IRAT performance and comprehensive final exam performance in several concept categories. Comparison of comprehensive final exam performances among class cohorts revealed a significant increase in standard final exam scores in Cohort 3. In addition to TBL experiences, Cohort 3 encountered a greater number of assessments than Cohorts 1 and 2. The exercises were well-received by students. The results suggest a positive relationship between RAT exposure and pharmacy calculations performance.