1 Introduction

Student anxiety about performance and attainment of desired letter grades is a problematic issue in higher education. Pascoe and colleagues [1] reviewed Organisation for Economic Co-operation and Development survey data revealing that two-thirds of students experienced stress regarding grade outcomes [2]. More than half of the students surveyed indicated that they experience test anxiety, with over one-third describing feelings of anxiety even while studying. Shankar and Park reviewed the impact of stress on students and cited various effects on their endocrine and immune functions, behavior, cognition and psychology [3]. Student depression and panic attacks have been identified as examples of stress-induced challenges to students’ mental health [4]. Elevated levels of test anxiety are also linked to procrastination and loss of motivation to engage in academic work [5]. Pharmacy students have reported negative effects of stress on mental health and quality of life [6], and stress is associated with lower grade point average in these students [7].

1.1 Context

Pharmacy calculations is an important course of study in many pharmacy curricula. Student mastery of pharmacy calculations is integral to successful passage of the North American Pharmacist Licensure Examination (NAPLEX). Pharmacy calculations is distinct in its emphasis on application of knowledge rather than memorization of knowledge. Pharmacy calculations assessments often require time-intensive, manually inputted answer entries (at times including work shown) rather than multiple choice recognition of answers.

The departmental grading scale works well when applied to pharmacy courses that feature large numbers of assessment questions during limited time intervals. For instance, students can often comfortably navigate multiple-choice exams of fifty or more questions within an average hour-long exam period. However, completion of pharmacy calculations questions requires considerably greater time expenditure (3 to 5 minutes) per question; therefore, fewer questions are posed within standard assessment intervals. In addition, pharmacy calculations questions are generally graded in an all-or-nothing fashion. This feature, along with assessments possessing limited question quantities, contributes to anxiety among students. Anecdotal reports of anxiety increase as the course progresses. Indices of anxiety such as task and practice avoidance are also observed in students [8]. In some cases, students’ confidence erodes quickly, especially from mid-terms to semester’s end. Because math anxiety can impact achievement in pharmacy calculations [9, 10], the observed anxiety is reminiscent of reports that math anxiety has significant and negative associations with performance outcomes [11,12,13].

Given the constraints associated with question number, time, and grading scales, the incorporation of assessment rigor is a challenging proposition for pharmacy calculations instructors. In the absence of rigor, conditions are less than optimal for students to reach full development, be it academic or personal in nature [14]. Faculty must balance the desire to challenge students with an awareness of the pressure students face to perform very well within narrow grading constructs. How can students be motivated to shift their perspectives about how and why they learn pharmacy calculations in a high-stakes, high-pressure professional school environment? Considering grading scale constraints, how can fears regarding the course be diminished without resorting to grade inflation? Multiple-attempt testing strategies are a form of retrieval practice, which has been observed to promote long-term retention of information [15] and to promote educational outcome attainment in less well-prepared students [16, 17]. Retrieval practice is also effective in promoting stress-resistant memory, which could inhibit test anxiety [18]. Accordingly, retrieval practice has been observed to reduce test anxiety in students [19,20,21], including through applications such as ungraded quizzes [22].

Use of multiple-attempt testing has proven useful in allied health disciplines such as medicine [23,24,25], nursing [26,27,28] and pharmacy [29]. The strategy is often associated with self-paced assessment, which has been featured in pharmacy calculations [30,31,32] and math courses [33, 34]. The objective of this investigation was to determine whether self-paced, multiple-attempt assessments would have a positive impact on students’ perception of anxiety in pharmacy calculations. The association of assessment activities with final grade outcomes was also evaluated. It was hypothesized that use of these assessment strategies would lower student perceptions of anxiety about pharmacy calculations.

2 Materials and methods

An overview of the study’s design is presented in Fig. 1.

Fig. 1
figure 1

Design overview

The investigation took place in the United States. The study featured quasi-experimental research methodology and pre-post surveys and was exempted by the Texas Southern University Institutional Review Board (Federal Wide Assurance number: FWA00003570). The study protocol was exempted in accordance with United States federal regulations 45 CFR 46.104(d)(1) and 45 CFR 46.104(d)(2), pertaining to impacts on student learning and interactions limited to educational testing, respectively. Informed consent was obtained from all participants in the study.

The study featured correlational research methodology through association of module assessment attempt data with the outcomes of final exam score and course grade. Sixty professional pharmacy program students were enrolled in pharmacy calculations; all were first-year pharmacy students, and all data collected during the study were sourced from this group. The study population was 67% female and 33% male and was approximately 63% Black, 25% Asian, 10% Hispanic and 2% Caucasian. The pharmacy calculations course was three credit hours and lasted one semester. The course consisted of the following topic areas, which are encountered to varying degrees in other courses in the pharmacy curriculum:

  • Prescription interpretation, metric system and unit conversions

  • Dose calculations

  • Reducing and enlarging formulas

  • Specific gravity

  • Expressions of strength

  • Dilution and concentration

  • Infusions and IV admixtures

  • Chemical equivalency

2.1 Course structure and grading scheme

The pharmacy calculations course consisted of in-person lecture intervals for each topic, each of which was reinforced by in-person problem-solving sessions. Lectures met for fifty minutes, three days per week, over sixteen weeks. Problem-solving sessions took place over three hours, one day per week, over fourteen weeks. Formal assessments included seven examinations (eight questions each) and a final examination (thirty questions). All assessments were conducted online using the Blackboard Learning Management System (LMS)® (Blackboard Learn 9.1 version 3900.28). The course also included practice assessments in the form of online modules (ten questions each) on the Blackboard platform. A larger number of questions was selected for the course modules because the modules were untimed and fully self-paced.

Due to the increased emphasis on self-pacing and multiple-attempt assessment, the grade weighting used in this study differed from previous course offerings. Previous offerings of the course consisted of 8% graded practice, 67% quizzes and exams, and a final exam accounting for 25%. For this study, the weighting was shifted to 31% self-paced graded practice, 13% self-paced exams, and a final exam accounting for 56% of the final grade.
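The shift in weighting can be illustrated with a short calculation; the component scores below are hypothetical examples, not data from the study:

```python
# Hypothetical component scores (percentages) for one student.
scores = {"graded_practice": 92.0, "self_paced_exams": 85.0, "final_exam": 74.0}

# Weightings used in this study: 31% practice, 13% exams, 56% final.
weights = {"graded_practice": 0.31, "self_paced_exams": 0.13, "final_exam": 0.56}

# Weighted sum gives the course grade on a 0-100 scale.
final_grade = sum(scores[k] * weights[k] for k in scores)
print(round(final_grade, 2))  # 81.01
```

Under this weighting, the single-attempt final exam dominates the grade, which is relevant to the procrastination effects discussed later.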

2.2 Assessments

Students were informed about self-pacing and multiple-attempt opportunities in the course syllabus. Students were also informed about these details during verbal reviews of course policies and procedures. Pharmacy calculations practice modules largely consisted of retired exam questions drawn randomly from question banks pertaining to each topic area in the course. The modules were accessed using Blackboard LMS and required entry of numerical answers. The assessments were randomized both in terms of question order and numerical answers. Modules could be attempted in unlimited fashion until maximum scores were attained for each topic. Students were provided answers to each module question along with automated feedback about each question upon submission of the module attempt. Feedback was topic-specific and provided detailed instructions to obtain problem solutions. Use of feedback has been demonstrated as a useful tool for self-assessment and concept reinforcement in the health sciences [32, 35]. Students were allowed to use text, lecture and online resources to assist them as they attempted modules. Each successive module could be accessed when students achieved a 69.5% score threshold that was programmed into the Blackboard Learn 9.1 system. The progression threshold was set as such because 69.5% is the traditional pass/fail line in the department. The modules were untimed, and attempts could be paused and resumed at students’ leisure. All modules could be accessed twenty-four hours a day from a few days prior to the first lecture in the course until four days following the comprehensive final exam assessment.
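The progression rule described above can be sketched as follows. This is an illustrative reimplementation; the course used Blackboard's built-in adaptive release rather than custom code, and the `unlocked_modules` helper and its inputs are hypothetical:

```python
PASS_THRESHOLD = 69.5  # departmental pass/fail line used as the gate

def unlocked_modules(best_scores: list[float]) -> int:
    """Return how many modules a student may access.

    The first module is always open; each subsequent module unlocks
    once the best score on the previous module reaches the threshold.
    `best_scores` holds the best score achieved on each attempted
    module, in sequence.
    """
    unlocked = 1
    for score in best_scores:
        if score >= PASS_THRESHOLD:
            unlocked += 1
        else:
            break  # progression stops at the first unpassed module
    return unlocked

# Modules 1 and 2 passed, module 3 attempted but not yet passed:
print(unlocked_modules([80.0, 72.5, 60.0]))  # 3 modules accessible
```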

With the exception of the final exam, unlimited attempts were allowed on formal exams for concept reinforcement and to provide an opportunity to achieve maximum assessment scores. Use of repeated examination has been observed to benefit lower-performing students in pharmacy calculations [36], and increased exam number has been observed to reduce feelings of anxiety in pharmacy students. In contrast to the module assessments, exams were timed (sixty minutes) to better simulate a conventional exam environment. Formal exams were also distinct from modules in that they possessed consistent question stems on each attempt. Blackboard LMS enabled use of algorithms to randomize question order and numerical answer values. Successive exams could be accessed upon achievement of a score threshold (69.5%) on each prior assessment in sequence. All exams were available for use until four days following the final exam assessment. The comprehensive final exam was given at the end of the course and was conducted using traditional in-person proctoring over a two-hour time span. In contrast to all other assessments, the final exam was single-attempt.

Assessment data were collected using Blackboard Learn LMS. Module and exam indices included number of attempts, number of attempts within fourteen days of the submission deadline, average high score per module topic, average low score per module topic, and average score per module topic. Assessment indices were correlated with corresponding course grades.

2.3 Anxiety surveys

Pre-post surveys were distributed to assess student test anxiety in pharmacy calculations during the first week of the course and again during the week prior to the final exam. The surveys were disseminated to students with the caveat that their statements were specific to students’ experiences in pharmacy calculations. The Likert surveys consisted of the Westside Test Anxiety Scale, a validated tool used to measure test anxiety [37]. The following are the Westside Test Anxiety Scale score interpretations:

  • 1.0–1.9 Comfortably low test anxiety

  • 2.0–2.5 Normal or average test anxiety

  • 2.5–2.9 High normal test anxiety

  • 3.0–3.4 Moderately high (some items rated 4 = high)

  • 3.5–3.9 High test anxiety (half or more of the items rated 4 = high)

  • 4.0–5.0 Extremely high anxiety (items rated 4 = high and 5 = extreme)
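As an illustration of how a composite score maps onto these bands, a minimal scoring helper is sketched below. The `westside_band` function is hypothetical, not part of the published instrument; note that the published bands list 2.5 twice, and this sketch assigns exactly 2.5 to the "normal or average" band:

```python
def westside_band(mean_score: float) -> str:
    """Map a mean Westside Test Anxiety Scale score (1.0-5.0) to its
    interpretation band. Bands are checked in ascending order, so a
    score falls into the first band whose upper bound it does not
    exceed; exactly 2.5 therefore maps to "normal or average"."""
    bands = [
        (1.9, "Comfortably low test anxiety"),
        (2.5, "Normal or average test anxiety"),
        (2.9, "High normal test anxiety"),
        (3.4, "Moderately high test anxiety"),
        (3.9, "High test anxiety"),
        (5.0, "Extremely high anxiety"),
    ]
    for upper, label in bands:
        if mean_score <= upper:
            return label
    raise ValueError("score outside the 1.0-5.0 scale")

print(westside_band(3.2))  # Moderately high test anxiety
```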

Students were informed that survey participation was voluntary, and that survey participation would have no effect on course grades. Students were also informed that all survey responses and commentary were de-identified by Blackboard LMS, ensuring that response sources were unknown to the instructor. A link to access the informed consent page and survey was communicated on the Blackboard course announcement page. Both surveys were available for access during five-day intervals. The survey was deactivated and removed from Blackboard at semester’s end.

Student commentary about the use of self-paced, multiple-attempt assessments was encouraged and collected alongside survey feedback. The comments were evaluated to codify recurrent and/or notable statements. Comment themes were then developed that indicated consensus about self-pacing and multiple-attempt strategies. The most salient comments were preserved to represent the consensus.

2.4 Assessment data collection and analysis

The Respondus Lockdown Browser® (Version 2.0.7.06, Respondus, Inc.) was used to secure and distribute all Blackboard assessment and survey items. Microsoft Excel® (Excel 2019 for Windows, Microsoft Corporation) software and XLSTAT® (Version 2021.3.1, Addinsoft Inc.) software were used to perform statistical analyses. Summary statistics were calculated, and a multiple regression model was used to evaluate assessment indices and course grades. Data normality was determined using the Shapiro-Wilk test, and Pearson correlation coefficients were calculated among data indices. Medians and modes for pre-post surveys were also calculated. In addition, Mann–Whitney U tests were used to determine differences in pre-post survey item responses. Due to de-identification of survey data, two-tailed unpaired t tests were used to determine differences between pre-post survey Likert scale composite responses. A threshold of α = 0.05 was selected for tests of significance.
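The analysis pipeline can be sketched in SciPy using synthetic data; the study itself used Excel and XLSTAT, so the code and all values below are illustrative only:

```python
# Sketch of the statistical workflow on synthetic (hypothetical) data.
from scipy import stats

# Hypothetical per-student assessment index and course grade.
module_mean_scores = [88, 92, 75, 60, 95, 70, 83, 79, 91, 66]
course_grades      = [85, 90, 72, 58, 97, 68, 80, 77, 93, 63]

# Normality check (Shapiro-Wilk), as applied to assessment indices.
w, p_norm = stats.shapiro(course_grades)

# Pearson correlation between an assessment index and course grade.
r, p_corr = stats.pearsonr(module_mean_scores, course_grades)

# Hypothetical pre/post Likert composites: Mann-Whitney U per-item
# comparisons and, because responses were de-identified (unpaired),
# a two-tailed unpaired t test on composite scores.
pre_composite  = [3.4, 3.1, 2.9, 3.6, 3.2, 3.0]
post_composite = [2.6, 2.4, 2.8, 2.5, 2.7, 2.3]
u, p_mwu = stats.mannwhitneyu(pre_composite, post_composite,
                              alternative="two-sided")
t, p_t = stats.ttest_ind(pre_composite, post_composite)

print(f"r = {r:.2f}, unpaired t-test p = {p_t:.4f}")
```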

3 Results

Westside Test Anxiety Scale survey response statistics for the pre- and post-survey disseminations are presented in Table 1. Fifty-four students participated in the pre-survey, whereas forty-eight students participated in the post-survey. Anxiety responses indicated reductions in feelings of extreme anxiety for all 10 statements. Combined percent responses pertaining to high-to-extreme anxiety (i.e., selections 4 and 5) decreased for all statements between surveys. In contrast, combined percent responses referencing low anxiety (i.e., selections 1 and 2) increased for all statements except Statement 10, which pertained to anxiety associated with writing assignments.

Table 1 First Westside Test Anxiety Scale survey results

The medians and modes of responses to pre-post survey statements were compared (Table 1). Selection medians decreased between surveys for Statements 3, 5, 7 and 9, whereas selection modes decreased for Statements 1, 3, 4, 5, 6 and 9. Only one increase in an anxiety measure—the mode for Statement 10—occurred in the post-survey.

Statistically significant (p ≤ 0.05) changes in Likert question responses for the following statements were observed in the post-survey.

  • Statement 3: During important exams, I think that I am doing awful or that I may fail.

  • Statement 9: After an exam, I worry about whether I did well enough.

In addition, a statistically significant difference (p < 0.0001) between pre-post survey composite Likert scale score was observed.

Shapiro–Wilk test results revealed normal distributions for module and assessment data. Significant correlations were noted between course grade and several exam indices (Table 2), including a weak negative association with number of attempts made during the final two weeks of module availability. Weak and moderate positive associations were observed between course grade and exam low score and exam high score, respectively. A strong association was observed between course grade and mean exam score. In reference to practice modules, course grade was moderately and significantly associated with module low score, while module mean score and module high score were strongly and significantly associated with course grade (Table 2).

Table 2 Exam and module parameter correlations to final grade outcomes

The distribution of grade results and their associations with exam and module attempts are displayed in Table 3. The data depict an association of the fewest attempts with both the highest and lowest grades in the course.

Table 3 Attempts by final grade

Representative comments about the use of self-pacing and repetition of practice and assessments are included in Table 4. Thematic analysis yielded four themes: control, comprehension, anxiety and deadlines. Students indicated favorable opinions for use of study interventions due to the advantages of increased outcome control through multiple attempts, improved comprehension of concepts through self-pacing, and reduction of anxiety through self-pacing. Several students communicated a desire for assessment submission deadlines at regular intervals.

Table 4 Student comments about self-pacing and multiple-attempt assessment

4 Discussion

Westside Test Anxiety Scale survey results indicated consistent reduction in student test anxiety following exposure to self-pacing and assessment repetition. Perhaps most notable was the substantial decrease in the percentages of students who selected the “extremely high anxiety” response for most statements. Statements pertaining to negative feelings during exams (Statement 3) and post-exam concerns about performance (Statement 9) were significantly different between surveys and share the common feature of assessing general test anxiety, albeit at distinct moments of the test encounter process. Other survey statements are specific by comparison. Statement 10—the lone statement in which “low anxiety” percentages decreased on the post-survey—referenced writing assignments. This statement was least applicable to course procedures. Pharmacy calculations assessments feature short-answer numerical responses, so there were minimal writing tasks during the semester.

Use of multiple attempts on Blackboard Learn 9.1 LMS created more opportunities for practice and assessment. Online modules presented an efficient and accessible way for students to reinforce concepts. In addition, attachment of a graded component to these online activities was a positive motivator for students. The inclusion of these activities reduced students’ perceptions of anxiety about calculations testing. Repeated low-stakes testing has been reported to assist students in acclimating to tests and to reduce test anxiety [21, 27, 38, 39].

Greater control of destiny was a common theme identified in student comments (Table 4). Control of factors such as time, environment and grade determination are deemed advantages of self-pacing and multiple-attempt assessments in multiple contexts [40,41,42,43]. Several students commented that self-pacing assisted in their comprehension of concepts. Some students were happy about the ability to learn the subject at a manageable pace, which might have proven difficult under a traditional assessment structure. Students also alluded to strengthening concept mastery through repetition. This is consistent with reports of repetitive assessment promoting mastery of concepts [32, 44]. Some students explicitly referenced feelings of reduced anxiety because of the assessment interventions. The comments were notable because students had not been prompted to discuss the interventions in this context. Previously, Tchen and colleagues [45] reported self-pacing to have neutral effects on feelings of anxiety in advanced pharmacy practice experience students. Despite the cited positives of self-pacing, a common critique offered by students was the desire for structured, regular submission deadlines for assessments rather than a single deadline at the conclusion of the course. Strategies such as implementation of formal quizzes [46] and incorporation of time intervals and progress indicators [47] have been reported to improve student engagement with self-pacing.

Moderate and significant correlations observed between course grade and exam indices suggest an association between self-paced exam activity and grade performance. Whereas total exam attempts had very little association with course grades, “last-minute” exam attempts exhibited a significant negative association with grades. Last-minute bursts of exam attempt activity were especially prevalent among students who performed poorly in the course, which might indicate a lack of preparation in students who procrastinated during much of the semester. In contrast, students who completed their exams in pace with lecture topics achieved more successful course grades. Procrastination has been cited as a major drawback of self-paced, limitless scoring opportunities [48].

Strong correlations among module indices (high score and mean score) and course grades suggest that module performance was associated with course performance. This result is reminiscent of observations that student homework performance better correlates with course grade outcome than formal exam performance [49], and that unsupervised online assessment enhances exam performance [50]. Module exercises included automatic feedback regarding both correct and incorrect answer submissions. Use of item feedback has been associated with positive reinforcement of memory during task learning [51].

The results allude to the notion that better-prepared, higher-performing students required less assessment repetition to achieve desired grade outcomes, while some lesser-prepared students made fewer assessment attempts than might have been optimal for success. Among students who earned “F” grades in the course, some made no attempts at all during the final two weeks of the course. It is possible that the observed performance trends among stronger and weaker students had a negating effect on correlations between attempt numbers and grades. Weak correlation of unlimited attempts with assessment outcomes [52] and better assessment performance with fewer attempts [53] have also been reported by investigators. Poor performance has been observed despite use of unlimited-attempt learning and assessment strategies [54], including a lack of significant improvement in knowledge retention of pharmacy calculations with repeated testing despite student perceptions of successful retention [36].

5 Limitations

There were limitations to the study. A single cohort was exposed to self-pacing and repeat testing. Partitioning the class into two cohorts—a group encouraged to self-pace and a group receiving traditional assessment—would have been an intriguing design.

The modules consisted of questions randomly drawn from test banks. Some students expressed frustrations about encountering new questions with each module attempt. Use of curated, restricted lists of module questions could prove effective in encouraging module engagement in the future. In addition, module assessments were un-proctored. Although this decision was rooted in the belief that academic dishonesty would likely result in poor performance on the final exam, its weakness is acknowledged.

Procrastination was a deterrent to student success. Despite students’ warm reception of self-pacing, the desire for assessment submission deadlines warrants attention. Students were allowed access to the modules for a brief interval after the conclusion of the final exam. Although this access was intended to provide students with more time to obtain maximum module scores, the gesture was likely detrimental to the worst procrastinators. Imposition of access deadlines at intervals during the semester and/or prior to the final exam might have better prepared procrastinators for the final exam. Because the final exam accounted for a substantial percentage of students’ course grades, a higher-than-usual proportion of students failed the course. Better measures will be taken to limit procrastination in the future. Inhibition of procrastination might be integral to decreasing the failure rate in the course.

6 Conclusions

This investigation featured use of self-paced, multiple-attempt assessments to reduce student anxiety about pharmacy calculations. Use of multiple-attempt exams and modules contributed to statistically significant reduction in students’ perceived anxiety over the course of a semester. Although some association was observed among indices of assessment performance and course grades, procrastination in completing self-paced assessments had a negative impact on grade performance in weaker students. Refinement of assessment and grading measures is warranted to promote improvement not only in student comfort and reception, but in student performance in pharmacy calculations as well.