Introduction

In 1995, Becker and Watts (1996) began a national quinquennial survey of teaching and assessment methods in undergraduate economics in response to calls for college instructors to devote more time to using a variety of teaching methods and less time doing “chalk and talk” (Watts and Schaur 2011). The authors of the original survey sought to document how economics was being taught, planning to provide longitudinal updates to track trends in the profession. The quinquennial survey of academic economists (Becker and Watts 1996, 2001; Watts and Becker 2008; Watts and Schaur 2011) includes a section on teaching methods; a section on assessment methods; and a section on instructor, school, and department characteristics. By including both teaching and assessment methods in the survey, Becker and Watts acknowledged the importance of documenting both types of activities to accurately chronicle the teaching and learning of economics. Choosing how to assess student learning is a key element in teaching, as it provides a method by which instructors can gauge effectiveness in meeting learning goals (Rebeck and Asarta 2012). Assessment is of primary concern to students as well, as it defines the makeup of their course grades.

This manuscript focuses on the assessment section of the 2020 quinquennial survey and is critical for continuing the longitudinal efforts started by Becker and Watts in 1995. Results from the 2015 administration of the survey were neither published nor distributed, due to the unexpected passing of Professor Michael Watts. The 2010 results, however, showed an increase in the variation of assessment methods, with more weight being placed on writing assignments and presentations and less weight being placed on in-class exams. Thus, the discipline seemed to be moving toward assessment methods that Schaur et al. (2008) had linked to an increase in student engagement. That shift, however, is not evident in 2020; the results of the 2020 survey are more similar to those documented in 2000. While we document a small increase in the variety of miscellaneous types of assessment used in undergraduate economics courses, the overall results suggest that the discipline has moved backwards in the last decade, at least in terms of assessment.

The first section of this article provides a brief history of assessment methods from the quinquennial survey. Then, the second section documents the assessment method results for the 2020 quinquennial survey and is divided according to type of course (e.g., intermediate theory). Finally, the last section provides a discussion of the findings and a conclusion.

A Brief History of Assessment Methods from the Quinquennial Survey

From 1995–2010, results from the national quinquennial surveys of undergraduate economics instructors conducted by Becker and Watts, and later Schaur, documented teaching methods and assessment methods used in four types of undergraduate economics classes: introductory economics, intermediate theory, statistics and econometrics, and other upper-division field courses (Becker and Watts 1996, 2001; Watts and Becker 2008; Watts and Schaur 2011). For the assessment methods section, the authors provided a list of ten different assessment methods in three broad categories that instructors might use. Surveyed instructors were asked to focus on the composition of students’ course grades by reporting the percentage of the grades determined by a specific assessment method. From 1995–2010, the authors saw little change in reported teaching methods, but discovered notable changes in assessment activities, with an increase in average weights assigned to different types of written assignments and student presentations.

As presented by Watts and Schaur (2011), from 1995–2010 the authors observed that the graded activity most frequently used in introductory courses was multiple-choice examinations, accounting for an average of 42–51% of a student’s grade. Exams were commonly used in other types of economics courses, too, but multiple-choice assessment made up only 18% of students’ grades in intermediate theory courses, on average, and even less in statistics, econometrics, and other upper-division field courses. Short-answer examination questions were common in all four types of undergraduate economics courses, ranging from an average of about 30% of students’ grades in introductory classes to closer to 40–50% in other classes. By 2010, long-answer exam questions made up about 30–35% of students’ grades, on average, in all types of classes except introductory economics, where the average fell to 17%. The weight assigned to homework and problem sets increased, on average, in all four types of classes, from 14% to as much as 32%. Term papers were most commonly used in statistics/econometrics and other upper-division field classes, representing as much as 30% of students’ grades, on average. The average weight assigned to short papers and oral presentations increased across all courses, but particularly in statistics, econometrics, and other upper-division field classes. Class participation made up a small but increasing part of students’ grades in all four types of courses over this period. Watts and Schaur noted that the shift toward more variety in assessment methods occurred during a period of little change in teaching methods.

Results from the 1995–2010 surveys illustrated a shift in the profession toward giving increased weight to written assignments in all types of economics courses. Less reliance on traditional in-class exams reduced high-stakes assessments, reduced the focus on test-taking abilities, and provided students with more time to review and edit their written coursework. This offered students greater opportunity to apply knowledge through term papers in statistics and econometrics courses, and papers and presentations in other upper-division field courses. Prior research from the Harvard Assessment Seminars had reported that the amount of writing required in a course was more strongly related to student engagement than any other course characteristic (Light 1992). Based on this evidence, Schaur et al. (2008) linked the observed changes in assessment methods in undergraduate economics to an increase in student engagement.

In recent years, the National Survey of Student Engagement (NSSE) has been used to track how student engagement has changed over time. In a 2019 report, researchers noted that students, freshmen as well as seniors, were spending more time on academic preparation than they did previously, although this increase had leveled off in recent years (NSSE 2019). According to the report:

These increases correspond to as much as two more hours per week for all students, on average. Spending more time on academics is a positive outcome, whether the result is from higher expectations, more emphasis on collaborative learning, or wider adoption of new instructional methods such as flipped classrooms, problem-based learning, or real-world applications. (NSSE 2019, p.4).

The changes described in the NSSE report, in general, are similar to what Becker and Watts’ prior surveys showed occurring in economics, at least with respect to assessment methods, with economists placing more emphasis on writing and other work prepared outside of class.

The 2020 National Quinquennial Survey

Asarta et al. (2021) described in detail the development and administration of the 2020 national quinquennial survey on teaching and assessment methods as part of the first published report of basic findings on teaching methods in introductory economics courses. The survey, approved by the University of Delaware Institutional Review Board, was distributed online to 11,544 full-time and part-time instructors teaching at U.S. higher education institutions. Data collection ended before the COVID-19 global pandemic led to widespread changes in higher education modalities. The authors used the Qualtrics software platform (Qualtrics 2020), utilizing built-in branch logic to reduce participants’ cognitive load and minimize the amount of time spent completing the survey. Response rates, as well as the distribution of instructors based on the Carnegie Classification of Institutions of Higher Education, are presented in Asarta et al. (2021). Overall, 1,664 instructors completed the survey in 2020; that number represents a response rate of 14.4%, 3.9 percentage points higher than in 2010.

Part I of the survey focused exclusively on teaching methods in undergraduate economics courses, a topic that is not the focus of this manuscript. For Part II, participants were asked to indicate the percentage (0–100%) of a student’s course grade determined by examinations (broken down into categories of multiple-choice questions and short- and long-answer essays/problems), written assignments (divided into term papers, shorter papers, homework/problem sets, and other written assignments), and miscellaneous other assessment methods (including class participation; oral presentations; performance in games, simulations, or experiments; and other miscellaneous types of assignments).

Survey Results

The results for the assessment methods section of the quinquennial survey, as presented below, focus on the responses provided by instructors teaching undergraduate economics courses. In keeping with the presentation of the quinquennial survey results established by Becker and Watts (1996), 2020 mean and median responses for the assessment method questions are provided in Table 1. As indicated by Watts and Becker (2008) and Asarta et al. (2021), we are unable to establish whether those responding to the survey are representative of all undergraduate instructors in the USA.

Table 1 Assessment methods (% of grade determined by different types of assignments or assessment instruments): mean (and median) responses in percentages, 2020

In 2020, examinations remained the predominant assessment method used in undergraduate economics courses of all types. Within the examinations category, multiple-choice questions dominated in introductory classes, while short-answer questions dominated in intermediate theory, statistics and econometrics, and other upper-division field courses. These findings may be driven by class size. However, as noted in Schaur et al. (2008), average class sizes were “remarkably stable” across the survey periods (Schaur et al. 2008, p. 553). In fact, median class sizes for the different types of courses were nearly the same in 2020 as reported in earlier years: 40 students in introductory classes, 30 students in intermediate theory courses, 25 students in statistics and econometrics, and 25 students in other upper-division field classes. In intermediate theory and econometrics and statistics courses, the assessment method with the next highest average weight after examinations was homework and problem sets; in upper-division field courses, it was term papers. In introductory courses, the assessment method with the second-highest average weight was other miscellaneous assessment methods. Descriptions of these other miscellaneous assessment activities provided by respondents included, among other things: in-class knowledge checks; attendance; various types of projects; video and blog creation assignments; in-class assessments using student response systems (e.g., clickers); and various online activities and assignments, such as adaptive assignments, some created by instructors and others by publishers. Other miscellaneous assessment activities ranked third in terms of the percentage of students’ grades in intermediate theory. In statistics, they ranked fourth, after examinations, homework and problem sets, and term papers. Much lower weight was placed on other miscellaneous assessment activities in upper-division field courses, compared to the specific assessment activities listed on the survey.

To investigate the “Miscellaneous Other” assessment activities further, we turned to the 93 open-ended descriptions of assessment methods offered by survey respondents. The survey did not ask for separate text answers for each type of course taught, and about 77% of the 93 respondents taught more than one type of class, so it is not clear precisely which miscellaneous activities were used in which types of classes. About 80% of the 93 respondents taught introductory economics, 76% taught other upper-division field classes, 46% taught intermediate theory courses, and 20% taught statistics and econometrics. Although we were unable to identify specific learning goals for these “other” assignments, some of the descriptions hinted at whether the cognitive process being assessed would be categorized as higher-order based on Bloom’s (revised) taxonomy (Anderson et al. 2001). Reviewing the responses, we estimated that 52% of the “Miscellaneous Other” activities required students to evaluate (23%) or create (29%), both higher-level cognitive processes: “…students cannot answer these types of assessments correctly by relying on memory alone” (Anderson et al. 2001, p. 71); instead, they must justify a position or decision (evaluate) or produce new or original work (create). The 93 instructors who provided open-ended descriptions represented only about 6% of the survey respondents, but the evidence suggests that a small number of instructors are requiring students to engage with the material by evaluating and creating. These educators may be shifting the weight of their grades away from the more traditional types of assessments and toward current technologies (e.g., video production), blog creation, debates, peer reviews, and reflective journal articles. For 20% of the 93 respondents, the miscellaneous assessment activities involved some form of quiz, whether in-class, instructor-created, collaborative, or app-based. In these cases, we do not know whether the quizzes were knowledge checks or more conceptual in nature. It is unlikely, however, that these activities involved the type of cognitive learning associated with evaluating or creating. As such, instructors may be shifting grade weight away from written assignments, class participation, and oral presentations and toward activities that rely more on textbooks and publisher materials. Perhaps, as described by Ongeri (2017), instructors desire to cover as much of a textbook’s content as possible during a course and thus rely on the publisher-created materials that accompany the text. This may also be a way to provide more low-stakes assessment activities in lieu of reliance on high-stakes exams. The increased use of miscellaneous other assessment activities is a new observation in 2020 and suggests that changes to the survey are needed before the next quinquennial administration, to better document the types of assessments used in the different types of courses.

For 2020, we present and compare the assessment method results from various years in a different format, to highlight differences that arise from the online administration of the most recent survey. In Qualtrics, we were able to design the assessment methods question so that responses for a particular type of course had to sum to 100%. In the earlier paper-and-pencil surveys, there was no such restriction because it could not be enforced (Watts and Schaur 2011). In all cases, the question asks respondents to estimate the weight that different types of assessments carry in students’ grades. To compare the 2020 results with those from 2000 and 2010, we transformed the responses reported by Becker, Watts, and later Schaur into percentages that sum to 100: the average values for each type of assessment presented in their tables (Watts and Schaur 2011) were summed to obtain a total for a particular year, and the average for each assessment type was then divided by that total. These rescaled weights provide comparable average estimates of the weights assigned to the different types of assessment activities and are presented in Figs. 1, 2, 3, and 4.
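The rescaling just described is a simple normalization. The sketch below illustrates it in Python; the category weights used here are hypothetical placeholders, not the published values, which appear in the tables of Watts and Schaur (2011).

```python
# Hypothetical average weights (% of grade) as reported in an earlier,
# paper-and-pencil survey year; because responses were unconstrained,
# they need not sum to 100.
raw_averages = {
    "multiple-choice exams": 42,
    "short-answer exams": 30,
    "long-answer exams": 17,
    "homework/problem sets": 14,
    "term papers": 4,
}

# Sum the category averages to get the yearly total, then divide each
# average by that total to obtain weights summing to 100.
total = sum(raw_averages.values())
normalized = {k: 100 * v / total for k, v in raw_averages.items()}

# The rescaled weights are now comparable to the 2020 responses, which
# Qualtrics constrained to sum to 100 at collection time.
assert abs(sum(normalized.values()) - 100) < 1e-9
```

With these placeholder inputs, the raw averages sum to 107, so the multiple-choice weight of 42 rescales to roughly 39% of the normalized total.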

Introductory Courses

According to Table 2, the number of introductory economics instructors completing the assessment section and responding to the personal information and background section of the survey ranged from 582 to 622, depending on the question asked. Sixty-two percent of those instructors self-identified as male. Eighty-nine percent were white and 6% were Hispanic. On average, these instructors had 17.49 years of teaching experience. Ninety percent of respondents indicated they had earned a Ph.D., and 70% were in a tenure-track position.

Table 2 Instructors’ personal information and background per course type, 2020

Figure 1 illustrates the average percentage of a student’s grade determined by different types of assignments or assessment instruments in three broad categories in introductory economics courses for 2000, 2010, and 2020. The primary assessment method for these courses in 2020 was in-class exams with multiple-choice questions, with an average weight of 37%. After declining in 2010, the relative importance of this assessment method rebounded in 2020 to slightly above its 2000 level. Short-answer exam questions remained a significant part of students’ overall course grades, with little change in their relative importance over time. Additionally, comparing the 2020 results with the transformed values for earlier years shows a decline in the average percentage of a student’s course grade determined by long-answer exam questions.

Fig. 1
figure 1

Average weights of assessment activities in students’ course grades in introductory economics courses. Notes: While specific numbers of observations are not provided in the tables presented by Watts and Schaur (2011), the authors report that 591 surveys were returned in 2000 and 424 surveys were returned in 2010. A subset of those respondents would have reported on assessment methods used in each type of class, and those results are used in Figs. 1–4. For the 2020 survey, the numbers of observations are as follows: introductory courses, N=627; intermediate theory courses, N=366; statistics and econometrics courses, N=277; other upper-division field courses, N=678.

All forms of written assignments have decreased in weight in introductory economics courses. Term papers and shorter papers rarely counted toward students’ grades in 2020, with mean weights of 1% and 3%, respectively. Homework and problem sets made up, on average, 10% of a student’s grade, a significantly lower weight than the 15% reported in 2010. The mean weight on other written assignments remained relatively low, with little change over time.

Class participation; oral presentations; and performance in games, simulations, or experiments contributed less to a student’s grade, on average, in 2020 than in earlier years. Miscellaneous other types of assessment activities increased to 14% of a student's grade, which is significantly higher than the 1% observed in 2000 and 2% seen in 2010. Descriptions of some of these other activities were provided above; clearly, changes are happening regarding assessment methods in introductory economics classes. Overall, our findings seem to indicate that introductory economics courses still tend to use multiple-choice and short-answer examination questions as the main instruments to calculate students’ grades, with increasing use of a variety of other types of assessment methods.

Intermediate Theory Courses

According to Table 2, the number of intermediate theory instructors completing the assessment section and responding to the personal information and background section of the survey ranged from 344 to 362, depending on the question asked. Sixty-eight percent of respondents self-identified as male. Ninety-one percent were white, and 6% were Hispanic. On average, instructors had 17.39 years of teaching experience. Ninety-six percent of respondents indicated they had earned a Ph.D., and 76% were in a tenure-track position.

Figure 2 presents the average percentage of a student’s grade determined by different types of assignments or assessment instruments in three broad categories in intermediate theory economics courses for 2000, 2010, and 2020. In 2020, the primary assessment method for those courses was examinations with short-answer questions. The mean weight for this type of examination assessment method increased from 27% in 2000 to 31% in 2020. The assessment method with the second-highest average weight was long-answer examination questions, although the weight declined significantly, from 27% in 2000 to 18% in 2020. The average weight assigned to multiple-choice questions on an exam increased to 16% in 2020, after declining to 11% in 2010.

Fig. 2
figure 2

Average weights of assessment activities in students’ course grades in intermediate theory economics courses

Consistent with the findings in introductory economics courses, term papers and shorter papers were rarely used in intermediate theory economics courses, and their average weight in students’ grades has declined over time. Homework and problem sets remained the primary method of assessment within the written assignments category, although their weight in determining students’ grades declined from 19% in 2010 to 15% in 2020.

Oral presentations were rarely used to determine grades in intermediate theory courses. The weights on class participation and performance in games, simulations, or experiments were low and have decreased over time, on average. There was a large increase in miscellaneous other assessment activities, from an average of 2% of students’ grades in 2010 to 7% in 2020. Overall, our findings show that students’ grades in intermediate theory economics courses are predominantly determined by examinations comprised of essays, problems (short- and long-answer), and multiple-choice questions, along with homework and problem sets.

Statistics and Econometrics Courses

According to Table 2, the number of statistics and econometrics instructors completing the assessment section and responding to the personal information and background section of the survey ranged from 256 to 273, depending on the question asked. These sample numbers represent the lowest instructor participation for any of the four categories of courses included in the quinquennial survey. Sixty-three percent of respondents self-identified as male. Eighty-seven percent were white, and 7% were Hispanic. On average, instructors had 16.39 years of teaching experience. Ninety-six percent of respondents had earned a Ph.D., and 77% were in a tenure-track position.

Figure 3 illustrates the average percentage of a student’s grade determined by different types of assignments or assessment instruments in three broad categories in statistics and econometrics courses for 2000, 2010, and 2020. Consistent with the findings for intermediate theory courses, short-answer questions on examinations were the predominant assessment method in statistics and econometrics courses, with an average weight on students’ grades of 31% in 2020. The average weight assigned to long-answer questions on examinations declined from 24% in 2000 to 10% in 2020. Similar to the results reported for introductory and intermediate theory courses, the mean weight for multiple-choice questions increased to 10% in 2020 after declining in 2010.

Fig. 3
figure 3

Average weights of assessment activities in students’ course grades in statistics and econometrics courses

When it comes to written assignments, the highest weights, on average, have consistently been assigned to term papers and to homework and problem sets since 2000. These weights increased from 2000 to 2010 but declined in 2020. This pattern also holds for the average weight assigned to shorter papers, which fell to 3% in 2020. The average weight assigned to other written assignments has remained fairly low and constant over time.

The average weights assigned to class participation; oral presentations; and performance in games, simulations, or experiments were relatively low in 2020, less than in previous years. The weight assigned to miscellaneous other types of assessment, while still low at an average of 8%, increased significantly from about 1% in both 2000 and 2010. This is due to the use of online assessment activities and other types of projects and assignments, as described earlier. Overall, our results for statistics and econometrics courses are similar to those for intermediate theory courses: the use of examinations comprised of short-answer questions is the main instrument for calculating students’ grades. Other notable methods of assessment include homework and problem sets, term papers, and other types of questions on exams.

Other Upper-Division Field Courses

According to Table 2, the number of “other upper-division” instructors completing the assessment section and responding to the personal information and background section of the survey ranged from 641 to 671, depending on the question asked. These sample numbers represent the highest instructor participation for any of the four categories of courses included in the survey, an expected finding given the large number of different upper-division field courses available at U.S. institutions. Sixty-four percent of respondents self-identified as male. Ninety percent were white, and 5% were Hispanic. On average, these instructors had the most teaching experience of the four groups, at 18.80 years. Ninety-six percent of respondents held a Ph.D., and 78% were in a tenure-track position.

Figure 4 presents the average percentage of a student’s grade determined by different types of assignments or assessment instruments in other upper-division field courses for 2000, 2010, and 2020. As was the case in intermediate theory and statistics and econometrics courses, the main assessment method in other upper-division field courses was short-answer questions on exams, which carried an average weight of 23% of a student’s grade in 2020. The weight on long-answer examination questions declined from a mean weight of 24% in 2000 to 15% in 2020, while the average weight on multiple-choice exam questions increased to 10% in 2020. Once again, this pattern is consistent with the results for all of the other types of economics courses.

Fig. 4
figure 4

Average weights of assessment activities in students’ course grades in other upper-division economics field courses

While term papers remained the top assessment method within the written assignments category for other upper-division field courses, their average weight on students’ grades declined to 13% in 2020. Homework and problem sets, as well as shorter papers, have also experienced relatively small declines, on average, since 2010.

Class participation and oral presentations have declined in weight since 2010, to 6% each in 2020. Performance in games, simulations, or experiments was an insignificant part of a student’s grade in field classes at an average of 1%. The weight on miscellaneous other types of assessment activities increased in 2020, just like in the other types of courses, although it was still relatively low at 5%, on average. Overall, the results for other upper-division economics courses presented a more balanced approach to assessment methods, between essays and problems on exams, a variety of written assignments, and some use of multiple-choice exam questions. It is worth noting that the average weight of 6% each for class participation and oral presentations, despite being relatively low, was still higher than in other types of economics courses.

Discussion and Conclusion

The similarities in observed changes over time in four different types of undergraduate economics courses are noteworthy. Table 2 provides evidence that the number of survey respondents for each type of course differed in 2020. However, the patterns of change in assessment methods used in the different types of courses were similar. This suggests that the observed shifts are discipline-wide, with little differentiation among specific course types. In all four types of courses—introductory, intermediate theory, statistics and econometrics, and other upper-division field courses—the weight placed on multiple-choice exam questions in determining students’ grades, after falling in 2010, increased in 2020 to average levels higher than those observed in 2000. The average weight on examinations with short-answer questions increased, while the average weight on examinations with long-answer questions decreased. The weights assigned to all types of written assignments declined or stayed the same, on average, in 2020. In the broad category of miscellaneous assessment activities, weights were lower (or unchanged) for all types of activities in all types of courses, except the miscellaneous other activities category. This result, combined with typed survey responses describing some of the other activities used, indicates a small but observable increase in the variety of assessment methods being used in undergraduate economics courses. In introductory courses, an average of 14% of students’ grades was determined by miscellaneous other types of activities. This percentage is 7% in intermediate theory courses, 8% in statistics and econometrics courses, and 5% in other upper-division field courses.

Some of the variations were driven by technological change. Examples of new technologies being used include student response systems, discussion board assignments, and online adaptive assignments. Discussion board assignments engage students by requiring them to share thoughts and comments in small groups, while online adaptive assignments can be graded in lieu of or in addition to requiring students to read the textbook. Other activities, such as take-home exams, debates, and poster presentations, may reflect some instructors’ efforts to decrease the weight on in-class exams and reduce in-class stress levels and cognitive load. Looking to the future, the COVID-19 global pandemic will likely increase the pace at which instructors adopt virtual teaching and assessment methods. With most instruction shifted to online delivery, at least for the 2020–21 academic year, it is likely that the use of online assessment methods will continue to increase. We will adapt the survey to incorporate probable changes more explicitly in future administrations.

In describing changes observed from 2000–2010, Watts and Schaur (2011) noted a reduction in the average weight on exams and an increase in the average weight on written assignments in all types of economics courses. This shift did not continue from 2010–2020. The progress in student engagement that Schaur et al. (2008) linked to the increase in the amount of written work required in undergraduate economics courses had vanished by 2020. However, the small observed shift toward greater variety in assessment activities may hint at increased student engagement as described in the 2019 NSSE report. At least some of the miscellaneous other assessment activities may increase the amount of time students spend with the material and offer more opportunities for engagement. Instructors who provide variety in their assessment methods could also foster inclusiveness, as described by Gosselin and Gagné (2014), since different methods require students to use different skills and intellectual abilities to demonstrate knowledge and understanding of economics. The increase in variety may also indicate greater use of low-stakes formative assessments that provide feedback signals to students throughout the semester, as opposed to feedback being provided only on a high-stakes exam.

The 2020 survey results show that there is a more balanced mix of assessment types used in other upper-division field courses than in the other types of undergraduate economics courses. However, based on the descriptions of miscellaneous other types of assessments being used, the small increase in variety observed in introductory courses, in particular, may signal that a few instructors are using more application activities and introducing beginning students to more of the actual work of economists, rather than following the historical approach of waiting for that work to take place after students choose the major and advance to upper-division courses. In reporting results of changes in teaching methods from the 2020 quinquennial survey, Asarta et al. (2021) and Harter and Asarta (forthcoming) noted increases in the use of cooperative learning/small-group assignments, as well as in instructor-led and “student(s) with student(s)” discussions. These methods may coincide with the use of some of the newer assessment activities involving video production, blog creation, debates, peer reviews, and reflective journal articles. If this shift continues, it could increase interest in undergraduate economics and promote more diversity and inclusion in the classroom. We plan to update the survey for the 2025 administration to account for updates in teaching methods and assessment methods. Perhaps future survey results will show more balance in the mix of assessment activities used in lower-level economics courses. Making these types of changes to the way introductory economics is taught and assessed might attract more students and a broader segment of the population to the discipline, which would strengthen the discipline of economics (Bayer and Rouse 2016; Bayer et al. 2020).

In working to identify determinants of instructor choices of assessment methods, Schaur et al. (2012) used the 1995–2005 national survey data to determine that larger class sizes and higher teaching loads were associated with reduced weights placed on term papers and other written assignments in determining students’ grades. Then, in 2010, they observed a shift toward greater use of written assignments when teaching undergraduate economics, despite median class sizes remaining the same. While class sizes vary by institution type and course, median class sizes in 2020 were nearly the same as reported for 2010 and earlier, and results showed a shift toward lower weights placed on written assignments in determining students’ grades. Future analyses of the mix of instructors across course types and institutional characteristics may yield additional insights about what drives instructor choice in assessment methods as well as teaching methods. In addition, the changes forced on economics educators by the exogenous shock of the COVID-19 pandemic, along with the current attention being paid within our discipline to the inclusion of more diverse voices, may lead to a reshaping of the assessment methods used in undergraduate economics courses. This report provides the necessary foundation to be able to measure and track that transformation in the future.