Up front, I will say that faculty, not administrators, should lead the evaluation of course and teaching effectiveness. Why? A brief explanation follows.

This fall the Institute of Higher Education kicked off the 24th year of the Governor’s Teaching Fellows program, https://ihe.uga.edu/governors-teaching-fellows/. The program was established by former Georgia Governor Zell Miller to engage faculty members from public and private post-secondary institutions across Georgia in workshops and seminars to improve teaching and learning. Each year approximately 36 participants are designated as fellows through a competitive application process, and they participate in either intermittent academic-year seminars or an intensive two-week May session. Over time, more than 600 faculty have been designated as fellows, and they have shared their skills, challenges, and goals for instructional improvement. While the topics and challenges have shifted across cohorts, the participants’ joy for teaching and their commitment to students and to improvement have never waned.

A new GTF cohort launched in August, and I was lucky enough to spend the afternoon with them. The program coordinator asked the fellows to introduce themselves, describing their backgrounds and goals for participation. All taught undergraduates, yet their fields and disciplines were diverse. Little of what they said was a surprise: many of the faculty mentioned institutional shifts toward more online courses and programs to satisfy enrollments and provide accessibility. One science professor talked about working to solve the DFW (i.e., drop, fail, withdrawal) problem, and others expressed interest in building hybrid courses or flipping the classroom. Another noted the difficulty of generating the same enthusiasm for online teaching that she held in abundance for teaching face-to-face. The mention of Generation Z also brought some head nodding, and all seemed to share the stress of balancing “imperative content” and “active learning.”

The GTF cohort was diverse in age, nationality, gender, and discipline—in fact, strikingly diverse. Yet they all shared a visible enthusiasm for their roles as college teachers. The dialogue was intimate, complex, humble, and joyful, and it carried a slight undercurrent of stress. The fellows talked about their challenges and their aspirations for improved courses and student learning, all in front of “strangers” whom they clearly recognized and trusted as peers. Now that’s the basis for progress! One went so far as to tell the group that she was in a rut, a real rut, and that she was looking to rebuild her enthusiasm for a course she had taught many times. Notably, all were actively teaching at their home institutions—three, four, or more courses during the term—while serving as fellows, and no one mentioned student evaluations of teaching.

Coincidentally, a few days after the first GTF seminar, I came across two online articles on student evaluations of teaching (SETs). The American Sociological Association (ASA, 2019) had just released a statement noting that student evaluations of teaching are nearly ubiquitous in North America. SET instruments are easy to use, cheap to implement, and commonly used in personnel decisions (e.g., promotion, tenure, merit raises). The two-page ASA statement goes on to highlight statistical problems with the instruments, potential bias against women and people of color, problematic interactions with course characteristics (e.g., subject, class size), and weak associations with other measures of teaching effectiveness. Colleen Flaherty (2019) reported in Inside Higher Ed that the American Historical Association (AHA) and at least a dozen other faculty-led professional associations have called the current use of SETs into question. Moreover, some higher education institutions have announced they will no longer use SETs as the primary measure of teaching effectiveness or in high-stakes personnel decisions.

Based on its review of the literature, the ASA (2019) recommends the use of “evidence-based best practices” to evaluate courses and teaching. It recommends that SETs be part of a holistic approach that would include revised instruments focused on feedback rather than ratings, peer observations, reviews of teaching materials, and instructor self-reflection.

BINGO! The GTF cohort exhibited an abundance of critical awareness and self-appraisal regarding instructional performance, student learning, and ways to improve. Perhaps the fellows’ omission of SETs as a topic reflects the inevitability of end-of-course student evaluations and an acceptance of their limited value. Or maybe some of their campuses have perfected an evidence-based process! I suspect the former.

I applaud the associations for their formal statements on current evaluation practices and the need for reform; however, institutional change will be difficult. Professional associations like the ASA and AHA, where faculty are held together by disciplinary interests, have few levers with which to enact institutional change. Yet, building on the interest shown by professional associations, now is a propitious time for a grassroots, campus-level effort. Across disciplines and fields, faculty could be supported by Centers for Teaching and Learning or other faculty entities to rethink the process used to evaluate and improve the central mission of all colleges and universities: instruction. Administrators need not develop a new system alone or by directive.

I heard GTF participants from a diversity of institutional types speak. We would be wise to turn this important, essential responsibility over to such dedicated faculty members, who can be found on every campus. I believe they will carefully design and implement new holistic approaches, and that they will support one another and hold each other accountable for the best in teaching and learning. Faculty members are up to the task.