1 Introduction

External program review is a requirement for publicly funded postsecondary institutions in North America, whether as part of institutional accreditation (United States) or under provincial and territorial regulations (Canada), and it is regarded as a best practice for ensuring academic quality [23, 31, 33]. In Canada, program review is often considered the “gold standard” (the perceived highest level of evidence) for evaluating academic programs and curriculum in postsecondary institutions because of its comprehensive nature. Provincial regulations call for an evaluation of each degree program at least once every five to seven years. In British Columbia, for example, the provincial government requires that postsecondary institutions have robust, structured processes in place to evaluate program quality, including a self-study undertaken by faculty members and an assessment conducted by external experts who produce an external reviewer report. Further, institutions must provide a formal response to the recommendations outlined in the external reviewer report. Compliance with this provincial policy is regularly assessed through the Quality Assurance Process Audit—an external review process that assures that public postsecondary institutions periodically conduct rigorous, ongoing program and institutional quality assessment.

However, despite the importance placed on program review by external regulators, there is skepticism regarding the ability of program review to impact institutional planning [10, 11] and to engage academics in a meaningful way [9] in support of continuous improvement. Furthermore, little literature provides theoretical or empirical evidence of the effective use of program review in institutional strategic planning, even though scholars frequently applaud its potential utility for such purposes [3, 6, 12].

Program (or curricular) review, a subset of program evaluation, is a concept commonly associated with postsecondary quality assurance and is designed “to reform and revitalize the curriculum” [10, p. 5]. It is a “critical, evidence-based examination of an academic program for the purpose of optimizing student learning and student experience” [13]. Despite this intended emphasis on enhancing teaching and learning, as noted by DiPietro et al., scholars have acknowledged a disconnect between quality assurance practices and academic development [16, 19, 28]: increasingly, quality assurance activities such as program review are managed by offices of quality assurance and administrators with accountability agendas rather than facilitated by educational developers for faculty development and continuous quality improvement. Research has shown that quality assurance practices that emphasize accountability over improvement are perceived as threatening [23], which can lead to passive engagement or active disengagement.

Hakkola and King [18] drew attention to the need for more developmental approaches to program review through highly collaborative, inclusive, and transparent practices. Their model for program review contrasts with transactional evaluation approaches (see [43] for a description of transactional versus relational evaluation) by allowing teams of faculty, staff, and students to define unique evaluation questions and determine discipline-specific measures rather than following a pre-determined approach. While Hakkola and King’s innovative model provides maximum autonomy to the intended users of the data, several challenges can stymie such an approach (as Hakkola and King noted), including personnel turnover, faculty engagement, and changes in leadership. We believe that our structured, course-based approach to program review, led by teams of faculty members, can address the challenges that Hakkola and King identified by balancing established curriculum (i.e., program review course content) with faculty choice. By operationalizing the academic Program Review Learning Community model (PRLC) [24] through a course structure, we aim to provide a developmental approach to program review with the supports and structures necessary to increase the impact and sustainability of program reviews.

Canadian public postsecondary institutions are provincially mandated to engage in external program review; despite this requirement, little theoretical or practical guidance exists on how to complete a review [35]. As Senter et al. [35] noted, it is often assumed that department chairs have the skills to complete the required components effectively with limited resources and minimal guidance. The aim of this paper is to build and expand upon the PRLC by describing a practical process for operationalizing the PRLC model using a course structure within a learning management system (e.g., Moodle or Blackboard). For context, the approach discussed in this paper is coordinated through an office of quality assurance, under the portfolio of an associate vice-president academic, in collaboration with a centre for teaching and learning and a department of institutional research. The PRLC model relies on course facilitators and connections with collaborative services who orchestrate and coordinate various aspects of the process. Though the nature of the internal collaboration may vary with campus culture, the authors believe the model proposed in this paper offers opportunities for customization based on local context and customs.

2 Theoretical background

When considering an epistemological approach to engaging in program review, our priorities fit well within social constructionism [5]. We believe that program review team members (faculty) are the experts and provide the deepest and most valuable knowledge for program evaluation. As quality assurance practitioners and academic developers, our goal was to create an environment that encouraged peer-to-peer learning and engagement from departmental faculty. We contend that if team members are given an opportunity for reciprocal dialogue, they will collectively create the knowledge needed to advance program evaluation and development, increasing perceptions of faculty ownership and buy-in [17].

In addition to concepts of social constructionism, we propose that the PRLC is positioned within an appreciative paradigm, which offers a potentially postcolonial approach to conducting program review [32]. Viewing program review through an appreciative lens acknowledges that those who have shown up to participate in and complete the review hold the theoretical and experiential knowledge necessary to move the process forward, and it invites a wide audience of people to participate in the process. As a central concept, appreciative inquiry calls on us to value the contributions of all team members while creating a system that generates the momentum needed to drive itself forward through a shared ethos, curiosity, and connection [1, 42].

Appreciative inquiry (AI) is underpinned by “social constructionist theory and practice, the new sciences (quantum physics, chaos theory, complexity theory and self-organizing systems), research on the power of image, and research on the power of the positive” [42]. The guiding principles that serve as the foundations of AI stress that practitioners must challenge themselves to think about systems [42, p. 84] rather than concrete problems that need solving. By framing the system as having underlying patterns that may require not a solution but a change in perspective or orientation, the AI approach deconstructs strategic planning and encourages learning and grounded observation [42]. AI practitioners also posit that a strengths focus helps create change through learning and growth, giving everyone involved an opportunity to learn and work together toward shared and meaningful goals [7, 21, 38, 39]. How shared learning is achieved is described in more depth under Module 3 (SOAR Analysis) below.

Finally, we suggest that appreciative inquiry extends beyond the discipline or unit undergoing review. That is, program review can be supported and facilitated by institutional quality assurance practitioners, academic faculty developers, institutional researchers, and academic project managers. To effectively support the PRLC, a lens of appreciative inquiry should inform the design of all components of the review process, including the tools and resources as well as institutional policies and procedures. An institutional transition towards an appreciative approach may require a significant cultural shift, yet it could advance the institution’s review process from evaluation as transactional to evaluation as relational [43]. This change from a task-based orientation to a process-based orientation is in line with constructionist assumptions that leadership, change, and learning are developmental, continuous, reciprocal, and action-oriented [26, 42]. It also means making a deliberate effort to involve part-time instructors, alumni, and students in the program review, and to compensate them for their time and input.

3 Course-based approach to facilitate multiple program reviews

Program review has typically been completed as a required academic exercise framed within an accountability paradigm [15], rather than as a learning tool for academic development and continuous quality improvement [16, 19, 28]. The authors propose that a professional learning community offers a novel and meaningful way to approach program review, particularly when the community is embedded in a reliable infrastructure such as a course. Elements of a PRLC include distributed leadership, peer coaching, involvement of senior leadership, external coaching, alignment with institutional mission and vision, and internal and external regulatory requirements [24].

A course structure suits this new way of thinking about program review because successful learning communities are often embedded within institutional structures and aligned with a change initiative [27]. We recommend that the Program Review Course (the “Course”) be centralized within an office of quality assurance, a teaching and learning centre, or a similar administrative unit that connects with staff and faculty. Additionally, we see value in quality assurance practitioners acting as program review facilitators who can provide the necessary expertise and consistent resourcing for conducting periodic program reviews. Furthermore, we suggest that a learning management system, such as Blackboard or Moodle, can offer options for accessibility, self-directed learning, and cohort interaction while acting as a central repository of all resources supporting delivery of program review content.

Our intention was to purposefully create a program review structure that reduced transactional evaluation methods, increased faculty engagement, distributed leadership, involved a wide range of participants, and provided tools and opportunities for academic development so that team members became drivers of their own process. The PRLC and Course engage faculty in meaningful ways by avoiding excessive bureaucracy, enhancing transparency, providing user-friendly systems, and adopting a partnership approach that empowers participants in the process; these factors have been shown to build a trust-based quality culture [14]. Additionally, we have found that recommendations from the program review are incorporated in a timely fashion rather than languishing in a report on a shelf.

The Course was developed with the idea of combining the theoretical underpinnings of professional learning communities, appreciative inquiry, and the practical complexities of cyclical program review. The first author’s experience as a teacher, working with diverse learners and needing to adapt curriculum and pedagogy to multiple learners’ needs, provided the impetus for the course-based approach to program review. Pragmatically, the Course offers a means to manage multiple program reviews at once, which is essential when facilitating multiple reviews annually with limited resources. Furthermore, using learning modules to create efficiency and cooperation in delivering program review tools and customized support is intuitively appealing. The Course was expected to reduce workload on overburdened faculty, increase “buy-in” and the perception that program review is a useful planning tool, provide opportunities for multiple perspectives and voices, and reduce the tendency for self-study reports to be written by a single individual. Further, use of this approach was expected to increase clarity and consistency of the process, improve administrative feasibility, provide guidance for those new to program review, promote collegiality within and across programs, and involve team members fully in the creation of an action plan.

It was important that our approach be flexible so that components could be facilitated and delivered regardless of the makeup of the academic and service units within the postsecondary organization. Involving multiple units moves program review from an individualistic to a collectivist system, ensuring the knowledge is not “owned” by one person or unit but is instead a valued resource shared by the whole. Our course-based approach offers a practical method for implementing the academic Program Review Learning Community model—a tangible toolbox for quality assurance practitioners and academic developers facilitating curriculum review in a postsecondary setting. Note that, while we provide a summary overview of the course modules in this paper, the Program Review Handbook [22] offers a detailed manual for practitioners to adopt, adapt, and share the practices described herein.

The 14-month Course described here is cohort-based and co-led by a quality assurance practitioner and academic developers who guide program review teams through eight modules. The Course curriculum is available in a learning management system and delivered both asynchronously and synchronously. Faculty participating in the course engage with their peers, the quality assurance practitioner, and academic developers in interactive workshops, team meetings, virtual information sessions, and self-directed learning activities. The modules are sufficiently spaced to allow ample time to complete activities and to recognize that each program will be at a different stage of development and expertise with respect to writing or revising program learning outcomes. Importantly, program review teams are guided through the modules by experienced academic developers and quality assurance practitioners who can attend to the different needs of program review teams.

Program faculty volunteer to participate as part of a discipline-specific team of three to five academics plus the departmental chair. Normally, the faculty dean is involved in an advisory capacity, although the dean is also given full access to the Course and the opportunity to join the team if desired. From our perspective, it is important that programs have full agency in determining team members who are experientially and theoretically committed to creating and informing change.

At most Canadian postsecondary institutions, programs are reviewed on a five- or seven-year cycle. The Course described in this paper is designed for annual intake to facilitate this throughput. In the spring, a new cohort of program review teams is enrolled in the Course and gains access to timelines, templates, a discussion forum, and other resources. Over a period of 14 months, faculty are guided through a comprehensive review of their program and/or department across eight modules: (1) orientation to program review, (2) articulation or refinement of program learning outcomes and a program curriculum map, (3) strategic planning via an appreciative approach, (4) survey of key stakeholders, (5) completion of a self-study report, (6) engagement with external reviewers, (7) development of a seven-year action plan, and (8) presentation of results to the university community.
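
To make the cadence concrete, the sketch below lays out one hypothetical cohort calendar in Python. The module sequence follows the list above, but the month offsets and intake date are illustrative assumptions on our part, not fixed features of the model.

```python
from datetime import date

# Hypothetical month offsets (from spring intake) for the eight modules;
# the sequence comes from the Course, but the spacing shown is assumed.
MODULES = [
    (0,  "Orientation to program review"),
    (1,  "Program learning outcomes and curriculum map"),
    (4,  "SOAR analysis (strategic planning)"),
    (6,  "Survey of key stakeholders"),
    (8,  "Self-study report"),
    (11, "External review"),
    (12, "Action plan"),
    (14, "Report to university community"),
]

def cohort_calendar(intake: date) -> list[tuple[date, str]]:
    """Map each module to a target month for a cohort starting at `intake`."""
    def add_months(d: date, months: int) -> date:
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1, day=1)
    return [(add_months(intake, offset), title) for offset, title in MODULES]

# Example: a cohort enrolled in spring 2024 finishes 14 months later.
for due, title in cohort_calendar(date(2024, 4, 1)):
    print(f"{due:%Y-%m}  {title}")
```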

4 Module 1: Orientation

The orientation, a structured event often several hours in length, introduces program faculty to the Program Review Course. It provides protected time and space for program faculty to build relationships with an interdisciplinary cohort of peers representing faculty members, chairs, and deans from a diversity of disciplines, to establish connections with the administrators and staff who will provide support and resources during the Course, and to actively engage in planning the subsequent stages of the program review. Facilitated by a quality assurance practitioner, the orientation involves sharing the purpose and intent of program review, building faculty buy-in, and promoting faculty ownership of the process. More recently, we have also used this event as an opportunity for faculty development requested by attendees, such as sessions on Indigenizing the curriculum. An overview of the eight program review modules is presented, and faculty are introduced to resources, templates, and timelines and given time to begin planning the components of the review.

The orientation is designed to be participatory and to introduce faculty to a shared framework for engaging in program review. Orientation activities enable faculty to formulate and negotiate their own program goals and activities, which has been identified as crucial for building faculty buy-in [41]. The orientation is further designed to foster trust among program review team members and within the interdisciplinary cohort by establishing group norms, and roles and responsibilities for engaging in program review, with the primary goal to build a PRLC—a collegial, team-based approach to program review that is faculty-led, evidence-based, and designed to improve teaching and learning [25].

5 Module 2: Program learning outcomes and curriculum mapping

The second module focuses on the articulation or refinement of program learning outcomes (PLOs). The development and assessment of student learning outcomes is widely recognized as a crucial step in evaluating a program's quality. PLOs describe the knowledge, skills, behaviours, and attitudes we seek in our program graduates. It may be time to re-evaluate PLOs when program priorities change owing to factors such as a new dean or departmental chair, shifts in environmental, social, cultural, or economic priorities, significant growth or reduction in student numbers, or expressions of student and faculty dissatisfaction. Developing PLOs and a program curriculum map involves surveying individual program faculty to elicit desired program outcomes and then collectively sorting, co-creating, and refining the outcomes with the aim of reaching consensus through facilitated dialogue. Research suggests that coupling academic development with opportunities to reflect on student learning and assessment prompts faculty to make changes to their own courses and pedagogical approaches [2, 8].

Curriculum mapping is a valuable tool for visually representing the program curriculum and demonstrating alignment between course, program, and institutional learning outcomes [29]; it also facilitates assignment and assessment design. Once the department has reached consensus on the PLOs, faculty engage in curriculum mapping, identifying the PLOs to which their courses contribute and the assessment techniques used to evaluate achievement of those PLOs. This information is then collated to produce a program curriculum map. The resulting map can help faculty identify gaps and redundancies with the goal of building a coherent curriculum.
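
As a minimal sketch of the collation step, the snippet below represents each course's contribution as a mapping from PLOs to assessment techniques and then flags gaps and (crudely) possible redundancies. The course names, PLO labels, and redundancy heuristic are all hypothetical.

```python
# Each course maps the PLOs it contributes to onto the assessment
# techniques used to evaluate them (courses and PLOs are hypothetical).
curriculum_map = {
    "BIOL 101": {"PLO1": ["exam"], "PLO2": ["lab report"]},
    "BIOL 210": {"PLO1": ["exam"], "PLO3": ["presentation"]},
    "BIOL 499": {"PLO1": ["capstone project"], "PLO3": ["thesis"]},
}
program_learning_outcomes = ["PLO1", "PLO2", "PLO3", "PLO4"]

# Collate faculty responses: which courses address each PLO?
coverage = {plo: [c for c, plos in curriculum_map.items() if plo in plos]
            for plo in program_learning_outcomes}

# Gaps: PLOs no course addresses. Redundancies (a crude illustrative
# heuristic): PLOs assessed in every single course, worth discussing
# rather than automatically pruning.
gaps = [plo for plo, courses in coverage.items() if not courses]
redundancies = [plo for plo, courses in coverage.items()
                if len(courses) == len(curriculum_map)]

print("Gaps:", gaps)                           # -> Gaps: ['PLO4']
print("Possible redundancies:", redundancies)  # -> ['PLO1']
```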

6 Module 3: SOAR (strengths, opportunities, aspirations, results) analysis

SOAR Analysis is a strategic planning activity underpinned by appreciative inquiry—an orientation that helps people find patterns and possibilities within group processes, utilizing a range of approaches to leadership, evaluation, and group planning rather than one specific technique. The analysis focuses on the program’s strengths, opportunities, aspirations, and results (SOAR). It draws from, and is frequently contrasted with, the well-known SWOT (Strengths, Weaknesses, Opportunities, and Threats) approach, notably replacing Weaknesses and Threats with Aspirations and Results. Both frameworks are used to guide strategic conversations with groups; however, there are differences in how the conversations are governed and who can be involved (see Stavros and Cole [36]). Specifically, SOAR conversations that utilize an appreciative inquiry approach are open to all stakeholders rather than only top-level executives or a leadership team. Program review teams are asked to invite students, alumni, and community partners to the SOAR workshop to gain insight from a wider range of perspectives.

Participants in the SOAR Analysis are guided through carefully worded questions that ask them to identify what they are proud of, their successes, what opportunities they see, and what success looks like (see [37]). The SOAR looks inward to the collective organization while also looking outward toward opportunities for ideation and growth. Using an appreciative approach, individuals are encouraged to share stories of their experiences within the organization, revealing how people experience their work as part of the organization and illuminating the myriad interconnected parts at play when diverse people are connected in a dynamic space.

7 Module 4: Community engagement

Limited student engagement in quality assurance processes is common in North America. A recent report published by the Curriculum Working Group Meeting of the Council of Ontario Educational Developers [20] showed that current practices “fall short of recognizing the centrality of [the student] perspective and experience” (p. 14). Scholars have argued that more attention should be given to student perceptions of success and belonging [4, 25, 34, 40]. Therefore, a comprehensive evaluation of a program requires that student perspectives be considered in determining program performance. As such, we provide programs with four questionnaire templates to elicit feedback from students, alumni, faculty, and employers.

The program review questionnaire templates are standardized; however, they can be modified to meet disciplinary needs, program goals, and other contextual factors. We encourage program review teams and other users of the templates to carefully consider contextual factors rather than applying the templates wholesale, as they may not fit each program’s focus, concerns, or context. Survey templates include questions about the university’s strategic priorities to evaluate the degree to which programs contribute to mission fulfilment. We also ask students and alumni to reflect on the knowledge, skills, and abilities gained during their education as they relate to the achievement of program and institutional learning outcomes, an indirect measure of student success. Other survey questions relate to program structure and delivery, curriculum, admissions and advising, and program strengths and opportunities for improvement. The survey templates can be found in the Program Review Handbook [22]. Support is provided through the office of quality assurance to build and distribute the surveys and produce summary reports for faculty to interpret and analyze. The survey findings contribute to goal setting and action planning (Module 7).
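
The following sketch illustrates the shared-core-plus-customization idea behind the templates. The question wording, audience labels, and function are placeholders of our own devising, not the actual templates from the handbook [22].

```python
from copy import deepcopy

# Illustrative standardized core; the real templates live in the handbook [22].
CORE_TEMPLATE = {
    "audience": None,
    "questions": [
        "How well did the program support your learning goals?",
        "Rate the program's contribution to the university's strategic priorities.",
        "Which program learning outcomes did you achieve? (select all that apply)",
    ],
}

def build_survey(audience: str, extra_questions=()):
    """Start from the standardized core, then layer on discipline-specific items."""
    survey = deepcopy(CORE_TEMPLATE)  # leave the shared core untouched
    survey["audience"] = audience
    survey["questions"].extend(extra_questions)
    return survey

# A program team tailors the alumni template without forking the core.
alumni_survey = build_survey(
    "alumni",
    extra_questions=["How relevant was the curriculum to your current employment?"],
)
print(alumni_survey["audience"], len(alumni_survey["questions"]))  # alumni 4
```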

8 Module 5: Self-study report

The self-study report is a tool for stimulating conversations and questioning assumptions about program performance. Topics addressed in the self-study report often include program context, curriculum and assurance of learning, student achievement, governance and resources, planning, and sustainability. Additional topics may include external and internal demand, revenue generation and expenses, economic and social impact [12], the program’s contribution to mission fulfilment, program coherence [30], and evidence of student learning [6]. The preceding Course modules inform and contribute to the completion of the self-study report, which in turn serves as an orienting structure for external reviewers by bounding the work of the review into a unified document.

The self-study report provides a balance of quantitative measures (e.g., enrolment, retention, and graduation rates; ratio of students to faculty; library holdings) and qualitative measures (e.g., faculty members’ perceptions and experiences regarding adequacy of facilities, collegial governance, and program sustainability) for a comprehensive evaluation of the program. The report is intended to be written collaboratively by faculty, and conversations can be facilitated by a quality assurance practitioner or academic developer using the self-study template as a guide.

9 Module 6: External review

Postsecondary institutions in North America must undertake periodic academic review of degree programs. Canadian provincial requirements further stipulate that the review include advice from an external review panel, normally consisting of three academic experts. The incorporation of external, discipline-specific, and industry experts (where appropriate) is an important aspect of academic program review. Fundamental to university culture, peer review derives from the notion of a community of scholars, which supersedes academic administration. Evaluations that are not peer reviewed may carry less weight among academics, limiting their credibility and, in turn, their capacity to stimulate program improvement.

As part of the Course, program review teams are provided with criteria for nominating external reviewers. The criteria include disciplinary expertise, administrative and curriculum development experience, diversity, and conflict of interest. The list of nominees is approved by the dean and provost (or designate). Staff within the office of quality assurance provide support for external reviewer recruitment and scheduling.

The external review site visit spans two days to ensure ample time for reviewers to meet with students, alumni, faculty, the dean, and the provost (or designate). A campus tour and dinner are often incorporated into the site visit. External reviewers receive a welcome package at least four weeks before the site visit that includes an overview of the university’s program review process, terms of reference, a sample site visit agenda, and information regarding travel, accommodations, and reimbursement of expenses. Reviewers are given a report template with broad categories, such as environmental and contextual factors impacting the program, curriculum, assessment of student learning, student achievement, departmental governance, strategic planning, and sustainability, and are asked to provide recommendations and commendations, which contribute to the development of the program’s action plan. To increase the institutional impact of program review, reviewers are also asked to link recommendations and commendations to available program and faculty strategic plans and priorities. The recommendations and commendations can then be further mapped to institutional strategic priorities so that action planning contributes to the institution’s full strategic planning cycle.
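
The sketch below shows one way an office of quality assurance might record these linkages so that recommendations roll up to the institutional level. The recommendation text and priority names are placeholders, not items from an actual review.

```python
# Hypothetical linkage of reviewer recommendations to program and
# institutional strategic priorities (all labels are placeholders).
recommendations = [
    {"text": "Formalize a capstone assessment rubric",
     "program_priorities": ["experiential learning"],
     "institutional_priorities": ["student success"]},
    {"text": "Increase sessional instructor participation in governance",
     "program_priorities": [],
     "institutional_priorities": ["collegial governance"]},
]

# Roll up: which institutional priorities does this review speak to,
# and which recommendations still need a linkage before action planning?
touched = sorted({p for r in recommendations
                  for p in r["institutional_priorities"]})
unlinked = [r["text"] for r in recommendations
            if not (r["program_priorities"] or r["institutional_priorities"])]

print("Institutional priorities addressed:", touched)
print("Recommendations needing linkage:", unlinked)
```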

10 Module 7: Action plan

Module 7 emphasizes participation, dialogue, and the development of enduring frameworks for continuously assessing goal alignment and relevance and the department’s responsiveness to changing learner needs. The module has two primary deliverables: (1) an action plan and (2) a framework for implementing the action plan. During the workshop, participants identify broad themes evident in the data, draft goals based on those themes while distinguishing areas the program can modify (e.g., curriculum) from those often beyond the control of program faculty (e.g., faculty hires), and explore frameworks for implementing the goals. The dual products of the workshop are a foundational document that faculty can build upon and a framework for regularly and frequently revisiting, revising, and rethinking the conditions needed to achieve the stated goals, including questioning the goals’ ongoing relevance over the next five to seven years. The action plan identifies six to ten goals and their associated tasks, milestones, measurable outcomes, timelines, and responsible stakeholders.
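
As a minimal sketch of the action plan's shape, the structure below captures the elements named above (goals, tasks, milestones, measurable outcomes, timelines, responsible stakeholders). The field names and the example goal are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One action-plan goal; fields mirror the elements listed in the prose."""
    theme: str                    # broad theme evident in the data
    statement: str
    within_program_control: bool  # e.g., curriculum (yes) vs. faculty hires (no)
    tasks: list = field(default_factory=list)
    milestones: list = field(default_factory=list)
    measurable_outcomes: list = field(default_factory=list)
    timeline: str = ""            # e.g., "Years 1-3"
    responsible: list = field(default_factory=list)

action_plan = [
    Goal(theme="curriculum coherence",
         statement="Scaffold research skills across all four years",
         within_program_control=True,
         tasks=["Revise the PLO map", "Redesign the second-year methods course"],
         milestones=["Revised map approved by curriculum committee"],
         measurable_outcomes=["Capstone rubric scores improve year over year"],
         timeline="Years 1-3",
         responsible=["curriculum committee chair"]),
]

# The model recommends six to ten goals; flag plans outside that range.
if not 6 <= len(action_plan) <= 10:
    print(f"Note: plan has {len(action_plan)} goal(s); the model recommends 6-10.")
```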

During the five-to-seven-year interim between program review cycles, we rarely see faculty gathering, analyzing, reflecting upon, and updating the information gathered during the review unless required to do so by programmatic accreditation. To address this gap, we complement the action plan with a framework for implementing it, intended to be embedded within departmental and institutional structures, such as curriculum committees, and appropriate to the local context and disciplinary culture. Program review teams are asked to reflect on questions regarding building awareness, engagement and buy-in, and accountability, and on how to integrate planning into departmental operations. The implementation framework is an essential element in ensuring that program review leads to institutional improvement. Linking actions to institutional priorities and implementation ensures that a clear plan exists for advancing continuous quality improvement within the program and collectively across the institution.

11 Module 8: Report to University Community

During Module 8, each program review team prepares and submits a final report to the university Senate. The final report's main audience is the university community, particularly the Academic Planning and Priorities Committee and university Senate. As such, it is intended to provide a brief overview of the review findings and list specific steps for program improvement. A final report template is provided to each program review team. In addition, the action plan is appended to the final report. The report is presented to the governing bodies by the program review team and dean, and shared publicly with the broader university community by posting on the university’s website. Presentation of the report to governing bodies allows the program review team to elicit feedback from their peers.

12 Discussion

Curricular review is an essential part of quality assurance in a postsecondary environment, ideally advancing the stated intention of optimizing student learning and experience [13]. However, despite wide acceptance that external review of academic programs is the “gold standard” for academic quality, little research has specifically addressed whether program review actually leads to program improvement. Given this apparent gap in the literature, we contribute to the discussion by providing both a conceptual and a practical model for academic program review.

This paper responds to Senter et al.’s [35] concern that theoretical and practical guidance for academic program review is often lacking or absent. Using an appreciative inquiry lens [42], the Course creates several advantages for programs completing program review within a cohort. One of its greatest strengths is its ability to demystify program review. Specifically, clear parameters, including timeframes and deadlines for completing the modules, set out before teams engage in the process, appear to help faculty move the process forward. The Course also helps faculty schedule their workload a year in advance and provides many tools, resources, and templates that are easily accessible, modifiable, and responsive to a variety of disciplines.

Other benefits of the Course are its modular structure and its ability to create an environment of trust. By breaking the program review process into eight modules dispersed over 14 months, we can sustain the forward momentum of program review teams even when progress appears slow, and the review becomes more manageable when tackled in discrete chunks. Each module is monitored by a quality assurance practitioner and/or academic developer, providing continued support to track and encourage progress and to identify barriers that may arise during the review. Furthermore, the modules are structured in a logical sequence so that each module builds upon and contributes to the next.

13 Limitations and future research

We acknowledge that our model for program review is resource intensive and requires proper institutional support from senior administration. Provosts and vice-presidents play a pivotal role in setting an agenda in which teaching and learning centres and offices of quality assurance share common goals and are encouraged to work collaboratively with each other and with faculty and students to support continuous quality improvement. In addition, the model is more effective when deans champion the process and are willing to invest the necessary resources (e.g., course releases for faculty, administrative support) and time (e.g., participating in workshops, reviewing documentation).

Finally, we recognize that we are presenting a practice-based approach without an accompanying evaluation. We are currently working on projects to evaluate the model and look forward to discussing them in future work. Note that data sharing is not applicable to this paper, as no new data were generated; our intent is to describe a project management process for program review based on the Program Review Learning Community model. However, we encourage others to build on and evaluate our approach so that the program review model is both improved and meaningfully evaluated.

14 Implications

When program review is effectively woven into existing institutional systems (i.e., as a mechanism for academic development and for actioning academic plans), it can complement, or even act in place of, existing institutional initiatives. For example, at our university, we have embedded processes for strategic planning, Indigenization, curriculum development, and program- and institutional-level learning outcomes and assessment into the program review modules. This broad perspective positions academic program review as a “tool” for continuous quality improvement.

While we are not yet able to measurably demonstrate the value of the PRLC and course, all early indications suggest that this model may prove a useful tool for other institutions when undertaking program review. We present these ideas as a “springboard” for interested researchers and practitioners to further advance academic external program review as a useful and well-supported academic exercise.

For readers considering adopting this approach, we encourage you to review our conceptual paper, which provides theoretical grounding and contextual factors, as well as our handbook, which provides detailed step-by-step instructions along with templates, lesson plans, presentations, and other valuable resources for implementing and adapting this approach to meet your unique institutional needs.