It is widely known that rural populations in the USA experience health disparities for various reasons, including higher poverty rates, greater health burdens, and a higher proportion of older adults. Rural communities also have fewer healthcare facilities and providers (Callaghan et al., 2023). In US rural communities, there are roughly 14 psychologists per 100,000 residents, less than half the number in urban areas (Andrilla et al., 2018). Furthermore, adults living in rural areas receive less mental health treatment overall, and the treatment they do receive tends to come from providers with less specialized training than their urban counterparts (Morales et al., 2020).

Although the provision of telemental healthcare has greatly improved access to needed services, its adoption in rural settings has not kept pace with the demand for mental health services in these areas (Myers, 2019). Additionally, virtual providers may be unfamiliar with the demographic concerns of rural communities or lack the cultural knowledge to treat these populations effectively. Furthermore, existing rural providers must often treat a diverse range of mental health conditions and populations, frequently without adequate resources (e.g., specialized training opportunities) or the professional support from colleagues that their urban counterparts enjoy (Hempel et al., 2015). The lack of healthcare training facilities in rural communities further limits the number of mental health providers in rural areas (Andrilla et al., 2018). Thus, identifying ways to enhance the functioning of rural training programs via technology is one avenue to reduce the rural health disparity.

Importance of Telesupervision in Rural Settings

Clinical supervision provides the foundation of psychological training, and supervisors serve as evaluators and gatekeepers for the profession (Falender, 2018). Providing training, supervision, and consultation opportunities through telesupervision in geographic areas of mental healthcare shortage gives future healthcare professionals experience working within rural settings while being supervised by individuals with cultural knowledge of the population served, and it may also aid in recruiting needed mental health professionals to these areas. Telesupervision may likewise offer providers in rural areas greater connection to, and enrichment of, their work environment through supervisory work, which may contribute to retention and sustainment of the rural mental health workforce.

Although telesupervision shows promise in increasing access to mental health care, with data supporting equivalence between in-person and telehealth modalities (Jordan & Shearer, 2019; Tarlow et al., 2020; Thompson et al., 2023), the literature thus far has focused primarily on trainee perspectives (Bernhard & Camins, 2021; Ferriby Ferber et al., 2021; Inman et al., 2019; Jordan & Shearer, 2019; Soheilian et al., 2023; Stein et al., 2023; Tarlow et al., 2020; Thompson et al., 2023), with few examinations of the perspectives of supervisors (Martin et al., 2022) or training directors (Frye et al., 2021). Satisfaction and supervisory alliance have also been the primary variables of interest (Bernhard & Camins, 2021; Inman et al., 2019; Jordan & Shearer, 2019; Schmittel et al., 2023; Soheilian et al., 2023; Tarlow et al., 2020; Thompson et al., 2023). The expanded use of telesupervision and the emerging literature base highlight the need to examine supervision quality as the modalities of supervision are further diversified via technology. Oversight and monitoring of the quality of supervision provided to emerging healthcare professionals offer a mechanism to ensure that access to mental healthcare in rural areas is increased without sacrificing the quality of patient care or training.

This paper describes the development and implementation of a competency-based supervision monitoring system (applicable to telesupervision and in-person modalities) within rural clinical psychology training programs. In the project, a methodological approach to supervision monitoring was developed to enable training programs to have increased oversight of the quality of supervision delivered within their treatment setting. This paper discusses the rationale for establishing a supervision monitoring system, identifies key needs when implementing such a system, explores solutions to meet these needs, and reviews the application of this system to training healthcare professionals.

Rationale and Benefits of Creating a System to Monitor Supervision

Accrediting entities and licensing boards are expanding allowances for telesupervision while bringing increased attention to measuring the impact of this form of supervision. For example, American Psychological Association (APA) accreditation standards have expanded the allowable use of telesupervision while also requiring programs to assess outcomes and trainee satisfaction with telesupervision (APA, 2018, 2019, 2023). Although the quantity and occurrence of supervision have been traditional measures of oversight, they fail to capture the quality or elements of the supervision provided. Finally, typical mid- or end-of-rotation or training-year evaluation periods are adequate for capturing trainee or program milestones but are less likely to enable timely adjustments within the supervision process.

Competency-based supervision has a well-established literature base (Falender & Shafranske, 2017, 2021; Falender et al., 2014; Grus, 2013), providing a framework for understanding trainee competency development and the elements of effective supervision. These elements include a working alliance between supervisor and supervisee inclusive of the resolution of strains/ruptures, consistent evaluative feedback, consistent supervision meetings, direct observation of clinical work, and opportunities for trainees to see skills modeled/experiential supervision (Falender, 2018; Falender & Shafranske, 2021). The competency-based supervision data collection system utilized for an implementation-effectiveness project examining telesupervision serves as a model for other programs to implement a low-cost, low-burden system that relies on automation to provide the information programs need to enhance oversight of training. Three key goals drove the development of the system: ensuring trainees receive effective supervision, ensuring trainee and patient safety and quality of care, and providing training directors with timely information to identify immediate clinical supervision concerns and longer-term program development needs. The questionnaire content is tailored to support competency-based supervision and telesupervision practices while supporting programs in gathering the data most pertinent to their needs.

Establishing a systematic monitoring system enables near real-time data collection on supervision functioning from the perspectives of both supervisor and trainee, providing an opportunity to reconcile their accounts of what is happening in supervision while also supporting assessment of outcomes consistent with training program accreditation. Training directors can identify potential concerns early (e.g., inconsistent supervision meetings or an inability to reach a supervisor during a patient emergency) and adjust promptly, enabling timely correction and intervention while automation reduces administrative burden.

The competency-based supervision data collection system further supports inclusion, diversity, equity, and access efforts. Monitoring trainees' responses promotes successful adaptation to the needs of diverse trainees (e.g., gathering perspectives that could otherwise be missed), while collecting data for use in aggregate form allows trainees to respond more candidly and may help offset the power differential inherent in an evaluative supervisory relationship. Further, improving the quality of supervision may improve the quality of clinical care (e.g., increased access for clients via trainee care and more equitable quality of care) and enable training programs to adjust supervision when its elements compromise equity and access.

Program Needs and Creating a System to Meet Those Needs

Training programs implementing a system to assess the provision of effective elements of supervision at more frequent intervals may face several challenges. In developing the current project, three key needs applicable to broader training programs emerged: (1) identifying what questions to ask, who to ask these questions to, and how often to assess specific elements of supervision; (2) identifying a low-cost, low-burden, and accessible method to collect relevant information; and (3) understanding how to utilize the information efficiently to support decision-making to improve one’s training program. Below, possible solutions to these three needs are presented.

Addressing Need 1: Who to Collect Information from and What to Collect?

As described above, the type and frequency of information collected by training programs may not adequately document the quality of supervision. However, it may be challenging for individual training programs to identify who to collect information from and what information to collect. To answer these questions, this project modeled the supervision monitoring system based on content and suggestions from the competency-based supervision framework (Falender & Shafranske, 2021) and APA supervision guidelines (APA, 2015) and accreditation standards (APA, 2018, 2019, 2023).

Who? Collect Questionnaires from Each Supervisor-Trainee Pair for Each Rotation

It is important to collect data from both supervisors and trainees at a minimum. Gathering data from only one side of the training relationship provides partial information while potentially communicating to trainees or supervisors that their input is unimportant. Other information sources included in the current project that can inform decision-making were training directors, patient outcome data collected from trainees, demographic information about trainees and supervisors, and aggregated evaluation ratings of trainees' competency development. Further, because perceptions of effective supervision can vary between supervisors and trainees and potentially across rotations, creating identifiers that link a supervisor-trainee pair to a specific rotation is critical when setting up the data system, as sketched below. Finally, when tailoring questionnaire content, parallel questions in the trainee- and supervisor-facing questionnaires should be phrased carefully so that corresponding responses from trainees and supervisors can be compared.
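As one illustration, the minimal sketch below (in Python) shows an identifier scheme linking a supervisor-trainee pair to a specific rotation so that parallel questionnaires can be matched later; the ID codes and rotation labels are illustrative placeholders, not the project's actual conventions.

```python
# Minimal sketch: a stable identifier linking a supervisor-trainee pair to a
# rotation so that parallel trainee- and supervisor-facing responses can be
# matched. IDs and rotation labels are illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class SupervisionDyad:
    supervisor_id: str   # anonymized supervisor code, e.g., "SUP-07"
    trainee_id: str      # anonymized trainee code, e.g., "TRN-12"
    rotation: str        # rotation label, e.g., "2024-fall-outpatient"

    @property
    def dyad_key(self) -> str:
        """Key shared by both questionnaires for this pairing and rotation."""
        return f"{self.supervisor_id}_{self.trainee_id}_{self.rotation}"

pair = SupervisionDyad("SUP-07", "TRN-12", "2024-fall-outpatient")
print(pair.dyad_key)  # SUP-07_TRN-12_2024-fall-outpatient
```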

What? Determine What Is Being Done, Whether Essential Elements Are Included, and Facilitators and Barriers

The items developed from the competency-based model framework and psychology training standards can be grouped into three areas: (1) information describing supervisory practices (e.g., frequency, modality, and technological problems affecting telesupervision), (2) measures of core elements of effective supervision (e.g., consistent access to supervision, provision of evaluative feedback, direct observation of clinical work, and supervisory working alliance), and (3) identification of facilitators and barriers in current supervisory processes. Items and response sets used to assess supervision content and processes in each of the three areas are provided in Appendix A, sections A1-A3, respectively.

Description of Supervisory Practices

An important part of providing effective supervision is understanding the basics of current supervisory practices. The quantity and frequency of supervision sessions across modalities (e.g., in-person or telesupervision) can affect the quality of supervision. Therefore, collecting information on the frequency, modality, and disruptions to supervision sessions in the current reporting period is important.

Measures of the Core Elements of Effective Supervision

Several aspects need to be assessed to determine whether high-quality supervision based on competency-based supervision standards is being provided, including core elements such as consistent supervision sessions and access to supervisors, consistent provision of evaluative feedback, direct observation of clinical work, and a working alliance between supervisor and supervisee inclusive of the resolution of strains or ruptures (Falender, 2018; Falender & Shafranske, 2021). Further, this project adapted the Supervision Session Checklist from Falender and Shafranske (2017) to assess whether sessions included discussion of the trainee's learning goals; the diversity/multicultural identities of the patient(s), supervisee, or supervisor and their interaction; engagement in experiential supervision; monitoring of patient progress; and the trainee's feelings, reactivity toward a patient, and the supervisory relationship. Finally, to assess working alliance, the Supervisory Working Alliance Inventory (SWAI; Efstation et al., 1990) was used, which consists of a 19-item trainee-facing and a 23-item supervisor-facing version and has demonstrated good internal consistency (Efstation et al., 1990; Reese et al., 2009). To minimize questionnaire burden, and in alignment with the project's focus on measuring working alliance within the full context of the training experience, SWAI data were collected at the end of each rotation. Whether using the SWAI or other supervisory relationship measures or items, programs may benefit from more frequent assessment to monitor the supervisory relationship and allow for earlier intervention when disruptions persist.

Identification of Facilitators and Barriers of Current Supervisory Practices

Overall, it is important to consider what information is necessary for training directors to effectively address and implement individual- and program-level changes both immediately and over time. Open-ended questions regarding general impressions on the quality of supervision during a rotation can supplement quantitative data and provide additional insight into potential ways to improve program functioning. In this project, open-ended questions were used to solicit information on overall experiences with supervision and facilitators and barriers to supervision.

Addressing Need 2: Utilizing Low-Cost and Accessible Data Collection Methods

Identifying a Low-Cost Data Collection Platform

Collecting and managing data for training program purposes can be costly and time-consuming. However, these tasks have become easier and more affordable due to the increasing availability of online survey platforms. Several free or low-cost platforms exist, each providing basic questionnaire formatting, distribution tools, response monitoring, and data summarizing capabilities. Free options include Qualtrics, SurveyMonkey, Google Forms, and Microsoft Forms. More advanced, paid platforms may also be accessible to training program staff through their university or organization. For example, the VA provides staff access to the electronic data capture system VA REDCap. Similarly, the current project used Stanford REDCap (Harris et al., 2009, 2019).
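As a brief illustration, the sketch below shows one way questionnaire responses might be pulled from a REDCap project through its standard export API; the endpoint URL and token are placeholders, and any platform offering an export or CSV download feature would serve the same purpose.

```python
# Minimal sketch: export questionnaire records from a REDCap project via its
# standard API. The endpoint URL and token are placeholders; platforms such as
# Google Forms or Microsoft Forms offer equivalent CSV/spreadsheet exports.
import requests

REDCAP_URL = "https://redcap.example.edu/api/"  # hypothetical institutional endpoint
API_TOKEN = "YOUR_PROJECT_API_TOKEN"            # project-specific token

def export_supervision_records() -> list[dict]:
    """Return all supervision questionnaire responses as a list of dicts."""
    response = requests.post(REDCAP_URL, data={
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
    })
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    records = export_supervision_records()
    print(f"Retrieved {len(records)} questionnaire responses")
```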

Notifications for Safety Issues and Adherence to Supervision Standards

Near real-time identification and notification of concerning adverse events or lapses in supervision standards can facilitate early intervention by training directors and prevent negative impacts on trainee and patient safety, quality of care, and effective supervision. To reduce the burden of actively monitoring for problematic responses, the survey platform can be programmed to automatically send alerts when certain responses to questionnaire items are received. For example, an email alert can be sent to the training director when a trainee reports that they could “rarely” or “never” contact their supervisor during a patient care crisis. Similarly, setting up an alert for a trainee or supervisor reporting that essential elements of supervision are not being addressed helps determine if competency-based supervision is being enacted and ensures that supervision standards are upheld throughout the training year.
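For illustration only, the sketch below shows how such an alert rule might look if implemented outside the survey platform; the field name, threshold values, and email addresses are hypothetical, and platforms such as REDCap can trigger comparable alerts natively.

```python
# Minimal sketch: flag safety-related responses and email the training director.
# The field name "supervisor_reachable_in_crisis" and the addresses are
# hypothetical; most survey platforms can send such alerts without extra code.
import smtplib
from email.message import EmailMessage

TRAINING_DIRECTOR = "training.director@example.org"  # placeholder address
ALERT_VALUES = {"rarely", "never"}

def safety_alerts(record: dict) -> list[str]:
    """Return alert messages for any safety-related lapses in one response."""
    alerts = []
    value = str(record.get("supervisor_reachable_in_crisis", "")).lower()
    if value in ALERT_VALUES:
        alerts.append(
            f"Response {record.get('record_id', '?')}: supervisor was "
            f"'{value}' reachable during a patient care crisis."
        )
    return alerts

def send_alert(message: str) -> None:
    """Email one alert to the training director (local SMTP relay assumed)."""
    msg = EmailMessage()
    msg["Subject"] = "Supervision monitoring alert"
    msg["From"] = "monitoring@example.org"
    msg["To"] = TRAINING_DIRECTOR
    msg.set_content(message)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```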

Employ Automation

To reduce staff burden and human error, consider employing automation wherever possible. Key areas where automation is helpful include (1) identifying potential lapses in safety and training standards (as described in the previous section), (2) distributing questionnaires, (3) ensuring questionnaire completion, and (4) producing data reports. Many distribution systems can be scheduled to send emails with questionnaire links automatically based on conditions set throughout the training year (e.g., a specific questionnaire send date, an uploaded schedule of completion dates, or a set schedule). For this project, questionnaires were sent monthly, with the SWAI and open-ended questions added at the end of rotations. To improve response rates, it is also helpful to set up automated notifications that remind participants when questionnaires have not been completed promptly. Most questionnaire platforms also have analytics dashboards that update automatically, and many can generate customized data reports with key metrics (e.g., frequencies for questionnaire responses).
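A minimal sketch of the reminder step follows, assuming a roster of expected supervisor-trainee dyads and a list of completed responses with hypothetical field names; in practice most survey platforms schedule these reminders automatically.

```python
# Minimal sketch: identify dyads with no questionnaire response for the current
# month so an automated reminder can be sent. "dyad_key" and "completed_month"
# are hypothetical field names; survey platforms typically automate this step.
from datetime import date

def find_nonresponders(roster: list[dict], responses: list[dict]) -> list[dict]:
    """Return roster entries lacking a completed questionnaire this month."""
    current_month = date.today().strftime("%Y-%m")
    responded = {
        r["dyad_key"] for r in responses if r.get("completed_month") == current_month
    }
    return [entry for entry in roster if entry["dyad_key"] not in responded]

# Each returned roster entry (which would carry an email address field) can then
# be sent a reminder containing a fresh questionnaire link.
```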

Addressing Need 3: Developing Actionable Metrics from Data Collected

A third need is using the collected data efficiently to support decision-making and foster a positive impact on supervisory practices. One key activity is tailoring a set of metrics to your specific training program and determining how to summarize the information so the program can act on it. It is also important to consider how to organize questionnaire information efficiently (e.g., into a monthly supervision report) to meet the three primary goals of the supervision model (i.e., providing effective supervision, ensuring quality patient care, and identifying program needs).

Tailoring a Set of Metrics for Your Training Program

To increase the utility of the data, a limited number of relevant metrics describing competency-based supervision processes should be identified based on the specific training program's goals. Careful consideration in deriving these metrics from questionnaire responses is an important step, as the metrics provide the basis for decision-making. Table 1 presents several potential metrics related to each of the three goals. We recommend using a small number of key metrics tailored to your training program's current goals rather than collecting so much information that it leads to inertia. Metrics can be changed over time as the program develops and sets new goals. In addition to metrics related to the three goals, we suggest including a few metrics summarizing the current supervisory practices discussed above (e.g., frequency and modality of sessions). To get an accurate view of current practices, it is important to determine both the number of supervisors and trainees who could be participating in the program (i.e., are all appropriate staff included in the data management system?) and the number who are completing the questionnaires (i.e., are you getting a representative sample of respondents?).

Table 1 Example metrics for competency-based supervision (CBS) for each goal
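A minimal computational sketch of how a few such metrics could be derived from exported responses appears below; the item names ("met_weekly", "feedback_given", "direct_observation") are illustrative placeholders rather than the project's actual items.

```python
# Minimal sketch: convert raw questionnaire records into a handful of
# program-level metrics like those in Table 1. Item names are illustrative.
def summarize_metrics(records: list[dict], expected_dyads: int) -> dict:
    """Return key competency-based supervision metrics for one reporting period."""
    n = len(records)

    def pct(item: str, value: str = "yes") -> float:
        # Proportion of responses endorsing the given value for one item.
        return sum(r.get(item) == value for r in records) / n if n else 0.0

    return {
        "response_rate": n / expected_dyads if expected_dyads else 0.0,
        "pct_consistent_meetings": pct("met_weekly"),
        "pct_feedback_provided": pct("feedback_given"),
        "pct_direct_observation": pct("direct_observation"),
    }
```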

Summarizing Information to Support Decision-Making

With a tailored set of metrics, training directors must consistently review the metrics against the program's goals to evaluate how well the program is functioning. The frequency of data monitoring can be tailored to the program's time and resources. Within the current project, monthly monitoring is embedded in question construction and data collection. The format used to present metrics can also vary depending on program staff preferences. For some sites, reviewing summary information (e.g., graphs and statistics) generated within the data collection platform may be sufficient. For example, the conversion of raw data to metrics (e.g., the percentage of trainees and supervisors providing data or the frequency of trainees reporting inconsistent supervision meetings) can be accomplished by reviewing descriptive statistics for each questionnaire item. Further, knowledge generated from key metrics, and training needs identified from them in support of the three goals of competency-based supervision, can be incorporated into existing program monitoring and development methods.

For other sites, a summary of the metrics in a standardized format may clarify how well the program is meeting competency-based supervision and other training needs. Figure 1 provides an example of a brief monthly report detailing several metrics for each of the three goals and descriptive information on supervision modality. For programs using telesupervision, the frequency and disruptiveness of technical difficulties can also be tracked.

Fig. 1 Example of a competency-based supervision monthly report
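As one illustration of automating such a summary, the sketch below renders the metrics from the hypothetical summarize_metrics() helper above into a brief plain-text monthly report; the layout and labels are illustrative and not the project's actual report format.

```python
# Minimal sketch: format program-level metrics into a brief plain-text monthly
# report resembling Fig. 1. Layout and labels are illustrative only.
def format_monthly_report(month: str, metrics: dict) -> str:
    lines = [f"Competency-Based Supervision Report: {month}", "-" * 44]
    for name, value in metrics.items():
        label = name.replace("_", " ").title()
        lines.append(f"{label:<35}{value:>8.1%}")
    return "\n".join(lines)

# Example usage with illustrative values only.
print(format_monthly_report("2024-10", {
    "response_rate": 0.92,
    "pct_consistent_meetings": 0.88,
    "pct_direct_observation": 0.75,
}))
```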

This project focused on defining, collecting, and summarizing information for use at the program level. Sharing data with trainees and supervisors should be done with caution. In this project, respondent anonymity was a primary goal in collecting all outcome data. Sites did not have access to their data, and safety procedures were designed to mask individual responses so that feedback would remain honest. When deficiencies in supervision practices are identified, issues can be addressed at the program level, serving as reminders to all supervisors and trainees about expected behaviors. Monthly reports and other summaries of information can be shared with all involved in the training programs as needed. For safety-related behaviors, anonymity may need to be waived, and training directors should work directly with those affected to maintain appropriate professional and clinical standards. Decisions about information sharing and anonymity should be communicated prior to data collection so respondents have clear expectations about how the information may be disseminated.

In sum, the main suggestions are that the data are used to generate a systematically derived set of metrics tailored to specific training program goals and that the information is reviewed and consistently used for program development.

Conclusion: Recommendations for Building and Utilizing a Monitoring System for Health Care Professional Trainees

Clinical supervision is the foundation of training clinicians, with supervisors serving as both evaluators and gatekeepers (Falender, 2018). Especially in rural settings, telesupervision holds a host of potential benefits, including improving rural population health, contributing to the sustainability of rural health training programs, increasing access to needed mental health care in areas of geographic shortage, and giving trainees access to supervisors with cultural and content expertise in providing care to diverse patient populations. Given this important role and these potential benefits, monitoring not only the frequency but also the quality and content of supervision is imperative.

Although the described monitoring system focused on psychology training, the implementation project provides a model for utilizing a competency-based monitoring system for supervision/telesupervision across programs and healthcare disciplines. While the application may vary, the following aims remain central: (1) ensuring trainees are receiving effective supervision/telesupervision, (2) ensuring trainee and patient safety and quality of care, and (3) providing training directors with timely information that addresses both immediate clinical supervision concerns and longer-term program development needs. In enabling this broader application of the system, it is of utmost importance first to establish how the data gathered will be used and to ensure that all parties involved in the training program have this knowledge. This is part of establishing quality informed consent for the training experience and increases the likelihood of genuine questionnaire responses. It further enables the increased monitoring and data gathering to become a normative part of program improvement, creating utility for supervisors and trainees in improving their experience rather than serving as another means of being evaluated.

Second, programs will need to consider their training needs and the discipline's literature on professional competencies when deciding what to monitor. Across health profession trainees, it is imperative that elements associated with the quality of supervision are tracked. Although derived from the psychology literature, the following elements apply to training in other health professions: the development of an effective working alliance between a trainee and their supervisor, consistent evaluative feedback, consistent supervision meetings and access to the supervisor (including ad hoc supervision), and direct observation of clinical work (Falender, 2018). Aspects of trainee care engagement that ensure patient safety (e.g., access to a supervisor during a crisis) should also be monitored. Furthermore, including questions related to diversity and multicultural identities serves multiple functions: enhancing patient care by attending to diversity and multicultural identities, enabling supervisors and trainees who may be underrepresented in the healthcare professions to share their experiences with reduced barriers, and allowing training programs to promptly elicit feedback from the diverse voices within their respective programs.

Once training program managers have decided what to monitor, there are numerous practical considerations in building a data collection system. Utilizing a low-cost data collection system capable of aggregating data throughout the implementation project was essential to making the monitoring system sustainable and deriving useful information. The implementation project also highlighted the importance of a system that alerts training directors when vital items need real-time correction (e.g., when the supervisor was unavailable during a patient emergency). The frequency of assessment should be tailored to the needs and structure of the training experience, with data collected in a manner that enables timely identification of discrepancies within the supervisor-trainee dyad and allows those discrepancies to be discussed and resolved quickly rather than waiting for formal programmatic evaluations. Data on these metrics of focus may be utilized to adjust program policy, identify areas of education needed for supervisors, identify problem areas that may require intervention, or identify ways a rotation experience could be augmented to bolster learning. Regardless of the focus of data utilization for each training program, consistent monitoring of supervision that translates into practical changes communicates a growth-focused, programmatically normative paradigm rather than a punitive response.

In sum, this paper highlights feasible strategies for health professions training programs to monitor supervision and supervision competencies efficiently and effectively over time. Given the importance of supervision in training healthcare professionals, it is imperative that training programs build oversight mechanisms to ensure high-caliber supervision and training. As more training programs embrace virtual supervision and training opportunities, it will become increasingly important that monitoring systems are in place. Future research is needed to determine the impact of telesupervision on rural mental health workforce recruitment and retention; however, implementing low-cost, high-impact supervision monitoring systems may allow rural training programs to ensure that trainees receive high-quality supervision and deliver excellent care, and that adequate support is provided for both trainees and supervisors.