Abstract
Rural populations face challenges receiving mental health treatment due to a shortage of providers and high turnover rates. Further, trainees typically participate in urban rather than rural training centers and often remain located where they train. Supervision via synchronous video-based communication, or telesupervision, may help increase the availability of supervisors in rural settings. There is ample research on telemental health, yet data regarding telesupervision are limited. While there is a gap in systematic approaches to ensuring that the quality of supervision is maintained, telesupervision use is increasing. This project examined in-person supervision and telesupervision via a competency-based supervision monitoring system within rural clinical psychology training programs. This paper argues for the use of a competency-based supervision model in psychology training programs and describes how the data management system developed for an implementation–effectiveness project assessing telesupervision use across seven VA training sites supports such a model. The project collected monthly data on telesupervision use, program-level data including professional competency ratings, and patient treatment and outcomes. The data collection system employed is used to outline the needs, and associated solutions, that help programs develop, monitor, and improve a competency-based approach to telesupervision. Automation and metrics can provide programs with the necessary supervision oversight information using low-burden, low-cost strategies. The paper concludes with best practices for utilizing a monitoring system to maintain the quality of training and clinical care when implementing new technology such as telesupervision. Findings further support the application of the monitoring system across healthcare training programs and supervision modalities.
It is widely known that rural populations in the USA experience health disparities for various reasons, including higher poverty rates, health problems, and a higher proportion of older adults. They also lack healthcare facilities and providers (Callaghan et al., 2023). In US rural communities, there are roughly 14 psychologists for every 100,000 rural residents—less than half the number in urban areas (Andrilla et al., 2018). Furthermore, adults living in these rural areas receive less mental health treatment overall, and the mental health treatment they do receive tends to be from providers with less specialized training than their urban counterparts (Morales et al., 2020).
Although the provision of telemental healthcare has greatly improved access to needed health care services, the adoption of telemental healthcare in rural settings has not met the demand for mental health services in these areas (Myers, 2019). Additionally, virtual providers may not be as familiar with such demographic concerns or have the cultural knowledge to treat these specific populations effectively. Furthermore, existing rural providers must often treat a diverse range of mental health conditions and populations, often without adequate resources (e.g., specialized training opportunities) or the professional support from colleagues available to their urban counterparts (Hempel et al., 2015). The lack of healthcare training facilities in rural communities is also a rate-limiting factor in the number of mental health providers in rural areas (Andrilla et al., 2018). Thus, identifying ways to enhance the functioning of rural training programs via technology is one avenue to reduce the rural health disparity.
Importance of Telesupervision in Rural Settings
Clinical supervision provides the foundation of psychological training, and supervisors serve as evaluators and gatekeepers for the profession (Falender, 2018). Providing training, supervision, and consultation opportunities through telesupervision in geographic areas of mental healthcare shortage enables future healthcare professionals to have experience working within these rural settings, while also being supervised by individuals with the cultural knowledge of the population served, and potentially aids in recruiting needed mental health professionals to these areas. Telesupervision may also allow providers in rural areas greater connection to, and enrichment of, their work environment through supervisory work, which may contribute to retention and sustainment of the rural mental health workforce.
Although telesupervision shows promise in increasing access to mental health care, with data supporting equivalence between in-person and telehealth modalities (Jordan & Shearer, 2019; Tarlow et al., 2020; Thompson et al., 2023), the literature thus far has focused primarily on trainee perspectives (Bernhard & Camins, 2021; Ferriby Ferber et al., 2021; Inman et al., 2019; Jordan & Shearer, 2019; Soheilian et al., 2023; Stein et al., 2023; Tarlow et al., 2020; Thompson et al., 2023), with few examinations of the perspectives of supervisors (Martin et al., 2022) or training directors (Frye et al., 2021). Also, satisfaction and supervisory alliance have been the primary variables of interest (Bernhard & Camins, 2021; Inman et al., 2019; Jordan & Shearer, 2019; Schmittel et al., 2023; Soheilian et al., 2023; Tarlow et al., 2020; Thompson et al., 2023). The expanded use of telesupervision and the emerging literature base highlight the need to examine supervision quality as the modalities of supervision are further diversified via technology. Oversight and monitoring of the quality of supervision provided to emerging healthcare professionals offer a mechanism to ensure that access to mental healthcare is increased in rural areas without sacrificing quality of patient care or training.
This paper describes the development and implementation of a competency-based supervision monitoring system (applicable to telesupervision and in-person modalities) within rural clinical psychology training programs. In the project, a methodological approach to supervision monitoring was developed to enable training programs to have increased oversight of the quality of supervision delivered within their treatment setting. This paper discusses the rationale for establishing a supervision monitoring system, identifies key needs when implementing such a system, explores solutions to meet these needs, and reviews the application of this system to training healthcare professionals.
Rationale and Benefits of Creating a System to Monitor Supervision
Accrediting training entities and licensing boards are increasing allowances for telesupervision, while also bringing increased attention to measuring the impact of this form of supervision. For example, American Psychological Association (APA) accreditation standards have expanded the allowable use of telesupervision while also requiring programs to assess outcomes and trainee satisfaction in the use of telesupervision (APA, 2018, 2019, 2023). While the quantity and occurrence of supervision have been the traditional measures of oversight, they fail to capture the quality or elements of the supervision provided. Finally, typical mid- or end-of-rotation (or training year) evaluation periods are adequate to capture milestones of the trainee or program but are less likely to enable timely adjustments within the supervision process.
Competency-based supervision has a well-established literature base (Falender & Shafranske, 2017, 2021; Falender et al., 2014; Grus, 2013), providing a framework for understanding trainee competency development and the elements of effective supervision. These elements include the following: a working alliance between supervisor and supervisee inclusive of the resolution of strains/ruptures, consistent evaluative feedback, consistent supervision meetings, direct observation of clinical work, and opportunities for trainees to see skills modeled/experiential supervision (Falender, 2018; Falender & Shafranske, 2021). The competency-based supervision data collection system utilized for an implementation–effectiveness project examining telesupervision serves as a model for other programs to implement a low-cost, low-burden system that relies on automation to provide the information programs need to enhance oversight of training. Three key goals drove the development of the system: (1) ensuring trainees receive effective supervision, (2) ensuring both trainee and patient safety and quality of care, and (3) providing training directors with timely information to identify immediate clinical supervision concerns and longer-term program development needs. The questionnaire content is tailored to support competency-based supervision and telesupervision practices, while supporting programs in gathering the data most pertinent to their needs.
Establishing a systematic monitoring system enables near real-time data collection of supervision functioning from perspectives of the supervisor and trainee, providing an opportunity for reconciliation of what is happening in supervision while also supporting assessment of outcomes consistent with training program accreditation. Training directors can identify potential concerns early and promptly adjust (e.g., inconsistent supervision meetings or an inability to reach a supervisor during a patient emergency), thus enabling timely correction and intervention while reducing administrative burden through automation.
The competency-based supervision data collection system further supports inclusion, diversity, equity, and access efforts. Monitoring the trainees’ responses promotes successful adaptation in meeting the needs of diverse trainees (e.g., gathering information on their perspectives that could be missed), while collecting data to be utilized in aggregate form allows trainees to respond more candidly and may help offset the power differential inherent in an evaluative supervisory relationship. Further, improving the quality of supervision may improve the quality of clinical care (e.g., increased access for clients via trainee care and more equitable quality of care) and enable training programs to adjust supervision when supervision elements compromise equity and access.
Program Needs and Creating a System to Meet Those Needs
Training programs implementing a system to assess the provision of effective elements of supervision at more frequent intervals may face several challenges. In developing the current project, three key needs applicable to broader training programs emerged: (1) identifying what questions to ask, who to ask these questions to, and how often to assess specific elements of supervision; (2) identifying a low-cost, low-burden, and accessible method to collect relevant information; and (3) understanding how to utilize the information efficiently to support decision-making to improve one’s training program. Below, possible solutions to these three needs are presented.
Addressing Need 1: Who to Collect Information from and What to Collect?
As described above, the type and frequency of information collected by training programs may not adequately document the quality of supervision. However, it may be challenging for individual training programs to identify who to collect information from and what information to collect. To answer these questions, this project modeled the supervision monitoring system based on content and suggestions from the competency-based supervision framework (Falender & Shafranske, 2021) and APA supervision guidelines (APA, 2015) and accreditation standards (APA, 2018, 2019, 2023).
Who? Collect Questionnaires from Each Supervisor-Trainee Pair for Each Rotation
It is important to collect data from both supervisors and trainees at a minimum. Gathering data from only one side of the training relationship provides partial information while potentially communicating to trainees or supervisors that their information is unimportant. Other information sources included in the current project that can inform decision-making were training directors, patient outcome data collected from trainees, demographic information about trainees and supervisors, and aggregated evaluation ratings on competency development for trainees. Further, because perceptions of effective supervision can vary between supervisors and trainees, and potentially across different rotations, creating identifiers that link a supervisor-trainee pair to a specific rotation is critical when setting up the data system. Finally, when tailoring questionnaire content, parallel questions in the trainee- and supervisor-facing questionnaires should be phrased carefully so that corresponding responses from trainees and supervisors can be compared.
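The linking and comparison described above can be illustrated with a brief sketch. This is not the project's code; all names (make_dyad_id, Record, paired_responses) and the example item label are illustrative assumptions showing one way to key records so a supervisor-trainee pair on a specific rotation can be compared on a parallel item.

```python
from dataclasses import dataclass

def make_dyad_id(supervisor_id: str, trainee_id: str, rotation: str) -> str:
    """Composite key: one record stream per supervisor-trainee-rotation."""
    return f"{supervisor_id}|{trainee_id}|{rotation}"

@dataclass
class Record:
    dyad_id: str
    respondent: str   # "supervisor" or "trainee"
    item: str         # parallel item label shared by both questionnaires
    response: int     # e.g., a 1-5 rating

def paired_responses(records, dyad_id, item):
    """Return (trainee, supervisor) responses to the same parallel item."""
    by_role = {r.respondent: r.response
               for r in records
               if r.dyad_id == dyad_id and r.item == item}
    return by_role.get("trainee"), by_role.get("supervisor")

# Hypothetical data: a trainee and supervisor rate the same item differently,
# surfacing a discrepancy worth reconciling.
records = [
    Record(make_dyad_id("S01", "T07", "2024-fall"), "trainee", "feedback_consistency", 2),
    Record(make_dyad_id("S01", "T07", "2024-fall"), "supervisor", "feedback_consistency", 4),
]
print(paired_responses(records, "S01|T07|2024-fall", "feedback_consistency"))  # (2, 4)
```

A composite identifier like this keeps the same supervisor and trainee distinct across rotations, which matters because perceptions of supervision may differ by rotation.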
What? Determine What Is Being Done, Are Essential Elements Included, and Barriers and Facilitators
The items developed from the competency-based model framework and psychology training standards can be grouped into three areas: (1) information describing supervisory practices (e.g., frequency, modality, and technological problems affecting telesupervision), (2) measures of core elements of effective supervision (e.g., consistent access of supervision, provision of evaluative feedback, direct observation of clinical work, and supervisory working alliance), and (3) identification of facilitators and barriers of current supervisory processes. Items and response sets used to assess supervision content and processes in each of the three areas are provided in Appendix A, sections A1–A3, respectively.
Description of Supervisory Practices
An important part of providing effective supervision is understanding the basics of current supervisory practices. The quantity and frequency of supervision sessions across modalities (e.g., in-person or telesupervision) can affect the quality of supervision. Therefore, collecting information on the frequency, modality, and disruptions to supervision sessions in the current reporting period is important.
Measures of the Core Elements of Effective Supervision
Several aspects need to be assessed to determine whether high-quality, competency-based supervision is being provided, including core elements such as consistent supervision sessions and access to supervisors, consistent provision of evaluative feedback, direct observation of clinical work, and a working alliance between supervisor and supervisee inclusive of the resolution of strains or ruptures (Falender, 2018; Falender & Shafranske, 2021). Further, this project adapted the Supervision Session Checklist from Falender and Shafranske (2017) to assess whether sessions included discussion of the trainee's learning goals; the diversity/multicultural identities of the patient(s), supervisee, or supervisor, or their interaction; engagement in experiential supervision; monitoring of patient progress; and the trainee's feelings, reactivity towards a patient, and the supervisory relationship. Finally, to assess working alliance, the Supervisory Working Alliance Inventory (SWAI; Efstation et al., 1990) was used, which consists of a 19-item trainee-facing and 23-item supervisor-facing version and has demonstrated good internal consistency (Efstation et al., 1990; Reese et al., 2009). To minimize questionnaire burden, and in alignment with the project's focus on measuring working alliance within the full context of the training experience, SWAI data were collected at the end of each rotation. Whether using the SWAI or other supervisory relationship measures or items, programs may benefit from more frequent assessment to monitor the supervisory relationship and allow earlier intervention points for ongoing disruptions.
Identification of Facilitators and Barriers of Current Supervisory Practices
Overall, it is important to consider what information is necessary for training directors to effectively address and implement individual- and program-level changes both immediately and over time. Open-ended questions regarding general impressions on the quality of supervision during a rotation can supplement quantitative data and provide additional insight into potential ways to improve program functioning. In this project, open-ended questions were used to solicit information on overall experiences with supervision and facilitators and barriers to supervision.
Addressing Need 2: Utilizing Low-Cost and Accessible Data Collection Methods
Identifying a Low-Cost Data Collection Platform
Collecting and managing data for training program purposes can be costly and time-consuming. However, these tasks have become easier and more affordable due to the increasing availability of online survey platforms. Several free or low-cost platforms exist, each providing basic questionnaire formatting, distribution tools, response monitoring, and data summarization capabilities; options include Qualtrics, SurveyMonkey, Google Forms, and Microsoft Forms. More advanced, paid platforms may also be accessible to training program staff through their university or organization. For example, the VA provides staff access to the electronic data capture system VA REDCap. Similarly, the current project used Stanford REDCap (Harris et al., 2009, 2019; see Notes).
Notifications for Safety Issues and Adherence to Supervision Standards
Near real-time identification and notification of concerning adverse events or lapses in supervision standards can facilitate early intervention by training directors and prevent negative impacts on trainee and patient safety, quality of care, and effective supervision. To reduce the burden of actively monitoring for problematic responses, the survey platform can be programmed to automatically send alerts when certain responses to questionnaire items are received. For example, an email alert can be sent to the training director when a trainee reports that they could “rarely” or “never” contact their supervisor during a patient care crisis. Similarly, setting up an alert for a trainee or supervisor reporting that essential elements of supervision are not being addressed helps determine if competency-based supervision is being enacted and ensures that supervision standards are upheld throughout the training year.
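The alerting logic described above can be sketched in a few lines. This is a hedged illustration, not the project's implementation: the rule names, item labels, and data layout are assumptions, and a real deployment would use the survey platform's built-in alert features (e.g., REDCap's alerts and notifications) rather than custom code.

```python
# Map each monitored item to the responses that should trigger an alert.
ALERT_RULES = {
    "supervisor_reachable_in_crisis": {"rarely", "never"},
    "evaluative_feedback_provided": {"never"},
}

def find_alerts(responses):
    """Scan a batch of responses (dicts with 'dyad_id', 'item', 'response')
    and return those matching a predefined safety/standards rule."""
    alerts = []
    for r in responses:
        flagged = ALERT_RULES.get(r["item"], set())
        if r["response"] in flagged:
            alerts.append((r["dyad_id"], r["item"], r["response"]))
    return alerts

# Hypothetical month of responses: one trainee reports rarely reaching
# their supervisor during a crisis, which should trigger an alert email
# to the training director.
month = [
    {"dyad_id": "S01|T07", "item": "supervisor_reachable_in_crisis", "response": "rarely"},
    {"dyad_id": "S02|T03", "item": "supervisor_reachable_in_crisis", "response": "always"},
]
for dyad, item, resp in find_alerts(month):
    print(f"ALERT: {dyad} reported '{resp}' on {item}")
```

Keeping the rules in a single table makes it easy for a program to add or retire triggers as its training goals evolve, without touching the scanning logic.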
Employ Automation
To reduce staff burden and human error, consider employing automation wherever possible. Key areas where automation is helpful include (1) identifying potential lapses in safety and training standards (as described in the previous section), (2) distributing questionnaires, (3) ensuring questionnaire completion, and (4) producing data reports. Many distribution systems can be scheduled to send emails with questionnaire links automatically based on a set of conditions throughout the training year (e.g., a specific questionnaire send date, an uploaded schedule of completion dates, or a set schedule). For this project, questionnaires were sent monthly, with the SWAI and open-ended questions added at the end of rotations. To improve response rates, it is also helpful to set up automated notifications to remind participants when questionnaires have not been completed promptly. Most questionnaire platforms also have analytics dashboards that update automatically, and many can generate customized data reports with key metrics (e.g., frequencies for questionnaire responses).
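The scheduling and reminder logic can be sketched as follows. This is a simplified illustration under assumed data structures; platforms such as REDCap provide this scheduling natively, so the function names (due_today, needs_reminder) and the three-day grace period are illustrative choices, not project specifics.

```python
from datetime import date

def due_today(schedule, today):
    """schedule: {respondent_id: [send_dates]} -> ids due a questionnaire today."""
    return [rid for rid, dates in schedule.items() if today in dates]

def needs_reminder(sent, completed, today, grace_days=3):
    """Remind anyone whose questionnaire went out >= grace_days ago and
    who has not yet completed it."""
    return [rid for rid, sent_on in sent.items()
            if rid not in completed and (today - sent_on).days >= grace_days]

# Hypothetical monthly schedule: T07's questionnaire went out Oct 1 and
# remains unanswered four days later, so a reminder is due.
schedule = {"T07": [date(2024, 10, 1)], "T03": [date(2024, 11, 1)]}
sent = {"T07": date(2024, 10, 1)}
completed = set()

print(due_today(schedule, date(2024, 10, 1)))              # ['T07']
print(needs_reminder(sent, completed, date(2024, 10, 5)))  # ['T07']
```

Separating "who is due" from "who needs a nudge" mirrors how most platforms configure invitations and reminders as two independent automated rules.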
Addressing Need 3: Developing Actionable Metrics from Data Collected
A third need is efficiently using the data collected to support decision-making and foster a positive impact on supervisory practices. One key activity to support supervision development is tailoring a set of metrics for your specific training program and determining how to summarize the information to support your program’s ability to use that information. Also, it is important to consider how to efficiently organize questionnaire information (e.g., a monthly supervision report) to meet the three primary goals of the supervision model (i.e., providing effective supervision, ensuring quality patient care, and identifying program needs).
Tailoring a Set of Metrics for Your Training Program
To increase utility of the data, a limited number of relevant metrics describing competency-based supervision processes for the specific training program should be identified based on training goals. Careful consideration in identifying the metrics derived from questionnaire responses is an important step, as these metrics provide the basis for decision-making. Table 1 presents several potential metrics related to each of the three goals. We recommend using a small number of key metrics tailored to your training program’s unique goals at the current time rather than potentially having too much information leading to inertia. Metrics can be changed over time as the program develops and sets new goals. In addition to metrics related to the three goals, we suggest including a few metrics summarizing the current supervisory practices discussed above (e.g., frequency and modality of sessions). To get an accurate view of current practices, it is important to determine both the number of supervisors and trainees who could be participating in the program (i.e., are all appropriate staff included in the data management system?) and the number of supervisors and trainees who are completing the questionnaires (i.e., are you getting a representative sample of people providing information?).
Summarizing Information to Support Decision-Making
With a tailored set of metrics, training directors must consistently review the metrics in consideration of the program’s goals to evaluate how well the program is functioning. The data monitoring frequency can be tailored to the programs’ time and resources. Within the current project, monthly monitoring is embedded in question construction and data collection. Also, the format used to present metrics can vary depending on the program staff’s preferences. For some sites, reviewing summary information (e.g., graphs and statistics) generated within the data collection program may be sufficient. For example, the conversion of raw data to metrics (e.g., percentage of trainees and supervisors providing data or frequency of trainees reporting inconsistent supervision meetings) can be captured by reviewing descriptive statistics for each questionnaire item. Further, knowledge generated from key metrics and training needs identified from these metrics that support the three goals for competency-based supervision can be incorporated into existing program monitoring and development methods.
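The conversion of raw data to metrics mentioned above can be illustrated with a short sketch. The field names and thresholds are assumptions for illustration only; the two functions show a response-rate metric (is the sample representative?) and the share of trainees flagging a supervision element (e.g., inconsistent meetings).

```python
def response_rate(expected_ids, responded_ids):
    """Percentage of eligible respondents who completed the questionnaire."""
    expected = set(expected_ids)
    if not expected:
        return 0.0
    return 100.0 * len(set(responded_ids) & expected) / len(expected)

def pct_reporting(responses, item, flagged_values):
    """Percentage of responses to `item` that fall in `flagged_values`."""
    relevant = [r["response"] for r in responses if r["item"] == item]
    if not relevant:
        return 0.0
    return 100.0 * sum(v in flagged_values for v in relevant) / len(relevant)

# Hypothetical month: 3 of 4 eligible trainees responded, and one of the
# three reports inconsistent supervision meetings.
expected = ["T01", "T02", "T03", "T04"]
responses = [
    {"id": "T01", "item": "meetings_consistent", "response": "yes"},
    {"id": "T02", "item": "meetings_consistent", "response": "no"},
    {"id": "T03", "item": "meetings_consistent", "response": "yes"},
]
print(response_rate(expected, [r["id"] for r in responses]))    # 75.0
print(pct_reporting(responses, "meetings_consistent", {"no"}))  # ~33.3
```

Reviewing the response-rate metric alongside the substantive metrics guards against over-interpreting a month in which only a few dyads responded.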
For other sites, a summary of the metrics in a standardized format may clarify how well the program is meeting competency-based supervision and other training needs. Figure 1 provides an example of a brief monthly report detailing several metrics for each of the three goals and descriptive information on supervision modality. For programs using telesupervision, the frequency of technical difficulties and their disruptiveness to supervision can also be monitored.
This project focused on defining, collecting, and summarizing information for use at the program level. Sharing data with trainees and supervisors should be done with caution. In this project, respondent anonymity was a primary goal in collecting all outcome data. Sites could not access their data, and safety procedures were designed to mask individual responses to maintain honest feedback. Where deficiencies in supervision practices are identified, issues can be addressed at the program level, serving as reminders to all supervisors and trainees about expected behaviors. Monthly reports and other summaries of information can be shared with all involved in the training programs, as needed. For safety-related behaviors, anonymity may need to be waived, and training directors should work directly with those affected to maintain appropriate professional and clinical standards. Decisions about information sharing and anonymity should be communicated prior to data collection so respondents have clear expectations about how the information may be disseminated.
In sum, the main suggestions are that the data are used to generate a systematically derived set of metrics tailored to specific training program goals and that the information is reviewed and consistently used for program development.
Conclusion: Recommendations for Building and Utilizing a Monitoring System for Health Care Professional Trainees
Clinical supervision is the foundation of training clinicians, with supervisors serving as both evaluators and gatekeepers (Falender, 2018). Especially in rural settings, the use of telesupervision holds a host of potential benefits, including improving rural population health, contributing to the sustainability of rural health training programs, increasing access to needed mental health care in geographical areas of shortage, and allowing trainees access to supervisors who have cultural and content expertise in providing care to diverse patient populations. Given this important role and these potential benefits, monitoring not only the frequency but also the quality and content of supervision is imperative.
Although the described monitoring system focused on psychology training, the implementation project provides a model for utilizing a competency-based monitoring system for supervision/telesupervision across programs and healthcare disciplines. While application may vary, the following aims remain central: (1) ensuring trainees are receiving effective supervision/telesupervision, (2) ensuring trainee and patient safety and quality of care, and (3) providing training directors with timely information that addresses both immediate clinical supervision concerns and longer-term program development needs. In enabling this broader application of the system, it is of utmost importance first to establish how the gathered data will be used and to ensure all parties involved in the training program have this knowledge. This is part of establishing quality informed consent regarding the training experience and increases the likelihood of genuine questionnaire responses. It further enables the increased monitoring and data gathering to become a normative part of program improvement, creating utility for supervisors and trainees in improving their experience, as opposed to another means of being reviewed.
Second, one will need to be thoughtful about the needs of their training program and the literature base on the professional competencies of the discipline when deciding what to monitor. Across health profession trainees, it is imperative that elements associated with the quality of supervision are tracked. While derived from the psychology literature, the following elements apply to other health professions training: the development of an effective working alliance between a trainee and their supervisor, consistent evaluative feedback, consistent supervision meetings and access to the supervisor (including ad hoc supervision), and direct observation of clinical work (Falender, 2018). Aspects of trainee care engagement that ensure patient safety (e.g., access to a supervisor during a crisis) should also be monitored. Furthermore, the inclusion of questions related to diversity and multicultural identities serves multiple functions: enhancing patient care by attending to diversity and multicultural identities, enabling supervisors and trainees who may be underrepresented in the healthcare professions to share their experiences with a reduced barrier, and allowing training programs to promptly elicit feedback from the diverse voices within their respective training programs.
Once training program managers have decided on the content of what to monitor, there are numerous practical considerations for building a data collection system. Utilizing a low-cost data collection system capable of aggregating data throughout the implementation project was essential in making the monitoring system sustainable and deriving useful information. The engagement in the implementation project also highlighted the importance of a system to alert training directors when vital items needed real-time correction (e.g., when the supervisor was unavailable during a patient emergency). Frequency of assessment collection should be considered and tailored to the needs and structure of the training experience, being collected in a manner that enables timely identification of discrepancies between the supervisor and trainee dyad and allows for these discrepancies to be discussed and resolved quickly rather than waiting for formal programmatic evaluations. Data within these metrics of focus may be utilized to adjust program policy, identify areas of education needed for supervisors, identify problem areas that may require intervention, or identify ways a rotation experience could be augmented to bolster learning. Regardless of the focus of data utilization for each training program, consistent monitoring of supervision that translates into practical changes within the training program communicates a growth-focused paradigm that is programmatically normative instead of a punitive response.
In sum, this paper highlights feasible strategies for health professions training programs to efficiently and effectively monitor supervision and supervision competencies over time. Given the importance of supervision in training healthcare professionals, it is imperative that training programs build mechanisms of oversight to ensure high-caliber supervision and training. As more training programs embrace virtual supervision and training opportunities, it will become increasingly important that monitoring systems are in place. Future research is needed to determine the impact of telesupervision on rural mental health workforce recruitment and retention; however, implementing low-cost, high-impact supervision monitoring systems may allow rural training programs to ensure that trainees receive high-quality supervision and deliver excellent care, and that adequate support is provided for both trainees and supervisors.
Data Availability
The participants of this quality improvement project did not give written consent for their data to be shared publicly; therefore, due to the sensitive nature of the research, supporting data are not available.
Notes
The Stanford REDCap platform (http://redcap.stanford.edu) is developed and operated by the Stanford Medicine Research IT team. REDCap platform services at Stanford are subsidized by (a) the Stanford School of Medicine Research Office and (b) the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 TR001085.
References
American Psychological Association. (2015). Guidelines for clinical supervision in health service psychology. The American Psychologist, 70(1), 33–46.
American Psychological Association. (2018). Standards of accreditation for health service psychology. Author. http://www.apa.org/ed/accreditation/about/policies/standards-of-accreditation.pdf
American Psychological Association. (2019). Standards of accreditation for health service psychology (Revised). Author. http://www.apa.org/ed/accreditation/about/policies/standards-of-accreditation.pdf
American Psychological Association. (2023). Implementing regulations for the standards of accreditation. Author. https://irp.cdn-website.com/a14f9462/files/uploaded/Section%20C%20091323.pdf
Andrilla, C. H. A., Patterson, D. G., Garberson, L. A., Coulthard, C., & Larson, E. H. (2018). Geographic variation in the supply of selected behavioral health providers. American Journal of Preventive Medicine, 54, S199–S207. https://doi.org/10.1016/j.amepre.2018.01.004
Bernhard, P. A., & Camins, J. S. (2021). Supervision from afar: Trainees’ perspectives on telesupervision. Counselling Psychology Quarterly, 34(3–4), 377–386. https://doi.org/10.1080/09515070.2020.1770697
Callaghan, T., Kassabian, M., Johnson, N., Shrestha, A., Helduser, J., Horel, S., Bolin, J. N., & Ferdinand, A. O. (2023). Rural healthy people 2030: New decade, new challenges. Preventive Medicine Reports, 33, 102176. https://doi.org/10.1016/j.pmedr.2023.102176
Efstation, J. F., Patton, M. J., & Kardash, C. M. (1990). Measuring the working alliance in counselor supervision. Journal of Counseling Psychology, 37(3), 322–329. https://doi.org/10.1037/0022-0167.37.3.322
Falender, C. A. (2018). Clinical supervision—The missing ingredient. American Psychologist, 73(9), 1240–1250. https://doi.org/10.1037/amp0000385
Falender, C. A., & Shafranske, E. P. (2017). Competency-based clinical supervision: Status, opportunities, tensions, and the future. Australian Psychologist, 52(2), 86–93. https://doi.org/10.1111/ap.12265
Falender, C. A., & Shafranske, E. P. (2021). Clinical supervision: A competency-based approach (2nd ed.). American Psychological Association. https://doi.org/10.1037/0000243-000
Falender, C. A., Shafranske, E. P., & Ofek, A. (2014). Competent clinical supervision: Emerging effective practices. Counselling Psychology Quarterly, 27, 393–408. https://doi.org/10.1080/09515070.2014.934785
Ferriby Ferber, M., Heiden-Rootes, K., Meyer, D., Zubatsky, M., & Wittenborn, A. (2021). Couple and family therapy students’ experience of transitioning to teletherapy and telesupervision in the wake of the COVID-19 pandemic. International Journal of Systemic Therapy, 32(3), 194–218. https://doi.org/10.1080/2692398X.2021.1936878
Frye, W. S., Feldman, M., Katzenstein, J., & Gardner, L. (2021). Modified training experiences of interns and fellows during COVID-19: Use of telepsychology and telesupervision by adolescent training programs. Journal of Clinical Psychology in Medical Settings, 29(4), 840–848. https://doi.org/10.1007/s10880-021-09839-4
Grus, C. L. (2013). The supervision competency: Advancing competency-based education and training in professional psychology. The Counseling Psychologist, 41, 131–139. https://doi.org/10.1177/0011000012453946
Harris, P. A., Taylor, R., Minor, B. L., Elliott, V., Fernandez, M., O’Neal, L., McLeod, L., Delacqua, G., Delacqua, F., Kirby, J., & Duda, S. N. (2019). The REDCap consortium: Building an international community of software platform partners. Journal of Biomedical Informatics, 95, 103208. https://doi.org/10.1016/j.jbi.2019.103208
Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009). Research Electronic Data Capture (REDCap): A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42(2), 377–381. https://doi.org/10.1016/j.jbi.2008.08.010
Hempel, S., Gibbons, M. M., Macqueen, I., Miake-Lye, I., Beroes, J., & Shekelle, P. (2015). Rural healthcare workforce: A systematic review. Department of Veterans Affairs (US). https://www.ncbi.nlm.nih.gov/books/NBK409502
Inman, A. G., Soheilian, S. S., & Luu, L. P. (2019). Telesupervision: Building bridges in a digital era. Journal of Clinical Psychology, 75(2), 292–301. https://doi.org/10.1002/jclp.22722
Jordan, S. E., & Shearer, E. M. (2019). An exploration of supervision delivered via clinical video telehealth (CVT). Training and Education in Professional Psychology, 13(4), 323–330. https://doi.org/10.1037/tep0000245
Martin, P., Tian, E., Kumar, S., & Lizarondo, L. (2022). A rapid review of the impact of COVID-19 on clinical supervision practices of healthcare workers and students in healthcare settings. Journal of Advanced Nursing, 78(11), 3531–3539. https://doi.org/10.1111/jan.15360
Morales, D. A., Barksdale, C. L., & Beckel-Mitchener, A. C. (2020). A call to action to address rural mental health disparities. Journal of Clinical and Translational Science, 4(5), 463–467. https://doi.org/10.1017/cts.2020.42
Myers, C. R. (2019). Using telehealth to remediate rural mental health and healthcare disparities. Issues in Mental Health Nursing, 40(3), 233–239. https://doi.org/10.1080/01612840.2018.1499157
Reese, R. J., Usher, E. L., Bowman, D. C., Norsworthy, L. A., Halstead, J. L., Rowlands, S. R., & Chisholm, R. R. (2009). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3(3), 157–168. https://doi.org/10.1037/a0015673
Schmittel, E. M., Lettenberger-Klein, C., Oliver, T., Butterfras, R. F., & Adamson, D. W. (2023). Intentionality in academic telesupervision: A phenomenological study of faculty telesupervisors’ experiences. Contemporary Family Therapy, 45(1), 61–74. https://doi.org/10.1007/s10591-021-09601-w
Soheilian, S. S., O’Shaughnessy, T., Lehmann, J. S., & Rivero, M. (2023). Examining the impact of COVID-19 on supervisees’ experiences of clinical supervision. Training and Education in Professional Psychology, 17(2), 167–175. https://doi.org/10.1037/tep0000418
Stein, M. B., O’Keefe, S., Mace, R., Foley, J. D., White, A. E., Ruchensky, J. R., Curtiss, J., Moran, E., Evans, C., & Beck, S. (2023). Psychology internship training amidst COVID-19: Balancing training opportunities, patient care, and risk of exposure. Journal of Clinical Psychology in Medical Settings, 30(1), 61–71. https://doi.org/10.1007/s10880-022-09890-9
Tarlow, K. R., McCord, C. E., Nelon, J. L., & Bernhard, P. A. (2020). Comparing in-person supervision and telesupervision: A multiple baseline single-case study. Journal of Psychotherapy Integration, 30(2), 383–393. https://doi.org/10.1037/int0000210
Thompson, S. M., Keenan-Miller, D., Dunn, D., Hersh, J., Saules, K. K., Graham, S. R., Bell, D. J., Hames, J. L., Wray, A., Hiraoka, R., Heller, M. B., Taber-Thomas, S. M., Taylor, M. J., Hawkins, R. C., Schacht, R. L., Liu, N. H., Schwartz, J. L., & Akey, E. H. (2023). Preferences for and acceptability of telesupervision among health service psychology trainees. Training and Education in Professional Psychology, 17(3), 221–230. https://doi.org/10.1037/tep0000415
Acknowledgements
We would like to thank Dr. Carol Falender for her work on this project and for developing, with Dr. Edward Shafranske, the competency-based supervision model. We also thank the VA Office of Rural Health and the VA Office of Academic Affiliations for their support and contributions to the execution of the current project.
Funding
This work was supported by the Office of Rural Health Project PROJFY-008768 and support and resources from the National Center for PTSD.
Author information
Contributions
All authors contributed to the study conception or design, material preparation, and data collection. The first full draft of the manuscript was written by Shiloh Jordan, Shilpa Hampole, Erika Shearer, Kristen Eliason, and Margaret-Anne Mackintosh. All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Ethical Approval
This project was designated by the VA Office of Research & Development and VA Portland Health Care System Research Office as a quality improvement project and therefore not subject to institutional review board approval.
Competing Interests
The authors have no relevant financial or non-financial interests to disclose.
Disclaimer
The contents presented do not represent the views of the U.S. Department of Veterans Affairs or the U.S. government.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Jordan, S.E., Hampole, S.R., Mackintosh, MA. et al. Implementing Efficient Systems to Monitor Competency-Based Supervision in Rural Psychology Training Programs. J. technol. behav. sci. 9, 26–34 (2024). https://doi.org/10.1007/s41347-024-00384-z