Currently, in the USA, there are 7.38 million individuals diagnosed with an intellectual or developmental disability (IDD; Residential Information Systems Project, 2020). Approximately 3.5 million individuals live with family caregivers, including 24% whose caregivers are over the age of 60 (Braddock et al., 2013). Approximately 713,300 individuals receive residential support in congregate living settings (Harris-Kojetin et al., 2013), with an additional 473,000 individuals with IDD on waitlists for residential services (Diament, 2020). These numbers are expected to increase (Ryan et al., 2014), exacerbating existing pressures on service providers to deliver high-quality services in a timely and safe manner (Strouse & DiGennaro Reed, 2021).

Research has shown that individuals with IDD are at an increased risk of experiencing poor overall health and chronic health conditions (Centers for Disease Control & Prevention, 2020). For example, adults with IDD are at high risk for obesity and associated secondary conditions, including cardiovascular disease, diabetes, and hypertension (Anderson et al., 2013; Tyler et al., 2011). The coronavirus disease 2019 (COVID-19) pandemic revealed that people residing in congregate living settings, particularly those with IDD, are at an increased risk for health-related issues, including a COVID-19 diagnosis (Armitage & Nellums, 2020; Gleason et al., 2021). Moreover, individuals with IDD have a decreased life expectancy due to various factors, including complications associated with their disability, obstacles to receiving adequate health care, and socioeconomic status (Gleason et al., 2021).

Direct support professionals (DSPs) play a critical role in health-related outcomes for individuals with IDD who reside in congregate living settings. An estimated 1.3 million DSPs work with individuals with IDD (President’s Committee for People with IDD, 2017). The primary job duties of DSPs include assisting individuals with daily living skills (e.g., preparing healthy meals, personal hygiene, laundry), supporting engagement with others in the community (Friedman, 2019), and providing active treatment, such as teaching valued skills based on individualized or person-centered support plans. Other duties include administering medications and adhering to medication and treatment protocols with nursing supports (Kansas Department for Aging & Disability Services, 2010). Thus, ensuring DSPs are adequately prepared to perform their job duties is a critical component of achieving desired outcomes for individuals with IDD (Novak et al., 2019).

Given their responsibilities, DSPs require training on a wide range of health-related topics including, but not limited to, how to implement person-centered support plans, ensure individualized diets are followed, use lifts or other adaptive equipment, follow emergency preparedness plans, and administer medications. One approach adopted by service providers with geographically distributed residential programs is to rely on residential staff to function as peer trainers to newly hired DSPs (Luiselli, 2015). A peer training approach may be advantageous for several reasons. First, peer training is relatively affordable in that organizations would likely incur additional costs if a credentialed, professional trainer were responsible for providing one-on-one training for each new hire (DiGennaro Reed et al., 2013). Second, on-site peer trainers have an ongoing presence in the residential program, which could foster maintenance of DSP skills (Demchak et al., 1992; Parsons et al., 2013). Finally, training others may help minimize performance drift of peer trainers and foster maintenance of the very skills they teach (Parsons et al., 2013; Van den Pol et al., 1983).

Although evidence suggests that peer training can be an effective approach to supporting staff (e.g., Green & Reid, 1994; Harchik et al., 2001), organizations must prepare peer trainers for this important job responsibility and ensure ongoing integrity of the training system. A small number of studies have addressed this issue. For example, Parsons et al. (2013) evaluated the effects of a 60-min group training on trainer integrity (i.e., the extent to which peer trainers accurately implement behavioral skills training [BST]). The group training incorporated components of BST—instructions, modeling, practice, and feedback—to teach peer trainers how to train colleagues on two target skills (i.e., embedded teaching, conducting a preference assessment). Using a multiple-probe design, Parsons et al. (2013) documented improvements in trainer integrity during simulated role plays. In addition, BST integrity was 100% for nearly all trainers during an on-the-job assessment following training. Although these findings suggest that organizations can effectively prepare peer trainers, the researchers measured trainer integrity across only two target skills and did not assess generalization or maintenance.

Erath et al. (2020) extended Parsons et al. (2013) by evaluating the effects of a group workshop on trainer integrity. Peer trainers (i.e., residential supervisors, DSPs) employed by a large behavioral healthcare organization were taught to use BST to train confederate staff how to reinforce a desirable behavior during a role-play context. Like Parsons et al. (2013), the group workshop incorporated components of BST. Following the workshop, trainer integrity increased to mastery levels for 10 of 25 participants; 10 additional participants reached mastery with supplemental experimenter feedback. Moreover, training generalized to a novel skill (i.e., presenting choice) and maintained for up to 6 weeks with all three participants for whom follow-up assessments were conducted.

In a follow-up study, Erath et al. (2021) evaluated the effects of video-based training on BST integrity. Four prospective peer trainers were taught to use BST during a role play to teach confederate staff two skills: how to reinforce a desirable behavior and how to provide choice to consumers. The 13-min video module included on-screen text (i.e., written instruction), voiceover narration (i.e., vocal instruction), and two modeled exemplars (i.e., modeling). Participants completed guided notes while watching the module. The video-based training increased BST integrity to mastery levels for two participants; two additional participants required supplemental experimenter feedback to reach mastery. In addition, participant use of BST generalized to two novel skills (i.e., having positive interactions with consumers, delivering effective instructions), and their performance maintained at 7- to 26-day follow-up probes.

Although effective as initial efforts to improve trainer integrity, Erath et al. (2020, 2021) did not measure peer trainer integrity in the workplace amidst real-world service-delivery challenges. In addition, the training provided to peer trainers addressed only a handful of the skills trainers would be expected to teach newly hired staff. We sought to address the limitations of both studies and extend their initial findings by evaluating trainer integrity within a peer training system that also included ongoing support. The purpose of this program description is to summarize how a consultation model was used to (a) adopt procedures from Erath et al. (2021) to prepare peer trainers for their role, (b) conduct an assessment to determine the variables influencing trainer integrity in the workplace, and (c) implement a systems-level intervention to promote high trainer integrity. Peer trainers were expected to teach newly hired staff over 60 skills, including daily routines, schedules, and health needs; teaching skills and promoting consumer participation; behavior support plans; and professional skills and general information. Our program description contains data spanning 3 years across 36 residential and 3 day-service programs with nearly 100 peer trainers. In addition, data were collected before and during the COVID-19 pandemic; thus, modifications to the intervention were necessary to ensure safety.

Method

Participants

Participants (hereafter, peer trainers) were 99 employees who worked across 36 residential and 3 day-service programs at a behavioral healthcare organization serving adults with IDD in the midwestern USA. The organization required employees to be at least 18 years of age, have a high-school diploma or general equivalency diploma, and pass a background check. Peer trainers included DSPs and management staff (e.g., group-home managers). Demographic information for peer trainers is unavailable because we collected data as part of an ongoing consultation arrangement between 2018 and 2021 (i.e., not as a formal research study) and many peer trainers are no longer employed by the organization. The Human Rights Committee at the employer organization and the Human Research and Protection Program at the university approved data collection as part of this consultation arrangement.

Prospective peer trainers were required to participate in a workshop led by the consultation team during which they were taught how to use BST to train newly hired DSPs. Workshops were held in a conference room located at the organization’s main office and included between two and 10 peer trainers. The content included an overview of the responsibilities of a peer trainer, a review of the organization’s training checklist detailing the target skills peer trainers were expected to teach and the training components they should use (see https://osf.io/j7fdg/), video-based training (i.e., instructions and video models; Erath et al., 2021), and role play and feedback. All peer trainers were required to implement BST with 100% accuracy during a role play. In addition, peer trainers received information about the behaviors they were expected to perform during peer training (see “Measures” and “Data Analysis” sections) and were informed that the consultation team would conduct observations in their respective workplaces.

Procedures

Table 1 summarizes the most relevant consultation activities that took place between Fall 2018 and Fall 2021. A pre-intervention assessment was completed before implementing the systems-level intervention to aid in designing intervention components. Subsequently, the effect of the systems-level intervention on peer trainer performance was evaluated across several phases: baseline, feedback, program announcement, monetary incentive plus feedback, and monetary incentive plus feedback and a prompt.

Table 1 A timeline of consultation activities

Setting

Peer trainers conducted 16 to 24 h of training with newly hired DSPs in the residential and day-service programs. The residential programs were community homes with a shared living room, dining room, kitchen, and at least one bathroom as well as private bedrooms for each consumer. The day-service programs had shared art, sensory, and game rooms; a kitchen; and office space. Prior to participating in peer training, newly hired DSPs completed approximately 45 h of training as part of the organization’s initial staff training. The training topics included medication administration, CPR, First Aid, Safety Care training, and behavioral teaching, among other topics.

Observations

Consultants had access to scheduling software (e.g., Humanity) used by the organization, which allowed the team to easily determine where and when to schedule observations. The consultation team arranged an unannounced 30-min observation of each peer trainer for each DSP they trained. Observations were scheduled to coincide with training and were conducted either in person or remotely. During in-person observations, one member of the consultation team visited the program where training was scheduled. The observation began when the consultant arrived at the residential or day-service program. Remote observations took place by viewing recorded footage captured via iLink Support Technologies® (DiGennaro Reed & Reed, 2013; Strouse & DiGennaro Reed, 2021), with video data saved to a HIPAA-compliant server managed by the organization. The observation began once the consultant reviewed the saved footage to identify a time when the peer trainer and DSP were both present. Given software capabilities, remote observations were conducted for residential programs only.

Prior to the start of observations, consultants were trained to collect data on peer trainer behavior when training newly hired DSPs. Training consisted of BST for how to use iLink Support Technologies®; a review of the components of BST and the peer trainer behavior being measured, including exemplars and non-exemplars; a review of the training checklist peer trainers were expected to complete; and data collection practice and feedback. During training, consultants were required to accurately record data for two consecutive peer trainings before conducting observations independently.

Pre-Intervention Assessment

The pre-intervention assessment was conducted in 2018 and included a small sample of unannounced observations completed via iLink Support Technologies® and an interview. The purpose of the observations was to determine the extent to which peer trainers used BST to train newly hired DSPs. At this point in our consultation, we had not yet identified the three target behaviors that comprised the primary dependent variable (see Table 2). Twenty-three observations occurred at four residential programs. Average BST integrity across observations was 5.3% (range, 0–40%); BST integrity was 0% for 19 of 23 observations.

Because we observed low levels of training integrity, the Performance Diagnostic Checklist – Human Services (PDC-HS; Carr et al., 2013) was conducted to determine the barriers to training newly hired DSPs. The PDC-HS is an indirect assessment tool comprising 20 “yes” or “no” questions asked in a semi-structured interview format. The questions span four areas: training; task clarification and prompting; resources, materials, and processes; and performance consequences, effort, and competition. A “no” indicates a potential area for improvement. Thus, sections of the assessment with high percentages of questions scored as “no” would be considered areas that require intervention.
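
To illustrate how the PDC-HS is scored, the following sketch tallies the percentage of “no” responses by domain. The dictionary structure, per-domain question counts, and answers are hypothetical; only the four domain names and the 20-question total follow the tool as described above.

```python
# A minimal sketch of PDC-HS scoring, assuming one interviewee's
# responses are stored per domain. The question counts and answers
# below are hypothetical; the actual tool distributes 20 questions
# across the four domains named in Carr et al. (2013).

PDC_HS_DOMAINS = [
    "Training",
    "Task Clarification and Prompting",
    "Resources, Materials, and Processes",
    "Performance Consequences, Effort, and Competition",
]

def percent_no(responses):
    """Percentage of questions answered 'no'; higher percentages
    suggest the domain is a likelier target for intervention."""
    return 100 * responses.count("no") / len(responses)

# Hypothetical answers from one semi-structured interview
interview = {
    "Training": ["yes", "yes", "yes", "yes", "no"],
    "Task Clarification and Prompting": ["no", "no", "yes", "no"],
    "Resources, Materials, and Processes": ["yes", "yes", "yes", "yes", "no", "yes"],
    "Performance Consequences, Effort, and Competition": ["no", "no", "no", "yes", "no"],
}

for domain in PDC_HS_DOMAINS:
    print(f"{domain}: {percent_no(interview[domain]):.0f}% scored 'no'")
```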

We conducted the PDC-HS with four peer trainers at the organization. The assessment was completed as part of a graduate course in organizational behavior management under the supervision of the fourth author. Although an ideal assessment would incorporate all peer trainers, the limited resources available and conflicting schedules made that goal difficult. Figure 1 displays the results of the assessment. The percentage of questions with a “no” response was high in two areas: (a) task clarification and prompting; and (b) performance consequences, effort, and competition. Peer trainers revealed that they did not receive reminders for when they were scheduled to train newly hired DSPs and, thus, were often unprepared for training when DSPs arrived at the workplace. With respect to consequences, three peer trainers revealed that they were not directly monitored by a supervisor and did not receive feedback about the training they provided to newly hired DSPs.

Fig. 1 Pre-intervention assessment results

Given these findings, the consultation team prepared a written report for the Chief Executive Officer (CEO) in January 2019 that provided recommended changes to the peer training program. We proposed four changes: (a) process improvements so peer trainers or their supervisors were notified in advance when they were scheduled to train; (b) a prompt or reminder to be delivered 24 h before the scheduled training; (c) supervisor observations of peer training and contingent feedback; and (d) delivery of a monetary incentive to peer trainers for providing training with high integrity. In Summer 2019, while the consultation team was engaged in discussions with the CEO and other organizational leaders about the specific components of the intervention, we assisted the organization with modifying the process by which peer trainers were scheduled to train and how this information was communicated. Given resource constraints, the CEO and consultation team agreed that (a) the consultation team would conduct observations and deliver feedback to peer trainers, and (b) the organization would wait to implement the monetary incentive until collecting data on the efficacy of feedback alone.

Baseline

Prior to baseline, peer trainers had participated in the workshop, successfully implemented BST during a role play with a consultant, received information about the organization’s expectations for peer trainer integrity, and were informed that consultants would conduct unannounced observations in their workplace. During baseline, peer trainers did not experience any programmed antecedents or consequences for conducting peer training as intended.

Feedback

During the feedback phase, consultants delivered feedback to peer trainers immediately following each observation. The feedback included praise for behaviors performed correctly (e.g., “Excellent job using BST during training. I like how you provided helpful feedback to the staff.”) and supportive feedback for behaviors requiring improvement (e.g., “Your professional conduct didn’t meet expectations.”). In addition, consultants described how peer trainers could improve performance when errors were made (e.g., “Please avoid saying negative comments about your supervisor to new staff.”). Peer trainers had the opportunity to ask questions and problem-solve training issues they were experiencing after feedback was delivered.

Program Announcement

During the program announcement phase, the organization shared details with employees about the systems-level intervention. The period spanned approximately 6 weeks during which peer trainers received an email from the Human Resources Department describing the monetary incentive (summarized below). The consultants also incorporated this content into the workshop they required prospective peer trainers to complete and announced the details of the intervention in consultation meetings held with various management-level employees. Although there were no programmed antecedents or consequences during this phase (i.e., conditions from the feedback phase remained in effect), we included these announcements as a separate phase given previous research showing behavior change may occur when systems-level changes are announced (Luiselli et al., 2009).

Monetary Incentive + Feedback

This phase included a monetary incentive and feedback. Peer trainers earned $1.00 for every hour they were assigned to train if they implemented training with 100% integrity (i.e., they met criterion) during the observation. That is, if peer trainers were scheduled to train a newly hired DSP for 16 h, they earned $16 if they performed the three expected behaviors with 100% integrity. This monetary amount was determined after consultation with the Finance Department, who projected maximum costs to the organization assuming every peer trainer met criterion for every hour trained across a year. Each US dollar earned was added to an individualized tracking spreadsheet that the consultants shared with each peer trainer via email. Peer trainers exchanged their earnings by purchasing items from a gift-giving platform (i.e., Snappy; www.snappy.com) that the organization had already adopted for employee recognition. Snappy’s website describes the platform as a one-stop shop for corporate gifting needs, such as recognizing employees and colleagues and appreciating customers. The organization paid a 15% service fee on top of the actual cost of the gifts purchased and, because the platform was already in use, asked us to use Snappy for exchanging the monetary incentive. The platform offered items across a range of prices; thus, peer trainers could save their earnings for more expensive items. They were required to earn a minimum of $25 before making a purchase.
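
To make the incentive arithmetic concrete, the following sketch computes a peer trainer’s all-or-nothing earnings and the organization’s cost of a purchase. The $1.00 hourly rate, $25 exchange minimum, and 15% service fee follow the text; the function names and example values are ours.

```python
# A minimal sketch of the incentive arithmetic described in the text.

HOURLY_INCENTIVE = 1.00  # USD earned per assigned training hour
MIN_EXCHANGE = 25.00     # minimum balance before a purchase
SERVICE_FEE = 0.15       # Snappy fee paid by the organization

def incentive_earned(hours_assigned, met_criterion):
    """All-or-nothing earnings for one observed training."""
    return hours_assigned * HOURLY_INCENTIVE if met_criterion else 0.0

def cost_to_organization(purchase_amount):
    """Direct gift cost plus the 15% service fee."""
    return purchase_amount * (1 + SERVICE_FEE)

balance = incentive_earned(16, met_criterion=True)  # $16.00 for a 16-h training
print(balance >= MIN_EXCHANGE)       # False: still below the $25 exchange minimum
print(cost_to_organization(150.00))  # 172.5 (i.e., $150 in gifts + $22.50 in fees)
```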

In addition to a monetary incentive, peer trainers received feedback similar to the feedback phase. We also informed the peer trainers about whether they earned the incentive. Because of the COVID-19 pandemic, the organization restricted in-person observations for approximately 4 months; thus, all observations were remote during most of this phase. During in-person observations (i.e., the in-person phase), feedback was provided vocally and immediately following the observation. During remote observations (i.e., remote phase), feedback was provided vocally over the phone. If peer trainers could not be reached via phone, written feedback was provided via email and a phone call could be arranged at a peer trainer’s request. During the hybrid phase, when observations were conducted both in-person and remotely, feedback was provided as described above based on the modality of the observation.

Monetary Incentive + Feedback + Prompt

In addition to a monetary incentive and feedback, this phase included a supplemental prompt. The prompt consisted of a phone call during which the consultant informed the peer trainer of the DSP trainee’s name and the date(s) and time(s) they were scheduled to train. The consultant also reminded the peer trainer of the criterion to earn the monetary incentive. Peer trainers had the opportunity to ask questions and problem-solve training issues they anticipated experiencing. If the peer trainer could not be reached via telephone, a written prompt containing the same information was provided via email.

Interobserver Agreement

A second observer collected data on peer trainer behavior for 23.6% of sessions to assess interobserver agreement. An agreement was scored when both observers recorded the peer trainer’s behavior in the same way (i.e., as correct or incorrect). A disagreement was scored when the observers’ records of the behavior differed. Interobserver agreement was calculated by dividing the number of behaviors with agreement by the total number of scored behaviors and multiplying by 100. Interobserver agreement averaged 93.9% across all phases (range, 85.4–98.1%).
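
A minimal sketch of this point-by-point agreement calculation follows; the observer records are hypothetical, with each entry coding one scored behavior as correct (True) or incorrect (False).

```python
# A minimal sketch of the agreement calculation described above.

def interobserver_agreement(primary, secondary):
    """Percentage of scored behaviors both observers recorded identically."""
    if len(primary) != len(secondary):
        raise ValueError("Both observers must score the same behaviors")
    agreements = sum(a == b for a, b in zip(primary, secondary))
    return 100 * agreements / len(primary)

# Hypothetical records: checklist use, professionalism, training components
primary_record = [True, True, False]
secondary_record = [True, True, True]
print(f"{interobserver_agreement(primary_record, secondary_record):.1f}%")  # 66.7%
```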

Measures

We measured peer trainer integrity during each observation. Our criterion was that peer trainers would correctly perform the following three behaviors: (1) complete the training checklist while training (thus, it must be visible and immediately available), (2) conduct themselves in a professional manner (e.g., use a pleasant tone of voice, greet colleagues, refrain from expressing frustration, refrain from gossip), and (3) implement the training components specified on the training checklist for the skill being taught. Table 2 contains more information about the target behaviors. We incorporated the first behavior (i.e., peer trainers complete the training checklist while training) given concerns expressed by agency leaders that peer trainers were not referring to the checklist throughout training and were filing incomplete checklists at the conclusion of the training. These behaviors were problematic for the agency for two reasons. First, peer trainers could not reasonably be expected to remember all the skills listed on the training checklist; without referring to the checklist throughout training, they would likely fail to train some skills. Second, filing incomplete checklists was concerning for state reviewers who audit program quality. If documentation was missing or incomplete, the outcomes of the review would be negatively affected. We incorporated the second behavior (i.e., peer trainers conduct themselves in a professional manner) given unprofessional behaviors we observed, complaints from newly hired staff, and concerns expressed by parents and guardians of the consumers served. Finally, we incorporated the third behavior (i.e., peer trainers implement the training components specified on the training checklist for the skill being taught) to ensure trainers used components of BST or the full package of BST as specified on the checklist.

Data Analyses

A peer trainer behavior was considered correct if the participant accurately performed it as described in Table 2. A behavior was considered incorrect if the participant did not accurately perform the behavior or omitted the behavior. For the purposes of this program description, we summarized the data as the percentage of peer trainers who met our criterion of 100% correct, which was calculated by dividing the number of peer trainers who met criterion during that observation period (i.e., 2 weeks) by the total number of peer trainers who trained during that observation period and multiplying by 100. We used visual inspection to analyze these data.

Table 2 Peer trainer behaviors

We were also interested in determining the extent to which peer trainers correctly implemented each of the three peer trainer behaviors. We calculated the mean component integrity of each behavior for each phase by dividing the number of trainings in which peer trainers correctly performed the behavior by the total number of opportunities for peer trainers to engage in that behavior and multiplying by 100.
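
Both summary metrics can be expressed compactly in code. In the sketch below, each observed training is coded as three correct/incorrect behaviors per Table 2; the class and function names are ours, and the records are hypothetical.

```python
# A minimal sketch of both summary metrics, assuming one observation
# per peer trainer per period and hypothetical records.

from dataclasses import dataclass

@dataclass
class Observation:
    checklist: bool        # completed the training checklist while training
    professionalism: bool  # conducted themselves in a professional manner
    components: bool       # implemented the specified training components

    def met_criterion(self):
        # 100% integrity requires all three behaviors performed correctly
        return self.checklist and self.professionalism and self.components

def percent_met_criterion(period):
    """Percentage of peer trainers meeting criterion in a 2-week period."""
    return 100 * sum(o.met_criterion() for o in period) / len(period)

def component_integrity(phase, behavior):
    """Percentage of trainings in a phase with the behavior performed correctly."""
    return 100 * sum(getattr(o, behavior) for o in phase) / len(phase)

period = [Observation(True, True, True), Observation(False, True, True)]
print(percent_met_criterion(period))             # 50.0
print(component_integrity(period, "checklist"))  # 50.0
```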

Results

Figure 2 displays the percentage of trainers who met criterion and the number of observations completed during each biweekly observation period. Table 3 summarizes the component integrity data for each of the peer trainer behaviors across phases. The number of trainers observed within a 2-week period across the years summarized ranged from one to 12. Thus, some variability in the data can be attributed to the number of trainers observed. During baseline in early September 2019, we conducted observations of peer trainers to determine whether integrity remained at the low levels identified in the pre-intervention assessment. No trainers met criterion during baseline, suggesting an intervention was necessary. In late September, the CEO provided approval for the consultation team to deliver feedback to peer trainers and indicated he would consider a monetary incentive in the future if feedback did not improve training integrity. During the feedback phase, the percentage of trainers who met criterion was low (M = 5.3%; range, 0–33%). The component integrity data revealed peer trainers used the training checklist, conducted themselves professionally, and implemented specified training components during 10.5%, 79%, and 55.6% of peer trainings, respectively.

Fig. 2 Percentage of trainers who met criterion during biweekly observations. Note. BL = baseline, FB = feedback, PA = program announcement, MI + F = monetary incentive + feedback, MI + F + P = monetary incentive + feedback + prompt. The asterisks (*) indicate biweekly observation periods during which no trainings were observed. Although we calculated the overall percentage of trainers who met criterion for the first observation in BL, the data recording sheets cannot be located and, thus, the number of trainings that occurred is unknown

Table 3 Component integrity analysis for each phase

During feedback conversations, multiple peer trainers indicated they were not getting paid to provide training to new staff and that training made their jobs more difficult. Consequently, the consultation team asked the organization to support an intervention with a monetary incentive. In collaboration with the organization’s finance and human resources departments, the details of the monetary incentive were finalized, and the program was announced in January and February 2020. The percentage of trainers who met criterion increased when the program was announced despite feedback conditions remaining in effect (M = 75%; range, 40–85.7%). The component integrity data revealed improvements in the percentage of peer trainers who engaged in each of the three target behaviors. Peer trainers used the training checklist, conducted themselves professionally, and implemented specified training components during 87%, 100%, and 75% of peer trainings, respectively. The monetary incentive plus feedback phase began in late February 2020. Upon introducing a monetary incentive plus feedback, the percentage of trainers who met criterion was 80%. The component integrity data revealed peer trainers used the training checklist, conducted themselves professionally, and implemented specified training components during 80%, 100%, and 90% of peer trainings, respectively. Given safety and staffing concerns associated with the COVID-19 pandemic, the organization placed the intervention, including peer training workshops, on hold for 146 days through July 2020. Thus, there is only one data point in the initial monetary incentive plus feedback phase.

The consultation team was given permission to resume consultation in late July 2020 in a remote format. We collected additional baseline data wherein we observed that no peer trainers implemented training with integrity (i.e., met the criterion). Consequently, we reintroduced the monetary incentive plus feedback (delivered remotely), and the percentage of peer trainers who met criterion increased relative to baseline (M = 38.9%; range, 20–100%). However, performance in this phase did not reach the levels observed during the first introduction of this intervention. Peer trainers used the training checklist, conducted themselves professionally, and implemented specified training components during 44.4%, 91.7%, and 68.6% of peer trainings, respectively. Because desired levels of performance were not observed, the consultation team incorporated a supplemental prompt, an intervention indicated by the pre-intervention assessment. The inclusion of a prompt for training increased the percentage of peer trainers who met the training criterion, with some variability in the data (M = 75.5%; range, 0–100%). The component integrity data revealed peer trainers used the training checklist, conducted themselves professionally, and implemented specified training components during 81.3%, 90.7%, and 88.5% of peer trainings, respectively.

At the conclusion of data analysis, peer trainers had been awarded $1,994.50; however, the organization assumed only $150 in direct costs and $22.50 for the Snappy service fee, as most peer trainers had not exchanged their earnings for items on www.snappy.com. The reasons peer trainers had not exchanged their earnings varied. Some peer trainers no longer worked for the organization and did not exchange their earnings before their departure. Several peer trainers had not yet earned $25, the minimum threshold for exchanging their earnings for a reward. Finally, others were eligible to exchange but saved their earnings to purchase more expensive items.

Discussion

The purpose of this program description was to summarize the results of a systems-level intervention to improve peer trainer integrity within an ongoing consultation arrangement. We adopted procedures from Erath et al. (2021) and expanded them to prepare peer trainers for their role in training new DSPs. A pre-intervention assessment was then conducted to identify potential barriers peer trainers experienced in the workplace. Finally, indicated interventions were implemented in a sequential manner to evaluate their effects on trainer integrity. We adopted a sequential approach to the systems-level intervention to make careful use of limited resources. First, we modified the process for scheduling peer training and delivered feedback to peer trainers following our observations. These indicated interventions did little to improve the percentage of trainers who conducted training with integrity. Next, we added a monetary incentive and, later, a supplemental prompt, and observed increased percentages. These data suggest that implementing an assessment-derived intervention at the organizational level can improve the percentage of trainers who conduct training with integrity.

Ensuring peer trainers conduct training with integrity is important for several reasons. Without experiencing high-quality, empirically supported training, the performance of newly trained DSPs may be suboptimal, with implications for the quality of services delivered to consumers. For example, research has shown consumer outcomes are affected by the extent to which staff and educators implement interventions, such as behavior plans (e.g., DiGennaro et al., 2007; Wilder et al., 2006) and teaching protocols (e.g., Hirst & DiGennaro Reed, 2015). Moreover, correct implementation of health protocols by newly hired DSPs decreases the likelihood of staff and consumer injuries or potentially life-threatening risks. Employees in the present organization are regularly responsible for implementing health procedures that can impact a consumer’s well-being, such as administering medications, using adaptive equipment, or preparing pureed foods for individualized diets. Thus, implementation errors with these or other health procedures may have numerous implications for consumer health. By adopting systems to support peer trainers that ensure high-quality training, such potentially dangerous situations may be prevented.

Our findings contribute to the literature in several ways. First, this program description addresses the limitations of Erath and colleagues (2020, 2021) and extends their work to the natural environment. To our knowledge, this is the first study to describe a systematic line of research evaluating the effects of a peer training program using a consultation model within a human services organization. Taken together, this line of work provides information about one systematic approach that organizational leaders or trainers could adopt to improve peer trainer integrity.

Second, our findings provide additional support for the validity of the PDC-HS. Although previous research has demonstrated the utility and validity of the PDC-HS (e.g., Carr et al., 2013), the interventions used in those studies were not implemented across an entire organization for an extended duration or with many employees. The present findings support the long-term adoption of systems-level interventions informed by the PDC-HS. Moreover, much of the PDC-HS research describes an assessment wherein supervisors were interviewed about variables influencing staff behavior. Relatively few studies have used the PDC-HS to interview staff about the barriers they are experiencing (see Merritt et al., 2019). Our findings lend support for directly engaging the staff who will experience the intervention to determine the barriers to their performance.

Third, the delivery of a monetary incentive exchangeable through a gift-giving platform was a creative and affordable way to provide preferred consequences for desirable peer trainer performance. That is, peer trainers did not receive the monetary incentive in their paycheck and instead were able to purchase items from www.snappy.com. The human resources director initially proposed this procedure primarily due to process challenges associated with paying monetary bonuses to non-exempt staff. The present findings support the use of a monetary incentive delivered in this manner (i.e., exchangeable for various goods on a gift-giving platform). Interestingly, the costs to the organization were minimal, as many peer trainers had not spent their earnings at the time of the writing of this manuscript. Although studies involving monetary incentives and employee token economies have been published in the literature (e.g., Fox et al., 1987; Vergason & Gravina, 2019), few have been conducted within human services settings. Luiselli et al. (2009) evaluated the effects of an intervention containing a probabilistic financial incentive on chronic absenteeism of human services employees and showed reductions in the percentage of daily staff absences. Thus, the present findings contribute to this small literature and offer an affordable option for organizations to adopt.

We observed variability in responding during the monetary incentive plus feedback and prompt phase. This variability may have been caused by the differing numbers of observations conducted during each observation period. That is, the number of observations depended on the number of DSPs hired during that timeframe. For instance, if one observation period included two trainings and one peer trainer did not meet criterion, the percentage of peer trainers who met criterion would have been 50%. In another period, 10 trainings may have occurred; if one peer trainer did not meet criterion, the percentage of peer trainers who met criterion would have been 90%. In either situation, one peer trainer did not meet criterion, but the overall percentage of trainers who met criterion varied substantially. The observed variability may also have been caused by the regular influx of new peer trainers due to staff turnover, promotions, or professional development opportunities. Thus, the peer trainers whose performance is captured in the data were not a static group, which could have introduced variability in the data.

Integrity improvements were observed during the program announcement phase, despite the monetary incentive not yet being in place. This pattern is similar to the one observed in Luiselli et al. (2009) when an informational brochure was distributed to announce an upcoming intervention. The informational brochure used by Luiselli et al. (2009) described the lottery intervention, eligibility, start date, and qualification guidelines. In the present study, the consultants had informal conversations with some peer trainers and supervisory staff about the forthcoming monetary incentive and the qualification guidelines. The start date for the program was not relayed at that time; therefore, peer trainers may have thought the monetary incentive was already in place.

Limitations and Future Research

There are several limitations worth noting. First, due to the nature of the ongoing consultative relationship and restrictions imposed by the COVID-19 pandemic, there is limited experimental control. Although we returned to baseline for a single observation period, and the percentages were low, our procedures were not designed to assess the effects of the intervention in a tightly controlled experimental fashion. Despite this limitation, we were able to replicate our findings within this program evaluation. A future study should adopt a behavior-analytic research design, such as a multiple-baseline design across settings or a withdrawal design, to draw causal conclusions about the effects of the packaged intervention on peer trainer integrity. Second, we did not measure consumer outcomes. The primary reason to ensure training is implemented with integrity is to enhance the likelihood that newly hired DSPs will accurately implement procedures, particularly health-related protocols, with consumers. We were unable to measure consumer outcomes with the current consultation resources given the number of peer trainers, employees, and programs affected. Future research should evaluate the impact of a peer training program on consumer outcomes. Relatedly, we did not measure whether our efforts were associated with reductions in staff turnover, which would be an ideal secondary outcome. Because the pandemic has caused instability in the workforce for multiple years, we did not conduct this analysis. Additionally, we conducted PDC-HS interviews with only four staff members, whose reported barriers may not be representative of those experienced by all staff. Future research should determine how many assessments are necessary to ensure the representativeness of the results. Finally, our observations relied on iLink Support Technologies® (DiGennaro Reed & Reed, 2013; Strouse & DiGennaro Reed, 2021), which is unique to the service setting where consultation occurred. Most organizations do not offer smart-home services with hardware and software of this type, which would make remote observation difficult if not impossible. The extent to which our procedures and findings generalize to other providers is unknown. Thus, future research should examine the external validity of these procedures.