Individuals with autism spectrum disorder (ASD) and other developmental disabilities benefit from instructional procedures that involve the systematic use of instructional prompts (Green, 2001; MacDuff et al., 2001). A prompt is any supplemental antecedent stimulus that increases the likelihood of the target response occurring in the presence of the relevant discriminative stimulus (SD; Cooper et al., 2019). Response prompts, such as gestures, models, and physical guidance, involve the behavior of an instructor (e.g., touching the correct item in an array) and are likely among the most commonly used prompts in clinical and educational settings.

Ultimately, the instructor must fade any prompt that is added to aid the client in learning a given skill. Researchers have developed several prompt-fading procedures to accomplish this removal of prompts, including least-to-most (LTM) prompting, most-to-least (MTL) prompting, prompt delay (PD), most-to-least prompting with a prompt delay (MTLD), and graduated guidance. These prompt-fading procedures have been evaluated in numerous studies to determine their relative effectiveness and efficiency (for reviews, see Demchak, 1990; Libby et al., 2008).

In general, each of these procedures can promote the acquisition of various skills; however, results of studies comparing the relative efficacy of these procedures have been idiosyncratic across learners (Demchak, 1990; Gast et al., 1991; Libby et al., 2008; MacDuff et al., 2001; Riesen & Jameson, 2018; Walker, 2008; Wolery et al., 1992). Similar results have been reported in studies comparing different variations of other skill acquisition procedures, such as differential reinforcement (e.g., Boudreau et al., 2015) and error correction (e.g., McGhan & Lerman, 2013). Such comparisons have sometimes been conducted within the context of assessment-based instruction, during which the experimenters evaluated participants’ responding under two or more interventions and used the results to guide the selection of individualized interventions (see Kodak & Halbur, 2021, for a description of this approach). For example, Seaver and Bourret (2014) assessed the relative effectiveness of different response prompts (verbal plus gestural, model, and physical) and prompt-fading strategies (LTM, MTL, and progressive prompt delay) to identify the most efficacious prompting strategy for 10 participants. The authors subsequently compared the identified most efficacious and least efficacious prompting strategies while teaching vocational skills. Results demonstrated the generality and validity of the assessment results.

Controlled assessments like that described by Seaver and Bourret (2014) represent a rigorous, empirically validated approach for identifying effective, individualized interventions for clients. However, these assessments can require extensive resources (e.g., time, staff) and expertise (e.g., identifying appropriate interventions to include in the assessment, conducting a logical analysis to equate targets across conditions; Kodak & Halbur, 2021). These barriers reduce the likelihood that behavior analysts will incorporate assessment-based instruction into their everyday practice. Such an outcome is not ideal, as behavior analysts are ethically obligated to incorporate assessments into their service delivery and to increase their competence in current best practices (Behavior Analyst Certification Board [BACB], 2020). Furthermore, the application of ineffective prompting strategies may lead to excessive client errors, delay acquisition of new skills, and create an aversive learning environment for the client.

When controlled assessments are not readily feasible due to insufficient resources or expertise, an alternative approach is to synthesize research findings and recommendations from the literature to create a centralized resource that behavior analysts could use to determine an optimal intervention. This centralized resource would serve as an initial starting place for behavior analysts as they work to address barriers to assessment-based instruction and determine their client’s need for such an assessment. Numerous authors have provided extensive guidelines detailing conditions under which a given prompt-fading strategy is recommended or contraindicated. These guidelines include considerations about characteristics of the client (e.g., response tendencies, current behavioral repertoire, tolerance of different prompts), characteristics of the target skill (e.g., novelty of the target skill, degree of difficulty of motor responses), and characteristics of the teaching environment (e.g., possibility of providing physical prompts). Many of these guidelines come from controlled assessments (e.g., Gast et al., 1991; Libby et al., 2008) later summarized in broader literature reviews (e.g., MacDuff et al., 2001; Wolery et al., 1992), along with guidelines that make intuitive sense (e.g., don’t use physical prompts for clients who resist them; Seaver & Bourret, 2014). However, no research to date has attempted to synthesize all the available considerations and recommendations into a practical resource that can guide behavior analysts in the evaluation and selection of prompting strategies.

The development of a resource that behavior analysts could reference to guide their decision-making process would seem beneficial. To take an evidence-based approach to the selection of prompts and prompt-fading strategies for individual clients, behavior analysts must first evaluate and then synthesize numerous variables related to the characteristics of the client, the target skill, and the teaching environment. A decision-making tool that eases this process through checklists or diagrams and could be referenced during ongoing clinical practice might be particularly helpful for relatively new or inexperienced behavior analysts. Geiger et al. (2010), for example, developed a decision-making tool to guide behavior analysts through the selection of function-based treatments for escape-maintained problem behavior. The authors synthesized treatment recommendations from the literature into a hierarchical decision-making tool. The tool guided behavior analysts through a series of yes/no questions surrounding ethical, safety, environmental, and practice considerations that resulted in differential treatment recommendations based on the answers. An important next step after developing a decision-making tool is to determine its ease of use (i.e., its usability) when behavior analysts apply the tool to their work. Both the usability and utility of this decision-making tool were further evaluated in subsequent studies (Hoffmann et al., 2020; Saini et al., 2017), providing support for the benefits that such tools may offer behavior analysts when selecting interventions for their clients.

Deochand et al. (2020) recently published a decision-making tool designed to guide behavior analysts through a functional analysis risk assessment. The authors developed the tool by synthesizing the literature on risk mitigation for functional analyses and enlisted a group of 10 behavior analysts specializing in the assessment and treatment of severe problem behavior to review the tool. The expert group concluded that the tool was best suited as a supplemental aid for behavior analysts in their early careers. However, the authors did not evaluate whether behavior analysts would find their tool easy to use and noted that such an evaluation was an important next step in determining the efficacy of the tool.

Given the potential benefits of decision-making tools, the purpose of the current study was to develop and test a tool to guide behavior analysts in selecting appropriate prompting strategies for clients as a first-line approach when establishing programs. The tool, called the Systematic Worksheet for the Evaluation of Effective Prompting Strategies (SWEEPS), includes a series of worksheets, datasheets, flowcharts, and supplemental instructions that produce recommendations for prompting strategies to teach a given skill to a given client. In this article, we first describe the steps taken to develop the SWEEPS. We then present results of a controlled evaluation of its ease of use by behavior analysts when selecting prompts and prompt-fading strategies for simulated and actual clients.

Phase 1: Tool Development

Literature Search

The first step in developing the SWEEPS was to search the literature for relevant articles. The first author conducted the literature search after meeting with the second author to determine the search parameters. The literature search was conducted in the APA PsycINFO and ERIC databases using the keywords “prompts,” “prompt-fading,” “prompting strategies,” “comparison,” and “autism spectrum disorder/autism/ASD” and included all articles published before December 2018. This produced 5,803 search results. We first reviewed these search results based on their titles, abstracts, and discussions to identify studies and literature reviews focused on the comparison and recommended use of various prompting strategies. We considered an article to have this focus if it either compared two or more prompt-fading strategies or provided recommended applications for them. We then examined these results to identify those that provided clinical recommendations for the use of one or more types of response prompts and prompt-fading strategies, with a specific focus on vocal, gestural, model, and physical prompts and on LTM, MTL, PD, MTLD, and graduated guidance. We selected these types of prompts and prompt-fading strategies for inclusion in the SWEEPS because they are included most frequently in the research literature. If an article contained recommendations for additional types of prompts or prompt-fading strategies, we omitted those recommendations from the SWEEPS. For example, the literature review conducted by Cengher et al. (2018) described additional types of stimulus prompts and prompt-fading strategies (simultaneous prompting and no-no prompting) that are less commonly found in the literature. This process resulted in a total of 21 articles (denoted by * in the reference section).

Literature Review

After identifying relevant articles, we reviewed the articles to compile a list of recommendations for using different prompts and prompt-fading strategies. In general, we identified recommendations in two ways. First, we reviewed imperative statements in which the author(s) indicated that the instructor “should” or “should not” (or comparable phrasing) apply a specific prompt or prompt-fading strategy under a certain set of conditions. Second, we reviewed the authors’ discussion of the results and potential extensions of the observed outcomes. For example, Libby et al. (2008) provided a series of “best practice” recommendations in their discussion based on their results as well as previous research, including considerations for choosing among LTM, MTL, and MTLD prompting based on clients’ past rates of acquisition. We then organized the recommendations by type of prompt and prompt-fading strategy. For example, we grouped together all recommendations about the use of physical prompts. One exception was that we grouped all recommendations for the use of model prompts with those for gesture prompts, because both likely require similar prerequisite skills (e.g., attending and imitation). Table 1 summarizes the recommendations identified for the SWEEPS with select supporting citations. Recommendations without supporting citations were based on reasonable assumptions about environment–behavior relations.

Table 1 Prompting Strategy Recommendations

Literature Recommendation Synthesis

The next step was to synthesize the recommendations so that we could create a decision-making worksheet with associated flowcharts. We began by developing yes/no questions for each recommendation. The first and second authors developed these questions together. For example, we developed the question, “Does the skill require motor responses that are difficult for the client?” in reference to the recommendation to use graduated guidance to teach skills that include motor responses that are difficult for the client. In some cases, we developed a single question for a group of similar recommendations to reduce redundancy. For example, we developed the question, “Does the client tend to learn new skills relatively quickly or slowly?” in reference to several recommendations related to the pace at which the client typically acquires new skills.

We then divided the questions into two sections, one related to the selection of prompts and the other to the selection of prompt-fading strategies. Within the second section, we ordered the questions so that the most global questions appeared before questions tied to more specific recommendations. For example, the questions, “Does the client have experience with this skill or other similar skills?” and “Have you seen the client do the skill independently before?” appear before the question, “Does the client tend to wait for prompts before responding?” We organized the questions in a numbered-list format with a space to mark responses to each question aligned to the right-hand side of the page. The response options for each question were “Yes,” “No,” “Unsure,” and “N/A” (i.e., not applicable). We included the “Unsure” response option because the behavior analyst completing the worksheet will likely not have sufficient information to definitively answer every question in one sitting. As the behavior analyst completes the worksheet, they can identify how many variables require further assessment and can plan to obtain that information. To aid in this assessment process, we developed a corresponding collection of written instructions and sample datasheets for every question on the worksheet to which a behavior analyst may initially be unsure of a definitive answer. These materials are described in more detail below. The main SWEEPS worksheet is available in the Supplemental Materials.

Next, we developed flowcharts that behavior analysts use after answering the questions on the worksheet. The flowcharts contain the yes/no questions along pathways leading to specific recommendations. For example, after answering “yes” to the question, “Does the skill require motor responses that are difficult for the client?” the flowchart instructs the user to select graduated guidance as the prompt-fading strategy. The first page of the flowchart for selecting the prompt-fading strategy is shown in Fig. 1. The full prompt-fading strategy flowchart, the remaining flowcharts for the selection of types of prompts, and additional SWEEPS materials can be found in the Supplemental Materials.

Fig. 1 Flowchart for Selecting the Appropriate Prompt-Fading Strategy
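To illustrate how one branch of this flowchart logic operates, the minimal sketch below (in Python) encodes the graduated-guidance example described above; the question key, answer format, and function name are hypothetical placeholders rather than the actual SWEEPS materials.

```python
# A minimal, hypothetical sketch of one branch of the prompt-fading flowchart.
# The question key ("difficult_motor_responses") and answer format are
# illustrative assumptions, not the authors' actual worksheet items.
def recommend_prompt_fading(answers):
    if answers.get("difficult_motor_responses") == "yes":
        return "graduated guidance"
    # The remaining worksheet questions would be evaluated here, in the order
    # they appear on the flowchart, before a recommendation is reached.
    return None

recommend_prompt_fading({"difficult_motor_responses": "yes"})  # -> "graduated guidance"
```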

Expert Review and Feedback

We asked two doctoral-level BCBAs (BCBA-Ds) who had extensive experience in the research and clinical application of prompting strategies and who were unaffiliated with the study to review and provide feedback on the recommendations (see Table 1) included in the SWEEPS via a Qualtrics survey. The survey listed each recommendation included in the SWEEPS separately and asked the expert reviewers to indicate whether they agreed or disagreed with the appropriateness of the recommendation and why. Both respondents agreed with the appropriateness of the recommendations included in the SWEEPS. In addition, we modified various recommendations for clarity based on the respondents’ feedback.

Instructional Materials and Datasheets

We created several supplemental materials to accompany the main SWEEPS worksheet and flowcharts, including an instructional manual, a reference guide for determining definitive answers for any questions behavior analysts answered as “unsure” on the worksheet, and accompanying datasheets. We also developed two PowerPoint presentations to orient users of the SWEEPS to (1) each of the prompt-fading strategies included in the SWEEPS and (2) the navigation and use of the SWEEPS materials. We subsequently created instructional videos of these two presentations in place of the in-person instructional materials (see below) in response to the COVID-19 pandemic and to evaluate whether training on how to use the SWEEPS materials could occur with minimal direct instruction from a trainer.

The instructional manual comprised three sections: (1) best practices for discrete trial training (DTT), (2) descriptions and procedures for each of the types of prompts and prompt-fading strategies included in the SWEEPS, and (3) recommendations and accompanying rationales related to selecting an appropriate prompting strategy. We compiled the information in each of these sections directly from the literature we collected previously and from additional resources related to the best practices of DTT.

The reference guide for determining answers for questions marked as “unsure” included a set of individual materials for each question. Each question had a document with bulleted instructions on how to design a brief assessment, a corresponding sample datasheet, instructions for data collection, and instructions on how to update the worksheet once a yes/no answer was reached. For example, the document related to determining if the client could imitate vocalizations (i.e., echoic behavior; see Supplemental Materials) included brief instructions on how to select targets and how to set up assessment trials either in a more traditional DTT context or during a naturalistic context, such as when prompting a mand. The data collection instructions and response operational definitions were located below these instructions. The corresponding datasheet provided a place to write in the targets assessed, the context in which the assessment occurred, and the client’s response (i.e., correct or incorrect response). Finally, the bottom of the datasheet included instructions on how to interpret the collected data and how to subsequently re-mark the worksheet. Taken together, these materials served as a model for how to conduct brief assessments to obtain definitive answers to complete the main worksheet. We included a note along with these materials highlighting that the behavior analyst may need to conduct additional assessment trials and sessions to determine these definitive answers if a client’s initial responding was not relatively clear and did not meet the operational scoring definitions.

Both training presentations lasted approximately 90 min. The first presentation covered information on basic DTT procedures, response prompts, and the five prompt-fading strategies. The presentation also included opportunities for the trainer delivering the presentation to model each of the prompt-fading strategies. The second presentation covered all of the SWEEPS materials, the recommendations included in the SWEEPS, and two demonstrations of how to use the SWEEPS.

We created the instructional videos using the built-in recording and screen-sharing features on Zoom. All the content in the videos was identical to the in-person instructional materials. In addition, we included video modeling with voice-over instructions (VMVO) of the implementation of each of the five prompt-fading strategies in place of in-person models.

Pilot Testing of the SWEEPS

The final step in developing the tool was to arrange for several groups of special-education teachers attending a summer teacher-training program to pilot test the SWEEPS materials. The teachers were recruited by offering them the opportunity to receive additional training on the implementation and selection of different prompting strategies with their learners in the context of the teacher-training program. The goal was to evaluate the ease of use, readability, and comprehensibility of the instructional manual, worksheet, flowcharts, and associated materials. We observed the teachers’ performance and solicited feedback as they attempted to apply the materials to different client scenarios. We then modified and refined the materials based on this feedback, which primarily resulted in changes to the written portions of the SWEEPS, the instructions on the supplemental materials, and the visual layout of the flowcharts.

Phase 2: Application Test

Formal testing of the SWEEPS commenced after the initial pilot testing in Phase 1 resulted in a final version of the SWEEPS. In Phase 2, we recruited graduate students to apply the SWEEPS when working with both simulated and actual clients. The goals of this phase were to evaluate the tool’s ease of use via a nonconcurrent multiple-baseline design across participants, to assess social validity, and to determine the extent of training needed for behavior analysts to apply the tool with a high degree of procedural integrity. This latter goal was accomplished by including both live and video-based training formats.

Participants and Setting

Eight graduate students beginning their first semester in an on-campus master’s-level behavior analysis program (Madeline, Bonnie, Celeste, Renata, Jane, Tasha, and Erin) or an online master’s-level behavior analysis program (Cassidy) participated. Four participants identified as white; one identified as Vietnamese American; one identified as Hispanic and Mexican American; one identified as white and Hispanic American; and one identified as Indian, Afro-Caribbean, and other. Participants were recruited from graduate students entering a master’s program in behavior analysis, with the exception of Cassidy, who was recruited from a local clinic where she was completing her supervised experience requirements for board certification. To be eligible, participants had to report receiving little to no formal training on the selection of response prompts and prompt-fading strategies. Other than the training provided in the study, none of the participants received training on these topics prior to or during the course of the study. Before the study, participants completed a questionnaire on their experiences implementing and selecting different types of prompts and prompt-fading strategies. Table 2 summarizes each participant’s number of years of experience working with individuals with ASD and their current position. Participants’ responses to the questionnaire are available in the Supplemental Materials. Madeline, Bonnie, Celeste, Renata, and Jane received the in-person (live) format of the training, and the three remaining participants received the video-based format. Each participant received a $50 gift card contingent upon completing the study.

Table 2 Participants’ Years of Experience Working with Clients with ASD and Current Position

Eight clients diagnosed with ASD who were receiving services at a university-based clinic participated in generalization sessions. The clients ranged from 4–10 years of age, exhibited a variety of response tendencies (e.g., prompt dependency, mild topographies of problem behavior following an error), but engaged in minimal severe problem behavior (e.g., self-injury or aggression) that would have prevented instructors from safely conducting teaching sessions. The clients participated in sessions as part of their routine clinical services.

Sessions for the in-person training format were conducted in empty therapy rooms equipped with one-way observation windows and video-recording equipment at a university-based clinic where the clients who participated received behavior analysis services. All session rooms contained a table, two chairs, and the relevant materials needed to conduct each session (e.g., instructional materials, training binders, datasheets). Sessions for the video-based training format were conducted via a licensed account on a HIPAA-compliant videoconferencing platform (Zoom). The experimenter and participant attended all sessions from their respective living spaces in a quiet room with a stable internet connection. All participants and the experimenter used a laptop with a built-in webcam that was capable of running Zoom. The experimenter recorded all sessions using the built-in recording feature on Zoom and uploaded all video recordings to an encrypted server immediately following the session.

Materials

Training Binder

Each participant received a three-ring binder that contained the materials (i.e., written instructions, flowcharts, and data-collection sheets) for each of the five prompt-fading strategies included in the decision-making tool. Once the participant began the training, the experimenter placed the SWEEPS in the binder (see SWEEPS Training below). The experimenter scheduled a meeting with participants in the video-based training format to give them all of the necessary research materials (e.g., instructional materials, training binder, datasheets) before the first session. Each participant’s training binder initially contained only the datasheets necessary for practicing each of the five prompt-fading strategies; all other instructional materials and datasheets were enclosed in sealed envelopes. The experimenter instructed the participant not to open these envelopes until told to do so and, at the corresponding point in the study, to open each envelope (while in view of the camera) and place the enclosed materials in their training binder.

Simulated Client Profiles

Participants received a different written client profile detailing the target skill and the response variables of a simulated client in each session, with at least one variable listed as “unsure” (e.g., the behavior analyst was not certain if the client could imitate motor movements). The purpose of this profile was to give the participant information about a client that they may serve in their practice. The profile included an uncertainty about a particular variable or response tendency so that the experimenter could evaluate the participant’s assessment of factors that would be necessary for making decisions about an appropriate prompting strategy. The profile was presented in a bulleted list that first described the target skill (e.g., selecting named pictures from an array), instructional setup (e.g., three picture cards placed in an evenly spaced horizontal line in front of the client or webcam), and correct client response (e.g., selecting the named picture from the array). The profile then listed the client variables in the order they appeared on the SWEEPS. An experimenter or trained graduate student not participating in the study served as the client and responded in a manner consistent with the client profile throughout each session.

An example of one client profile can be found in the supplemental materials. In this example, the participant was tasked with teaching Dominic (a simulated client) to fold a towel. The client profile stated that Dominic (1) cannot imitate motor movements, (2) has never worked on this skill before, (3) has never been observed to fold a towel independently, (4) would find this motor task to be difficult, (5) does not engage in challenging behavior or work more slowly when he responds incorrectly or must wait for a prompt, (6) learns new skills relatively slowly, (7) is not prompt dependent, and (8) does not tend to respond incorrectly before a prompt is provided or without attending to the materials. The client profile also stated that the participant was not sure if Dominic resists, avoids, or overly enjoys physical prompts.

The experimenter created three sets of 12 client profiles. Each of the 12 profiles in a set corresponded to 1 of the 12 different outcomes that could occur on the SWEEPS prompt-fading strategy flowchart. These profiles also sampled various combinations of response prompts to ensure that each response prompt was appropriate or contraindicated an equal number of times. This provided enough client profiles to avoid using the same profile more than once with a participant. Thus, a participant encountered the same outcome multiple times, but they never encountered the same client name and target skill more than once.

The experimenter randomized the order of client profiles for each participant; however, the order was constrained such that the first five client profiles each participant received resulted in a recommendation for each of the five prompt-fading strategies. Therefore, each participant encountered at least one client profile during baseline that resulted in a recommendation for each of the five prompt-fading strategies. Each participant also encountered client profiles that fit each prompt-fading strategy at least once during posttraining.
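To illustrate the constrained randomization described above, the following minimal sketch (in Python) generates one such ordering; the profile identifiers, strategy abbreviations, and rejection-sampling approach are illustrative assumptions rather than the experimenters' actual procedure.

```python
import random

# Illustrative sketch of the constrained randomization described above.
# Profile IDs, strategy labels, and the resampling approach are assumptions
# made for illustration only.
STRATEGIES = {"LTM", "MTL", "PD", "MTLD", "GG"}

def order_profiles(profiles):
    """Shuffle (profile_id, recommended_strategy) pairs so that the first five
    profiles yield one recommendation for each of the five strategies."""
    profiles = list(profiles)
    while True:
        random.shuffle(profiles)
        if {strategy for _, strategy in profiles[:5]} == STRATEGIES:
            return profiles

# Example with a hypothetical 12-profile set (at least one profile per strategy).
example_set = [(f"profile_{i + 1}", s) for i, s in enumerate(
    ["LTM", "MTL", "PD", "MTLD", "GG", "LTM", "MTL", "PD", "MTLD", "GG", "LTM", "MTL"])]
ordered = order_profiles(example_set)
```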

Generalization Client Profiles

All participants except Tasha, Erin, and Cassidy (who received the video-based training) participated in generalization sessions with actual clients. Tasha, Erin, and Cassidy could not participate because the COVID-19 pandemic restricted access to clients at the time of the study. Before a generalization session with a client, the experimenter and the client’s BCBA met to develop a written client profile. The client’s BCBA supervised the client’s routine clinical services and did not provide specific, direct guidance to the participants on procedures related to the study. The BCBA’s only role in the current study was to complete the SWEEPS with the client to develop the written profile. An experimenter and the client’s BCBA independently completed the SWEEPS for the client to determine the variables that would be listed in the client profile. Both the experimenter and the client’s BCBA conducted assessments with the SWEEPS materials as needed if they were uncertain about any client variable or response tendency. After they both completed the SWEEPS, the experimenter and BCBA compared their results. If they scored a different outcome on one or more items (i.e., one recorded “Yes” whereas the other recorded “No”), they reviewed their data, conducted additional assessments with the SWEEPS materials as needed, and remediated these discrepancies.

Teaching Stimuli

The participant had access to three bags of stimuli in each session. One bag contained all targeted teaching stimuli. The experimenter provided the relevant bag of teaching stimuli to the participant before each session. The other two bags contained stimuli that participants needed to assess unsure variables specified in client profiles. Each of these bags contained one task requiring a motor response (e.g., ring stacker or string and beads; hereafter referred to as a motor task) and several pictorial stimuli that the participant could use to assess vocal responding. None of the stimuli in these bags were listed as targeted instructional materials in a client profile (e.g., none of the client profiles required the participant to teach the client to complete a ring stacker). The experimenter told the participant that the items in the “Known” bag were tasks that each client had previously mastered and that the items in the “Unknown” bag were tasks that each client had not mastered. Items in the “Known” and “Unknown” bags were individualized for the clients participating in the generalization sessions. For the video-based training participants, the experimenter included these stimulus bags among the session materials and instructed the participants to place these bags next to their workspace before beginning sessions.

Response Measurement, Interobserver Agreement, and Procedural Integrity

The experimenter scored the following components as "Yes," "No," or "Not Applicable" (N/A) for each session: (1) The participant's assessment of client variables that were unknown for a given client profile (unsure variables), defined as the participant conducting at least three assessment trials in which they evaluated the specified unsure variable in the given client profile; (2) the selection of the correct type(s) of prompt(s), defined as the participant selecting at least one type of prompt that was recommended (as opposed to contraindicated) for the given client profile; (3) the selection of the correct prompt-fading procedure to teach the specified skill to the given client, defined as the participant selecting a prompt-fading strategy that was recommended (as opposed to contraindicated) for the given client profile; (4) whether the participant conducted an assessment probe using LTM prompting to determine the initial prompt level (if the prompt-fading procedure was MTL, MTLD, or prompt delay), defined as the participant conducting at least three instructional trials in which they delivered the initial instruction without a prompt and then provided subsequently more intrusive prompts contingent upon incorrect responses; and (5) the selection of the correct initial prompt level (when applicable), defined as the participant writing the type of prompt they would implement as the first prompt when teaching the skill. The correct initial prompt level was randomly preselected for each client profile. For generalization clients, the correct initial prompt level was determined before the session by the experimenter; however, if the client’s responding during the LTM probe produced a different outcome (e.g., the client previously required a full-physical prompt to respond correctly but now responds correctly to a model), this prompt level was scored as correct. The experimenter scored (4) and (5) as not applicable (N/A) if the participant selected LTM or graduated guidance as the prompt-fading strategy, regardless of whether that was the correct selection. Additional details about criteria for scoring the components can be found in the supplemental material. Finally, the experimenter also collected trial-by-trial data as a direct measure of the selection of the correct type(s) of prompt(s) and prompt-fading procedure. These implementation data were depicted as a percentage of correct implementation for each session.

Independent secondary observers collected data on the dependent variables and on the procedural integrity of the experimenters and simulated clients for 25%–43% of sessions in each phase of the study for each participant. Observers collected data by reviewing each participant's session datasheets and by observing the participant's, experimenter's, and confederate client's performance either live during the session or from video recordings of the sessions. Exact agreement was calculated by dividing the number of agreements (i.e., both circling “Yes” or “No” for the given component) by the total number of agreements plus disagreements. The quotient was then converted into a percentage by multiplying by 100. Independent tertiary observers collected data on experimenter and simulated client procedural integrity for 25%–43% of sessions in each phase for each participant for the purposes of obtaining interobserver agreement (IOA).
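Expressed as a formula, this exact-agreement calculation was:

\[
\text{Exact agreement (\%)} = \frac{\text{agreements}}{\text{agreements} + \text{disagreements}} \times 100
\]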

The mean agreement for participant evaluation and selection across all conditions was 100% for Madeline, 96.7% (range: 67%–100%) for Bonnie, 100% for Celeste, 97% (range: 67%–100%) for Renata, 100% for Jane, 94.7% (range: 67%–100%) for Cassidy, 92.5% (range: 60%–100%) for Erin, and 95% (range: 80%–100%) for Tasha.

Experimenter integrity in each session included (1) reading the session script; (2) providing the written client profile and modeling the appropriate setup, SD, and client response based on the target skill; (3) providing the correct instructional materials to the participant; (4) not providing feedback to the participant about their selection and implementation of the prompting strategy (except during feedback sessions); and (5) providing both behavior-specific praise and corrective feedback to the participant on their selection and use of the SWEEPS (only during feedback sessions). Observers collected data on the experimenter’s integrity by viewing the experimenter’s performance during the session either live or through a video recording. The experimenter’s performance of each procedural component was scored as correct, incorrect, or not applicable. Observers then calculated experimenter integrity by dividing the total number of experimenter behaviors scored as correct by the total number of scored steps and converting the quotient into a percentage by multiplying by 100. The mean experimenter integrity across all conditions was 100% for all participants, and the mean agreement on experimenter integrity across all conditions was 100% for all participants.
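In formula form, this integrity calculation (applied to both experimenter and simulated client integrity, described next) was:

\[
\text{Procedural integrity (\%)} = \frac{\text{steps scored as correct}}{\text{total scored steps}} \times 100
\]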

Simulated client integrity included (1) responding correctly according to the script for each client profile and (2) not providing feedback to the participant during any session. As with experimenter integrity, observers collected these data by viewing the simulated client’s performance during the session either live or through a video recording. The simulated client’s performance of each procedural component was scored as correct, incorrect, or not applicable. Observers then calculated simulated client integrity by dividing the total number of client behaviors scored as correct by the total number of scored steps and converting the quotient into a percentage by multiplying by 100. The mean simulated client integrity across all conditions was 100% for Bonnie, Celeste, Renata, Jane, Cassidy, Erin, and Tasha and 90% (range: 50%–100%) for Madeline. The mean agreement on simulated client integrity across all conditions was 100% for all participants.

Social Validity

We sent each participant a link to an anonymous Qualtrics survey approximately 2–4 months following completion of the study to learn about their use of the SWEEPS materials.

Procedures

Pre-SWEEPS Training

To use the SWEEPS, participants must be familiar with the types of response prompts and prompt-fading strategies that they can select when teaching skills to clients. Thus, the experimenter trained the participants to implement each type of prompt and prompt-fading strategy before evaluating their use of the SWEEPS. For the in-person training, the experimenter delivered the training presentation (described above) to each participant individually. The experimenter also provided a written manual detailing all of the procedures included in the PowerPoint as well as procedural flowcharts and data-collection sheets for each prompt-fading strategy. The experimenter described and modeled how to implement each of the prompts and prompt-fading strategies but did not describe when or why to use one prompt or prompt-fading strategy versus another. For the video-based training, participants received the training via an instructional video (see above). Next, the participant practiced each strategy in role-play with a simulated client (either a second experimenter or the primary experimenter). For the video-based training, the experimenter provided instructions on how to arrange tasks within camera view and modeled how to deliver each type of response prompt during virtual role-plays. The purpose of this practice was to familiarize participants with each of the prompt types and prompt-fading strategies and to expose them to the procedural differences. Participants practiced each prompt-fading strategy until their implementation met the mastery criterion of one 6-trial teaching session with 100% correct implementation of the prompt-fading strategy. Participants practiced one prompt-fading strategy (e.g., LTM) until their performance met the mastery criterion and then began practicing the next prompt-fading strategy (e.g., MTL). This practice continued until their implementation met the mastery criterion for all five procedures.

Baseline

The experimenter began each session by giving the participant a client profile, accompanying instructional materials, and a datasheet for recording their selections for the prompting strategy. The participant had access to materials explaining how to implement each of the five prompt-fading strategies but did not have access to any of the SWEEPS materials. The experimenter vocally described the target skill and modeled the instructional setup (SD) and correct client response for the participant. Next, the experimenter asked the participant if they would like the experimenter to read the client profile aloud to them or if they would like to read it to themselves. The experimenter then either read the profile aloud or provided the participant time to read it themselves. Following this, the experimenter instructed the participant to select the type(s) of prompt(s) and prompt-fading strategy they would use to teach the skill to the client. The experimenter told the participant that they could reference any of the materials they received previously throughout the session. The participant received a different client profile that contained a different target skill and learner variables in each session. An experimenter or a trained graduate student serving as a research assistant served as a simulated client and responded in a manner that was consistent with the client profile during the session. The experimenter told the participants that they could interact with the simulated client if they wanted to assess anything with the client.

Next, the experimenter told the participant to record their selections on the datasheet once they had selected the type(s) of prompt(s) and prompt-fading strategy they would use to teach the skill to the client. Each participant had as much time as they wanted to make their selections but was told that they would complete 5 to 10 different client profiles before training. Finally, the experimenter told the participant that they would not receive any feedback on their selections and that the experimenter could not answer any questions that were not related to the instructions. Once the participant selected the type(s) of prompt(s) and prompt-fading strategy they would use to teach the skill, the experimenter asked the participant to implement their selected prompting strategy in a six-trial teaching session. The experimenter did not provide any feedback on the implementation of the selected prompting procedure. The experimenter told the participants that they could stop the session at any point or leave the room between sessions to take a break.

SWEEPS Training

For the in-person training, the experimenter delivered the SWEEPS training presentation (described above) to each participant individually. The experimenter provided each participant with multiple copies of the SWEEPS materials and a written manual describing all of the procedures detailed in the PowerPoint. The experimenter described each component of the SWEEPS, provided the rationale for why each variable was included on the SWEEPS, and modeled the use of the SWEEPS with two example client profiles. For the video-based training, participants received the SWEEPS instructional video. Once the participant completed the video presentation, they notified the experimenter and immediately began posttraining sessions as described in the next section.

Posttraining

Sessions were identical to baseline, except that the participants now had access to the SWEEPS materials. The experimenter did not provide any feedback to the participants on their selection of the prompting strategy or their use of the SWEEPS. The experimenter conducted differing numbers of posttraining sessions with each participant (range: 5–14 sessions).

Posttraining Feedback

If a participant’s correct responding was on a decreasing trend or remained stable for three to five sessions, the experimenter conducted a feedback session with the participant. Feedback sessions typically lasted less than 5 min. In each case, the experimenter provided indirect feedback (e.g., “Make sure to follow the flowcharts carefully,” “Double-check your work,” “Be sure to use all of your materials”). Following the feedback session, the participant resumed posttraining sessions, as described above. If the participant had emitted any further errors in their evaluation or selection of the prompting strategy in subsequent sessions, the experimenter would have conducted another identical feedback session with the participant. However, this was not necessary for any participant.

Generalization Probes

For the in-person training only, the experimenter asked the participants to evaluate and select prompting strategies for a child with ASD before and after training to assess generalization. Participants completed generalization probes with two different children both during baseline and following posttraining sessions. One of the generalization probes for each participant was with a client who tended to learn new skills relatively quickly, whereas the other probe was with a client who tended to learn new skills relatively slowly and who demonstrated learning characteristics such as a lack of motor imitation or prompt dependence.

The experimenter provided the participant with both known and unknown vocal and motor tasks for the selected client. With the exception of Madeline, participants had little to no previous experience with the clients. Madeline became the primary therapist for one of her generalization clients during her posttraining session; thus, the experimenter asked Madeline to teach her client a new skill that she had not previously targeted with that client. Participants completed their generalization probes with the same clients for both baseline and posttraining sessions except for the rare occasion that the client was absent.

During the generalization session, the client was in the room playing with toys or other leisure activities. The client’s primary therapist or the experimenter supervised the client while the participant made their selections. The experimenter told the participant that they were allowed to interact with the client at any time if they would like to assess something. The client’s therapist did not give the participant any instructions on how to work with the client except to point out highly preferred items and how to manage problem behavior (this rarely occurred).

Removal of SWEEPS Materials

After completing posttraining sessions and generalization probes, participants completed additional sessions without the SWEEPS materials available. Participants who completed the in-person training experienced this condition 2 to 4 weeks after completing posttraining sessions due to a holiday break that occurred immediately after the posttraining sessions. Participants who completed the video-based training experienced this condition immediately after completing posttraining sessions. The purpose of this condition was to evaluate whether participants’ correct evaluation and selection of an appropriate prompting strategy remained under the stimulus control of the SWEEPS manual and accompanying materials or whether continued access to the SWEEPS would be needed to promote the selection of appropriate prompting strategies. Sessions were procedurally identical to baseline. Participants continued sessions until their evaluation and selection of the prompting strategy were stable or on a decreasing trend for at least three sessions (range: 4–9 sessions).

Results

Figures 2 and 3 depict the results of the live and video-based training participants’ evaluation and selection of prompting strategies across sessions (hereafter referred to as the “SWEEPS components”), respectively. Asterisks in Fig. 2 denote the generalization probes with an actual learner. Arrows denote when a participant received feedback. Data and the accompanying scoring rules on the participants’ implementation of the appropriate prompting strategy in each session are available from the second author upon request. Overall, accurate implementation of the appropriate prompting strategy was high in sessions in which participants selected the correct prompt-fading strategy and low in sessions in which they did not. Only one participant (Jane) required an additional booster training on the different prompt-fading strategies due to poor procedural integrity. Her performance immediately improved and maintained during all subsequent sessions.

Fig. 2 Performance on Each Procedural Component for Participants Who Completed In-Person Training. Notes. Numbers on the y-axis refer to the procedural components: (1) correct assessment of unsure variable, (2) correct selection of types of prompts (Steps 1 and 1a of the SWEEPS), (3) correct selection of the prompt-fading strategy (Steps 2 and 2a), (4) correctly conducted LTM probe (Step 3), (5) correct selection of the initial prompt level (Step 3). Asterisks denote generalization probes with an actual child. Arrow denotes that the participant received feedback between sessions

Fig. 3 Performance on Each Procedural Component for Participants Who Completed Video-Based Training. Notes. Numbers on the y-axis refer to the procedural components: (1) correct assessment of unsure variable, (2) correct selection of types of prompts (Steps 1 and 1a of the SWEEPS), (3) correct selection of the prompt-fading strategy (Steps 2 and 2a), (4) correctly conducted LTM probe (Step 3), (5) correct selection of the initial prompt level (Step 3). Arrow denotes that the participant received feedback between sessions

None of the participants consistently implemented any of the SWEEPS components during baseline, except for the selection of the correct type(s) of prompt(s). In posttraining, six participants consistently implemented all of the SWEEPS components without any experimenter feedback. Renata and Cassidy did not initially demonstrate correct implementation of all SWEEPS components but did so following one instance of brief, indirect feedback to use all of the SWEEPS materials and to double-check their work. In general, participants’ correct implementation of the SWEEPS components decreased relative to posttraining following the removal of the SWEEPS.

All eight participants completed the Qualtrics survey. Three respondents reported looking at (but not using) the SWEEPS and two respondents reported having used the SWEEPS since their training. One respondent who reported not using the SWEEPS wrote, “I actually plan to use the SWEEPS when I am unsure that least-to-most will be effective or whether physical prompts are aversive to my client because it's a great resource to have when you're unsure.” Another respondent noted that the “prompting strategies were difficult to generalize to the school setting” but did not elaborate further on the difficulties they encountered. A third respondent replied that the SWEEPS “has not been needed yet” for their clients, and a fourth respondent reported that they had not had the opportunity to apply the materials with their clients yet because they were implementing interventions designed by a previous behavior analyst.

Discussion

We developed a decision-making tool to guide practitioners in the first-line selection of appropriate prompting strategies to use with clients across a variety of skills. We developed the SWEEPS through a process that included conducting a literature review of relevant clinical recommendations, organizing these recommendations into subgroups (i.e., recommendations for selecting types of prompts and selecting the prompt-fading strategy), developing the worksheet and corresponding flowcharts to guide users through all the recommendations, and creating the supplementary materials (e.g., instructional manual, materials for conducting assessments when unsure about certain variables, and instructional videos). We then tested the usability of the SWEEPS materials with graduate students via both live and video-based instruction. Participants’ effective use of the SWEEPS materials was comparable across both training modalities, suggesting that the self-instructional format was sufficient to produce effective use of the SWEEPS. Although two participants (Renata and Cassidy) required feedback on their use of the SWEEPS, this feedback was brief (i.e., less than 5 min) and appears practical to include in the context of regularly scheduled supervision meetings. The next essential step in the development and evaluation of this decision-making tool is to assess its clinical utility. That is, further research is needed to determine if practitioners’ use of the tool leads to improved client outcomes.

The SWEEPS is the first attempt to synthesize information about prompt-fading strategies utilizing response prompts into a practical decision-making tool. The results of the current study also extend the literature on the efficacy of decision-making tools in training participants to engage in complex behaviors (e.g., Deochand et al., 2020; Geiger et al., 2010). Decision-making tools, such as the SWEEPS, may be a first-line assessment and intervention-planning option when more rigorous controlled assessments are not possible due to time or other resource constraints. Participants in the current study typically completed the SWEEPS in under 15 min during sessions with either a simulated or actual client. The SWEEPS also may serve as a preliminary resource for behavior analysts who have limited training in selecting prompting strategies or conducting more rigorous assessments. Although BCBAs are ethically required to maintain and increase their competency in areas such as assessments (BACB, 2020), this learning process is not a short one. Tools like the SWEEPS may therefore serve as a “bridge” resource to behavior analysts as they receive more comprehensive training. Ultimately, some clients may benefit from a controlled assessment of different prompts and prompting strategies (e.g., Seaver & Bourret, 2014) when they encounter barriers to learning with procedures recommended by the SWEEPS or with previously effective procedures.

At least in the short run, participants in the current study required access to the SWEEPS to fully select appropriate prompting strategies for clients. Complete removal of decision-making tools, such as the SWEEPS, is not necessarily a critical feature or goal of such materials. However, one potential limitation of the analysis was that participants’ performance was assessed without the SWEEPS approximately 2–4 weeks after the posttraining sessions due to a holiday break. Therefore, it is unclear if the SWEEPS was an important source of stimulus control for performance or whether the degradation in performance reflected a failure to maintain skills acquired via exposure to the SWEEPS.

An important next step is to evaluate the clinical utility of the SWEEPS. Although the SWEEPS integrates empirically based recommendations, we need to determine whether behavior analysts’ use of this decision-making tool ultimately improves client outcomes (i.e., produces more rapid acquisition of skills or reduces levels of problem behavior) relative to their existing approaches for selecting prompts and prompt-fading strategies. Until these and other data are collected, conclusions about the utility of the SWEEPS should remain tentative. One measure of utility we observed in this study was the time it took participants to complete the SWEEPS in each session. Although participants required 20–30 min to complete the SWEEPS in some initial sessions, their completion time decreased to about 10 min as they continued to use the SWEEPS materials. However, the participants had written client profiles that included most of the information required to complete the SWEEPS. This may simulate a scenario in which the behavior analyst is highly familiar with the client and their learning tendencies. Behavior analysts who are completing the SWEEPS for clients with whom they have minimal experience may require more time to do so.

Despite an expansive literature that provides recommendations for the optimal uses of various response prompts and prompt-fading strategies, conclusive empirical demonstrations of these recommendations are still needed. For example, additional research should investigate the efficacy of prompt delays for clients exhibiting prompt dependence. We included this recommendation in the SWEEPS because prompt delays may provide clients with more opportunities to respond independently relative to other prompt-fading strategies (Touchette, 1971). Recent studies have evaluated treatments for prompt dependence incorporating procedures such as differential reinforcement (e.g., Cividini-Motta & Ahearn, 2013; Vladescu & Kodak, 2010). Thus, prompt delays should not be considered an intervention to address prompt dependence but rather a prompting strategy that can be combined with other procedures to address the problem.

Because the relative effectiveness of prompt-fading procedures that use response prompts is typically idiosyncratic across clients, broad recommendations about the application of prompt-fading strategies must remain tentative. Two potential solutions could address this problem. First, researchers could investigate the variables underlying clinical recommendations (e.g., a client’s speed of learning) as independent variables. That is, researchers could evaluate the degree to which LTM and MTL prompting are more effective and efficient for “fast” learners relative to “slower” learners. Second, researchers could continue to develop individualized skill assessments to identify appropriate instructional procedures for each client (e.g., Seaver & Bourret, 2014).

Additional limitations should be noted. A limitation inherent in any decision-making tool is that it is not possible (or at least not practical) to include every consideration from the literature. For example, the SWEEPS did not include guidance on the selection or use of stimulus prompts, differential observing responses (DORs), or supplemental error-correction procedures that may be beneficial for some learners. One addition that could be made to the SWEEPS is the inclusion of considerations for using stimulus-prompting procedures, which have been demonstrated to be effective across a diversity of clients and skills (see Cengher et al., 2018, for a review). However, the dilemma in creating a decision-making tool is to include enough information to guide appropriate decision making while constraining the content so that the instrument remains practical.

Another limitation is that decision-making tools capture best-practice recommendations at a specific moment in time. Literature reviews and research on the use of various prompting strategies continue to be published every year (e.g., Cengher et al., 2020; Chazin & Ledford, 2021; Schnell et al., 2020). Although the basic recommendations pertaining to the use of these prompting strategies have remained generally consistent, future research may alter best-practice guidelines. As such, this tool may need periodic updating.

Another limitation of this study was the low IOA scores obtained for participants’ correct evaluation and selection in a few sessions for some participants (Bonnie, Renata, Cassidy, Erin). One factor that contributed to these scores was the small number of variables measured in some sessions; that is, only three to five variables were scored in these sessions. Thus, a disagreement on one variable, such as whether the participant conducted the correct number of assessment trials of the uncertain characteristic in the client profile, had a large effect on the agreement score when only two other variables were scored. This effect was minimal in that it typically occurred in only one or two sessions per participant. A final limitation is that not all of the participants continued to use the SWEEPS in their day-to-day work despite reporting that they were likely to continue using it. Future researchers should investigate variables in graduate education and clinical practice that may create contingencies that compete with the use of resources rated as highly favorable.

Despite these limitations, decision-making tools like the SWEEPS and other self-instructional materials are worthwhile avenues for researchers and clinicians to explore as ways to disseminate behavior-analytic procedures effectively and efficiently to professionals both inside and outside of the field. Until client learning outcomes are known, however, conclusions about the SWEEPS (and other comparable tools) should remain tentative. Regardless of their efficacy, decision-making tools are not (and cannot be) a substitute for effective training for current clinicians and comprehensive behavior-analytic training programs for graduate students. Instead, they serve as a supplemental resource for behavior analysts to reference as they develop effective instructional programs for their clients.