Participants
Participants for this study were part of a multi-site randomized controlled trial testing the effectiveness of ASSIST. Eligibility criteria for the study were as follows: (a) parent or legal guardian of a youth with ASD who was 16–26 years of age; (b) parent provided medical, psychological, and/or educational documentation of their youth’s ASD diagnosis; (c) parent was able to attend the 12-week group program at the intervention site on the day and time it was delivered (either Monday or Wednesday evening, depending on the site); and (d) youth had a lifetime score on the Social Communication Questionnaire (SCQ; Rutter et al., 2003)—a parent-report screener for ASD—indicating the likelihood of an ASD diagnosis.
Ninety-one participants were assigned to the treatment group and are the focus of this report. They were distributed approximately evenly across the three study sites, located in three U.S. states (29 in IL, 33 in TN, 29 in WI). Demographic characteristics of the sample are presented in Table 1.
Table 1 Participant demographic information (n = 91)
Procedures
After baseline data were collected via interview and questionnaire, participants were randomized, and the treatment group began the 12-week program. Attendance was taken at each ASSIST session. Participants who were unable to attend an ASSIST session had access to a password-protected website (along with the rest of the treatment group participants) on which they could watch the session recording. Website analytics were gathered to record which participants engaged with the online video content. Treatment fidelity was assessed by a study team member for each ASSIST session; for about one-quarter of the sessions (24%), a second team member also assessed treatment fidelity so that inter-rater agreement could be determined.
After each group finished the ASSIST program, participants were emailed a link to a summative evaluation form. We allowed these forms to be completed anonymously to facilitate honest evaluation and to ensure that participants’ responses were not influenced by relationships they may have developed with project staff during the program. About two-thirds of participants (68%; n = 62) completed the summative evaluation.
Intervention: ASSIST
Development
Over the course of a year, the VAP-T was revised into a new program, ASSIST (Advocating for Supports to Improve Service Transitions), designed to be relevant across the United States. The intervention revision team consisted of experts in ASD and adult services, representatives from adult disability service agencies and advocacy groups, and parents of adults with ASD. Participation of disability advocacy groups and parents was particularly important to the development of ASSIST, as it ensured that the issues covered were of most relevance to individuals and families and that the program was developed in a way that facilitated sustainability by community organizations. Team members resided in three states (IL, TN, WI) and thus could speak to the landscape of service systems across multiple states. During weekly videoconferencing meetings and one in-person conference, this team: (1) examined the topics presented in the VAP-T to see if all important aspects of adult services were covered; (2) divided the information about adult services and supports included in the VAP-T into what was relevant across the nation (e.g., the definition of a Medicaid waiver; how to apply for Supplemental Security Income) and what would differ based on state and locality; and (3) determined a program structure that would provide standardization while still allowing enough flexibility to address the diverse services and supports available in different regions.
The intervention revision team determined that the best structure for ASSIST would consist of short introductory videos presenting nationally relevant information, followed by presentations from local experts who contextualized the topic to their geographic area, with ample time for questions and discussion throughout. The local experts would be provided learning objectives for their presentations, developed by the research team. This format provides standardization while still allowing each site to personalize the program to its local context. For example, Medicaid waivers differ dramatically from state to state. Using this format, the introductory video contains important background about why Medicaid waivers were developed, what they mean in the context of other long-term services and supports, and general information about eligibility and portability. The local expert then describes the specific waiver programs that are available in their state and state-specific regulations.
Program Description
The resulting intervention is a 12-week program, with each weekly session lasting two hours (24 h total). An optional 13th week on secondary transition planning can be delivered to participants if relevant (i.e., if youth are still in high school at the time of ASSIST delivery). ASSIST is led by a program facilitator with experience in adult services/supports and group processes.
For this project, ASSIST was delivered using an “Initial Implementation” model (Fixsen et al., 2005, 2013); the program facilitators were members of local community disability agencies or organizations, with support provided, as needed, by the university team. Before the program started, all program facilitators were provided with an ASSIST treatment manual that described the steps to prepare for and deliver the program. The university team met with the facilitators to discuss the implementation of the program and answer questions. Program facilitators identified and invited the local experts with support from the university team. Invitation emails to local experts provided information about the program, the audience, and the learning objectives that they were asked to cover during their presentation. During the ASSIST sessions, university team members provided technical support to the program facilitators, and coaches provided feedback to the facilitators after the sessions. As program facilitators became more familiar with the ASSIST program, coaching was gradually reduced, but technical support remained.
The topic for each ASSIST session is presented in Table 2 and the schedule for a representative ASSIST session is presented in Fig. 1. Each ASSIST session opens with a group greeting and introductions (Week 1) or reflections on the previous week’s session (Weeks 2–12) as well as an icebreaker question/activity that pertains to the topic of the day. Then, the introductory video is shown. The introductory video for each session presents information about the topic at hand that is relevant regardless of a person’s locale. The content is presented by two “hosts” who are parents of young adults with ASD. In the videos, the hosts have a conversation about the topic, asking and answering questions to present key information. Each introductory video also includes the story of a family’s experience that is relevant to the topic.
Table 2 Treatment fidelity for each session by study site, and for local expert learning objectives
After the video is shown, there is time for group participants to ask questions. Then, the program facilitator introduces the local expert for that week, who gives a presentation about the local aspects of the topic at hand, guided by the learning objectives for that topic developed by the research team. Learning objectives for the introductory videos and local expert presentations for each session are presented in Supplementary Table 1.
The last part of the session is reserved for further questions and discussion, and, at the end of the session, the “tip sheet” for that session (i.e., one-page summary with key points including national and state-specific resources) is covered. As in the VAP-T, participants are asked to complete the section of a Letter of Intent that corresponds to each topic; the applicable section is presented at the very end of the session. After the session has finished, participants are asked to complete an evaluation of that session; information from that evaluation is used to determine whether there are questions that need to be addressed at the next session and to provide feedback to program facilitators and local experts.
All participants are given a binder of materials for reference that contains handouts of the information presented in each session, biographies and contact information for the local experts, tip sheets, and a Letter of Intent.
In-Person Versus Online Implementation
Because our pilot work suggested that the group experience was an important part of the VAP-T (Taylor et al., 2017a), ASSIST was developed to be an in-person, group-based intervention. The first two ASSIST groups were mid-way through the program (having completed 4 and 3 weeks, respectively) when COVID-19 restrictions on group gatherings were implemented in the local areas. After a pause of approximately two months, the programs resumed using secure video-conferencing platforms. This provided us with the unique opportunity to query participants in these two groups about their experiences taking ASSIST in-person versus online, allowing us to make small refinements to the procedures for the remaining groups to approximate, as closely as possible, the in-person group experience. The remaining four treatment groups (one in IL, one in TN, and two in WI) received the entire ASSIST series via synchronous video-conference, as restrictions on group gatherings were in place throughout the entire time that the treatment group was receiving the program.
Measures
Feasibility: Treatment Fidelity
The treatment fidelity form for a representative ASSIST session is presented in Fig. 1. The form for each session assessed both fidelity to the components of ASSIST (e.g., greeting group members; showing the video; introducing the expert speaker) and coverage of the learning objectives that the local expert was expected to address in that session (see Supplementary Table 1 for local expert learning objectives). Fidelity to each item on the form was assessed on a binary scale (yes/no); although this did not allow for rating of items that were “somewhat” covered, the simpler scale facilitated uniformity in coding across sites.
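For illustration, binary ratings of this kind reduce to a simple proportion: the number of items marked “yes” divided by the total number of items on the form. The sketch below shows one way to compute that summary; it is not the study’s analysis code, and the session labels, item names, and values are hypothetical.

```python
# Minimal sketch: summarizing binary (yes/no) fidelity ratings per session.
# Session labels, item names, and values are hypothetical, not study data.
fidelity_ratings = {
    "Session 1": {"greeted_group": True, "showed_video": True,
                  "introduced_expert": True, "covered_tip_sheet": False},
    "Session 2": {"greeted_group": True, "showed_video": True,
                  "introduced_expert": False, "covered_tip_sheet": True},
}

for session, items in fidelity_ratings.items():
    pct_met = 100 * sum(items.values()) / len(items)  # True counts as 1
    print(f"{session}: {pct_met:.0f}% of fidelity items met")
```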
Feasibility: Participant Attendance
Feasibility of parent participation was measured primarily through attendance. Attendance was taken at each ASSIST session, and attendance data were tabulated for each participant. Data were also gathered on whether a participant who was absent from an ASSIST session accessed that session on the password-protected website. For each session, participants were given a code of: attended the session (2); absent but watched the recorded session (1); or did not attend nor access the recorded content (0).
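As an illustration of this coding scheme, the sketch below tabulates per-participant engagement from the 2/1/0 codes; the participant IDs and code sequences are hypothetical, and this is not the study’s analysis code.

```python
# Minimal sketch: tabulating the 2/1/0 attendance codes described above.
# Codes: 2 = attended live; 1 = absent but watched recording; 0 = neither.
# Participant IDs and code sequences are hypothetical.
attendance = {
    "P01": [2, 2, 1, 2, 0, 2, 2, 2, 1, 2, 2, 2],
    "P02": [2, 0, 0, 2, 2, 1, 2, 2, 2, 0, 2, 1],
}

for pid, codes in attendance.items():
    live = codes.count(2)
    recorded = codes.count(1)
    engaged = live + recorded  # sessions engaged with in any form
    print(f"{pid}: {live} live, {recorded} recorded, "
          f"{engaged}/{len(codes)} sessions engaged")
```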
Feasibility: Ease of Participation and Barriers to Participation
Information about the feasibility of parent participation was also collected in the summative evaluation. In terms of ease of participation, parents were asked whether each of the following aspects of ASSIST implementation “worked for me” or “did not work for me”: session length (2 h); session day (Monday or Wednesday); session time (evenings); and number of sessions (12). If any aspects of ASSIST “did not work for [them],” participants were asked what would have worked better.
Participants were also asked about barriers to participation with the following question: “If you were unable to attend one or more ASSIST session(s), what circumstances made attendance challenging? Please select all that apply.” They could select from the following: work conflict; vacation; medical issue (non-COVID related); family emergency; no respite care; weather or other relevant emergency (e.g., power outage); COVID-related circumstances; technology issues (e.g., trouble with internet connection); and/or other.
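Responses to a select-all-that-apply item like this are typically summarized as endorsement counts per option. A minimal sketch follows, with hypothetical responses rather than study data.

```python
# Minimal sketch: tallying endorsements of a select-all-that-apply item.
# Each inner list is one participant's (hypothetical) selections.
from collections import Counter

responses = [
    ["work conflict", "no respite care"],
    ["COVID-related circumstances"],
    ["work conflict", "technology issues"],
]

counts = Counter(barrier for resp in responses for barrier in resp)
for barrier, n in counts.most_common():
    print(f"{barrier}: endorsed by {n} participant(s)")
```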
Acceptability
The summative evaluation included ten items that assessed participant satisfaction. Satisfaction items are presented in Table 3, and reflect satisfaction with the ASSIST content, with participants’ ability to attend and participate, and with the group experience. Agreement with each item was rated on a five-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree.
Table 3 Descriptive statistics for items measuring participant satisfaction with ASSIST (n = 61)
In addition, the two groups who took some ASSIST sessions in-person and some online (due to COVID-19) were given the same set of statements (see Table 3) and asked to compare their online experience to their in-person one. Response options for this set of questions were 1 = less than in-person; 2 = same as in-person; 3 = more than in-person.
Finally, in the summative evaluation, all participants were asked to rank-order each of the following models of ASSIST delivery by their preference: in-person; online (live); or recorded.
Data Analysis
To examine treatment fidelity (Aim 1), the percentage of learning objectives met for each session was calculated by study site (IL, TN, WI) and overall (combining across sites). Percent agreement was calculated for the sessions in which two research team members rated treatment fidelity. We were particularly interested in whether the local experts for each session addressed the learning objectives in their presentations, given the individualization that we expected across states and speakers; thus, we separately examined the percentage of local expert learning objectives that were covered in each session.
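For the double-coded sessions, percent agreement is simply the share of fidelity items on which the two raters gave the same rating. A minimal sketch of that calculation, with hypothetical ratings (1 = yes, 0 = no):

```python
# Minimal sketch: percent agreement between two raters' binary fidelity
# ratings for one double-coded session (1 = yes, 0 = no). Ratings are
# hypothetical, not study data.
rater_a = [1, 1, 0, 1, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 1, 0, 1]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * matches / len(rater_a)
print(f"Percent agreement: {percent_agreement:.1f}%")  # 7/8 items -> 87.5%
```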
Aim 1 also examined the feasibility of parent participation in the ASSIST program. Attendance was examined by calculating the number of sessions each participant attended (either by joining the session live or by watching recordings of the sessions after the fact). Using the summative evaluations, we calculated frequencies to examine how feasible each component of ASSIST implementation was for the participants. We also examined barriers to participation by calculating the frequencies of the circumstances endorsed by participants as making attendance challenging.
Aim 2 examined the acceptability of ASSIST as measured by participant satisfaction. For those who had both in-person and online implementation, we also investigated how the type of implementation affected their satisfaction. Mean scores for each satisfaction item in the summative evaluation were calculated, as well as the percentage of participants who reported “agreeing” or “strongly agreeing” with each satisfaction statement. Participants who received part of the curriculum in-person and part online were also asked to compare their satisfaction across the two modalities (more, similarly, or less satisfied with web-conferencing versus in-person); frequencies of responses to each satisfaction item were summarized. Finally, for all participants, we summarized the percentage who chose in-person, online (live), or recorded as their preferred mode of ASSIST delivery.
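For each satisfaction item, these summaries amount to an item mean and the proportion of ratings at 4 (“agree”) or 5 (“strongly agree”). The sketch below illustrates the computation; the item wording and ratings are hypothetical, not study data.

```python
# Minimal sketch: mean rating and % agreeing/strongly agreeing (>= 4 on a
# 5-point Likert scale) per satisfaction item. Items and ratings are
# hypothetical, not study data.
from statistics import mean

satisfaction = {
    "Content was useful to me": [5, 4, 4, 5, 3, 5],
    "I was able to participate fully": [4, 3, 5, 4, 4, 2],
}

for item, ratings in satisfaction.items():
    pct_agree = 100 * sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{item}: M = {mean(ratings):.2f}, "
          f"{pct_agree:.0f}% agreed or strongly agreed")
```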