Design
An accelerated longitudinal multiple cohort design was used to answer the research questions. This design allowed us to collect data on youth work participants at four time points (between September 2017 and December 2018, at intervals of 3–4 months) and to compare the development of three cohorts of youngsters who varied in their length of participation in youth work settings at baseline: participation for 0–6 months; participation for 7 months–2 years; and participation for 3 years or longer. The 0–6 months cohort served as the reference group in this study. By following the other two cohorts over the same period and comparing them with the reference group, we could examine the influence of participation in youth work settings. Examining and comparing several cohorts at the same time provides insight into the development of participants over a relatively short period (Galbraith et al., 2017). This is important for meeting the urgent needs of practice and for giving policy makers insight into the preventive value of youth work.
The study was conducted in close collaboration with 11 Dutch professional youth work providers from urban areas in the middle, south and east of the Netherlands. All of the providers are public welfare organizations funded by local government, and all invest in the professionalization of youth work practice. All of them apply a multi-methodic approach in reaching out to youngsters (boys and girls) in a broad age group (10–24). One organization mainly focuses on young people with severe and multiple disadvantages, while the other organizations focus on all three targeted sub-groups. The organizations are a good reflection of professional youth work in the Netherlands and actively approached the research group to conduct practice-based research that would contribute to the further professionalization of youth work. In addition to granting access to their practice for data collection, the collaboration involved the participation of youth workers in a Youth Worker Lab (N = 11) and of a diverse group of adolescents in a Youth Panel (N = 14).
The collaboration with the Youth Worker Lab and the Youth Panel ensured that the research instruments and the process of data collection were appropriate to youth work practice from the perspective of both youth workers and youngsters. During two meetings, these two groups were consulted to support the development of the questionnaire and to discuss how data should be collected. After the data were analyzed, the youth workers who participated in the Youth Worker Lab were consulted to reflect on the results, which contributed to ensuring validity.
Participants
For sample selection, a short questionnaire was first distributed to the 11 participating organizations to gain insight into the population of youngsters in each organization (age groups, level of problems) and their participation in youth work settings (length of participation, combination of methods). This information allowed a profile of the population to be drawn up for each organization, and the 11 profiles were used to compose a representative sample of adolescents from different age groups, with different levels of problems, different lengths of participation and who received different methods. The youngsters recruited to the study fulfilled four criteria: (1) participation in one of the 11 youth work organizations; (2) at least 10 years old and younger than 25; (3) sufficient command of Dutch; and (4) familiarity with at least one of the four methods offered by youth work. Thirteen youngsters were excluded because they did not meet the inclusion criteria or did not give consent. Another 35 youngsters were excluded because they did not fully complete the first questionnaire. In total, 1597 youngsters were included in the analysis (Fig. 1). The number of youngsters from each youth work organization varied between 66 and 227 (M = 145, SD = 45.8). Participants were approached four times for self-reporting: T1, Sept–Dec 2017, N = 1597; T2, Jan–April 2018, N = 981; T3, May–Aug 2018, N = 626; T4, Sept–Dec 2018, N = 595. Of the total participants, 19.8% (N = 316) participated in all four waves of data collection, 26.4% (N = 421) participated in three waves, 24.9% (N = 398) in two waves and 28.9% (N = 462) dropped out after the first wave. The response rates are shown in Fig. 1. Non-completion was defined as completing none, one or two of the three post-measurements. Reasons for non-completion at follow-up were (temporary) positive outflow, refusal, (temporary) loss of contact, and organizational reasons. Table 1 shows the data collected on non-response for Waves 2–4.
Table 1 Reasons for non-completion

We examined the characteristics of non-completers with t-tests and Chi-square tests. Youngsters who missed questionnaires were more often boys (69.2%) compared to the completers (50.9%) (χ² = 37.6, p ≤ 0.001), and were older than the completers (16.6 and 16.0 years, respectively, t = 2.69, p = 0.007). Non-completers were more often youngsters who had participated in youth work for 0–6 months (85.6%) or 7 months–2 years (80.4%), compared to youngsters who had participated for 3 years or longer (76.2%) (χ² = 14.68, p = 0.001).

Table 2 provides the descriptive statistics for the participants who were included in the analysis. Most youngsters in our sample were aged between 14 and 17 years (40%) and the mean age was 16.5 years (SD = 3.60). There were more males (65.6%) than females (34.4%). It is known that girls are underrepresented in youth work activities (Boomkens et al., 2019b), with Gemmeke et al. (2011) noting that only 10–30% of the youngsters in youth work are girls. In relation to cultural background, 21% of the youngsters reported a native Dutch background and 31% reported a Dutch bi-cultural background (e.g., Dutch and Moroccan). Of those attending school (79.3%, N = 1242), 14.6% were in primary school, 67.3% in vocational education and 18.0% in higher education.
Table 2 Demographic variables of youth at Wave 1 (N = 1597)

A sample of 322 youngsters per cohort is sufficient to test differences between three cohorts, based on a power of 0.80 and a small effect size (Cohen, 1992). Figure 1 shows that we met this requirement for testing differences between the three cohorts regarding length of participation. Table 3 shows the number of respondents for each youth work method used.
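For transparency, the required sample size of about 322 youngsters per cohort can be reproduced with a standard a priori power calculation for a comparison of three groups (α = 0.05, power = 0.80, small effect size f = 0.10, following Cohen, 1992). The sketch below is purely illustrative and assumes the Python statsmodels library, which was not part of the original analysis.

```python
# Illustrative sketch: a priori sample size for comparing three cohorts
# (alpha = 0.05, power = 0.80, small effect size f = 0.10; cf. Cohen, 1992).
# statsmodels is an assumption of this sketch, not the authors' tool.
from statsmodels.stats.power import FTestAnovaPower

total_n = FTestAnovaPower().solve_power(
    effect_size=0.10,   # Cohen's f, conventionally "small"
    alpha=0.05,
    power=0.80,
    k_groups=3,         # the three participation-length cohorts
)
print(total_n / 3)      # approximately 322 youngsters per cohort
```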
Table 3 Youth work methods in cohort groups

Procedures
For the data collection, we collaborated with all 11 organizations. At least one manager (N = 12 in total) and at least ten youth workers (N = 150 in total) participated from each organization. Two researchers (including the first author) gave verbal instructions for data collection during a training session, which covered an introduction to the study; adhering to the research protocol; recruiting respondents; the inclusion criteria; the informed consent procedure; the procedure for digital data collection; and guaranteeing the reliability of the data. In addition, the youth workers received a field guide with instructions. After completing the training, the 150 youth workers were asked to select at least ten youngsters from their own practice to participate in the study. The researchers and trained youth workers planned how they would reach a diverse group of young people, taking into account differences in: (1) gender (boys and girls); (2) age (10–24); (3) the extent of personal or social problems (doing well; minor/initial problems; severe and multiple disadvantage); and (4) length of participation (0–6 months; 7 months–2 years; 3 years or longer).

Youth workers verbally informed youngsters (and their primary caregiver, if they were younger than 16) about the study and asked them whether they were interested in participating. If they were interested, the youth workers gave them a letter provided by the researchers. Participants were made aware of their rights (such as voluntary participation, the right to withdraw, confidentiality and anonymity). If youngsters were younger than 12, the youth worker also contacted the caregiver(s) to obtain verbal consent in addition to the letter. Before the first questionnaire, digital consent was also required to ensure informed consent. The youngsters completed the questionnaires online in private using tablets. They were able to consult another person if they had any questions, preferably a person other than the youth worker, to reduce socially desirable answers. The research team maintained close contact with the youth workers to ensure higher response rates, and the team monitored the process and missing data points. Data cleaning was done as soon as the data were collected. The study protocol was approved by the management of the 11 participating organizations and by the youth workers from the Youth Worker Lab. Data collection was carried out in accordance with the Netherlands Code of Conduct for Research Integrity (2018).
Instruments
We collaborated with the Youth Worker Lab and the Youth Panel to develop a questionnaire suitable for a broad age group (10–24) and for young people with a lower language level. Based on feedback from a pilot, we concluded that the first version of the questionnaire was too long and some of its concepts too complicated, which could adversely affect the existing relationships between youth workers and youngsters (De St. Croix, 2018) and lead to major drop-out from the repeated measurements. To combat respondent fatigue, we shortened and simplified the questionnaire by adapting some validated scales (Heggestad et al., 2019). Where no suitable instruments were available, we designed items and scales ourselves based on the existing literature.
Demographic Variables and Participation in Youth Work
Demographic information included age, gender, cultural background, activity during the day and educational level. One question, “How long have you had contact with youth workers?”, was used in Wave 1 as a proxy for the length of participation in youth work settings. In addition, we asked youngsters about the intensity of their participation at each time point.
Multi-Methodic Youth Work
The multi-methodic approach was measured at each time point with four items designed for this study based on the literature (Koops et al., 2013, 2014; Manders & Metz, 2017; Rumping et al., 2017). Participation in social group work, receipt of individual guidance and use of information and advice services were each assessed with a single dichotomous (yes/no) question. For example, the item “I participated in group-based youth work activities (e.g., cooking, soccer or space to hang out) during the past 3 months” was used to assess whether respondents had been involved in social group work. To assess detached youth work, respondents reported where they had engaged with the youth worker during the last 3 months and could indicate multiple answers (e.g., on the street, at home, in snack bars or cafés).
Outcome Measures
Prosocial skills were assessed at each time point with one of the five subscales of the Dutch version of the self-report Strengths and Difficulties Questionnaire (SDQ) (Widenfelt et al., 2003). The SDQ self-report was developed to assess the psychosocial adjustment of adolescents (aged 11–17). The prosocial behavior scale consists of five items concerning both strengths and difficulties; for example, “I often offer to help others (parents, teachers, children).” To keep the scale level the same for all outcome measures, we adjusted the response scale from the original three-point Likert scale to a five-point scale ranging from “strongly disagree” to “strongly agree.” Higher scores indicated more prosocial behavior. The internal consistency of this subscale was α = 0.77 at baseline in the sample.
We adapted the Dutch version (Kempen, 1992) of the Pearlin Mastery Scale (PMS) (Pearlin & Schooler, 1978) to measure youngsters’ self-mastery. The PMS is a widely used measure, including among adolescents, which assesses “the extent to which people see themselves as being in control of the forces that importantly affect their lives” (Pearlin et al., 1981, p. 340). Each item (e.g., “I have little control over things that happen to me”) is answered on a five-point scale, with options ranging from “strongly agree” to “strongly disagree.” Higher scores indicated higher mastery. We excluded item 2 (“Sometimes I feel that I’m being pushed around in life”) because the pilot showed that this item was misinterpreted by youngsters. In the current study, the alpha coefficient at baseline was 0.78, indicating acceptable reliability.
The youngsters’ social network was measured with a 6-item instrument designed for this study, inspired by research by Asselt-Goverts (2016) on social network analysis for people with an intellectual disability. The items were formulated in simple language and were therefore suitable for our respondents, who generally had a lower language level.
At each time point, the six items, scored on a five-point scale, assessed youth self-reported number of contacts with family and friends (ranging from “0–5” to “30 or more”), whether young people received support from family and friends (ranging from “never” to “always”) and whether they were satisfied with the support received (ranging from “very dissatisfied” to “very satisfied”). Higher scores indicated a more extensive social network. Cronbach’s alpha was computed as α = 0.71 at baseline in the sample.
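The internal consistency coefficients (Cronbach’s α) reported for these scales can be computed directly from the item scores. The sketch below shows the standard computation, assuming Python with pandas and using hypothetical column names; it is illustrative only and not the authors’ original procedure.

```python
# Illustrative sketch: Cronbach's alpha from item scores stored in a
# pandas DataFrame (one column per item, one row per respondent).
# All column names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = items.dropna()                      # listwise deletion for this sketch
    k = items.shape[1]                          # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)  # per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# e.g., the five prosocial behavior items at baseline (hypothetical names):
# cronbach_alpha(wave1[["pro1", "pro2", "pro3", "pro4", "pro5"]])
```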
Youth self-reported civic participation was assessed with two items providing insight into the quantity of activities in social contexts (Item 1: “How often have you volunteered?”; Item 2: “How often have you organized an activity in your neighborhood?”). At each measurement, we asked about the past 3 months. Results were analyzed at item level.
We assessed finding (specialized) care at each time point by asking the youngsters whether they had found care services through youth workers during the last 3 months. They could choose from a list of answer categories, such as a social care institution for debt or addiction, a doctor or the Social District Team (Koops et al., 2014).
Statistical Analyses
Descriptive statistics were used to illustrate the demographic and other characteristics of the sample and to describe the outcome variable of finding (specialized) care. Respondents were split into three age groups (10–14 years, 15–19 years, 20–24 years).
As a necessary condition for analyzing data from a longitudinal cohort design (Duncan et al., 1996), we began by using Chi-square and univariate ANOVA tests to determine whether the different cohorts were comparable with respect to the attributes being measured. Comparing demographics between the three groups of participants, we found no significant differences based on gender (χ² = 4.94, p = 0.085). However, significant differences were found between groups on the variables of age (F = 17.43, p < 0.001), cultural background (χ² = 15.32, p = 0.02) and intensity of participation in youth work (F = 6.99, p = 0.001). Younger adolescents and youngsters with a native Dutch background were less represented in Cohort 3 (participation 3 years or longer), while youngsters in Cohort 1 (0–6 months participation) participated less intensively in youth work settings than youngsters in the other two cohorts. We therefore controlled for these three variables in our analyses.
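As an illustration of these comparability checks, the sketch below shows the corresponding Chi-square test (for a categorical variable such as gender) and one-way ANOVA (for a continuous variable such as age) across the three cohorts in Python with pandas and scipy; the data frame and column names are hypothetical, as the original analyses were run in SPSS.

```python
# Illustrative sketch of the baseline comparability checks across the three
# cohorts. The DataFrame `wave1` and its column names are hypothetical.
import pandas as pd
from scipy import stats

# Chi-square test: gender distribution across the three cohorts
crosstab = pd.crosstab(wave1["cohort"], wave1["gender"])
chi2, p_gender, dof, expected = stats.chi2_contingency(crosstab)

# Univariate (one-way) ANOVA: age across the three cohorts
age_by_cohort = [grp["age"].dropna() for _, grp in wave1.groupby("cohort")]
f_age, p_age = stats.f_oneway(*age_by_cohort)

print(f"gender: chi2 = {chi2:.2f}, p = {p_gender:.3f}")
print(f"age: F = {f_age:.2f}, p = {p_age:.3f}")
```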
Because observations were made repeatedly over four time points, we used Linear Mixed Models (LMM) to answer the research questions. Multiple imputation of missing values was not necessary because LMM includes participants in the analysis even if they have not completed all measurements of the dependent variables (Twisk et al., 2013). LMM provides an understanding of both the mean levels of the outcome variables (averaged over time) and the changes in mean levels over the four time points. A two-level linear mixed model (repeated measures clustered within youngsters) was constructed with restricted maximum likelihood estimation. Variance at the organizational level was also checked, but it did not explain differences and was therefore not included in the models. The model intercept was specified as random across individuals, while the other parameters were specified as fixed. An unstructured covariance structure was used.
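The models were estimated in SPSS 24; a conceptually similar two-level random-intercept model can be sketched in Python with statsmodels, as shown below. This is an approximation only (statsmodels does not replicate the SPSS unstructured covariance specification exactly), and all data and variable names are hypothetical.

```python
# Illustrative sketch of the two-level LMM: repeated measures (waves) nested
# within youngsters, a random intercept per youngster, and fixed effects for
# cohort, time (categorical) and the baseline covariates controlled for
# (age, cultural background, intensity of participation).
# `long_data` (one row per youngster per wave) and column names are hypothetical.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "prosocial ~ C(cohort) * C(wave) + age + C(cultural_background) + intensity",
    data=long_data,
    groups=long_data["youth_id"],   # clustering: repeated measures within youngsters
    missing="drop",
)
result = model.fit(reml=True)       # restricted maximum likelihood estimation
print(result.summary())
```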
First, we explored the mean differences between cohorts over the four time points. These mean differences indicate how the factor “length of participation” influences the scores on the outcome variables. Between-group effect sizes were calculated as Cohen’s d. Second, we modeled time as a categorical variable to assess the improvement of the three cohorts on the outcome variables over a period of 16 months. By undertaking a separate analysis for each cohort group, we could observe in detail how the different groups developed over time. We did not calculate effect sizes for these results because some scholars discourage reporting effect sizes for within-group changes (pre-post within one group), for example because pre- and post-test results tend to be dependent (Cuijpers et al., 2017). All analyses were conducted using SPSS 24. Statistical significance was assessed at the 0.05 level.
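For reference, Cohen’s d in its standard form (which we assume was used here) is the difference between two cohort means divided by the pooled standard deviation:

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}} $$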