Enhancement and Standardization of a Universal Social-Emotional Health Measure for Students’ Psychological Strengths


Robust evidence links students’ positive mental health with academic achievement, providing a compelling rationale for developing and refining strength-based assessments. The Social Emotional Health Survey–Secondary (SEHS-S) assesses adolescents’ social and emotional skills and positive psychological dispositions. Previous studies provide reliability and validity evidence; nonetheless, there is a need for continued refinement and validation across diverse groups. The current study revised and standardized the updated SEHS-S-2020 to further validate its use in secondary schools (Grades 9–12) with a large, diverse adolescent sample. Study participants included 72,740 students from 113 California schools (structural validation sample), 10,757 students from 15 randomly selected California schools (criterion validation sample), and 707 students from four additional California schools (test-retest sample). Data analyses examined structural validity, measurement invariance, criterion validity, internal consistency, and response stability. Results supported the validity of the SEHS-S-2020 across diverse groups of youth in various contexts. The discussion focuses on implications for assessing students’ psychosocial assets and universal school-based screening.

Approximately 17% of U.S. school-age youth (8 million children) experience a mental health disorder at a given point in time—one-half of these children do not receive adequate treatment from a mental health professional (Whitney and Peterson 2016). The incidence of youth psychosocial problems is troubling as youth emotional and behavioral disorders are associated with short- and long-term effects, such as suicide attempts, substance use, impaired social functioning, and high dropout rates (Bradley et al. 2008; Erskine et al. 2016; Kern et al. 2013). Despite progress in the evaluation of mental health difficulties, most of these youth are unidentified and underserved (Catalano and Kellogg 2020; Higa-McMillan et al. 2016), and service and outcome disparities are distinctly associated with socioeconomic disadvantage, ethnic or racial minority status, and immigrant status (Alegría et al. 2015). These circumstances emphasize the need for improved identification and monitoring practices as part of comprehensive school mental wellness services.

Contemporary approaches to universal mental health screening assess individuals for symptoms of distress and well-being (Furlong et al. 2014a). They align with a dual-continua model that proposes mental health includes co-occurring positive (well-being) and negative (distress) indicators (Suldo and Shaffer 2008; Zhou et al. 2020). Robust research supports this mental health model that combines psychological well-being with psychological distress (Greenspoon and Saklofske 2001; Suldo and Shaffer 2008; Wilkinson and Walford 1998), also called complete mental health (Furlong et al. 2014b). With the re-emergence of positive psychology in the past two decades (Furlong et al. 2014b; Seligman et al. 2009), there is an increased focus on the importance of assessing positive psychological dispositions such as gratitude (Froh et al. 2010) and hope (Snyder et al. 2003) due to their relations with youths’ subjective well-being and academic achievement.

Strength-focused assessments, as part of a complete mental health model, are essential to gain a comprehensive understanding of students’ mental health and to create and validate practices that facilitate “psychologically healthy educational environments for [all] children” (Huebner et al. 2009, p. 565). Strength-focused assessments complement and extend traditional assessment approaches that focus on identifying students’ problems and deficits (Donovan and Nickerson 2007) and recognize the importance of internal assets for positive development. Strength-focused assessments facilitate the formation of a comprehensive understanding of students’ social and emotional functioning. In contrast, deficit-focused measures, by design, identify only the 15–20% of students with substantial psychosocial problems. Given the importance of universal screening processes in creating comprehensive mental health programs in schools, it is critical to validate measures that provide school staff with a holistic understanding of students’ mental health.

The Social Emotional Health Survey-Secondary (SEHS-S; Furlong et al. 2014a) is a strength-based questionnaire developed to create an efficient measure that educators can use to assess and monitor students’ positive psychosocial development. Specifically, the SEHS-S measures strengths by assessing core psychological mindsets that can enhance positive youth development when fostered through daily interactions with family, peers, and other supportive adults in youths’ immediate microsystems (Furlong et al. 2019). Administering a measure such as the SEHS-S as part of a complete mental health screening process is one logical step toward implementing a comprehensive mental health program in schools. The SEHS-S is currently being used broadly in both U.S. and international schools as part of universal mental health screening and monitoring practices. Hence, it is critical to rigorously evaluate and standardize the SEHS-S to establish its efficacy in identifying youth from diverse backgrounds in need of additional mental health support and in fostering the flourishing well-being of all youth.

Conceptual Framework

The SEHS-S wellness model includes core social and emotional skills, such as empathy and emotional competence, and psychological dispositions, such as gratitude and persistence. The hypothesis is that internal assets exert their primary effects by fostering an upward positive developmental spiral in the quality of youths’ interpersonal transactions. The SEHS-S model proposes that flourishing development occurs by nurturing various core dispositions (i.e., the sum is greater than its parts). The primary effects of these dispositions emerge via the day-to-day transactions a youth has with the adults, family, and peers in their immediate social ecosystems. Because educators are intimately immersed in youths’ social circles, they play an essential role in fostering these psychological dispositions in children. Positive developmental outcomes increase when youth possess the internal dispositions and skill sets to influence the quality of their daily interpersonal interactions. This conceptualization draws upon the positive youth development perspective (Lerner et al. 2019) and, as in self-determination theory (Griffin et al. 2017), emphasizes the importance of creating conditions that empower youth to make things happen in their lives rather than passively letting them happen.

What Does the SEHS-S Measure?

Drawing from a psychological strengths perspective, the SEHS-S measures the covitality latent trait. Covitality refers to the co-occurrence of positive, healthy traits (Weiss and Luciano 2015). It embodies the “…synergistic effects of positive mental health resulting from the interplay among multiple positive-psychological building blocks” (Furlong et al. 2014a, p. 3). The SEHS-S has 12 subscales representing unique positive social emotional health constructs associated with four general positive social emotional health domains. The first domain, belief-in-self, consists of three subscales grounded in constructs from the social emotional learning (SEL) and self-determination theory literature: self-efficacy, self-awareness, and persistence (e.g., Bandura et al. 1996; Durlak et al. 2011; Shechtman et al. 2013). The second domain, belief-in-others, has three subscales derived from constructs found in the childhood resilience literature: school support, peer support, and family support (e.g., Larson 2000; Masten et al. 2009). The third domain, emotional competence, consists of three subscales based on constructs drawn from the SEL scholarship: emotional regulation, empathy, and behavioral self-control (e.g., Greenberg et al. 2003; Zins et al. 2007). Engaged living, the final domain, comprises three subscales grounded in constructs derived from the positive youth psychology literature: gratitude, zest, and optimism (e.g., Furlong et al., 2014b; Kirschman et al. 2009). Renshaw et al. (2014) provide a detailed review of each of these scales and their associated constructs, a description of the conceptual rationale underlying the SEHS, and a discussion of the empirical merit of the 12 positive psychological dispositions.

Previous Development and Validation

Previous SEHS-S studies (N = 4,189; Furlong et al. 2014a) examined its higher-order factor structure, sociocultural and gender group invariance, reliability, and predictive validity (Ito et al. 2015; Lee et al. 2016; Telef and Furlong 2017; You et al. 2015). Internal consistency estimates (Cronbach’s alpha) for the SEHS-S covitality score were .93 for Japanese youth (N = 975; Ito et al. 2015) and .94 for South Korean youth (N = 686; Lee et al. 2016). These were comparable to a U.S. sample (N = 14,171; α = .95; e.g., You et al. 2015). Additionally, the SEHS-S overall covitality score had strong convergent validity with measures of global subjective well-being. For example, the covitality score had a significant positive relation with the Strengths and Difficulties Questionnaire (SDQ; Goodman 1997) prosocial behavior subscale (r = .40) and a negative relation with the SDQ total difficulties scale among Turkish youths (N = 2,242; r = −.25; Telef and Furlong 2017). In a U.S. sample (You et al. 2015), the SEHS-S negatively correlated with the Behavioral and Emotional Screening System (BESS; Kamphaus and Reynolds 2015; r = −.63). Additionally, among Korean youths, the SEHS-S correlated positively with subjective well-being (N = 686; r = .56; Lee et al. 2016) and negatively with depression, anxiety, and stress (r = −.22 to −.36; measured with the Depression Anxiety and Stress Scale-21; Lovibond and Lovibond 1995). It positively correlated with the Positive and Negative Affect Scale (PANAS; Chen and Zhang 2004) among Chinese youths (N = 3750; Xie et al. 2018). In a sample of Spanish adolescents, the covitality latent trait was positively associated with health-related quality of life (r = .79) and negatively related to emotional and behavioral problems (r = −.45; N = 1060; Piqueras et al. 2019).
These studies provided reliability and validity evidence for the preliminary SEHS-S; however, they did not provide information about the validity evidence or utility of the SEHS-S across large, comprehensive populations of diverse youth.

Purpose of the Current Study

The preliminary SEHS-S version (Furlong et al. 2014a) has broad research coverage; however, as with any measure, there is a need to continue to refine and standardize its content and to carefully build evidence supporting its validity. One pressing need was to standardize the wording of all items and the response format of the SEHS-S to make it more uniform for standard application across varied contexts. For example, previous versions of the SEHS-S had various response options, with some versions using a four-point response scale and others using a six-point response scale. The initial scale development process examined these options and found that they were psychometrically equivalent (Gordon Wolf et al. 2019). The four-point response format was considered to be cognitively easier for participants. Consequently, the current validation study adopted a standard four-point response format.

Additionally, with the change to a standard four-point response scale, minimal wording changes occurred to the three items measuring zest and the three items measuring gratitude. Previous versions of these items measured recent emotional experiences (i.e., “During the past week…”). For alignment with other SEHS-S items, the zest and gratitude item stems were modified slightly to ask about prevailing, not recent, feelings (i.e., “On most days…”). In addition, there was limited validity evidence from large, diverse adolescent samples supporting its use across different adolescent subgroups (e.g., genders, grades, Latinx (Latin American heritage) status, and ethnic group identification).

It is crucial to establish norming information for a large, comprehensive sample of diverse secondary school students. With this objective in mind, the Institute of Educational Sciences funded a four-year (2016–2020) grant to enhance and further validate the SEHS-S. The grant project aimed to enhance the validity and practical utility of the revised SEHS-S for diverse California youth through several aims: (a) refining the measure for use in schools, (b) verifying the construct validity, (c) investigating the criterion validity, and (d) examining the consistency and stability of responses. The present study reports the results of this IES grant project to standardize the measure and evaluate whether the SEHS-S, now designated the SEHS-S-2020, has evidence of validity for school-based mental wellness screening and monitoring across diverse groups of high school students.


Structural and Criterion Validity Data Collection

WestEd administered and managed the present study’s survey as part of the California State Department of Education’s (DOE) effort to support the recurrent collection of information that local education agencies use to monitor school quality indicators. The California Healthy Kids Survey (CHKS) is a comprehensive school-based surveillance survey used in California for more than 20 years. Parents received an introductory letter and a consent form. Consistent with CHKS procedures, parents provided passive consent, and students provided assent. A school-site administrator coordinated the CHKS online survey administration (see https://calschls.org/survey-administration/instructions/). The SEHS-S-2020 items were presented in one preselected random order. All students in the structural validity analyses sample (N = 72,740) and the criterion validity analyses sample (N = 11,217) completed the CHKS Core Module (students report on substance use, school safety, school climate) and a second module that included the SEHS-S-2020 and the Social Emotional Distress Scale (SEDS-S). A third module, including the MHC-SF, was administered only to the criterion validity sample. Students completed the survey between September 2017 and May 2019.

Stability Sample Data Collection

Following university human subjects committee approval, passive parental consent, and student assent, students in the stability sample completed an online survey using tablets in a classroom setting or a school computer lab. If students were absent during the initial administration session, they had up to five later attempts to complete the survey. At all schools, teachers received a script with which to proctor the administration of the measures. Students completed the survey in September–October 2017 and 2018. The SEHS-S-2020 items were presented in a different random order to each student at each administration.


Administrations of the California Healthy Kids Survey (CHKS) provided the present study’s primary data sets. WestEd administers the annual CHKS for the California DOE. As described in this study’s Procedure section, the SEHS-S-2020 structural validity analyses used one set of CHKS responses. The SEHS-S-2020 criterion validity analyses used a different set of CHKS survey responses. An independent dataset collected by the study’s authors evaluated the temporal stability of the SEHS-S-2020. The following section describes the students in these three data sets. Table 1 presents descriptive information for all three samples. The sample demographics match California’s secondary school enrollment (California Department of Education [CDE], 2020).

Table 1 Cross-sectional Sample Descriptive Information


Structural Validity Analyses

The confirmatory factor analysis (CFA) and related invariance analyses used the responses of 72,740 California high school students. These students were from 113 schools located in 19 of the state’s 58 counties, representing urban, suburban, and rural communities. These schools were involved in the current study as part of their regularly scheduled biennial CHKS survey administration. Most of these schools were traditional public high schools (87.4%) or public charter schools (11.4%). Seventy of these 113 schools surveyed only ninth and eleventh graders.

Criterion Validity Analyses

This study’s criterion validity analyses used the responses of 11,217 California high school students. These students attended one of 15 high schools located in nine counties. These schools were randomly selected from California high schools and asked to complete the CHKS survey and a special criterion validity module (see the Procedure section). If a contacted school declined to participate, it was replaced by the next randomly selected school in its region of the state. The total enrollment at these schools was 18,787 (survey participation was 59.7%, and 91.3% of students who attempted the survey provided usable responses). After removing participants who did not complete at least 60% of the target variables, the final sample consisted of N = 10,757. Four of the 15 schools administered the survey only to ninth and eleventh graders.
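The 60% completeness rule above can be expressed as a simple data-cleaning step. The sketch below is a minimal illustration, assuming responses are held in a pandas DataFrame of target variables with missing answers coded as NaN; the item names are hypothetical.

```python
import numpy as np
import pandas as pd

def filter_by_completeness(df: pd.DataFrame, min_prop: float = 0.60) -> pd.DataFrame:
    """Keep respondents who answered at least `min_prop` of the target variables."""
    answered = df.notna().mean(axis=1)  # proportion of non-missing items per respondent
    return df.loc[answered >= min_prop]

# Toy example: three respondents, five hypothetical target items
responses = pd.DataFrame({
    "item1": [3, np.nan, 2],
    "item2": [4, np.nan, np.nan],
    "item3": [2, 1, np.nan],
    "item4": [3, np.nan, np.nan],
    "item5": [4, np.nan, 2],
})
kept = filter_by_completeness(responses)  # only the fully complete respondent remains
```

Respondents answering 1/5 or 2/5 of the items fall below the 60% threshold and are excluded; only the first respondent (5/5 answered) is retained.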

Stability Analysis

Data from a longitudinal investigation involving four collaborating California high schools were used to evaluate the stability of the SEHS-S-2020. As part of the stability investigation, 707 students completed the SEHS-S-2020 in the fall of 2017 and approximately one year later in 2018. See Table 1 for descriptive information.


The Social Emotional Health Survey-Secondary-2020 (SEHS-S-2020), the focal measure in the present study, was used in all structural validity and test-retest analyses. The SEHS-S-2020 is a 36-item measure that assesses secondary students’ self-reports of social and emotional strengths (Furlong et al. 2014a). Previous research supports a three-level model with 12 subscales (three items per subscale) that load onto four domains: belief in self (self-awareness, persistence, self-efficacy), belief in others (school support, family support, peer support), emotional competence (emotional regulation, empathy, behavioral self-control), and engaged living (gratitude, zest, and optimism). The four domains load onto one general factor called covitality. Confirmatory factor analysis (CFA) and measurement invariance (MI) analyses provide preliminary validity and reliability evidence (You et al. 2014, 2015). The response options for all 36 items in this standardization study were as follows: 1 = not at all true, 2 = a little true, 3 = pretty much true, and 4 = very much true.
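The three-level structure described above implies a straightforward scoring scheme: subscale scores are item means, domain scores average their three subscales, and the covitality index averages the four domains. The following is an illustrative sketch only (the study scores the latent factors via CFA/SEM); the subscale-to-domain mapping follows the text, while the use of simple means is an assumption for illustration.

```python
import numpy as np

# Domain -> subscale mapping as described in the text
DOMAINS = {
    "belief_in_self": ["self_awareness", "persistence", "self_efficacy"],
    "belief_in_others": ["school_support", "family_support", "peer_support"],
    "emotional_competence": ["emotional_regulation", "empathy", "behavioral_self_control"],
    "engaged_living": ["gratitude", "zest", "optimism"],
}

def score_sehs(item_scores: dict) -> dict:
    """item_scores maps each subscale name to its three 1-4 item responses."""
    subscale = {name: float(np.mean(vals)) for name, vals in item_scores.items()}
    domain = {d: float(np.mean([subscale[s] for s in subs]))
              for d, subs in DOMAINS.items()}
    covitality = float(np.mean(list(domain.values())))
    return {"subscale": subscale, "domain": domain, "covitality": covitality}

# Usage: a respondent answering "pretty much true" (3) to every item
flat = {s: [3, 3, 3] for subs in DOMAINS.values() for s in subs}
scores = score_sehs(flat)
```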

Criterion Validity Analyses Measures

The criterion validity analyses used the measures described in the following sections. We hypothesized that SEHS-S-2020 responses would be:

  1. negatively related to student distress;

  2. positively related to students’ feelings of school connectedness;

  3. positively associated with student-reported grades; and

  4. negatively associated with substance use (i.e., smoking cigarettes, vaping, binge drinking, marijuana use) and suicide ideation.

An additional criterion validity analysis examined the covitality scores of mental health groups categorized by the Mental Health Continuum–Short Form (MHC-SF; described below). We hypothesized that the covitality mean score would differ significantly across the three MHC-SF groups (i.e., languishing < moderate < flourishing).

Student Distress

The Social Emotional Distress Scale–Secondary (SEDS-S, Dowdy et al., 2018) is a 10-item measure that assesses internal emotional distress. It uses a four-point response scale (1 = not at all true, 2 = a little true, 3 = pretty much true, 4 = very much true). A sample item is, I had a hard time breathing because I was anxious. CFA has supported a unidimensional model (Dowdy et al., 2018). The measure (M = 2.00, SD = 0.86) had strong reliability in this study with α = .94 and Ω = .95. Analyses used the mean item response, with higher scores indicating higher student-reported emotional distress.

School Connectedness

The School Connectedness Scale (SCS, Furlong et al. 2011) is a five-item scale measuring students’ engagement and sense of school belonging. The response scale is: 1 = strongly disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, 5 = strongly agree. A sample item is: I feel close to people at this school. Mean scores are created from students’ responses across all five items, with higher scores indicating higher school connectedness. There is evidence of good reliability (α = .82 to .87) and a unidimensional factor structure (Furlong et al. 2011). In this study, the SCS had M = 3.62, SD = 0.80. Analyses used the mean item response.

Substance Use, Suicide Ideation, and Self-Reported Grades

The California Healthy Kids Survey (CHKS, https://calschls.org) is a survey of school climate and safety, student wellness, and youth resiliency. The criterion validity analysis used six CHKS self-reported items. One item asks students (yes or no) to report if they had experienced suicidal thoughts during the past 12 months (Did you ever seriously consider attempting suicide?). This item is the same as that used in the Youth Risk Behavior Surveillance System (Kann et al. 2018). Four other items asked students to report on past-month substance use: cigarettes, vaping, binge drinking, and marijuana (1 = 0 days, 2 = 1 day, 3 = 2 days, 4 = 3–9 days, 5 = 10–19 days, 6 = 20–30 days). A final item asks students to report their grades: During the past 12 months, how would you describe the grades you mostly received in school? (1 = Mostly A’s, 2 = A’s and B’s, 3 = Mostly B’s, 4 = B’s and C’s, 5 = Mostly C’s, 6 = C’s and D’s, 7 = Mostly D’s, and 8 = Mostly F’s). Responses for self-reported grades were reverse coded. Means for these validity measures were as follows: grades (M = 6.22, SD = 0.80), cigarette use (M = 1.03, SD = 0.30), vaping (M = 1.25, SD = 0.88), binge drinking (M = 1.12, SD = 0.59), suicide ideation (M = 1.17, SD = 0.37), and marijuana use (M = 1.28, SD = 0.96).
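The reverse coding of the grades item mentioned above can be written out explicitly: on the 8-point scale (1 = Mostly A’s … 8 = Mostly F’s), reversing makes higher values mean better grades, consistent with the reported mean of 6.22. A minimal sketch:

```python
def reverse_code(response: int, scale_max: int = 8) -> int:
    """Map a 1..scale_max response onto scale_max..1 (higher = better grades)."""
    return scale_max + 1 - response
```

For example, a student reporting 1 (Mostly A’s) receives the highest reverse-coded value of 8, and a student reporting 8 (Mostly F’s) receives 1.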

Subjective Well-Being

The Mental Health Continuum Short Form (MHC-SF, Keyes, 2005) is a 14-item measure of emotional (EWB), psychological (PWB), and social (SWB) well-being, with previous studies supporting a three-factor structure (Keyes, 2006). The MHC-SF was used in criterion validity analyses to understand how the SEHS-S relates to other measures assessing positive indicators of mental health. The question stem is, During the past month, how often did you feel the following ways: (a) an example item for emotional well-being is …happy; (b) an example item for psychological well-being is …that you liked most parts of your personality; and (c) an example item for social well-being is …that people are basically good. Response options are as follows: 0 = never, 1 = once or twice, 2 = about once a week, 3 = 2 or 3 times a week, 4 = almost every day, and 5 = every day. Individuals are classified with flourishing mental health when they respond “every day” or “almost every day” to at least one of the three EWB items and at least six of the 11 PWB-SWB items. Individuals are classified as having languishing mental health when they respond “never” or “once or twice” to at least one of the three EWB items and at least six of the 11 PWB-SWB items. The remaining individuals are classified with moderate mental health. Means for this study are languishing mental health (M = 2.33, SD = 0.45), moderate mental health (M = 2.73, SD = 0.38), and flourishing mental health (M = 3.19, SD = 0.41). The three subscales exhibited high reliability (EWB α = .88, SWB α = .90, and PWB α = .92).
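The MHC-SF categorization rule described above is algorithmic and can be sketched directly. The function below assumes `ewb` holds the three emotional well-being responses and `pwb_swb` the 11 psychological/social well-being responses on the 0–5 scale; checking the flourishing criterion before the languishing one is an implementation choice, not specified by the source.

```python
def classify_mhc(ewb: list, pwb_swb: list) -> str:
    """Classify an MHC-SF respondent as flourishing, languishing, or moderate."""
    high = lambda xs: sum(x >= 4 for x in xs)  # "almost every day" or "every day"
    low = lambda xs: sum(x <= 1 for x in xs)   # "never" or "once or twice"
    if high(ewb) >= 1 and high(pwb_swb) >= 6:
        return "flourishing"
    if low(ewb) >= 1 and low(pwb_swb) >= 6:
        return "languishing"
    return "moderate"
```

For example, a respondent answering “every day” to one EWB item and “almost every day” to six PWB-SWB items is classified as flourishing.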

Analysis Plan

The SEHS-S-2020 scores should provide evidence of score validity for future inferences related to intended users (Kane 2013). Without this evidence, the SEHS-S-2020 scores may lack accuracy and lead to suboptimal decisions about individuals (Banks 1995). We examined extrapolation and scoring claims for the SEHS-S-2020 scores. A scoring claim supports the test scores’ precision in capturing a critical characteristic of the construct (Kane 2013), and an extrapolation claim (or inference) strengthens the interpretation and generalization of the domain scores.

Factor Structure of the SEHS-S-2020

The SEHS-S-2020 factor structure was examined to illuminate how to score it for research and practice. First, a CFA of the SEHS-S-2020 was conducted to evaluate support for its hypothesized higher-order model. The CFA was conducted on a random subsample of 10,000 students. Model fit was assessed using recommendations from the literature: comparative fit index (CFI > .95), root mean square error of approximation (RMSEA < .05), and standardized root mean square residual (SRMR < .05) indicated excellent model fit (Browne and Cudeck 1989; Hu and Bentler 1999).
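The fit-evaluation rule above reduces to a joint threshold check on the three indices. A minimal sketch, using the cutoffs cited in the text:

```python
def excellent_fit(cfi: float, rmsea: float, srmr: float) -> bool:
    """Apply the CFI > .95, RMSEA < .05, SRMR < .05 rule from the text."""
    return cfi > 0.95 and rmsea < 0.05 and srmr < 0.05
```

Applied to this study’s reported CFA values (CFI = .956, RMSEA = .043, SRMR = .045), the rule is satisfied.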


Next, calibration/validation (CV) was conducted on random samples for the full covitality model (Cudeck and Brown 1983). CV is similar to conducting an exploratory factor analysis followed by a CFA in that it verifies that the data robustly represent the structure (Cudeck and Brown 1983). When utilizing a large sample, it is reasonable to use random subsamples for cross-validation and testing replicability. A second sample of 10,000 cases, drawn without replacement, was used to replicate model fit for the full covitality model. Next, model parameters from the full covitality model in the first sample CFA were used to rerun the model for the second sample. Fit statistics (i.e., CFI, RMSEA, and SRMR) were then inspected to evaluate differences between the calibration and validation model. Negligible differences in fit indicate robust replicability.
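The calibration/validation step above depends on drawing two non-overlapping random subsamples of 10,000 without replacement. One way to sketch this (drawing 20,000 indices at once and splitting them, with a hypothetical seed for reproducibility; the study's actual sampling procedure may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(seed=2020)   # hypothetical seed, for reproducibility only
pool = np.arange(72_740)                 # indices of the structural validity sample
picks = rng.choice(pool, size=20_000, replace=False)
calibration, validation = picks[:10_000], picks[10_000:]  # disjoint by construction
```

Because the 20,000 indices are drawn without replacement, the calibration and validation subsamples cannot share any respondent, which is what makes the replication test meaningful.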

Measurement Invariance (MI)

To evaluate if the SEHS-S-2020 is invariant across a range of demographic subgroups, multigroup CFA examined MI for: (a) gender, (b) grade level, (c) Latinx status, and (d) ethnic group identification. This analysis used Mplus version 8.4 (Muthén and Muthén 1998-2017) with maximum likelihood (ML) and unit variance identification. Using random subsamples of n = 2500 from the structural validity sample, CFAs analyzed model fit for subgroups. These were then followed by successive multigroup CFAs to evaluate configural, metric, and scalar invariance (Vandenberg and Lance 2000). MI provides evidence that the factor structure, loadings, and intercepts are similar across subgroups. Invariance tests, conducted sequentially, first examined the model with all parameters freely estimated across groups (configural invariance). Determining configural invariance establishes that the model’s structure fits the data well for each compared group. Next, metric invariance was tested by constraining the loadings to be equal across groups. When compared to the configural model, metric invariance is established when ΔCFI < .01 and ΔRMSEA < .015, or ΔSRMR < .03 (Chen 2007). Scalar invariance analysis held the loadings and intercepts equal across groups. The establishment of scalar invariance indicates that observed scores map onto the latent construct in the same way regardless of group membership. Scalar invariance is confirmed when the comparison to the metric model yields a ΔCFI < .01 and ΔRMSEA < .015 (or ΔSRMR < .03, Chen 2007). Scalar invariance, when found, allows researchers to make inferences via extrapolation claims for each of the subgroups.
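The stepwise decision rule above compares each constrained model against the previous, less constrained one. A minimal sketch of that comparison, using the Chen (2007) cutoffs quoted in the text (how the "and/or" clause groups is an interpretive assumption):

```python
def invariance_holds(fit_prev: dict, fit_constrained: dict) -> bool:
    """Compare a constrained model to the previous model in the sequence.

    Each dict holds 'cfi', 'rmsea', and 'srmr' fit indices. Invariance is
    retained when the change in CFI is < .01 and either the change in
    RMSEA is < .015 or the change in SRMR is < .03 (Chen 2007).
    """
    d_cfi = fit_prev["cfi"] - fit_constrained["cfi"]       # CFI drops as constraints are added
    d_rmsea = fit_constrained["rmsea"] - fit_prev["rmsea"]  # RMSEA rises
    d_srmr = fit_constrained["srmr"] - fit_prev["srmr"]     # SRMR rises
    return d_cfi < 0.01 and (d_rmsea < 0.015 or d_srmr < 0.03)
```

For example, a drop in CFI from .956 to .953 with RMSEA rising from .043 to .044 would retain invariance, whereas a CFI drop of .016 would not.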

Internal Consistency

SEHS-S-2020 Cronbach alpha (α) and Omega (Ω) coefficients are reported for its 12 subdomains, four domains, and the overall covitality index. Values higher than .80 provide evidence that the items are measuring the same construct (Cronbach 1951; McDonald 1999).
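Cronbach’s alpha, one of the two coefficients named above, has a closed form that can be sketched from a respondents-by-items matrix (omega requires a fitted factor model and is not shown):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of numeric responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

When items are perfectly parallel (e.g., two identical columns), alpha equals 1.0; less consistent items pull the coefficient down toward the .80 benchmark and below.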

Stability and Criterion Validity Analysis

For the stability analysis, we estimated bivariate correlations of the SEHS-S-2020 to establish reliability across time. The one-year test-retest analysis used responses from students who were in Grades 9 and 10 in 2017 and Grades 10 and 11 in 2018. Test-retest reliability allows researchers to evaluate if the measure is stable across time. Although there is no agreed-upon cut-off value for a test-retest coefficient (Crocker and Algina 2006), we used cut-off ranges from Shepherd et al. (2014) as follows: .00–.40 (weak/poor agreement), .41–.60 (adequate agreement), .61–.80 (considerable agreement), and .81–1.0 (near perfect agreement).
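The stability analysis above amounts to a Pearson correlation between the same students’ scores at the two time points, interpreted against the Shepherd et al. (2014) bands. A minimal sketch:

```python
import numpy as np

def stability(time1, time2):
    """Test-retest correlation with the Shepherd et al. (2014) agreement band."""
    r = float(np.corrcoef(time1, time2)[0, 1])
    if r <= 0.40:
        band = "weak/poor"
    elif r <= 0.60:
        band = "adequate"
    elif r <= 0.80:
        band = "considerable"
    else:
        band = "near perfect"
    return r, band
```

The study’s observed coefficients (.48 to .68, reported in the Results) would fall in the adequate and considerable bands.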

For criterion validity, we used a random subsample of 5000 participants to examine the association of the SEHS-S-2020 with theoretically related variables through a structural equation model (SEM). The SEM model regressed SEDS-S, SCS, self-report grades, cigarette use, vaping, binge drinking, marijuana use, and suicide ideation on the covitality general factor. This SEM examined relations with covitality as the predictor of each outcome, thereby evaluating the validity hypotheses. We expected covitality to have a positive relationship with school connectedness and academic grades, and a negative relationship with social emotional distress, cigarette use, vaping, binge drinking, marijuana use, and suicidal ideation, as depicted in Fig. 1. Standardized regression coefficients, the direction (positive or negative), and the significance of their relationships indicate if this study’s validity predictions are supported.
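As a simplified illustration of the criterion regressions described above (using observed scores rather than the latent SEM the study estimated), the standardized slope of one outcome regressed on covitality reduces to their Pearson correlation:

```python
import numpy as np

def std_beta(x, y) -> float:
    """Standardized regression slope of y on a single predictor x.

    With both variables z-scored, the slope equals the Pearson correlation,
    so its sign directly tests the directional hypotheses (e.g., covitality
    positively predicting school connectedness, negatively predicting distress).
    """
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    return float(np.mean(x * y))
```

A positive value supports hypotheses such as covitality-grades, and a negative value supports hypotheses such as covitality-distress; the full SEM additionally models covitality as a latent factor with all outcomes simultaneously.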

Fig. 1

Full Covitality Model with Criterion Variables. Note. All loadings are standardized. SEDS = mean item response to the Social Emotional Distress Scale (Dowdy et al. 2018). School Connectedness = mean item response to the School Connectedness Scale (Furlong et al. 2011). Grades = self-reported overall course grades. Substance use indicators = any past 30-day cigarette smoking, vaping, or binge drinking (five or more drinks at one time). Suicidal ideation = any suicidal ideation thoughts in past 12 months

Subjective Well-Being

To support the validity of the SEHS-S-2020, an ANOVA was conducted across the three mental health groups categorized by the MHC-SF (Keyes 2005): languishing, moderate, and flourishing. Significant differences across the MHC-SF groups would substantiate the criterion validity of the SEHS-S-2020.
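The planned comparison above is a one-way ANOVA with an effect size; a minimal sketch using scipy, with eta-squared computed as the between-group share of the total sum of squares:

```python
import numpy as np
from scipy import stats

def anova_eta_sq(*groups):
    """One-way ANOVA across groups, returning (F, p, eta-squared)."""
    f, p = stats.f_oneway(*groups)
    grand = np.concatenate([np.asarray(g, float) for g in groups])
    ss_between = sum(len(g) * (np.mean(g) - grand.mean()) ** 2 for g in groups)
    ss_total = ((grand - grand.mean()) ** 2).sum()
    return float(f), float(p), float(ss_between / ss_total)
```

Three well-separated groups (as hypothesized for languishing < moderate < flourishing covitality means) yield a large F and an eta-squared near 1; the study reports η2 = .44 in the Results.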


Factor Analysis

The CFA for the hypothesized higher-order factor structure of the SEHS-S-2020 exhibited excellent model fit, χ2(578) = 11,156.85, p < .001, CFI = .956, RMSEA = .043 [CI = .042, .043], and SRMR = .045. The calibration and validation results indicated that the model fit was almost identical, further providing evidence that the full covitality model was successfully replicated with a different sample (see Table 2).

Table 2 Model Fit Replication of the Full SEHS-S Hypothesized Model

Internal Consistency

Internal consistency (see Table 3) of the SEHS-S-2020 covitality score was excellent (α = .95, Ω = .95), providing evidence of robust reliability. The four SEHS-S-2020 domains showed excellent reliability (α range = .87–.94, Ω range = .87–.94), and subscale coefficients indicated moderate to strong reliability (α range = .76–.95, Ω range = .74–.94), except for self-control (α = .67, Ω = .64).

Table 3 Alpha and Omega Reliability Coefficients

Measurement Invariance

Initial CFAs for each group and subgroup indicated an excellent fit. Tests for MI indicated that all three levels of the model were invariant across: (a) gender (i.e., male v. female, see Table 4); (b) grade level (i.e., Grades 9, 10, 11 and 12, see Table 5); (c) Latinx status (i.e., Latinx or non-Latinx, see Table 6); and (d) ethnic group identification (i.e., American Indian, Asian, African American, Pacific Islander, and European American, see Table 7). The ΔCFI was less than .01, ΔRMSEA < .015, and ΔSRMR < .03 for all comparisons for all groups. Results of invariance testing indicated that the SEHS-S-2020 items measure the covitality construct in similar ways across relevant demographic subpopulations, thus supporting future extrapolation and scoring claims.

Table 4 Invariance across Gender
Table 5 Invariance Analysis for Grade Level
Table 6 Invariance Across Latinx Status
Table 7 Invariance Across Ethnic Group Identification

Criterion Validity

The SEM analysis indicated significant associations with all external variables in the expected directions (see Fig. 1). Covitality predicted student distress, school connectedness, self-reported grades, cigarette use, vaping, binge drinking, marijuana use, and suicidal ideation. The model indicated adequate fit, χ2(858) = 10,072.52, p < .001, RMSEA = .046, CFI = .906, SRMR = .050. These results show that SEHS-S-2020 scores relate in expected directions to other variables relevant to flourishing adolescent development, supporting the measure's convergent validity.

Subjective Well-Being

An ANOVA comparing mean SEHS-S-2020 scores across the three MHC-SF groups was significant overall and for all pairwise comparisons, with large effect sizes: flourishing (M = 3.19, SD = 0.44) > moderate mental health (M = 2.72, SD = 0.38) > languishing (M = 2.33, SD = 0.44), F(2, 4997) = 1674.07, η2 = .44. These results indicate that the SEHS-S-2020 was strongly associated with the MHC-SF, a widely used subjective well-being measure.
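
The F statistic and η² effect size reported here follow from the standard sums-of-squares decomposition (η² = SS_between / SS_total). A self-contained numpy sketch using made-up toy groups, not the study's data:

```python
import numpy as np

def one_way_anova(*groups):
    """One-way ANOVA F statistic and eta-squared effect size.
    eta^2 = SS_between / SS_total; F = MS_between / MS_within."""
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_within = ss_total - ss_between
    df_between = len(groups) - 1
    df_within = scores.size - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, ss_between / ss_total

# Toy data with clearly separated group means (illustrative only)
languishing = np.array([1.0, 2.0, 3.0])
moderate = np.array([2.0, 3.0, 4.0])
flourishing = np.array([7.0, 8.0, 9.0])
f_stat, eta_sq = one_way_anova(languishing, moderate, flourishing)
print(round(f_stat, 2), round(eta_sq, 2))  # 31.0 0.91
```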

Stability Analysis

The one-year test-retest stability coefficients for the SEHS-S-2020 (see Table 8) ranged from .48 to .68, consistent with stable, trait-like psychological dispositions.
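
Stability coefficients of this kind are Pearson correlations between matched scores from the two administrations. A minimal numpy sketch with hypothetical paired scores (the variable names and values are illustrative, not from the study):

```python
import numpy as np

def stability_coefficient(time1, time2) -> float:
    """Pearson r between matched scores from two administrations."""
    return float(np.corrcoef(time1, time2)[0, 1])

# Hypothetical paired covitality scores, one year apart
year1 = np.array([2.1, 2.8, 3.3, 3.9, 2.5])
year2 = np.array([2.3, 2.7, 3.1, 3.8, 2.6])
r = stability_coefficient(year1, year2)
print(r > 0.9)  # True
```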

Table 8 One-Year Test-Retest Reliability Coefficients


With the rising incidence of youth mental health disorders (Whitney and Peterson 2016), schools must adopt a streamlined approach to identifying and treating youth who are at risk of or currently experiencing mental health difficulties. Research shows that a valid identification and treatment process should include universal screening for mental health problems (Dowdy et al. 2010), which can assess psychological strengths in addition to psychological distress. As the field of psychology renews its focus on positive psychology and psychological strengths, strength-focused assessments are essential to deepen educators’ knowledge of students’ holistic well-being (Huebner et al. 2009). The SEHS-S-2020 was created to address this gap in strength-based measures and assesses students’ core social and emotional skills, competencies, and dispositions.

Previous studies examined the preliminary version of the SEHS-S for validity and reliability across sociocultural and gender groups (Ito et al. 2015; Lee et al. 2016; Telef and Furlong 2017; You et al. 2015). The measure was subsequently modified to create a more standardized instrument. Thus, it is critical to further examine the utility of the SEHS-S-2020 with a large, diverse group of youth, including developing norming information and providing evidence of the scale’s utility across varied contexts and groups of adolescents. With this study’s results, educators can have increased confidence in using the SEHS-S-2020 to build a holistic understanding of students’ complete mental health through the universal screening process.

The results of the CFA supported the full covitality model. Given the size and diversity of the current sample, the covitality model appears robust and would likely hold across various contexts and sociocultural groups. These findings indicate that the SEHS-S-2020 constructs are measured consistently across different groups of California adolescents. Furthermore, the internal consistency analyses suggest that the full covitality model, including its four domains and all but one of the subscales, maintains moderate to strong reliability. These findings suggest that the scale provides a reliable measure of students’ psychological strengths, a promising finding for the utility of the SEHS-S-2020 in schools with diverse groups of students.

Measurement invariance analyses examined whether the SEHS-S-2020 measured the same constructs across gender, grade, Latinx status, and ethnic group identification. Invariance held across all subgroups, suggesting that the factor structure functions equivalently across subpopulations. Given the array of subgroups in the present sample, this provides validity evidence for the SEHS-S-2020 with diverse groups of students in varied contexts. These results support using the SEHS-S-2020 to compare mean psychological strength differences across subgroups, which should help researchers and educators further understand how gender, grade, race/ethnicity, and other diversity considerations relate to mental health differences. The results suggest that SEHS-S-2020 scores can be interpreted similarly for these groups of students, regardless of subgroup membership, which should help educators feel confident in identifying students and providing interventions for extra support.

Results of the test-retest analyses indicated minimal differences in responses across a one-year interval. These results support a trait-like (as opposed to state-like) view of these social-emotional characteristics, consistent with the SEHS-S-2020 conceptual model (Furlong et al. 2014a). Trait-like dispositions indicate that students have multiple core mindsets that teachers and parents can foster and nurture. These dispositions, or traits, influence youths’ day-to-day interactions with others and, ultimately, their well-being.

To evaluate the SEHS-S-2020’s convergent validity, an SEM model examined the relationships of covitality with the latent variables of self-reported grades, cigarette use, vaping, binge drinking, marijuana use, and suicidal ideation. Model results indicated good fit, supporting the SEHS-S-2020’s positive relationship with grades (i.e., academic achievement) and negative relationships with drug use and suicidal ideation. Given covitality’s definition as the co-occurrence of positive psychological dispositions, it should correlate positively with academic achievement, given the robust research linking well-being to academic success (Anderman 2002). Similarly, it is no surprise that covitality is negatively associated with drug use and suicidal ideation, which are generally linked to psychological distress (Armstrong and Costello 2002; McKelvey et al. 1998). Furthermore, the ANOVA comparisons of mean SEHS-S-2020 and MHC-SF scores provide additional support for criterion validity, with the covitality construct strongly associated with subjective well-being. As such, the SEHS-S-2020 is a useful measure of students’ positive psychological strengths, which are positively associated with life satisfaction, well-being, and academic success and negatively associated with drug use and suicidal ideation. Given these crucial linkages, educators can be confident that the SEHS-S-2020 provides meaningful information to support all students’ mental wellness.

Implications for Research and Practice

The present study makes a meaningful contribution to the literature. It provides a large, diverse sample of students who were co-administered measures assessing life satisfaction, subjective well-being, social-emotional distress, and psychological strengths. The study provides standardized information about how these critical measures of students’ social-emotional functioning covary, allowing researchers and educators to interpret the results of each measure jointly. This study’s results inform a better understanding of the overall functioning and well-being of California students, with implications for other regions.

The primary purpose of this study was to enhance and standardize the SEHS-S-2020. The SEHS-S-2020 is used to support universal, schoolwide monitoring of students grounded in the dual-continua mental health model (Suldo and Shaffer 2008; Zhou et al. 2020). This model considers the balance of students’ experiences of emotional and behavioral distress in conjunction with their social-emotional strengths and resources. The SEHS-S-2020 is also used to inform interventions, particularly in Tier 1 (for all students) support services. By identifying all students’ social-emotional strengths in a given school, educators can then ascertain which universal strategies and practices could enhance social-emotional skills on a schoolwide level. These programs are particularly crucial for boosting all students’ social-emotional health and can have a substantial impact on school climate.

Another important use of the SEHS-S-2020 is as part of a schoolwide screening process to identify students who might require additional Tier 2 services (vulnerable youth). In this context, drawing upon the dual-continua mental health model (Suldo and Shaffer 2008; Zhou et al. 2020), educators identify students with higher levels of emotional distress and lower levels of life satisfaction to determine who requires additional services from school care teams. The SEHS-S-2020 then provides a profile of these students’ psychological strengths that Tier 2 care teams can use to identify areas for intervention or growth. Thus, the SEHS-S-2020 is a useful tool within schools’ multitiered systems of support, particularly when used in conjunction with other student information such as academic records, teacher-report forms, and parent-report forms. That is, the SEHS-S-2020 is most effective when educators follow best assessment practices by gaining a holistic perspective of each student.

The SEHS-S-2020 can be administered anonymously, for instance, to monitor the well-being of students in a Tier 1 context primarily to enhance school climate. However, the instrument is used optimally when students self-identify their responses so that care teams can follow up and provide targeted supports as needed (i.e., Tier 2 and Tier 3 contexts). Research has shown the survey to be useful in both contexts (Wagle et al. 2020), indicating its utility for most schools, regardless of available resources and the level of multitiered systems of support in place.

The SEHS-S-2020 model is grounded in well-recognized adolescent social-emotional development and positive psychology frameworks. As such, a wealth of strategies and resources is available for educators linked to the SEHS-S-2020 latent traits and constructs. For example, Froh and colleagues developed several intervention strategies to boost students’ gratitude (e.g., Froh et al. 2008). Suldo and colleagues studied the efficacy of interventions to increase students’ life satisfaction levels in schools (Suldo et al. 2014). Other resources are available for educators to examine and implement in the school setting (see https://www.covitalityucsb.info). Given findings that the combination of these traits, rather than any trait in isolation, is associated with positive outcomes (Lenzi et al. 2015), schools need not target a particular subset of strategies. Instead, it is optimal for educators to use strategies that build strengths across all areas.

Limitations and Future Directions

This study is one of few to analyze and validate life satisfaction, subjective well-being, social-emotional distress, and psychological strength measures in a large, diverse student sample. Nonetheless, limitations are acknowledged. Although the data were representative of secondary schools across California, they came from students in a single U.S. state. This study’s sample was socioculturally diverse, and a preliminary version of the SEHS-S was validated cross-nationally (e.g., Ito et al. 2015; Lee et al. 2016). Nevertheless, work remains to replicate this study’s findings with other samples, including students from various geographic regions and sociocultural groups. Additionally, the present study’s stability subsample differed from the sample used for the structural and criterion validity analyses, warranting investigation with other samples. Given the study’s substantial size and the sociocultural diversity of the larger samples, the results are likely to generalize to a wide range of students across the U.S.; however, additional research should confirm the current findings with samples outside of California.

The present study employed mono-method procedures (i.e., self-report). Studies suggest that adolescents are ideal informants for identifying internalizing symptoms, which were primarily assessed in the current study, because these symptoms can be more difficult for outside sources to identify (Smith 2007). Additionally, self-report gives students a context in which to use their voice and reflect on their life experiences. However, relying solely on one informant may introduce mono-method bias (Podsakoff et al. 2003) and social desirability bias (Huang et al. 1998), potentially yielding skewed or inaccurate responses. Future studies should investigate multi-informant assessments (parent and teacher forms) based on the covitality framework (e.g., Branscum 2020).

Another study limitation is that we did not consider all types of validation. It is essential to evaluate how students’ responses to the SEHS-S-2020 are associated with their day-to-day lived experiences in a school context. For example, additional research could examine other measures in school contexts to investigate whether students’ immediate emotions are associated with the broader trait-like characteristics measured by the SEHS-S-2020 (e.g., Pekrun 2017). Linking SEHS-S-2020 responses with students’ real-time achievement emotions through experience sampling would also be instructive (Goetz et al. 2016).


A disconcerting number of youths experience mental health disorders (Whitney and Peterson 2016), and a high proportion of them go untreated (Catalano and Kellogg 2020; Higa-McMillan et al. 2016; Whitney and Peterson 2016). Positive psychology has gained a second wave of traction (Huebner et al. 2009), and understanding of the impact of positive psychological traits on mental health has grown (Suldo and Shaffer 2008; Zhou et al. 2020). As such, it is urgent to improve school-based practices that identify and foster students’ social-emotional development. The SEHS-S-2020 offers a resource for schools to systematically monitor all students’ complete mental wellness.

Data Availability

Data used in this study are available from the corresponding author for legitimate research purposes.


  1. Alegría, M., Greif Green, J., McLaughlin, K. A., & Loder, S. (2015). Disparities in child and adolescent mental health and mental health services in the U.S. William T Grant Foundation inequality paper. http://wtgrantfoundation.org/library/uploads/2015/09/Disparities-in-Child-and-Adolescent-Mental-Health.pdf. Accessed 13 Jan 2021.

  2. Anderman, E. M. (2002). School effects on psychological outcomes during adolescence. Journal of Educational Psychology, 94, 795–809. https://doi.org/10.1037//0022-0663.94.4.795.

  3. Armstrong, T. D., & Costello, E. J. (2002). Community studies on adolescent substance use, abuse, or dependence and psychiatric comorbidity. Journal of Consulting and Clinical Psychology, 70(6), 1224–1239. https://doi.org/10.1037/0022-006X.70.6.1224.

  4. Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Multifaceted impact of self-efficacy beliefs on academic functioning. Child Development, 67, 1206–1222. https://doi.org/10.2307/1131888.

  5. Banks, J. A. (1995). Handbook of research on multicultural education. Macmillan.

  6. Bradley, R., Doolittle, J., & Bartolotta, R. (2008). Building on the data and adding to the discussion: The experiences and outcomes of students with emotional disturbance. Journal of Behavioral Education, 17, 4–23. https://doi.org/10.1007/s10864-007-9058-6.

  7. Branscum, A. M. (2020). Strengths-based assessment of adolescents: A multi-informant approach. [unpublished doctoral dissertation]. University of Central Arkansas.

  8. Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research, 24(4), 445–455. https://doi.org/10.1207/s15327906mbr2404_4.

  9. California Department of Education. (2020). Enrollment by ethnicity for 2019–20. http://dq.cde.ca.gov/dataquest/

  10. Catalano, R. F., & Kellogg, E. (2020). Fostering healthy mental, emotional, and behavioral development in children and youth: A national agenda. Journal of Adolescent Health, 66(3), 265–267. https://doi.org/10.1016/j.jadohealth.2019.12.003.

  11. Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14, 464–504. https://doi.org/10.1080/10705510701301834.

  12. Chen, J., & Zhang, J. (2004). Factorial and construct validity of the Chinese positive and negative affect scale for students. Chinese Mental Health Journal, 18(763–765), 759.

  13. Crocker, L., & Algina, J. (2006). Introduction to classical and modern test theory. Wadsworth.

  14. Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555.

  15. Cudeck, R., & Brown, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18, 147–167. https://doi.org/10.1207/s15327906mbr1802_2.

  16. Donovan, S. A., & Nickerson, A. B. (2007). Strength-based versus traditional social-emotional reports: Impact on multidisciplinary team members’ perceptions. Behavioral Disorders, 32, 228–237. https://doi.org/10.1177/019874290703200401.

  17. Dowdy, E., Ritchey, K., & Kamphaus, R. W. (2010). School-based screening: A population-based approach to inform and monitor children’s mental health needs. School Mental Health, 2, 166–176. https://doi.org/10.1007/s12310-010-9036-3.

  18. Dowdy, E., Furlong, M. J., Nylund-Gibson, K., Moore, S., & Moffa, K. (2018). Initial validation of the social emotional distress scale to support complete mental health screening. Assessment for Effective Intervention, 43, 241–248. https://doi.org/10.1177/1534508417749871.

  19. Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82, 405–432. https://doi.org/10.1111/j.1467-8624.2010.01564.x.

  20. Erskine, H. E., Norman, R. E., Ferrari, A. J., Chan, G. C., Copeland, W. E., Whiteford, H. A., & Scott, J. G. (2016). Long-term outcomes of attention-deficit/hyperactivity disorder and conduct disorder: A systematic review and meta-analysis. Journal of the American Academy of Child & Adolescent Psychiatry, 55(10), 841–850. https://doi.org/10.1016/j.jaac.2016.06.016.

  21. Froh, J. J., Sefick, W. J., & Emmons, R. A. (2008). Counting blessings in early adolescents: An experimental study of gratitude and subjective well-being. Journal of School Psychology, 46, 213–233. https://doi.org/10.1016/j.jsp.2007.03.005.

  22. Froh, J. J., Bono, G., & Emmons, R. (2010). Being grateful is beyond good manners: Gratitude and motivation to contribute to society among early adolescents. Motivation and Emotion, 34(2), 144–157. https://doi.org/10.1007/s11031-010-9163-z.

  23. Furlong, M. J., Dowdy, E., Moffa, K., Bertone, A., Yang, C., Kim, E., & Ito, A. (2019). Assessment of complete social emotional wellness: An international school psychology perspective. In C. Hatzichristou & B. Nastasi (Eds.), Handbook of school psychology in a global context. Springer.

  24. Furlong, M. J., O’Brennan, L. M., & You, S. (2011). Psychometric properties of the add health school connectedness scale for 18 sociocultural groups. Psychology in the Schools, 48, 986–997. https://doi.org/10.1002/pits.20609.

  25. Furlong, M. J., You, S., Renshaw, T. L., Smith, D. C., & O’Malley, M. D. (2014a). Preliminary development and validation of the social and emotional health survey for secondary students. Social Indicators Research, 117, 1011–1032. https://doi.org/10.1007/s11205-013-0373-0.

  26. Furlong, M. J., Gilman, R., & Huebner, E. S. (Eds.). (2014b). Handbook of positive psychology in schools (2nd ed.). Taylor & Francis: Routledge. https://doi.org/10.4324/9780203106525.

  27. Goetz T., Bieg, M., & Hall N. C. (2016) Assessing academic emotions via the experience sampling method. In M. Zembylas & P. Schutz (Eds.), Methodological advances in research on emotion and education (pp. 245–258). Springer. https://doi.org/10.1007/978-3-319-29049-2.

  28. Gordon Wolf, M., Nylund-Gibson, K., Dowdy, E., & Furlong, M. J. (2019). An analytic approach for deciding between 4-and 6-point Likert-type response options. ERIC, Institute of Education Sciences. https://eric.ed.gov/?id=ED591440. Accessed 13 Jan 2021.

  29. Goodman, R. (1997). The strengths and difficulties questionnaire: A research note. Journal of Child Psychology and Psychiatry, 38, 581–586. https://doi.org/10.1111/j.1469-7610.1997.tb01545.x.

  30. Greenberg, M. T., Weissberg, R. P., O’Brien, M. U., Zins, J. E., Fredericks, L., Resnik, H., & Elias, M. J. (2003). Enhancing school-based prevention and youth development through coordinated social, emotional, and academic learning. American Psychologist, 58, 466–474. https://doi.org/10.1037/0003-066X.58.6-7.466.

  31. Greenspoon, P. J., & Saklofske, D. H. (2001). Toward an integration of subjective well-being and psychopathology. Social Indicators Research, 54, 81–108. https://doi.org/10.1023/A:1007219227883.

  32. Griffin L. K., Adams N., & Little, T. D. (2017). Self-determination theory, identity development, and adolescence. In M. Wehmeyer, K. Shogren, T. Little, & S. Lopez (Eds.), Development of self-determination through the life-course (pp. 186–196). Springer. https://doi.org/10.1007/978-94-024-1042-6_14.

  33. Higa-McMillan, C. K., Francis, S. E., Rith-Najarian, L., & Chorpita, B. F. (2016). Evidence base update: 50 years of research on treatment for child and adolescent anxiety. Journal of Clinical Child & Adolescent Psychology, 45(2), 91–113. https://doi.org/10.1080/15374416.2015.1046177.

  34. Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55. https://doi.org/10.1080/107055199095400118.

  35. Huang, C., Liao, H., & Chang, S. (1998). Social desirability and the clinical self-report inventory: Methodological reconsideration. Journal of Clinical Psychology, 54, 517–528. https://doi.org/10.1002/(SICI)1097-4679(199806)54:4<517::AID-JCLP13>3.0.CO;2-I.

  36. Huebner, E. S., Gilman, R., Reschly, A. L., & Hall, R. (2009). Positive schools. In S. J. Lopez & C. R. Snyder (Eds.), Oxford library of psychology. Oxford handbook of positive psychology (pp. 561–568). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780195187243.001.0001.

  37. Ito, A., Smith, D. C., You, S., Shimoda, Y., & Furlong, M. J. (2015). Validation and utility of the social emotional health survey–secondary for Japanese students. Contemporary School Psychology, 19, 243–252. https://doi.org/10.1007/s40688-015-0068-4.

  38. Kane, M. T. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50, 1–73. https://doi.org/10.1111/jedm.12000.

  39. Kann, L., McManus, T., Harris, W. A., Shanklin, S. L., Flint, K. H., Queen, B., et al. (2018). Youth risk behavior surveillance survey—United States 2017. MMWR Surveillance Summary, 67(8), 1–479. https://www.cdc.gov/healthyyouth/data/yrbs/pdf/2017/ss6708.pdf. Accessed 13 Jan 2012.

  40. Kamphaus, R. W., & Reynolds, C. R. (2015). The behavioral and emotional screening system (BESS). Pearson.

  41. Kern, L., George, M., & Weist, M. (2013). Supporting students with emotional and behavioral problems: Prevention and intervention strategies. Brookes.

  42. Keyes, C. L. M. (2005). The subjective well-being of America’s youth: Toward a comprehensive assessment. Adolescent and Family Health, 4, 3–11. https://doi.org/10.1037/0002-9432.76.3.395.

  43. Keyes, C. L. M. (2006). Mental health in adolescence: Is America’s youth flourishing? American Journal of Orthopsychiatry, 76, 395–402. https://doi.org/10.1037/0002-9432.76.3.395.

  44. Kirschman, K. J. B., Johnson, R. J., Bender, J. A., & Roberts, M. C. (2009). Positive psychology for children and adolescents: Development, prevention, and promotion. In S. J. Lopez & C. R. Snyder (Eds.), Oxford library of psychology. Oxford handbook of positive psychology (pp. 133–148). Oxford University Press.

  45. Larson, R. W. (2000). Toward a psychology of positive youth development. American Psychologist, 55(1), 170–183. https://doi.org/10.1037//0003-066X,55.1.170.

  46. Lee, S. Y., You, S., & Furlong, M. J. (2016). Validation of the social emotional health survey-secondary for Korean students. Child Indicators Research, 9(1), 73–92. https://doi.org/10.1007/s12187-014-9294-y.

  47. Lenzi, M., Dougherty, D., Furlong, M. J., Dowdy, E., & Sharkey, J. D. (2015). The configuration protective model: Factors associated with adolescent behavioral and emotional problems. Journal of Applied Developmental Psychology, 38, 49–59. https://doi.org/10.1016/j.appdev.2015.03.003.

  48. Lerner, R. M., Tirrell, J. M., Dowling, E. M., Geldhof, G. J., Gestsdóttir, S., Lerner, J. V., King, P. M., Williams, K., Irahet, G., & Sim, A. T. (2019). The end of the beginning: Evidence and absences studying positive youth development in a global context. Adolescent Research Review, 4(1), 1–14. https://doi.org/10.1007/s40894-018-0093-4. Accessed 13 Jan 2021.

  49. Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the depression anxiety stress scales. Psychology Foundation. http://www2.psy.unsw.edu.au/dass//.

  50. Masten, A. S., Cutuli, J. J., Herbers, J. E., & Reed, M.-G. J. (2009). Resilience in development. In S. J. Lopez & C. R. Snyder (Eds.), Oxford handbook of positive psychology (2nd ed., pp. 117–131). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780195187243.001.0001.

  51. McDonald, R. P. (1999). Test theory: A unified treatment. Erlbaum. https://doi.org/10.4324/9781410601087.

  52. McKelvey, R. S., Davies, L. C., Pfaff, J. J., Acres, J., & Edwards, S. (1998). Psychological distress and suicidal ideation among 15-24-year-olds presenting to general practice: A pilot study. Australian New Zealand Journal of Psychiatry, 32, 344–348. https://doi.org/10.3109/00048679809065526.

  53. Muthén, L. K., & Muthén, B. O. (1998-2017). Mplus user’s guide (8th ed.). Muthén & Muthén.

  54. Pekrun, R. (2017). Emotion and achievement during adolescence. Child Development Perspectives, 11(3), 215–221. https://doi.org/10.1111/cdep.12237.

  55. Piqueras, J. A., Rodriguez-Jimenez, T., Marzo, J. C., Rivera-Riquelme, M., Martinez-Gonzalez, A. E., Falco, R., & Furlong, M. J. (2019). Social Emotional Health Survey-Secondary (SEHS-S): A universal screening measure of social-emotional strengths for Spanish-speaking adolescents. International Journal of Environmental Research and Public Health, 16, 4982. https://doi.org/10.3390/ijerph16244982.

  56. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88, 879–903. https://doi.org/10.1037/0021-9010.88.5.879.

  57. Renshaw, T. L., Furlong, M. J., Dowdy, E., Rebelez, J., Smith, D. C., O’Malley, Lee, S., & Strom, I. F. (2014). Covitality: A synergistic conception of adolescents’ mental health. In M. J. Furlong, R. Gilman, & E. S. Huebner (Eds.), Handbook of positive psychology in schools (2nd ed., pp. 12–32). Routledge/Taylor & Francis. https://doi.org/10.4324/9780203106525.

  58. Seligman, M. E. P., Ernst, R. M., Gillham, J., Reivich, K., & Linkins, M. (2009). Positive education: Positive psychology and classroom interventions. Oxford Review of Education, 35(3), 293–311. https://doi.org/10.1080/03054980902934563.

  59. Shechtman, N., DeBarger, A. H., Dornsife, C., Rosier, S., & Yarnall, L. (2013). Promoting grit, tenacity, and perseverance: Critical factors for success in the 21st century. U.S. Department of Education Office of Educational Technology. https://www.sri.com/wp-content/uploads/pdf/promoting-grit-tenacity-and-perseverance-critical-factors-success-21st-century.pdf

  60. Shepherd, J., Smith, M., & Schofield, G. (2014). Convergent validity and test–retest reliability of the authentic happiness inventory in working adults. Social Indicators Research, 124, 1–10. https://doi.org/10.1007/s11205-014-0812-6.

  61. Smith, S. R. (2007). Making sense of multiple informants in child and adolescent psychopathology: A guide for clinicians. Journal of Psychoeducational Assessment, 25, 139–149. https://doi.org/10.1177/0734282906296233.

  62. Snyder, C. R., Lopez, S. J., Shorey, H. S., Rand, K. L., & Feldman, D. B. (2003). Hope theory, measurements, and applications to school psychology. School Psychology Quarterly, 18, 122–139. https://doi.org/10.1521/scpq.

  63. Suldo, S. M., Savage, J. A., & Mercer, S. H. (2014). Increasing middle school students’ life satisfaction: Efficacy of a positive psychology group intervention. Journal of Happiness Study, 15, 19–42. https://doi.org/10.1007/s10902-013-9414-2.

  64. Suldo, S. M., & Shaffer, E. J. (2008). Looking beyond psychopathology: The dual-factor model of mental health in youth. School Psychology Review, 37, 52–68. https://doi.org/10.1080/02796015.2008.12087908.

  65. Telef, B. B., & Furlong, M. J. (2017). Adaptation and validation of the social emotional health survey-secondary into Turkish culture. International Journal of School & Educational Psychology, 5, 255–265. https://doi.org/10.1080/21683603.2016.1234988.

  66. Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–70. https://doi.org/10.1177/109442810031002.

  67. Wagle, R., Dowdy, E., Furlong, M. J., Nylund-Gibson K., Carter, D., & Hinton, T. (2020). Anonymous vs. self-identified response formats: Implications for mental health screening in schools. Assessment for Effective Intervention. First online 30 September, 2020. https://doi.org/10.1177/1534508420959439.

  68. Weiss, A., & Luciano, M. (2015). The genetics and evolution of “covitality.” In M. Pluess (Ed.), Genetics of psychological well-being: The role of heritability and genetics in positive psychology (chap. 9). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199686674.001.0001.

  69. Whitney, D. G., & Peterson, M. D. (2016). U.S. national and state-level prevalence of mental health disorders and disparities of mental health care use in children. JAMA Pediatrics, 173, 389–391. https://doi.org/10.1001/jamapediatrics.2018.5399.


  70. Wilkinson, R. B., & Walford, W. A. (1998). The measurement of adolescent psychological health: One or two dimensions? Journal of Youth and Adolescence, 27, 443–455. https://doi.org/10.1023/A:1022848001938.


  71. Xie, J., Liu, S., Yang, C., & Furlong, M. J. (2018). Chinese version of social and emotional health survey–primary. Chinese Journal of Clinical Psychology, 26(3), 522–527. https://doi.org/10.16128/j.cnki.1005-3611.2017.06.004.


  72. You, S., Furlong, M. J., Felix, E., & O’Malley, M. (2015). Validation of the social and emotional health survey for five sociocultural groups: Multigroup invariance and latent mean analyses. Psychology in the Schools, 52, 349–362. https://doi.org/10.1002/pits.21828.


  73. You, S., Furlong, M. J., Dowdy, E., Renshaw, T. L., Smith, D. C., & O’Malley, M. D. (2014). Further validation of the social and emotional health survey for high school students. Applied Research in Quality of Life, 9, 997–1015. https://doi.org/10.1007/s11482-013-9282-2.


  74. Zhou, J., Jiang, S., Zhu, X., Huebner, E. S., & Tian, L. (2020). Profiles and transitions of dual-factor mental health among Chinese early adolescents: The predictive roles of perceived psychological need satisfaction and stress in school. Journal of Youth and Adolescence. First online 22 May 2020. https://doi.org/10.1007/s10964-020-01253-7.

  75. Zins, J. E., Bloodworth, M. R., Weissberg, R. P., & Walberg, H. J. (2007). The scientific base linking social and emotional learning to school success. Journal of Educational & Psychological Consultation, 17(2–3), 191–210. https://doi.org/10.1080/10474410701413145.




Acknowledgements

We sincerely thank the families, teachers, and students who generously contributed their time to this project. Our appreciation is also extended to Dr. Tom Hanson of WestEd and Hilva Chan of the California State Department of Education for their essential support in completing this study.


Funding

The research reported here was supported in part by the Institute of Education Sciences, U.S. Department of Education, through Grant # R305A160157 to the University of California, Santa Barbara. The opinions expressed are those of the authors and do not represent views of the Institute of Education Sciences or the U.S. Department of Education.

Author information



Corresponding author

Correspondence to Michael J. Furlong.

Ethics declarations

Conflicts of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Code Availability

The Mplus syntax used for this study’s analyses is available upon request.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information


(DOCX 19 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Furlong, M.J., Dowdy, E., Nylund-Gibson, K. et al. Enhancement and Standardization of a Universal Social-Emotional Health Measure for Students’ Psychological Strengths. J well-being assess (2021). https://doi.org/10.1007/s41543-020-00032-2



Keywords

  • Social emotional health survey–secondary
  • SEHS-S-2020
  • Strength-based
  • School-based
  • Covitality
  • Universal screening