1 Introduction

The field of positive psychology has grown rapidly (Rusk & Waters, 2013). A central goal of this growing branch of psychology is the identification, development, and evaluation of interventions that aim to enhance well-being (Wood & Johnson, 2016). The rationale for developing well-being interventions stems in part from the recognition that well-being and psychopathology are two independent constructs (Carr et al., 2021). There has been substantial progress in research and practice in expanding the notion of mental health beyond the absence of mental illness to integrate the presence of positive features, including well-being (Galderisi et al., 2015). Well-being has frequently been described as a difficult concept to define due to the dynamic, multifaceted constructs that constitute it (Dodge et al., 2012). Researchers across a variety of backgrounds suggest dividing well-being into objective and subjective components (Voukelatou et al., 2021). The objective components of well-being include many material and social attributes of one’s life circumstances, including physical resources, education, employment and income, housing, and health; such attributes are measured quantitatively (Wallace et al., 2021). The subjective components of well-being, on the other hand, are represented in an individual’s thoughts and feelings about their life and circumstances, and their level of satisfaction with these (King et al., 2014). Subjective well-being is interpreted to mean experiencing a high level of positive affect, a low level of negative affect, and a high degree of satisfaction with one’s life (Deci & Ryan, 2008). It has traditionally been captured through studies based on self-report measures in which individuals evaluate their own lives (Keyes, 2006). Subjective well-being is enhanced when the social determinants of a healthy life, in particular psychosocial (mental health and social support) and physical (diet and physical activity) determinants, are promoted and protected (Naz & Bögenhold, 2018).

The COVID-19 pandemic had significant negative effects on the overall well-being of individuals (Toselli et al., 2022), forcing people around the globe to considerably modify their daily routines (Bastoni et al., 2021). Social distancing and self-isolation policies were introduced in most European countries, including public gathering bans, border closures, temporary restrictions on the free internal movement of citizens, and school and workplace closures (ECDPC, 2020). The Republic of Ireland was placed on full lockdown on the 27th of March 2020, which involved a ban on all non-essential journeys outside the home for two weeks. The only exceptions were for travelling to and from work for essential workers, shopping for essential food and household goods, attending medical appointments, supporting the sick or elderly, and taking brief physical exercise within 2 km of one’s home. In May 2020, the Irish government published a COVID-19 ‘roadmap’ of four phases for reopening society and business, easing the restrictions in a phased manner over five months (Department of the Taoiseach, 2020). In the Netherlands, the Dutch Government implemented a strategy it called an “Intelligent COVID-19 Lockdown” (de Haas et al., 2020). This lockdown involved social distancing, social isolation, public event cancellations, self-quarantine, and a 9pm curfew (Fried et al., 2022). Large-scale businesses, schools, and universities were closed, and international travel was restricted (Government of the Netherlands, 2020). Although recognized as effective measures to curb the spread of COVID-19, these “stay-at-home” policies had a negative effect on the overall well-being of the population, with a greater proportion of individuals experiencing physical and social inactivity, poor sleep quality, unhealthy diet behaviours, and unemployment (Ammar et al., 2021).

The COVID-19 pandemic had an extensive impact on the higher education sector globally (Crawford et al., 2020), with universities from around the world, including in Ireland and the Netherlands, switching to remote learning to prevent the spread of the virus (Gewin, 2020). University students, already recognized as a vulnerable population, were at increased risk of mental and physical health issues given the COVID-19-related disruptions to higher education (Liu et al., 2020). Prior to the outbreak of the virus, the World Health Organisation’s international survey of 13,984 participants across eight countries found that one-third of first-year college students self-reported a mental health disorder (Auerbach et al., 2018). Students’ experience of studying has been described as disrupted, leading to feelings of anxiety, hopelessness, and insecurity (Ma et al., 2020; Hajdúk et al., 2020). Remote learning has also affected the physical activity of students, with reports demonstrating that students’ levels of physical activity decreased across all ages (Martínez-de-Quel et al., 2021). Physical activity has well-established relations with positive mental health and well-being (2018 PAGAC; Sallis et al., 2016). The sudden changes in learning environments, along with the prevalence of mental health disorders and decreases in physical activity levels following the COVID-19 pandemic, have brought the importance of well-being into focus for many universities (Novo et al., 2020). Student well-being has been shown to increase a sense of belonging, positive relationships with others, participation in learning activities, autonomy, and competencies (Cox & Brewster, 2021) and to reduce stress, frustration, burnout, and withdrawal from active learning (Yazici et al., 2016). Well-being not only stimulates student academic achievement but also prepares students for lifelong success (Mahatmya et al., 2018). Thus, focusing on methods to promote and enhance student subjective well-being is a necessary endeavor.

With research suggesting that the change to remote learning had a measurable negative impact on student well-being (e.g., Ostafichuk et al., 2021), digital interventions emerge as a compelling strategy for improving student well-being; such interventions have been developed and evaluated in randomized controlled trials, with many showing positive results (Lattie et al., 2019).

Universities are an ideal setting for such well-being interventions (Baik et al., 2019). However, with many universities teaching remotely, and more remote working options available to students after university, digital well-being interventions may be the optimal method to improve well-being, as they can be delivered, for example, through smartphone applications, computer programs, short message service (SMS), interactive websites, social media, and wearable devices (Perski et al., 2017). One such intervention is StudentPOWR.

1.1 StudentPOWR

StudentPOWR is a holistic well-being platform that incorporates both psychological and physical well-being. It comprises the pre-existing online well-being tool ‘POWR’ (wrkit.com/products/well-being) and the pre-existing online exercise tool Move (wrkit.com/move). POWR is an interactive well-being tool designed by a team of clinical psychologists to help individuals manage their overall well-being and academic performance. It is paid for by the university and is free for students to use. POWR is accessible 24/7 on all online devices and has six pillars of well-being: mind, active, sleep, food, life, and work. The user completes clinical questionnaires within each pillar of well-being and receives a score based on their answers. The scores fall into four categories: low, average, high, or excellent. According to their scores on these questionnaires, an individually tailored behavioural ‘plan’ is recommended. There are over 450 behavioural plans, recommended according to an algorithm developed by the clinical and technology teams. These are easy seven-day plans based on cognitive behavioural therapy techniques that encourage users to engage in health-promoting behaviours that enhance well-being (see Appendix 1 for screenshots and a more detailed description of POWR’s features). Alongside POWR, StudentPOWR also includes the online exercise tool Move (see screenshots in Appendix 2). Move contains workout videos, including Yoga, Pilates, High Intensity Interval Training, mobility classes, and deskercises (short exercises that can be done at one’s desk with no equipment needed). Move was developed by licensed personal trainers and Yoga and Pilates instructors. Each workout is designed to be done in the comfort of the individual’s home, so they are all suitable for students studying from home.
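To make the questionnaire-to-plan logic concrete, the following minimal Python sketch maps a hypothetical pillar score to one of the four categories and looks up a plan. The thresholds, function names, and plan titles are illustrative assumptions only and do not represent the proprietary algorithm developed by the clinical and technology teams.

```python
# Illustrative sketch only: thresholds and plan names are hypothetical,
# not the proprietary POWR recommendation algorithm.

def categorise_pillar_score(score: float) -> str:
    """Map a (hypothetical) 0-100 pillar questionnaire score to one of POWR's four categories."""
    if score < 40:
        return "low"
    elif score < 60:
        return "average"
    elif score < 80:
        return "high"
    return "excellent"

# Hypothetical mapping from (pillar, category) to a seven-day behavioural plan.
EXAMPLE_PLANS = {
    ("sleep", "low"): "7-day wind-down routine",
    ("active", "average"): "7-day daily movement starter",
}

def recommend_plan(pillar: str, score: float) -> str:
    category = categorise_pillar_score(score)
    return EXAMPLE_PLANS.get((pillar, category), "general well-being plan")

print(recommend_plan("sleep", 32))  # -> "7-day wind-down routine"
```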

1.2 The Current Study

While there is a growing body of evidence that digital interventions can have significant effects on improving well-being, low levels of user engagement have been reported as a barrier to optimal efficacy (Gan et al., 2022). User engagement is considered one of the most important factors in determining the success of a digital intervention (Sharpe et al., 2017). Although various interpretations of engagement have emerged (Doherty & Doherty, 2018), it has commonly been described as a subjective experience characterized by attention, interest, and affect (Perski et al., 2017). Based on the above evidence, there is an urgent need for, and high potential in, digital interventions to improve student well-being (Rauschenberg et al., 2021). The current study therefore aims to evaluate the online well-being intervention “StudentPOWR” in improving the subjective well-being of university students studying from home in Ireland and the Netherlands. Additionally, engagement, as defined above, is assessed throughout the intervention as a potential moderator between the StudentPOWR intervention and subjective well-being scores.

2 Methods

2.1 Participants

The student participants were recruited using convenience sampling, with an advertisement (Appendix 1) shared via social media posts and group chats. In total, 99 students who were studying from home expressed interest in participating; all were eligible to take part in the study and completed the baseline questionnaires. Based on an a priori power calculation using G*Power (Faul et al., 2009), a minimum of 54 participants were required in total (repeated measures ANOVA on the interaction effect of the between and within factors with three levels each, assuming a medium effect size of 0.25, alpha of 0.05, and power of 0.95; see Appendix 3). We further ran a post-hoc analysis with the same settings, setting the sample size to 70 (the most conservative scenario, as 70 was the minimum sample size achieved at time point 3); this resulted in a power > 0.99. Data were collected remotely from three universities in the Netherlands and five universities in Ireland. Participants came from diverse faculties across universities. Participation in the study was voluntary. The inclusion criteria were: (1) age 18 years or older; (2) university student studying from home; (3) fluency in the English language; (4) access to a laptop and smartphone.

2.2 Procedure

A non-blinded, three-arm (full access, limited access, and waitlist control) randomized controlled trial (RCT) was carried out in March 2021. After the recruitment process, all 99 interested students met the inclusion criteria. Participants were provided with a unique 8-digit code to ensure anonymity and confidentiality.

The participants were surveyed at three intervals over the four-week study period (see Fig. 1). There were three conditions, all running over the four weeks: (1) full access to the intervention, (2) limited access to the intervention, and (3) a waitlist control group. Both the limited access group and the control group were offered full access after completion of the study. To establish baseline subjective well-being scores before any intervention, all participants received a Qualtrics link to complete the pre-test SPANE well-being questionnaire (Diener et al., 2010). Randomization of participants was conducted using the random function in Microsoft Excel 2015, aiming to achieve equal-sized groups. Participants were randomly assigned to condition 1 (N = 36), condition 2 (N = 30), or condition 3 (N = 33).
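The randomization itself was performed with Excel’s random function; as a rough, non-authoritative equivalent of that procedure, the Python sketch below shuffles a hypothetical list of participant codes and deals them into three equal groups (the codes, seed, and allocation scheme are illustrative assumptions).

```python
import random

# Hypothetical participant codes; the study used unique 8-digit codes.
participant_codes = [f"{i:08d}" for i in range(1, 100)]  # 99 participants

random.seed(2021)                  # fixed seed so the illustration is reproducible
random.shuffle(participant_codes)  # random order, analogous to sorting on Excel's RAND()

conditions = ["full access", "limited access", "waitlist control"]
# Deal participants round-robin into the three conditions (33 each here).
allocation = {code: conditions[i % 3] for i, code in enumerate(participant_codes)}
```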

Fig. 1

Trial design flow

The participants in condition 1 were emailed a link to register with full access to StudentPOWR (POWR and Move). Participants in condition 2 were emailed a link to register with limited access to StudentPOWR (Move only). Participants in condition 3 were informed that they were in the waitlist control group and that they would be granted full access to StudentPOWR after four weeks. All participants were aware of the study’s purpose and their assignment, as participants in conditions 2 and 3 needed to know that they would receive full access to the intervention after the four-week study period. Two weeks later, the SPANE well-being questionnaire (Diener et al., 2010) was administered to each of the conditions again (mid-test, time point 2). To assess participants’ engagement with StudentPOWR, the DBCI-ES-Ex (Perski et al., 2020) was also administered to participants in conditions 1 and 2. Four weeks after baseline, the SPANE well-being questionnaire and the DBCI-ES-Ex engagement questionnaire were administered to participants in conditions 1 and 2 again (post-test, time point 3); in condition 3, only the SPANE questionnaire was administered. After data collection was complete, participants in conditions 2 and 3 were granted full access to StudentPOWR for the following four weeks, and participants in condition 1 continued to have full access. No data were collected during these four weeks post-intervention. During this period, a well-being challenge was run on StudentPOWR: participants who wished to take part were required to complete an online Pilates class on the Move portal of StudentPOWR. The challenge was incentivized with a Fitbit Versa 3, and the winner was selected at random using the random function in Microsoft Excel 2015. Participants were aware of this challenge prior to participation in the research, and no data were collected during the challenge. The study procedure was granted ethics approval by the Ethics Review Committee Psychology and Neuroscience (ERCPN; OZL_188_11_02_2018_S22) on 10/02/2021.

2.3 Measures

The Scale of Positive and Negative Experience (SPANE; Diener et al., 2010) was used to assess the subjective well-being of participants at the three time points (α = 0.89, averaged across all three time points). This is a 12-item self-report measure of positive feelings, negative feelings, and the balance between the two. Items such as “How often do you experience feeling happy?” were rated on a 5-point frequency scale ranging from “Very Rarely/Never” to “Very Often/Always”. The negative feelings score is subtracted from the positive feelings score, and the resulting difference score can range from −24 (unhappiest possible) to +24 (happiest possible).
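As a minimal sketch of this scoring rule (scoring in the study was done via Qualtrics and SPSS, not Python), the snippet below sums six positive and six negative items rated 1-5 and takes their difference; the item labels follow the published SPANE feeling words but are included here for illustration only.

```python
# Minimal SPANE scoring sketch. Each item is answered on a
# 1 ("Very Rarely/Never") to 5 ("Very Often/Always") frequency scale.
POSITIVE_ITEMS = ["positive", "good", "pleasant", "happy", "joyful", "contented"]
NEGATIVE_ITEMS = ["negative", "bad", "unpleasant", "sad", "afraid", "angry"]

def spane_scores(responses: dict[str, int]) -> dict[str, int]:
    """Return the positive, negative, and balance scores from 12 item responses."""
    pos = sum(responses[item] for item in POSITIVE_ITEMS)  # range 6-30
    neg = sum(responses[item] for item in NEGATIVE_ITEMS)  # range 6-30
    return {"positive": pos, "negative": neg, "balance": pos - neg}  # balance: -24 to +24
```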

The experiential subscale of the Digital Behaviour Change Intervention Engagement Scale (DBCI-ES-Ex; Perski et al., 2020) was used to assess participants’ engagement with StudentPOWR (α = 0.79, averaged across the two time points). It consists of eight items on a 7-point scale ranging from “1: not at all” to “7: extremely”. The items were introduced as: “Please answer the following questions with regard to your most recent use of StudentPOWR. How strongly did you experience the following?” The items were (1) Interest, (2) Intrigue, (3) Focus, (4) Inattention, (5) Distraction, (6) Enjoyment, (7) Annoyance, and (8) Pleasure, with items 4, 5, and 7 reverse scored. For the DBCI-ES-Ex, a sum score was calculated (range: 8, no engagement at all, to 56, fully engaged).
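A minimal sketch of this sum score, assuming item responses are stored as numbers 1-7 keyed by item number (all names are ours, for illustration only):

```python
# DBCI-ES-Ex experiential subscale sum score (range 8-56).
# Items: 1 Interest, 2 Intrigue, 3 Focus, 4 Inattention, 5 Distraction,
# 6 Enjoyment, 7 Annoyance, 8 Pleasure; items 4, 5, and 7 are reverse scored.
REVERSE_ITEMS = {4, 5, 7}

def dbci_sum_score(responses: dict[int, int]) -> int:
    """responses maps item number (1-8) to a rating on the 1-7 scale."""
    total = 0
    for item, rating in responses.items():
        total += (8 - rating) if item in REVERSE_ITEMS else rating
    return total  # 8 = no engagement at all, 56 = fully engaged
```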

2.4 Data Analysis

All participants with data at one or more time points (n = 99) were included in the statistical analyses. Statistical analyses were conducted using IBM SPSS Statistics 27. To assess whether randomization of participants into the three conditions was successful, baseline values for all outcomes were compared using an ANOVA. To test whether subjective well-being scores improved during the intervention in each of the three conditions, mixed regressions with repeated measures over time were conducted on the dependent variable subjective well-being (SPANE). Specifically, we used a random intercept (subjects) model (model 1 in Table 1) with Gender, Exchange student, Age, Time, Condition, and the interaction between Time and Condition as fixed effects. The covariance structure was set to scaled identity, the simplest possible structure, which assumes constant variance at each time point and no correlation between measurement times (more complicated covariance structures were explored but the results were essentially the same, so we chose the simplest structure for parsimony). The engagement outcome (DBCI) was missing both at baseline and in the control condition (by design). After excluding time point 1 and the waitlist control group, a mixed regression with random intercept was run with subjective well-being (SPANE) as the dependent variable and Gender, Exchange student, Age, Time, Condition, Engagement (DBCI, time varying), and all possible interactions among Time, Condition, and DBCI as fixed effects (model 2a in Table 1). The covariance structure was again set to scaled identity. Lastly, a simple linear regression was carried out to predict participants’ mean well-being change scores (pre-post intervention) from their engagement with the intervention (model 3). All tests were carried out with alpha = 0.05.
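All models were estimated in SPSS; as a rough, non-equivalent sketch of model 1, a Python/statsmodels version with hypothetical file and column names might look as follows (MixedLM’s default independent, constant-variance residuals correspond to the scaled-identity structure described above).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per time point.
# Assumed columns: subject, spane, gender, exchange, age, time (1-3), condition (1-3).
df = pd.read_csv("studentpowr_long.csv")  # illustrative file name

# Model 1: random intercept per subject; fixed effects for covariates,
# Time, Condition, and the Time x Condition interaction.
model1 = smf.mixedlm(
    "spane ~ gender + exchange + age + C(time) * C(condition)",
    data=df,
    groups=df["subject"],
)
result1 = model1.fit()
print(result1.summary())
```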

Table 1 Analysis model summary

3 Results

3.1 Study Population

The sample consisted of 70 women (70.7%) and 29 men (29.3%), whose ages ranged from 19 to 43 (M = 23.7, SD = 3.86). Participant demographics are presented in Table 2. No significant differences between groups were found in baseline characteristics (all p’s > 0.72), suggesting that randomization was successful.

Table 2 Participant demographics at baseline

3.2 Dropout Analysis

From the start (pre-test) to the end of the intervention, there was an overall dropout rate of 29%. The dropout rate was 19% in condition 1 (full access to the intervention), 43% in condition 2 (partial access to the intervention), and 27% in condition 3 (waitlist control group). The dropout rate was 27% among female participants and 36% among male participants. Dropout rates are presented in Table 3. Possible bias arising from the relation between baseline variables and missingness of the outcome was addressed by including all baseline variables as predictors in the mixed regression effect analyses (Verbeke & Molenberghs, 2000).

Table 3 Dropout rates across conditions and demographics

3.3 Improvements in Subjective Well-Being

The means and standard deviations of SPANE scores for each of the conditions at each time point are presented in Table 4. A significant interaction between time and condition was found, F(4, 151) = 4.04, p = .004. The regression coefficients for the interaction, using the first time point and the control condition (waitlist control, Condition 3) as reference, show a significant difference between the control condition at time 1 and (a) the full intervention (Condition 1) at time 2 (p = .003, regression coefficient \(b=3.87\), 95% CI = [1.30, 6.44], Hedges g = 0.4902, small effect size), (b) the partial intervention (Condition 2) at time 2 (p = .012, regression coefficient \(b=3.60\), 95% CI = [0.80, 6.40], Hedges g = 0.5698, medium effect size), and (c) the partial intervention at time 3 (p = .003, regression coefficient \(b=4.54\), 95% CI = [1.57, 7.51], Hedges g = 0.9024, large effect size). No significant difference was found between the control condition at time 1 and the full intervention at time 3 (p = .175, regression coefficient \(b=1.81\), 95% CI = [-0.82, 4.44]). This is visualized in Fig. 2.

Table 4 Descriptive statistics for SPANE well-being scores in each condition across time
Fig. 2

Visual representation of the intervention effect on subjective well-being

3.4 Engagement with the Intervention

A mixed regression was run to examine how engagement with the intervention influenced the intervention effects on subjective well-being. The means and standard deviations of engagement scores for conditions 1 and 2 are provided in Table 5. Since the specified model (model 2a in Table 1) contains four interaction terms (three first-order, one second-order), which makes the model complex and difficult to interpret, we ran model selection on the interaction terms using backward stepwise regression. Specifically, we first focused on the second-order interaction, which was not significant (p = .468). The analysis was then run again with only the first-order interactions (model 2b in Table 1). Again, these were all non-significant, with the largest p-value corresponding to the interaction between Condition and Engagement (p = .889), which was removed from the analysis, leading to model 2c in Table 1. The remaining first-order interactions were also not significant (p = .163 for condition × time and p = .095 for time × engagement). We then tested both models (model 2d and model 2e in Table 1) keeping only one of the two remaining first-order interactions; these were also not significant (p = .124 for condition × time and p = .073 for time × engagement). Finally, we fitted the model without any interactions (model 2f in Table 1). We focus on this final model, in which only engagement was a significant predictor (F(1, 88.540) = 19.239; p < .001), suggesting a possible confounding effect of engagement. However, this conclusion can only be drawn with caution, as the analysis was performed on a reduced dataset.
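A rough sketch of this backward selection over the interaction terms, continuing the hypothetical statsmodels setup from the Methods section (the SPSS degrees of freedom and p-values would not be reproduced exactly):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("studentpowr_long.csv")  # same hypothetical file as in the Methods sketch
# By design, engagement is missing at time point 1 and in the waitlist control group,
# so both are excluded before fitting models 2a-2f.
df_int = df[(df["time"] > 1) & (df["condition"] != 3)]

# Model 2a: all interactions among time, condition, and engagement.
m2a = smf.mixedlm(
    "spane ~ gender + exchange + age + C(time) * C(condition) * engagement",
    data=df_int, groups=df_int["subject"],
).fit()

# Inspect the second-order interaction first; if non-significant, refit with only
# first-order interactions and drop the least significant term at each step,
# down to model 2f with no interactions.
m2f = smf.mixedlm(
    "spane ~ gender + exchange + age + C(time) + C(condition) + engagement",
    data=df_int, groups=df_int["subject"],
).fit()
print(m2f.summary())
```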

Table 5 Descriptive statistics for DBCI-ES-Ex scores for participants in conditions 1 and 2

For condition 1, the mean engagement score was 35.77 at mid-intervention and decreased to 35.14 at post-intervention, while the mean subjective well-being score was 8.57 at mid-intervention and 8.38 at post-intervention. For condition 2, the mean engagement score was 34.83 at mid-intervention and increased to 37.18 at post-intervention, while the mean subjective well-being score was 8.81 at mid-intervention and increased to 11.06 at post-intervention.

A simple linear regression was carried out to predict participants’ mean well-being change scores (pre-post intervention) from their engagement with the intervention (model 3 in Table 1). Preliminary analyses were performed to ensure no violation of the assumptions of normality, linearity, and homoscedasticity. A significant regression model was found (F(1, 45) = 6.69, p = .013, regression coefficient \(b=0.318\), 95% CI = [0.07, 0.556]), with an R² of 0.132; the model intercept of 1.049 is the predicted well-being change score when one is not engaged at all. Based on the regression equation, each one-point increase in engagement is expected to improve participants’ mean well-being change score by 0.318. This suggests that the level of engagement participants in conditions 1 and 2 had with the intervention had a significant effect on their subjective well-being scores post-intervention, consistent with engagement serving as a moderator of subjective well-being.
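As a minimal, non-authoritative sketch of model 3 (the analysis was run in SPSS, not Python), the statsmodels snippet below regresses the pre-post well-being change score on engagement; the file and column names are assumptions for illustration.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format data: one row per participant in conditions 1 and 2.
# Assumed columns: wellbeing_change (post-test minus pre-test SPANE) and engagement.
df = pd.read_csv("studentpowr_change_scores.csv")  # illustrative file name

# Model 3: predict the pre-post well-being change score from engagement.
model3 = smf.ols("wellbeing_change ~ engagement", data=df).fit()
print(model3.params, model3.rsquared)
# In the reported model, the intercept was 1.049, the slope 0.318, and R-squared 0.132,
# so each one-point increase in engagement predicts a 0.318-point larger change score.
```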

4 Discussion

In this study, the effects of the StudentPOWR well-being intervention on the subjective well-being of students studying from home were examined. In line with earlier digital well-being interventions (e.g., Harrer et al., 2019; Lambert et al., 2019; Krifa et al., 2022; Davies et al., 2014), our online intervention (POWR and Move) was successful in improving student well-being scores compared to a control group at week 2, with a small effect size for the full intervention (Hedges g = 0.4902) and a medium effect size for the partial intervention (Hedges g = 0.5698). However, post-intervention measures (week 4) suggested that only students receiving the partial intervention (Move), and not the full intervention, showed significant changes in subjective well-being compared to the control group, with a large effect size (Hedges g = 0.9024). Engagement scores for those with full access to the intervention decreased slightly over the course of the experiment, while engagement scores for participants with partial access increased over time.

Although initially unexpected, the findings are in line with earlier studies (e.g., Kurelović et al., 2016; Eppler & Mengis, 2003) suggesting that perceived information overload has an overall negative impact on well-being, including psychological stress (Lee et al., 2016), anxiety (Bawden & Robinson, 2009), and negative affect (LaRose et al., 2014). Various authors discuss the factors that influence information overload, with a consensus that the technology used to access the information plays a key role (Ruff, 2002; Vigil, 2011). In our combined POWR and Move intervention, which offered informative articles, notifications, clinical plans, webinars, exercise videos, and more, there may simply have been too many features involved.

Some authors suggest that this (digital) information overload may lead to reduced user engagement (Sharpe et al., 2017) and higher levels of digital fatigue (defined as “a state of indifference or apathy brought on by an overexposure to something”; Merriam-Webster, 2022, p. 1). The advent of COVID-19 resulted in an unprecedented level of global digital communication from both universities and individuals (Chang et al., 2020). During this time, there was a rise in the general use of technology as people used it to consume news media, watch television, connect with others via social media, and shop for groceries and other consumer goods through lifestyle apps (Garfin, 2020). Previous research suggests that excessive use of digital devices led to digital fatigue in students during COVID-19 (Sarangal & Nargotra, 2022; Sharma et al., 2021). This digital fatigue has been found to decrease subjective well-being (Singh et al., 2022) and may have influenced the subjective well-being scores of students receiving the full intervention (POWR and Move).

The level of engagement (as measured through attention, interest, and affect) that participants in conditions 1 and 2 had with the intervention had a significant effect on their subjective well-being scores post-intervention, such that those who were more engaged had increases in subjective well-being. This is in line with previous research by Gander et al. (2016), who concluded that positive psychology interventions based on engagement, meaning, positive relationships, and accomplishment are effective in increasing well-being.

5 Limitations, Strengths, and Future Directions

The current study had several limitations. First, the 2021 sample was (1) recruited online, (2) mostly female, and (3) composed only of students from Ireland and the Netherlands. The self-guided manner of data collection might have compromised internal validity (e.g., extraneous variables influencing subjective well-being scores) (Neumeier et al., 2017; Andrade, 2018). Additionally, studies have indicated greater levels of mental health problems due to COVID-19 in women compared to men (McGinty et al., 2020; Pierce et al., 2020; Di Giuseppe et al., 2020). Therefore, generalizability to other populations might be limited (Gray et al., 2020; Arendt et al., 2020). This was, however, considered a worthwhile trade-off in order to gain a better understanding of the interventions’ effectiveness in the real-world context for which they are intended. While COVID-19 restrictions varied across countries and this study focused only on universities in Ireland and the Netherlands, a high percentage of participants were exchange students, enhancing the cultural diversity of the sample.

A second limitation was that dropout rates differed between conditions (differential attrition; Bell et al., 2013): the dropout rate in the partial access group (45%) was nearly double that of the full access group (24%). This differential attrition may have influenced the increasing subjective well-being scores of participants in the partial access group.

To overcome some of these limitations, participants were randomly assigned to one of the three conditions, which is one method of avoiding selection effects within experiments (i.e., each group had similar subjective well-being scores before the intervention) (Lanz, 2020). The randomization of participants reduced bias and provided a rigorous tool to examine the cause-effect relationship between the intervention and its outcome (Hariton & Locascio, 2018). Additionally, dropout analyses were conducted on exchange and non-exchange students, which provided insights into differences between these cohorts of students. Further, the inclusion of a waitlist control group had ethical advantages because it allowed for the provision of care (if delayed) to participants during a difficult period (Cunningham et al., 2013).

In future studies, it is recommended to include country of origin as a variable to gain a better understanding of the subjective well-being of students from different countries. The influence of cultural and social contexts on university students’ well-being is an underexplored research area (Hernández-Torrano et al., 2020). It is also recommended to include a more extensive measure of engagement. In the context of digital interventions, engagement has typically been conceptualized as (1) the extent of usage of digital interventions, focusing on amount, frequency, duration, and depth of usage (Danaher et al., 2006), and (2) a subjective experience characterized by attention, interest, and affect (Perski et al., 2017). While our study accounted for the latter when measuring engagement, a more accurate representation might include the former as well, incorporating drop-out rates into the measurement of engagement. Future replication studies should consider both concepts of engagement by measuring amount, frequency, duration, and depth of usage as well as the constructs measured in Perski et al.’s DBCI-ES-Ex. This would help guard against the extraneous effects of differential attrition on the outcomes of the research.

6 Conclusion

The current study showed some positive outcomes of the StudentPOWR intervention in improving the subjective well-being of students studying from home. While it was initially predicted that combining POWR and Move in one intervention would have a greater positive impact on subjective well-being than Move alone, this was not the case. Engagement with the interventions played a potential role in this effect. The findings of this research, along with previous studies on information overload, digital fatigue, and subjective well-being, suggest that when designing digital well-being interventions to enhance student well-being, sometimes less is more. This is particularly relevant during times of heightened technology use, such as the COVID-19 pandemic.