The internet has become an essential part of people’s daily routines as a result of increasing digitization and rapid technological development. Internet Addiction (IA) is one of the most significant consequences of the spread of technological devices, and it is a highly investigated topic in current research because of its potentially relevant effects on users’ health and habits.

Although the academic literature on IA is extensive, with a high number of studies published in the last 30 years, there is still a lack of agreement among scholars about its conceptualization. Indeed, some researchers have questioned the definition of IA as a singular entity because the internet allows access to a wide range of activities, including playing (e.g., online gaming, gambling), socializing (e.g., social network sites, dating websites), entertainment (e.g., movies, TV series, music), working (e.g., accessing online resources, emailing), and shopping (e.g., food, clothes, utilities) (Van Rooij & Prause, 2014). Similarly, Ryding et al. (2018) pointed out that the internet actually provides individuals with several multidimensional virtual spaces for accomplishing various goals, indicating a somewhat questionable consistency of the IA construct. In line with this claim, Griffiths (2018) proposed distinguishing “general internet addiction” (i.e., when a person spends an excessive amount of time on more than one activity, disregarding all the important aspects of his/her life) from “specific internet addictions” (i.e., when a person spends an excessive amount of time on only one activity, such as gaming or gambling, with the same disregard), also noting that individuals who are addicted to online sex, online gaming, online gambling, or online shopping are actually addicted to sex, gaming, gambling, or shopping rather than to the internet. The lack of specific criteria for recognizing and diagnosing IA adds to the debate. In fact, the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (American Psychiatric Association, 2013) only included Internet Gaming Disorder (IGD) – a related but distinct condition from IA – in Section III as a phenomenon that requires further investigation before being recognized as a formal disorder.
Despite these inconsistencies in content and unclear and overlapping definitions, IA is commonly defined as the uncontrolled, time-consuming use of the internet, with impairments in people’s ability to manage time and the resulting reduced interests and skills in dealing with other aspects of life (Weinstein & Lejoyeux, 2010). As reported by a recent meta-analysis and systematic review (Lozano-Blasco et al., 2022), the prevalence of IA varies by geographic area: from 40 to 50% in Eastern countries (Mark et al., 2014), from 10 to 30% in America (Cruz et al., 2018), and from 2 to 8% in Europe (Pontes et al., 2016), and it tends to decrease in Australia and New Zealand. However, these discrepancies may be due to the lack of common diagnostic criteria, as well as to the different scales used (Błachnio et al., 2017). For instance, in some Asian countries, parents may regard any use of the internet that is not related to academic purposes as problematic. Moreover, some meta-analytic studies (Lei et al., 2018a, 2018b) reported significant differences in IA within the same country (China), pointing out the difficulty of comparing these findings. Thus, the apparent influence of geographic area on IA may partly reflect methodological differences (Lozano-Blasco et al., 2022).

Measuring IA Through the Internet Addiction Test (IAT): Main Evidence of its Psychometric Properties and Need for a Shorter Version

The Internet Addiction Test (IAT) (Young, 1998) is the most commonly used and cross-validated instrument for assessing IA. A recent meta-analysis aimed at summarizing the scale’s psychometric properties (Moon et al., 2018) evidenced discrepancies in IAT dimensionality, with the 25 studies examined indicating one- to five-factor structures. However, these authors reported that some of the studies included in their meta-analysis did not strictly follow the recommended guidelines for factor analysis, showing problems such as an insufficient sample size for the number of factors and variables, or the presence of cross-loadings even after item removal (Mundfrom et al., 2005; Yong & Pearce, 2013); thus, they suggested that the adequate number of factors for the IAT should be one (Boysan et al., 2017; Dhir et al., 2015; Waqas et al., 2018) or two (Faraci et al., 2013; Servidio, 2017). Methodological issues, such as the different sample sizes involved in each study or the diverse methods applied for factor extraction, could have influenced the disparate factor solutions. Because the reported studies were conducted in different countries, cultural differences may be another possible explanation for the divergent findings. Despite these inconsistencies in the factor structure, Moon et al. (2018) reported that the IAT has good psychometric properties in terms of reliability of the test scores (internal consistency and test–retest reliability) and validity of test score interpretations, highlighting its high quality for assessing IA.

Despite the IAT’s utility and strengths, developing a shorter version may be advantageous for several reasons. First of all, some previous research has reported Cronbach’s alpha values of 0.90 or higher (Barke et al., 2012; Boysan et al., 2017; Faraci et al., 2013; Hawi et al., 2015). Although homogeneity is generally regarded as a useful indicator of a scale’s quality, internal consistency greater than 0.90 in a 20-item measure may reflect some redundancy, indicating that multiple items may evaluate the same content (Streiner, 2003). Additionally, some items of the original IAT may be outdated and unsuitable for assessing IA in the modern era. For example, item 7 “How often do you check your email before something else that you need to do?” may be problematic because new and alternative modes of communication (e.g., social media) have largely replaced e-mail, and smartphones allow easy access to e-mails, so it is no longer necessary to “check” them (Pawlikowski et al., 2013). Consistent with this assumption, several studies found that removing item 7 improved model fit (Ali et al., 2021; Fernández-Villa et al., 2015; Pawlikowski et al., 2013; Siste et al., 2021). Moreover, shorter measures may be useful and valuable for studies involving a large battery of assessment scales because they require less time for administration and scoring, and they are more likely to be completed in a valid manner.

The Current Study

The overall goal of this study was to develop a short version of the IAT and investigate its psychometric properties. We conducted two separate studies: in Study 1, we aimed to create a shorter form of the IAT, with the same strengths as the original full-length scale, and to examine its internal consistency; in Study 2, we investigated the validity of test score interpretations by examining associations with theoretically related variables.

Study 1

The primary goal of Study 1 was to shorten the original version of the IAT while maintaining its psychometric qualities. To develop a good short form, a good long form with a solid theoretical foundation and an established history of structural validity evidence should be used as a starting point (Marsh et al., 2005a, 2005b). As previously noted, this is not entirely the case for the IAT; as a result, before developing the brief form, it was necessary to examine the dimensionality of the full-length IAT. Nevertheless, as evidenced in previous research (Moon et al., 2018), the IAT has proven to be a reliable and valid scale. Therefore, based on its psychometric qualities, we deemed the IAT long form a good candidate for short-form development.

Some brief versions of the IAT have already been published. Pawlikowski et al. (2013) developed a 12-item version (12-IAT) by removing the items with the lowest factor loadings, and other studies replicated the same two-factor solution (Tran et al., 2017; Wéry et al., 2016). Hernandez Contreras and Rivera Ottenberger (2018) proposed a brief Spanish version of the IAT based on a qualitative analysis of the content of the obsolete items, although the authors themselves acknowledged that their work required additional empirical support. To fill this gap, Pino et al. (2021) created the IAT Spanish reduced version (IAT-12) based on empirical data. Subsequently, Ali et al. (2021) selected the items for their six-item version (six-IAT) by relying on the highest factor loadings of the items that seemed to best reflect the six components of Griffiths’ (2005) model (i.e., conflict, mood modification, salience, tolerance, withdrawal, and relapse). Some issues can be highlighted in the item selection procedures of these previous IAT short forms. On the one hand, Pawlikowski et al. (2013) and Pino et al. (2021) relied on a purely data-driven approach, disregarding content-related facets. According to some authors (Goetz et al., 2013), combining statistical and content-based techniques may ensure that the brief form of a scale retains the same psychometric qualities as the original scale. Ali et al. (2021), on the other hand, predetermined the number of items in their shorter version (i.e., one item for each component of Griffiths’ model), neglecting the application of more rigorous and systematic techniques in short-form scale development. In addition, their inclusion of correlated error terms to improve model fit (two error covariances among six items) may indicate that the selected items are somewhat questionable. Thus, with the goal of providing a more comprehensive view, we aimed to compare our proposed shortened version of the IAT with those recently published and described above.

The Methodological Perspective of Exploratory Structural Equation Modeling (ESEM) for Assessing IA

In order to evaluate the dimensionality of the full-length IAT, as well as of its abbreviated form, we applied an Exploratory Structural Equation Modeling (ESEM) approach. ESEM is an appropriate analytic strategy in psychological measurement because, by integrating the best features of EFA (i.e., cross-loadings) and CFA (i.e., a theory-driven approach), it simultaneously allows both flexibility and methodological robustness, while also providing more realistic and less biased results (Marsh et al., 2009, 2014). This methodology yields less inflated inter-factor correlations, which allows for greater differentiation between factors, thus providing better discriminant validity evidence than CFA (Marsh et al., 2014).
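To illustrate the idea behind rotating toward an a priori pattern, the following numpy sketch rotates an unrestricted loading matrix toward a partially specified target via orthogonal Procrustes. This is a simplified illustration with hypothetical names, not the exact oblique target rotation implemented in ESEM software:

```python
import numpy as np

def rotate_toward_target(loadings, target):
    """Orthogonal Procrustes rotation of a loading matrix toward a target.

    Finds the rotation R minimizing ||loadings @ R - target||_F, driving
    non-target loadings toward the zeros specified in the target pattern.
    Illustrative only: ESEM software typically uses an oblique variant.
    """
    U, _, Vt = np.linalg.svd(loadings.T @ target)
    return loadings @ (U @ Vt)

# Hypothetical two-factor pattern: items 1-2 load on F1, items 3-4 on F2.
clean = np.array([[0.9, 0.0], [0.7, 0.0], [0.0, 0.8], [0.0, 0.6]])
theta = np.deg2rad(30)  # an arbitrary rotation that mixes the two factors
mixed = clean @ np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
recovered = rotate_toward_target(mixed, clean)  # recovers the clean pattern
```

The key point is that the rotation criterion pulls cross-loadings toward the zeros of the hypothesized structure, which is what allows ESEM to be used in a confirmatory manner.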

Method

Participants and Procedure

We collected data using the MTurk platform. We established the following inclusion criteria for participants’ recruitment: (a) being over the age of 18, (b) living in Italy, and (c) having an HIT approval rate greater than .90. The study included 463 individuals (53.8% males), with a mean age of 31.69 years (SD = 10.71). The majority of them were workers (53.3%) or students (33.7%), and only a small number reported being unemployed (11.4%) or retired (1.1%). Except for a small percentage of individuals with a basic education (4.8%), participants had a high school diploma (40.6%) or a higher education level (54.7%). We informed participants about the research purposes and assured them that the data would be analyzed collectively, ensuring the anonymity of their responses. Because the proportion of missing data was small (< 5%), we chose the mean-imputation technique to handle missing values. According to Cook (2021), this is an adequate procedure when the percentage of missing data is low and the data are missing completely at random. The research was approved by the Internal Review Board of Psychology of the University of Enna “Kore”.
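As a minimal sketch of the mean-imputation step (the paper does not specify its software; the column names below are hypothetical), assuming item responses are stored in a pandas DataFrame:

```python
import pandas as pd

def mean_impute(responses: pd.DataFrame) -> pd.DataFrame:
    """Replace each missing response with its item (column) mean.

    Adequate only when missingness is low (< 5%) and completely at
    random, as assumed in the text (Cook, 2021).
    """
    return responses.fillna(responses.mean(numeric_only=True))

# Hypothetical 4-respondent example with two IAT items.
items = pd.DataFrame({"iat_1": [1, 2, None, 4], "iat_2": [5, None, 3, 3]})
imputed = mean_impute(items)
```

Item-mean imputation preserves each item's mean but slightly attenuates variances and inter-item correlations, which is why it is only recommended when very little data are missing.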

Instruments

Internet Addiction Test

The instrument is composed of 20 items rated on a Likert scale (from 1 = never to 5 = always). It comprises items describing several addictive behaviors and obsessive internet use (e.g., escapism, compulsivity, and dependency), as well as the consequences of excessive internet use (e.g., interpersonal conflicts, deficiencies in personal activities). Higher scores indicate greater levels of internet addiction.

Data Analyses

Preliminary Data Analyses

First, we checked the items’ distributions by evaluating skewness and kurtosis statistics, regarding absolute values greater than 1 and 3, respectively, as indicative of a violation of the normality assumption. We also tested multivariate normality by computing Mardia’s coefficient. We then inspected item statistics and inter-item correlations to examine the extent to which the items could serve as indicators of IA.
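The univariate and multivariate checks above can be sketched as follows (a minimal scipy/numpy illustration using the textbook definition of Mardia's multivariate kurtosis; the function names are ours, not the authors'):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def univariate_normal_ok(X, skew_cut=1.0, kurt_cut=3.0):
    """Flag each item whose |skewness| <= 1 and |excess kurtosis| <= 3."""
    return (np.abs(skew(X, axis=0)) <= skew_cut) & \
           (np.abs(kurtosis(X, axis=0)) <= kurt_cut)

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis: mean squared Mahalanobis distance.

    Under multivariate normality its expected value is p * (p + 2)
    for p variables; large departures suggest non-normality.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(Xc.T @ Xc / n)
    d2 = np.einsum("ij,jk,ik->i", Xc, S_inv, Xc)  # squared Mahalanobis distances
    return float(np.mean(d2**2))
```

For multivariate-normal data with p = 3 variables the statistic should hover around 3 × 5 = 15; values far above that, as in the sample analyzed later, indicate a violation.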

The Selection of the Best Model for the IAT Long-Form

In addition to the overall model chi-square statistic (χ2), we examined and compared the fit of the models using several indices: the Tucker-Lewis index (TLI), the comparative fit index (CFI), the standardized root mean square residual (SRMR), and the root mean square error of approximation (RMSEA). TLI and CFI values ≥ 0.90 and ≥ 0.95 were considered indicative of acceptable and good model fit, respectively; RMSEA and SRMR values between 0.06 and 0.08 indicated a marginally acceptable model fit, and values between 0.01 and 0.05 indicated an excellent model fit (Hu & Bentler, 1999). Further, models with the lowest AIC, BIC, and aBIC indicated a better fit. However, in addition to evaluating fit indices and information criteria, we also inspected parameter estimates and measurement quality indicators to select the best representation of the data, as suggested by previous authors (Marsh et al., 2004; Marsh et al., 2005a, 2005b). For the multidimensional models, we compared the CFA models (Models 5, 6, and 7) with their respective ESEM structures; in these comparisons, the ESEM solutions were chosen when they produced a significant reduction in the magnitude of the correlations between latent factors. Otherwise, the CFA models were preferred based on the criterion of parsimony (Marsh et al., 2009). We used target rotation for testing the ESEM solutions, a rotational procedure that forces all cross-loadings to be as close to zero as possible (Asparouhov & Muthén, 2009), allowing us to use the ESEM approach in a confirmatory manner (Marsh et al., 2014). We favored target rotation given our a priori knowledge of the factor structure of the tool. We retained the best factor solution of the long version of the IAT for the process of item selection and development of the scale’s brief version.
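The cut-offs above can be expressed as a simple screening helper (our own sketch; the index names and dictionary layout are assumptions, not the authors' code):

```python
def screen_fit(fit):
    """Classify fit indices using the cut-offs adopted in the text
    (Hu & Bentler, 1999): CFI/TLI >= .95 good, >= .90 acceptable;
    RMSEA/SRMR <= .05 excellent, <= .08 (marginally) acceptable."""
    verdict = {}
    for name in ("CFI", "TLI"):
        v = fit[name]
        verdict[name] = "good" if v >= 0.95 else ("acceptable" if v >= 0.90 else "poor")
    for name in ("RMSEA", "SRMR"):
        v = fit[name]
        verdict[name] = "excellent" if v <= 0.05 else ("acceptable" if v <= 0.08 else "poor")
    return verdict
```

For example, applied to CFI = 0.937, TLI = 0.918, RMSEA = 0.063, and SRMR = 0.035, the helper classifies CFI, TLI, and RMSEA as acceptable and SRMR as excellent.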
We also tested the full IAT’s factor structure after removing item 7, which worsened model fit in previous studies (Ali et al., 2021; Fernández-Villa et al., 2015; Pawlikowski et al., 2013; Siste et al., 2021). Table 1 shows all of the tested models. We then compared our proposed short form to previously validated shortened versions, namely the 12-IAT (Pawlikowski et al., 2013), the six-IAT (Ali et al., 2021), and the IAT-12 (Pino et al., 2021).

Table 1 Models Tested for the IAT Long-Form and Items on Each Corresponding Factor

Short-Form Development

Based on the guidelines for item selection in short-form scale development recommended by Marsh et al. (2005a, 2005b), we used the following criteria: (a) retain the same content coverage facets as the full-length IAT; (b) retain a sufficient number of items in each subscale to achieve a reliability coefficient greater than 0.80; (c) retain items with an item-total correlation of at least 0.40; (d) provide adequate model fit; (e) retain items with standardized factor loadings higher than 0.30; (f) retain items with minimal cross-loadings; (g) retain items with little missing data (although the proportion of missing responses was generally less than 5%); and (h) subjectively evaluate the content of each item to maintain the original construct’s breadth of content. In accordance with the recommendations of Moon et al. (2018), unidimensional and two-factor models were tested. Specifically, we included in our analyses the models that met the recommended guidelines for factor analysis (Boysan et al., 2017; Dhir et al., 2015; Faraci et al., 2013; Servidio, 2017; Waqas et al., 2018).
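Criterion (c) can be checked with corrected item-total correlations, sketched here in numpy (our own helper, not the authors' code):

```python
import numpy as np

def corrected_item_total(X):
    """Correlation of each item with the sum of the remaining items
    (the 'corrected' item-total correlation; cut-off 0.40 in the text)."""
    X = np.asarray(X, float)
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```

An item would then be flagged for removal when `corrected_item_total(X) < 0.40`; excluding the item itself from the total avoids the spurious inflation that an uncorrected item-total correlation would produce.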

Reliability

We examined the internal consistency of our proposed short version of the IAT by computing Cronbach’s alpha and McDonald’s omega. Although it is common and useful practice to report both reliability coefficients, the latter should be favored given that the former rests on assumptions that are rarely met in empirical research, such as tau-equivalence (i.e., equal factor loadings), continuous items with a normal distribution, uncorrelated errors, and strict unidimensionality.
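The two coefficients can be sketched as follows (a minimal numpy illustration; omega is the one-factor congeneric form computed from standardized loadings and uniquenesses):

```python
import numpy as np

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    X = np.asarray(X, float)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

def mcdonald_omega(loadings, uniquenesses):
    """omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    lam = float(np.sum(loadings))
    return lam**2 / (lam**2 + float(np.sum(uniquenesses)))
```

Under tau-equivalence (all loadings equal) the two coefficients coincide; when loadings differ, alpha tends to underestimate reliability, which is one reason omega is preferred.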

Results and Discussion

Preliminary Data Screening

Skewness and kurtosis values did not exceed |1| and |3|, indicating that the IAT items showed univariate normal distributions, whereas inspection of Mardia’s coefficient (504.49) suggested a violation of multivariate normality. Thus, we applied robust maximum likelihood (MLR) estimation, which does not require normally distributed data. Inter-item correlations for the IAT long version were moderate and significant, as shown in Table 2. Item 7, on the other hand, showed weaker associations with the other items of the scale (not exceeding 0.30), indicating a lower contribution to the assessment of IA. Thus, our results suggested that item 7 was somewhat problematic for IA measurement, consistent with previous research (Ali et al., 2021; Fernández-Villa et al., 2015; Pawlikowski et al., 2013; Siste et al., 2021). This justified our choice to test the models with and without its inclusion.

Table 2 Items’ Statistics and Inter-Item Correlations for the IAT Long-Form

The Selection of the Best Model for the IAT Long-Form

As expected, the CFA models displayed worse fit indices than the corresponding ESEM models (Table 3). Specifically, neither the one-dimensional structures nor the CFA models met the standard thresholds of acceptability, whereas all ESEM models displayed higher fit indices, reaching acceptable results. One could argue that, because ESEM models are less restrictive, they always result in improved model fit. We are aware of this concern and of the associated risk of over-fitting; thus, following the suggestions of previous authors (Marsh et al., 2004; Marsh et al., 2005a, 2005b), we proceeded with an inspection and comparison of parameter estimates, as well as of inter-factor correlations. Given that Model 7-ESEM reported a substantial reduction in the magnitude of the correlation between factors compared to its competing Model 7-CFA (Model 7-ESEM: r = 0.476; Model 7-CFA: r = 0.819), it was considered the most plausible factor solution in our study sample (a detailed description of parameter estimates and inter-factor correlations of Models 5-CFA, 5-ESEM, 6-CFA, and 6-ESEM is shown in the Supplemental Materials). The fit indices for Model 7-CFA and Model 7-ESEM were as follows: Model 7-CFA [χ2 = 591.306; df = 134; CFI = 0.867; TLI = 0.848; RMSEA = 0.086 (0.079–0.093); SRMR = 0.067; AIC = 23,242.149; BIC = 23,469.724; aBIC = 23,295.168]; Model 7-ESEM [χ2 = 335.868; df = 118; CFI = 0.937; TLI = 0.918; RMSEA = 0.063 (0.055–0.071); SRMR = 0.035; AIC = 22,962.046; BIC = 23,255.825; aBIC = 23,030.489]; the standardized solutions for both models are shown in Table 4. However, Model 7-ESEM showed some concerns regarding the magnitude of its cross-loadings. As suggested by some authors (Morin et al., 2016a, 2016b; Tóth-Király et al., 2018), inflated cross-loadings in ESEM solutions may be attributable to an unmodeled G-factor. Thus, we tested whether a general factor, together with the two specific factors, fitted our data by applying a Bifactor ESEM (B-ESEM) approach.
Our results evidenced that a B-ESEM model for the IAT Long-Form was the most appealing factor solution, reporting good fit indices [χ2 = 252.281; df = 102; CFI = 0.956; TLI = 0.934; RMSEA = 0.057 (0.048–0.065); SRMR = 0.028; AIC = 22,877.108; BIC = 23,237.090; aBIC = 22,960.975] and a significant reduction in the size of the cross-loadings (Model 7-ESEM: from |.006| to |.540|; B-ESEM: from |.007| to |.255|).

Table 3 Goodness-of-fit Indices for Each Tested Model
Table 4 Comparison between Model 7_CFA, Model 7_ESEM and B-ESEM of the IAT Long-Version

The values of composite reliability (ω and ωs) were 0.941, suggesting that both the total scale and the two subscales had excellent score reliability estimates. The value of ωH indicated that 86% of the variance of the total IAT score was attributable to the G-factor, whereas ωHs for F1 and F2 were 0.05 and 0.03, respectively. Thus, based on ωH and ωHs, the proportions of unique subscale score variance were 5% (F1) and 3% (F2). These results indicated that the two S-factors did not provide much unique information after controlling for the variance attributable to the G-factor. We obtained similar findings by checking the explained common variance (ECV): the G-factor explained a large proportion of the common variance (78%), whereas the S-factors reported ECVs ≤ 0.13, meaning that the two S-factors had little ability to explain the global construct (the parameter estimates of the B-ESEM model for the IAT long version are shown in Table 4).
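The bifactor indices reported above can be computed from standardized loadings as follows (a generic sketch with illustrative numbers, not the actual IAT estimates):

```python
import numpy as np

def bifactor_indices(g, specifics, uniquenesses):
    """Omega-hierarchical (omegaH) and explained common variance (ECV)
    of the general factor in a bifactor solution.

    g: general-factor loadings; specifics: list of specific-factor
    loading vectors (zeros outside each block); uniquenesses: residual
    variances of the items.
    """
    g = np.asarray(g, float)
    num = g.sum() ** 2
    denom = (num
             + sum(np.asarray(s, float).sum() ** 2 for s in specifics)
             + float(np.sum(uniquenesses)))
    omega_h = num / denom
    common = (g**2).sum() + sum((np.asarray(s, float)**2).sum() for s in specifics)
    ecv_g = (g**2).sum() / common
    return omega_h, ecv_g

# Illustrative loadings: 6 items, G-factor loadings .7, block-specific .3.
g = [0.7] * 6
specifics = [[0.3, 0.3, 0.3, 0, 0, 0], [0, 0, 0, 0.3, 0.3, 0.3]]
uniq = [1 - 0.7**2 - 0.3**2] * 6
```

A high omegaH together with a high general-factor ECV, as found here, indicates that most reliable variance flows through the G-factor rather than the specific factors.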

Short-Form Development

After testing the adequacy of model fit, we started the process of shortening the scale by examining item characteristics and evaluating their fulfillment of the established criteria (Marsh et al., 2005a, 2005b). We first removed items 3, 5, 8, 9, 13, 14, 15, 18, and 20 because they reported factor loadings lower than 0.30 on both factors. We then eliminated items 10 and 19 because they reported nontrivial cross-loadings. In the ESEM approach, cross-loadings, although allowed, should be as close to zero as possible; large cross-loadings should be avoided because they may indicate conceptual overlap between items and factors (van Zyl et al., 2020). We then evaluated (a) item-total correlations, to examine whether any items fell below the minimum recommended threshold of 0.40, and (b) internal consistency, to test whether the selected factors showed an adequate level of reliability. As reported in Table 5, item-total correlation coefficients were above the suggested minimum of 0.40, meaning that the retained items adequately contributed to assessing IA. In addition, the coefficients of internal consistency (both α and ω) were acceptable. Notably, our guidelines for short-form development required that items be retained so as to guarantee a minimum internal consistency of 0.80 for each factor. Although neither factor reached this recommended threshold, values of ω and α lower than 0.80 but higher than 0.70 (as in our study) may be acceptable given the small number of items (see Table 5 for more details). We also inspected the content of the items to evaluate whether they maintained the original construct’s breadth of content. All of the remaining items were salient for describing the two dimensions; thus, no additional items were removed. Next, we tested whether our proposed shortened version of the IAT fitted the data, investigating its dimensionality through CFA and ESEM analyses.
Overall, our findings evidenced that our abbreviated form of the scale showed adequate model fit indices, both in the CFA [χ2 = 50.886; df = 13; CFI = 0.955; TLI = 0.928; RMSEA = 0.079 (0.057–0.103); SRMR = 0.043; AIC = 9,309.441; BIC = 9,400.471; aBIC = 9,330.648] and in the ESEM solution [χ2 = 13.373; df = 8; CFI = 0.994; TLI = 0.983; RMSEA = 0.030 (0.000–0.073); SRMR = 0.016; AIC = 9,276.944; BIC = 9,388.662; aBIC = 9,302.971], with the former performing worse than the latter. Table 6 shows the standardized solutions of the abbreviated form of the IAT (CFA vs. ESEM), the parameter estimates, and the inter-factor correlations. Based on these results, we selected the ESEM solution because it produced higher fit indices and lower information criteria, as well as a reduction in the size of the correlation between factors. Although this reduction in the inter-factor correlation relative to the CFA solution was not substantial, the presence of significant cross-loadings in the ESEM model supports the appropriateness of this analytic approach for estimating the latent structure of the scale.

Table 5 Item-total Correlations and Internal Consistency for the IAT-7
Table 6 Parameter Estimates and Inter-Factor Correlations of the IAT-7 (CFA vs ESEM)

All the selected items had their highest loading on the theorized construct, with only small cross-loadings on the non-target factor. As a result, our proposed shortened form of the IAT – which we named IAT-7 – includes seven items distributed into two subscales: F1: Interpersonal, Emotional and Obsessive Conflict; F2: Online Time Management (the items of the IAT-7 are shown in Table 9, Appendix). Despite the good features of the two-factor model for the IAT-7, we continued our analyses by also testing whether a global factor existed over and above the two specific factors. This inspection was not merely guided by the goal of evaluating the two sources of psychometric multidimensionality of the construct, as is common practice in the current literature on this topic (Morin et al., 2016a, 2016b; Tóth-Király et al., 2018); rather, we also wanted to examine and maintain the same factor structure reported for the IAT Long-Form. To accomplish this goal, we tested a B-ESEM model for the IAT-7. However, the B-ESEM solution for the IAT-7 was a saturated model, with an equal number of estimated parameters and data points. A saturated model, which reports perfect goodness-of-fit indices, is uninformative due to its non-falsifiability (Bottaro et al., 2023). Thus, our findings did not support the presence of hierarchically ordered factors for the short version of the IAT. Finally, we compared the IAT-7 with the 12-IAT (Pawlikowski et al., 2013), the IAT-12 (Pino et al., 2021), and the six-IAT (Ali et al., 2021), all of which showed poorer fit to our data than our proposed IAT-7 (see Table 7 for more details).

Table 7 Goodness-of-fit indices for the 12-item IAT, the six-item IAT, and our proposed IAT-7

Study 2

Growing engagement in online social activities is replacing offline interactions, thereby preventing individuals from establishing or maintaining social relationships, reducing interest in other aspects of life, and weakening psychological health (Valenti et al., 2022; Weinstein & Lejoyeux, 2010). In line with this interpretation, researchers have investigated the relationship between IA and psychological well-being, reporting associations with psychiatric and subthreshold psychiatric symptoms (Rosa et al., 2022), with stress, depression, and anxiety being the most common comorbid conditions (Andrade et al., 2020; Dong et al., 2020; Sayed et al., 2022). Along with these psychosocial impairments, Ostovar et al. (2016) examined the effects of IA on loneliness, finding that individuals with a higher degree of IA exhibited more loneliness than low or moderate users. A recent meta-analysis (Saadati et al., 2021) provided additional evidence of positive associations between IA and loneliness, highlighting that individuals with IA had significantly higher levels of loneliness. Based on these theoretical and empirical considerations, and in accordance with some previous IAT validations (Lee et al., 2013; Pontes et al., 2014), we used this set of constructs to evaluate the validity of test score interpretations (convergent validity) of the IAT-7, hypothesizing moderate and positive associations.

Participants and Procedures

We recruited a sample of 374 individuals (54.8% females), with a mean age of 29.45 years (SD = 10.60), through social networks (i.e., Facebook, Instagram, WhatsApp). To participate in the study, participants had to be at least 18 years old. We did not request any additional socio-demographic data. We informed participants about the research objectives and assured them that the data would be analyzed collectively, guaranteeing the anonymity of their responses. To avoid missing data, we made every item response mandatory in our Google Form. The research project was carried out in accordance with the Declaration of Helsinki, and it was approved by the Internal Review Board of psychological research of the University of Enna “Kore”.

Instruments

IAT-7

To assess internet addiction, we used the IAT-7 developed in Study 1 (see the previous section for more details). The internal consistency of both subscales in the current study was good (Interpersonal, Emotional and Obsessive Conflict: α = 0.736; ω = 0.739; Online Time Management: α = 0.830; ω = 0.834). As a cross-validation procedure, we tested the dimensionality of the IAT-7 by performing a CFA on this second sample, and our findings evidenced promising results [χ2 = 38.284; df = 13; CFI = 0.964; TLI = 0.942; RMSEA = 0.076 (0.046–0.099); SRMR = 0.037; AIC = 7365.031; BIC = 7451.365; aBIC = 7381.565].

Depression, Anxiety, and Stress

To assess symptoms of depression, anxiety, and stress, we used the Depression Anxiety Stress Scales-21 (DASS-21; Lovibond & Lovibond, 1995) in its Italian version (Bottesi et al., 2015). Items are rated on a 4-point Likert scale (from 0 = did not apply to me at all to 3 = applied to me very much, or most of the time), and the 21 items are distributed into three subscales: Depression (α = 0.906; ω = 0.909), Anxiety (α = 0.840; ω = 0.841), and Stress (α = 0.894; ω = 0.895).

UCLA Loneliness Scale Version 3

To assess loneliness, we used the UCLA Loneliness Scale Version 3 (UCLA LS3) (Russell, 1996), in its Italian adaptation (Boffo et al., 2012). It is a 20-item measure with a 4-point Likert scale (from 1 = never to 4 = always). Higher scores are indicative of greater levels of feelings of loneliness. The internal reliability for the current study sample was excellent (α = 0.914; ω = 0.915).

Data Analyses

With the goal of investigating the validity of test score interpretations, we specified a latent variable model (LVM) and assessed the relations of the IAT-7 scores, based on the retained measurement solution, with related constructs (i.e., depression, anxiety, stress, and loneliness). Each external variable was specified as unidimensional and included in the multivariate model. A path analysis was also performed to investigate the effects of IA on psychological health (i.e., anxiety, depression, stress, and loneliness). We specifically tested whether both IA facets (i.e., F1: Interpersonal, Emotional and Obsessive Conflict; F2: Online Time Management) were significant predictors of the selected indicators of psychological well-being. The path diagram of the hypothesized causal relationships is shown in Fig. 1.
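As an observed-score approximation of the path analysis (the study used latent variables, so its coefficients would differ somewhat; this numpy sketch regresses a z-scored outcome composite on z-scored subscale scores):

```python
import numpy as np

def standardized_betas(predictors, outcome):
    """OLS regression on z-scored variables, returning standardized
    path coefficients (one per predictor column)."""
    Xz = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0, ddof=1)
    yz = (outcome - outcome.mean()) / outcome.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta
```

For example, one would stack the F1 and F2 subscale scores as the two predictor columns and regress each well-being composite (depression, anxiety, stress, loneliness) on them in turn.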

Fig. 1
figure 1

Hypothesized Predictive Associations of IAT-7 on Psychological Well-Being’s Indicators

Results and Discussion

Preliminary Data Screening

Skewness and kurtosis analysis revealed that the selected set of variables approximated a univariate normal distribution (0.04 < S < 1.12; -0.81 < K < 0.95). Moreover, the computation of Mardia’s coefficient (52.06) indicated that the multivariate normality condition was satisfied, a prerequisite for the subsequent analyses.

Associations Between IAT-7 and Psychological Well-being’s Indicators

As shown in Table 8, we found significant and moderate associations (p < 0.001) between the two IAT-7 subscales and all the psychological well-being indicators, providing empirical evidence of convergent validity. Specifically, all the reported associations were in the expected direction and consistent with previous research (Andrade et al., 2020; Ostovar et al., 2016; Rosa et al., 2022; Sayed et al., 2022). In addition, the examination of the causal paths (see Fig. 2) indicated that F1 (Interpersonal, Emotional and Obsessive Conflict) predicted loneliness (β = 0.341, p < 0.001) and depression (β = 0.241, p < 0.05), whereas F2 (Online Time Management) was primarily associated with stress (β = 0.380, p < 0.001), followed by depression (β = 0.233, p < 0.05). One plausible explanation is that F1 comprises items describing a preference for staying connected over experiencing real social relationships (e.g., Item 12: “How often do you fear that life without the Internet would be boring, empty, and joyless?”; Item 19: “How often do you choose to spend more time on-line over going out with others?”), which may lead to social isolation from the real world, depriving individuals of a sense of belonging and of satisfaction with real social connections. On the other hand, F2 includes items describing how time spent online may negatively affect daily routines (e.g., Item 2: “How often do you neglect household chores to spend more time on-line?”; Item 6: “How often do your grades or school work suffer because of the amount of time you spend on-line?”), likely inducing a negative emotional state of physical and mental strain caused by the overload of tasks neglected or postponed while spending time online. According to these findings, the more addicted a person is to the internet, the more stressed, depressed, and lonely that person will be.
As a result, our findings support previous empirical studies (Andrade et al., 2020; Ostovar et al., 2016; Rosa et al., 2022; Sayed et al., 2022), which emphasize that IA is a risk factor for the development of psychological health problems.

Table 8 Correlations Between the IAT-7 and Psychological Well-Being Indicators
Fig. 2

Predictive Associations of the IAT-7 With Psychological Well-Being Indicators. Note. The path analysis shows associations between IA facets and the endogenous psychological well-being indicators (depression, anxiety, stress, loneliness). Coefficients are standardized linear regression coefficients. *p < .05; ***p < .001
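As a rough illustration of how standardized coefficients of this kind are obtained, the sketch below regresses a z-scored outcome on z-scored predictors via ordinary least squares. The facet scores and effect sizes are simulated for illustration only and do not reproduce the study’s estimates.

```python
import numpy as np

def standardized_betas(X, y):
    """OLS regression of a z-scored outcome on z-scored predictors.

    The resulting slopes are standardized coefficients (betas),
    comparable in scale to those reported in a path diagram.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    zy = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(y)), Z])
    coefs, *_ = np.linalg.lstsq(design, zy, rcond=None)
    return coefs[1:]  # drop the intercept

# Simulated facet scores: the second facet is built to be the
# stronger predictor of the outcome (here labelled "stress").
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(2, 500))
stress = 0.10 * f1 + 0.38 * f2 + rng.normal(scale=0.9, size=500)

betas = standardized_betas(np.column_stack([f1, f2]), stress)
```

A full path analysis additionally estimates several outcomes simultaneously and provides model fit indices, which requires dedicated SEM software.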

General Discussion

The main goal of the current study was to develop a shorter version of the IAT, the most widely used measure for assessing IA. To create a robust abbreviated version of the questionnaire with the same strength and content coverage as its longer form, we combined rigorous analytic techniques for item reduction with an evaluation of item content, a strategy that helps ensure the development of an instrument with sound psychometric qualities (Goetz et al., 2013). First and foremost, our study adds to the extensive literature on IA assessment by providing empirical support for the construct’s multidimensionality over its one-dimensionality. Complex phenomena typically have broad content comprising various, heterogeneous, and multifaceted aspects. From this viewpoint, the concept of IA involves several facets, including excessive internet use and the resulting difficulties in managing time, preoccupation with internet-using behavior, unsuccessful attempts to reduce time online, and a decreased interest in various life domains (Weinstein & Lejoyeux, 2010). A one-dimensional model may therefore not adequately capture the complexity of the IA phenomenon. In line with this assumption, more recent studies appear to favor the construct’s multidimensionality over its one-dimensionality (Lu et al., 2020, 2022; Sondhi & Joshi, 2021). Notably, as shown in Table 3, the one-dimensional models had the worst fit indices, and we found significant improvements in model fit when the ESEM approach was applied. These findings have intriguing implications for the conceptualization and evaluation of IA. From a content standpoint, our findings highlight two major facets of IA: (a) a cognitive-emotional domain and (b) a behavioral domain.
The former includes items describing the emotional and cognitive salience of being connected to the internet, such as feeling depressed, nervous, empty, or aggressive when not connected, as well as items related to the social aspects of time spent online, such as preferring online activities to social interactions with others. The latter includes items concerning ineffective attempts to stop or reduce time spent online, in addition to the harmful effects of internet use on daily functioning. Although some items are grouped into a different factor and factor names vary slightly across studies, a similar IA conceptualization emerged in previous work (Barke et al., 2012; Pawlikowski et al., 2013; Pino et al., 2021; Servidio, 2017; Tran et al., 2017), evidencing the dual nature of the IA construct. Nonetheless, while these factor solutions have the same number of factors and similar names, they do not contain the same specific items, implying that additional research is still needed and highly recommended.

From a methodological perspective, in line with previous research (van Zyl et al., 2020; van Zyl & ten Klooster, 2022), our results suggest that ESEM models, owing to their greater flexibility, better capture the complexity of psychological constructs, providing better fit indices than their competing CFA models. The ESEM approach may overcome the shortcomings of the more simplistic, restrictive, and idealistic CFA models, which assume the existence of “pure” factors. Indeed, research has shown that the majority of items on psychological instruments tend to measure more than one conceptually related factor, implying that some degree of construct-relevant association between items is possible and should be expected (Mai et al., 2018). Moreover, when applying CFA models to multi-factor psychological instruments, researchers have frequently found discrepancies between the reported EFA and CFA findings, or have relied on modification indices to achieve good model-data fit (McNeish et al., 2018). Some IAT validation studies (Fernández-Villa et al., 2015; Pontes et al., 2014), for example, used modification indices, as did some IAT brief versions (Pawlikowski et al., 2013). Thus, the ESEM approach is a viable alternative to traditional CFA for the assessment of psychological constructs, and researchers should systematically compare these two frameworks to identify the most suitable one.
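The core idea at stake here — items loading mainly on one target factor while retaining small, non-zero cross-loadings — can be illustrated with a plain exploratory factor analysis on simulated item data. The sketch below uses scikit-learn’s FactorAnalysis with a varimax rotation; a full ESEM (with target rotation and fit indices) requires dedicated SEM software such as Mplus or R’s lavaan, and the seven-item loading pattern below is invented for illustration, not taken from the IAT-7.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated two-facet item data: items 0-3 load mainly on factor A,
# items 4-6 on factor B, with small cross-loadings throughout.
rng = np.random.default_rng(1)
n = 1_000
fA, fB = rng.normal(size=(2, n))
true_loadings = np.array([
    [0.7, 0.1], [0.6, 0.2], [0.7, 0.1], [0.6, 0.1],
    [0.1, 0.7], [0.2, 0.6], [0.1, 0.7],
])
X = np.column_stack([fA, fB]) @ true_loadings.T \
    + rng.normal(scale=0.5, size=(n, 7))

# Exploratory two-factor solution with an orthogonal (varimax) rotation.
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
L = fa.components_.T            # (7 items x 2 factors) loading matrix
primary = np.abs(L).argmax(axis=1)  # each item's dominant factor
```

In a solution like this, every item shows its highest loading on its intended factor, while the non-target loadings stay small but non-zero, which is precisely the pattern that strict CFA (with cross-loadings fixed to zero) cannot represent.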

We developed the IAT-7 by following the recommendations provided by Marsh et al. (2005a, 2005b), and our analyses indicated that seven items should be retained for assessing IA. We adopted an ESEM structure for the IAT-7, in which items were distributed across two well-defined factors (F1: Interpersonal, Emotional and Obsessive Conflict; F2: Online Time Management). Each item showed its highest loading on its a priori theorized factor, with only small cross-loadings. In this regard, the IAT-7 outperforms the full version of the IAT, as significant but small cross-loadings indicate a lack of conceptual overlap between items and non-target factors (van Zyl et al., 2020). At the same time, the presence of even low non-target loadings highlights the fallible nature of items as pure indicators of the construct they are meant to assess (Asparouhov & Muthén, 2009). In practice, although the ESEM solution provides better factor distinctiveness than its CFA counterpart, the emerging factors are not perfectly distinct from each other, indicating that, when measuring multidimensional psychological constructs, some degree of “cross-contamination” between factors is unavoidable.

Interestingly, item 6 (i.e., “Do your grades or school work suffer because of the amount of time you spend online?”) revealed no specific psychometric issues in assessing IA, showing neither lower mean scores nor smaller factor loadings. Its content, however, may be problematic for non-students, since it asks whether time spent online has a negative impact on school performance. Although we decided not to remove item 6 from the IAT-7 because of its good psychometric quality, we recommend rewording it as “Do your grades or job performance suffer because of the amount of time you spend online?” if the scale is intended to assess IA in the general population.

The investigation of the psychometric properties of the IAT-7 revealed promising features. Specifically, the reliability coefficients reported in both studies indicated that the IAT-7 is a reliable measure whose items assess the same underlying construct. Notably, high internal consistency achieved with a small number of items suggests an absence of redundancy, whereas redundancy may be suspected when Cronbach’s alpha reaches 0.90 or above in a 20-item scale (Streiner, 2003). Further, the IAT-7 showed significant associations with theoretically related constructs, providing empirical support for the validity of test score interpretations. In accordance with previous studies (Andrade et al., 2020; Dong et al., 2020; Rosa et al., 2022; Sayed et al., 2022), and in line with our expectations, the two dimensions significantly predicted poorer psychological health. However, the associations between IA and psychological well-being indicators may also be inverse or reciprocal: people higher in depression, loneliness, and stress may be more prone to (ab)use the internet as a regulatory strategy to cope with impairments in social life (Brand et al., 2014; Tian et al., 2021). From this perspective, additional studies are recommended to evaluate the extent to which these external variables predict IA as assessed by the IAT-7.
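Cronbach’s alpha, one of the reliability coefficients discussed above, can be computed directly from a respondents-by-items score matrix; the sketch below is generic and the example data are artificial, not taken from the IAT-7 samples.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Two perfectly parallel "items" yield an alpha of exactly 1.0...
x = np.arange(10, dtype=float)
alpha_parallel = cronbach_alpha(np.column_stack([x, x]))

# ...while five mutually independent "items" yield an alpha near 0.
rng = np.random.default_rng(7)
alpha_noise = cronbach_alpha(rng.normal(size=(1_000, 5)))
```

Because alpha rises mechanically with the number of items, a short scale that still reaches adequate alpha (as reported here for the IAT-7 subscales) is reassuring; the same value on a 20-item scale would be less informative.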

Importantly, although attractive and psychometrically robust, the IAT-7 does not resolve the issues and controversies surrounding IA assessment. Indeed, IA is a developing concept that needs to be defined more clearly, with well-established criteria. From this perspective, additional research is desirable to better differentiate the emerging concepts related to internet use, as well as to distinguish general internet addiction from specific internet addictions, such as online gaming, gambling, and online shopping. The strength of using the IAT-7 instead of the traditional 20-item version lies in the possibility of assessing IA in a valid and reliable manner with a less time-consuming scale and reduced item redundancy. Our findings have relevant theoretical and practical implications. On a theoretical level, our results support the broad literature on the detrimental effects of excessive internet use on psychological well-being, while also evidencing different patterns of associations for the two distinct IA dimensions (see the previous section for details). On a practical level, our findings confirm that, regardless of whether it should be included in the classification systems, internet addiction is a significant social problem, with symptoms of clinical and social maladjustment, that needs to be recognized and evaluated. This does not mean, of course, that the internet should be eliminated from daily routines, but educational guidelines for appropriate and adaptive internet use are highly recommended.
For instance, reducing time spent online, promoting healthy behaviors (e.g., physical activity), and fostering positive abilities such as resilience, adaptive coping styles, and positive cognitive and emotional skills may prevent the occurrence of IA (Brand et al., 2014; Cheng et al., 2023; Lopez-Fernandez & Kuss, 2020; Robertson et al., 2018). In addition, information about possible comorbidities and other associated psychological problems (e.g., cyberbullying) should be disseminated more widely in order to make individuals more aware of this common phenomenon of the technological era (Lopez-Fernandez & Kuss, 2020).

Limitations and Suggestions for Future Works

The current study has some limitations that should be mentioned. First, we did not consider relevant external criteria, such as the average daily time spent on the internet, which could have contributed to the evaluation of the IAT-7’s concurrent validity, as reported in some previous IAT validation studies (Dhir et al., 2015; Pino et al., 2021; Servidio, 2017). Second, we did not establish cut-off points to discriminate between individuals with and without IA. Additional studies are needed to establish such threshold values, as well as to identify different levels of IA severity. Third, although IA has not been included in the DSM-5, it would be interesting to examine the psychometric properties of the IAT-7 in clinical samples, such as individuals affected by social phobias, hypochondriasis, substance use and behavioral addictions, and eating disorders, both to investigate possible comorbidities (Floros et al., 2014; Lopez-Fernandez & Kuss, 2020; Masi et al., 2021) and to test the scale’s factorial invariance between clinical and non-clinical samples and across different clinical groups. Finally, future studies should include younger age groups, especially adolescents, because they report more problematic internet use (Anderson et al., 2017; Pino et al., 2021). In this regard, and given the differences observed across age groups in earlier literature (Pino et al., 2021), it would also be interesting to test factorial invariance across age groups.

Conclusion

In summary, this paper describes the development of the IAT-7 by comparing several existing models and employing an ESEM approach. According to the findings, our proposed brief version consists of seven items divided into two moderately inter-related factors (F1: Interpersonal, Emotional and Obsessive Conflict; F2: Online Time Management). The scale showed promising psychometric properties in terms of internal consistency, with adequate levels of Cronbach’s alpha and McDonald’s omega for each subscale. The associations with theoretically related psychological well-being indicators (depression, anxiety, stress, and loneliness) support the validity of test score interpretations (convergent validity) and indicate that each factor has a different impact on psychological well-being. Given its psychometric robustness, the IAT-7 may be a useful screening tool for evaluating IA. In addition, its brevity makes it suitable for inclusion in larger batteries alongside other questionnaires designed to measure IA-related constructs, such as nomophobia or smartphone addiction.