Over the last few decades, excessive and disordered screen use has become more prevalent, prompting investigations into its associated consequences. The extent to which disordered screen use behaviours impact neuropsychological functioning has been reportedly mixed and at times inconsistent. This review sought to synthesise the literature and estimate the magnitude of overall cognitive impairment across a wide range of disordered screen use behaviours. We also sought to determine the cognitive domains most impacted, and whether the observed impairments were moderated by the classification of screen-related behaviours (i.e., Internet or gaming) or the format of cognitive test administration (i.e., paper-and-pencil or computerised). A systematic search of databases (Embase, PsycINFO, MEDLINE) identified 43 cross-sectional articles that assessed neuropsychological performance in disordered screen use populations, 34 of which were included in the meta-analysis. A random-effects meta-analysis revealed significant small/medium (g = .38) cognitive deficits for individuals with disordered screen use behaviours relative to controls. The most affected cognitive domain with a significant medium effect size (g = .50) was attention and focus followed by a significant reduction in executive functioning (g = .31). The classification of disordered screen use behaviours into Internet or gaming categories or the format of cognitive testing did not moderate these deficits. Additionally, excluding disordered social media use in an exploratory analysis had little effect on the observed outcomes. This study highlights a number of methodological considerations that may have contributed to disparate findings and shows that disordered screen use can significantly impact cognitive performance. Recommendations for future research are also discussed. Data for this study can be found at https://osf.io/upeha/.
Technology and the Internet have provided innumerable benefits. However, excessive use without moderation may cause impairment in other areas of life. Despite current guidelines recommending no more than 2 h per day of recreational screen media, including televisions, computers, and phones, for teenagers (Australian Institute of Health & Welfare, 2020), averages of more than 8 h per day have recently been reported (Cardoso-Leite et al., 2021). In excess, screen use may exhibit many of the hallmark symptoms of other behavioural addiction disorders (Hwang et al., 2014; Warburton, 2021; Warburton et al., 2022), prompting debate and heterogeneity in the conceptualisation and classification of excessive and problematic screen use behaviours (Kuss et al., 2017; Marshall et al., 2022; Shaffer et al., 2000; Warburton & Tam, 2019). These disordered behaviours are sometimes described in terms of Internet addiction disorder (IAD; Li et al., 2018) or video gaming disorders (Ko et al., 2015; Warburton et al., 2022), but there is still disagreement about whether either classification accurately captures the scope of problematic behaviours (Király et al., 2015a). Nonetheless, there has been extensive research on the psychological, physical, and social consequences of screen-based disorders over the past few decades, with findings highlighting detrimental impacts on health and overall wellbeing (Kircaburun et al., 2020; Kuss & Griffiths, 2012; Marshall et al., 2022; Paulus et al., 2018; Sugaya et al., 2019; Warburton, 2021; Warburton et al., 2022).
However, by contrast, there has been much less research and little consensus to date on the exact neuropsychological impacts that result from disordered screen use behaviours. Some studies report improvements in specific areas of cognition (Irak et al., 2016), whilst other studies report a reduction in those same areas (Cao et al., 2007). Inconsistencies in neuropsychological and neuroscientific methodologies have been identified as a potential contributor to such disparate findings (Pontes et al., 2017). The purpose of this review and analysis is to synthesise and quantify the effects of disordered screen use behaviours on neuropsychological outcomes, as well as explore the contribution of classification strategy and cognitive testing format on the measured outcomes.
Screen use is becoming increasingly recognised and investigated for its problematic aspects. For the purposes of this review, screen use refers to screen-based interactions including gaming (online and offline), Internet browsing, social media use, and smartphone use. In most cases, users interact with screens on a daily basis and engage with these technologies for work and leisure. However, some individuals may spend excessive amounts of time in front of a screen to the neglect and detriment of their social, physical, mental, and psychological wellbeing (Sigman, 2017). Some may even develop acute dependency symptoms similar to severe alcohol dependence (Hwang et al., 2014) or methamphetamine addiction (Jiang et al., 2020). Efforts have been made to characterise these problematic aspects of screen use in accordance with diagnostic classifications for other behavioural addictions such as gambling (Wölfling et al., 2020; Zhou et al., 2016). According to this characterisation, harmless screen use is seen to progress into the disordered and problematic realm when the following criteria are met: (1) screens are used excessively and with impaired control, (2) use is associated with withdrawal when the screen is removed, (3) use results in increased tolerance and the need to spend more time in front of a screen to satisfy the same desire, and (4) use persists despite negative consequences to important areas of functioning, such as increased social isolation, neglect of hygiene or health, progressive decline in other endeavours, or a downturn in academic or work performance (Sigman, 2017). Over the past decade, the prevalence of these symptoms has been increasing globally (Pan et al., 2020).
There has been much debate about how best to operationalise these addiction-like behaviours, with distinctions being made between the problem of screens as a whole and the problem of certain forms of screen use (Blaszczynski, 2006; Warburton, 2021). With regard to the latter, diagnostic classifications have been developed for specific screen-related usage such as social media addiction (Andreassen et al., 2016), technology addiction (Dadischeck, 2021), smartphone addiction (Yu & Sussman, 2020), Facebook addiction disorder (Brailovskaia et al., 2018), and various operationalisations of problematic Internet behaviours including Internet addiction (IA; Young, 2004), Internet disorder (Pontes & Griffiths, 2017), and problematic Internet use (PIU; Shapira et al., 2003). The only screen-based disorders to be officially classified are video game based: Internet gaming disorder (IGD), included in a section of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) for disorders requiring further study (American Psychiatric Association, 2013); gaming disorder (GD), included in the 11th Revision of the International Classification of Disease (ICD-11; World Health Organization, 2019); and the sub-clinical hazardous gaming (HG), also in the ICD-11. The IGD and GD diagnoses both recognise excessive screen use as an addiction-like disorder rather than an issue of impulse control (e.g., see Pontes & Griffiths, 2014).
Although different types of maladaptive screen use can be considered nosologically distinct in terms of impacted demographics (Pontes & Griffiths, 2014), it has been argued that all variants share the same basic diagnostic and etiological components characteristic of behavioural addictions (Griffiths, 2009a; Warburton, 2021; Weinstein, 2015). Of note, social media usage is generally considered distinct in terms of underlying motivations (Wolniewicz et al., 2018; Zhu & Xiong, 2022), affected cognitive domains (Weinstein, 2022), and aetiology (Pontes, 2017). However, given its recent emergence, research on its impacts and its potential similarity with other types of screen addiction is limited. At their core, the various diagnostic classification schemes ultimately refer to the numerous maladaptive and disordered activities associated with the use of screens. This is consistent with recommendations made by Pontes et al. (2017) to delineate between excessive screen time and “addicted” screen time, the latter characterised by functional impairment (also see Griffiths, 2009b). Moreover, from a clinical standpoint, the functional impairment resulting from disordered screen use, irrespective of the specific screen modality used, largely presents in the same way and is commonly treated in a comparable manner (Dell’Osso et al., 2021; Marshall et al., 2022; Warburton, 2021). Thus, it can be more helpful to conceptualise problematic screen use as lying on a continuum, with severe functional impairment at one extreme (Paulus et al., 2018; Warburton, 2021; Warburton et al., 2022). With the above in mind, this review will not limit its focus to specific diagnostic variants. Rather, it will consider disordered screen use behaviours in terms of broader categories of addiction-like behaviours marked by functional impairments.
The psychological effects of disordered screen use behaviours have been extensively explored. For instance, screen-addicted individuals experience lower overall psychosocial wellbeing (Yang & Tung, 2007), increased psychiatric symptoms (Ha et al., 2006; Király et al., 2015b; Lai et al., 2015; Snodgrass et al., 2014; Vukosavljevic-Gvozden et al., 2015; Young & Rogers, 1998), lower life satisfaction (Samaha & Hawi, 2016), higher rates of loneliness (Yao & Zhong, 2014), compromised academic achievement (Hawi & Samaha, 2016; Jiang, 2014; Samaha & Hawi, 2016; Yang & Tung, 2007), reduced levels of sports and exercise (Henchoz et al., 2015), and poorer levels of health and sleep (Griffiths et al., 2004; Marshall et al., 2022; Wittek et al., 2016). In a large survey study involving around 15,000 teenagers, it was found that 5 or more hours of video gaming a day was significantly associated with higher reported instances of sadness, suicidal ideation, and suicidal plans compared to teenagers with no video game use (Messias et al., 2011). There is evidence that excessive screen time can cause a wide range of physical symptoms such as joint pain, strain injuries, peripheral neuropathy, encopresis, inflammation, and epileptic seizures (Chuang, 2006; Weinstein, 2010).
There is also evidence that disordered screen use behaviours can impact neurostructural development (Schettler et al., 2022; Warburton, 2021). Engaging in excessive and obsessive video gaming during childhood can have significant structural and neuroadaptive impacts on reward-related, emotional-processing, and decision-making areas of the brain (Kuss & Griffiths, 2012; Schettler et al., 2022; Yao et al., 2017). Research has shown that individuals with gaming addictions have decreased grey and white matter volumes in areas associated with learning, reward, and memory proportional to their addiction duration, controlling for age, gender, and volume (Yuan et al., 2016, 2017). In a large population of children and young adults aged eight to 21, video game time was associated with lower tissue density in cortical and subcortical areas observed over a 3-year period (Takeuchi et al., 2016). By comparing functional magnetic resonance imaging (fMRI) signals in response to cue-induced craving between gamers and non-gamers, Ko and colleagues (2009) found that the gamers exhibited stronger activation in the striatum and orbitofrontal cortex, regions commonly associated with other substance-related addictions. The same areas have been implicated in individuals diagnosed with Internet addiction (Dong et al., 2011). Temporal neuroimaging studies investigating the effects of disordered screen use behaviours, including the excessive and problematic use of social media and smartphones, demonstrate the emergence of atypical neural cue reactivity, aberrant activity (He et al., 2018; Horvath et al., 2020; Schmitgen et al., 2020; Seo et al., 2020), and altered neural synchronisation (Park et al., 2017; Youh et al., 2017). These features are seen to persist despite pharmacological treatment (Park et al., 2018). For a comprehensive review of neurobiological mechanisms and brain findings, see Weinstein et al. (2017) and Weinstein and Lejoyeux (2020); for impacts of excessive smartphone use, see Wacks and Weinstein (2021).
Neuropsychological findings, from measures designed to reflect neurobiological deficits, have not always mirrored the observed psychological, neurostructural, and functional changes and have remained inconsistent. On the one hand, some studies have reported advantages: screen-addicted populations outperformed healthy controls on tasks assessing real-life decision-making, despite displaying higher novelty-seeking behaviours (Ko et al., 2010), made fewer errors and had quicker reactions on response inhibition tasks (Irak et al., 2016; Sun et al., 2009), and were superior at object recognition (Irak et al., 2016). In one study, as little as 10 h of video game experience was enough to improve performance on an attentional flexibility task in gaming-naïve participants (Green & Bavelier, 2003). Other studies have found no difference in general intelligence (Hyun et al., 2015), risk-taking tendencies (Ko et al., 2010), or cognitive flexibility (Dong et al., 2010, 2014) in disordered screen use populations compared to healthy controls.
On the other hand, a number of studies reveal profound reductions within disordered screen use populations in many of the same areas of cognition. For one, several studies found decision-making to be markedly impaired in game-addicted populations, including a propensity for immediate reward gratification and for making disadvantageous and risky choices (Irvine et al., 2013; Pawlikowski & Brand, 2011; Tang et al., 2017; Wölfling et al., 2020; Yao et al., 2015). Cao and colleagues (2007) found that excessive Internet users showed greater impulsiveness as measured by self-rated scores and performed worse on a response inhibition task relative to controls. Attentional deficits have also been found, with addicted gamers exhibiting a bias towards computer-related stimuli (e.g., laptop, computer keyboard, or mouse) characterised by an impaired disengagement of attention and protracted attentional processing (Heuer et al., 2021; Kim et al., 2018; Zhang et al., 2016; also see Kim et al., 2021). In fact, it has been found that individuals with disordered screen use behaviours share similar psychobiological mechanisms, neurocognitive impairments, and comorbidities with attention-deficit/hyperactivity disorder (ADHD), indicating a common neurofunctional deficit (Weinstein & Weizman, 2012; Weinstein et al., 2015; Yen et al., 2009). Indeed, there is a positive association between the amount of time children spend in front of screens daily and the severity of ADHD symptoms on a parent-rated scale (Chan & Rabinowitz, 2006). Time spent gaming was also found to be negatively correlated with overall cognitive performance, controlling for education and other demographics (Jang et al., 2021). These results stand in contrast to the above findings of enhanced cognitive performance, or of no difference, in disordered screen use populations.
Other reviews have questioned the heterogeneity in the literature regarding the impacts disordered screen use behaviours may have on cognition (Ko et al., 2015; Pontes et al., 2017). Firstly, when evaluating the neuropsychological impacts of disordered screen use behaviours, it is important to consider whether distinguishing between different diagnoses based on the predominant form of screen use is justified, or if the cognitive effects are largely uniform. That is, do different modalities of disordered screen use impact cognition differently? Does the interchangeability in defining and diagnosing disordered screen use behaviours obscure a reliable picture of cognitive outcomes? Second, Ko and colleagues (2015) pointed out that a good degree of cognitive functioning is a necessary requirement for performance on video games. The cognitive tasks used to assess impairment may draw on many of the same underlying cognitive capacities that video gaming requires, so extensive gaming experience may boost task performance rather than reveal impairment, potentially explaining why some authors report improvements and others report decrements. The authors caution against drawing premature conclusions about cognitive impacts based on studies that do not consider a broad range of cognitive tasks (also see Pontes et al., 2017). However, few studies implement a full battery of cognitive tasks; many instead infer domain-level impairments in “executive control” from a single cognitive task (for example, see Wang et al., 2017). With this in mind, to determine cognitive outcomes as a result of disordered screen use, it is important to examine the role of disordered screen use classification with a focus on methodological issues in neuropsychological testing.
Further consideration should be given to the type and format of testing. The selection of tests is a crucial aspect of any assessment of cognition (Schoenberg & Scott, 2011; Strauss et al., 2006). Test selection should be grounded in the literature and based on a test's suitability for measuring a specific population under particular circumstances (Strauss et al., 2006). Depending on its psychometric properties, the type of test chosen can influence the accurate measurement of true impairment (Schoenberg & Scott, 2011). For instance, tests should be sensitive enough to capture the condition of interest but specific enough to avoid incorrectly classifying those who are unimpaired (Streiner, 2010). Although two tests may both measure executive functioning, only one of those tests may be sensitive enough to detect impairment in a given population. For example, whilst the Go/No-go task may fail to capture impairment in a disordered screen use population, the Stop Signal task may be better suited for this purpose.
Analysing cognitive performance in populations with disordered screen use also requires consideration of the test format: computerised or paper-and-pencil. Computerised administration is known to impact test performance, especially in individuals with high-computer anxiety and in some clinical populations (Browndyke et al., 2010; Strauss et al., 2006). In some cases, this may either mask true deficits (Strauss et al., 2006) or boost performances (Luciana, 2003). If individuals with disordered screen use behaviours demonstrate marked behavioural and neural attentional biases and disengagement from screen-related stimuli (see Heuer et al., 2021; Kim et al., 2018, 2021; Schmitgen et al., 2020; Zhang et al., 2016), one might reasonably expect differences in cognitive performance based on the format of testing. To the authors’ knowledge, whether the type or format of testing moderates cognitive performance has not been investigated to date.
The inconsistencies apparent in the neuropsychological literature necessitate a quantitative examination of findings to illustrate the magnitude of cognitive deficit. This systematic review and meta-analysis will focus on neuropsychological considerations that may be contributing to the apparent discrepancies, such as the number of cognitive tasks, the type and format of testing, and the assessment of disordered behaviours according to a predominant form of screen use (e.g., Internet or gaming). Previous reviews have focused on epidemiological research on screen addictions (Kuss et al., 2014; Pan et al., 2020), on specific diagnostic variants without considering the similarities between disordered screen use behaviours (Legault et al., 2021), or on deficits in narrow cognitive domains (Ioannidis et al., 2019; Yao et al., 2022), or have only provided a qualitative analysis (Brand et al., 2014; Legault et al., 2021). The absence of a comprehensive, inclusive integration of a wide range of screen technologies across multiple cognitive domains limits an accurate neuropsychological analysis of disordered screen use behaviours. With this aim in focus, our systematic review and meta-analysis seek to provide a comprehensive overview of cross-sectional studies examining neuropsychological comparisons between disordered screen-related behaviours and healthy controls, as well as to explore the quality of studies conducted up until now. In the meta-analysis, we also consider the contributions of disordered use classification (e.g., gaming, Internet, and social media), the type of tests, and the format of neuropsychological testing (e.g., computerised or manual).
Protocol and Registration
The systematic review was registered with PROSPERO on the 10th of December 2020, and the revision notes were amended on the 15th of March 2022 to include plans for the meta-analysis (PROSPERO registration: CRD42020216147). The search was conducted following the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) guidelines (Page et al., 2021) and Gates and March's (2016) recommendations for neuropsychological systematic reviews.
The following criteria had to be met by studies to qualify for inclusion in the review: (1) the participants had to meet criteria or satisfy an operational definition for screen addiction, dependence, hazardous, excessive, or problematic screen use according to diagnostic measures or scales; (2) the disordered use group was compared to a group of healthy controls matched on at least one sociodemographic variable (age, gender, education); (3) at least one objective neuropsychological measure was used to assess cognitive functioning (e.g., not exclusively subjective self-reports or an analysis with an experimental manipulation); and (4) the study was available in English or translated into English. For studies to be included in the meta-analysis, they needed to provide sufficient data (i.e., means and SDs, mean differences, Cohen's d or Hedges' g effect sizes, t-values, or p-values).
Exclusion criteria were as follows: (1) results contained neuropsychological performance methods such as the Mini-Mental State Exam or the Barratt Impulsiveness Scale without an accompanying cognitive assessment; (2) either group had a comorbid diagnosis other than disordered screen use (e.g., ADHD or Autism Spectrum Disorder); (3) any single case studies; (4) exclusively neuroimaging studies without reporting on neuropsychological outcomes; (5) treatment or intervention studies with no cross-sectional data; (6) systematic reviews or meta-analyses; (7) grey literature including thesis abstracts, conference preliminary studies, or poster presentations; and (8) exclusively contained a non-screen-related diagnosis or operational definition (e.g., gambling). Studies were excluded from the meta-analysis if they (1) did not report (or respond to requests for) sufficient data to compute effect sizes; (2) contained assessment tasks that were modified or manipulated for experimental purposes, such as only including addiction-related stimuli in a Stroop task, and which therefore tap into a different set of cognitive processes (see Brand et al., 2014); or (3) included only a test that was used by no other study.
A systematic literature search was conducted in December 2020 and additional studies were added until data extraction in November 2021. Searches were conducted independently in the following three databases: Embase, PsycINFO, and Medline.
The search strategy was developed and refined with the aid of an experienced librarian. Like Paulus et al. (2018), we conceptualised disordered screen use behaviour broadly in order to maximally capture the various and inconsistent definitions throughout the literature. We placed no restrictions on language or publication date. A restriction was placed on human studies. A combination of the following keywords was used: ("internet*" or "online" or "web" or "computer" or "screen*" or "mobile phon*" or "smartphon*" or "gaming" or "games" or "video gam*" or "television" or "tv" or "social media") and ("addict*" or "dependen*" or "excess*" or "problematic*" or "disorder*" or "hazardous*" or "obsess*" or "overus*" or "impair*") and ("neuropsyc*" or "memory" or "attent*" or "intelligen*" or "cognit*" or "executive function*").
Two authors (MM and KK) independently reviewed the relevant articles at each distinct stage of identification, screening, eligibility, and inclusion. Reference lists of relevant studies were examined, and studies included if they met the relevant criteria. Disagreement about inclusions between the two reviewers was resolved through discussion and, if unresolved, was examined by a third author (JB or WW).
Data Collection Process
Data were extracted into Microsoft Excel and independently cross-checked by two authors (MM and KK). For studies that reported more than one comparison group (e.g., healthy control and ADHD), only the healthy control group was used as a comparison (Wollman et al., 2019). Additionally, in instances where cognition was assessed more than once (e.g., longitudinal or intervention studies), only the baseline cross-sectional data were extracted. The authors of nine studies were contacted to clarify methodology or relevant criteria, or to request data required to compute effect sizes. The authors of two studies (Metcalf & Pammer, 2014; Park et al., 2011) responded with the required data, and these studies were included in the systematic review and meta-analysis. Of the seven remaining studies, two were excluded from the systematic review because of insufficient information regarding eligibility requirements, and the remaining five were included in the systematic review but not the meta-analysis.
Variables extracted included the (1) year of publication, (2) country of publication, (3) demographic information (sample size, mean and standard deviation of education and age, and number of males and females in the sample when available), (4) disordered behaviour classification (e.g., IGD, IAD, or PIU), (5) associated measure including cut-offs when available, (6) assessment of cognitive performance, and (7) format of cognitive assessment (e.g., computerised or manual). For data only reported in figures, we extracted the relevant values using WebPlotDigitizer (Rohatgi, 2021) to ensure maximal inclusion (Pick et al., 2019). For studies that did not report means or standard deviations, we extracted either t-values, p-values, or effect sizes.
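Where only test statistics were reported, effect sizes can be recovered with the standard conversions (CMA performs these internally). A minimal sketch of the two-group formulas is below; the function names are ours, for illustration only.

```python
import math

def d_from_means(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d from group means and SDs, using the pooled SD."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_from_t(t, n1, n2):
    """Convert an independent-samples t-value to Cohen's d:
    d = t * sqrt(1/n1 + 1/n2)."""
    return t * math.sqrt(1 / n1 + 1 / n2)
```

The two routes agree: a t-value of 1.58 from two groups of 20 corresponds to the same d as a one-SD-unit mean difference of 0.5 computed directly from the descriptives.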
To examine neuropsychological domains separately for the meta-analysis, cognitive tests were grouped into the domains of global functioning, executive functioning, processing speed, attention, and working memory according to clinical guidelines (Strauss et al., 2006) and previous reviews (Mauger et al., 2018; Shin et al., 2014; Wagner et al., 2015). However, it is acknowledged that many tests are not pure measures of any given cognitive domain but share underlying similarities and are therefore only imperfect indicators of cognitive ability within domains (Engle et al., 1999; Rabaglia et al., 2011). The domain of executive functioning was used broadly to refer to the abilities involved in problem-solving, goal-directed behaviours, inhibitory control, cognitive flexibility, planning, concept formation, and strategy generation (Elliott, 2003; Miyake et al., 2000). Tasks that required psychomotor, visuomotor, or decision speed abilities were grouped under the processing speed domain (Strauss et al., 2006). Tests that assessed rapid response selection, attentional capacity, and sustained performance were grouped under the attention domain (Strauss et al., 2006). Finally, tasks that required the retention and manipulation of information over the short term were grouped under the working memory domain (Strauss et al., 2006). In cases where a single test produced more than one outcome (e.g., Digit Span or Go/No-go), the outcomes were sorted into their relevant domains (e.g., Digit Span forwards under attention and Digit Span backwards under working memory). Cognitive tests that were used only once (e.g., the Cups Task) were unsuitable for a meta-analysis and were not included. This excluded several possible cognitive domains, such as memory, language, and visuospatial skills, from the analysis.
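The grouping step above can be sketched as a simple mapping from (test, outcome) pairs to domains, so that a single test can contribute outcomes to more than one domain. This is a hypothetical illustration, not the actual extraction spreadsheet; the entries shown are only those named in the text.

```python
# Each (test, outcome) pair maps to exactly one cognitive domain.
# Entries here are illustrative, based on the examples in the text.
DOMAIN_MAP = {
    ("Digit Span", "forwards"): "attention",
    ("Digit Span", "backwards"): "working memory",
}

def group_by_domain(outcomes):
    """Group (test, outcome, effect_size) records by cognitive domain.

    Records whose test/outcome pair is not in the map (e.g., tests
    used by only one study) are dropped, mirroring the exclusion rule.
    """
    domains = {}
    for test, outcome, g in outcomes:
        domain = DOMAIN_MAP.get((test, outcome))
        if domain is None:
            continue
        domains.setdefault(domain, []).append(g)
    return domains
```

For example, a study reporting both Digit Span outcomes would contribute one effect size to the attention pool and one to the working memory pool.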
Study Risk of Bias Assessment
For quality assessment, we used the National Heart, Lung, and Blood Institute (NHLBI) Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies (NHLBI, 2019). In the current analysis of cross-sectional data at a single time point, only eight out of fourteen methodological criteria from the assessment tool were applicable. In accordance with Carbia et al. (2018), we adapted Item 5 to better capture the quality of the sample size (n ≥ 25), known to be important when sample size calculations were not computed (Grjibovski et al., 2015; Wang & Cheng, 2020). Two independent authors (MM and KK) evaluated each of the items with “yes”, “no”, “cannot determine”, “not reported”, or “not applicable”. After independent evaluation of each study, disagreements were resolved through discussion until a consensus was reached. Given the modification of the scale for the purposes of this review and consistent with recommendations, we did not include an overall rating summary (O’Connor et al., 2015; Robinson et al., 2021; Sanderson et al., 2007).
The effect size of standardised mean differences in cognitive performance between the controls and the disordered use group was calculated and expressed as Hedges’ g and its 95% confidence interval (95% CI) (Hedges, 1981; Hedges & Olkin, 1985). Hedges’ g, a variation of Cohen’s d, was used to correct for potential bias related to the sample sizes in individual studies and the resultant overestimation of true population effects (Hedges & Olkin, 1985). As with Cohen’s d, a Hedges’ g effect size of 0.20 represents a small effect, 0.50 a medium effect, and 0.80 a large effect (Cohen, 1988). Higher Hedges’ g scores indicated a greater difference between the disordered use group and the control group reflecting an inferior performance of the former.
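The computation described above can be sketched as follows. This is a minimal illustration of the Hedges and Olkin (1985) small-sample correction and the normal-theory 95% CI, not the CMA implementation itself; the function name is ours.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g and its 95% CI for two independent groups.

    Cohen's d (pooled SD) is multiplied by the small-sample
    correction J = 1 - 3 / (4*df - 1), df = n1 + n2 - 2,
    which shrinks d slightly to offset overestimation in
    small samples (Hedges & Olkin, 1985).
    """
    df = n1 + n2 - 2
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * df - 1)
    g = j * d
    # Approximate sampling variance of g, then a normal-theory 95% CI.
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    var_g = j**2 * var_d
    se = math.sqrt(var_g)
    return g, (g - 1.96 * se, g + 1.96 * se)
```

With two groups of 20 and a raw d of 0.50, the correction yields g ≈ 0.49, illustrating why g and d diverge most in the smallest studies.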
The meta-analysis was conducted using the Comprehensive Meta-Analysis (CMA) version 3 software package (Borenstein et al., 2013). In line with Borenstein et al. (2013), we selected a random-effects model given that significant heterogeneity of effects was expected beyond sampling error and the included studies varied with respect to sample characteristics and cognitive tasks. All analyses were examined for heterogeneity using the Tau-squared (τ²), I-squared (I²), and Cochran's Q statistics. Consistent with Higgins et al. (2003), we interpreted an I² of 25% as low, 50% as moderate, and 75% as high heterogeneity. Based on recommendations by Borenstein et al. (2021) and to ensure that there was sufficient power for moderator variables, studies had to have (1) at least two of the same cognitive tasks or outcome measures and (2) at least two of the same disorder classification groups to be included in subgroup analyses.
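The random-effects pooling and heterogeneity statistics reported here are computed by CMA; as a rough sketch of what these quantities are, the widely used DerSimonian-Laird estimator can be written out directly (function name and variable layout are ours, for illustration).

```python
def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    `effects` are per-study Hedges' g values and `variances` their
    sampling variances. Returns the pooled g, tau² (between-study
    variance), Cochran's Q, and I² as a percentage.
    """
    k = len(effects)
    w = [1 / v for v in variances]
    # Fixed-effect estimate, used only to compute Q.
    g_fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sum(w)
    q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w, effects))
    df = k - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Re-weight each study by 1 / (within-study + between-study variance).
    w_star = [1 / (v + tau2) for v in variances]
    g_pooled = sum(wi * gi for wi, gi in zip(w_star, effects)) / sum(w_star)
    return g_pooled, tau2, q, i2
```

When Q exceeds its degrees of freedom, tau² becomes positive and the random-effects weights flatten toward equality, which is why heterogeneous meta-analyses give small studies relatively more influence than a fixed-effect model would.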
Reporting Bias Assessment
In assessing the risk of bias between studies, four methods were applied to the overall effect and all subgroup analyses. To quantify asymmetry and identify small-study effects, funnel plots were visually inspected for symmetry around the combined effect size, and Egger's test of the intercept was computed. This was supported by Duval and Tweedie's trim-and-fill analysis, which estimates the number of missing studies and adjusts the estimated overall effect size accordingly (Duval & Tweedie, 2000). Finally, the classic fail-safe N was used to calculate the minimum number of undetected null results necessary to nullify the effect (i.e., to raise the observed p-value above 0.05).
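The classic fail-safe N is computed by CMA; Rosenthal's formula behind it is short enough to sketch. Assuming per-study one-tailed z-scores and a 0.05 one-tailed criterion (z = 1.645), the function name below is ours for illustration.

```python
import math

def fail_safe_n(z_values, alpha_z=1.645):
    """Rosenthal's classic fail-safe N.

    Given per-study z-scores, estimates how many unpublished studies
    averaging null results (z = 0) would be needed to pull the
    Stouffer combined z-score below the one-tailed criterion:
    N = (sum(z) / z_alpha)**2 - k.
    """
    k = len(z_values)
    z_sum = sum(z_values)
    n = (z_sum / alpha_z) ** 2 - k
    return max(0, math.ceil(n))
```

Three modestly significant studies (z of 2.0, 2.5, and 3.0) would already require 18 hidden null studies to nullify the combined effect, which illustrates why fail-safe N grows quickly with the number and strength of included results.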
Figure 1 presents a flow chart illustrating the identification, screening, and final inclusion of all studies in the review and analysis process. Five additional studies identified through reference list searches were added based on inclusion criteria. A total of 43 studies satisfied the eligibility criteria for inclusion in the systematic review and 34 studies were included in the meta-analysis. Of those, 33 were included in an exploratory analysis.
The extracted summary data from included studies are shown in Table 1. Summary data included country, participant, and control descriptive data (age, sex, and education); disordered use classification and measure; neuropsychological assessment; and format of testing. Almost half of the included studies were conducted in China (n = 20). Eight studies were from Europe (Germany (n = 5), Spain (n = 1), Netherlands (n = 1), and Bosnia and Herzegovina (n = 1)); six studies were from South Korea; two from Taiwan; two from Turkey; two from the UK; one from Iran; one from Australia; and one from Brazil. The majority of studies were conducted between 2014 and 2018, consistent with IGD’s first appearance in the DSM-5 in May 2013. The included studies yielded a total of 1341 participants with screen disorders (72% males) and 1590 healthy controls (69% males). Some studies did not report results separately for disordered use and control groups, so the demographics of the combined sample were included instead.
Sample sizes differed considerably between studies, with the smallest study incorporating 11 participants (Liu et al., 2014) and the largest involving 113 participants (Marín Vila et al., 2018). There was less variability in age between studies with the youngest average age included being 11 years (Kuo et al., 2018) and the oldest 29 years (Zhou et al., 2016). As with Casale et al. (2021), this review grouped different age samples together given the similarity of technology-related problems across ages. There was a disproportionate number of males with some studies including only males (Dong et al., 2010, 2014, 2015, 2017; Han et al., 2012; Jeromin et al., 2016a, 2016b; Lim et al., 2016; Liu et al., 2014; Luijten et al., 2015; Metcalf & Pammer, 2014; Wang et al., 2017; Wölfling et al., 2020; Yao et al., 2015). Years of education between studies ranged from 5.6 (Kuo et al., 2018) to 21.5 years (Dong et al., 2014).
Classification of Disordered Screen Use Behaviours
Of the selected studies, 26 examined gaming-related disorders, with 20 of them meeting the inclusion criteria for the meta-analysis. The majority used the classification Internet gaming disorder (Cai et al., 2016; Dong et al., 2015, 2017; Irak et al., 2016; Jang et al., 2021; Jeromin et al., 2016b; Li et al., 2020; Lim et al., 2016; Liu et al., 2014; Park et al., 2020; Wang et al., 2015, 2017; Wölfling et al., 2020; Wu et al., 2020; Xing et al., 2014; Yao et al., 2015; Yuan et al., 2016, 2017), whilst the remainder used Problematic Video Gaming (Collins & Freeman, 2014; Irvine et al., 2013), Internet gaming addiction (Ding et al., 2014), problematic online gaming addiction (Han et al., 2012), problematic gaming (Luijten et al., 2015), Addicted First-Person Shooter Gaming (Metcalf & Pammer, 2014), and excessive Internet gaming (Jeromin et al., 2016a; Pawlikowski & Brand, 2011). Fourteen studies examined Internet-related disorders, with 12 included in the meta-analysis. The majority used the terminology Internet addiction disorder (Choi et al., 2013, 2014; Dong et al., 2011, 2014; Shafiee-Kandjani et al., 2020; Wang & Cheng, 2020; Zhou et al., 2013, 2014, 2016), and the remainder either used problematic Internet use (Marín Vila et al., 2018; Zhou et al., 2010), Internet addiction (Kuo et al., 2018; Tekın et al., 2018), or excessive Internet use (Sun et al., 2009). Two studies examined social media addiction using either the terminology of Problematic Social Networking Sites Use (Aydın et al., 2020) or Problematic Social Network Use (Müller et al., 2021), with only the former included in the meta-analysis. Lastly, one study examined smartphone addiction (Khoury et al., 2019) and was included in the meta-analysis.
There was heterogeneity among the screeners used to assess disordered screen use. Some studies included more than one screener. The most common screeners were Young’s Internet Addiction Test (IAT; n = 12), the DSM criteria (n = 10), the modified Diagnostic Questionnaire for Internet Addiction (YDQ; n = 7), and the Game Addiction Scale (GAS; n = 3). There were 13 screeners that were used only once across the studies. There was inconsistency in the thresholds applied to define addiction or disordered screen use. For instance, some studies implemented a cut score above 70 on the IAT to indicate addiction (Choi et al., 2013, 2014; Lim et al., 2016), whilst other studies used a score of 50 (Cai et al., 2016; Dong et al., 2015, 2017; Pawlikowski & Brand, 2011; Wang et al., 2017, 2020; Xing et al., 2014). One study implemented a 15% cut-off for the extreme scorers on the Chinese Internet Addiction Scale (CIAS) to signify addiction (Kuo et al., 2018) whilst another used scores of 67 and above as a threshold (Yao et al., 2015). Some screeners were used interchangeably to measure disordered behaviour. For example, the YDQ and CIAS were used to define both Internet addiction disorder and Internet gaming disorder (Kuo et al., 2018; Wang et al., 2015; Yao et al., 2015; Zhou et al., 2013).
There were 58 different neuropsychological tests employed with 134 unique outcome measures across all studies, with the majority examining executive functioning and attention. The most common assessment tasks were the Stroop task (n = 15) and the Go/No-go paradigm (n = 12) followed by the Stop Signal task (n = 5) and the WCST (n = 5). Approximately half of the studies included a single neuropsychological assessment task (n = 21). There were 14 studies that included at least three assessments, whilst the largest number of tasks implemented in a single study was 15. In eight studies, at least one manual neuropsychological measure was used, but the majority relied solely on computerised testing.
Neuropsychological tasks and their outcome measures were sorted into cognitive domains based on the inclusion criteria for the meta-analysis (Table 3). Most tasks assessed executive functioning. There was heterogeneity in the methodology for some of the implemented neuropsychological tasks. Taking the Go/No-go task as an example, two out of the seven studies that measured the No-go error rate as an outcome did not include practice trials (Ding et al., 2014; Luijten et al., 2015) and three did not involve reward contingencies (Ding et al., 2014; Luijten et al., 2015; Sun et al., 2009). Studies used different stimuli including letters (Ding et al., 2014; Dong et al., 2010; Luijten et al., 2015), shapes (Li et al., 2020; Liu et al., 2014), and numbers (Sun et al., 2009; Zhou et al., 2014, 2016). Additionally, stimulus durations ranged from 200 ms (Li et al., 2020) to 1000 ms (Zhou et al., 2014, 2016) and the frequency of target (No-go) trials ranged from 12% (Luijten et al., 2015) to 50% (Li et al., 2020). Similar variability was also found in the Stroop task in stimuli colour (Dong et al., 2014; Wang et al., 2015), target presentation duration (Luijten et al., 2015; Xing et al., 2014; Yuan et al., 2016), rest periods (Luijten et al., 2015; Xing et al., 2014; Yuan et al., 2016), task rewards (Dong et al., 2014), and number of trials (Dong et al., 2014; Kuo et al., 2018; Luijten et al., 2015; Wu et al., 2020; Yuan et al., 2016), with some studies not reporting methodologies of task administration at all (Lim et al., 2016; Tekın et al., 2018).
Risk of Bias in Studies
Overall results are displayed in Table 2. Almost all studies had clearly defined and objective outcome measures that were also consistently implemented (Q11, n = 41; although a number contained novel, n = 5, or experimental measures, n = 2), and all contained clearly defined exposure measures that were implemented consistently across study participants (Q9, n = 43). Most of the included studies had a clear research question or objective related to neuropsychological testing (Q1, n = 35), although a few focused mainly on neuroimaging and so did not have clear neuropsychological testing objectives (n = 8). Most studies included a clear identification of the sample (Q2, n = 40) that contained demographic, location, or time period recruitment descriptions. Most studies included uniform requirements for sample selection and inclusion and exclusion criteria (Q4, n = 36), whereas some studies only reported criteria for one group (n = 2), did not pre-specify criteria (n = 2), or did not report at all (n = 3). Around half of the studies had samples larger than 25 participants (Q5b, n = 22) and reported measuring and adjusting for potential confounding variables relevant to neuropsychological testing (Q14, n = 19). Areas of consistent weakness included a failure to provide sample size justification (Q5a, n = 4) or assess severity levels of disordered use behaviour (Q8, n = 8).
Synthesis of Results
Figure 2 displays the range of study effects comparing performance on cognitive tasks between participants with screen disorders and healthy controls. The analysis included 34 cross-sectional observational studies across 1076 participants with disordered screen use behaviour and 1338 healthy controls. Across all studies and tests of cognition, those with disordered screen use behaviour had significantly lower cognitive performance scores compared to controls, resulting in a small/medium Hedges’ g effect size, g = .38, 95% CI (.25, .52), p < .001. There was evidence of significant, considerable heterogeneity between the studies (Q = 145.52, p < .001, I2 = 77.32, τ2 = .34), suggesting the need for further investigation of this heterogeneity through subgroup analyses.
Cognitive Domain Analysis
A subgroup analysis was conducted to examine the differences between disordered screen use and control samples by the different cognitive domains (Fig. 3). Executive functioning was the most assessed domain with 32 studies overall, followed by 14 studies that assessed attention, 13 studies that measured processing speed, six studies that measured working memory, and three studies that assessed global functioning. Relative to healthy controls, individuals with disordered use showed significant moderate impairment in the domain of attention (g = .50, 95% CI [.16, .84], p = .004, I2 = 71.84, τ2 = .46) and significant small impairment in executive functioning (g = .31, 95% CI [.087, .53], p = .006, I2 = 87.64, τ2 = .73). There were no significant differences between individuals with disordered use and controls in the domains of processing speed (g = .31, 95% CI [− .037, .65], p = .080, I2 = 69.96, τ2 = .36), working memory (g = .44, 95% CI [− .064, .94], p = .089, I2 = 35.00, τ2 = .19), or global functioning (g = .58, 95% CI [− .12, 1.28], p = .11, I2 = 74.22, τ2 = .39). A mixed-effects analysis revealed no significant difference in effect sizes between the cognitive domains (Q = 1.41, p = .84).
Test Level Analysis
To examine which test performances were most impacted for the disordered screen use samples relative to controls, we ran an analysis across every individual test (Table 3). Out of 134 unique neuropsychological outcome measures, 32 were computable in the quantitative analysis according to the inclusion criteria. For example, several tasks were included only once in all studies, such as the Cups Task, Cambridge Gambling Task, and flanker compatibility task, and some outcome measures were either only used once or included a non-traditional, experimental outcome. For tasks assessing executive functioning, there were significantly reduced performances for individuals with disordered screen use on all measures of the WCST, as well as reduced accuracy scores for incongruent trials on the Stroop task, reduced performance on the Delay Discounting task, and a lower proportion of successful stops on the Stop Signal task. Interestingly, the disordered screen use sample had significantly quicker reaction times than controls for No-go trials on the Go/No-go task requiring rapid impulse control. The most diminished performance for individuals with disordered screen use was on the go trials of the Go/No-go task assessing attention, with a significant large effect size of 1.28. It should be noted this was significantly influenced by Zhou et al. (2010) with a Hedges’ g effect size of 4.0 for this task. There was also a significant medium/large reduction in performances on the forward recall Digit Span task and go trials on the Stop Signal task. Interestingly, there were no significant differences on the Go/No-go and Stop Signal tasks that measured reaction times as an outcome. Relatedly, there were no significant differences between disordered screen use and control samples on tests of processing speed. In the working memory domain, backward recall and the composite index of the Digit Span task were significantly reduced. There were no significant differences on the Spatial Span task.
Lastly, within the global domain of cognition, performances were reduced for individuals with disordered screen use as measured by the Full-Scale IQ index on the WAIS.
Testing Format Analysis
A subgroup analysis examined the difference between the effect sizes for computerised and manual testing for disordered screen use compared to controls. There was a small significant Hedges’ g effect size (g = .37, 95% CI [.22, .52], p < .001) for computerised testing and a small significant effect size for manual testing (g = .35, 95% CI [.080, .63], p = .013). However, the two formats of testing did not differ significantly from each other (Q = .002, p = .90). Among the 29 studies that reported the use of computerised testing, significant, considerable heterogeneity was found (Q = 121.30, I2 = 76.92, τ2 = .38, p < .001). The seven studies that reported manual testing were likewise considered heterogeneous (Q = 30.68, I2 = 80.44, τ2 = .27, p < .001).
Addiction Type Analysis
A subgroup analysis was run to examine whether disordered use classification moderated cognitive outcomes for individuals with disordered screen use behaviours compared to controls. The singular studies that examined social media addiction (Aydın et al., 2020) and smartphone addiction (Khoury et al., 2019) were excluded from the analysis given the required minimum of two studies for quantitative analysis. There was significant heterogeneity found for the 19 studies that examined gaming addiction (Q = 86.89, I2 = 79.28, τ2 = .41, p < .001). The 13 studies that examined Internet addiction were likewise considered heterogeneous (Q = 59.35, I2 = 79.78, τ2 = .33, p < .001). We found a significant small/medium Hedges’ g effect size (g = .40, 95% CI [.21, .60], p < .001) for gaming addiction and a significant small/medium Hedges’ g effect size (g = .36, 95% CI [.14, .59], p = .002) for Internet-related disordered behaviour. The two types of disordered use classifications did not differ significantly (Q = .065, p = .79).
Given the ongoing debate regarding the distinction between disordered social media use and other forms of disordered screen use (see Weinstein, 2022), we conducted an exploratory analysis to investigate whether the pattern of observed outcomes would noticeably differ if we excluded social media from the analysis. After excluding Aydın et al. (2020), we found a negligible change in the overall effect size, g = .39, 95% CI (.25, .53), p < .001, across 33 studies, with evidence of significant, considerable heterogeneity between studies (Q = 144.17, p < .001, I2 = 77.80, τ2 = .35). The cognitive domain analysis showed minor changes in executive functioning (g = .31, 95% CI [.082, .55], p = .008, I2 = 88.01, τ2 = .79) with no significant difference between domains (Q = 1.29, p = .86). The test level analysis revealed a change on WCST CA (g = .28, 95% CI [− .13, .68], p = .184), which was no longer statistically significant, and a slight increase in scores on WCST PE (g = 1.00, 95% CI [.41, 1.59], p = .001). Lastly, the testing format analysis revealed an effect size of g = .38, 95% CI (.22, .54), p < .001, for computerised testing (Q = 120.07, I2 = 77.51, τ2 = .40, p < .001) with no significant difference between formats.
Risk of bias across studies was assessed. First, visual inspection of the funnel plot (Fig. 4) indicated that the distribution of studies of overall cognitive functioning was mostly symmetrical around the estimated effect size, although the studies were more clustered on the left of the effect. There was a single outlier; however, this study had the smallest sample size (n = 11) and the smallest weighting on the overall results (Liu et al., 2014). Indeed, a leave-one-out analysis confirmed that removing this study had little impact on the overall effect, which remained significant and small, g = .37, 95% CI (.23, .51). The Egger’s test for plot symmetry was not significant (Egger’s intercept = 1.34, p = .22), suggesting that publication bias did not significantly impact validity. Based on Duval and Tweedie’s trim and fill analysis, no studies were required to be trimmed from the right side, and one study from the left side, leading to an adjusted significant effect size of g = .34, 95% CI (.28, .40), indicating that bias was not detected. According to the classic fail-safe N method, 1056 non-significant studies would be needed to produce a null effect and overturn the obtained effect size of g = .38 (Zakzanis, 2001). Given that the analysis included 34 studies, the existence of this many unpublished studies is highly improbable. Risk of bias assessment was conducted for all subgroup analyses and the exploratory analysis, with no significant results from Egger’s test, suggestive of no publication bias. Funnel plots and other analyses can be found through https://osf.io/upeha/.
In the current systematic review and meta-analysis, we sought to synthesise and quantitatively assess the magnitude of neuropsychological deficits from disordered screen use behaviours. In particular, this was undertaken to resolve apparent inconsistencies in the neuropsychological literature concerning the cognitive impacts of disordered screen use behaviours. Indeed, with an increasing trend in problematic screen use prevalence (Pan et al., 2020), understanding the exact extent of cognitive consequences remains a vital concern. For this purpose, we identified cross-sectional studies that compared performance on objective neuropsychological tasks between disordered screen use behaviour samples and healthy controls. We explored the heterogeneity across diagnoses and neuropsychological testing, as well as appraised the quality of studies conducted. In our quantitative examination, we investigated the differences in cognitive performance as a function of cognitive domain, disordered use classification, test type, and test format. We found that individuals with disordered screen use behaviours had significantly lower cognitive performances with an effect size of .38, with attention showing the greatest reductions followed by executive functioning. This reduction was not moderated by either the classification of disordered screen use into gaming or Internet behaviours or by the format of the tests. Although almost all studies fulfilled quality requirements, these results may have been impacted by a consistent failure to provide sample size justifications and assess the severity of disordered screen use behaviours. This review extended the existing literature by including as broad a spectrum of cognitive abilities and neuropsychological assessment tasks as possible across all disordered screen-related behaviours, screen modalities, and ages.
Overall Cognitive Performance
The review identified 43 cross-sectional studies, 34 of which were eligible for the meta-analysis. Firstly, we found that most of the included studies were of young Asian males, consistent with higher prevalence rates in Asian countries (Naskar et al., 2016) and the disproportionate prevalence of disordered screen use behaviours among younger males (Wittek et al., 2016). We found that study effect sizes varied widely in cognitive performance, from g = − .46 (better performance than controls) to g = 1.22 (worse performance than controls), a likely artefact of the variability in the neuropsychological literature. With an estimated overall effect size of g = .38, we revealed an overall reduction in cognitive performance for individuals with disordered screen use behaviours that is on the higher end of the small effect size range using historical cut-offs.
In comparing the extent of cognitive performance, the measured effect size of .38 indicates a reduction of almost half of a standard deviation compared to controls (for comparing effect size with standard deviation, see Abramovitch et al., 2013). Based on Funder and Ozer’s (2019) review of effect sizes in psychology, the estimated Hedges’ g effect size corresponds to a Pearson’s r score of .18, which by their newer criteria would indicate an effect with likely explanatory and practical relevance, even in the short term. In other words, even a small effect can compound critically across time, especially in the context of childhood education. As an example, research has shown that even minor cognitive reductions at an early age, like those caused by mild traumatic brain injury, can lead to progressively increasing lags in academic performance and further “widening of the gap” in comparison to peers (Babikian et al., 2011; Maillard-Wermelinger et al., 2010). Therefore, unless remediated, even minor reductions in cognitive performance can gradually lead to more profound impairments across time.
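The conversion from a standardised mean difference to Pearson’s r referenced above follows the standard formula given by Borenstein and colleagues, where the constant a equals 4 when group sizes are equal. A minimal sketch, assuming equal groups (with the unequal samples in the included studies, the exact value differs slightly, which likely accounts for the reported r = .18):

```python
import math

def g_to_r(g, n1=1, n2=1):
    """Convert a standardised mean difference to Pearson's r.
    a = (n1 + n2)^2 / (n1 * n2), which reduces to 4 for equal groups."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return g / math.sqrt(g**2 + a)

# Overall effect from this meta-analysis, equal-group assumption
r = g_to_r(0.38)  # roughly .19 under equal groups
```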
Whilst we have found a reduction in cognitive performance, the extent of that reduction remains unclear. Abramovitch and Cooperman (2015) highlight that when interpreting effect sizes, underperformance on test scores does not necessarily imply clinically significant functional impairment. Based on Taylor and Heaton’s (2001) recommendation, a standard deviation of 1.0 is typically a useful diagnostic criterion for capturing neuropsychological impairment with specificity and sensitivity. However, given that only four studies had an effect size over one, the extent to which individuals with disordered screen use have clinically significant impairments without ecologically valid tests of impairment remains unclear. For measuring the extent and nature of cognitive reduction as reflected by clinically significant functional impairments, it would be beneficial for future studies to conduct ecologically valid assessments in the context of everyday functioning in academic, professional, and other real-world settings (see Spooner & Pachana, 2006). Nonetheless, given the critical age at which this reduction in cognitive performance is seen to take place, it seems important that some form of remediation be administered to ensure that these reductions in cognition do not compound over time.
We found that the most profound deficits for individuals with disordered screen use behaviours were in the domain of attention. From a cognitive standpoint, attention is considered foundational to other aspects of thinking as it is the cognitive bottleneck for both processing incoming information and deploying attentional resources outwards (Luria, 1980; Mapou, 1995). Indeed, it is common for neuropsychologists to examine arousal and attention first during assessment, as how well a person can pay attention determines how much information they can process, attend to, or commit to memory, and deficits in these areas are likely to impact all other cognitive functions (Mapou, 1995). Given that attention was most impaired, we might expect to see broader global impairments in cognitive functioning. Indeed, we found that there was no significant difference between cognitive domains, suggesting a trend of global impairment. It is also important to consider that cognitive domains are not isolated and separate constructs but can be highly correlated and dynamically related. Therefore, decrements measured in one domain may be interdependent with reductions in other domains. However, the extent to which attention was producing broader impairment in cognition remains unclear. In addition, whilst we have grouped tasks into a broad domain of attention, it is necessary to examine how and whether disordered screen use may impact the various subtypes of attention differently (see Salo et al., 2017; van Zomeren & Brouwer, 1992).
One possibility is that whilst there may be deficits in a global definition of attention, there may be increases in selective attention or divided attention. For instance, video game players, characterised by at least 7 h of gaming a week across 2 years, will either outperform or demonstrate no difference from non-gamers on some tasks of attention (Boot et al., 2008). Gaming has also been linked to increases in correctly filtering out irrelevant items (divided attention) and recovering from attention shifts (Moisala et al., 2017). Indeed, for tasks that require singular focus and successful inhibition of automatic impulses, gamers tend to perform worse than non-gamers, whereas for tasks that require filtering out stimuli and shifting attention, gamers tend to outperform non-gamers (DeRosier & Thomas, 2018). However, these possible variabilities within cognitive domains may have been overlooked in this review, which took a broad analytical approach. Future studies with comprehensive neuropsychological batteries are needed to determine whether decrements in attention result in more global cognitive changes or whether the less frequently studied subdomains and domains (such as language and memory) will also follow the observed pattern of impairment. Additionally, given the known interdependencies and interactions between cognitive domains (Engle et al., 1999; Unsworth & Engle, 2007; Unsworth et al., 2009), investigating the impacts of disordered screen use from a global cognition perspective using advanced techniques such as network analysis that account for these interdependencies can offer a more comprehensive understanding of the cognitive impacts of disordered screen use behaviours (Kellermann et al., 2016).
Individual Test Type
In order to identify which tests led to the greatest underperformances, we analysed individual neuropsychological task performances comparing disordered screen use and control samples. For disordered screen users, accuracy scores on the go condition of the Go/No-go task showed the greatest underperformance with a significant large effect size, followed by the forward condition of the Digit Span task. Of those studies, one included statistical adjustment for potential confounding variables and two examined levels of severity, thereby limiting inferences about causality. Both tasks share a similarity in that successfully responding to go trials, as well as listening to and repeating a sequence of digits, requires vigilance, concentration, and sustained attention (Hale et al., 2016; O’Connell et al., 2009). The act of maintaining one’s attention over time requires the dual abilities to both allocate attentional resources and reorient attention as it strays (van Zomeren & Brouwer, 1992). The dynamic, captivating, and visually stimulating features characteristic of screen-based media and technologies may challenge the capacity to both focus and reorient attention towards information that is more mundane, less stimulating, and less rapidly changing, such as the stimuli in the Go/No-go and Digit Span tasks. Indeed, even a single night of fast-paced, action binge video gaming can result in reduced performances on a sustained attention task (Trisolini et al., 2018). Interestingly, we found that disordered screen users had enhanced reaction times on No-go trials of the Go/No-go task compared to controls. In comparison to go trials, successfully responding to No-go trials requires the rapid inhibition of automatic responding (O’Connell et al., 2009). It is possible that the same elements of screen-based media and technologies that can disadvantage attention may be advantageous for rapid response inhibition (see Dye et al., 2009).
Regarding overall inhibitory control, however, it was found that disordered screen users had significantly reduced performances on the WCST and for incongruent trials on the Stroop task compared to controls. In sum, the most reduced performances for disordered screen users were on neuropsychological tasks that required sustained attention, although similar underperformances were also evident on specific tasks of executive functioning, working memory, and global functioning.
Among the included studies, we found that the methodologies used for the cognitive tasks were highly variable in terms of stimulus durations, reward contingencies, target stimuli, and target frequencies. Standardisation and consistent procedures in cognitive testing are crucial, largely due to the emphasis on comparing an individual test performance against a normative standard, but also to ensure scientific rigour and inter-rater and test–retest reliability (Russell et al., 2005). Using the Go/No-go task as an example, Wessel (2018) found that, as in this review, the administration of the task, including the frequency of targets and the duration of stimuli, tended to vary widely across studies. Critically, electroencephalography (EEG) event-related potential (ERP) measurements revealed that even seemingly minor differences in task administration engaged separate neural processes, thereby emphasising the need to conduct consistent testing. Whilst we have found significant underperformances on the Go/No-go task, the variability in task administration is a critical consideration when interpreting these results. Given the degree of heterogeneity in cognitive task administration, there is a clear need to administer consistent, standardised, and previously validated assessments rather than modifying or creating novel assessment tasks.
Moreover, nearly half of the included studies used a singular neuropsychological task to assess a cognitive domain. There are several neuropsychological implications to consider when interpreting test results from single-test studies or when there is an over-reliance on a single test across studies. Lezak et al. (2012) argue that use of a single test to identify a disorder or impairment, both within studies and across studies, can lead to higher rates of impairment misclassification. For one, the absence of a positive finding does not automatically preclude the possibility of a present impairment in the same way that the presence of a negative finding (on a single test) does not automatically presume cognitive impairment. For example, a reduced score on the incongruent condition of the Stroop task does not automatically imply, as some studies put it “impaired cognitive control” (Cai et al., 2016, p. 16) or “cognitive control deficits” (Yuan et al., 2017, p. 5). Rather, in the case of making an inference about the broad domain of executive functioning, an evaluation must be made based on the pattern of test scores and across different tests of executive functioning (Lezak et al., 2012). Otherwise, a reduced score tells us something about the specific key process involved in a given test and less about the domain of interest. Additionally, drawing inferences about cognitive impairment from the findings of a single test is heavily dependent on the psychometric integrity and sensitivity of the test in question.
As discussed above, heterogeneity in test administration can challenge the psychometric integrity of a test, an estimate based on standardisation. Studies would benefit from including a battery of tests that are standardised, sensitive, and specific to the nature of impairment, which will more suitably allow for the possibility of detecting patterns of deficits within and across cognitive domains in an addicted population.
Modality of Testing
It has previously been demonstrated that individuals with disordered screen use behaviours exhibit an attentional bias towards computer- or screen-related stimuli (Heuer et al., 2021; Kim et al., 2018). Awareness of such an attentional bias could be crucial when choosing appropriate neuropsychological assessments to measure cognition. Indeed, “best practice” guidelines in neuropsychological assessment require that scores reflect a participant's best performance and that tests are acceptable for assessing the relevant functions (Bush, 2009). If extraneous variables are present, such as suboptimal effort or some source of distraction, then these should be accounted for, either by noting them or through psychometric adjustment (e.g., adjusting scores according to level of attention). To our knowledge, whether the attentional bias towards computer- or screen-related stimuli affects performance on computerised neuropsychological testing has hitherto neither been questioned nor investigated.
Despite the above concerns regarding appropriateness of testing, we found that only eight out of 43 studies included at least one manual neuropsychological measure whilst the rest relied solely on computerised testing. As part of our analysis, we also conducted a subgroup analysis to determine whether the two types of cognitive testing, manual and computerised, moderated cognitive performance. Computerised tasks had a slightly larger effect size than manual testing; however, there was no significant difference in cognitive performance between the two formats of administration. In other words, the format of administration did not produce any advantage or disadvantage on cognitive performance or for detecting cognitive impairment in disordered screen use. Although we found that cognitive performance between groups did not significantly differ as a function of testing format, there was an overwhelming majority of computerised studies. More studies utilising different formats of neuropsychological testing, such as paper-and-pencil, computerised, and virtual-reality, would be useful in examining the contribution of format to cognitive performance in a disordered screen use population.
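The pooled effect sizes and subgroup comparisons described here rest on standard meta-analytic machinery. As a minimal, illustrative sketch of those formulas (all input numbers below are hypothetical, and this is not the dedicated software used in the review itself), Hedges' g for a single study and a DerSimonian-Laird random-effects pooled estimate can be computed as follows:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference between two groups with
    Hedges' small-sample correction factor J applied."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction
    g = j * d
    # Approximate sampling variance of g
    var = j ** 2 * ((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return g, var

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effects.
    Returns the pooled effect, its standard error, and the estimated
    between-study variance tau^2."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * gi for wi, gi in zip(w, effects)) / sw
    q = sum(wi * (gi - fixed) ** 2 for wi, gi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * gi for wi, gi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2
```

A subgroup moderation test of the kind reported above (e.g., manual vs. computerised formats) would then pool each subgroup separately with `pool_random_effects` and compare the two pooled estimates against their standard errors.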
Classification of Disordered Screen Use Behaviours
This review found that, although there were overall strengths in defining, describing, and using validated measures, there was a high degree of variability in the methods employed to describe and classify disordered screen use behaviours. The two most common classifications were IGD and IAD; however, 16 studies used diagnostic classifications that appeared no more than twice across all studies. The classification measures and cut-offs were also applied inconsistently. As mentioned above, the YDQ measurement scale was used to define both IAD and IGD. This is consistent with a previous systematic review, which found a considerable degree of variability in the diagnostic measures implemented for classifying gaming disorders, as well as a tendency for studies to adapt or create new measures rather than adopt previously validated ones (King et al., 2020).
Although the degree of inconsistency and variability is unsurprising given the relatively recent emergence of screen-related disorders as a field of study, there is a clear imperative for consistency in measuring disordered screen use, as the choice of measure has direct implications. As an example, a meta-analysis found that studies utilising the IAT or CIAS estimated higher prevalence rates of gaming disorder than studies that employed the YDQ measurement scale (Li et al., 2018). To address the varying quality of screener measures, Koronczai et al. (2011) suggested that disordered screen use measurement tools should be brief; comprehensive; reliable and valid across ages, cultures, and data collection methods; and clinically validated, so that they can be applied broadly across countries, screen modalities, and variables of interest.
Our analysis aimed to examine the magnitude of cognitive impairment as a function of disordered screen use classification. Our findings showed that, although the estimated effect size for gaming-related disordered behaviour was slightly larger than for studies of Internet-related behaviours, this difference was not significant. In other words, from a neuropsychological standpoint, classifying disordered screen use behaviours according to the predominant modality of usage (Internet or gaming) did not moderate the magnitude of cognitive impairment. However, as only one study examined disordered social media use and one study examined disordered smartphone use, there were not enough studies in either category to estimate an effect size. Nevertheless, excluding social media in an exploratory analysis revealed only marginal changes across the overall and relevant subgroup results. Interestingly, the results indicated slightly poorer performance in individuals with disordered screen use compared to controls when social media was excluded. This suggests that the cognitive effects of problematic social media use may not be as severe as those associated with other forms of screen use, which is consistent with other findings (see Weinstein, 2022). However, since only minor changes were observed, this finding lends some support to grouping social media with other forms of problematic screen use when assessing their impact on cognition. Still, because only one social media study was included, we could not determine the significance of the differences between classifications. Further studies that assess other forms of problematic screen use, including social media, are needed before such a conclusion can be drawn.
Further, our review identified only eight studies that reported severity classifications for disordered screen use behaviour. This limits the extent to which the relationship between cognitive performance and symptom severity can be investigated. A previous systematic review of Internet gaming disorder in children and adolescents recommended that researchers distinguish between levels of engagement with gaming (Paulus et al., 2018). As the authors make clear, psychosocial and academic consequences may vary significantly with level of engagement, and even high levels of engagement can have some positive effects. For these reasons, more studies that examine a range of screen modalities across a continuum of severity, especially in terms of causally linking severity of disordered screen use behaviours to cognitive impacts, are needed to establish a relationship.
There are several important limitations to consider. First, although we have shown that attention and executive functioning are impaired in disordered screen use, we were not able to comprehensively cover all cognitive domains in this meta-analysis (e.g., memory, visuospatial ability, or language), nor were we able to confidently examine subcomponents of cognitive domains (e.g., selective attention, divided attention, nonverbal reasoning, decision-making, or impulse inhibition) owing to the limited number of studies that examined those domains. Additionally, whilst we followed clinical guidelines in sorting the tests under their relevant domains, there is no definitive consensus about which cognitive domains a test truly measures. There is also considerable overlap and correlation between cognitive domains, which can make it difficult to categorise tests definitively. Second, we did not search for unpublished studies concerning the cognitive impacts of disordered screen use. Third, it is important to consider that the overwhelming majority of included studies sampled a young, male, and Asian demographic, thereby limiting the global generalisability of these results, particularly to older, non-male, and Western populations. For one, the prevalence and severity of disordered screen use behaviours are known to be higher in Asian countries than in Western countries (Naskar et al., 2016). Given that the diagnostic criteria for IGD or IAD do not establish severity of symptoms beyond the cut-offs, cognitive impacts might be more extreme in Asian populations. Moreover, culture and gender can affect the expression of, and distress resulting from, disordered screen use behaviours, so a broad range of cultures and genders is essential for generalisation (Andreetta et al., 2020; Kuss, 2013). Another limitation stems from the inherent constraints of cross-sectional studies, which limit a more thorough understanding of the contributions of moderating variables.
For instance, it remains uncertain whether factors such as anxiety or depression, known to have a high comorbidity with disordered screen use behaviour (with rates as high as 92% and 89% respectively, see González-Bueso et al., 2018), precipitate increased screen usage or result from it. Lastly, we found that very few studies included a sample size justification (although almost half had a sample size greater than 25), assessed the severity of disordered screen use behaviours, or statistically adjusted for potential confounding variables. Together with the narrow demographic range of the included samples and this review's focus on cross-sectional observational studies, these issues limit a greater understanding of causality and the contribution of other variables.
Future studies should consider the following recommendations. First, in order to identify and evaluate disordered screen use, researchers should use consistent and validated methods rather than modifying existing measures or creating novel screening measures and cognitive tasks. Second, research on neuropsychological impacts would benefit from a battery of cognitive tests that measures the range of cognitive functioning across and within cognitive domains, rather than relying on and interpreting results from a single test. Third, assessing the severity of disordered screen use behaviours will provide insight into, and possibly establish, a relationship between cognitive deficits and symptom severity. Fourth, to establish a causal or prospective relationship between disordered screen use behaviours and cognitive impacts, future investigations should consider adopting experimental and longitudinal designs. Fifth, ecologically valid assessments of cognitive functioning should be incorporated to determine the severity of impairments experienced in daily life. Sixth, although disordered screen use is more prevalent in some demographics, little is known about its cognitive impacts on older, non-male, and Western populations; it would be beneficial for future research to investigate these underexplored populations.
In summary, the results of this systematic review and meta-analysis suggest that disordered screen use can negatively impact cognitive abilities. Attention is the most affected cognitive domain, followed by executive functioning, but further research is needed to determine the magnitude of deficits in other, lesser-studied domains. Neither disordered screen use classification nor testing format influenced the extent of cognitive deficits from a neuropsychological perspective. However, given the limited number of studies, more research that incorporates broader disordered screen use behaviours, including social media and smartphones, and includes comprehensive manual cognitive assessments is required. With increased reliance on technology, it has never been more important to assess the impact of excessive screen use on cognitive functioning and overall wellbeing. This will enable the development of targeted remediation and treatment plans, as well as inform design decisions regarding the development of technological platforms and devices with cognitive impacts in mind.
Availability of Data and Materials
All of the reviewed studies are publicly available. Data, figures, and results from this study are available through https://osf.io/upeha/.
Abramovitch, A., Abramowitz, J. S., & Mittelman, A. (2013). The neuropsychology of adult obsessive-compulsive disorder: A meta-analysis. Clinical Psychology Review, 33(8), 1163–1171. https://doi.org/10.1016/j.cpr.2013.09.004
Abramovitch, A., & Cooperman, A. (2015). The cognitive neuropsychology of obsessive-compulsive disorder: A critical review. Journal of Obsessive-Compulsive and Related Disorders, 5, 24–36. https://doi.org/10.1016/j.jocrd.2015.01.002
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). https://doi.org/10.1176/appi.books.9780890425596
Andreassen, C. S., Billieux, J., Griffiths, M. D., Kuss, D. J., Demetrovics, Z., Mazzoni, E., & Pallesen, S. (2016). The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study. Psychology of Addictive Behaviors, 30(2), 252.
Andreetta, J., Teh, J., Burleigh, T. L., Gomez, R., & Stavropoulos, V. (2020). Associations between comorbid stress and Internet gaming disorder symptoms: Are there cultural and gender variations? Asia-Pacific Psychiatry, 12(2), e12387. https://doi.org/10.1111/appy.12387
Australian Institute of Health and Welfare. (2020). Australia’s children. https://www.aihw.gov.au/reports/children-youth/australias-children/contents/health/physical-activity
Aydın, O., Obuća, F., Boz, C., & Ünal-Aydın, P. (2020). Associations between executive functions and problematic social networking sites use. Journal of Clinical and Experimental Neuropsychology, 42(6), 634–645. https://doi.org/10.1080/13803395.2020.1798358
Babikian, T., Satz, P., Zaucha, K., Light, R., Lewis, R. S., & Asarnow, R. F. (2011). The UCLA longitudinal study of neurocognitive outcomes following mild pediatric traumatic brain injury. Journal of the International Neuropsychological Society, 17(5), 886–895.
Blaszczynski, A. (2006). Internet use: In search of an addiction. International Journal of Mental Health and Addiction, 4(1), 7–9. https://doi.org/10.1007/s11469-006-9002-3
Boot, W. R., Kramer, A. F., Simons, D. J., Fabiani, M., & Gratton, G. (2008). The effects of video game playing on attention, memory, and executive control. Acta Psychologica, 129(3), 387–398. https://doi.org/10.1016/j.actpsy.2008.09.005
Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2013). Comprehensive Meta-Analysis Version 3. Biostat.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2021). Introduction to meta-analysis. John Wiley & Sons.
Brailovskaia, J., Schillack, H., & Margraf, J. (2018). Facebook addiction disorder in Germany. Cyberpsychology, Behavior, and Social Networking, 21(7), 450–456.
Brand, M., Young, K. S., & Laier, C. (2014). Prefrontal control and Internet addiction: A theoretical model and review of neuropsychological and neuroimaging findings. Frontiers in Human Neuroscience, 8, 375. https://doi.org/10.3389/fnhum.2014.00375
Browndyke, J. N., Albert, A. L., Malone, W., Schatz, P., Paul, R. H., Cohen, R. A., Tucker, K. A., & Drew Gouvier, W. (2010). Computer-related anxiety: Examining the impact of technology-specific affect on the performance of a computerized neuropsychological assessment measure. Applied Neuropsychology, 9(4), 210–218. https://doi.org/10.1207/s15324826an0904_3
Bush, S. S. (2009). Determining whether or when to adopt new versions of psychological and neuropsychological tests: Ethical and professional considerations. The Clinical Neuropsychologist, 24(1), 7–16. https://doi.org/10.1080/13854040903313589
Cai, C., Yuan, K., Yin, J., Feng, D., Bi, Y., Li, Y., Yu, D., Jin, C., Qin, W., & Tian, J. (2016). Striatum morphometry is associated with cognitive control deficits and symptom severity in internet gaming disorder. Brain Imaging and Behavior, 10(1), 12–20. https://doi.org/10.1007/s11682-015-9358-8
Cao, F., Su, L., Liu, T. Q., & Gao, X. (2007). The relationship between impulsivity and Internet addiction in a sample of Chinese adolescents. European Psychiatry, 22(7), 466–471. https://doi.org/10.1016/j.eurpsy.2007.05.004
Carbia, C., López-Caneda, E., Corral, M., & Cadaveira, F. (2018). A systematic review of neuropsychological studies involving young binge drinkers. Neuroscience & Biobehavioral Reviews, 90, 332–349. https://doi.org/10.1016/j.neubiorev.2018.04.013
Cardoso-Leite, P., Buchard, A., Tissieres, I., Mussack, D., & Bavelier, D. (2021). Media use, attention, mental health and academic performance among 8 to 12 year old children. PLoS ONE, 16(11), e0259163. https://doi.org/10.1371/journal.pone.0259163
Casale, S., Musicò, A., & Spada, M. M. (2021). A systematic review of metacognitions in Internet Gaming Disorder and problematic Internet, smartphone and social networking sites use. Clinical Psychology & Psychotherapy, 28(6), 1494–1508. https://doi.org/10.1002/CPP.2588
Chan, P. A., & Rabinowitz, T. (2006). A cross-sectional analysis of video games and attention deficit hyperactivity disorder symptoms in adolescents. Annals of General Psychiatry, 5(1), 1–10. https://doi.org/10.1186/1744-859X-5-16
Choi, J. S., Park, S. M., Lee, J., Hwang, J. Y., Jung, H. Y., Choi, S. W., Kim, D. J., Oh, S., & Lee, J. Y. (2013). Resting-state beta and gamma activity in Internet addiction. International Journal of Psychophysiology, 89(3), 328–333.
Choi, J. S., Park, S. M., Roh, M. S., Lee, J. Y., Park, C. B., Hwang, J. Y., Gwak, A. R., & Jung, H. Y. (2014). Dysfunctional inhibitory control and impulsivity in Internet addiction. Psychiatry Research, 215(2), 424–428.
Chuang, Y. C. (2006). Massively multiplayer online role-playing game-induced seizures: A neglected health problem in internet addiction. CyberPsychology & Behavior, 9(4), 451–456.
Cohen, J. (1988). Statistical power analysis for the behavioural sciences. Lawrence Erlbaum Associates.
Collins, E., & Freeman, J. (2014). Video game use and cognitive performance: Does it vary with the presence of problematic video game use? Cyberpsychology, Behavior, and Social Networking, 17(3), 153–159. https://doi.org/10.1089/cyber.2012.0629
Dadischeck, M. (2021). Conceptualizing digital well-being and technology addiction in IO psychology. Industrial and Organizational Psychology, 14(3), 401–403.
Dell’Osso, B., Di Bernardo, I., Vismara, M., Piccoli, E., Giorgetti, F., Molteni, L., Fineberg, N. A., Virzì, C., Bowden-Jones, H., Truzoli, R., & Viganò, C. (2021). Managing problematic usage of the internet and related disorders in an era of diagnostic transition: An updated review. Clinical Practice and Epidemiology in Mental Health, 17(1), 61–74.
DeRosier, M. E., & Thomas, J. M. (2018). Video games and their impact on teens’ mental health. Technology and Adolescent Mental Health, 237–253. https://doi.org/10.1007/978-3-319-69638-6_17
Ding, W. N., Sun, J. H., Sun, Y. W., Chen, X., Zhou, Y., Zhuang, Z. G., Li, L., Zhang, Y., Xu, J. R., & Du, Y. S. (2014). Trait impulsivity and impaired prefrontal impulse inhibition function in adolescents with internet gaming addiction revealed by a Go/No-Go fMRI study. Behavioral and Brain Functions, 10(1), 1–9. https://doi.org/10.1186/1744-9081-10-20
Dong, G., Huang, J., & Du, X. (2011). Enhanced reward sensitivity and decreased loss sensitivity in Internet addicts: An fMRI study during a guessing task. Journal of Psychiatric Research, 45(11), 1525–1529. https://doi.org/10.1016/j.jpsychires.2011.06.0
Dong, G., Li, H., Wang, L., & Potenza, M. N. (2017). Cognitive control and reward/loss processing in Internet gaming disorder: Results from a comparison with recreational Internet game-users. European Psychiatry, 44, 30–38. https://doi.org/10.1016/j.eurpsy.2017.03.004
Dong, G., Lin, X., & Potenza, M. N. (2015). Decreased functional connectivity in an executive control network is related to impaired executive function in Internet gaming disorder. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 57, 76–85. https://doi.org/10.1016/j.pnpbp.2014.10.012
Dong, G., Lin, X., Zhou, H., & Lu, Q. (2014). Cognitive flexibility in internet addicts: FMRI evidence from difficult-to-easy and easy-to-difficult switching situations. Addictive Behaviors, 39(3), 677–683. https://doi.org/10.1016/j.addbeh.2013.11.028
Dong, G., Lu, Q., Zhou, H., & Zhao, X. (2010). Impulse inhibition in people with Internet addiction disorder: Electrophysiological evidence from a Go/NoGo study. Neuroscience Letters, 485(2), 138–142. https://doi.org/10.1016/j.neulet.2010.09.002
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot–based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341x.2000.00455.x
Dye, M. W. G., Green, C. S., & Bavelier, D. (2009). Increasing speed of processing with action video games. Current Directions in Psychological Science, 18(6), 321–326. https://doi.org/10.1111/j.1467-8721.2009.01660.x
Elliott, R. (2003). Executive functions and their disorders: Imaging in clinical neuroscience. British Medical Bulletin, 65(1), 49–59.
Engle, R. W., Laughlin, J. E., Tuholski, S. W., & Conway, A. R. A. (1999). Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General, 128(3), 309–331. https://doi.org/10.1037/0096-3445.128.3.309
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168.
Gates, N. J., & March, E. G. (2016). A neuropsychologist’s guide to undertaking a systematic review for publication: Making the most of PRISMA guidelines. Neuropsychology Review, 26, 109–120.
González-Bueso, V., Santamaría, J. J., Fernández, D., Merino, L., Montero, E., & Ribas, J. (2018). Association between internet gaming disorder or pathological video-game use and comorbid psychopathology: A comprehensive review. International Journal of Environmental Research and Public Health, 15(4), 668.
Green, C. S., & Bavelier, D. (2003). Action video game modifies visual selective attention. Nature, 423(6939), 534–537.
Griffiths, M. (2009a). A ‘components’ model of addiction within a biopsychosocial framework. Journal of Substance Use, 10(4), 191–197. https://doi.org/10.1080/14659890500114359
Griffiths, M. D. (2009b). The role of context in online gaming excess and addiction: Some case study evidence. International Journal of Mental Health and Addiction, 8(1), 119–125. https://doi.org/10.1007/s11469-009-9229-x
Griffiths, M. D., Davies, M. N. O., & Chappell, D. (2004). Demographic factors and playing variables in online computer gaming. Cyberpsychology and Behavior, 7(4), 479–487. https://doi.org/10.1089/CPB.2004.7.479
Grjibovski, A. M., Ivanov, S. V., & Gorbatova, M. A. (2015). Case-control studies in health sciences. Science & Healthcare, 2, 5–18.
Ha, J. H., Yoo, H. J., Cho, I. H., Chin, B., Shin, D., & Kim, J. H. (2006). Psychiatric comorbidity assessed in Korean children and adolescents who screen positive for internet addiction. Journal of Clinical Psychiatry, 67(5), 821–826. https://doi.org/10.4088/jcp.v67n0517
Hale, J. B., Hoeppner, J. A. B., & Fiorello, C. A. (2016). Analyzing digit span components for assessment of attention processes. Journal of Psychoeducational Assessment, 20(2), 128–143. https://doi.org/10.1177/073428290202000202
Han, D. H., Lyoo, I. K., & Renshaw, P. F. (2012). Differential regional gray matter volumes in patients with on-line game addiction and professional gamers. Journal of Psychiatric Research, 46(4), 507–515. https://doi.org/10.1016/j.jpsychires.2012.01.004
Hawi, N. S., & Samaha, M. (2016). To excel or not to excel: Strong evidence on the adverse effect of smartphone addiction on academic performance. Computers & Education, 98, 81–89. https://doi.org/10.1016/j.compedu.2016.03.007
He, Q., Turel, O., & Bechara, A. (2018). Association of excessive social media use with abnormal white matter integrity of the corpus callosum. Psychiatry Research: Neuroimaging, 278, 42–47. https://doi.org/10.1016/j.pscychresns.2018.06.008
Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128.
Hedges, L., & Olkin, I. (1985). Statistical methods for meta-analysis. Academic Press.
Henchoz, Y., Studer, J., Deline, S., N'Goran, A. A., Baggio, S., & Gmel, G. (2015). Video gaming disorder and sport and exercise in emerging adulthood: A longitudinal study. Behavioral Medicine, 42(2), 105–111. https://doi.org/10.1080/08964289.2014.965127
Heuer, A., Mennig, M., Schubö, A., & Barke, A. (2021). Impaired disengagement of attention from computer-related stimuli in Internet gaming disorder: Behavioral and electrophysiological evidence. Journal of Behavioral Addictions, 10(1), 77–87. https://doi.org/10.1556/2006.2020.00100
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. https://doi.org/10.1136/BMJ.327.7414.557
Horvath, J., Mundinger, C., Schmitgen, M. M., Wolf, N. D., Sambataro, F., Hirjak, D., Kubera, K. M., Koenig, J., & Christian Wolf, R. (2020). Structural and functional correlates of smartphone addiction. Addictive Behaviors, 105, 106334. https://doi.org/10.1016/j.addbeh.2020.106334
Hwang, J. Y., Choi, J. S., Gwak, A. R., Jung, D., Choi, S. W., Lee, J., Lee, J. Y., Jung, H. Y., & Kim, D. J. (2014). Shared psychological characteristics that are linked to aggression between patients with Internet addiction and those with alcohol dependence. Annals of General Psychiatry, 13(1). https://doi.org/10.1186/1744-859x-13-6
Hyun, G. J., Han, D. H., Lee, Y. S., Kang, K. D., Yoo, S. K., Chung, U. S., & Renshaw, P. F. (2015). Risk factors associated with online game addiction: A hierarchical model. Computers in Human Behavior, 48, 706–713. https://doi.org/10.1016/j.chb.2015.02.008
Ioannidis, K., Hook, R., Goudriaan, A. E., Vlies, S., Fineberg, N. A., Grant, J. E., & Chamberlain, S. R. (2019). Cognitive deficits in problematic internet use: Meta-analysis of 40 studies. The British Journal of Psychiatry, 215(5), 639–646. https://doi.org/10.1192/bjp.2019.3
Irak, M., Soylu, C., & Çapan, D. (2016). Violent video games and cognitive processes: A neuropsychological approach. Gamer Psychology and Behavior, 3–20.
Irvine, M. A., Worbe, Y., Bolton, S., Harrison, N. A., Bullmore, E. T., & Voon, V. (2013). Impaired decisional impulsivity in pathological videogamers. PLoS ONE, 8(10). https://doi.org/10.1371/journal.pone.0075914
Jang, J. H., Chung, S. J., Choi, A., Lee, J. Y., Kim, B., Park, M., Park, S., & Choi, J. S. (2021). Association of general cognitive functions with gaming use in young adults: A comparison among excessive gamers, regular gamers and non-gamers. Journal of Clinical Medicine, 10(11), 2293. https://doi.org/10.3390/jcm10112293
Jeromin, F., Nyenhuis, N., & Barke, A. (2016a). Attentional bias in excessive Internet gamers: Experimental investigations using an addiction Stroop and a visual probe. Journal of Behavioral Addictions, 5(1), 32–40. https://doi.org/10.1556/2006.5.2016.012
Jeromin, F., Rief, W., & Barke, A. (2016b). Using two web-based addiction Stroops to measure the attentional bias in adults with Internet Gaming Disorder. Journal of Behavioral Addictions, 5(4), 666–673. https://doi.org/10.1556/2006.5.2016.075
Jiang, C., Li, C., Zhou, H., & Zhou, Z. (2020). Individuals with internet gaming disorder have similar neurocognitive impairments and social cognitive dysfunctions as methamphetamine-dependent patients. Adicciones, 1342. https://doi.org/10.20882/adicciones.1342
Jiang, Q. (2014). Internet addiction among young people in China: Internet connectedness, online gaming, and academic performance decrement. Internet Research, 24(1), 2–20. https://doi.org/10.1108/intr-01-2013-0004
Kellermann, T. S., Bonilha, L., Eskandari, R., Garcia-Ramos, C., Lin, J. J., & Hermann, B. P. (2016). Mapping the neuropsychological profile of temporal lobe epilepsy using cognitive network topology and graph theory. Epilepsy & Behavior, 63, 9–16. https://doi.org/10.1016/j.yebeh.2016.07.030
Khoury, J. M., Couto, L. F. S. C., de Almeida Santos, D., de Oliveira e Silva, V. H., Drumond, J. P. S., de Carvalho e Silva, L. L., Malloy-Diniz, L., Albuquerque, M. R., de Castro Lourenço das Neves, M., & Garcia, F. D. (2019). Bad choices make good stories: The impaired decision-making process and skin conductance response in subjects with smartphone addiction. Frontiers in Psychiatry, 73. https://doi.org/10.3389/fpsyt.2019.00073
Kim, B. M., Lee, J., Choi, A. R., Chung, S. J., Park, M., Koo, J. W., Kang, U. G., & Choi, J. S. (2021). Event-related brain response to visual cues in individuals with Internet gaming disorder: Relevance to attentional bias and decision-making. Translational Psychiatry, 11(1), 258.
Kim, S. N., Kim, M., Lee, T. H., Lee, J. Y., Park, S., Park, M., Kim, D. J., Kwon, J. S., & Choi, J. S. (2018). Increased attentional bias toward visual cues in internet gaming disorder and obsessive-compulsive disorder: An event-related potential study. Frontiers in Psychiatry, 9, 315.
King, D. L., Chamberlain, S. R., Carragher, N., Billieux, J., Stein, D., Mueller, K., Potenza, M. N., Rumpf, H. J., Saunders, J., Starcevic, V., Demetrovics, Z., Brand, M., Lee, H. K., Spada, M., Lindenberg, K., Wu, A. M. S., Lemenager, T., Pallesen, S., Achab, S., Kyrios, M., Higuchi, S., Fineberg, N. A., & Delfabbro, P. H. (2020). Screening and assessment tools for gaming disorder: A comprehensive systematic review. Clinical Psychology Review, 77, 101831. https://doi.org/10.1016/j.cpr.2020.101831
Király, O., Griffiths, M. D., & Demetrovics, Z. (2015a). Internet gaming disorder and the DSM-5: Conceptualization, debates, and controversies. Current Addiction Reports, 2, 254–262.
Király, O., Urbán, R., Griffiths, M. D., Ágoston, C., Nagygyörgy, K., Kökönyei, G., & Demetrovics, Z. (2015b). The mediating effect of gaming motivation between psychiatric symptoms and problematic online gaming: An online survey. Journal of Medical Internet Research, 17(4), e88. https://doi.org/10.2196/jmir.3515
Kircaburun, K., Pontes, H. M., Stavropoulos, V., & Griffiths, M. D. (2020). A brief psychological overview of disordered gaming. Current Opinion in Psychology, 36, 38–43. https://doi.org/10.1016/j.copsyc.2020.03.004
Ko, C. H., Liu, G. C., & Yen, J. Y. (2015). Functional imaging of internet gaming disorder. Internet Addiction: Neuroscientific Approaches and Therapeutical Interventions, 43–63. https://doi.org/10.1007/978-3-319-07242-5_3
Ko, C. H., Hsiao, S., Liu, G. C., Yen, J. Y., Yang, M. J., & Yen, C. F. (2010). The characteristics of decision making, potential to take risks, and personality of college students with Internet addiction. Psychiatry Research, 175(1–2), 121–125. https://doi.org/10.1016/j.psychres.2008.10.004
Ko, C. H., Liu, G. C., Hsiao, S., Yen, J. Y., Yang, M. J., Lin, W. C., Yen, C. F., & Chen, C. S. (2009). Brain activities associated with gaming urge of online gaming addiction. Journal of Psychiatric Research, 43(7), 739–747. https://doi.org/10.1016/j.jpsychires.2008.09.012
Koronczai, B., Urbán, R., Kökönyei, G., Paksi, B., Papp, K., Kun, B., Arnold, P., Kállai, J., & Demetrovics, Z. (2011). Confirmation of the three-factor model of problematic internet use on off-line adolescent and adult samples. Cyberpsychology, Behavior, and Social Networking, 14(11), 657–664. https://doi.org/10.1089/cyber.2010.0345
Kuo, S. Y., Chen, Y. T., Chang, Y. K., Lee, P. H., Liu, M. J., & Chen, S. R. (2018). Influence of internet addiction on executive function and learning attention in Taiwanese school-aged children. Perspectives in Psychiatric Care, 54(4), 495–500. https://doi.org/10.1111/ppc.12254
Kuss, D. J. (2013). Internet gaming addiction: Current perspectives. Psychology Research and Behavior Management, 6, 125. https://doi.org/10.2147/prbm.s39476
Kuss, D. J., & Griffiths, M. D. (2012). Internet and gaming addiction: A systematic literature review of neuroimaging studies. Brain Sciences, 2(3), 347–374.
Kuss, D. J., Griffiths, M. D., Karila, L., & Billieux, J. (2014). Internet addiction: A systematic review of epidemiological research for the last decade. Current Pharmaceutical Design, 20(25), 4026–4052. https://doi.org/10.2174/13816128113199990617
Kuss, D. J., Griffiths, M. D., & Pontes, H. M. (2017). Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field. Journal of Behavioral Addictions, 6(2), 103–109. https://doi.org/10.1556/2006.5.2016.062
Lai, C. M., Mak, K. K., Watanabe, H., Jeong, J., Kim, D., Bahar, N., Ramos, M., Chen, S. H., & Cheng, C. (2015). The mediating role of Internet addiction in depression, social anxiety, and psychosocial well-being among adolescents in six Asian countries: A structural equation modelling approach. Public Health, 129(9), 1224–1236. https://doi.org/10.1016/j.puhe.2015.07.031
Legault, M. C., Liu, H. Z., & Balodis, I. M. (2021). Neuropsychological constructs in gaming disorders: A systematic review. Current Behavioral Neuroscience Reports, 8(3), 59–76.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). Oxford University Press.
Li, L., Xu, D. D., Chai, J. X., Wang, D., Li, L., Zhang, L., Lu, L., Ng, C. H., Ungvari, G. S., Mei, S. L., & Xiang, Y. T. (2018). Prevalence of Internet addiction disorder in Chinese university students: A comprehensive meta-analysis of observational studies. Journal of Behavioral Addictions, 7(3), 610–623. https://doi.org/10.1556/2006.7.2018.53
Li, Q., Wang, Y., Yang, Z., Dai, W., Zheng, Y., Sun, Y., & Liu, X. (2020). Dysfunctional cognitive control and reward processing in adolescents with Internet gaming disorder. Psychophysiology, 57(2). https://doi.org/10.1111/psyp.13469
Lim, J. A., Lee, J. Y., Jung, H. Y., Sohn, B. K., Choi, S. W., Kim, Y. J., Kim, D. J., & Choi, J. S. (2016). Changes of quality of life and cognitive function in individuals with Internet gaming disorder: A 6-month follow-up. Medicine, 95(50), e5695. https://doi.org/10.1097/md.0000000000005695
Liu, G. C., Yen, J. Y., Chen, C. Y., Yen, C. F., Chen, C. S., Lin, W. C., & Ko, C. H. (2014). Brain activation for response inhibition under gaming cue distraction in internet gaming disorder. Kaohsiung Journal of Medical Sciences, 30(1), 43–51. https://doi.org/10.1016/j.kjms.2013.08.005
Luciana, M. (2003). Practitioner review: Computerized assessment of neuropsychological function in children: Clinical and research applications of the Cambridge Neuropsychological Testing Automated Battery (CANTAB). Journal of Child Psychology and Psychiatry, 44(5), 649–663. https://doi.org/10.1111/1469-7610.00152
Luijten, M., Meerkerk, G. J., Franken, I. H. A., van de Wetering, B. J. M., & Schoenmakers, T. M. (2015). An fMRI study of cognitive control in problem gamers. Psychiatry Research - Neuroimaging, 231(3), 262–268. https://doi.org/10.1016/j.pscychresns.2015.01.004
Luria, A. R. (1980). Higher cortical functions in man. In Higher Cortical Functions in Man (2nd ed.). Basic Books. https://doi.org/10.1007/978-1-4615-8579-4
Maillard-Wermelinger, A., Yeates, K. O., Gerry Taylor, H., Rusin, J., Bangert, B., Dietrich, A., Nuss, K., & Wright, M. (2010). Mild traumatic brain injury and executive functions in school-aged children. Developmental Neurorehabilitation, 12(5), 330–341. https://doi.org/10.3109/17518420903087251
Mapou, R. L. (1995). A Cognitive Framework for Neuropsychological Assessment. 295–337. https://doi.org/10.1007/978-1-4757-9709-1_11
Marín Vila, M., Carballo Crespo, J. L., & Coloma Carmona, A. (2018). Academic outcomes and cognitive performance in problematic Internet users. Rendimiento académico y cognitivo en el uso problemático de Internet. Adicciones, 30(2), 101–110. https://doi.org/10.20882/adicciones.844
Marshall, B., Warburton, W., & Kangas, M. (2022). Internet gaming disorder (IGD) in children: Clinical treatment insights. Annals of Case Reports, 7(2), 816. https://doi.org/10.29011/2574-7754.100816
Mauger, C., Lancelot, C., Roy, A., Coutant, R., Cantisano, N., & le Gall, D. (2018). Executive functions in children and adolescents with Turner syndrome: A systematic review and meta-analysis. Neuropsychology Review, 28(2), 188–215. https://doi.org/10.1007/s11065-018-9372-x
Messias, E., Castro, J., Saini, A., Usman, M., & Peeples, D. (2011). Sadness, suicide, and their association with video game and internet overuse among teens: Results from the Youth Risk Behavior Survey 2007 and 2009. Suicide and Life-Threatening Behavior, 41(3), 307–315. https://doi.org/10.1111/j.1943-278x.2011.00030.x
Metcalf, O., & Pammer, K. (2014). Impulsivity and related neuropsychological features in regular and addictive first person shooter gaming. Cyberpsychology, Behavior, and Social Networking, 17(3), 147–152. https://doi.org/10.1089/cyber.2013.0024
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100. https://doi.org/10.1006/COGP.1999.0734
Moisala, M., Salmela, V., Hietajärvi, L., Carlson, S., Vuontela, V., Lonka, K., Hakkarainen, K., Salmela-Aro, K., & Alho, K. (2017). Gaming is related to enhanced working memory performance and task-related cortical activity. Brain Research, 1655, 204–215. https://doi.org/10.1016/j.brainres.2016.10.027
Müller, S. M., Wegmann, E., García Arias, M., Bernabéu Brotóns, E., Marchena Giráldez, C., & Brand, M. (2021). Deficits in executive functions but not in decision making under risk in individuals with problematic social-network use. Comprehensive Psychiatry, 106. https://doi.org/10.1016/j.comppsych.2021.152228
Naskar, S., Victor, R., Nath, K., & Sengupta, C. (2016). “One level more:” A narrative review on internet gaming disorder. Industrial Psychiatry Journal, 25(2), 145. https://doi.org/10.4103/IPJ.IPJ_67_16
NHLBI (2019). Quality assessment tool for observational cohort and cross-sectional studies. https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools.
O’Connell, R. G., Dockree, P. M., Bellgrove, M. A., Turin, A., Ward, S., Foxe, J. J., & Robertson, I. H. (2009). Two types of action error: Electrophysiological evidence for separable inhibitory and sustained attention neural mechanisms producing error on go/no-go tasks. Journal of Cognitive Neuroscience, 21(1), 93–104. https://doi.org/10.1162/JOCN.2009.21008
O’Connor, S. R., Tully, M. A., Ryan, B., Bradley, J. M., Baxter, G. D., & McDonough, S. M. (2015). Failure of a numerical quality assessment scale to identify potential risk of bias in a systematic review: A comparison study. BMC Research Notes, 8(1), 1–7.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hrobjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Pan, Y. C., Chiu, Y. C., & Lin, Y. H. (2020). Systematic review and meta-analysis of epidemiology of internet addiction. Neuroscience & Biobehavioral Reviews, 118, 612–622. https://doi.org/10.1016/J.NEUBIOREV.2020.08.013
Park, J. H., Hong, J. S., Han, D. H., Min, K. J., Lee, Y. S., Kee, B. S., & Kim, S. M. (2017). Comparison of QEEG findings between adolescents with attention deficit hyperactivity disorder (ADHD) without comorbidity and ADHD comorbid with internet gaming disorder. Journal of Korean Medical Science, 32(3), 514–521. https://doi.org/10.3346/JKMS.2017.32.3.514
Park, M., Jung, M. H., Lee, J., Choi, A. R., Chung, S. J., Kim, B., Kim, D. J., & Choi, J. S. (2020). Neurophysiological and cognitive correlates of error processing deficits in internet gaming disorder. Cerebral Cortex, 30(9), 4914–4921. https://doi.org/10.1093/cercor/bhaa083
Park, M. H., Park, E. J., Choi, J., Chai, S., Lee, J. H., Lee, C., & Kim, D. J. (2011). Preliminary study of Internet addiction and cognitive function in adolescents based on IQ tests. Psychiatry Research, 190(2–3), 275–281. https://doi.org/10.1016/j.psychres.2011.08.006
Park, S., Ryu, H., Lee, J. Y., Choi, A., Kim, D. J., Kim, S. N., & Choi, J. S. (2018). Longitudinal changes in neural connectivity in patients with internet gaming disorder: A resting-state EEG coherence study. Frontiers in Psychiatry, 9, 252.
Paulus, F. W., Ohmann, S., von Gontard, A., & Popow, C. (2018). Internet gaming disorder in children and adolescents: A systematic review. Developmental Medicine and Child Neurology, 60(7), 645–659. https://doi.org/10.1111/dmcn.13754
Pawlikowski, M., & Brand, M. (2011). Excessive Internet gaming and decision making: Do excessive World of Warcraft players have problems in decision making under risky conditions? Psychiatry Research, 188(3), 428–433. https://doi.org/10.1016/j.psychres.2011.05.017
Pick, J. L., Nakagawa, S., & Noble, D. W. A. (2019). Reproducible, flexible and high-throughput data extraction from primary literature: The metaDigitise r package. Methods in Ecology and Evolution, 10(3), 426–431. https://doi.org/10.1111/2041-210X.13118
Pontes, H. M. (2017). Investigating the differential effects of social networking site addiction and Internet gaming disorder on psychological health. Journal of Behavioral Addictions, 6(4), 601–610.
Pontes, H. M., & Griffiths, M. D. (2014). Internet addiction disorder and internet gaming disorder are not the same. Journal of Addiction Research & Therapy, 5(4).
Pontes, H. M., & Griffiths, M. D. (2017). Internet Disorder Scale-15 (IDS-15). APA PsycTests. https://doi.org/10.1037/t58316-000
Pontes, H. M., Kuss, D. J., & Griffiths, M. D. (2017). Psychometric assessment of internet gaming disorder in neuroimaging studies: A systematic review. In C. Montag & M. Reuter (Eds.), Internet addiction. Studies in neuroscience, psychology and behavioral economics. Springer, Cham. https://doi.org/10.1007/978-3-319-46276-9_11
Rabaglia, C. D., Marcus, G. F., & Lane, S. P. (2011). What can individual differences tell us about the specialization of function? Cognitive Neuropsychology, 28(3–4), 288–303.
Robinson, T. L., Gogniat, M. A., & Miller, L. S. (2021). Frailty and cognitive function in older adults: A systematic review and meta-analysis of cross-sectional studies. Neuropsychology Review, 32(2), 274–293. https://doi.org/10.1007/s11065-021-09497-1
Rohatgi, A. (2021). WebPlotDigitizer (Version 4.5). https://automeris.io/WebPlotDigitizer
Russell, E. W., Russell, S. L. K., & Hill, B. D. (2005). The fundamental psychometric status of neuropsychological batteries. Archives of Clinical Neuropsychology, 20(6), 785–794. https://doi.org/10.1016/J.ACN.2005.05.001
Salo, E., Salmela, V., Salmi, J., Numminen, J., & Alho, K. (2017). Brain activity associated with selective attention, divided attention and distraction. Brain Research, 1664, 25–36. https://doi.org/10.1016/j.brainres.2017.03.021
Samaha, M., & Hawi, N. S. (2016). Relationships among smartphone addiction, stress, academic performance, and satisfaction with life. Computers in Human Behavior, 57, 321–325. https://doi.org/10.1016/J.CHB.2015.12.045
Sanderson, S., Tatt, I. D., & Higgins, J. (2007). Tools for assessing quality and susceptibility to bias in observational studies in epidemiology: A systematic review and annotated bibliography. International Journal of Epidemiology, 36(3), 666–676.
Schettler, L., Thomasius, R., & Paschke, K. (2022). Neural correlates of problematic gaming in adolescents: A systematic review of structural and functional magnetic resonance imaging studies. Addiction Biology, 27(1), e13093. https://doi.org/10.1111/adb.13093.
Schmitgen, M. M., Horvath, J., Mundinger, C., Wolf, N. D., Sambataro, F., Hirjak, D., Kubera, K. M., Koenig, J., & Wolf, R. C. (2020). Neural correlates of cue reactivity in individuals with smartphone addiction. Addictive Behaviors, 108, 106422. https://doi.org/10.1016/j.addbeh.2020.106422
Schoenberg, M. R., & Scott, J. G. (2011). The Little Black Book of Neuropsychology: A Syndrome-Based Approach. Springer.
Seo, H. S., Jeong, E. K., Choi, S., Kwon, Y., Park, H. J., & Kim, I. (2020). Changes of neurotransmitters in youth with internet and smartphone addiction: A comparison with healthy controls and changes after cognitive behavioral therapy. American Journal of Neuroradiology, 41(7), 1293–1301. https://doi.org/10.3174/AJNR.A6632
Shaffer, H. J., Hall, M. N., & vander Bilt, J. (2000). “Computer Addiction”: A critical consideration. American Journal of Orthopsychiatry, 70(2), 162–168. https://doi.org/10.1037/H0087741
Shafiee-Kandjani, A. R., Mohammadzadeh, Z., Amiri, S., Arfaie, A., Sarbakhsh, P., & Safikhanlou, S. (2020). Comparison of working memory and executive function in patients with internet addiction disorder, attention deficit hyperactivity disorder, and normal individuals. International Journal of High Risk Behaviors and Addiction, 9(2), 1–8. https://doi.org/10.5812/ijhrba.98997
Shapira, N. A., Lessig, M. C., Goldsmith, T. D., Szabo, S. T., Lazoritz, M., Gold, M. S., & Stein, D. J. (2003). Problematic internet use: Proposed classification and diagnostic criteria. Depression and Anxiety, 17(4), 207–216. https://doi.org/10.1002/da.10094
Shin, N. Y., Lee, T. Y., Kim, E., & Kwon, J. S. (2014). Cognitive functioning in obsessive-compulsive disorder: A meta-analysis. Psychological Medicine, 44(6), 1121–1130. https://doi.org/10.1017/S0033291713001803
Sigman, A. (2017). Screen dependency disorders: A new challenge for child neurology. Journal of the International Child Neurology Association, 1(1). https://doi.org/10.17724/jicna.2017.119
Snodgrass, J. G., Lacy, M. G., Dengah, F., Eisenhauer, S., Batchelder, G., & Cookson, R. J. (2014). A vacation from your mind: Problematic online gaming is a stress response. Computers in Human Behavior, 38, 248–260. https://doi.org/10.1016/J.CHB.2014.06.004
Spooner, D. M., & Pachana, N. A. (2006). Ecological validity in neuropsychological assessment: A case for greater consideration in research with neurologically intact populations. Archives of Clinical Neuropsychology, 21(4), 327–337. https://doi.org/10.1016/J.ACN.2006.04.004
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A Compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.).
Streiner, D. L. (2010). Diagnosing tests: Using and misusing diagnostic and screening tests. Journal of Personality Assessment, 81(3), 209–219. https://doi.org/10.1207/S15327752JPA8103_03
Sugaya, N., Shirasaka, T., Takahashi, K., & Kanda, H. (2019). Bio-psychosocial factors of children and adolescents with internet gaming disorder: A systematic review. BioPsychoSocial Medicine, 13, 3. https://doi.org/10.1186/s13030-019-0144-5
Sun, D. L., Chen, Z. J., Ma, N., Zhang, X. C., Fu, X. M., & Zhang, D. R. (2009). Decision-making and prepotent response inhibition functions in excessive internet users. CNS Spectrums, 14(2), 75–81. https://doi.org/10.1017/S1092852900000225
Takeuchi, H., Taki, Y., Hashizume, H., Asano, K., Asano, M., Sassa, Y., Yokota, S., Kotozaki, Y., Nouchi, R., & Kawashima, R. (2016). Impact of videogame play on the brain’s microstructural properties: Cross-sectional and longitudinal analyses. Molecular Psychiatry, 21(12), 1781–1789.
Tang, Z., Zhang, H., Yan, A., & Qu, C. (2017). Time is money: The decision making of smartphone high users in gain and loss intertemporal choice. Frontiers in Psychology, 363.
Taylor, M. J., & Heaton, R. K. (2001). Sensitivity and specificity of WAIS-III/WMS-III demographically corrected factor scores in neuropsychological assessment. Journal of the International Neuropsychological Society, 7(7), 867–874. https://doi.org/10.1017/s1355617701777107
Tekin, A., Yetkin, A., Adıgüzel, S., & Akman, H. (2018). Evaluation of Stroop and Trail-Making Tests performance in university students with internet addiction. Anatolian Journal of Psychiatry, 19(6), 593–598.
Trisolini, D. C., Petilli, M. A., & Daini, R. (2018). Is action video gaming related to sustained attention of adolescents? Quarterly Journal of Experimental Psychology, 71(5), 1033–1039. https://doi.org/10.1080/17470218.2017.1310912
Unsworth, N., & Engle, R. W. (2007). On the division of short-term and working memory: An examination of simple and complex span and their relation to higher order abilities. Psychological Bulletin, 133(6), 1038–1066. https://doi.org/10.1037/0033-2909.133.6.1038
Unsworth, N., Redick, T. S., Heitz, R. P., Broadway, J. M., & Engle, R. W. (2009). Complex working memory span tasks and higher-order cognition: A latent-variable analysis of the relationship between processing and storage. Memory, 17(6), 635–654. https://doi.org/10.1080/09658210902998047
van Zomeren, A. H., & Brouwer, W. H. (1992). Assessment of attention. In J. R. Crawford, D. M. Parker, & W. W. McKinlay (Eds.), A Handbook of Neuropsychological Assessment (1st ed., pp. 241–266). Lawrence Erlbaum Associates.
Vukosavljevic-Gvozden, T., Filipovic, S., & Opacic, G. (2015). The mediating role of symptoms of psychopathology between irrational beliefs and internet gaming addiction. Journal of Rational - Emotive and Cognitive - Behavior Therapy, 33(4), 387–405. https://doi.org/10.1007/S10942-015-0218-7
Wacks, Y., & Weinstein, A. M. (2021). Excessive smartphone use is associated with health problems in adolescents and young adults. Frontiers in Psychiatry, 12, 762. https://doi.org/10.3389/fpsyt.2021.669042
Wagner, S., Müller, C., Helmreich, I., Huss, M., & Tadić, A. (2015). A meta-analysis of cognitive functions in children and adolescents with major depressive disorder. European Child and Adolescent Psychiatry, 24(1), 5–19. https://doi.org/10.1007/s00787-014-0559-2
Wang, H., Jin, C., Yuan, K., Shakir, T. M., Mao, C., Niu, X., Niu, C., Guo, L., & Zhang, M. (2015). The alteration of gray matter volume and cognitive control in adolescents with internet gaming disorder. Frontiers in Behavioral Neuroscience, 9. https://doi.org/10.3389/fnbeh.2015.00064
Wang, H., Sun, Y., Lan, F., & Liu, Y. (2020). Altered brain network topology related to working memory in internet addiction. Journal of Behavioral Addictions, 9(2), 325–338. https://doi.org/10.1556/2006.2020.00020
Wang, X., & Cheng, Z. (2020). Cross-sectional studies: Strengths, weaknesses, and recommendations. Chest, 158(1), S65–S71. https://doi.org/10.1016/j.chest.2020.03.012
Wang, Y., Wu, L., Zhou, H., Lin, X., Zhang, Y., Du, X., & Dong, G. (2017). Impaired executive control and reward circuit in Internet gaming addicts under a delay discounting task: Independent component analysis. European Archives of Psychiatry and Clinical Neuroscience, 267(3), 245–255. https://doi.org/10.1007/s00406-016-0721-6
Warburton, W. A. (2021). Should internet addiction and gaming addiction be categorized as disorders? Masters of Media: Controversies and Solutions, 43–58.
Warburton, W. A., Parkes, S., & Sweller, N. (2022). Internet gaming disorder: Evidence for a risk and resilience approach. International Journal of Environmental Research and Public Health., 19, 5587. https://doi.org/10.3390/ijerph19095587
Warburton, W. A., & Tam, P. (2019). Untangling the weird, wired web of gaming disorder and its classification. HealthEd Expert Monograph 43. Sydney: HealthEd. Available at: https://www.healthed.com.au/wp-content/uploads/2019/09/043-Tam-Warburton-Gaming-Disorder-Final.pdf
Weinstein, A. (2015). Internet addiction and its treatment. Medical Psychology in Russia, 4(33), 3.
Weinstein, A., & Lejoyeux, M. (2020). Neurobiological mechanisms underlying internet gaming disorder. Dialogues in Clinical Neuroscience, 22(2), 113–126. https://doi.org/10.31887/DCNS.2020.22.2/aweinstein
Weinstein, A., Livny, A., & Weizman, A. (2017). New developments in brain research of internet and gaming disorder. Neuroscience & Biobehavioral Reviews, 75, 314–330.
Weinstein, A., Yaacov, Y., Manning, M., Danon, P., & Weizman, A. (2015). Internet addiction and attention deficit hyperactivity disorder among schoolchildren. The Israel Medical Association Journal, 17(12), 731–734. https://europepmc.org/article/med/26897972
Weinstein, A., & Weizman, A. (2012). Emerging association between addictive gaming and attention-deficit/ hyperactivity disorder. Current Psychiatry Reports, 14(5), 590–597. https://doi.org/10.1007/s11920-012-0311-x
Weinstein, A. M. (2010). Computer and video game addiction—A comparison between game users and non-game users. The American Journal of Drug and Alcohol Abuse, 36(5), 268–276.
Weinstein, A. M. (2022). Problematic social networking site use-effects on mental health and the brain. Frontiers in Psychiatry, 13, 1106004.
Wessel, J. R. (2018). Prepotent motor activity and inhibitory control demands in different variants of the go/no-go paradigm. Psychophysiology, 55(3), e12871. https://doi.org/10.1111/psyp.12871
Wittek, C. T., Finserås, T. R., Pallesen, S., Mentzoni, R. A., Hanss, D., Griffiths, M. D., & Molde, H. (2016). Prevalence and predictors of video game addiction: A study based on a national representative sample of gamers. International Journal of Mental Health and Addiction, 14(5), 672–686. https://doi.org/10.1007/S11469-015-9592-8
Wölfling, K., Duven, E., Wejbera, M., Beutel, M. E., & Müller, K. W. (2020). Discounting delayed monetary rewards and decision making in behavioral addictions – A comparison between patients with gambling disorder and internet gaming disorder. Addictive Behaviors, 108. https://doi.org/10.1016/j.addbeh.2020.106446
Wollman, S. C., Hauson, A. O., Hall, M. G., Connors, E. J., Allen, K. E., Stern, M. J., Stephan, R. A., Kimmel, C. L., Sarkissians, S., Barlet, B. D., & Flora-Tostado, C. (2019). Neuropsychological functioning in opioid use disorder: A research synthesis and meta-analysis. The American Journal of Drug and Alcohol Abuse, 45(1), 11–25.
Wolniewicz, C. A., Tiamiyu, M. F., Weeks, J. W., & Elhai, J. D. (2018). Problematic smartphone use and relations with negative affect, fear of missing out, and fear of negative and positive evaluation. Psychiatry Research, 262, 618–623.
World Health Organization. (2019). International statistical classification of diseases and related health problems (11th ed.). https://icd.who.int/.
Wu, L. L., Zhu, L., Shi, X. H., Zhou, N., Wang, R., Liu, G. Q., Song, K. R., Xu, L. X., Potenza, M. N., & Zhang, J. T. (2020). Impaired regulation of both addiction-related and primary rewards in individuals with internet gaming disorder. Psychiatry Research, 286. https://doi.org/10.1016/j.psychres.2020.112892
Xing, L., Yuan, K., Bi, Y., Yin, J., Cai, C., Feng, D., Li, Y., Song, M., Wang, H., Yu, D., Xue, T., Jin, C., Qin, W., & Tian, J. (2014). Reduced fiber integrity and cognitive control in adolescents with internet gaming disorder. Brain Research, 1586, 109–117. https://doi.org/10.1016/j.brainres.2014.08.044
Yang, S. C., & Tung, C. J. (2007). Comparison of Internet addicts and non-addicts in Taiwanese high school. Computers in Human Behavior, 23(1), 79–96. https://doi.org/10.1016/j.chb.2004.03.037
Yao, M. Z., & Zhong, Z. J. (2014). Loneliness, social contacts and Internet addiction: A cross-lagged panel study. Computers in Human Behavior, 30, 164–170. https://doi.org/10.1016/j.chb.2013.08.007
Yao, Y., Liu, L., Ma, S., Shi, X., Zhou, N., Zhang, J., & Potenza, M. N. (2017). Functional and structural neural alterations in Internet gaming disorder: A systematic review and meta-analysis. Neuroscience and Biobehavioral Reviews, 83, 313–324. https://doi.org/10.1016/j.neubiorev.2017.10.029
Yao, Y. W., Chen, P. R., Li, S., Wang, L. J., Zhang, J. T., Yip, S. W., Chen, G., Deng, L. Y., Liu, Q. X., & Fang, X. Y. (2015). Decision-making for risky gains and losses among college students with internet gaming disorder. PLoS ONE, 10(1). https://doi.org/10.1371/journal.pone.0116471
Yao, Y. W., Zhang, J. T., Fang, X. Y., Liu, L., & Potenza, M. N. (2022). Reward-related decision-making deficits in internet gaming disorder: A systematic review and meta-analysis. Addiction, 117(1), 19–32. https://doi.org/10.1111/add.15518
Yen, J. Y., Yen, C. F., Chen, C. S., Tang, T. C., & Ko, C. H. (2009). The association between adult ADHD symptoms and internet addiction among college students: The gender difference. Cyberpsychology & Behavior, 12(2), 187–191.
Youh, J., Hong, J. S., Han, D. H., Chung, U. S., Min, K. J., Lee, Y. S., & Kim, S. M. (2017). Comparison of electroencephalography (EEG) coherence between major depressive disorder (MDD) without comorbidity and MDD comorbid with Internet gaming disorder. Journal of Korean Medical Science, 32(7), 1160–1165.
Young, K. S. (2004). Internet addiction: A new clinical phenomenon and its consequences. American Behavioral Scientist, 48(4), 402–415. https://doi.org/10.1177/0002764204270278
Young, K. S., & Rogers, R. C. (1998). The relationship between depression and internet addiction. Cyberpsychology and Behavior, 1(1), 25–28. https://doi.org/10.1089/cpb.1998.1.25
Yu, S., & Sussman, S. (2020). Does smartphone addiction fall on a continuum of addictive behaviors? International Journal of Environmental Research and Public Health, 17(2), 422.
Yuan, K., Qin, W., Yu, D., Bi, Y., Xing, L., Jin, C., & Tian, J. (2016). Core brain networks interactions and cognitive control in internet gaming disorder individuals in late adolescence/early adulthood. Brain Structure and Function, 221(3), 1427–1442. https://doi.org/10.1007/s00429-014-0982-7
Yuan, K., Yu, D., Cai, C., Feng, D., Li, Y., Bi, Y., Liu, J., Zhang, Y., Jin, C., Li, L., Qin, W., & Tian, J. (2017). Frontostriatal circuits, resting state functional connectivity and cognitive control in internet gaming disorder. Addiction Biology, 22(3), 813–822. https://doi.org/10.1111/adb.12348
Zakzanis, K. K. (2001). Statistics to tell the truth, the whole truth, and nothing but the truth: Formulae, illustrative numerical examples, and heuristic interpretation of effect size analyses for neuropsychological researchers. Archives of Clinical Neuropsychology, 16(7), 653–667.
Zhang, Y., Lin, X., Zhou, H., Xu, J., Du, X., & Dong, G. (2016). Brain activity toward gaming-related cues in Internet gaming disorder during an addiction stroop task. Frontiers in Psychology, 7, 714.
Zhou, Z., Li, C., & Zhu, H. (2013). An error-related negativity potential investigation of response monitoring function in individuals with Internet addiction disorder. Frontiers in Behavioral Neuroscience, 7, 131. https://doi.org/10.3389/fnbeh.2013.00131
Zhou, Z., Zhou, H., & Zhu, H. (2016). Working memory, executive function and impulsivity in Internet-addictive disorders: A comparison with pathological gambling. Acta Neuropsychiatrica, 28(2), 92–100. https://doi.org/10.1017/neu.2015.54
Zhou, Z., Zhu, H., Li, C., & Wang, J. (2014). Internet addictive individuals share impulsivity and executive dysfunction with alcohol-dependent patients. Frontiers in Behavioral Neuroscience, 8(288). https://doi.org/10.3389/fnbeh.2014.00288
Zhou, Z. H., Yuan, G. Z., Yao, J. J., Li, C., & Cheng, Z. H. (2010). An event-related potential investigation of deficient inhibitory control in individuals with pathological Internet use. Acta Neuropsychiatrica, 22(5), 228–236. https://doi.org/10.1111/j.1601-5215.2010.00444.x
Zhu, X., & Xiong, Z. (2022). Exploring association between social media addiction, fear of missing out, and self-presentation online among university students: A cross-sectional study. Frontiers in Psychiatry, 13, 896762.
Open Access funding enabled and organized by CAUL and its Member Institutions. Research conducted by Michoel L. Moshel is supported by an Australian Government Research Training Program Scholarship.
The authors declare no competing interests.
Moshel, M.L., Warburton, W.A., Batchelor, J. et al. Neuropsychological Deficits in Disordered Screen Use Behaviours: A Systematic Review and Meta-analysis. Neuropsychol Rev (2023). https://doi.org/10.1007/s11065-023-09612-4