Letter to the Editor

The months since the beginning of the COVID-19 pandemic have drastically altered the world, and the field of psychology research has not been immune to these changes. The pandemic has called for a paradigm shift in the way psychological services are delivered (Gruber et al., 2021), a shift that has necessitated an increase in online research activities such as the administration of assessments. With so much research and service delivery moving online, the question arises of how valid and reliable online questionnaires are compared with their paper-and-pencil counterparts. Previous work on test-retest reliability has shown that online questionnaires can produce results equivalent to their paper-and-pencil versions (e.g., Vallejo et al., 2007), but more research is needed to examine whether the same holds for other assessments.

One such psychological questionnaire is the Autism-Spectrum Quotient (AQ-50), which is used to assess autistic traits in adults (Baron-Cohen et al., 2001). It comprises five domain subscales covering characteristics of individuals with autism spectrum disorder, such as communication, attention switching, social interaction, and imagination. Although a 10-item short form (AQ-10) has been developed by sampling 10 of the original 50 questions (Allison et al., 2012), its validity and reliability for online subject-pool screening are not known. The current study attempted to fill this gap.

From a subject pool of undergraduate psychology students (N = 1,060; mean age = 19.4 years, SD = 2.54; 42.9% male), 50 students signed up for course credit and were sampled to assess the convergent validity of the AQ-10 against the AQ-50 and the test-retest reliability of the AQ-10. Specifically, students completed the AQ-10 online at Time 1 and then completed the AQ-50 in person at Time 2 (mean interval: 56.3 days; maximum interval: 88 days). The 10 items administered in the AQ-10 at Time 1 were extracted from the AQ-50 responses at Time 2. Convergent validity, test-retest reliability, and Cronbach's alpha were then calculated, as sketched in the example below.
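For readers who wish to reproduce these analyses, a minimal sketch in Python follows. The variable names, item positions, and simulated responses are illustrative placeholders rather than the study's actual data or scoring key; the alpha function implements the standard variance-decomposition formula.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Simulated stand-in responses (the real analysis used the 50 students' data).
rng = np.random.default_rng(0)
aq50_t2_items = rng.integers(0, 2, size=(50, 50))   # Time 2 AQ-50, binary scoring
aq10_t1_items = rng.integers(0, 2, size=(50, 10))   # Time 1 online AQ-10
aq10_idx = [2, 4, 9, 15, 24, 27, 30, 35, 40, 44]    # hypothetical AQ-10 item positions

aq10_t1 = aq10_t1_items.sum(axis=1)                 # Time 1 AQ-10 totals
aq50_t2 = aq50_t2_items.sum(axis=1)                 # Time 2 AQ-50 totals
aq10_t2 = aq50_t2_items[:, aq10_idx].sum(axis=1)    # AQ-10 items extracted from the AQ-50

r_convergent, p_convergent = stats.pearsonr(aq10_t1, aq50_t2)  # convergent validity
r_retest, p_retest = stats.pearsonr(aq10_t1, aq10_t2)          # test-retest reliability
alpha = cronbach_alpha(aq50_t2_items)                          # internal consistency
```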

The results revealed a moderate correlation between the AQ-10 and the AQ-50 (r = .554), which was significantly lower than that reported by Allison et al. (2012) (r = .92). We also found poor test-retest reliability of the AQ-10 between Time 1 and Time 2 (r = .277), which is likewise significantly lower than the test-retest reliability of the AQ-50 (r = .70) reported by Baron-Cohen et al. (2001). A Cronbach's alpha analysis of the AQ-50 yielded a result (α = .767) comparable to the original study's (subscale αs ranged from .62 to .77; cf. Baron-Cohen et al., 2001).
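The letter does not state which procedure established these differences as significant; a common choice for comparing correlations drawn from independent samples is Fisher's r-to-z test, sketched below. The reference sample size n2 is a placeholder, not the value reported by Allison et al. (2012).

```python
import numpy as np
from scipy import stats

def compare_independent_rs(r1, n1, r2, n2):
    """Two-tailed Fisher r-to-z test for two independent Pearson correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transformation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                  # two-tailed p-value
    return z, p

# Convergent validity: current study (r = .554, n = 50) vs. Allison et al. (2012)
# (r = .92). n2 = 1000 is a placeholder; substitute the comparison study's actual n.
z, p = compare_independent_rs(0.554, 50, 0.92, n2=1000)
```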

Our results indicate that the AQ-10 administered online had less than satisfactory test-retest reliability for measuring autistic traits in university students. This is surprising, since previous studies have reported stable trait measurements across repeated administrations of the full version (Baron-Cohen et al., 2001). The result is a good reminder to researchers that before adopting questionnaires online, especially short versions of screening questionnaires, a thorough evaluation of any possibly compromised psychometric properties is necessary. Since the AQ is among the most commonly used research and clinical screening tools to date, future investigations may look into adopting other short forms that include more items, e.g., the AQ-28 (Cheung et al., 2022; Hoekstra et al., 2011), in online studies (Fig. 1).

Fig. 1

Comparison of convergent validity and test-retest reliability between the current study (AQ-10 administered online) and previous studies. *p < .05, ***p < .001