Rapidly developing technology and the ubiquity of the Internet have changed people's reading practices, rendering the traditional view of literacy insufficient (Hartman, Morsink, & Zheng, 2010). Changes in the reading practices and skills needed in a modern society are already reflected in many nations' educational standards or curricula (Australian Curriculum, Assessment and Reporting Authority [ACARA], n.d.; The Finnish National Board of Education, 2016) as well as in international assessments (Fraillon, Ainley, Schulz, Friedman, & Gebhardt, 2013; Organisation for Economic Co-operation and Development [OECD], 2013a). Even in daily school life, utilizing the Internet for learning is a common practice: 95% of surveyed teachers in the United States reported doing research or searching for information online as a typical school assignment (Purcell et al., 2012). Because of the increased role of the Internet in schoolwork and in other areas of life, educators should ensure that all students acquire sufficient skills to read and learn on the Internet.

Reading to learn from online information, often referred to as online research and comprehension (ORC), requires, in particular, skills and strategies for locating, evaluating, and synthesizing online information as well as for communicating one's learning to others (Leu, Kinzer, Coiro, Castek, & Henry, 2013b). Even though research has begun to identify the specific skills and strategies important when reading online (e.g., Brand-Gruwel, Wopereis, & Vermetten, 2005; Coiro & Dobler, 2007), there is still a need to better understand how traditional reading skills contribute to students' performance when they solve problems with online information. Understanding the consequences of poor literacy skills would help educators to design tasks and supports for students with varying literacy skills. As such, this study examined how different literacy skills, namely reading fluency, written spelling, and reading comprehension, predict sixth graders' ORC performance. To achieve as thorough an understanding as possible of the aspects related to ORC performance, we also included prior knowledge and nonverbal reasoning in our examination, as prior knowledge and inferential processes are seen as integral components of reading comprehension (McNamara & Magliano, 2009). Finally, because gender differences in literacy skills have been widely recognized (e.g., OECD, 2013a), gender was also included in our examination to clarify its role in ORC performance beyond reading ability.

Online research and comprehension

The present study is framed using an online research and comprehension framework (Leu et al., 2013b), which identifies five crucial component skills: (1) identifying an important question or a problem to solve, (2) locating information, (3) evaluating information critically, (4) synthesizing information, and (5) communicating information (see also Brand-Gruwel et al., 2005; Fraillon et al., 2013; International ICT Literacy Panel, 2002).

A reader begins online research by identifying a question to answer or problem to solve. In school or assessment contexts, the question or problem is often given to students. However, students are still required to build an understanding of the given task (Britt, Rouet, & Durik, 2018) that helps students to locate relevant information to solve the problem. Locating information requires the ability to form adequate search queries for search engines (Cho & Afflerbach, 2015) and to analyze search engine results (Rouet, Ros, Goumi, Macedo-Rouet, & Dinet, 2011). Without these skills, students are unable to use online information efficiently for their learning (Leu, Forzani, Burlingame, Kulikowich, Sedransk, Coiro, & Kennedy, 2013a).

Because a considerable amount of information on the Internet appears to be questionable (Britt & Gabrys, 2002) or commercially biased (Lewandowski, 2011), an ability to critically evaluate online information is essential. To make informed judgements of the quality of online information, readers need to evaluate the author's expertise and the trustworthiness of online resources (Flanagin & Metzger, 2008; Pérez et al., 2018).

The fourth component skill, synthesizing information, refers to collecting ideas across resources and integrating these ideas into a versatile and coherent representation (Bråten, Britt, Strømsø, & Rouet, 2011; Cho & Afflerbach, 2017). A high-quality synthesis also requires readers to compare and contrast information and different perspectives presented in multiple online resources (Cho & Afflerbach, 2015; Rouet, 2006). Finally, communicating information that one has learned requires good argumentation skills and the ability to address a specific audience. Presenting well-justified arguments requires practice, especially when the information is controversial (Driver, Newton, & Osborne, 2000). Audience awareness may include components such as greeting the reader, addressing one's message to that reader, and using correct language (Lapp, Shea, & Wolsey, 2011), as well as properly concluding the writing (Berggren, 2014), all of which reflect a knowledge of communicative conventions.

A recent study (Kiili, Leu, Utriainen, Coiro, Kanniainen, Tolvanen, Lohvansuu, & Leppänen, 2018b) confirmed the basic structure of the four component skills (locate, evaluate, synthesize, and communicate) while also suggesting additional complexity in the skill structure. First, evaluation of information was divided into two components: confirming the credibility of information and questioning the credibility of information. It seems that questioning a source that is, for example, biased or lacking in expertise is more difficult for students than confirming the credibility of a source with relevant expertise (Kiili, Leu, Marttunen, Hautala, & Leppänen, 2018a; Pérez et al., 2018). Second, synthesizing was divided into two separate components: identifying main ideas from a single online text, and synthesizing information across multiple online texts. This suggests that the process of building coherent intertextual relationships across multiple online texts requires somewhat different skills than building coherence within a single online text (Cho & Afflerbach, 2017).

Literacy skills: reading fluency, written spelling, and reading comprehension

Reading has been defined as consisting of two main skills, decoding and comprehension (Gough & Tunmer, 1986), which have been considered to be interconnected via reading fluency (LaBerge & Samuels, 1974). At the lower level of literacy skill development, the letter–sound decoding ability enables readers to process graphic symbols and to identify single words by connecting the graphic symbol strings, that is, letters or their clusters, to spoken word representations (Kintsch & Rawson, 2005). In addition to decoding, written spelling requires the ability to phonologically recode spoken words into grapheme strings. It has also been suggested that this process further develops the word identification system by strengthening the words' orthographic representations (Perfetti & Stafura, 2014; Share, 2008). As the basic decoding skill becomes more effective and automatized, reading fluency increases, that is, the ability to read text accurately and rapidly (Meyer & Felton, 1999; National Reading Panel, National Institute of Child Health & Human Development, 2000).

The development of fluency and effortless word recognition skills reduces the amount of attentional resources allocated for decoding and improves reading comprehension, which is a higher level of literacy skill (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Tilstra, McMaster, Van den Broek, Kendeou, & Rapp, 2009). In reading comprehension, readers construct a text base model by combining and interrelating the word meanings of the text and by recognizing the wider topics within the entire text (Kintsch, 1998; Kintsch & Rawson, 2005). According to the lexical quality hypothesis (Perfetti, 2007), this kind of word-to-text integration requires a sufficient quality of word representations as well as the ability to efficiently retrieve word meanings from long-term memory (Perfetti & Stafura, 2014).

Finally, to build a deeper understanding of the text, readers need to construct a situational model by integrating the text base information with their prior knowledge (Kintsch, 1998). However, some readers have difficulties with accurate and fluent word recognition, as well as with written spelling and decoding, which may also lead to reading comprehension difficulties (Perfetti, 2007). These kinds of difficulties are defined as the lack of the skills that allow readers to construct meaning from the text (Fletcher, Lyon, Fuchs, & Barnes, 2007).

The relation of prior knowledge, reasoning, and gender to literacy skills

Prior topic knowledge plays an important role in comprehension of single texts (Cromley, Snyder-Hogan, & Luciw-Dubas, 2010; Tarchi, 2010), hypertexts (Amadieu, Tricot, & Mariné, 2009), and multiple texts (Bråten, Ferguson, Anmarkrud, & Strømsø, 2013). Prior topic knowledge may aid in navigation of networked texts (Amadieu et al., 2009; Salmerón, Cañas, Kintsch, & Fajardo, 2005); it may also support intertextual inferencing (Strømsø & Bråten, 2009) as well as the evaluation of information during online research (Forzani, 2016). However, Coiro (2011) found that even though prior topic knowledge played an important role in online research and comprehension performance of students with low online reading skills, it did not influence the performance of students with high online reading skills. Further, a recent study showed that even though prior topic knowledge was associated with knowledge acquisition after engaging with multiple web pages on a socio-scientific topic, it was not associated with multiple source integration (Andresen, Anmarkrud, & Bråten, 2018). These results suggest that prior knowledge is also an important factor in online research; however, further research is needed to better understand its role.

In addition to prior topic knowledge, theoretical models of reading specify inferential processes as integral to reading comprehension (Kendeou, McMaster, & Christ, 2016); as such, students with low verbal and nonverbal reasoning skills are more likely to have comprehension difficulties (Snowling, 2013). Nonverbal reasoning has been shown to have direct and indirect effects on reading comprehension (Swart et al., 2017); it has also been shown to support young at-risk readers' development of comprehension skills (Peng et al., 2018). Online research may require reasoning skills beyond those required for reading a single text on paper. Readers need to make inferences about the usefulness of a web page from the incomplete information provided by search engines (Coiro & Dobler, 2007), intertextual inferences across online texts (Strømsø & Bråten, 2009), and source-content inferences to judge the quality of information (Britt et al., 2018). Reasoning skills are particularly needed when reading tasks, such as complex online research tasks, require critical thinking and problem solving (Adlof, Catts, & Lee, 2010).

Gender differences have also been an area of interest in literacy research. Girls have been shown to have an advantage in reading fluency and reading comprehension in several studies (Logan & Johnston, 2009; Torppa, Eklund, Sulkunen, Niemi, & Ahonen, 2018), including large-scale international studies, such as the Programme for International Student Assessment (OECD, 2013b). Similar patterns have also been observed in some ORC studies (Forzani, 2016; Salmerón, García, & Vidal-Abarca, 2018).

The present study

In the current study, we set out to examine how literacy skills (reading fluency, written spelling, and reading comprehension), prior topic knowledge, nonverbal reasoning, and gender are related to students' ORC performance. We expected that reading comprehension, prior knowledge, nonverbal reasoning, and gender would each independently contribute to explaining the variance in ORC performance (Hypothesis 1). Studies using similar types of online reading tasks have found considerable overlap between the skills needed in reading comprehension tasks and those needed in online research tasks (Coiro, 2011; Hahnel, Goldhammer, Naumann, & Kröhne, 2016; Salmerón et al., 2018). In light of this research, we expected reading comprehension to be the strongest predictor of students' ORC performance. Of the other explanatory factors, prior topic knowledge has been shown to play an important role in the comprehension of single and multiple texts (e.g., McNamara & Kintsch, 1996; Bråten et al., 2013). Therefore, we expected that prior topic knowledge would also contribute independently to ORC performance. Furthermore, an ORC task involving multiple online texts requires inferencing within and across texts that is not necessarily captured in multiple choice reading comprehension tests, such as the one used in this study (Strømsø & Bråten, 2009). Therefore, we expected nonverbal reasoning to be another unique contributor to ORC performance over and above reading comprehension. We also included gender in our analyses, expecting to confirm previous findings showing that girls outperform boys in digital reading tasks (OECD, 2013b; Naumann & Sälzer, 2017; Salmerón et al., 2018). Finally, we were interested in testing whether the lower level literacy skills of reading fluency and written spelling would affect ORC skills through reading comprehension or whether these skills would make their own contributions.

Method

Participants

The participants were 426 sixth-grade students (207 girls, 219 boys) aged from 12 to 13 years (M = 12.34, SD = .32) from eight elementary schools in Central Finland. Both large and average-sized schools from urban and rural areas voluntarily participated. The data were collected during the fall semesters of 2014 and 2015. A statement from the Ethical Committee was obtained, and the participants' primary caregivers gave their written consent for participation in the study.

Measures and materials

Online research and comprehension

Students’ ORC skills were measured with the Internet Reading Assessment (Internet Lukemisen Arviointi, or ILA test), which is a Finnish adaptation (see Kiili et al., 2018b) of the Online Research and Comprehension Assessment originally developed by Leu et al. (2013a). The test consists of a simulated closed Internet environment and tasks that measure four ORC skill areas: (1) locating information, (2) evaluating information, (3) synthesizing information, and (4) communicating information (see also Kiili et al., 2018b).

At the beginning of the test, students received an assignment by email from the principal of a fictitious school. In this email, the principal asked students to explore the health effects of energy drinks and to write a recommendation justifying whether the principal should allow the school to purchase an energy drink vending machine. During the test, students were guided through the tasks by two avatar students in an environment that simulated a social networking site with a chat message window.

Students were asked to read four online resources (two news web pages [OR1, OR4], an academic online resource [OR2], and a commercial online resource [OR3]) to form their final recommendation concerning the purchase of an energy drink vending machine. The students were also required to take notes while reading these online resources. Students were asked to locate two of these resources (OR2, OR4) by formulating a search query in a search engine. When they received the search engine result list, they were asked to distinguish the relevant online resource from the irrelevant ones. If a student failed in this locating task, the avatar student gave a link to the online resource in the social networking site. Two additional resources (OR1, OR3) were given to the students. Thus, even if a student was not able to receive credit for selecting the correct resource, they could still read and take notes from the relevant resources, thereby receiving credit for this part of the task.

Students were also asked to evaluate two of four online resources—an academic (OR2) and a commercial (OR3) online resource—with regard to the author’s expertise in health issues as well as the overall credibility of the online resource itself. Instructions for the evaluation task were given by the avatar student in the chat message window. After reading, taking notes, and evaluating the online resources, the students were asked to compose a summary text on the basis of what they had learned from these resources concerning the health effects of energy drinks. They were able to utilize their notes while writing the summary. Finally, the students were asked to compose an email to the principal, in which they justified their opinion concerning the purchase of the energy drink vending machine. [For a more detailed description of the ILA test and the content of the online resources, see Kiili et al. (2018a, 2018b). The scoring rubric for the measured skills can be found in the Appendix.]

The original Online Research and Comprehension Assessment was developed with acceptable levels of reliability and validity. Cronbach's α reliability coefficient for the energy drinks task was .83. Validity was established with a framework document approved by experts, 2 years of cognitive lab testing, and modifications based on a large-scale pilot study (Leu et al., 2015).

To establish the inter-rater reliability of coding, two independent coders (from among the first and second authors and trained research assistants) coded 20% of the responses for each of the 16 items. The kappa values for inter-rater reliability were 1.000 for locating information; they varied between .947 and .983 for evaluation (four items), between .784 and 1.000 for identifying main ideas and synthesizing (six items), and between .722 and .939 for communication (two items). All disagreements were resolved by discussion, and the remaining responses were scored by a single rater. The validity of the ILA was examined through confirmatory factor analysis, which showed that the assessment satisfactorily reflected the ORC framework (Kiili et al., 2018b).
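
For readers unfamiliar with the statistic, the sketch below shows how chance-corrected agreement of this kind can be computed in Python. The rating vectors are hypothetical placeholders, and scikit-learn is our choice for illustration, not software used in the study.

```python
# Illustrative sketch: Cohen's kappa for two coders scoring the same item.
# The ratings below are invented for demonstration purposes.
from sklearn.metrics import cohen_kappa_score

coder_a = [2, 1, 0, 2, 1, 1, 0, 2, 2, 1]  # first coder's item scores
coder_b = [2, 1, 0, 2, 1, 0, 0, 2, 2, 1]  # second coder's item scores

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.3f}")
```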

Reading fluency

Fluency was measured using the three tests described below. A reading fluency factor (see the Data analyses section) was formed on the basis of these tests. McDonald's omega, a model-based reliability coefficient, was .68 (cf. Zhang & Yuan, 2016).
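
For reference, McDonald's omega for a one-factor model is a function of the standardized factor loadings (λ) and residual variances (θ) of the measurement model; this is the standard formulation, as the study reports only the resulting coefficient:

```latex
% McDonald's omega for a one-factor model (factor variance fixed to 1)
\omega = \frac{\left(\sum_i \lambda_i\right)^{2}}{\left(\sum_i \lambda_i\right)^{2} + \sum_i \theta_i}
```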

The word identification test, a subtest of the standardized Finnish reading test battery ALLU (Lindeman, 1998), included 80 items, each consisting of a picture and four alternative written words. The students' task was to identify and connect correct picture–word pairs by drawing a line between a word and a picture. The score was the number of correctly connected pairs within the 2-min time limit. The Kuder–Richardson reliability coefficient for the original test was .97 (Lindeman, 1998).

The word chain test (Holopainen, Kairaluoma, Nevala, Ahonen, & Aro, 2004) consisted of 25 chains of four words written without spaces between them. The students’ task was to draw a line at the word boundaries. The score was the number of correctly separated words within the 90 s time limit. The test–retest reliability coefficient for the original test varied between .70 and .84.

The oral pseudoword text-reading test (Eklund, Torppa, Aro, Leppänen, & Lyytinen, 2014) consisted of 38 pseudowords (277 letters). These pseudowords were presented in the form of a short passage, which students were instructed to read aloud as quickly and accurately as possible. The students' reading performance was audio recorded for later scoring. The score was the number of correctly read pseudowords divided by the time, in seconds, spent on reading. The inter-rater agreement for scoring the original test was .95 (Eklund et al., 2014).

Written spelling

Spelling accuracy was measured with a task in which students were asked to write 12 four-syllable pseudowords from dictation (see Eklund et al., 2014). The recorded pseudowords were played to students twice, one item at a time. The score was the number of correctly spelled items. Cronbach's alpha reliability coefficient was .49, and Revelle's omega reliability coefficient was .86.
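
Cronbach's alpha, reported here and for several measures below, follows the standard formula based on the number of items k, the individual item variances, and the variance of the total score X (given for reference; not specific to this study):

```latex
% Cronbach's alpha for a k-item scale
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right)
```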

Reading comprehension

Comprehension skills were tested using another subtest of the standardized Finnish reading test battery (Lindeman, 1998). In this subtest, students were asked to read an expository text of instructions for consumers and to respond to 12 multiple choice (four options) questions representing the following categories: (1) detail/fact (one question), (2) cause–effect/structure (one question), (3) conclusion/interpretation (four questions), (4) concept/phrase (three questions), and (5) main idea/purpose (three questions). The two-page text remained available while students responded to the questions. The maximum score was 12 points. Cronbach's alpha reliability coefficient was .64, and Revelle's omega reliability coefficient was .86.

Nonverbal reasoning

Nonverbal reasoning ability was assessed with Raven's Standard Progressive Matrices test, a visuospatial task appropriate for children over 11 years of age (Raven, 1998). The full test consists of 60 items; a shortened 30-item version (every second item from the full test) was used. Previous studies have shown that shortened versions produce an adequate estimate of nonverbal reasoning compared to the full version of Raven's Standard Progressive Matrices (see, e.g., Wytek, Opgenoorth, & Presslich, 1984). The total score was the number of correctly answered items. In another large-scale project with more than 800 sixth graders from the same area in Finland, the same shortened version was used with a Cronbach's alpha reliability coefficient of .81 (Kanerva et al., submitted for publication).

Prior knowledge

Prior knowledge (referring to prior topic knowledge) was tested with seven multiple choice (four options) questions on energy drinks and their health effects. The answer options included one correct option, two incorrect options, and a "don't know" option. One point was given for each correct selection, and zero points were given for selecting any of the other options. The Kuder–Richardson reliability coefficient for the total score was .89, and Revelle's omega reliability coefficient was .42.

Procedure

The data were collected in four separate researcher-led sessions: three 45-min group testing sessions and one 5-min individual testing session. During the first two group sessions, students completed the tests of literacy skills and nonverbal reasoning. In the third group session, the students completed the ILA test on laptops after answering the prior knowledge questions. Students' performances were saved as log files and recorded with a screen capture program. During the assessment, the researchers provided technical assistance with the test application when needed. In the fourth session, the students completed the oral pseudoword text-reading task in an individual test setting.

Data analyses

All analyses were conducted with Mplus version 7.3 and IBM SPSS Statistics 22. Because pre-analysis revealed some non-normality in the observed variables, and because the ORC variables were categorical, the weighted least squares (WLSMV) estimator was used in the structural equation model (SEM). WLSMV conducts the estimation with a diagonal weight matrix, robust standard errors, and a mean- and variance-adjusted χ2 test statistic with a full weight matrix (Muthén & Muthén, 1998–2017). To ensure that the specified latent factor model adequately represented the data, model fit was evaluated using multiple indices: the chi-square test (χ2), the root mean square error of approximation (RMSEA), the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the weighted root mean square residual (WRMR). The following cutoff criteria for acceptable model fit were applied: χ2 test (p > .05), RMSEA < .06, TLI and CFI ≥ .95, and WRMR ≤ .90 (Hu & Bentler, 1999; Yu, 2002). Missing values were due, for example, to sickness absences; incomplete cases were retained when estimating the model parameters. WLSMV assumes that missingness may be a function of the observed covariates but not of the observed outcomes (Asparouhov & Muthén, 2010; Muthén & Muthén, 1998–2017). There were no missing values in the 15 observed ORC variables, apart from 11.7% in NOTE2 and 7.7% in NOTE4 (Fig. 1), nor were there missing values in prior knowledge or gender. The amount of missing data varied between 0.5% and 1.6% in the reading fluency tests forming the factor, and was 2.6% in written spelling, 0.9% in reading comprehension, and 2.3% in nonverbal reasoning.
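
The analyses themselves were run in Mplus. Purely as an illustration for readers working in Python, the sketch below fits a comparable categorical-data model with the semopy package, whose diagonally weighted least squares objective (DWLS) is close in spirit to Mplus's WLSMV; the file name, variable names, and model specification are hypothetical, and semopy's fit statistics do not include WRMR.

```python
# Hypothetical sketch of a CFA/SEM workflow with semopy (pip install semopy).
import pandas as pd
from semopy import Model, calc_stats

# One-factor measurement model in lavaan-style syntax (placeholder items).
desc = """
ORC =~ loc2 + eval1 + eval2 + synth1 + synth2 + comm1
"""

data = pd.read_csv("orc_items.csv")  # hypothetical item-level data file

model = Model(desc)
model.fit(data, obj="DWLS")  # diagonally weighted least squares estimation

# Model fit indices: chi-square, RMSEA, CFI, TLI, among others.
print(calc_stats(model).T)
```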

Fig. 1 SEM of literacy skills (reading fluency, written spelling [WSP], reading comprehension [RC]), nonverbal reasoning (NVR), prior knowledge (PK), and gender (GNDR) in relation to ORC skills. Notes. RF1 = word identification test, RF2 = word chain test, RF3 = oral pseudoword text-reading test. Measurement components are shown using thin lines, and structural components are shown using bolded lines. Circles represent latent variables, and rectangles represent observed variables. All values are standardized, and all statistically significant (p < .01–.001) coefficients and unexplained variances are included in the figure. Nonsignificant relations are presented using brackets and dotted lines. The LOC1 observed variable did not load on the Locating factor (see Kiili et al., 2018b)

The six latent factors of ORC subskills (see Kiili et al., 2018b) were used in the SEM investigating literacy skills (reading fluency, written spelling, and reading comprehension), prior knowledge, nonverbal reasoning, and gender in relation to ORC. The first confirmatory factor analysis (CFA) model was formed on the basis of the 15 observed variables. Since the six latent factors were highly correlated, another, more restrictive CFA model with a common second order factor and six first order factors was evaluated against the first, less restrictive CFA model. The comparison of these two nested models was implemented in Mplus using the DIFFTEST option.

After establishing the final measurement model, the following predictors were included in the SEM: reading fluency as a latent factor; written spelling, reading comprehension, prior knowledge, and nonverbal reasoning as observed variables; and gender. The reading fluency factor was based on the three reading fluency tests described earlier. In this SEM, the predictor variables were evaluated in relation to the common ORC factor. As an additional extension of the analyses, we also evaluated the same predictor variables in relation to the six ORC subskills.

Results

Descriptive statistics for literacy skills, prior knowledge, and nonverbal reasoning

Table 1 shows the descriptive statistics for the measured independent variables. Figure 1 shows the correlations between the independent variables.

Table 1 Descriptive statistics of literacy skills, prior knowledge, and nonverbal reasoning tests

Dimensional structure of online research and comprehension skills

The results of the structural equation model are shown in Fig. 1. In this section, we present the measurement model for ORC skills. In the next section, we present the aspects that were predicted to explain students’ performance in ORC.

The measurement model revealed six ORC factors. These were labelled (1) locating, (2) confirming credibility, (3) questioning credibility, (4) identifying main ideas, (5) synthesizing, and (6) communicating (see also Kiili et al., 2018b). In this CFA model, all parameter estimates were statistically significant (p < .01), and all fit indices indicated a good model fit (χ2 (75) = 83.57, p = .233; RMSEA = .02; CFI = 1.00; TLI = 1.00; WRMR = .59). Since the factors were strongly correlated (r = .29–.73), a second order factor was specified in another CFA model to capture the common variance across the six first order factors.

This common factor was named ORC. The second CFA model also demonstrated good fit to the data (χ2 (84) = 108.77, p = .036; RMSEA = .03; CFI = .99; TLI = .99; WRMR = .72); however, the χ2-difference test indicated that the less restricted model with six first order factors fit the data better (χ2-diff (9) = 20.43, p = .015) than the model with a second order ORC factor above the six first order factors. However, the modification indices suggested that the model fit would improve if the residuals of questioning credibility and synthesizing were allowed to correlate. This third CFA model fulfilled the criteria for a good model fit (χ2 (83) = 89.50, p = .294; RMSEA = .01; CFI = 1.00; TLI = 1.00; WRMR = .64). In addition, the χ2-difference test indicated that this more restricted CFA model fit the data as well (χ2-diff (8) = 7.18, p = .517) as the less restricted model with six first order factors.
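
As a quick arithmetic check, the reported p values follow from the χ2 distribution of the difference statistics and their degrees of freedom (the adjusted difference values themselves come from Mplus's DIFFTEST; only the tail-probability conversion is shown here):

```python
# Verify the reported p values of the chi-square difference tests.
from scipy.stats import chi2

print(round(chi2.sf(20.43, df=9), 3))  # ~0.015: six-factor model fits better
print(round(chi2.sf(7.18, df=8), 3))   # ~0.517: the two models fit equally well
```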

Based on these results, the third CFA model was considered the final measurement model and was utilized as part of the final SEM (Fig. 1). In the SEM, the common ORC factor explained 26% of locating (.51; p < .001), 42% of confirming credibility (.65; p < .001), 37% of questioning credibility (.61; p < .001), 71% of identifying main ideas (.84; p < .001), 63% of synthesizing (.79; p < .001), and 63% of communicating (.80; p < .001). The negative correlation (−.33; p < .01) between the residuals of questioning credibility and synthesizing indicated an inverse relation between the residuals.
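
These percentages are the squared standardized loadings of the first order factors on the common ORC factor (small discrepancies, such as .79² = .62 against the reported 63%, reflect rounding of the loadings). For example, for identifying main ideas:

```latex
% Variance explained = squared standardized loading
R^{2} = \lambda^{2} = .84^{2} \approx .71
```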

Aspects explaining students’ performance in online research and comprehension

In the next phase of the analysis, the predictor variables were included in the SEM. Supporting Hypothesis 1, reading comprehension, nonverbal reasoning, and gender contributed independently to explaining the variance in ORC performance: the regression coefficient was .34 for reading comprehension (p < .01), .14 for nonverbal reasoning (p < .001), and .34 for gender (p < .001). Contrary to our expectations, the relation between prior knowledge and ORC was nonsignificant. Furthermore, when the lower level literacy skills were examined in relation to ORC performance, reading fluency and written spelling both contributed independently to ORC performance, with regression coefficients of .18 for reading fluency (p < .01) and .17 for written spelling (p < .001).

Altogether, the predictor variables included in the SEM explained 57% of the variance in ORC; thus, 43% of the variance in the ORC factor remained unexplained. All fit indices of the SEM, except the χ2 test (p = .004), indicated a good model fit: CFI = .99, TLI = .98, RMSEA = .03, and WRMR = .78.

In order to understand the role of different literacy skills and other individual differences in students' performance in different areas of ORC, we conducted a differential examination with the six-factor component model (Table 2). The results of this additional SEM indicated that reading comprehension was related to all ORC subskills except locating information. Written spelling was related to locating, synthesizing, and communicating, whereas reading fluency was related only to communicating. Further, gender was related to all subskills except locating and confirming credibility, and nonverbal reasoning was related to identifying main ideas and communicating. All fit indices of this SEM indicated a good model fit (χ2 (169) = 206.22, p = .027; RMSEA = .02; CFI = .99; TLI = .99; WRMR = .63).

Table 2 Differential examination of the relations of literacy skills, prior knowledge, nonverbal reasoning, and gender to online research and comprehension subskills

Discussion

The present study sought to understand the role that literacy skills (reading fluency, written spelling, and reading comprehension), prior knowledge, nonverbal reasoning, and gender play in sixth graders’ ORC performance. Since the ORC subskills were highly correlated, the aforementioned variables were evaluated in relation to a common factor of ORC as well as in relation to ORC subskills.

Struggling readers face difficulties in online research and comprehension

In line with previous research (Coiro, 2011; Leu et al., 2015; Salmerón et al., 2018), reading comprehension, along with gender, was the strongest predictor of ORC performance, and it was related to all ORC subskills except locating information. It might be that the current assessment, in which students were given specific instructions for the locating tasks, relied more on students' understanding of how search engines work than on their comprehension skills. In more open locating tasks, where readers need to comprehend the task assignment in order to formulate relevant search queries, reading comprehension may play a bigger role.

In addition, lower level literacy skills (reading fluency and written spelling) were unique predictors for the ORC performance. This contradicts the finding by Salmerón et al. (2018), who did not find an effect of word identification on the performance of an online reading task among secondary school students. It is worth noting that in the study by Salmerón et al. (2018), more emphasis was placed on the navigational component of online reading than in the present study. On the other hand, Hahnel, Goldhammer, Kröhne, and Naumann (2018) found that lower level reading skills, namely performance in a sentence verification task, made a unique contribution in addition to reading comprehension when students evaluated search engine web page results. As such, our results suggest that lower level reading skills in early adolescence can contribute to ORC performance. Slow reading makes it more difficult to read all the required materials in multiple online texts in a given time.

This interpretation is supported by the finding that, despite the unique contribution of reading fluency to the common ORC factor, fluency in the differential examination was primarily related to communicating. The communication task required text-based argumentation, that is, reasoning based on information collected from multiple online texts, which presupposed reading whole web pages. Furthermore, written spelling was related to three subskills, with the strongest relationship to locating information. In our assessment environment, the search engine did not suggest correctly spelled search terms; as such, the relation we found might be stronger than it would be in authentic search environments, where search engines suggest corrections to misspellings.

The linear relationship of literacy skills to ORC performance suggests that students with below-average reading fluency, written spelling, or reading comprehension are also very likely to have difficulties in ORC. Struggling readers seem to have difficulties especially in identifying main ideas and in synthesizing and communicating information (see Fig. 1), which are essential skills for understanding the topic at hand. A lack of these skills may hinder their ability to learn from online information. Moreover, the examination of the direct relations of literacy skills to the subskills (Table 2) showed that readers with poor reading comprehension skills also struggled with the evaluation of information.

Nonverbal reasoning and prior knowledge in online research and comprehension

In accordance with our expectations, nonverbal reasoning contributed independently to the variance of ORC performance. This is consistent with earlier findings suggesting a supportive role for nonverbal reasoning in reading comprehension (Swart et al., 2017; Peng et al., 2018). When examining the relation of nonverbal reasoning to ORC subskills, nonverbal reasoning was found to be related to identifying main ideas and communicating. In particular, communication tasks required reasoning skills because students were asked to form a recommendation and to justify it with reasoning that represented different perspectives covered in the online resources.

Even though prior knowledge has been found to play an important role in various reading contexts (e.g., McNamara & Kintsch, 1996; Bråten et al., 2013), it was not a significant predictor in the present study. One reason for this might be that all students had at least some general knowledge of energy drinks and health that helped them in the task, as the topic has been widely discussed in public and probably also in many homes. Notably, other ORC studies (Coiro, 2011; Leu et al., 2015) have found that prior knowledge does not play such an important role in students’ ORC performance. On the other hand, Forzani (2016) found a positive but weak relation between prior knowledge and evaluation of information during online research. We want to point out that our finding might be related to how prior knowledge was measured in this study (see limitations). As such, one should be hesitant in drawing any conclusions about the role of prior knowledge on the basis of the current results.

Girls outperformed boys in online research and comprehension

Our results showing that girls outperformed boys in ORC are consistent with previous findings in digital reading contexts (Forzani, 2016; Naumann & Sälzer, 2017; Salmerón et al., 2018). Gender had a direct effect beyond its indirect effects via literacy skills and the other predictors. Therefore, there are other gender-related differences that could explain why girls performed better than boys in the ORC task. Future research should explore these gender differences by evaluating, for example, the role of motivation for reading to learn from online information. Compelling evidence indicates that girls have more positive motivation for traditional reading than boys (Wigfield & Guthrie, 1997) and that reading engagement seems to mediate their higher reading scores (Chiu & McBride-Chang, 2006). This might be the case especially in Finland, where the gender difference in reading engagement is one of the widest among OECD countries (Brozo et al., 2014). Even though boys seem to have more positive attitudes towards computers (Meelissen & Drent, 2008), girls show better reading performance across different reading environments and tasks. Notably, gender differences were not found in locating information, which might be perceived as a more technology-related activity.

Limitations and future research

The present study comes with several limitations that could be addressed in future research. First, students' ORC skills were measured with a performance-based assessment that simulated online research in a closed, scaffolded information space. Students' literacy skills, prior knowledge, and nonverbal reasoning skills may play somewhat different roles in more complex, open Internet information spaces. Furthermore, assessing students' information locating skills, in particular, would benefit from several additional tasks that would better reveal students' search patterns (Kiili et al., 2018b). However, including all ORC subskills in one assessment requires compromising on the number of tasks, and completing the ILA assessment in its current form already requires considerable cognitive effort from students.

Some of the other measures also have limitations. First, the prior knowledge measure had somewhat low reliability. Second, it included only seven items, which did not cover all the perspectives on the topic presented in the online resources. Furthermore, giving students the option to select "don't know" as an answer, instead of including an additional false option, may have restricted the variability.

Finally, our study examined only a few potential sources of individual variation in online research and comprehension skills, and 43% of the variance remained unexplained. One potential source could be metacognitive skills, which are required particularly in complex reading tasks where readers need to compare and synthesize information from multiple online resources (Goldman, Braasch, Wiley, Graesser, & Brodowinska, 2012). Previous research has shown that good reading comprehension skills do not ensure students' success in integrating information from multiple texts (Stahl, Hynd, Britton, McNish, & Bosquet, 1996). Integrating information may also place additional demands on working memory (Andresen et al., 2018; DeStefano & LeFevre, 2007). Additionally, students' attention and executive functions may contribute to their ORC performance, especially in synthesizing information. In traditional reading research, executive functions have been shown to be associated with reading comprehension (Follmer, 2018), and some evidence exists that inattention increases difficulties when working with online information (Desjarlais, 2013).

Theoretical and instructional implications

This study expands our theoretical knowledge of ORC and contributes to instruction. First, our findings suggested that, in future studies, students’ performance in ORC could be investigated as a single construct, since a large amount of the common variance in ORC subskills was captured by a latent structure. Thus, depending on the purpose of the study, the students’ ORC skills could be examined by using either a general ORC construct or a more detailed component structure that is based on the theoretical model (Kiili et al., 2018b; Leu et al., 2013a, 2013b).

Because literacy skills partly overlap with ORC skills, instruction supporting students’ literacy skills is important but not sufficient for educating skilled online readers. We believe that struggling readers would benefit from instruction that is relevant to both traditional reading and ORC. Online readers need effective comprehension strategies that they can apply in the context of both single and multiple texts (Cho & Afflerbach, 2015; Britt et al., 2018). As comprehension of multiple online resources goes beyond comprehension of a single online resource, students need instruction on accessing, selecting, evaluating, and using online resources that vary in their perspectives, interpretations, and genres (Britt et al., 2018).

Reading multiple online texts might be overwhelming for many struggling readers. Because they need more time and effort for reading compared to their classmates, struggling readers would benefit from guided practice in which they integrate ideas from a limited number of texts, starting with two different texts. This would leave more resources for practicing the specific skills needed for synthesizing, such as comparing and contrasting texts and forming ties between ideas originating from different online texts.

According to our model, all six component skills contribute to ORC performance, and all students, including struggling readers, need support to develop these skills. Students need to know how to form search terms, how to enter them into a search engine (Leu et al., 2013a), and how to examine who the author of an online resource is and why he or she has written the text (Cho & Afflerbach, 2015). Instruction focusing on effective locating and evaluation strategies would help struggling readers become more skilled in these areas. Being able to efficiently locate and evaluate online information would increase resources dedicated to making sense of relevant online texts. Because ORC requires novel approaches for teaching reading strategies and supporting students with special needs, increased attention should be paid to teacher professional development.