Abstract
The present study evaluated the extent to which literacy skills (reading fluency, written spelling, and reading comprehension), together with nonverbal reasoning, prior knowledge, and gender, are related to students’ online research and comprehension (ORC) performance. The ORC skills of 426 sixth graders were measured using a Finnish adaptation of the Online Research and Comprehension Assessment. Results of a structural equation model showed that ORC skills comprised six highly correlated factors that together formed a common ORC factor. Altogether, the predictor variables explained 57% of the variance in ORC. Reading comprehension, along with gender, was the strongest predictor of ORC performance. In addition, reading fluency and written spelling explained ORC variance over and above reading comprehension. These findings suggest that struggling readers are likely to face difficulties online.
Rapidly developing technology and the ubiquity of the Internet have changed people’s reading practices, rendering the traditional view of literacy insufficient (Hartman, Morsink, & Zheng, 2010). Changes in the reading practices and skills needed in a modern society are already reflected in many nations’ educational standards or curricula (Australian Curriculum, Assessment and Reporting Authority [ACARA], n.d.; The Finnish National Board of Education, 2016) as well as in international assessments (Fraillon, Ainley, Schulz, Friedman, & Gebhardt, 2013; Organisation for Economic Co-operation and Development [OECD], 2013a). Even in daily school life, utilizing the Internet for learning is a common practice: 95% of surveyed teachers in the United States reported doing research or searching for information online as a typical school assignment (Purcell et al., 2012). Because of the increased role of the Internet in school work and in other areas of life, educators should ensure that all students acquire sufficient skills to read and learn on the Internet.
Reading to learn from online information, often referred to as online research and comprehension (ORC), requires, in particular, skills and strategies for locating, evaluating, and synthesizing online information, as well as for communicating one’s learning to others (Leu, Kinzer, Coiro, Castek, & Henry, 2013b). Even though research has begun to identify the specific skills and strategies important when reading online (e.g., Brand-Gruwel, Wopereis, & Vermetten, 2005; Coiro & Dobler, 2007), there is still a need to better understand how traditional reading skills contribute to students’ performance when they solve problems with online information. Understanding the consequences of poor literacy skills would help educators design tasks and supports for students with varying literacy skills. As such, this study examined how different literacy skills (reading fluency, written spelling, and reading comprehension) predict sixth graders’ ORC performance. To achieve as thorough an understanding as possible of the aspects related to ORC performance, we also included prior knowledge and nonverbal reasoning in our examination, as prior knowledge and inferential processes are seen as integral components of reading comprehension (McNamara & Magliano, 2009). Finally, because gender differences in literacy skills have been widely recognized (e.g., OECD, 2013a), gender was also included in our examination to clarify its role in ORC performance beyond reading ability.
Online research and comprehension
The present study is framed using an online research and comprehension framework (Leu et al., 2013b), which identifies five crucial component skills: (1) identifying an important question or a problem to solve, (2) locating information, (3) evaluating information critically, (4) synthesizing information, and (5) communicating information (see also Brand-Gruwel et al., 2005; Fraillon et al., 2013; International ICT Literacy Panel, 2002).
A reader begins online research by identifying a question to answer or problem to solve. In school or assessment contexts, the question or problem is often given to students. However, students are still required to build an understanding of the given task (Britt, Rouet, & Durik, 2018), which helps them locate relevant information to solve the problem. Locating information requires the ability to form adequate search queries for search engines (Cho & Afflerbach, 2015) and to analyze search engine results (Rouet, Ros, Goumi, Macedo-Rouet, & Dinet, 2011). Without these skills, students are unable to use online information efficiently for their learning (Leu, Forzani, Burlingame, Kulikowich, Sedransk, Coiro, & Kennedy, 2013a).
Because a considerable amount of information on the Internet appears to be questionable (Britt & Gabrys, 2002) or commercially biased (Lewandowski, 2011), an ability to critically evaluate online information is essential. To make informed judgements of the quality of online information, readers need to evaluate the author’s expertise and the trustworthiness of online resources (Flanagin & Metzger, 2008; Pérez et al. 2018).
The fourth component skill—synthesizing information—refers to collecting ideas across resources and integrating these ideas into a versatile and coherent representation (Bråten, Britt, Strømsø, & Rouet, 2011; Cho & Afflerbach, 2017). A high-quality synthesis also requires readers to compare and contrast information and different perspectives presented in multiple online resources (Cho & Afflerbach, 2015; Rouet, 2006). Finally, communicating information that one has learned requires good argumentation skills and the ability to address a specific audience. Presenting well-justified arguments requires practice, especially when the information is controversial (Driver, Newton, & Osborne, 2000). Audience awareness may include components such as greeting, addressing one’s message to a reader, and using correct language (Lapp, Shea, & Wolsey, 2011), as well as properly concluding the writing (Berggren, 2014), all of which reflect a knowledge of communicative conventions.
A recent study (Kiili, Leu, Utriainen, Coiro, Kanniainen, Tolvanen, Lohvansuu, & Leppänen, 2018b) confirmed the basic structure of the four component skills (locate, evaluate, synthesize, and communicate) while also suggesting the introduction of additional complexity to the skill structure. First, evaluation of information was divided into two components: confirming the credibility of information, and questioning the credibility of information. It seems that questioning a source that is, for example, biased or lacking in expertise, is more difficult for students than confirming the credibility of the source with relevant expertise (Kiili, Leu, Marttunen, Hautala, & Leppänen, 2018a; Pérez et al. 2018). Second, synthesizing was divided into two separate components: identifying main ideas from a single online text, and synthesizing information across multiple online texts. This suggests that the process of building coherent intertextual relationships across multiple online texts requires somewhat different skills than building coherence within a single online text (Cho & Afflerbach, 2017).
Literacy skills: reading fluency, written spelling, and reading comprehension
Reading has been defined as consisting of two main skills, decoding and comprehension (Gough & Tunmer, 1986), which are considered to be interconnected via reading fluency (LaBerge & Samuels, 1974). At the lower level of literacy skill development, letter–sound decoding ability enables readers to process graphic symbols and to identify single words by connecting graphic symbol strings—that is, letters or their clusters—to spoken word representations (Kintsch & Rawson, 2005). In addition to decoding, written spelling requires the ability to phonologically recode spoken words into grapheme strings. It has also been suggested that this process further develops the word identification system by strengthening the words’ orthographic representations (Perfetti & Stafura, 2014; Share, 2008). As basic decoding becomes more efficient and automatized, reading fluency—the ability to read text accurately and rapidly—increases (Meyer & Felton, 1999; National Reading Panel, National Institute of Child Health & Human Development, 2000).
The development of fluency and effortless word recognition skills reduces the amount of attentional resources allocated for decoding and improves reading comprehension, which is a higher level of literacy skill (Fuchs, Fuchs, Hosp, & Jenkins, 2001; Tilstra, McMaster, Van den Broek, Kendeou, & Rapp, 2009). In reading comprehension, readers construct a text base model by combining and interrelating the word meanings of the text and by recognizing the wider topics within the entire text (Kintsch, 1998; Kintsch & Rawson, 2005). According to the lexical quality hypothesis (Perfetti, 2007), this kind of word-to-text integration requires a sufficient quality of word representations as well as the ability to efficiently retrieve word meanings from long-term memory (Perfetti & Stafura, 2014).
Finally, to build a deeper understanding of the text, readers need to construct a situation model by integrating the text base information with their prior knowledge (Kintsch, 1998). However, some readers struggle with accurate and fluent word recognition, as well as with written spelling and decoding, which may in turn lead to reading comprehension difficulties (Perfetti, 2007). Such difficulties are defined as the lack of skills that allow readers to construct meaning from text (Fletcher, Lyon, Fuchs, & Barnes, 2007).
The relation of prior knowledge, reasoning, and gender to literacy skills
Prior topic knowledge plays an important role in comprehension of single texts (Cromley, Snyder-Hogan, & Luciw-Dubas, 2010; Tarchi, 2010), hypertexts (Amadieu, Tricot, & Mariné, 2009), and multiple texts (Bråten, Ferguson, Anmarkrud, & Strømsø, 2013). Prior topic knowledge may aid in navigation of networked texts (Amadieu et al., 2009; Salmerón, Cañas, Kintsch, & Fajardo, 2005); it may also support intertextual inferencing (Strømsø & Bråten, 2009) as well as the evaluation of information during online research (Forzani, 2016). However, Coiro (2011) found that even though prior topic knowledge played an important role in online research and comprehension performance of students with low online reading skills, it did not influence the performance of students with high online reading skills. Further, a recent study showed that even though prior topic knowledge was associated with knowledge acquisition after engaging with multiple web pages on a socio-scientific topic, it was not associated with multiple source integration (Andresen, Anmarkrud, & Bråten, 2018). These results suggest that prior knowledge is also an important factor in online research; however, further research is needed to better understand its role.
In addition to prior topic knowledge, theoretical models of reading specify inferential processes as integral to reading comprehension (Kendeou, McMaster, & Christ, 2016); as such, students with low verbal and nonverbal reasoning skills are more likely to have comprehension difficulties (Snowling, 2013). Nonverbal reasoning has been shown to have direct and indirect effects on reading comprehension (Swart et al., 2017); it has also been shown to support young at-risk readers’ development of comprehension skills (Peng et al., 2018). Online research may require reasoning skills beyond those needed for reading a single text on paper. Readers need to make inferences about the usefulness of a web page from the incomplete information provided by search engines (Coiro & Dobler, 2007), intertextual inferences across online texts (Strømsø & Bråten, 2009), and source-content inferences to judge the quality of information (Britt et al., 2018). Reasoning skills are particularly needed when reading tasks, such as complex online research tasks, require critical thinking and problem solving (Adlof, Catts, & Lee, 2010).
Gender difference has also been an area of interest in literacy research. Girls have been shown to have an advantage in reading fluency and reading comprehension in several studies (Logan & Johnston, 2009; Torppa, Eklund, Sulkunen, Niemi, & Ahonen, 2018), including large-scale international studies, such as the Program for International Student Assessment (OECD, 2013b). Similar patterns have also been observed in some ORC studies (Forzani, 2016; Salmerón, García, & Vidal-Abarca, 2018).
The present study
In the current study, we set out to examine how literacy skills (reading fluency, written spelling, and reading comprehension), prior topic knowledge, nonverbal reasoning, and gender are related to students’ ORC performance. We expected that reading comprehension, prior knowledge, nonverbal reasoning, and gender would each independently explain variance in ORC performance (Hypothesis 1). Studies using similar types of online reading tasks have found considerable overlap between the skills needed in reading comprehension and online research tasks (Coiro, 2011; Hahnel, Goldhammer, Naumann, & Kröhne, 2016; Salmerón et al., 2018). In light of this research, we expected reading comprehension to be the strongest predictor of students’ ORC performance. Of the other explanatory factors, prior topic knowledge has been shown to play an important role in the comprehension of single and multiple texts (e.g., McNamara & Kintsch, 1996; Bråten et al., 2013). Therefore, we expected that prior topic knowledge would also contribute independently to ORC performance. Furthermore, an ORC task involving multiple online texts requires inferencing within and across texts that is not necessarily captured by multiple choice reading comprehension tests, such as the one used in this study (Strømsø & Bråten, 2009). Therefore, we expected nonverbal reasoning to be another unique contributor to ORC performance over and above reading comprehension. We also included gender in our analyses, expecting to confirm previous findings showing that girls outperform boys in digital reading tasks (OECD, 2013b; Naumann & Sälzer, 2017; Salmerón et al., 2018). Finally, we were interested in testing whether lower-level literacy skills (reading fluency and written spelling) would affect ORC skills through reading comprehension or whether these skills would make their own contribution.
Method
Participants
The participants were 426 sixth-grade students (207 girls, 219 boys) aged 12 to 13 years (M = 12.34, SD = .32) from eight elementary schools in Central Finland. Both large and average-sized schools from urban and rural areas voluntarily participated. The data were collected during the fall semesters of 2014 and 2015. A statement from the Ethical Committee was obtained, and the participants’ primary caregivers gave their written consent for participation in the study.
Measures and materials
Online research and comprehension
Students’ ORC skills were measured with the Internet Reading Assessment (Internet Lukemisen Arviointi, or ILA test), which is a Finnish adaptation (see Kiili et al., 2018b) of the Online Research and Comprehension Assessment originally developed by Leu et al. (2013a). The test consists of a simulated closed Internet environment and tasks that measure four ORC skill areas: (1) locating information, (2) evaluating information, (3) synthesizing information, and (4) communicating information (see also Kiili et al., 2018b).
At the beginning of the test, students received an assignment by email from the principal of a fictitious school. In this email, the principal asked students to explore the health effects of energy drinks and to write a recommendation justifying whether the principal should allow the school to purchase an energy drink vending machine. During the test, students were guided through the tasks by two avatar students in an environment that simulated a social networking site with a chat message window.
Students were asked to read four online resources (two news web pages [OR1, OR4], an academic online resource [OR2], and a commercial online resource [OR3]) to form their final recommendation concerning the purchase of an energy drink vending machine. The students were also required to take notes while reading these online resources. Students were asked to locate two of these resources (OR2, OR4) by formulating a search query in a search engine. When they received the search engine result list, they were asked to distinguish the relevant online resource from the irrelevant ones. If a student failed in this locating task, the avatar student gave a link to the online resource in the social networking site. Two additional resources (OR1, OR3) were given to the students. Thus, even if a student was not able to receive credit for selecting the correct resource, they could still read and take notes from the relevant resources, thereby receiving credit for this part of the task.
Students were also asked to evaluate two of four online resources—an academic (OR2) and a commercial (OR3) online resource—with regard to the author’s expertise in health issues as well as the overall credibility of the online resource itself. Instructions for the evaluation task were given by the avatar student in the chat message window. After reading, taking notes, and evaluating the online resources, the students were asked to compose a summary text on the basis of what they had learned from these resources concerning the health effects of energy drinks. They were able to utilize their notes while writing the summary. Finally, the students were asked to compose an email to the principal, in which they justified their opinion concerning the purchase of the energy drink vending machine. [For a more detailed description of the ILA test and the content of the online resources, see Kiili et al. (2018a, 2018b). The scoring rubric for the measured skills can be found in the Appendix.]
The original Online Research and Comprehension Assessment was developed with acceptable levels of reliability and validity. Cronbach’s α reliability coefficient for the energy drinks task was .83. Validity was established with a framework document approved by experts, two years of cognitive lab testing, and modifications based on a large-scale pilot study (Leu et al., 2015).
To establish inter-rater reliability, pairs of independent coders (drawn from the first and second authors and trained research assistants) coded 20% of the responses for each of the 16 items. The kappa values for inter-rater reliability were 1.000 for locating information, between .947 and .983 for evaluation (four items), between .784 and 1.000 for identifying main ideas and synthesizing (six items), and between .722 and .939 for communication (two items). All disagreements were resolved through discussion. The remaining responses were scored by a single rater. The ILA was validated through confirmatory factor analysis, which showed that the assessment satisfactorily reflected the ORC framework (Kiili et al., 2018b).
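To illustrate the double-coding check described above, the following minimal sketch computes Cohen’s kappa for a single item; the item and the two coders’ scores are hypothetical, and scikit-learn is used here for illustration rather than the authors’ actual tools.

```python
# Minimal sketch of a double-coding reliability check (hypothetical data).
from sklearn.metrics import cohen_kappa_score

# Scores assigned by two independent coders to the same sample of responses
# for one ORC item (e.g., a 0-2 point evaluation item).
coder_a = [0, 1, 2, 2, 1, 0, 2, 1, 1, 0]
coder_b = [0, 1, 2, 1, 1, 0, 2, 1, 2, 0]

# Cohen's kappa corrects the raw agreement rate for chance agreement.
print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.3f}")
```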
Reading fluency
Fluency was measured using the three tests described below, and a reading fluency factor (see the Data Analyses section) was formed on the basis of these tests. McDonald’s omega, a model-based reliability coefficient, was .68 (cf. Zhang & Yuan, 2016).
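For a one-factor model such as the reading fluency factor used here, McDonald’s omega can be written in terms of the standardized factor loadings \(\lambda_i\) and residual variances \(\theta_i\) of the \(k = 3\) indicator tests; this is the general formula rather than the authors’ exact computation (cf. Zhang & Yuan, 2016):

$$\omega = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\theta_i}.$$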
The word identification test, a subtest of the standardized Finnish reading test battery ALLU (Lindeman, 1998), included 80 items, each consisting of a picture and four alternative written words. The students’ task was to identify and connect correct picture–word pairs by drawing a line between a word and a picture. The score was the number of correctly connected pairs within the two-minute time limit. The Kuder–Richardson reliability coefficient for the original test was .97 (Lindeman, 1998).
The word chain test (Holopainen, Kairaluoma, Nevala, Ahonen, & Aro, 2004) consisted of 25 chains of four words written without spaces between them. The students’ task was to draw a line at the word boundaries. The score was the number of correctly separated words within the 90 s time limit. The test–retest reliability coefficient for the original test varied between .70 and .84.
The oral pseudoword text-reading test (Eklund, Torppa, Aro, Leppänen, & Lyytinen, 2014) consisted of 38 pseudowords (277 letters). These pseudowords were presented in the form of a short passage, which students were instructed to read aloud as quickly and accurately as possible. The students’ reading performance was audio recorded for subsequent scoring. The score was the number of correctly read pseudowords divided by the time, in seconds, spent reading. The inter-rater agreement for scoring the original test was .95 (Eklund et al., 2014).
Written spelling
Spelling accuracy was measured with a task in which students were asked to write 12 four-syllable pseudowords from dictation (see Eklund et al., 2014). The recorded pseudowords were played to the students one at a time, each twice. The score was the number of correctly spelled items. Cronbach’s alpha reliability coefficient was .49, and Revelle’s omega reliability coefficient was .86.
Reading comprehension
Comprehension skills were tested using another subtest of the standardized Finnish reading test battery ALLU (Lindeman, 1998). In this subtest, students were asked to read an expository text consisting of instructions for consumers and to respond to 12 multiple choice questions (four options each) representing the following categories: (1) detail/fact (one question), (2) cause–effect/structure (one question), (3) conclusion/interpretation (four questions), (4) concept/phrase (three questions), and (5) main idea/purpose (three questions). The two-page text remained available while students answered the questions. The maximum score was 12 points. Cronbach’s alpha reliability coefficient was .64, and Revelle’s omega reliability coefficient was .86.
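As a reference for how such internal-consistency estimates are obtained, the sketch below computes Cronbach’s alpha directly from an item-score matrix; the data are random placeholders rather than the study’s responses, so the resulting alpha is not meaningful in itself.

```python
# Minimal sketch: Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

rng = np.random.default_rng(0)
# Placeholder data: 426 students x 12 dichotomous items (random, so alpha
# will be near zero here; real test data would give a substantive value).
scores = rng.integers(0, 2, size=(426, 12))

k = scores.shape[1]
sum_item_vars = scores.var(axis=0, ddof=1).sum()  # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)        # variance of total scores
alpha = (k / (k - 1)) * (1 - sum_item_vars / total_var)
print(f"alpha = {alpha:.2f}")
```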
Nonverbal reasoning
Nonverbal reasoning ability was assessed with Raven’s Standard Progressive Matrices, a visuospatial task appropriate for children over 11 years of age (Raven, 1998). The full test consists of 60 items; a shortened, 30-item version (every second item of the full test) was used. Previous studies have shown that shortened versions provide an adequate estimate of nonverbal reasoning compared with the full version of Raven’s Standard Progressive Matrices (see, e.g., Wytek, Opgenoorth, & Presslich, 1984). The total score was the number of correctly answered items. In another large-scale project with more than 800 sixth graders from the same area in Finland, the same shortened version yielded a Cronbach’s alpha reliability coefficient of .81 (Kanerva et al., submitted for publication).
Prior knowledge
Prior knowledge (referring to prior topic knowledge) was tested with seven multiple choice questions (four options each) on energy drinks and their health effects. The answer options included one correct option, two incorrect options, and a “don’t know” option. One point was given for each correct selection, and zero points were given for selecting any other option. The Kuder–Richardson reliability coefficient for the total score was .89, and Revelle’s omega reliability coefficient was .42.
Procedure
The data were collected in four separate researcher-led sessions: three 45-minute group testing sessions and one 5-minute individual session. During the first two group sessions, students completed the tests of literacy skills and nonverbal reasoning. In the third group session, the students completed the ILA test on laptops after answering the prior knowledge questions. Students’ performances were saved as log files and recorded with a screen capture program. During the assessment, the researchers provided technical assistance with the test application when needed. In the fourth session, the students completed the pseudoword text-reading task in an individual test setting.
Data analyses
All analyses were conducted with Mplus version 7.3 and IBM SPSS Statistics 22. Because preliminary analyses revealed some non-normality in the observed variables, and because the ORC variables were categorical, the mean- and variance-adjusted weighted least squares (WLSMV) estimator was used in the structural equation model (SEM). WLSMV estimates the model with a diagonal weight matrix, robust standard errors, and a mean- and variance-adjusted χ2 test statistic based on a full weight matrix (Muthén & Muthén, 1998–2017). To ensure that the specified latent factor model adequately represented the data, model fit was evaluated using multiple indices: the χ2 test, the root mean square error of approximation (RMSEA), the comparative fit index (CFI), the Tucker–Lewis index (TLI), and the weighted root mean square residual (WRMR). The following cutoff criteria for acceptable model fit were applied: χ2 test (p > .05), RMSEA < .06, TLI and CFI ≥ .95, and WRMR ≤ .90 (Hu & Bentler, 1999; Yu, 2002). Missing values were due, for example, to sickness absences. Incomplete cases were retained in the analyses when estimating the model parameters. WLSMV assumes that missingness may be a function of the observed covariates but not of the observed outcomes (Asparouhov & Muthén, 2010; Muthén & Muthén, 1998–2017). The 15 observed ORC variables had no missing values, apart from 11.7% in NOTE2 and 7.7% in NOTE4 (Fig. 1); prior knowledge and gender had no missing values. The amount of missing data varied between 0.5% and 1.6% in the reading fluency tests forming the factor, and was 2.6% in written spelling, 0.9% in reading comprehension, and 2.3% in nonverbal reasoning.
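To make the fit-evaluation step concrete, the sketch below computes RMSEA, CFI, and TLI from a model’s χ2 statistic, its degrees of freedom, a baseline (independence) model’s χ2 and degrees of freedom, and the sample size, using the standard formulas (cf. Hu & Bentler, 1999). This is a generic illustration only: under WLSMV, Mplus computes mean- and variance-adjusted versions of these statistics, and the baseline χ2 used below is hypothetical because it is not reported in the paper.

```python
# Generic fit-index formulas (cf. Hu & Bentler, 1999); illustration only.
def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    # RMSEA: misfit per degree of freedom, scaled by sample size.
    rmsea = (max(chi2_m - df_m, 0.0) / (df_m * (n - 1))) ** 0.5
    # CFI: improvement of the model over the baseline (independence) model.
    cfi = 1 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    # TLI: like CFI, but with a penalty for model complexity.
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    return rmsea, cfi, tli

# Model chi-square from the first CFA reported in the Results section;
# the baseline chi-square is hypothetical (df_b = 15 * 14 / 2 = 105).
rmsea, cfi, tli = fit_indices(chi2_m=83.57, df_m=75, chi2_b=2500.0, df_b=105, n=426)
print(f"RMSEA = {rmsea:.3f}, CFI = {cfi:.3f}, TLI = {tli:.3f}")
```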
The six latent factors of ORC subskills (see Kiili et al., 2018b) were used in the SEM examining literacy skills (reading fluency, written spelling, and reading comprehension), prior knowledge, nonverbal reasoning, and gender in relation to ORC. The first confirmatory factor analysis (CFA) model was formed on the basis of 15 observed variables. Because the six latent factors were highly correlated, a second, more restrictive CFA model with a common second-order factor and six first-order factors was evaluated against the first, less restrictive model. The two nested models were compared in Mplus with the DIFFTEST option.
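The generic logic of such a nested-model comparison is a χ2 difference test on the difference in degrees of freedom, as sketched below. Note that under WLSMV the naive difference of the two test statistics is not χ2-distributed, which is precisely why the DIFFTEST adjustment was used; the adjusted statistic reported in the Results (e.g., χ2-diff (9) = 20.43) therefore differs from this naive difference.

```python
# Naive chi-square difference test for nested models; illustration only.
# Under WLSMV this simple difference is not chi-square distributed, so the
# study relied on Mplus's DIFFTEST adjustment instead.
from scipy.stats import chi2

def chi2_diff_test(chi2_restricted, df_restricted, chi2_free, df_free):
    diff = chi2_restricted - chi2_free   # loss of fit from the restriction
    df_diff = df_restricted - df_free    # degrees of freedom gained
    p = chi2.sf(diff, df_diff)           # P(X > diff) under the null
    return diff, df_diff, p

# Chi-square values of the two CFA models reported in the Results section:
print(chi2_diff_test(chi2_restricted=108.77, df_restricted=84,
                     chi2_free=83.57, df_free=75))
```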
After confirming the final measurement model, the following were included in the SEM: reading fluency as a latent factor; written spelling, reading comprehension, prior knowledge, and nonverbal reasoning as observed variables; and gender. The reading fluency factor was based on the three reading fluency tests described earlier. In this SEM, the predictor variables were evaluated in relation to the common ORC factor. As an extension of the analyses, we also evaluated the same predictor variables in relation to the six ORC subskills.
Results
Descriptive statistics for literacy skills, prior knowledge, and nonverbal reasoning
Table 1 shows the descriptive statistics for the measured independent variables. Figure 1 shows the correlations between the independent variables.
Dimensional structure of online research and comprehension skills
The results of the structural equation model are shown in Fig. 1. In this section, we present the measurement model for ORC skills. In the next section, we present the aspects that were predicted to explain students’ performance in ORC.
The measurement model revealed six ORC factors, labelled (1) locating, (2) confirming credibility, (3) questioning credibility, (4) identifying main ideas, (5) synthesizing, and (6) communicating (see also Kiili et al., 2018b). In this CFA model, all parameter estimates were statistically significant (p < .01), and all fit indices indicated a good model fit (χ2 (75) = 83.57, p = .233; RMSEA = .02; CFI = 1.00; TLI = 1.00; WRMR = .59). Since the factors were strongly correlated (r = .29–.73), a second-order factor was specified to capture the common variance across the six first-order factors in a second CFA model.
This common factor was named ORC. The second CFA model also demonstrated good fit to the data (χ2 (84) = 108.77, p = .036; RMSEA = .03; CFI = .99; TLI = .99; WRMR = .72); however, the χ2 difference test indicated that the less restricted model with six first-order factors fit the data better (χ2-diff (9) = 20.43, p = .015) than the model with a second-order ORC factor above the six first-order factors. The modification indices suggested that the fit would improve if the residuals of questioning credibility and synthesizing were allowed to correlate. This third CFA model fulfilled the criteria for a good model fit (χ2 (83) = 89.50, p = .294; RMSEA = .01; CFI = 1.00; TLI = 1.00; WRMR = .64). In addition, the χ2 difference test indicated that this more restricted CFA model fit the data as well as the less restricted model with six first-order factors (χ2-diff (8) = 7.18, p = .517).
Based on these results, the third CFA model was chosen as the final measurement model and was used as part of the final SEM (Fig. 1). In the SEM, the common ORC factor explained 26% of the variance in locating (loading = .51; p < .001), 42% in confirming credibility (.65; p < .001), 37% in questioning credibility (.61; p < .001), 71% in identifying main ideas (.84; p < .001), 63% in synthesizing (.79; p < .001), and 63% in communicating (.80; p < .001). The residuals of questioning credibility and synthesizing correlated negatively (−.33; p < .01).
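These percentages are simply the squared standardized loadings: when the first-order factors are standardized, the variance in each subskill explained by the second-order ORC factor is

$$R^{2} = \lambda^{2}, \quad \text{e.g., } 0.84^{2} \approx 0.71 \text{ for identifying main ideas and } 0.51^{2} \approx 0.26 \text{ for locating}$$

(small discrepancies from the reported percentages reflect rounding of the loadings).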
Aspects explaining students’ performance in online research and comprehension
In the next phase of the analysis, the predictor variables were included in the SEM. Supporting Hypothesis 1, reading comprehension, nonverbal reasoning, and gender each contributed independently to explaining the variance in ORC performance: the regression coefficient was .34 (p < .01) for reading comprehension, .14 (p < .001) for nonverbal reasoning, and .34 (p < .001) for gender. Contrary to our expectations, the relation between prior knowledge and ORC was nonsignificant. Furthermore, when the lower-level literacy skills were examined in relation to ORC performance, reading fluency and written spelling both contributed independently: the regression coefficient was .18 (p < .01) for reading fluency and .17 (p < .001) for written spelling.
Altogether, the predictor variables included in the SEM explained 57% of the ORC variance, leaving 43% of the variance in the ORC factor unexplained. All fit indices of the SEM, except the χ2 test (p = .004), indicated a good model fit: CFI = .99, TLI = .98, RMSEA = .03, and WRMR = .78.
To understand the role of the different literacy skills and other individual differences in students’ performance in the different areas of ORC, we conducted a differential examination with the six-factor component model (Table 2). The results of this additional SEM indicated that reading comprehension was related to all ORC subskills except locating information. Written spelling was related to locating, synthesizing, and communicating, whereas reading fluency was related only to communicating. Further, gender was related to all subskills except locating and confirming credibility, and nonverbal reasoning was related to identifying main ideas and communicating. All fit indices of this SEM indicated a good model fit (χ2 (169) = 206.22, p = .027; RMSEA = .02; CFI = .99; TLI = .99; WRMR = .63).
Discussion
The present study sought to understand the role that literacy skills (reading fluency, written spelling, and reading comprehension), prior knowledge, nonverbal reasoning, and gender play in sixth graders’ ORC performance. Since the ORC subskills were highly correlated, the aforementioned variables were evaluated in relation to a common factor of ORC as well as in relation to ORC subskills.
Struggling readers face difficulties in online research and comprehension
In line with previous research (Coiro, 2011; Leu et al., 2015; Salmerón et al., 2018), reading comprehension, along with gender, was the strongest predictor of ORC performance, and it was also related to all ORC subskills except locating information. It might be that the current assessment, in which students were given specific instructions for the locating tasks, drew more on an understanding of how search engines work than on comprehension skills. In more open locating tasks, where readers need to comprehend the task assignment in order to formulate relevant search queries, reading comprehension may play a bigger role.
In addition, the lower-level literacy skills (reading fluency and written spelling) were unique predictors of ORC performance. This contradicts the finding by Salmerón et al. (2018), who did not find an effect of word identification on the performance of an online reading task among secondary school students. It is worth noting that the study by Salmerón et al. (2018) placed more emphasis on the navigational component of online reading than the present study. On the other hand, Hahnel, Goldhammer, Kröhne, and Naumann (2018) found that lower-level reading skills, namely performance in a sentence verification task, made a unique contribution over and above reading comprehension when students evaluated search engine results. As such, our results suggest that lower-level reading skills in early adolescence can contribute to ORC performance. Slow reading makes it more difficult to read all the required materials in multiple online texts within the given time.
This interpretation is consistent with the observation that, despite the unique contribution of reading fluency to the common ORC factor, fluency in the differential examination was primarily related to communication. The communication task required text-based argumentation, that is, reasoning based on information collected from multiple online texts, which presupposed reading whole web pages. Furthermore, written spelling was related to three subskills, with the strongest relationship to locating information. In our assessment environment, the search engine did not suggest correctly spelled search terms; as such, the relation we found might be stronger than it would be in authentic search environments, where search engines suggest corrections to misspellings.
The linear relationship of literacy skills to ORC performance suggests that students with below-average reading fluency, written spelling, or reading comprehension are also very likely to have difficulties in ORC. Struggling readers seem to have difficulties especially in identifying main ideas and in synthesizing and communicating information (see Fig. 1), which are essential skills for understanding the topic at hand. Lacking these skills may hinder their ability to learn from online information. When the direct relations of literacy skills to the subskills were examined (Table 2), readers with poor reading comprehension skills also struggled with the evaluation of information.
Nonverbal reasoning and prior knowledge in online research and comprehension
In accordance with our expectations, nonverbal reasoning contributed independently to explaining the variance in ORC performance. This is consistent with earlier findings suggesting a supportive role for nonverbal reasoning in reading comprehension (Swart et al., 2017; Peng et al., 2018). When the relation of nonverbal reasoning to the ORC subskills was examined, nonverbal reasoning was found to be related to identifying main ideas and communicating. The communication task, in particular, required reasoning skills because students were asked to form a recommendation and to justify it with reasoning that represented the different perspectives covered in the online resources.
Even though prior knowledge has been found to play an important role in various reading contexts (e.g., McNamara & Kintsch, 1996; Bråten et al., 2013), it was not a significant predictor in the present study. One reason for this might be that all students had at least some general knowledge of energy drinks and health that helped them in the task, as the topic has been widely discussed in public and probably also in many homes. Notably, other ORC studies (Coiro, 2011; Leu et al., 2015) have also found that prior knowledge does not play such an important role in students’ ORC performance. On the other hand, Forzani (2016) found a positive but weak relation between prior knowledge and the evaluation of information during online research. We want to point out that our finding might be related to how prior knowledge was measured in this study (see the Limitations section). As such, one should be hesitant to draw any conclusions about the role of prior knowledge on the basis of the current results.
Girls outperformed boys in online research and comprehension
Our results showing that girls outperformed boys in ORC are consistent with previous findings in digital reading contexts (Forzani, 2016; Naumann & Sälzer, 2017; Salmerón et al., 2018). Gender had a direct effect beyond its indirect effects via literacy skills and the other predictors. Therefore, there are other gender-related differences that could explain why girls performed better than boys in the ORC task. Future research should explore these gender differences by evaluating, for example, the role of motivation for reading to learn from online information. Compelling evidence shows that girls have more positive motivation for traditional reading than boys (Wigfield & Guthrie, 1997) and that reading engagement seems to mediate their higher reading scores (Chiu & McBride-Chang, 2006). This might be the case especially in Finland, where the gender difference in reading engagement is one of the widest among OECD countries (Brozo et al., 2014). Even though boys seem to have more positive attitudes towards computers (Meelissen & Drent, 2008), girls show better reading performance across different reading environments and tasks. Notably, no gender differences were found in locating information, which might be perceived as a more technology-related activity.
Limitations and future research
The present study has several limitations that could be addressed in future research. First, students’ ORC skills were measured with a performance-based assessment that simulated online research in a closed, scaffolded information space. Students’ literacy skills, prior knowledge, and nonverbal reasoning skills may play somewhat different roles in more complex, open Internet information spaces. Furthermore, assessing students’ information locating skills, in particular, would benefit from several additional tasks that would better reveal students’ search patterns (Kiili et al., 2018b). However, including all ORC subskills in one assessment requires compromising on the number of tasks, and completing the ILA assessment in its current form already requires considerable cognitive effort from students.
Some of the other measures also have limitations. First, the prior knowledge measure had somewhat low reliability. Second, prior knowledge was measured with only seven items, which did not cover all perspectives on the topic presented in the online resources. Furthermore, giving students the option to select “don’t know” as an answer, instead of including an additional false option, may have restricted the variability.
Finally, our study examined only a few potential sources of individual variation in online research and comprehension skills; 43% of the variance remained unexplained. One potential source could be metacognitive skills, which are required particularly in complex reading tasks where readers need to compare and synthesize information from multiple online resources (Goldman, Braasch, Wiley, Graesser, & Brodowinska, 2012). Previous research has shown that good reading comprehension skills do not ensure students’ success in integrating information from multiple texts (Stahl, Hynd, Britton, McNish, & Bosquet, 1996). Integrating information may also place additional demands on working memory (Andresen et al., 2018; DeStefano & LeFevre, 2007). Additionally, students’ attention and executive functions may contribute to their ORC performance, especially in synthesizing information. In traditional reading research, executive functions have been shown to be associated with reading comprehension (Follmer, 2018), and some evidence exists that inattention increases difficulties when working with online information (Desjarlais, 2013).
Theoretical and instructional implications
This study expands our theoretical knowledge of ORC and contributes to instruction. First, our findings suggest that, in future studies, students’ performance in ORC could be investigated as a single construct, since a large amount of the common variance in the ORC subskills was captured by the latent structure. Thus, depending on the purpose of the study, students’ ORC skills could be examined using either a general ORC construct or a more detailed component structure based on the theoretical model (Kiili et al., 2018b; Leu et al., 2013a, 2013b).
Because literacy skills partly overlap with ORC skills, instruction supporting students’ literacy skills is important but not sufficient for educating skilled online readers. We believe that struggling readers would benefit from instruction that is relevant to both traditional reading and ORC. Online readers need effective comprehension strategies that they can apply in the context of both single and multiple texts (Cho & Afflerbach, 2015; Britt et al., 2018). As comprehension of multiple online resources goes beyond comprehension of a single online resource, students need instruction on accessing, selecting, evaluating, and using online resources that vary in their perspectives, interpretations, and genres (Britt et al., 2018).
Reading multiple online texts might be overwhelming for many struggling readers. Because they need more time and effort for reading than their classmates, struggling readers would benefit from guided practice in which they can integrate ideas from a limited number of texts, starting with two texts. This would leave more resources for practicing the specific skills needed for synthesizing, such as comparing and contrasting texts and forming ties between ideas originating from different online texts.
According to our model, all six component skills contribute to ORC performance, and all students, including struggling readers, need support to develop these skills. Students need to know how to form search terms, how to enter them into a search engine (Leu et al., 2013a), and how to examine who the author of an online resource is and why he or she has written the text (Cho & Afflerbach, 2015). Instruction focusing on effective locating and evaluation strategies would help struggling readers become more skilled in these areas. Being able to efficiently locate and evaluate online information would increase resources dedicated to making sense of relevant online texts. Because ORC requires novel approaches for teaching reading strategies and supporting students with special needs, increased attention should be paid to teacher professional development.
References
Adlof, S. M., Catts, H. W., & Lee, J. (2010). Kindergarten predictors of second versus eighth grade reading comprehension impairments. Journal of Learning Disabilities, 43, 332–345. https://doi.org/10.1177/0022219410369067.
Amadieu, F., Tricot, A., & Mariné, C. (2009). Prior knowledge in learning from a non-linear electronic document: Disorientation and coherence of the reading sequences. Computers in Human Behavior, 25, 381–388. https://doi.org/10.1016/j.chb.2008.12.017.
Andresen, A., Anmarkrud, Ø., & Bråten, I. (2018). Investigating multiple source use among students with and without dyslexia. Reading and Writing. https://doi.org/10.1007/s11145-018-9904-z.
Asparouhov, T., & Muthén, B. (2010). Weighted least squares estimation with missing data. Mplus Technical Appendix, 2010, 1–10. Retrieved June 25, 2018, from http://www.statmodel.com/download/GstrucMissingRevision.pdf.
Australian Curriculum, Assessment and Reporting Authority [ACARA]. (n.d.). The Australian Curriculum, v6.0. Retrieved August 15, 2017, from http://www.australiancurriculum.edu.au/Home.
Berggren, J. (2014). Learning from giving feedback: A study of secondary-level students. ELT Journal, 69, 58–70. https://doi.org/10.1093/elt/ccu036.
Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21, 487–508. https://doi.org/10.1016/j.chb.2004.10.005.
Bråten, I., Britt, M. A., Strømsø, H. I., & Rouet, J. (2011). The role of epistemic beliefs in the comprehension of multiple expository texts: Toward an integrated model. Educational Psychologist, 46, 48–70. https://doi.org/10.1080/00461520.2011.538647.
Bråten, I., Ferguson, L. E., Anmarkrud, Ø., & Strømsø, H. I. (2013). Prediction of learning and comprehension when adolescents read multiple texts: The roles of word-level processing, strategic approach, and reading motivation. Reading and Writing, 26, 321–348.
Britt, M. A., & Gabrys, G. (2002). Implications of document-level literacy skills for web site design. Behavior Research Methods, Instruments, & Computers, 34, 170–176. https://doi.org/10.3758/BF03195439.
Britt, M. A., Rouet, J.-F., & Durik, A. (2018). Representations and processes in multiple source use. In J. L. G. Braasch, I. Bråten, & M. T. McCrudden (Eds.), Handbook of multiple source use (pp. 17–33). New York, NY: Routledge.
Brozo, W. G., Sulkunen, S., Shiel, G., Garbe, C., Pandian, A., & Valtin, R. (2014). Reading, gender, and engagement. Journal of Adolescent & Adult Literacy, 57, 584–593. https://doi.org/10.1002/jaal.291.
Chiu, M. M., & McBride-Chang, C. (2006). Gender, context, and reading: A comparison of students in 43 countries. Scientific Studies of Reading, 10, 331–362. https://doi.org/10.1207/s1532799xssr1004_1.
Cho, B., & Afflerbach, P. (2015). Reading on the Internet. Journal of Adolescent & Adult Literacy, 58(6), 504–517. https://doi.org/10.1002/jaal.387.
Cho, B.-Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive reading comprehension strategies in multilayered digital text environments. In S. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 109–134). New York, NY: Guilford Press.
Coiro, J. (2011). Predicting reading comprehension on the Internet: Contributions of offline reading skills, online reading skills, and prior knowledge. Journal of Literacy Research, 43, 352–392. https://doi.org/10.1177/1086296X11421979.
Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42, 214–257.
Cromley, J. G., Snyder-Hogan, L. E., & Luciw-Dubas, U. A. (2010). Reading comprehension of scientific text: A domain-specific test of the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 102, 687–700. https://doi.org/10.1037/a0019452.
Desjarlais, M. (2013). Internet exploration behaviours and recovery from unsuccessful actions differ between learners with high and low levels of attention. Computers in Human Behavior, 29, 694–705. https://doi.org/10.1016/j.chb.2012.12.006.
DeStefano, D., & LeFevre, J. (2007). Cognitive load in hypertext reading: A review. Computers in Human Behavior, 23, 1616–1641. https://doi.org/10.1016/j.chb.2005.08.012.
Driver, R., Newton, P., & Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education, 84, 287–312. https://doi.org/10.1002/(SICI)1098-237X(200005)84:3%3c287:AID-SCE1%3e3.0.CO;2-A.
Eklund, K., Torppa, M., Aro, M., Leppänen, P. H., & Lyytinen, H. (2014). Literacy skill development of children with familial risk for dyslexia through grades 2, 3, and 8. Journal of Educational Psychology, 107, 126–140. https://doi.org/10.1037/a0037121.
Flanagin, A. J., & Metzger, M. J. (2008). Digital media and youth: Unparalleled opportunity and unprecedented responsibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 5–27). Cambridge, MA: MIT Press. https://doi.org/10.1162/dmal.9780262562324.005.
Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York, NY: The Guilford Press.
Follmer, D. J. (2018). Executive function and reading comprehension: A meta-analytic review. Educational Psychologist, 53, 42–60. https://doi.org/10.1080/00461520.2017.1309295.
Forzani, E. (2016). Individual differences in evaluating the credibility of online information in science: Contributions of prior knowledge, gender, socioeconomic status, and offline reading ability. Unpublished doctoral dissertation, University of Connecticut, Storrs, CT.
Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Gebhardt, E. (2013). Preparing for life in a digital age. The IEA international computer and information literacy study international report. Melbourne: Australian Council for Educational Research (ACER). Retrieved November 18, 2017, from http://www.iea.nl/fileadmin/user_upload/Publications/Electronic_versions/ICILS_2013_International_Report.pdf.
Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256. https://doi.org/10.1207/S1532799XSSR0503_3.
Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska, K. (2012). Comprehending and learning from internet sources: Processing patterns of better and poorer learners. Reading Research Quarterly, 47, 356–381. https://doi.org/10.1002/RRQ.027.
Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7, 6–10. https://doi.org/10.1177/074193258600700104.
Hahnel, C., Goldhammer, F., Kröhne, U., & Naumann, J. (2018). The role of reading skills in the evaluation of online information gathered from search engine environments. Computers in Human Behavior, 78, 223–234. https://doi.org/10.1016/j.chb.2017.10.004.
Hahnel, C., Goldhammer, F., Naumann, J., & Kröhne, U. (2016). Effects of linear reading, basic computer skills, evaluating online information, and navigation on reading digital text. Computers in Human Behavior, 55(Part A), 486–500. https://doi.org/10.1016/j.chb.2015.09.042.
Hartman, D. K., Morsink, P. M., & Zheng, J. (2010). From print to pixels: The evolution of cognitive conceptions of reading comprehension. In E. A. Baker (Ed.), The new literacies: Multiple perspectives on research and practice (pp. 131–164). New York, NY: The Guilford Press.
Holopainen, L., Kairaluoma, L., Nevala, J., Ahonen, T., & Aro, M. (2004). Lukivaikeuksien seulontamenetelmä nuorille ja aikuisille [Dyslexia screening test for youth and adults]. Jyväskylä: Niilo Mäki Instituutti.
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55. https://doi.org/10.1080/10705519909540118.
International ICT Literacy Panel. (2002). Digital transformation: A framework for ICT literacy. Princeton, NJ: Author. Retrieved February 17, 2019, http://www.ets.org/Media/Tests/Information_and_Communication_Technology_Literacy/ictreport.pdf.
Kanerva, K., Kiistala, I., Kalakoski, V., Hirvonen, R., Ahonen, T., & Kiuru, N. (submitted for publication). The feasibility of WM tablet tasks in predicting scholastic skills in classroom settings.
Kendeou, P., McMaster, K. L., & Christ, T. J. (2016). Reading comprehension: Core components and processes. Policy Insights from the Behavioral and Brain Sciences, 3, 62–69. https://doi.org/10.1177/2372732215624707.
Kiili, C., Leu, D. J., Marttunen, M., Hautala, J., & Leppänen, P. H. (2018a). Exploring early adolescents’ evaluation of academic and commercial online resources related to health. Reading and Writing, 31, 533–557. https://doi.org/10.1007/s11145-017-9797-2.
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., et al. (2018b). Reading to learn from online information: Modeling the factor structure. Journal of Literacy Research, 50, 304–334. https://doi.org/10.1177/1086296X18784640.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York, NY: Cambridge University Press.
Kintsch, W., & Rawson, K. (2005). Comprehension. In M. J. Snowling & C. Hulme (Eds.), The science of reading: A handbook (pp. 209–226). Malden, MA: Blackwell Publishing.
LaBerge, D., & Samuels, S. J. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293–323. https://doi.org/10.1016/0010-0285(74)90015-2.
Lapp, D., Shea, A., & Wolsey, T. D. (2011). Blogging and audience awareness. Journal of Education, 191, 33–44. https://doi.org/10.1177/002205741119100104.
Leu, D. J., Forzani, E., Burlingame, C., Kulikowich, J., Sedransk, N., Coiro, J., et al. (2013a). The new literacies of online research and comprehension: Assessing and preparing students for the 21st century with common core state standards. In S. B. Newman & L. B. Gambrell (Eds.), Quality reading instruction in the age of common core standards (pp. 219–236). Newark, DE: International Reading Association.
Leu, D. J., Forzani, E., Rhoads, C., Maykel, C., Kennedy, C., & Timbrell, N. (2015). The new literacies of online research and comprehension: Rethinking the reading achievement gap. Reading Research Quarterly, 50, 37–59. https://doi.org/10.1002/rrq.85.
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013b). New literacies and the new literacies of online reading comprehension: A dual level theory. In N. Unrau & D. Alvermann (Eds.), Theoretical models and process of reading (6th ed., pp. 1150–1181). Newark, DE: IRA.
Lewandowski, D. (2011). The influence of commercial intent of search results on their perceived relevance. In Proceedings of the 2011 iConference (pp. 452–458). Seattle, WA: ACM. https://doi.org/10.1145/1940761.1940823.
Lindeman, J. (1998). Ala-asteen lukutesti ALLU [Reading test for primary school ALLU]. Turku: Center for Learning Research.
Logan, S., & Johnston, R. (2009). Gender differences in reading ability and attitudes: Examining where these differences lie. Journal of Research in Reading, 32, 199–214. https://doi.org/10.1111/j.1467-9817.2008.01389.x.
McNamara, D. S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22, 247–288. https://doi.org/10.1080/01638539609544975.
McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384.
Meelissen, M. R., & Drent, M. (2008). Gender differences in computer attitudes: Does the school matter? Computers in Human Behavior, 24, 969–985. https://doi.org/10.1016/j.chb.2007.03.001.
Meyer, M. S., & Felton, R. H. (1999). Repeated reading to enhance fluency: Old approaches and new directions. Annals of Dyslexia, 49, 283–306.
Muthén, L. K., & Muthén, B. O. (1998–2017). Mplus user’s guide, 8th edn. Los Angeles, CA: Muthén & Muthén.
National Reading Panel, National Institute of Child Health and Human Development. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.
Naumann, J., & Sälzer, C. (2017). Digital reading proficiency in German 15-year olds: Evidence from PISA 2012. Zeitschrift für Erziehungswissenschaft, 20, 585–603. https://doi.org/10.1007/s11618-017-0758-y.
Organisation for Economic Co-operation and Development [OECD]. (2013a). PISA 2012 assessment and analytical framework: Mathematics, reading, science, problem solving and financial literacy. Paris: OECD Publishing. https://doi.org/10.1787/9789264190511-en.
Organisation for Economic Co-operation and Development [OECD]. (2013b). PISA 2012 results: Excellence through equity (Volume II): Giving every student the chance to succeed. Paris: OECD Publishing. https://doi.org/10.1787/9789264201132-en.
Peng, P., Fuchs, D., Fuchs, L. S., Elleman, A. M., Kearns, D. M., Gilbert, J. K., et al. (2018). A longitudinal analysis of the trajectories and predictors of word reading and reading comprehension development among at-risk readers. Journal of Learning Disabilities. https://doi.org/10.1177/0022219418809080.
Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., et al. (2018). Fostering teenagers’ assessment of information reliability: Effects of a classroom intervention focused on critical source dimensions. Learning and Instruction, 58, 53–64. https://doi.org/10.1016/j.learninstruc.2018.04.006.
Perfetti, C. (2007). Reading ability: Lexical quality to comprehension. Scientific Studies of Reading, 11, 357–383. https://doi.org/10.1080/10888430701530730.
Perfetti, C., & Stafura, J. (2014). Word knowledge in a theory of reading comprehension. Scientific Studies of Reading, 18, 22–37. https://doi.org/10.1080/10888438.2013.827687.
Purcell, K., Rainie, L., Heaps, A., Buchanan, J., Friedrich, L., Jacklin, A., et al. (2012). How teens do research in the digital world. Washington, DC: Pew Research Center’s Internet & American Life Project.
Raven, J. C. (1998). Raven’s progressive matrices. Oxford: Oxford Psychologists Press.
Rouet, J. (2006). The skills of document use: From text comprehension to web-based learning. Mahwah, NJ: Lawrence Erlbaum.
Rouet, J., Ros, C., Goumi, A., Macedo-Rouet, M., & Dinet, J. (2011). The influence of surface and deep cues on primary and secondary school students’ assessment of relevance in web menus. Learning and Instruction, 21, 205–219. https://doi.org/10.1016/j.learninstruc.2010.02.007.
Salmerón, L., Cañas, J. J., Kintsch, W., & Fajardo, I. (2005). Reading strategies and hypertext comprehension. Discourse Processes, 40, 171–191. https://doi.org/10.1207/s15326950dp4003_1.
Salmerón, L., García, A., & Vidal-Abarca, E. (2018). The development of adolescents’ comprehension-based Internet reading skills. Learning and Individual Differences, 61, 31–39. https://doi.org/10.1016/j.lindif.2017.11.006.
Share, D. L. (2008). Orthographic learning, phonological recoding, and self-teaching. Advances in Child Development and Behavior, 36, 31–82. https://doi.org/10.1016/S0065-2407(08)00002-5.
Snowling, M. J. (2013). Early identification and interventions for dyslexia: A contemporary view. Journal of Research in Special Educational Needs, 13, 7–14. https://doi.org/10.1111/j.1471-3802.2012.01262.x.
Stahl, S. A., Hynd, C. R., Britton, B. K., McNish, M. M., & Bosquet, D. (1996). What happens when students read multiple source documents in history? Reading Research Quarterly, 31, 430–456.
Strømsø, H. I., & Bråten, I. (2009). Beliefs about knowledge and knowing and multiple text comprehension among upper secondary students. Educational Psychology, 29, 425–445. https://doi.org/10.1080/01443410903046864.
Swart, N. M., Muijselaar, M. M., Steenbeek-Planting, E. G., Droop, M., de Jong, P. F., & Verhoeven, L. (2017). Cognitive precursors of the developmental relation between lexical quality and reading comprehension in the intermediate elementary grades. Learning and Individual Differences, 59, 43–54. https://doi.org/10.1016/j.lindif.2017.08.009.
Tarchi, C. (2010). Reading comprehension of informative texts in secondary school: A focus on direct and indirect effects of reader’s prior knowledge. Learning and Individual Differences, 20, 415–420.
The Finnish National Board of Education. (2016). National core curriculum for basic education 2014. Helsinki: The Finnish National Board of Education.
Tilstra, J., McMaster, K., Van den Broek, P., Kendeou, P., & Rapp, D. (2009). Simple but complex: Components of the simple view of reading across grade levels. Journal of Research in Reading, 32, 383–401. https://doi.org/10.1111/j.1467-9817.2009.01401.x.
Torppa, M., Eklund, K., Sulkunen, S., Niemi, P., & Ahonen, T. (2018). Why do boys and girls perform differently on PISA reading in Finland? The effects of reading fluency, achievement behaviour, leisure reading and homework activity. Journal of Research in Reading, 41, 122–139. https://doi.org/10.1111/1467-9817.12103.
Wigfield, A., & Guthrie, J. (1997). Relations of children’s motivation for reading to the amount and breadth of their reading. Journal of Educational Psychology, 89, 420–432. https://doi.org/10.1037/0022-0663.89.3.420.
Wytek, R., Opgenoorth, E., & Presslich, O. (1984). Development of a new shortened version of Raven’s Matrices test for application and rough assessment of present intellectual capacity within psychopathological investigation. Psychopathology, 17, 49–58. https://doi.org/10.1159/000284003.
Yu, C. (2002). Evaluating cutoff criteria of model fit indices for latent variable models with binary and continuous outcomes (Doctoral dissertation). University of California, Los Angeles. Retrieved January 20, 2017, from http://www.statmodel.com/download/Yudissertation.pdf.
Zhang, Z., & Yuan, K. H. (2016). Robust coefficients alpha and omega and confidence intervals with outlying observations and missing data: Methods and software. Educational and Psychological Measurement, 76, 387–411.
Acknowledgements
Open access funding provided by University of Jyväskylä (JYU). We thank the teachers, students, and parents from the participating schools for their cooperation. We also thank Sini Hjelm, Sonja Tiri, and Paula Rahkonen for collecting and managing the data. We would also like to thank two anonymous reviewers for their constructive feedback. Last but not least, we thank the development team of the Online Research and Comprehension Assessment (ORCA).
Funding
This research was part of the project Internet and learning difficulties: Multidisciplinary approach for understanding information seeking in new media (eSeek; No. 274022), funded by the Academy of Finland.
Appendix
Scoring criteria for students’ online research and comprehension performance by component skills
| Sub-skill | Observed variables | Scores |
|---|---|---|
| Locating | Formulation of the first search query to locate OR2 | 0–2 p. |
| | Time spent locating OR2 | 0–4 p. |
| | Formulation of the second search query to locate OR4 | 0–2 p. |
| | Time spent locating OR4 | 0–4 p. |
| Confirming credibility | Evaluation of authors’ expertise in the academic online resource (OR2) | 0–3 p. |
| | Evaluation of credibility of information in the academic online resource (OR2) | 0–3 p. |
| Questioning credibility | Evaluation of authors’ expertise in the commercial online resource (OR3) | 0–3 p. |
| | Evaluation of credibility of information in the commercial online resource (OR3) | 0–3 p. |
| Identifying main ideas | Identifying main ideas from OR1: news page, reporting research results | 0–2 p. |
| | Identifying main ideas from OR2: academic online resource, answering FAQs on energy drinks with a neutral tone | 0–2 p. |
| | Identifying main ideas from OR3: commercial online resource, including only positive health effects of energy drinks in a press release | 0–2 p. |
| | Identifying main ideas from OR4: news page, presenting an expert statement | 0–2 p. |
| Synthesizing | Number of online resources used in the summary | 0–3 p. |
| | Integration of ideas in the summary: coherence, coverage, and use of connectives | 0–3 p. |
| Communicating | Quality of argumentation in the email: stance supported by online resources, number of reasons representing different perspectives | 0–5 p. |
| | Communicative practices in the email: awareness of the audience, clear and polite way of expressing oneself | 0–5 p. |
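For readers who wish to work with the rubric computationally, the following minimal sketch (our illustration, not part of the ORCA instrument; all names are ours) encodes the maximum points of each observed variable from the table above and derives the per-skill and overall maxima.

```python
# Illustrative sketch only: the ORC scoring rubric from the table above,
# encoded as a mapping from sub-skill to the maximum points of each
# observed variable. Names are ours, not part of the ORCA instrument.
RUBRIC_MAX_POINTS = {
    "Locating": [2, 4, 2, 4],                # two queries + two timing scores
    "Confirming credibility": [3, 3],        # expertise, credibility (OR2)
    "Questioning credibility": [3, 3],       # expertise, credibility (OR3)
    "Identifying main ideas": [2, 2, 2, 2],  # OR1–OR4
    "Synthesizing": [3, 3],                  # sources used, integration
    "Communicating": [5, 5],                 # argumentation, practices
}

def max_total(rubric: dict) -> int:
    """Return the maximum attainable total across all sub-skills."""
    return sum(sum(points) for points in rubric.values())

if __name__ == "__main__":
    for skill, points in RUBRIC_MAX_POINTS.items():
        print(f"{skill}: up to {sum(points)} p.")
    print(f"Total: up to {max_total(RUBRIC_MAX_POINTS)} p.")
```

Summing the point ranges in the table this way yields per-skill maxima of 12, 6, 6, 8, 6, and 10 points, respectively, and an overall maximum of 48 points.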
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.