Abstract
This study examined students’ ability to select relevant ideas from multiple online texts and integrate those ideas in their written products. Students (N = 162) used a web-based platform to complete an online inquiry task in which they read three texts presenting different perspectives on computer gaming and wrote an article for a school magazine on the issue based on these texts. Students selected two snippets from each text during reading and wrote their article with the selected snippets available. The selected snippets were scored according to their relevance for completing the task, and the written products were scored according to their integration quality. The results showed that most students performed well on the selection task. However, nearly half of the written products were characterized by poor integration quality. The hierarchical multiple regression analysis showed that students’ selection of relevant ideas from the texts contributed to their integration of information across texts over and above both reading fluency and reading comprehension skills. The study provides new evidence on the relationship between selection and integration when younger students work with multiple texts, and both theoretical and educational implications of these findings are discussed.
Introduction
Although students are often given writing assignments based on multiple texts, such multiple text-based writing is considered a great challenge across educational levels (Cumming et al., 2016; Mateos & Solé, 2009). One reason is that high-quality written responses include arguments that take multiple perspectives into account, with those perspectives linked using connectives that signal important relationships among them (Mateos et al., 2018). In addition, students should use available textual resources and draw well-justified conclusions based on their argumentation (Du & List, 2020). Finally, students should elaborate and transform their own ideas into written text, which seems to require considerable reflection and inferencing on the part of students (Wolfe & Goldman, 2005).
Previous research has mainly examined the integration of information across multiple texts among students at secondary and post-secondary levels. To contribute to the understanding of multiple text integration among younger students, this study aimed to shed light on sixth graders’ efforts to integrate information across multiple online texts, particularly on their ability to use ideas gathered from multiple texts in their written products. Such understanding may help educators design learning experiences with multiple textual resources that support younger students’ comprehension and use of multiple texts presented through different mediums. In addition, the study introduces a digital environment that includes several tools to facilitate students’ work with multiple online texts. Before turning to the present study, we will discuss theoretical assumptions regarding multiple text integration and the potential role of integrative writing tasks in this regard. Finally, we will review previous research on integration.
Integrating Ideas when Reading Multiple Texts
Integrating ideas when reading multiple texts is a complex process that requires identification of relevant information in single texts and integration of such information into a new whole in the service of meaning-making and text production (Barzilai et al., 2018; Griffin et al., 2012). Multiple text comprehension builds on single text comprehension (Cho & Afflerbach, 2017; Kiili et al., 2018; Mahlow et al., 2020), which involves constructing a coherent mental representation of the situation, issue, or phenomenon described in the text (Kintsch, 1988). When constructing a mental representation of a single text, readers engage in intratextual integration, that is, in identifying important ideas within a text and relating them to one another (Kintsch, 1988; Mateos et al., 2018). Further, skilled readers routinely use world knowledge to make causal or bridging inferences within a text (Kintsch, 1988; Singer, 2013).
In addition to single text comprehension, multiple text reading tasks require that readers construct a coherent representation across multiple texts (Britt et al., 2018; Cho & Afflerbach, 2017). As described in frameworks of multiple document literacy (Britt et al., 2018; List & Alexander, 2019; Perfetti et al., 1999), intertextual integration, that is, connecting complementary or contradictory contents across texts, is essential to building an integrated mental model. Such an integrated mental model includes key ideas from each text that may agree or disagree. Another main component of multiple text comprehension is creating a representation of source information (Perfetti et al., 1999; Rouet, 2006), which contains information about sources (e.g., authors or publications), links between sources and content included in the integrated mental model, and relationships between sources. Because several studies have shown that representing source information when writing from multiple texts is particularly rare among primary and secondary school students (Florit et al., 2020a; Kiili et al., 2020; Pérez et al., 2018), this study focused on the integration of textual content.
In addition to intratextual and intertextual integration, integrating textual information with prior knowledge has been given much emphasis in models of reading comprehension (Cervetti & Wright, 2020; McNamara & Magliano, 2009). Thus, prior knowledge has been shown to be an important individual difference factor in both single (Elbro & Buch-Iversen, 2013; Fisher & Frey, 2009; McNamara & Kintsch, 1996) and multiple text comprehension (Bråten et al., 2014; Davis et al., 2018; Le Bigot & Rouet, 2007). For example, prior knowledge facilitates intertextual strategic processing (Bråten et al., 2014) and contributes to an integrated understanding of a topic discussed across texts (Hagen et al., 2014). In summary, previous research has identified at least three essential forms of integration involved in multiple text comprehension: intratextual integration, intertextual integration, and text-prior knowledge integration (List, 2020).
One way educators may seek to foster these forms of integration is by assigning integrative writing tasks that require connecting and comparing ideas from multiple texts to serve the communicative purpose of the writing task (Barzilai et al., 2018; Florit et al., 2020a; Valenzuela & Castillo, 2022). Clearly formulated task assignments are important because students are supposed to use information provided in the task to construct a task model that, in turn, directs their further processing and task completion (List et al., 2019a; Rouet & Britt, 2011). The construction of a task model is one of the five iterative and overlapping core processes of multiple text comprehension described in the Multiple Documents Task-based Relevance Assessment and Content Extraction (MD-TRACE) model by Rouet and Britt (2011). The other core processes are assessing the information need; selecting, processing, and integrating relevant information from selected documents; constructing the task product; and evaluating the quality of the product in relation to the task. In this study, we focused on students’ selection and integration of information from online texts when responding to an integrative writing task.
Furthermore, when creating a written product from multiple texts, students can corroborate evidence, compare, and connect relevant ideas across the texts by (re)organizing ideas and using linguistic connectives (List et al., 2019b; Spivey & King, 1989). The use of connective words is associated with the quality of writing (Galloway & Uccelli, 2019; Latini et al., 2019; Taylor et al., 2019), with connective words assisting students in expressing additive connections (also, in addition), causal connections (because, therefore), and adversative connections (on the other hand, whereas) across ideas. For example, Taylor et al. (2019) found that middle school students’ use of adversative connective words was associated with more integrated written products.
Previous Research on Integration
When Primor and Katzir (2018) reviewed studies examining readers’ integration of information from multiple texts, they found that more than half of the 50 studies used expressive tasks (e.g., essay tasks) to investigate how readers select information from multiple texts, form intertextual relations, or draw inferences across texts. Of note is, however, that few of these studies investigated readers younger than 15 years of age.
Research including younger students has indicated that they find integration of information across multiple texts challenging (Blaum et al., 2017; Florit et al., 2020a; Sabatini et al., 2014). For example, in a much-cited think-aloud study, sixth-graders seldom engaged in intertextual integration while reading multiple texts on a historical topic (Wolfe & Goldman, 2005). Similarly, students have been shown to face challenges in integrating ideas when tasked to compose multiple source-based essays (Florit et al., 2020a; Kiili et al., 2020). In Florit et al.’s (2020a) study, fourth graders wrote two essays, one on the healthiness of chocolate and one on the effects of video gaming. Only 18 and 31% of the students, respectively, were found to include opposing perspectives in their essays. Kiili et al. (2020) found that one-third of the sixth graders included ideas only from one out of four textual resources or did not refer to any text at all in their essays. However, although multiple text integration is challenging for primary school students, there is also some evidence that students as young as nine years old may spontaneously attempt to integrate information across texts when the texts are optimal for integration, and that even struggling readers can integrate information across texts when connections between texts are salient (Beker et al., 2019).
Multiple text comprehension and integration are affected by more basic reading skills. Thus, previous studies of primary school students have established that basic reading skills, such as reading fluency (Florit et al., 2020a; Kiili et al., 2020) and reading comprehension skills (Florit et al., 2020a; Kanniainen et al., 2019), contribute to multiple text comprehension. There is also evidence that basic reading skills, such as word recognition, contribute to multiple text comprehension even among upper secondary school students (Bråten et al., 2013). Further, working memory (Banas & Sanchez, 2012; Braasch et al., 2014), strategic processing (Goldman et al., 2012; Hagen et al., 2014), and comprehension monitoring (Florit et al., 2020a) have been shown to facilitate multiple text comprehension. Among these individual cognitive factors, we focused on the role played by reading fluency and reading comprehension skills in the integration of information across multiple texts in the present study.
The Present Study
Given the scarcity of prior empirical work on primary school students’ multiple text integration, we aimed to contribute to building a research base in this area by asking 162 sixth-graders to complete a computer-based inquiry task about computer gaming in a closed information environment. Students searched for three relevant online texts with a search engine, read the three relevant texts (either self-selected or assigned to them), selected relevant information from these texts, evaluated the texts, and created a written product based on the texts. In the present study, we focus on the selection of relevant information from the texts and the composition of the written product. Specifically, our study addressed the following questions:
1. To what extent were students able to select relevant ideas from the available online texts?
2. To what extent did students integrate ideas in their written products, and which types of integration did they perform?
3. To what extent did reading fluency, reading comprehension, and selection of relevant ideas contribute to students’ integration of ideas in their written products?
Method
Participants
In the present study, we used convenience sampling and recruited schools and teachers based on their opportunities and willingness to participate. Altogether, 179 students took part in the study, but 17 students were excluded because they did not complete all the phases of the task that were relevant to this study (see the section Task and Digital Platform). The remaining 162 students attended 10 different Finnish elementary schools and ranged in age from 11–14 years, with most of them (80%) being 12 years of age. Of the students, 77 students were boys and 85 were girls. Of note is that in the inclusive school system of Finland, students with special needs are also part of regular classes. Almost all students (99%) had at least one device with Internet access at home.
All students completed the inquiry task as part of regular school work. The task was aligned with the objectives of the curriculum, which emphasize the ability to seek information from different sources, identify different perspectives on examined issues, acquire and share information, and produce diverse texts (Finnish National Core Curriculum for Basic Education, 2014).
The students’ guardians received an information letter that included details about the participation, benefits, and risks, as well as a request for permission to use students’ responses for research purposes. Informed consent was obtained from students’ guardians. Students were informed that participation in the study (i.e., using their responses for research purposes) was voluntary and that they could withdraw their participation whenever they wished.
Reading Measures
Reading Fluency
Reading fluency was measured with a time-limited word chain test (Holopainen et al., 2004). The test contained 25 four-word chains written without inter-word spaces, and students’ task was to separate as many words as possible in 90 s by drawing vertical lines between the words. Students’ scores were their total number of correctly separated words, with a maximum score of 100. The internal consistency reliability (Cronbach’s α) for students’ scores on the word chain test was 0.96.
Reading Comprehension
To assess students’ reading comprehension, we used one test from a reading comprehension test battery that includes four parallel tests (Vauras et al., 2017; see Alisaari et al., 2018; Salo et al., 2022). This reading comprehension test consisted of three open-ended questions and a cloze task with 17 gaps. The open-ended questions measured students’ skills in recognizing important information and, to some extent, integrating this information into a coherent written response. The cloze task measured students’ skills in locating and using textual information appropriately. In addition, inference skills were required for the successful completion of the cloze task.
Students were given up to seven minutes to read a 227-word long expository text titled “The diversity of nature is disappearing.” Afterward, they answered three open-ended questions about the main ideas presented in the text: “How does global warming threaten coral reefs?”, “How does global warming affect nature’s diversity?”, and “The text mentions three important ways that should be used to protect nature’s diversity. Which are they?” To successfully answer these questions, students needed to use the whole text. The text was available to students when answering the open-ended questions. They had 15 min to answer these questions.
After completing the open-ended questions, students used the same expository text to complete the cloze task by filling in the appropriate words (17 gaps) within 15 min. The text used for the cloze task included the same information as the expository text but differed in wording and the organization of the content. For example, the gaps in “[___________], which speed up global warming, will multiply when rainforests won’t [___________]” can be filled by locating and using the following sentences in the expository text: “The felling of rainforests will increase greenhouse gas emissions manifold globally. Because of this, rainforests will be unable to absorb and cleanse greenhouse gasses.”
Students’ responses to the open-ended questions were scored based on the amount of relevant information they included. The scores ranged from 0 to 6 points on each question, yielding a maximum score of 18 points. Inter-rater reliability was established for these scores by two raters who independently scored 68 students’ responses (Hämäläinen et al., 2020). Cohen’s kappa was 0.90, 0.68, and 0.95 for responses to the first, second, and third questions, respectively. All disagreements were resolved through discussion. On the cloze test, each correctly filled gap was worth one or two points, yielding a maximum score of 27 points.
Task and Digital Platform
We examined students’ integration of ideas from multiple online texts as part of a larger online inquiry task. Students were tasked to write an article for a school magazine with the title: “Computer gaming can have advantages and disadvantages.” They were also asked to write a recommendation on how children should use computer games. They were asked to search for three online texts and to write their articles based on these texts. Students could revisit the task assignment through a navigation bar at any time during the completion of the task. The task assignment is presented in Appendix A.
Students completed the task on a web-based platform called Neurone (González‐Ibáñez et al., 2017). On this platform, students were guided by two virtual students: one who guided them in using the tool embedded in the system and another who gave them a task assignment and several sub-task assignments during the online inquiry task.
Students worked in four time-limited phases consistent with the phases of the online research and comprehension model by Leu and colleagues (Leu et al., 2013, 2015): (1) information search and selection of relevant online texts using a custom search engine (8 min), (2) reading of online texts and selecting relevant ideas (i.e., snippets) using a snippet selection tool (12 min), (3) credibility evaluation of online texts (7 min), and (4) composing the article with the help of the selected snippets (15 min). The time limits for the phases ensured that students would have a chance to complete all phases within a 45-min lesson, which was the time available for this assignment in the schools. At the beginning of each phase, students received instructions concerning the sub-task at hand.
In the first phase, students searched for relevant online texts using a search engine in a closed search space that included links to three relevant and 17 irrelevant texts. The irrelevant links included keywords that appeared in the task assignment, but the texts concerned issues that were not relevant to the task at hand, such as the history of computer gaming. Students were tasked to select three online texts. After submitting their selections, students received feedback indicating how many relevant online texts they had selected. A thumbs-up icon next to a page name indicated a relevant selection, whereas a thumbs-down icon indicated an irrelevant selection. If students succeeded in selecting all three relevant online texts, they proceeded to the next phase of the task. If one or more selections were irrelevant, students could try again until they located the correct pages or reached the time limit. If a student could not select the relevant texts within the time limit, the student was provided with the correct texts. They were informed that they would be working on the three most relevant online texts so that they would understand why these texts might differ from their selections. This procedure ensured that all students had the same materials to read. Once students had successfully completed a phase or reached the time limit, the program advanced to the next phase. Our study focused on phases two and four, that is, on the selection of relevant ideas and composing of the article.
In the second phase, students were instructed (see Appendix B) to carefully consider what was important on each page and select two relevant ideas (i.e., snippets) from each online text with a snippet tool, thus selecting six snippets altogether. Students were not allowed to select more than two snippets per text, and they had to discard previous selections if they wanted to change their selections. Students were informed that each selection could consist of a maximum of 20 words. Figure 1 presents the snippet selection tool. As can be seen, students selected snippets by highlighting a section of the text and saving it by clicking a save button. The selected snippets appeared to the right of the online text. If selections were longer than allowed, the system saved only the first 20 words.
In the third phase, students evaluated the credibility of the texts. The texts were presented one at a time, and students rated each text by awarding it between 1 and 5 stars depending on their evaluation of its credibility. They were also asked to justify their credibility ratings in writing. After rating all texts or reaching the time limit, students proceeded to phase four.
In the fourth phase, students were asked to compose the article with the help of the selected snippets (see Appendix C). They were also encouraged to write the article using their own words. Figure 2 presents the writing tool students used to write and edit their texts. When they first entered the writing space, all their selected snippets were visible on the right side of the writing space. Students could choose to see all their snippets simultaneously or sorted by online texts, such that only two snippets from one text were visible at a time. By double-clicking a snippet, students could open the text page from which the snippet was selected. On that page, the selected snippets were highlighted so that students could see them in their textual context. The program did not allow students to copy and paste text from the snippets into their articles. Based on a previous study with the same age group (Kiili et al., 2020), we expected that some students might write very short responses, even only one sentence. To avoid this, the minimum length of students’ written products was set to 50 words. The tool also displayed a word count, allowing students to monitor their progress in terms of text production. Throughout the inquiry task, students received a reminder when there were three minutes left to finish a sub-task.
Online Texts
Table 1 shows a summary of the three online texts that students read. The texts varied in their position on computer gaming, with one text for computer gaming, one against, and one representing positions both for and against computer gaming. The texts were designed for the purpose of this study to ensure that each text had unique content. Consequently, the texts discussed computer gaming from three different perspectives: health, learning, and behavior. Students could be assumed to have some knowledge or experience relevant to the content of the texts, for example, regarding learning through games, games for exercising, consequences of extensive gaming, and violent games. We also ensured that the vocabulary used in the texts was appropriate for this age level. All the texts provided information regarding the potential outcomes of playing computer games—negative, positive, or both—thus allowing students to integrate reasons within and across the positions. Each text had four paragraphs. The two middle paragraphs contained information about the advantages and disadvantages of computer gaming, while the first and last paragraphs were introductory or contextual. Finally, the texts were similar in length (ranging from 148 to 175 words).
Data Analysis and Dependent Measures
Snippet Selection
In scoring students’ snippet selections, we first identified every unique snippet they selected (n = 159). Then, we scored these snippets according to their relevance to the task on a scale ranging from 0 to 3. The scoring criteria that we used are shown in Table 2. Two raters independently scored all the 159 unique snippets, resulting in a Cohen’s kappa of 0.85. All disagreements were resolved through discussion. When all the unique snippets had been scored, we calculated a snippet score for each student. Because students had selected six snippets across the three texts, the maximum score for the snippet selection was 18 points.
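The inter-rater agreement reported above can be sketched as follows. This is an illustrative reimplementation, not the authors’ analysis code, and the two raters’ 0–3 relevance ratings shown are hypothetical:

```python
# Cohen's kappa for two raters' categorical ratings (illustrative sketch).
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two equal-length rating lists."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical relevance scores (0-3 scale) from two raters:
a = [3, 3, 2, 0, 1, 3, 2, 2, 1, 0]
b = [3, 3, 2, 0, 2, 3, 2, 2, 1, 0]
print(round(cohen_kappa(a, b), 2))  # prints 0.86
```

A kappa of 0.85, as obtained for the snippet scores, is conventionally read as near-perfect agreement; values are lower than raw percent agreement because chance-level matches are discounted.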
Integration of Ideas
The scoring of students’ written products (i.e., articles for a school magazine) in terms of integration proceeded in three phases. In Phase 1, we segmented them into thematic units, with a thematic unit defined as an idea or a chain of connected ideas. A thematic unit can consist of ideas within or across the perspectives (i.e., health, learning, and behavior) represented in the texts. However, the ideas can also originate from the student's prior knowledge or be generated in response to the task of offering a recommendation.
Within a thematic unit, ideas can be connected in several ways, such as by using connective words not copied from the original text, repetition of key concepts, organization of ideas, or elaborations by means of examples. A thematic unit ends when a student switches to a thematically different idea without connecting it to the preceding idea by any means (e.g., connective words). Table 3 presents examples of the thematic units. Because connective words were the most important means of integrating ideas, we have underlined the connective words that students themselves added to their written products (i.e., did not copy from the source text). Example 1 in Table 3 illustrates how a student used additive connectors to integrate ideas across one text and across the text and prior knowledge. Example 5 illustrates how a student used an adversative connector to integrate benefits and disadvantages of computer gaming across the two texts representing opposite positions. In brief, when identifying the thematic units in a written product, we identified all the integrative elements that students had created themselves and evaluated their appropriate use case by case.
In Phase 2, we identified the sources of ideas in each thematic unit. These included the selected snippets, other parts of the text that were not included in the snippets, prior knowledge, and ideas generated in response to the task (e.g., as part of a recommendation).
In Phase 3, we determined based on sources of ideas whether (0 = no; 1 = yes) a thematic unit included (1) intratextual integration, (2) intertextual integration, (3) integration of textual information and prior knowledge, and (4) a recommendation justified with textual information. Students were only awarded a point for intratextual integration if they displayed effort to integrate ideas. Thus, if they combined snippets or text ideas that were thematically connected in an online text without creating any intratextual connections themselves, they were not given a point. Of note also is that a recommendation had to be justified with textual information to be awarded one point. Table 3 further describes each type of integration, also providing examples of how students combined ideas from different sources within each integration type.
To estimate the reliability of our identification of thematic units, the first and second authors collaboratively identified the thematic units in 25% of the written products. Then, the same authors independently identified the thematic units in 20% of the written products. The borders of the thematic units, as determined by the first author, were used as a reference when calculating the percentage of agreement. The agreement was 80%. Disagreements were resolved through discussion, and the first author identified the thematic units in the remaining 55% of the written products.
Overall Integration Quality
An overall integration quality score was calculated for each student based on the written product. In doing this, we first determined whether the written product included positions both for and against computer gaming, indicating that both sides mentioned in the task assignment had been covered. Second, based on our analysis of integration types, we calculated the number of indications of integration across the thematic units included in the written product. Finally, we determined whether the written product included a justified recommendation, indicating that students had applied text content in responding to the second part of the task prompt (i.e., Describe in your article how children should use computer games; see Appendix A). Table 4 shows how the positions, integration, and recommendation were considered when calculating an overall integration quality score that could vary from 0 to 6 points based on the written products.
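The logic of the composite score can be sketched as follows. Note that this is a hypothetical illustration, not the actual rubric: the exact point allocation is given in Table 4, so the weights below (one point for covering both positions, up to four for integration indications, one for a justified recommendation) are assumptions chosen only to produce the reported 0–6 range:

```python
# Hypothetical sketch of the overall integration quality score (0-6).
# The weights are illustrative assumptions; see Table 4 for the actual rubric.
def integration_quality(both_positions, n_integrations, justified_recommendation):
    """both_positions: bool, product covers positions for and against.
    n_integrations: count of integration indications across thematic units.
    justified_recommendation: bool, recommendation justified with text content."""
    score = 0
    if both_positions:
        score += 1
    # Contribution of integration indications, capped (assumed 0-4 points).
    score += min(n_integrations, 4)
    if justified_recommendation:
        score += 1
    return score

print(integration_quality(True, 3, False))  # prints 4
```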
Procedure
The data for this study were collected in the classrooms during two 45-min lessons on different days. Two researchers were present in both lessons. In the first lesson, students completed the reading fluency and reading comprehension measures. In the second lesson, they completed the online inquiry task. Students worked on their computers, and the researchers were present throughout the task to help students with technical challenges.
Results
Selection of Relevant Ideas
Students’ snippet selections indicated that they, on average, were able to select relevant ideas from the online texts. Specifically, their mean score for the snippet selections was 16.00 (SD = 2.41) out of 18 points. Still, students’ scores ranged from 4 to 18, and five students scored below 12 points. Of these five students, three may not have been engaged in the task as they spent only a few minutes on this sub-task (with 12 min available), whereas the two others spent more than 10 min on the task, suggesting that these students struggled with the selection task.
Integration of Ideas
The mean length of the 162 written products was 70.36 words (SD = 25.11). These written products consisted of 578 thematic units (M = 3.62, SD = 1.21). Table 5 shows the different types of integration identified in the thematic units. As can be seen, the most common type of integration was intratextual, accounting for 51.61% of all instances of integration. One quarter of all the thematic units included this type of integration. Further, integration of textual information and prior knowledge was the second most common type of integration, accounting for 26.16% of the instances of integration. This type of integration was observed in 12.63% of the thematic units. Intertextual integration, which accounted for 15.05% of the instances of integration, was observed in only 7.17% of the thematic units.
As much as 37.8% of the 578 thematic units included one or more types of integration. Among the thematic units that included integration, 76.58% included one type of integration, 21.62% included two types of integration, and 1.80% included three or four types of integration.
Integration Quality
The mean of students’ overall integration quality scores was 2.50 (SD = 1.44). A small proportion (6.2%) of the written products neither included both positions nor showed any indications of integration, thus obtaining a score of 0. Further, 45.1% of the written products showed only limited integration, obtaining scores of 1 or 2, and 39.4% obtained scores of 3 or 4. Finally, 9.3% of students’ written products were rich in terms of integration and obtained scores of 5 or 6.
Prediction of Integration
Table 6 presents descriptive statistics and zero-order correlations for the measured variables. As can be seen, the integration of ideas in the written products was positively correlated with the snippet selection scores and the reading measures. To investigate the contribution of reading skills and snippet selection to the integration of ideas in the written products, we performed a hierarchical multiple regression analysis with students’ overall integration quality score as the dependent measure. In the first step, we entered reading comprehension (measured with a cloze test and with open questions) and reading fluency into the equation, and in the second step, we entered the snippet selection scores (see Table 7). Because some distributions were negatively skewed, we also performed the regression analysis without extreme values and estimated the potential influence of the skewness of the snippet selection scores on the results.
In the first step, the reading measures, taken together, explained 16.2% of the variance in students’ overall integration quality (R2 = 0.162, Fchange (3, 154) = 9.822, p < 0.001). Reading comprehension measured with open questions and reading fluency were unique positive predictors in this step. After entering snippet selection scores in the second step, we observed a statistically significant 4.3% increase in the explained variance, with R2 = 0.205, Fchange (1, 153) = 9.477, p = 0.005, after the second step. In the second step, not only reading comprehension measured with open questions and reading fluency but also snippet selection were unique positive predictors of students’ integration of ideas in the written products. Although the explained variance was somewhat smaller, the same analysis performed without extreme values of the snippet selection score gave similar results, with R2 = 0.144, Fchange (3, 152) = 8.538, p < 0.001, after the first step, and R2 = 0.174, Fchange (1, 151) = 5.485, p = 0.020, after the second step.
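The R² increments reported above follow the standard F-change test for nested regression models. The sketch below, using simulated data and illustrative variable names (none drawn from the study’s actual dataset), shows how the stepwise R² values and the F-change statistic are computed:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (intercept added automatically)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def f_change(r2_full, r2_reduced, n, k_full, k_added):
    """F statistic for the R^2 increment when k_added predictors join a
    model that ends up with k_full predictors; df = (k_added, n - k_full - 1)."""
    return ((r2_full - r2_reduced) / k_added) / ((1.0 - r2_full) / (n - k_full - 1))

# Simulated data: three reading measures entered at step 1,
# snippet selection added at step 2 (all names and effects hypothetical).
rng = np.random.default_rng(42)
n = 158
reading = rng.normal(size=(n, 3))      # fluency + two comprehension scores
selection = rng.normal(size=n)
integration = reading @ np.array([0.4, 0.3, 0.1]) + 0.3 * selection + rng.normal(size=n)

r2_step1 = r_squared(reading, integration)                                # step 1
r2_step2 = r_squared(np.column_stack([reading, selection]), integration)  # step 2
F = f_change(r2_step2, r2_step1, n, k_full=4, k_added=1)                  # df = (1, 153)
```

With three predictors at step 1 and one added at step 2, the denominator degrees of freedom match those reported above (154 and 153), implying roughly 158 complete cases.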
Discussion
This study examined how sixth graders selected and integrated information from multiple online texts. In particular, it provides new insights into the integration skills of primary school students, who represent a population that has received less attention compared to older students (Barzilai et al., 2018; Primor & Katzir, 2018). The study also contributes to methodology by introducing a new unit of analysis labeled “thematic unit,” which allows for observation of integration at a level beyond single clauses or idea units (cf., Gil et al., 2010; Salmerón et al., 2020). Further, the study introduced a snippet tool that can facilitate the selection of relevant ideas, as well as a writing tool that offers an opportunity to navigate across the selected ideas and observe them in their original context. Both tools were designed to save students’ cognitive resources for integration. In the following, we discuss the results, address the limitations of our study, and offer some instructional recommendations based on the findings.
In integrating information from multiple texts, readers need to select relevant information from single texts and integrate that information into a coherent representation. Accordingly, we first asked to what extent students were able to select relevant ideas from the available online texts. Most of the students performed quite well when using the snippet tool to select relevant ideas from the texts. However, some students faced difficulties identifying relevant ideas, although the available texts included only a limited amount of irrelevant information. Low performance in selecting important information may also relate to a lack of engagement, because students with low scores spent relatively little time on the task. Because note-taking seems to be a challenging activity (Bonner & Holliday, 2006; Peverly et al., 2003) that may also be cumbersome and time-consuming for younger readers, the snippet tool was designed to facilitate the selection of relevant ideas and the allocation of cognitive resources to the creation of an integrated task product. As such, the snippet tool can be assumed to simplify the more complex note-taking process for this age group.
Our second question concerned students’ ability to integrate ideas in their post-reading written products. Although students, on average, performed well in selecting relevant ideas, integration of ideas was found to be more challenging. Nearly half (45%) of the written products included only one or two indications of integration or no integration at all. This result is not surprising, however, given that the integration of ideas when writing from multiple texts has been shown to be challenging even for upper secondary school students (Kiili & Leu, 2019) and adult readers (Linderholm et al., 2014; List & Du, 2021). Many primary and secondary school students may rely quite heavily on copying or listing separate ideas in their written products (Kiili et al., 2020; Merkt et al., 2017). Still, there were large individual differences in students’ integration performance, with some sixth-grade students composing well-written, integrated texts for their age (see also Blaum et al., 2017; Florit et al., 2020b).
When integration was present in written products, students most commonly integrated ideas within single texts, with integration of information across texts observed infrequently. Thus, whereas intratextual integration was observed in 25% of the thematic units, intertextual integration was observed in only 7%. Such absence of intertextual integration when writing from multiple texts is consistent with previous research (Florit et al., 2020a; Solé et al., 2013).
Lack of intertextual integration can be due to several reasons. First, each text provided information about gaming from a single perspective (health, behavior, or learning), and it seems likely that making connections within one perspective was easier than making connections across perspectives. Relatedly, some studies have indicated that integration across contradictory texts is more challenging than across supporting texts (List et al., 2021). In the study by Kiili et al. (2020), the contextual overlap across the texts seemed to facilitate sixth graders’ intertextual integration, with students’ written responses including slightly more intertextual than intratextual connections.
Second, the task assignment in this study included two parts. The first part asked students to consider the advantages and disadvantages of computer gaming, whereas the second part asked them to recommend how children should use computer games. The second part, in particular, was intended to prompt students to provide recommendations drawing on reasons presented across the texts. However, most of the students concentrated on the first part of the task, and only a few students (15%) included a justified recommendation in their written products. It is conceivable that the given title (i.e., Computer gaming may have both advantages and disadvantages) encouraged students to list advantages and disadvantages rather than to make connections across the texts. In any case, concentration on only the first part of the task assignment suggests that students may have formed an incomplete task model (Rouet & Britt, 2011).
Although our study did not examine associations between prior knowledge and integration performance, we did explore to what extent students integrated prior knowledge in their written responses. Specifically, integration of prior knowledge and textual content was observed in 13% of the thematic units. Of note is that we did not ask students to reflect on their prior knowledge before the task, an activity that may facilitate the application of prior knowledge during task completion (Kiili & Leu, 2019). Presumably, facilitating the use of prior knowledge during efforts to integrate information also would have supported students' understanding of textual content (Gil et al., 2010; Le Bigot & Rouet, 2007).
In addressing our third research question, concerning the contributions of reading skills and selection of relevant ideas to integration performance, we found that both reading fluency and reading comprehension skills were positively associated with integration. It is noteworthy that reading comprehension assessed with open questions was a unique positive predictor of integration performance, whereas reading comprehension measured with the cloze test was not. This suggests that compared to the cloze test, responding to the open-ended comprehension questions required skills more closely related to the composition of a written product from multiple texts.
Further, although basic reading skills may be foundational in efforts to integrate ideas across multiple texts (see also Florit et al., 2020a), such skills explained only a limited part (16%) of students’ integration performance. This suggests that integration across multiple texts requires competence beyond basic reading skills, which needs to be explicitly taught to students. Interestingly, the selection of relevant ideas explained variance in integration performance over and above the basic reading skills. Although the increment in the explained variance was modest, this result suggests that the successful selection of relevant ideas is an important step in creating an integrated written task product (cf. Cho & Afflerbach, 2017). Selecting relevant information from online texts may play a more prominent role in authentic online contexts where textual materials are more diverse. In the present study, the selection of relevant ideas was scaffolded by providing students with relevant texts containing information that can be regarded as credible. Thus, future studies could examine the selection of relevant ideas and subsequent integration of ideas in an authentic, more complex online context in which students’ search and evaluation skills also may influence the quality of the selection of relevant ideas and the written products.
This study also contributes methodologically to the literature on writing from multiple texts. Several previous studies have used idea units in examining integration across texts (Gil et al., 2010; Kiili et al., 2020; Salmerón et al., 2020); however, to our knowledge, this study is the first to use thematic units in analyzing integration performance. Whereas an idea unit contains a main verb that expresses an event, activity, or state (Magliano et al., 1999), a thematic unit represents an idea or a chain of connected ideas. Thus, a thematic unit is a broader unit of analysis compared to the relatively restricted idea unit used in prior research. As such, it may be better suited to reveal students’ attempts to integrate content beyond single ideas.
Limitations
The present study has several limitations worth mentioning. First, because students’ snippet selection scores were generally high, the selection measure may not have distinguished properly between the most skilled students and the rest. In the regression analysis, such a ceiling effect might have influenced the relationship between the snippet selection scores and integration performance, leading to a spuriously significant result (Austin & Brunner, 2003). However, a more detailed inspection of the skewness showed that there was a reliable statistical connection between snippet selection and integration performance in this study. Still, future research should try to better understand the relationship between the identification of relevant textual ideas and students’ integration performance when writing from multiple texts.
Second, students were given 15 min to complete the writing task, and some students did not submit their written products within the given time frame. Their integration scores might therefore not reflect their full potential. The average integration score for these students’ written products was 2.03, which was slightly below the average for the entire sample (2.50). However, the average length of these students’ written products was comparable to that of the entire sample (67 vs. 70 words). The time limit for each phase was set for practical reasons to ensure that students had the opportunity to respond to all phases of the task. To further improve the quality of the data, future researchers should try to monitor students’ activities during task performance more closely.
Third, prior knowledge and basic writing skills were not measured. Although we did not include a prior knowledge measure, our analysis focused on the spontaneous integration of prior knowledge into students’ written products. With respect to basic writing skills, future research in this area should preferably include an independent measure of such skills.
Lastly, we examined only basic reading skills and the selection of information from texts in relation to students’ integration performance and, thus, did not examine how other phases of online inquiry might be associated with students’ integration. However, if students did not succeed in locating relevant online texts, those texts were provided to them. Thus, poor text selection did not affect the sources available for writing, and students hardly used their credibility justifications in their written products (see Hämäläinen et al., 2020; Footnote 2). Further, as suggested by List (2020), emotional and motivational aspects should be considered in addition to cognitive skills when examining multiple text integration. Therefore, future studies could also examine the role of variables such as topic interest, behavioral engagement, and self-efficacy when primary school students engage in multiple text integration tasks.
Instructional Implications
Few studies have investigated how multiple text integration skills can be taught in primary school (Barzilai et al., 2018). One possible reason is the high demands of multiple text integration. Still, there is some evidence that such integration skills can be successfully promoted among upper primary school students (e.g., Martínez et al., 2015). The results of the present study also suggest that students need guidance in intertextual integration. Because intertextual integration seems to be easier across complementary than conflicting texts (List et al., 2021), it might be fruitful to start practicing integration with complementary texts. It might also be profitable to begin practicing integration across only two online texts (e.g., Kirkpatrick & Klein, 2009; Martínez et al., 2015) before gradually increasing the difficulty level by introducing additional texts. Presumably, students would also benefit from instruction that guides them through the key processes of integration, that is, selecting information from online texts, organizing ideas, and linking ideas within and across texts (van Ockenburg et al., 2019). Such guidance can include explicit instruction, modeling, written prompts, and digital scaffolds (Barzilai et al., 2018), and facilitated group discussions (Wissinger & De La Paz, 2015). Finally, students may benefit from knowledge about different writing strategies (e.g., planning and revising) in selecting and reflecting on their own strategies (van Ockenburg et al., 2021).
A close reading of students’ written products suggested that only a few students were proficient in using connective words. Students could therefore benefit from explicit instruction in how connectives function and how different types of connectives can be used in writing (Taylor et al., 2019). For example, teachers can model the use of connectives by adapting the think-aloud method (Coiro, 2011; Davey, 1983), such as when reading a short text aloud, highlighting main ideas, and comparing and linking these ideas by using different connectives. Further, explicit instruction could explain how texts can be structured, which, in turn, may help students cluster ideas more meaningfully. For example, when writing compare-contrast essays, students could take advantage of textual organizers, stating the situations, topics, or phenomena that are being compared as well as how they are similar and how they differ (Hammann & Stevens, 2003).
Absence of integration may be related to the fact that students often rely on identifying and copying ideas from source texts. Thus, encouraging students to paraphrase and elaborate on selected content when writing, and teaching them how to do so, could promote a deeper understanding of the texts. Because students may be reluctant to invest time and effort in elaborating their ideas in writing (List & Alexander, 2018), such skills could be practiced by asking them to rewrite short passages or answer open questions.
In the present study, we designed a snippet tool that afforded a quick and easy way to select and save relevant information from the source texts and thus facilitated the allocation of resources to the writing process. However, to fully benefit from this type of tool and avoid using it for copying and pasting, students probably need to be instructed in how to process the selected information further. Students could, for example, be asked to select the snippets individually and then create a written product collaboratively. In this process, they could discuss the selected snippets with peers, compare each other’s selections, and collaboratively identify connections between the selected snippets and organize them conceptually before starting on the actual writing process. Such discussions about the text contents could also facilitate students’ expression of the main ideas in their own words. In addition to the snippet tool, we designed the writing tool with which students were able to navigate across the selected snippets. By clicking on a snippet, students could see it in its textual context, which reduced the demands on memory. Future studies could examine how students use these affordances.
That many participants in this study responded adequately to only part of the assignment may also suggest that their task model was incomplete. Readers’ task model can be considered important in directing their focus and allocation of resources, thus guiding the reading process (Rouet & Britt, 2011). Teachers could support students in interpreting the task by reading the task assignment in class and having them collaboratively identify important task features. Further, students should be reminded that they can revisit the task assignment to check whether they are on the right track. All told, multiple text integration represents so many challenges to primary school students that they should be given explicit instruction and support in how to master the various aspects of this complex task.
Data Availability
Data and procedures have met all ethical guidelines and standards of our institutions. Informed consent was requested from students’ guardian(s).
Notes
1. The sample of students included in this study also contributed to data reported by Hämäläinen et al. (2020). However, the research questions, analyses, and findings reported in this article are unique to this study.
2. Sourcing in the written products was also rare. None of the students referred to the author or the publisher of the texts, while some students (n = 34) included names or institutions mentioned in the text, mainly as a by-product of copying content from the snippets in their written products.
References
Alisaari, J., Turunen, T., Kajamies, A., Korpela, M., & Hurme, T.-R. (2018). Reading comprehension in digital and printed texts. L1-Educational Studies in Language and Literature, 18, 1–18. https://doi.org/10.17239/L1ESLL-2018.18.01.15
Austin, P. C., & Brunner, L. J. (2003). Type I error inflation in the presence of a ceiling effect. The American Statistician, 57(2), 97–104. https://doi.org/10.1198/0003130031450
Banas, S., & Sanchez, C. A. (2012). Working memory capacity and learning underlying conceptual relationships across multiple documents. Applied Cognitive Psychology, 26(4), 594–600. https://doi.org/10.1002/acp.2834
Barzilai, S., & Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognition and Instruction, 30(1), 39–85. https://doi.org/10.1080/07370008.2011.636495
Barzilai, S., Zohar, A., & Mor-Hagani, S. (2018). Promoting integration of multiple texts: A review of instructional approaches and practices. Educational Psychology Review, 30(3), 973–999. https://doi.org/10.1007/s10648-018-9436-8
Beker, K., van den Broek, P., & Jolles, D. (2019). Children’s integration of information across texts: Reading processes and knowledge presentations. Reading and Writing, 32(3), 663–687. https://doi.org/10.1007/s11145-018-9879-9
Blaum, D., Griffin, T. D., Wiley, J., & Britt, M. A. (2017). Thinking about global warming: Effect of policy-related documents and prompts on learning about causes of climate change. Discourse Processes, 54(4), 303–316. https://doi.org/10.1080/0163853X.2015.1136169
Bonner, J. M., & Holliday, W. G. (2006). How college science students engage in note-taking strategies. Journal of Research in Science Teaching, 43(8), 786–818. https://doi.org/10.1002/tea.20115
Braasch, J. L. G., Bråten, I., Strømsø, H. I., & Anmarkrud, Ø. (2014). Incremental theories of intelligence predict multiple document comprehension. Learning and Individual Differences, 31, 11–20. https://doi.org/10.1016/j.lindif.2013.12.012
Bråten, I., Anmarkrud, Ø., Brandmo, C., & Strømsø, H. I. (2014). Developing and testing a model of direct and indirect relationships between individual differences, processing and multiple-text comprehension. Learning and Instruction, 30, 9–24. https://doi.org/10.1016/j.learninstruc.2013.11.002
Bråten, I., Ferguson, L. E., Anmarkrud, Ø., & Strømsø, H. I. (2013). Prediction of learning and comprehension when adolescents read multiple texts: The roles of word-level processing, strategic approach, and reading motivation. Reading and Writing, 26, 321–348. https://doi.org/10.1007/s11145-012-9371-x
Britt, M. A., Rouet, J. F., & Durik, A. M. (2018). Literacy beyond text comprehension: A theory of purposeful reading. Routledge.
Cervetti, G. N., & Wright, T. S. (2020). The role of knowledge in understanding and learning from text. In E. B. Moje, P. Afflerbach, P. Enciso, & N. K. Leseaux (Eds.), Handbook of reading research (Vol. 5, pp. 237–260). Routledge.
Cho, B.-Y., & Afflerbach, P. (2017). An evolving perspective of constructively responsive reading comprehension strategies in multilayered digital text environments. In S. E. Israel (Ed.), Handbook of research on reading comprehension (2nd ed., pp. 109–134). Guilford.
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
Coiro, J. (2011). Talking about reading as thinking: Modeling the hidden complexities of online reading comprehension. Theory into Practice, 50(2), 107–115. https://doi.org/10.1080/00405841.2011.558435
Cumming, A., Lai, C., & Cho, H. (2016). Students’ writing from sources for academic purposes: A synthesis of recent research. Journal of English for Academic Purposes, 23, 47–58. https://doi.org/10.1016/j.jeap.2016.06.002
Davey, B. (1983). Think aloud: Modeling the cognitive processes of reading comprehension. Journal of Reading, 27(1), 44–47.
Davis, D. S., Huang, B., & Yi, T. (2018). Making sense of science texts: A mixed method examination of predictors and processes of multiple-text comprehension. Reading Research Quarterly, 52(2), 227–252. https://doi.org/10.1002/rrq.162
Du, H., & List, A. (2020). Evidence use in argument writing based on multiple texts. Reading Research Quarterly, 56(4), 715–735. https://doi.org/10.1002/rrq.366
Elbro, C., & Buch-Iversen, I. (2013). Activation of background knowledge for inference making: Effects on reading comprehension. Scientific Studies of Reading, 17(6), 435–452. https://doi.org/10.1080/10888438.2013.774005
Fisher, D., & Frey, N. (2009). Background knowledge. The missing piece of the comprehension puzzle. McGraw-Hill. http://siopformisd.pbworks.com/w/file/fetch/80441810/background%20knowledge%20overlooked%20factor%20in%20reading%20comprehension.pdf
Florit, E., Cain, K., & Mason, L. (2020a). Going beyond children’s single text comprehension: The role of fundamental and higher-level skills in 4th graders’ multiple document comprehension. British Journal of Educational Psychology, 90(2), 449–472. https://doi.org/10.1111/bjep.12288
Florit, E., De Carli, P., Giunti, G., & Mason, L. (2020b). Advanced theory of mind uniquely contributes to children’s multiple-text comprehension. Journal of Experimental Child Psychology, 189, Article 104708. https://doi.org/10.1016/j.jecp.2019.104708
Galloway, E. P., & Uccelli, P. (2019). Beyond reading comprehension: Exploring the additional contribution of core academic language skills to early adolescents’ written summaries. Reading and Writing, 32(3), 729–759. https://doi.org/10.1007/s11145-018-9880-3
Gil, L., Bråten, I., Vidal-Abarca, E., & Strømsø, H. I. (2010). Summary versus argument tasks when working with multiple documents: Which is better for whom? Contemporary Educational Psychology, 35(3), 157–173. https://doi.org/10.1016/j.cedpsych.2009.11.002
Goldman, S. R., Braasch, J. L. G., Wiley, J., Graesser, A. C., & Brodowinska, K. (2012). Comprehending and learning from internet sources: Processing patterns of better and poorer learners. Reading Research Quarterly, 47(4), 356–381. https://doi.org/10.1002/rrq.027
González‐Ibáñez, R., Gacitúa, D., Sormunen, E., & Kiili, C. (2017). NEURONE: oNlinE inqUiRy experimentatiON systEm. Proceedings of the Association for Information Science and Technology, 54(1), 687–689. https://doi.org/10.1002/pra2.2017.14505401117
Griffin, T. D., Wiley, J., Britt, M. A., & Salas, C. R. (2012). The role of CLEAR thinking in learning science from multiple-document inquiry tasks. International Electronic Journal of Elementary Education, 5(1), 63–78.
Hagen, Å. M., Braasch, J. L. G., & Bråten, I. (2014). Relationship between spontaneous note-taking, self-reported strategies and comprehension when reading multiple texts in different task conditions. Journal of Research in Reading, 37(1), 141–157. https://doi.org/10.1111/j.1467-9817.2012.01536.x
Hämäläinen, E. K., Kiili, C., Marttunen, M., Räikkönen, E., González-Ibáñez, R., & Leppänen, P. H. T. (2020). Promoting sixth graders’ credibility evaluation of web pages: an intervention study. Computers in Human Behavior, 110, Article 106372. https://doi.org/10.1016/j.chb.2020.106372
Hammann, L. A., & Stevens, R. J. (2003). Instructional approaches to improving students’ writing of compare-contrast essays: An experimental study. Journal of Literacy Research, 35(2), 731–756. https://doi.org/10.1207/s15548430jlr3502_3
Holopainen, L., Kairaluoma, L., Nevala, J., Ahonen, T., & Aro, M. (2004). Lukivaikeuksien seulontamenetelmä nuorille ja aikuisille [Dyslexia screening test for youth and adults]. Niilo Mäki Instituutti.
Kanniainen, L., Kiili, C., Tolvanen, A., Aro, M., & Leppänen, P. H. T. (2019). Literacy skills and online research and comprehension: Struggling readers face difficulties online. Reading and Writing, 32(9), 2201–2222. https://doi.org/10.1007/s11145-019-09944-9
Kiili, C., Bråten, I., Kullberg, N., & Leppänen, P. H. T. (2020). Investigating elementary school students’ text-based argumentation with multiple information resources. Computers & Education, 147, Article 103785. https://doi.org/10.1016/j.compedu.2019.103785
Kiili, C., & Leu, D. J. (2019). Exploring the collaborative synthesis of information during online reading. Computers in Human Behavior, 95, 146–157. https://doi.org/10.1016/j.chb.2019.01.033
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., Lohvansuu, K., & Leppänen, P. H. T. (2018). Reading to learn from online information: Modeling the factor structure. Journal of Literacy Research, 50(3), 304–334. https://doi.org/10.1177/1086296X18784640
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163–182. https://doi.org/10.1037/0033-295X.95.2.163
Kirkpatrick, L. C., & Klein, P. D. (2009). Planning text structure as a way to improve students’ writing from sources in the compare-contrast genre. Learning and Instruction, 19(4), 309–321. https://doi.org/10.1016/j.learninstruc.2008.06.001
Latini, N., Bråten, I., Anmarkrud, Ø., & Salmerón, L. (2019). Investigating effects of reading medium and reading purpose on behavioral engagement and textual integration in a multiple text context. Contemporary Educational Psychology, 59, Article 101797. https://doi.org/10.1016/j.cedpsych.2019.101797
Le Bigot, L., & Rouet, J.-F. (2007). The impact of presentation format, task assignment and prior knowledge on students’ comprehension of multiple online documents. Journal of Literacy Research, 39(4), 445–470. https://doi.org/10.1080/10862960701675317
Leu, D. J., Forzani, E., Rhoads, C., Maykel, C., Kennedy, C., & Timbrell, N. (2015). The new literacies of online research and comprehension: Rethinking the reading achievement gap. Reading Research Quarterly, 50(1), 37–59. https://doi.org/10.1002/rrq.85
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2013). New literacies: A dual level theory of the changing nature of literacy, instruction, and assessment. In D. E. Alvermann, N. J. Unrau, & R. B. Ruddell (Eds.), Theoretical models and processes of reading (6th ed., pp. 1150–1181). International Reading Association.
Linderholm, T., Therriault, D. J., & Kwon, H. (2014). Multiple science text processing: Building comprehension skills for college student readers. Reading Psychology, 35(4), 332–356. https://doi.org/10.1080/02702711.2012.726696
List, A. (2020). Investigating the cognitive affective engagement model of learning from multiple texts: A structural equation modeling approach. Reading Research Quarterly, 56(4), 781–817. https://doi.org/10.1002/rrq.361
List, A., & Alexander, P. A. (2018). Cold and warm perspectives on the cognitive affective engagement model of multiple source use. In J. L. G. Braasch, I. Bråten, & M.T. McCrudden (Eds.), Handbook of multiple source use (pp. 34–54). Routledge.
List, A., & Alexander, P. A. (2019). Toward an integrated framework of multiple text use. Educational Psychologist, 54(1), 20–39. https://doi.org/10.1080/00461520.2018.1505514
List, A., & Du, H. (2021). Reasoning beyond history: Examining students’ strategy use when completing a multiple text task addressing a controversial topic in education. Reading & Writing, 34(4), 1003–1048. https://doi.org/10.1007/s11145-020-10095-5
List, A., Du, H., & Lee, H. Y. (2021). Examining relation formation across consistent and conflicting texts. Discourse Processes, 58(2), 134–154. https://doi.org/10.1080/0163853X.2020.1834328
List, A., Du, H., & Wang, Y. (2019a). Understanding students’ conceptions of task assignments. Contemporary Educational Psychology, 59, Article 101801. https://doi.org/10.1016/j.cedpsych.2019.101801
List, A., Du, H., Wang, Y., & Lee, H. Y. (2019b). Toward a typology of integration: Examining the documents model framework. Contemporary Educational Psychology, 58, 228–242. https://doi.org/10.1016/j.cedpsych.2019.03.003
Magliano, J. P., Trabasso, T., & Graesser, A. C. (1999). Strategic processing during comprehension. Journal of Educational Psychology, 91(4), 615–629. https://doi.org/10.1037/0022-0663.91.4.615
Mahlow, N., Hahnel, C., Kroehne, U., Artelt, C., Goldhammer, F., & Schoor, C. (2020). More than (single) text comprehension? On university students’ understanding of multiple documents. Frontiers in Psychology, 11, Article 562450. https://doi.org/10.3389/fpsyg.2020.562450
Martínez, I., Mateos, M., Martín, E., & Rijlaarsdam, G. (2015). Learning history by composing synthesis texts: Effects of an instructional programme on learning, reading and writing processes, and text quality. Journal of Writing Research, 7(2), 273–302. https://doi.org/10.17239/jowr-2015.07.02.03
Mateos, M., Martín, E., Cuevas, I., Villalón, R., Martínez, I., & Gonzáles-Lamas, J. (2018). Improving written argumentative synthesis by teaching the integration of conflicting information from multiple sources. Cognition and Instruction, 36(2), 119–138. https://doi.org/10.1080/07370008.2018.1425300
Mateos, M., & Solé, I. (2009). Synthesizing information from various texts: A study of procedures and products at different educational levels. European Journal of Psychology of Education, 24(4), 435–451.
McNamara, D. S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22(3), 247–288. https://doi.org/10.1080/01638539609544975
McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384. https://doi.org/10.1016/S0079-7421(09)51009-2
Merkt, M., Werner, M., & Wagner, W. (2017). Historical thinking skills and mastery of multiple document tasks. Learning and Individual Differences, 54, 135–148. https://doi.org/10.1016/j.lindif.2017.01.021
Finnish National Board of Education. (2014). National core curriculum for basic education 2014 (Publication No. 2016:5).
Pérez, A., Potocki, A., Stadtler, M., Macedo-Rouet, M., Paul, J., Salmerón, L., & Rouet, J.-F. (2018). Fostering teenagers’ assessment of information reliability: Effects of a classroom intervention focused on critical source dimensions. Learning and Instruction, 58, 53–64. https://doi.org/10.1016/j.learninstruc.2018.04.006
Perfetti, C. A., Rouet, J.-F., & Britt, M. A. (1999). Towards a theory of documents representation. In H. van Oostendorp & S. Goldman (Eds.), The construction of mental representations during reading (pp. 99–122). Erlbaum.
Peverly, S. T., Brobst, K. E., Graham, M. J., & Shaw, R. (2003). College adults are not good at self-regulation: A study on the relationship of self-regulation, note taking, and test taking. Journal of Educational Psychology, 95(2), 335–346. https://doi.org/10.1037/0022-0663.95.2.335
Primor, L., & Katzir, T. (2018). Measuring multiple text integration: A review. Frontiers in Psychology, 9, Article 2294. https://doi.org/10.3389/fpsyg.2018.02294
Rouet, J.-F. (2006). The skills of document use: From text comprehension to Web-based learning. Psychology Press.
Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In M. T. McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text (pp. 19–52). Information Age.
Sabatini, J. P., O’Reilly, T., Halderman, L., & Bruce, K. (2014). Broadening the scope of reading comprehension using scenario-based assessments: Preliminary findings and challenges. L’année Psychologique, 114(4), 693–723. https://doi.org/10.3917/anpsy.144.0693
Salmerón, L., Sampietro, A., & Delgado, P. (2020). Using Internet videos to learn about controversies: Evaluation and integration of multiple and multimodal documents by primary school students. Computers & Education, 148, Article 103796. https://doi.org/10.1016/j.compedu.2019.103796
Salo, A.-E., Vauras, M., Hiltunen, M., & Kajamies, A. (2022). Long-term intervention of at-risk elementary students’ socio-motivational and reading comprehension competencies: Video-based case studies of emotional support in teacher-dyad and dyadic interactions. Learning, Culture and Social Interaction, 34, Article 100631. https://doi.org/10.1016/j.lcsi.2022.100631
Singer, M. (2013). Validation in reading comprehension. Current Directions in Psychological Science, 22(5), 361–366. https://doi.org/10.1177/0963721413495236
Solé, I., Miras, M., Castells, N., Espino, S., & Minguela, S. (2013). Integrating information: An analysis of processes involved and products generated in a written synthesis task. Written Communication, 30(1), 63–90. https://doi.org/10.1177/0741088312466532
Spivey, N. N., & King, J. R. (1989). Readers as writers composing from sources. Reading Research Quarterly, 24(1), 7–26. https://doi.org/10.1598/RRQ.24.1.1
Taylor, K. S., Lawrence, J. S., Connor, C. M., & Snow, C. E. (2019). Cognitive and linguistic features of adolescent argumentative writing: Do connectives signal more complex reasoning? Reading and Writing, 32, 983–1007. https://doi.org/10.1007/s11145-018-9898-6
Valenzuela, Á., & Castillo, R. D. (2022). The effect of communicative purpose and reading medium on pauses during different phases of the textualization process. Reading and Writing. https://doi.org/10.1007/s11145-022-10309-y
van Ockenburg, L., van Weijen, D., & Rijlaarsdam, G. (2019). Learning to write synthesis texts: A review of intervention studies. Journal of Writing Research, 10(3), 401–428. https://doi.org/10.17239/jowr-2019.10.03.01
van Ockenburg, L., van Weijen, D., & Rijlaarsdam, G. (2021). Choosing how to plan informative synthesis texts: Effects of strategy-based interventions on overall text quality. Reading and Writing. https://doi.org/10.1007/s11145-021-10226-6
Vauras, M., Kajamies, A., & Kinnunen, R. (2017). Reading comprehension test. [Evaluation of the student’s reading comprehension, in Finnish]. Unpublished test. University of Turku, Centre for Learning Research.
Wissinger, D., & De La Paz, S. (2015). Effects of critical discussions on middle school students’ written historical arguments. Journal of Educational Psychology, 108(1), 43–59. https://doi.org/10.1037/edu0000043
Wolfe, M. B. W., & Goldman, S. R. (2005). Relations between adolescents’ text processing and reasoning. Cognition and Instruction, 23(4), 467–502. https://doi.org/10.1207/s1532690xci2304_2
Acknowledgements
This research was funded by the Academy of Finland (Project Number: 285806). We would like to thank the iFuco-project (Enhancing Learning and Teaching for Future Competencies of Online Inquiry in Multiple Domains) members for their contribution to the development of the task environment and data collection.
Author information
Contributions
NK: conceptualization, methodology, formal analysis, investigation, writing—original draft. CK: conceptualization, methodology, resources, writing—review and editing, supervision. IB: conceptualization, methodology, writing—review and editing. RG-I: resources, software, writing—review and editing. PL: writing—review and editing, supervision, funding acquisition.
Ethics declarations
Conflict of interest
The authors have no potential conflicts of interest to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: The Task Assignment
Appendix B: Sub-task Assignment of Snippet Selections
Appendix C: Sub-task Assignment of Article Writing
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kullberg, N., Kiili, C., Bråten, I. et al. Sixth graders’ selection and integration when writing from multiple online texts. Instr Sci 51, 39–64 (2023). https://doi.org/10.1007/s11251-022-09613-5