Introduction

Although students are often given writing assignments based on multiple texts, such multiple text-based writing is considered a great challenge across educational levels (Cumming et al., 2016; Mateos & Solé, 2009). One reason is that high-quality written responses include arguments that take multiple perspectives into account, with those perspectives linked using connectives that signal important relationships among them (Mateos et al., 2018). In addition, students should use available textual resources and draw well-justified conclusions based on their argumentation (Du & List, 2020). Finally, students should elaborate and transform their own ideas into written text, which seems to require considerable reflection and inferencing on their part (Wolfe & Goldman, 2005).

Previous research has mainly examined the integration of information across multiple texts among students at secondary and post-secondary levels. To contribute to the understanding of multiple text integration among younger students, this study aimed to shed light on sixth graders' efforts to integrate information across multiple online texts, particularly on their ability to use ideas gathered from multiple texts in their written products. Such understanding may help educators design learning experiences with multiple textual resources that support younger students' comprehension and use of multiple texts presented through different media. In addition, the study introduces a digital environment that includes several tools to facilitate students' work with multiple online texts. Before turning to the present study, we will discuss theoretical assumptions regarding multiple text integration and the potential role of integrative writing tasks in this regard. Finally, we will review previous research on integration.

Integrating Ideas when Reading Multiple Texts

Integrating ideas when reading multiple texts is a complex process that requires identification of relevant information in single texts and integration of such information into a new whole in the service of meaning-making and text production (Barzilai et al., 2018; Griffin et al., 2012). Multiple text comprehension builds on single text comprehension (Cho & Afflerbach, 2017; Kiili et al., 2018; Mahlow et al., 2020), which involves constructing a coherent mental representation of the situation, issue, or phenomenon described in the text (Kintsch, 1988). When constructing a mental representation of a single text, readers engage in intratextual integration, that is, in identifying important ideas within a text and relating them to one another (Kintsch, 1988; Mateos et al., 2018). Further, skilled readers routinely use world knowledge to make causal or bridging inferences within a text (Kintsch, 1988; Singer, 2013).

In addition to single text comprehension, multiple text reading tasks require that readers construct a coherent representation across multiple texts (Britt et al., 2018; Cho & Afflerbach, 2017). As described in frameworks of multiple document literacy (Britt et al., 2018; List & Alexander, 2019; Perfetti et al., 1999), intertextual integration, that is, connecting complementary or contradictory contents across texts, is essential to building an integrated mental model. Such an integrated mental model includes key ideas from each text that may agree or disagree. Another main component of multiple text comprehension is creating a representation of source information (Perfetti et al., 1999; Rouet, 2006), which contains information about sources (e.g., authors or publications), links between sources and content included in the integrated mental model, and relationships between sources. Because several studies have shown that representing source information when writing from multiple texts is particularly rare among primary and secondary school students (Florit et al., 2020a; Kiili et al., 2020; Pérez et al., 2018), this study focused on the integration of textual content.

In addition to intratextual and intertextual integration, integrating textual information with prior knowledge has been given much emphasis in models of reading comprehension (Cervetti & Wright, 2020; McNamara & Magliano, 2009). Thus, prior knowledge has been shown to be an important individual difference factor in both single (Elbro & Buch-Iversen, 2013; Fisher & Frey, 2009; McNamara & Kintsch, 1996) and multiple text comprehension (Bråten et al., 2014; Davis et al., 2018; Le Bigot & Rouet, 2007). For example, prior knowledge facilitates intertextual strategic processing (Bråten et al., 2014) and contributes to an integrated understanding of a topic discussed across texts (Hagen et al., 2014). In summary, previous research has identified at least three essential forms of integration involved in multiple text comprehension: intratextual integration, intertextual integration, and text-prior knowledge integration (List, 2020).

One way educators may seek to foster these forms of integration is by assigning integrative writing tasks that require connecting and comparing ideas from multiple texts to serve the communicative purpose of the writing task (Barzilai et al., 2018; Florit et al., 2020a; Valenzuela & Castillo, 2022). Clearly formulated task assignments are important because students are supposed to use information provided in the task to construct a task model that, in turn, directs their further processing and task completion (List et al., 2019a; Rouet & Britt, 2011). The construction of a task model is one of the five iterative and overlapping core processes of multiple text comprehension described in the Multiple Documents Task-based Relevance Assessment and Content Extraction (MD-TRACE) model by Rouet and Britt (2011). The other core processes are assessing the information need; selecting, processing, and integrating relevant information from selected documents; constructing the task product; and evaluating the quality of the product in relation to the task. In this study, we focused on students’ selection and integration of information from online texts when responding to an integrative writing task.

Furthermore, when creating a written product from multiple texts, students can corroborate evidence, compare, and connect relevant ideas across the texts by (re)organizing ideas and using linguistic connectives (List et al., 2019b; Spivey & King, 1989). The use of connective words is associated with the quality of writing (Galloway & Uccelli, 2019; Latini et al., 2019; Taylor et al., 2019), with connective words assisting students in expressing additive connections (also, in addition), causal connections (because, therefore), and adversative connections (on the other hand, whereas) across ideas. For example, Taylor et al. (2019) found that middle school students’ use of adversative connective words was associated with more integrated written products.

Previous Research on Integration

When Primor and Katzir (2018) reviewed studies examining readers' integration of information from multiple texts, they found that more than half of the 50 studies used expressive tasks (e.g., essay tasks) to investigate how readers select information from multiple texts, form intertextual relations, or draw inferences across texts. Notably, however, few of these studies investigated readers younger than 15 years of age.

Research including younger students has indicated that they find integration of information across multiple texts challenging (Blaum et al., 2017; Florit et al., 2020a; Sabatini et al., 2014). For example, in a much-cited think-aloud study, sixth graders seldom engaged in intertextual integration while reading multiple texts on a historical topic (Wolfe & Goldman, 2005). Similarly, students have been shown to face challenges in integrating ideas when tasked to compose essays based on multiple sources (Florit et al., 2020a; Kiili et al., 2020). In Florit et al.'s (2020a) study, fourth graders wrote two essays, one on the healthiness of chocolate and one on the effects of video gaming. Only 18% and 31% of the students, respectively, were found to include opposing perspectives in their essays. Kiili et al. (2020) found that one-third of the sixth graders included ideas from only one of four textual resources or did not refer to any text at all in their essays. However, although multiple text integration is challenging for primary school students, there is also some evidence that students as young as nine years old may spontaneously attempt to integrate information across texts when the texts are optimal for integration, and that even struggling readers can integrate information across texts when connections between texts are salient (Beker et al., 2019).

Multiple text comprehension and integration are affected by more basic reading skills. Thus, previous studies of primary school students have established that basic reading skills, such as reading fluency (Florit et al., 2020a; Kiili et al., 2020) and reading comprehension skills (Florit et al., 2020a; Kanniainen et al., 2019), contribute to multiple text comprehension. There is also evidence that basic reading skills, such as word recognition, contribute to multiple text comprehension even among upper secondary school students (Bråten et al., 2013). Further, working memory (Banas & Sanchez, 2012; Braasch et al., 2014), strategic processing (Goldman et al., 2012; Hagen et al., 2014), and comprehension monitoring (Florit et al., 2020a) have been shown to facilitate multiple text comprehension. Among these individual cognitive factors, the present study focused on the role played by reading fluency and reading comprehension skills in the integration of information across multiple texts.

The Present Study

Given the scarcity of prior empirical work on primary school students' multiple text integration, we aimed to contribute to building a research base in this area by asking 162 sixth graders to complete a computer-based inquiry task about computer gaming in a closed information environment. Students searched for three relevant online texts with a search engine, read the three relevant texts (either those they had selected or texts assigned to them), selected relevant information from these texts, evaluated the texts, and created a written product based on them. In the present study, we focused on the selection of relevant information from the texts and the composition of the written product. Specifically, our study addressed the following questions:

1. To what extent were students able to select relevant ideas from the available online texts?

2. To what extent did students integrate ideas in their written products, and which types of integration did they perform?

3. To what extent did reading fluency, reading comprehension, and selection of relevant ideas contribute to students' integration of ideas in their written products?

Method

Participants

In the present study, we used convenience sampling and recruited schools and teachers based on their opportunities and willingness to participate. Altogether, 179 students took part in the study, but 17 students were excluded because they did not complete all the phases of the task that were relevant to this study (see the section Task and Digital Platform). The remaining 162 students attended 10 different Finnish elementary schools and ranged in age from 11 to 14 years, with most of them (80%) being 12 years old. Of the students, 77 were boys and 85 were girls. Of note is that in the inclusive school system of Finland, students with special needs are also part of regular classes. Almost all students (99%) had at least one device with Internet access at home.

All students completed the inquiry task as part of regular school work. The task was aligned with the objectives of the curriculum, which emphasize the ability to seek information from different sources, identify different perspectives on examined issues, acquire and share information, and produce diverse texts (Finnish National Core Curriculum for Basic Education, 2014).

The students' guardians received an information letter that included details about the participation, benefits, and risks, as well as a request for permission to use students' responses for research purposes. Informed consent was obtained from students' guardians. Students were informed that participation in the study (i.e., using their responses for research purposes) was voluntary and that they could withdraw their participation whenever they wished.

Reading Measures

Reading Fluency

Reading fluency was measured with a time-limited word chain test (Holopainen et al., 2004). The test contained 25 four-word chains written without inter-word spaces, and students’ task was to separate as many words as possible in 90 s by drawing vertical lines between the words. Students’ scores were their total number of correctly separated words, with a maximum score of 100. The internal consistency reliability (Cronbach’s α) for students’ scores on the word chain test was 0.96.
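As a point of reference for how such an internal consistency coefficient is obtained, the short sketch below computes Cronbach's α from an item-by-participant score matrix. The data and the four-item layout are invented for illustration and are not drawn from the study's word chain test.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants, n_items) matrix of item scores."""
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of participants' total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 5 participants scored on 4 items (1 = correct, 0 = incorrect).
scores = np.array([
    [1, 1, 1, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(scores), 2))
```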

Reading Comprehension

To assess students' reading comprehension, we used one test from a reading comprehension test battery that includes four parallel tests (Vauras et al., 2017; see Alisaari et al., 2018; Salo et al., 2022). This reading comprehension test consisted of three open-ended questions and a cloze task with 17 gaps. The open-ended questions measured students' skills in recognizing important information and, to some extent, integrating this information into a coherent written response. The cloze task measured students' skills in locating and using textual information appropriately. In addition, inference skills were required for the successful completion of the cloze task.

Students were given up to seven minutes to read a 227-word expository text titled "The diversity of nature is disappearing." Afterward, they answered three open-ended questions about the main ideas presented in the text: "How does global warming threaten coral reefs?", "How does global warming affect nature's diversity?", and "The text mentions three important ways that should be used to protect nature's diversity. Which are they?" To answer these questions successfully, students needed to use the whole text, which remained available to them while answering. They had 15 min to answer the questions.

After completing the open-ended questions, students completed a cloze task by filling in the appropriate words (17 gaps) within 15 min, using the same expository text as a resource. The cloze text included the same information as the expository text but differed in wording and in the organization of the content. For example, the gaps in "[___________], which speed up global warming, will multiply when rainforests won't [___________]" could be filled in by locating and using the following sentences in the expository text: "The felling of rainforests will increase greenhouse gas emissions manifold globally. Because of this, rainforests will be unable to absorb and cleanse greenhouse gasses."

Students' responses to the open-ended questions were scored based on the amount of relevant information they included. Scores on each question ranged from 0 to 6 points, yielding a maximum score of 18 points. Inter-rater reliability was established for these scores by two raters who independently scored 68 students' responses (Hämäläinen et al., 2020). Cohen's kappa was 0.90, 0.68, and 0.95 for responses to the first, second, and third questions, respectively. All disagreements were resolved through discussion. On the cloze task, each correctly filled gap was worth one or two points, yielding a maximum score of 27 points.
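To illustrate how this kind of inter-rater agreement can be computed, the following minimal sketch applies scikit-learn's Cohen's kappa to the scores of two hypothetical raters for one open-ended question; the ratings are invented and do not reproduce the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores (0-6) assigned by two independent raters to the same
# set of responses for one open-ended question.
rater_1 = [6, 4, 5, 2, 0, 3, 6, 1, 4, 5]
rater_2 = [6, 4, 4, 2, 0, 3, 6, 1, 5, 5]

print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")
```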

Task and Digital Platform

We examined students' integration of ideas from multiple online texts as part of a larger online inquiry task. Students were tasked to write an article for a school magazine with the title "Computer gaming can have advantages and disadvantages." They were also asked to write a recommendation on how children should use computer games. They were asked to search for three online texts and to write their articles based on these texts. Students could revisit the task assignment through a navigation bar at any time during the completion of the task. The task assignment is presented in Appendix A.

Students completed the task on a web-based platform called Neurone (González‐Ibáñez et al., 2017). On this platform, students were guided by two virtual students: one who guided them in using the tool embedded in the system and another who gave them a task assignment and several sub-task assignments during the online inquiry task.

Students worked in four time-limited phases consistent with the phases of the online research and comprehension model by Leu and colleagues (Leu et al., 2013, 2015): (1) information search and selection of relevant online texts using a custom search engine (8 min), (2) reading of online texts and selecting relevant ideas (i.e., snippets) using a snippet selection tool (12 min), (3) credibility evaluation of online texts (7 min), and (4) composing the article with the help of the selected snippets (15 min). The time limits for the phases ensured that students would have a chance to complete all phases within a 45-min lesson, which was the time available for this assignment in the schools. At the beginning of each phase, students received instructions concerning the sub-task at hand.

In the first phase, students searched for relevant online texts using a search engine in a closed search space that included links to three relevant and 17 irrelevant texts. The irrelevant links included keywords that appeared in the task assignment, but the texts concerned issues that were not relevant to the task at hand, such as the history of computer gaming. Students were tasked to select three online texts. After submitting their selections, students received feedback informing them of how many relevant online texts they had selected. A thumbs-up icon next to a page name indicated a relevant selection, whereas a thumbs-down icon indicated an irrelevant selection. If students succeeded in selecting all three relevant online texts, they proceeded to the next phase of the task. If one or more selections were irrelevant, students could try again until they located the correct pages or reached the time limit. If a student could not select the relevant texts within the time limit, the student was provided with the correct texts. Students were informed that they would be working on the three most relevant online texts so that they would understand why these texts might differ from their selections. This procedure ensured that all students had the same materials to read. Once students had successfully completed a phase or reached the time limit, the program advanced to the next phase. Our study focused on phases two and four, that is, on the selection of relevant ideas and the composition of the article.

In the second phase, students were instructed (see Appendix B) to carefully consider what was important on each page and to select two relevant ideas (i.e., snippets) from each online text with a snippet tool, thus selecting six snippets altogether. Students were not allowed to select more than two snippets per text, and they had to discard previous selections if they wanted to change their selections. Students were informed that each selection could consist of a maximum of 20 words. Figure 1 presents the snippet selection tool. As can be seen, students selected snippets by highlighting a section of the text and saving it by clicking a save button. The selected snippets appeared to the right of the online text. If selections were longer than allowed, the system saved only the first 20 words.

Fig. 1 A screenshot of the snippet tool

In the third phase, students evaluated the credibility of the texts. The texts were presented one at a time, and students rated each text by awarding it between 1 and 5 stars depending on their evaluation of its credibility. They were also asked to justify their credibility ratings in writing. After rating all texts or reaching the time limit, students proceeded to phase four.

In the fourth phase, students were asked to compose the article with the help of the selected snippets (see Appendix C). They were also encouraged to write the article in their own words. Figure 2 presents the writing tool students used to write and edit their texts. When they first entered the writing space, all their selected snippets were visible on its right side. Students could choose to see all their snippets simultaneously or sorted by online texts, such that only the two snippets from one text were visible at a time. By double-clicking a snippet, students could open the text page from which the snippet was selected. On that page, the selected snippets were highlighted so that students could see them in their textual context. The program did not allow students to copy and paste text from the snippets into their articles. Based on a previous study with the same age group (Kiili et al., 2020), we expected that some students might write very short responses, perhaps only one sentence. To avoid this, the minimum length of students' written products was set at 50 words. The tool also displayed a word count, allowing students to monitor their progress in terms of text production. Throughout the inquiry task, students received a reminder when there were three minutes left to finish a sub-task.

Fig. 2 A screenshot of the writing tool

Online Texts

Table 1 shows a summary of the three online texts that students read. The texts varied in their position on computer gaming, with one text in favor of computer gaming, one against it, and one representing both positions. The texts were designed for the purpose of this study to ensure that each text had unique content. Consequently, the texts discussed computer gaming from three different perspectives: health, learning, and behavior. Students could be assumed to have some knowledge or experience relevant to the content of the texts, for example, regarding learning through games, games for exercising, consequences of extensive gaming, and violent games. We also ensured that the vocabulary used in the texts was appropriate for this age group. All the texts provided information regarding the potential outcomes of playing computer games (negative, positive, or both), thus allowing students to integrate reasons within and across the positions. Each text had four paragraphs. The two middle paragraphs contained information about the advantages and disadvantages of computer gaming, while the first and last paragraphs were introductory or contextual. Finally, the texts were similar in length (ranging from 148 to 175 words).

Table 1 Summary of the online texts

Data Analysis and Dependent Measures

Snippet Selection

In scoring students' snippet selections, we first identified every unique snippet they selected (n = 159). Then, we scored these snippets according to their relevance to the task on a scale ranging from 0 to 3. The scoring criteria are shown in Table 2. Two raters independently scored all 159 unique snippets, resulting in a Cohen's kappa of 0.85. All disagreements were resolved through discussion. When all the unique snippets had been scored, we calculated a snippet score for each student. Because each student had selected six snippets across the three texts, the maximum score for the snippet selection was 18 points.

Table 2 Scoring criteria for snippet selection
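To make this aggregation concrete, the minimal sketch below computes a single student's snippet selection score by looking up the relevance score (0 to 3) of each of the six selected snippets and summing them; the snippet identifiers and relevance scores are invented for the example.

```python
# Hypothetical relevance scores (0-3) assigned to unique snippets.
snippet_relevance = {
    "snippet_01": 3,
    "snippet_02": 2,
    "snippet_07": 3,
    "snippet_12": 1,
    "snippet_15": 3,
    "snippet_21": 0,
}

def snippet_selection_score(selected: list[str], relevance: dict[str, int]) -> int:
    """Sum the relevance scores of a student's six selected snippets (maximum 6 x 3 = 18)."""
    return sum(relevance[snippet] for snippet in selected)

student_selection = ["snippet_01", "snippet_02", "snippet_07",
                     "snippet_12", "snippet_15", "snippet_21"]
print(snippet_selection_score(student_selection, snippet_relevance))  # 12 in this invented example
```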

Integration of Ideas

The scoring of students' written products (i.e., articles for a school magazine) in terms of integration proceeded in three phases. In Phase 1, we segmented the written products into thematic units, with a thematic unit defined as an idea or a chain of connected ideas. A thematic unit could consist of ideas within or across the perspectives (i.e., health, learning, and behavior) represented in the texts. The ideas could also originate from the student's prior knowledge or be generated in response to the task of offering a recommendation.

Within a thematic unit, ideas can be connected in several ways, such as by using connective words not copied from the original text, repeating key concepts, organizing ideas, or elaborating by means of examples. A thematic unit ends when a student switches to a thematically different idea without connecting it to the preceding idea by any means (e.g., connecting words). Table 3 presents examples of the thematic units. Because connective words were the most important means of integrating ideas, we have underlined the connective words that students themselves added to their written products (i.e., did not copy from the source text). Example 1 in Table 3 illustrates how a student used additive connectors to integrate ideas within one text and between the text and prior knowledge. Example 5 illustrates how a student used an adversative connector to integrate benefits and disadvantages of computer gaming across the two texts representing opposite positions. In brief, when identifying the thematic units in a written product, we identified all the integrative elements that students had created themselves and evaluated their appropriate use case by case.

Table 3 Analysis of integration types within thematic units

In Phase 2, we identified the sources of ideas in each thematic unit. These included the selected snippets, other parts of the text that were not included in the snippets, prior knowledge, and ideas generated in response to the task (e.g., as part of a recommendation).

In Phase 3, we determined, based on the sources of ideas, whether (0 = no; 1 = yes) a thematic unit included (1) intratextual integration, (2) intertextual integration, (3) integration of textual information and prior knowledge, and (4) a recommendation justified with textual information. Students were awarded a point for intratextual integration only if they displayed an effort to integrate ideas. Thus, if they combined snippets or text ideas that were thematically connected in an online text without creating any intratextual connections themselves, they were not given a point. Note also that a recommendation had to be justified with textual information to be awarded a point. Table 3 further describes each type of integration and provides examples of how students combined ideas from different sources within each integration type.

To estimate the reliability of our identification of thematic units, the first and second authors first collaboratively identified the thematic units in 25% of the written products. Then, the same authors independently identified the thematic units in another 20% of the written products. The boundaries of the thematic units, as determined by the first author, were used as the reference when calculating the percentage of agreement, which was 80%. Disagreements were resolved through discussion, and the first author identified the thematic units in the remaining 55% of the written products.

Overall Integration Quality

An overall integration quality score was calculated for each student based on the written product. In doing so, we first determined whether the written product included positions both for and against computer gaming, indicating that both sides mentioned in the task assignment had been covered. Second, based on our analysis of integration types, we counted the indications of integration across the thematic units included in the written product. Finally, we determined whether the written product included a justified recommendation, indicating that the student had applied text content in responding to the second part of the task prompt (i.e., Describe in your article how children should use computer games; see Appendix A). Table 4 shows how the positions, integration, and recommendation were considered when calculating an overall integration quality score that could vary from 0 to 6 points.

Table 4 Integration quality scores for the written products
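As a purely illustrative sketch of how such a composite score could be assembled, the function below combines the three components into a 0-6 score. The point allocation and the cap on integration points are hypothetical; the actual allocation used in the study is the one specified in Table 4.

```python
def integration_quality(both_positions: bool,
                        n_integration_indications: int,
                        justified_recommendation: bool) -> int:
    """Combine the three components into a 0-6 score (hypothetical allocation)."""
    score = 1 if both_positions else 0                 # both for and against covered
    score += min(n_integration_indications, 4)         # hypothetical cap on integration points
    score += 1 if justified_recommendation else 0      # recommendation justified with text content
    return score

# Example: both positions covered, two indications of integration, no justified recommendation.
print(integration_quality(True, 2, False))  # 3 under this illustrative allocation
```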

Procedure

The data for this study were collected in the classrooms during two 45-min lessons on different days. Two researchers were present in both lessons. In the first lesson, students completed the reading fluency and reading comprehension measures. In the second lesson, they completed the online inquiry task. Students worked on their computers, and the researchers were present throughout the task to help students with technical challenges.

Results

Selection of Relevant Ideas

Students' snippet selections indicated that they, on average, were able to select relevant ideas from the online texts. Specifically, their mean score for the snippet selections was 16.00 (SD = 2.41) out of 18 points. Still, students' scores ranged from 4 to 18, and five students scored below 12 points. Of these five students, three may not have been engaged in the task, as they spent only a few minutes on this sub-task (with 12 min available), whereas the other two spent more than 10 min, suggesting that they struggled with the selection task.

Integration of Ideas

The mean length of the 162 written products was 70.36 words (SD = 25.11). These written products consisted of 578 thematic units (M = 3.62, SD = 1.21). Table 5 shows the different types of integration identified in the thematic units. As can be seen, the most common type of integration was intratextual, accounting for 51.61% of all instances of integration. One quarter of all the thematic units included this type of integration. Further, integration of textual information and prior knowledge was the second most common type of integration, accounting for 26.16% of the instances of integration. This type of integration was observed in 12.63% of the thematic units. Intertextual integration, which accounted for 15.05% of the instances of integration, was observed in only 7.17% of the thematic units.

Table 5 Number of thematic units including specific integration types

In total, 37.8% of the 578 thematic units included one or more types of integration. Among the thematic units that included integration, 76.58% included one type of integration, 21.62% included two types of integration, and 1.80% included three or four types of integration.

Integration Quality

The mean of students' overall integration quality scores was 2.50 (SD = 1.44). Some written products (6.2%) neither included both positions nor showed any indication of integration, thus obtaining a score of 0. Further, 45.1% of the written products showed only limited integration, obtaining scores of 1 or 2. In contrast, 9.3% of students' written products were rich in terms of integration and obtained scores of 5 or 6.

Prediction of Integration

Table 6 presents descriptive statistics and zero-order correlations for the measured variables. As can be seen, the integration of ideas in the written products was positively correlated with the snippet selection scores and the reading measures. To investigate the contribution of reading skills and snippet selection to the integration of ideas in the written products, we performed a hierarchical multiple regression analysis with students' overall integration quality score as the dependent measure. In the first step, we entered reading comprehension and reading fluency into the equation, and in the second step, we entered the snippet selection scores (see Table 7). Because some distributions were negatively skewed, we also performed the regression analysis without extreme values and estimated the potential influence of the skewness of the snippet selection scores on the results.

Table 6 Correlations between snippet selection scores, written product integration scores and reading measures
Table 7 Results of hierarchical regression analysis for variables predicting integration in written products

In the first step, the reading measures, taken together, explained 16.2% of the variance in students' overall integration quality (R² = 0.162, Fchange(3, 154) = 9.822, p < 0.001). Reading comprehension measured with open questions and reading fluency were unique positive predictors in this step. After entering the snippet selection scores in the second step, we observed a statistically significant 4.3% increase in the explained variance, with R² = 0.205, Fchange(4, 153) = 9.477, p = 0.005, after the second step. In the second step, not only reading comprehension measured with open questions and reading fluency but also snippet selection were unique positive predictors of students' integration of ideas in the written products. Although the explained variance was somewhat smaller, the same analysis performed without extreme values of the snippet selection score gave similar results, with R² = 0.144, Fchange(3, 152) = 8.538, p < 0.001, after the first step, and R² = 0.174, Fchange(1, 151) = 5.485, p = 0.020, after the second step.
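For readers who wish to see how such a hierarchical regression and the F-change test for the increment in explained variance can be carried out, the sketch below uses statsmodels with simulated data; the variable names, the simulated values, and the resulting statistics are our own illustrations and do not reproduce the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data standing in for the study's variables (names are hypothetical).
rng = np.random.default_rng(0)
n = 162
df = pd.DataFrame({
    "reading_fluency": rng.normal(50, 15, n),
    "comprehension_open": rng.normal(10, 4, n),
    "comprehension_cloze": rng.normal(18, 5, n),
    "snippet_selection": rng.normal(16, 2.4, n),
})
df["integration_quality"] = (
    0.03 * df["reading_fluency"] + 0.10 * df["comprehension_open"]
    + 0.15 * df["snippet_selection"] + rng.normal(0, 1.2, n)
)

def fit(predictors):
    """Fit an OLS model predicting integration quality from the given predictors."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["integration_quality"], X).fit()

step1 = fit(["reading_fluency", "comprehension_open", "comprehension_cloze"])
step2 = fit(["reading_fluency", "comprehension_open", "comprehension_cloze", "snippet_selection"])

# F-change test for the increment in R² when snippet selection is added in step 2.
delta_r2 = step2.rsquared - step1.rsquared
df_num = step1.df_resid - step2.df_resid            # number of added predictors (here 1)
f_change = (delta_r2 / df_num) / ((1 - step2.rsquared) / step2.df_resid)
print(f"Step 1 R² = {step1.rsquared:.3f}, step 2 R² = {step2.rsquared:.3f}, "
      f"ΔR² = {delta_r2:.3f}, Fchange({int(df_num)}, {int(step2.df_resid)}) = {f_change:.2f}")
```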

Discussion

This study examined how sixth graders selected and integrated information from multiple online texts. In particular, it provides new insights into the integration skills of primary school students, a population that has received less attention than older students (Barzilai et al., 2018; Primor & Katzir, 2018). The study also contributes to methodology by introducing a new unit of analysis labeled "thematic unit," which allows for observation of integration at a level beyond single clauses or idea units (cf. Gil et al., 2010; Salmerón et al., 2020). Further, the study introduced a snippet tool that can facilitate the selection of relevant ideas, as well as a writing tool that offers an opportunity to navigate across the selected ideas and observe them in their original context. Both tools were designed to save students' cognitive resources for integration. In the following, we discuss the results, address the limitations of our study, and offer some instructional recommendations based on our findings.

In integrating information from multiple texts, readers need to select relevant information from single texts and integrate that information into a coherent representation. Accordingly, we first asked to what extent students were able to select relevant ideas from the available online texts. Most of the students performed quite well when using the snippet tool to select relevant ideas from the texts. However, some students faced difficulties identifying relevant ideas, although the available texts included only a limited amount of irrelevant information. Low performance in selecting important information may also relate to a lack of engagement, because students with low scores spent relatively little time on the task. Because notetaking seems to be a challenging activity (Bonner & Holliday, 2006; Peverly et al., 2003), and one that may also be cumbersome and time-consuming for younger readers, the snippet tool was designed to facilitate the selection of relevant ideas and the allocation of cognitive resources to the creation of an integrated task product. As such, the snippet tool can be assumed to simplify the more complex notetaking process for this age group.

Our second question concerned students' ability to integrate ideas in their post-reading written products. Although students, on average, performed well in selecting relevant ideas, integration of ideas was found to be more challenging. Nearly half (45%) of the written products included only one or two indications of integration or no integration at all. This result is not surprising, however, given that the integration of ideas when writing from multiple texts has been shown to be challenging even for upper secondary school students (Kiili & Leu, 2019) and adult readers (Linderholm et al., 2014; List & Du, 2021). Many primary and secondary school students may rely quite heavily on copying or listing separate ideas in their written products (Kiili et al., 2020; Merkt et al., 2017). Still, there were large individual differences in students' integration performance, with some sixth-grade students composing well-written, integrated texts for their age (see also Blaum et al., 2017; Florit et al., 2020b).

When integration was present in written products, students most commonly integrated ideas within single texts, with integration of information across texts observed infrequently. Thus, whereas intratextual integration was observed in 25% of the thematic units, intertextual integration was observed in only 7%. Such scarcity of intertextual integration when writing from multiple texts is consistent with previous research (Florit et al., 2020a; Solé et al., 2013).

The lack of intertextual integration may have several explanations. First, each text provided information about gaming from a single perspective (health, behavior, or learning), and it seems likely that making connections within one perspective was easier than making connections across perspectives. Relatedly, some studies have indicated that integration across contradictory texts is more challenging than integration across supporting texts (List et al., 2021). In the study by Kiili et al. (2020), the contextual overlap across the texts seemed to facilitate sixth graders' intertextual integration, with students' written responses including slightly more intertextual than intratextual connections.

Second, the task assignment in this study included two parts. The first part asked students to consider the advantages and disadvantages of computer gaming, whereas the second part asked them to recommend how children should use computer games. The second part, in particular, was supposed to prompt students to provide recommendations drawing on reasons presented across the texts. However, most of the students concentrated on the first part of the task, and only a few students (15%) included a justified recommendation in their written products. It is conceivable that the given title (i.e., Computer gaming may have both advantages and disadvantages) encouraged students to list advantages and disadvantages rather than making connections across the texts. In any case, concentration on only the first part of the task assignment suggests that students may have formed an incomplete task model (Rouet & Britt, 2011).

Although our study did not examine associations between prior knowledge and integration performance, we did explore to what extent students integrated prior knowledge in their written responses. Specifically, integration of prior knowledge and textual content was observed in 13% of the thematic units. Of note is that we did not ask students to reflect on their prior knowledge before the task, a practice that may facilitate the application of prior knowledge during task completion (Kiili & Leu, 2019). Presumably, facilitating the use of prior knowledge during efforts to integrate information would also have supported students' understanding of textual content (Gil et al., 2010; Le Bigot & Rouet, 2007).

In addressing our third research question, concerning the contributions of reading skills and selection of relevant ideas to integration performance, we found that both reading fluency and reading comprehension skills were positively associated with integration. It is noteworthy that reading comprehension assessed with open questions was a unique positive predictor of integration performance, whereas reading comprehension measured with the cloze test was not. This suggests that compared to the cloze test, responding to the open-ended comprehension questions required skills more closely related to the composition of a written product from multiple texts.

Further, although basic reading skills may be foundational in efforts to integrate ideas across multiple texts (see also Florit et al., 2020a), such skills explained only a limited part (16%) of the variance in students' integration performance. This suggests that integration across multiple texts requires competence beyond basic reading skills, which needs to be explicitly taught to students. Interestingly, the selection of relevant ideas explained variance in integration performance over and above the basic reading skills. Although the increment in the explained variance was modest, this result suggests that the successful selection of relevant ideas is an important step in creating an integrated written task product (cf. Cho & Afflerbach, 2017). Selecting relevant information from online texts may play a more prominent role in authentic online contexts where textual materials are more diverse. In the present study, the selection of relevant ideas was scaffolded by providing students with relevant texts containing information that can be regarded as credible. Thus, future studies could examine the selection of relevant ideas and the subsequent integration of ideas in an authentic, more complex online context in which students' search and evaluation skills also may influence the quality of the selection of relevant ideas and the written products.

This study also contributes methodologically to the literature on writing from multiple texts. Several previous studies have used idea units in examining integration across texts (Gil et al., 2010; Kiili et al., 2020; Salmerón et al., 2020); however, to our knowledge, this study is the first to use thematic units in analyzing integration performance. Whereas an idea unit contains a main verb that expresses an event, activity, or state (Magliano et al., 1999), a thematic unit represents an idea or a chain of connected ideas. Thus, a thematic unit is a broader unit of analysis than the relatively restricted idea unit used in prior research. As such, it may be better suited to reveal students' attempts to integrate content beyond single ideas.

Limitations

The present study has several limitations worth mentioning. First, because students' snippet selection scores were generally high, the snippet selection measure may not have distinguished properly between the most skilled students and the rest. In the regression analysis, such a ceiling effect might have influenced the relationship between the snippet selection scores and integration performance, leading to a falsely significant result (Austin & Brunner, 2003). However, the supplementary analysis conducted without extreme values indicated that the association between snippet selection and integration performance was statistically reliable. Still, future research should try to better understand the relationship between the identification of relevant textual ideas and students' integration performance when writing from multiple texts.

Second, students were given 15 min to complete the writing task, and some students did not submit their written products within the given time frame. Their integration scores might therefore not reflect their full potential. The average integration score for these students’ written products was 2.03, which was slightly below the average for the entire sample (2.50). However, the average length of these students’ written products was comparable to that of the entire sample (67 vs. 70 words). The time limit for each phase was set for practical reasons to ensure that students had the opportunity to respond to all phases of the task. To further improve the quality of the data, future researchers should try to monitor students’ activities during task performance more closely.

Third, prior knowledge and basic writing skills were not measured. Although we did not include a prior knowledge measure, our analysis did capture students' spontaneous integration of prior knowledge into their written products. With respect to basic writing skills, future research in this area should preferably include independent measures of such skills.

Lastly, we examined only basic reading skills and the selection of information from texts in relation to students' integration performance and, thus, did not examine how other phases of online inquiry might be associated with students' integration. However, if students did not succeed in locating relevant online texts, they were provided with them. Thus, poor text selection did not affect the sources available for writing, and students hardly used their credibility justifications in their written products (see Hämäläinen et al., 2020). Further, as suggested by List (2020), emotional and motivational aspects should be considered in addition to cognitive skills when examining multiple text integration. Therefore, future studies could also examine the role of variables such as topic interest, behavioral engagement, and self-efficacy when primary school students engage in multiple text integration tasks.

Instructional Implications

Few studies have investigated how multiple text integration skills can be taught in primary school (Barzilai et al., 2018). One possible reason is the high demands of multiple text integration. Still, there is some evidence that such integration skills can be successfully promoted among upper primary school students (e.g., Martinez et al., 2015). The results of the present study also suggest that students need guidance in intertextual integration. Because intertextual integration seems to be easier across complementary than conflicting texts (List et al., 2021), it might be fruitful to start practicing integration with complementary texts. It might also be profitable to begin practicing integration across only two online texts (e.g., Kirkpatrick & Klein, 2009; Martinez et al., 2015) before gradually increasing the difficulty level by introducing additional texts. Presumably, students would also benefit from instruction that guides them through the key processes of integration, that is, selecting information from online texts, organizing ideas, and linking ideas within and across texts (van Ockenburg et al., 2019). Such guidance can include explicit instruction, modeling, written prompts, digital scaffolds (Barzilai et al., 2018), and facilitated group discussions (Wissinger & De La Paz, 2015). Finally, students may benefit from knowledge about different writing strategies (e.g., planning and revising) when selecting and reflecting on their own strategies (van Ockenburg et al., 2021).

A close reading of students' written products suggested that only a few students were proficient in using connective words. Students could therefore benefit from explicit instruction in how connectives function and how different types of connectives can be used in writing (Taylor et al., 2019). For example, teachers can model the use of connectives by adapting the think-aloud method (Coiro, 2011; Davey, 1983), such as by reading a short text aloud, highlighting main ideas, and comparing and linking these ideas using different connectives. Further, explicit instruction could explain how texts can be structured, which, in turn, may help students cluster ideas more meaningfully. For example, when writing compare-contrast essays, students could take advantage of textual organizers, stating the situations, topics, or phenomena that are being compared as well as how these are similar or different (Hammann & Stevens, 2003).

The absence of integration may also be related to the fact that students often rely on identifying and copying ideas from source texts. Thus, encouraging students to paraphrase and elaborate on selected content when writing, and teaching them how to do so, could promote a deeper understanding of the texts. Because students may be reluctant to invest time and effort in elaborating their ideas in writing (List & Alexander, 2018), such skills could be practiced by asking them to rewrite short passages or answer open-ended questions.

In the present study, we designed a snippet tool that afforded a quick and easy way to select and save relevant information from the source texts and thus facilitated the allocation of resources to the writing process. However, to fully benefit from this type of tool and avoid using it for copying and pasting, students probably need to be instructed in how to process the selected information further. Students could, for example, be tasked to select the snippets individually and then create a written product collaboratively. In this process, they could discuss the selected snippets with peers, compare one another's selections, and collaboratively identify connections between the selected snippets and organize them conceptually before starting the actual writing. Such discussions about the text contents could also help students express the main ideas in their own words. In addition to the snippet tool, we designed a writing tool with which students were able to navigate across the selected snippets. By clicking a snippet, students could see it in its textual context, which reduced the demands on memory. Future studies could examine how students use these affordances.

That many participants in this study responded adequately to only part of the assignment may also suggest that their task model was incomplete. Readers' task models can be considered important in directing their focus and allocation of resources, thus guiding the reading process (Rouet & Britt, 2011). Teachers could support students in interpreting the task by reading the task assignment in class and having them collaboratively identify important task features. Further, students should be reminded that they can revisit the task assignment to check whether they are on the right track. All told, multiple text integration poses so many challenges to primary school students that they should be given explicit instruction and support in mastering the various aspects of this complex task.