
Educational Technology Research and Development, Volume 67, Issue 5, pp 1197–1230

Development of software to support argumentative reading and writing by means of creating a graphic organizer from an electronic text

  • Toshio Mochizuki
  • Toshihisa Nishimori
  • Mio Tsubakimoto
  • Hiroki Oura
  • Tomomi Sato
  • Henrik Johansson
  • Jun Nakahara
  • Yuhei Yamauchi
Open Access
Development Article

Abstract

This paper describes the development of a software program that supports argumentative reading and writing, especially for novice students. The software helps readers create a graphic organizer from the text as a knowledge map while they are reading and draw on their prior knowledge to build their own opinions while they plan their essays. Readers using this software can read a text, underline important words or sentences, dynamically cite the underlined portions of the text onto a knowledge map as quotation nodes, illustrate the knowledge map by linking the nodes, and later write their opinion as an essay while viewing the knowledge map; thus, the software bridges argumentative reading and writing. Sixty-three freshman and sophomore students with no prior education in argumentative reading and writing participated in a design case study to evaluate the software in classrooms. Thirty-four students were assigned to a class in which each student developed a knowledge map after underlining and/or highlighting a text with the software, while twenty-nine students were assigned to a class in which they simply wrote their essays after underlining and/or highlighting the text without creating knowledge maps. After receiving instruction on a simplified Toulmin’s model, followed by instruction in using the software for argumentative reading and writing with one training text, the students read the target text and developed their essays. The results revealed that students who drew a knowledge map based on their underlining and/or highlighting of the target text developed more argumentative essays than those who did not draw maps. Further analyses revealed that developing knowledge maps fostered an ability to capture the target text’s argument, and that linking students’ ideas to the text’s argument directly on the knowledge map helped students develop more constructive essays.
Accordingly, we discuss additional necessary scaffolds, such as automatic argument detection and collaborative learning functions, for improving students’ use of appropriate reading and writing strategies.

Keywords

Argumentative reading and writing · Electronic text · Underlining and/or highlighting · Graphic organizer

Introduction

Teaching and learning argumentative reading and writing is an essential educational topic in undergraduate educational programs, particularly in the freshman year of university education worldwide (Muller Mirza and Perret-Clermont 2009; Newell et al. 2011; Nakano and Maruno 2013). Although critical thinking, reading and writing are also considered necessary dispositions and skills to be acquired in general education (Daiek and Anter 2003; Angeli and Valanides 2009; Tomasek 2009; Wilson 2016), argumentative reading and writing are increasingly being emphasized in recent curriculum reforms in various countries (Council of Chief State School Officers and National Governors Association 2010). In contrast to critical reading and writing, which aim to elicit a reader’s response and to promote taking the perspective of the text in a variety of ways (Douglas 2000; Tomasek 2009), argumentative reading and writing focus on prompting a reader to appropriately identify a thesis and any supportive information for the thesis, to assess relationships between the thesis and the supportive information in an argument, and to develop their new ideas based on the argument by connecting and reinterpreting the information in the text(s) in an argumentative manner (Crowhurst 1990; Gárate et al. 2007; Parodi 2007; Newell et al. 2011). Acquiring these skills is important for students to help them become good citizens in a knowledge-based society (Goldman 2004; Goldman and Scardamalia 2013; Westby 2004).

In this sense, argumentative reading and writing require readers to go beyond the analysis and skepticism usually applied while reading text(s) in a critical reading context (Ennis 1993) and to engage in knowledge creation based on active engagement with the text, so that they can question, explain, and connect ideas within and across texts and to their prior knowledge in order to write an argument based on the text. However, many studies on this topic have focused solely on argumentative “reading” while treating students’ recall of an author’s claims as textual comprehension, their identification of inconsistent logic in the text as critical evaluation, and their abstinence from bias as a sign that they are critical readers (Britt et al. 2008; Larson et al. 2009; Wolfe et al. 2009). For novice students to achieve a desirable level of argumentative reading and “writing” in practice, it is necessary to provide scaffolds that assist them in building constructive arguments based on their interpretation and evaluation of the arguments in the text (Scardamalia et al. 2012; Goldman and Scardamalia 2013).

The present study aims to develop software to support beginner students’ argumentative reading and writing; the software aims to bridge argumentative reading and writing by providing students with a function for intuitively creating knowledge maps from the text to represent their understanding based on their argumentative reading. The software also provides a function to develop links for creating knowledge maps with the students’ own ideas so that they can develop their argument before writing an argumentative essay.

Underlining and mapping as key strategies to promote argumentative reading and writing

As scaffolds to help novice students in argumentative reading and writing, prior studies in reading comprehension with the task of producing a written composition have revealed that the use of strategies such as underlining, annotating, highlighting, and various other forms of external representation, including note-taking, are effective in facilitating students’ understanding of the text, particularly when the students are instructed to read the text critically and to develop an essay (Bråten and Samuelstuen 2004; Kobayashi 2007, 2014). Both external representations and note-taking are particularly prevalent strategies for most college students, even without any instruction (Stahl et al. 1991; Lonka et al. 1994; Slotte and Lonka 1998), and provide similar effects for reading comprehension, such as encoding effects, while external storage effects result from the activity of reviewing them (Di Vesta and Gray 1972; Makany et al. 2009). These external strategies are effective in helping students make inferences from the text content (Haenggi and Perfetti 1992). Related studies have also indicated that external representations enhance the quality of critical essays that are written subsequently (Annis 1985; Lonka et al. 1994; Kobayashi 2007, 2014). Thus, underlining, highlighting, and taking notes are all important external strategies that can help students not only understand the text but also make inferences and develop critiques. In our daily lives, we combine these external strategies when reading a text with a purpose. However, prior studies have not investigated the effects of combining such strategies (cf. Ponce et al. 2013, 2018).

Another promising way to help students easily understand text(s) in an argumentative manner is using nonlinear note-taking techniques such as graphic organizers (Shapiro et al. 1995; Robinson et al. 1998; Katayama and Robinson 2000; Kools et al. 2006; Robinson et al. 2006). This strategy includes the use of concept maps (Novak and Gowin 1984), knowledge maps (O’Donell et al. 2002), and similar approaches (Makany et al. 2009), which are graphic organizers that represent ideas as node-link assemblies (for a review, see Nesbit and Adesope 2006). These graphic organizers are employed in reading research (McCagg and Dansereau 1991; Chmielewski and Dansereau 1998; Chang et al. 2001; Conlon 2008; Adesope et al. 2017) and for the creation of texts (Czuchry and Dansereau 1996; Straubel 2006; Giombini 2008; Davies 2011; Simper et al. 2016). It is well known that representing the content of a text spatially in the format of a map is effective in recreating the text’s content in terms of the relationships between key elements in the text and the text’s structure (McCagg and Dansereau 1991; Chmielewski and Dansereau 1998), especially when the text material is highly complex (Fiorella and Mayer 2016), thus achieving text comprehension at the situation level (Kools et al. 2006). Recent studies have also suggested that creating graphic organizers can encourage students to use a generative learning strategy (for a review, see Fiorella and Mayer 2016), which involves actively constructing meaning from to-be-learned information by reorganizing it and integrating it with prior knowledge (Chi and Wiley 2014), more than just underlining or highlighting can (Ponce and Mayer 2014a, b).

Instead of giving students a map of a text constructed by an expert, it is useful to have students construct knowledge maps themselves, as the process helps them recognize the important concepts, relationships, and structures of a text (Boyle and Weishaar 1997; McCagg and Dansereau 1991). Some studies have suggested that student-constructed maps are effective in helping students associate new knowledge with their existing knowledge and experience (Novak and Gowin 1984; O’Donell et al. 2002), although making a partial contribution by filling in empty cells on a graphic organizer in a forced-choice task is also effective (Robinson et al. 2006). In line with strategies for the external representation of reading, such as graphic organizers (Robinson 1997; Buckingham Shum 2002; Robinson et al. 2003; Harrell 2011; Harrell and Wetzel 2015; van Gelder 2015), some studies have also shown that knowledge maps can help students achieve higher-level learning goals, such as problem-solving transfer, application, and analysis (Anderson and Krathwohl 2000). The reason is that students establish relationships in the map while reconstructing their existing knowledge; this approach encourages them to think about argument(s) while reading the texts, a process that does not occur when the students fill out certain ideas based on forced choices.

For instance, Lonka et al. (1994) showed that the quality of critical essays written after reading a text significantly improved with the use of maps, since students need to reflect on the content while creating mental representations of the text. Beyerbach and Smith (1990) used the “Learning Tool” software, developed by Kozma (1987), to allow students of infant education to create node-and-link structures; the aim was to help the students reflect on their own learning and to create and modify maps concerning “effective instruction for infants.” Such experiments have shown that the task of developing “effective instruction” maps indeed helps learners to better review what they have learned. Ponce and Mayer (2014a) asked college students to read an onscreen text on the left side of the screen while filling in an onscreen matrix regarding the text on the right side of the screen (the experimental group) or while taking notes in a box on the right side of the screen (the control group). The results indicated that the experimental group performed better than the control group on a set of tests. These prior studies indicated that summarizing from a text to a graphic organizer using software is a promising strategy for enhancing reading comprehension. However, they did not investigate how creating node-link representations enhanced students’ idea development based on the text that the students read.

In sum, developing node-link assemblies not only helps students understand text content but also may assist in the reading process and encourage students to critique the text when they are required to write an argumentative essay. Thus, developing a node-link structure is a promising approach to assisting readers in organizing their ideas so that they can develop the narrative of an essay prior to writing it. Indeed, Nesbit and Adesope (2006) mentioned in their review paper that “more research is needed on the effectiveness of concept mapping as a pre-writing activity” (p. 435). However, only very limited research on graphic organizers has pursued this direction, helping students either learn argumentative writing (e.g., Nussbaum and Schraw 2007) or simply learn to develop a narrative (e.g., Simper et al. 2016).

We believe that node-link representations are promising in helping students construct their arguments, not only in discussions but also in writing. A variety of studies on computer software systems show the effectiveness of node-link representations in helping learners develop their arguments. For example, CSILE/Knowledge Forum® (Scardamalia and Bereiter 1991, 2014) generates a relational network that shows how notes for each idea are connected by using a commenting function that helps students take the ideas described in their notes to a higher level (van Bruggen et al. 2003); it also employs a script function, called “thinking types,” that supports students in arguing in an argumentative manner. Meanwhile, Belvedere (Paulucci et al. 1995; Toth et al. 2002), QuestMap (Conklin 2003), Undo-kun (Funaoi et al. 2002), TC3 (van Amelsvoort et al. 2007), and Webspiration (Hsu et al. 2015) allow students to develop node-link representations using evidence from observations or experiments or their own ideas, making a significant contribution to helping students develop their arguments even before beginning collaborative argumentation. However, prior studies that developed this kind of software did not address how to bridge argumentative reading and writing by allowing students to use combinations of the strategies.

External representations generated from electronic documents as a means to bridge argumentative reading and writing

Because electronic documents have become increasingly popular, research is being conducted on reading strategies for online documents or digital texts depending on students’ learning strategies and the strategies’ suitability under particular circumstances (Hsieh and Dwyer 2009; Chen et al. 2014; Hermena et al. 2017; Ben-Yehudah and Eshet-Alkalai 2018). Various computer software programs are being developed to guide beginner students’ reading strategies by providing functions such as prompts (Britt and Aglinskas 2002; Stadtler and Bromme 2008; Park and Kim 2016), graphic organizational frameworks (Chang et al. 2001; Liu et al. 2010; Pirnay-Dummer and Ifenthaler 2011; Ponce et al. 2012; Dwyer et al. 2013; Ponce, et al. 2013; Kiili et al. 2016), and underlining and annotation functions (Glover et al. 2007; Wolfe 2000; Chen et al. 2012; Chen and Huang 2014; van Horne et al. 2016; Winne et al. 2017; Yanikoglu et al. 2017; for a review, see Wolfe 2002) as well as mobile reading software (e.g., Chen et al. 2011; Chen and Lin 2014). Some of these functions directly liaise with digital texts and can automatically retrieve key points from the text, while others allow students to manually input answers or manipulate the texts in order to prompt their deeper thinking. To the best of our knowledge, no application has been developed to enable students to use these underlines and/or highlights or annotations directly while taking notes or utilizing external strategies.

A computerized graphic organizer can help learners construct node-link assemblies such as knowledge maps that can be easily adjusted or quickly corrected; computer-assisted mapping may also have positive effects on learners’ reading abilities (Liu et al. 2010), especially when the software provides a correction function for reviewing students’ summaries (Chang et al. 2001). Some software also provides automatic functions to support the construction of maps (e.g., Chang et al. 2001; Pirnay-Dummer and Ifenthaler 2011; Juarez Collazo et al. 2015). Note that some of the above studies used premade graphic organizers to foster students’ text comprehension (and even a nonautomatic premade graphic organizer; Colliot and Jamet 2018), while others had learners compare maps with their ideas or input text data in a framework provided by the software (e.g., Ponce and Mayer 2014a, b). However, from the perspective of bridging argumentative reading and writing, self-construction of maps has been shown to be more efficient than providing premade maps (McCagg and Dansereau 1991; in the EFL setting, Liu et al. 2010; Liu 2011; Eftekhari et al. 2016). As mentioned earlier, self-constructed maps help students recognize important concepts, relationships, and structures of texts. Creating node-link structural representations such as knowledge maps helps them link the ideas contained in the nodes to their own ideas while developing their own opinions. This approach is key to enabling students to bridge their reading comprehension and their ideas based on the texts in order to develop argumentative essays. Thus, bridging the underlining and/or highlighting of texts and the creation of knowledge maps seems to be a promising scaffold for supporting students’ argumentative reading and writing.

In addition, we should point out that there is a need to provide scaffolds that help students avoid distorting their understanding while developing their argumentative essays. Goldman et al. (2013) indicated that some students introduced distortions of text-based information while writing their essays because of the demands of synthesis and inference. Thus, bridging reading and writing in an argumentative manner is difficult for novice learners. To address this difficulty, it is necessary to provide an additional scaffold so that students can appropriately use what the authors wrote in the texts in order to constructively develop their essays.

Research questions

The literature has shown that, independently of each other, (1) underlining and/or highlighting and (2) the self-construction of a node-link representation each have a significant impact on helping students make inferences about a text. In addition, the self-construction of node-link representations such as knowledge maps is a promising way to help learners develop an argument. Thus, helping learners combine underlining and/or highlighting with node-link representation as external strategies is a promising way to foster argumentative reading and writing. Furthermore, because students need scaffolds to avoid distorting text information while engaging in argumentative reading and writing, there is a need to help them develop their essays based on what the authors wrote. The abovementioned literature review leads us to the following research questions and software design:
  1. Do beginner learners who use combined strategies with a knowledge map that reflects the original text develop a better argumentative essay regarding the text? (Fig. 1)
  2. What kinds of external strategies used by learners impact the quality of their argumentative reading and writing?

Fig. 1

The software supports bridging argumentative reading and writing by combining underlining and/or highlighting and knowledge-mapping

Design and development of software to support argumentative reading and writing

Our reading support application is equipped with three panes (Fig. 2): the Document pane, where a user reads, underlines, highlights and annotates a text; the Knowledge Map pane, where a learner generates a knowledge map based on the text (Fig. 2); and the Essay Editor pane, where the learner writes an essay (Fig. 3). We designed the interface based on the anticipated learning process so that two panes are always aligned beside each other and the reader can switch between them. For example, either the Document pane or the Essay Editor pane is aligned with the Knowledge Map pane so that the learner can compare the text (Fig. 2) or their essay (Fig. 3) with their knowledge map. This juxtaposition makes it easy for the learner to create his or her knowledge map and to develop an essay based on the map. In addition, the system allows additional texts to be imported for reading and creating a map in different tabs of each pane. The users can switch between texts and maps (and sometimes essays) so that they can import secondary source materials to improve their knowledge map or their essay.
Fig. 2

Software interface (Document pane and Knowledge Map pane)

Fig. 3

Software interface (Essay Editor pane and Knowledge Map pane)

Underlining/highlighting and annotation

The left side of Fig. 2 is the Document pane. A learner selects an underlining pen or a highlighting marker in the desired color from the toolbox at the top of the screen. As we described earlier, this function can help the learner understand the text, make inferences, and develop critiques (Bretzing and Kulhavy 1981; Annis 1985; Haenggi and Perfetti 1992; Lonka et al. 1994; Kobayashi 2007, 2014). The learner can underline or highlight portions of the text using the pen or the marker and adjust the length of the underlined and/or highlighted portion by moving the cursors at both ends of the underlined or highlighted sections. The learner can change the color of each underlined or highlighted section directly by using the toolbox at the top to make trial-and-error adjustments while underlining the text. The meaning of each color is not fixed, so the learner or the teacher can adjust the definitions to the appropriate school level or for individual purposes (e.g., Tsubakimoto et al. 2010).

Creating a knowledge map with quotation nodes cited from underlined or highlighted texts

The Knowledge Map pane is shown on the right side of Figs. 2 and 3. On this pane, the learner can create not only knowledge maps but also various other graphic organizers and representations, including tables, photos, and handwritten memoranda. This flexibility enables educators to develop their own approaches using their own discretion to decide which methods from this software should be included (Tsubakimoto et al. 2010). However, in this study, we focus on developing a knowledge map according to Toulmin’s model (described below) for supporting argumentative reading and writing on this pane.

When the learner intends to build a knowledge map based on the underlined or highlighted portions of the text, he or she drags and drops those portions from the Document pane to the Knowledge Map pane. The software then automatically creates quotation nodes (referred to as “idea nodes” in the study described later) that contain the corresponding portions and colors of the text on the Knowledge Map pane (Fig. 2). Thus, the learner can easily combine the two strategies. Each quotation node includes a copy of the underlined or highlighted portion so that the learner can write his or her essay without repeatedly referring back to the Document pane or distorting the original idea while developing ideas. Each quotation node reflects the original underlined and/or highlighted color of the text; the content and color of each quotation node in the Knowledge Map pane change automatically in accordance with adjustments to the length and color of the corresponding underlined and/or highlighted portion of the text in the Document pane. Conversely, the colors of the underlined and/or highlighted portions of the text are adjusted in accordance with changes to the quotation nodes’ colors on the Knowledge Map pane.
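The two-way synchronization between highlighted spans and quotation nodes can be sketched as a simple observer pattern. The following is a minimal illustration of one way to implement this behavior; all class and method names are our own hypothetical choices, not the software’s actual code.

```python
class HighlightSpan:
    """An underlined/highlighted portion of the source text."""

    def __init__(self, document, start, end, color):
        self.document = document          # full document text
        self.start, self.end = start, end # span boundaries (character offsets)
        self.color = color
        self.observers = []               # quotation nodes mirroring this span

    def content(self):
        return self.document[self.start:self.end]

    def _notify(self):
        for node in self.observers:
            node.refresh()

    def resize(self, start, end):
        # Moving the end cursors updates every dependent quotation node.
        self.start, self.end = start, end
        self._notify()

    def recolor(self, color):
        # Recoloring the span propagates to the map.
        self.color = color
        self._notify()


class QuotationNode:
    """A node on the knowledge map created by dragging a span onto it."""

    def __init__(self, span):
        self.span = span
        span.observers.append(self)
        self.refresh()

    def refresh(self):
        # Mirror the span's current text and color.
        self.text = self.span.content()
        self.color = self.span.color

    def recolor(self, color):
        # Conversely, recoloring the node recolors the underlying span.
        self.span.recolor(color)
```

Under this design, adjusting a span in the Document pane and recoloring a node in the Knowledge Map pane are two entry points to the same shared state, which keeps the panes consistent.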

The learner can freely draw lines and shapes, such as arrows, circles, rectangles, or text boxes, by using drawing functions to express relationships among quotation nodes (Fig. 3) so that he or she can develop ideas while manipulating and creating knowledge maps. In addition, the learner can add personal idea nodes (referred to as “self-nodes” in the study described later) to write down his or her personal ideas and interconnect them with some of the quotation nodes or with structural elements of the knowledge maps that he or she has created to represent the learner’s inferences and critiques based on the text content. This function is inspired by CSILE/Knowledge Forum® (Scardamalia and Bereiter 1991, 2014), which was designed for knowledge-building activities in relation to students’ compositions and scientific argumentation. This kind of interconnecting activity can enhance the learner’s inference abilities (Chi and Wiley 2014). The personal idea node does not have any information at first, allowing the learner to write or edit the content.

Essay editor

The software juxtaposes the Knowledge Map pane and the Editor pane so that the learner can write an essay while looking at the map (Fig. 3); the learner can plan, translate, and review his or her essay even while developing and reviewing the knowledge map. This layout was designed to prompt the learner’s higher-order thinking to plan, reason, and infer while writing, not to act as an automatic writing support function such as a pop-up prompt that guides further steps. Thus, the learner’s activity while writing his or her essay can be iterative. While writing the essay, the learner may insert quotation(s) from the text to correctly build on the author’s original ideas. In such a case, the learner locks the map by clicking a button, drags a quotation node or a personal idea node from the map, and drops it onto the Editor pane so that the software creates a quotation in the Editor pane.

The quotation does not allow the learner to edit its content directly for the following reasons. First, based on our design intention to foster knowledge building based on the text author’s idea while the learner is writing an essay, we intended to have the learner respect the idea of the text and reconfirm exactly what the author wrote. Each quotation has the name of the text and the page number (i.e., source information) where the quotation appears so that the learner can easily jump to the corresponding portion of the text to reread it. Second, this function is designed to avoid plagiarism in order to respect the original idea as written in the text. The learner can add personal thoughts before or after the quotation which he or she has located on the Editor pane; thus, the learner can achieve appropriate use of the text in the argument while respecting the original author’s idea. As mentioned above, the quotation box shows the source information, while the software provides information regarding word count and quotation ratio. Thus, even a novice learner (and the teacher) can understand whether the volume of the quotation content is appropriate in the essay and can consider and respect the source information while adding their own discussion before and after the quotation in order to avoid plagiarism.

Method for a study in classrooms

We designed a study in classrooms to examine how creating knowledge maps from underlined and/or highlighted sentences on the software would impact the quality of essays, especially when considering the hypothesis proposed above.

Participants

To recruit students who were motivated to learn argumentative reading and writing strategies while maintaining ecological validity, the study was held as a special lesson (no credit) in which undergraduate students learned argumentative reading and writing; the lesson also served a research purpose.

A total of sixty-three freshman and sophomore undergraduate students at one of the national universities in Tokyo were recruited through advertisements on campus and announcements in classes. All the participants were enrolled in the liberal arts education program at the university. The age range of the participants was 18 to 22. They had received no formal argumentation education (such as Toulmin’s model) before participating in this special lesson. Two of the participants had previous experience in using Windows tablet PCs, but they had no experience in reading digital texts using a stylus on those devices.

We conducted this study with undergraduate students for two reasons. First, as discussed earlier, there is a growing interest in and necessity for argumentative reading and writing at the university level (e.g., Newell et al. 2011; Abdollahzadeh et al. 2017). Second, it is necessary for Japanese undergraduate students to acquire argumentative reading and writing skills because they lack opportunities to acquire those skills in the primary and secondary education curricula (Schwarz and Baker 2017). One reason for the lack of argumentation education prior to commencing university studies is that Japanese schools and students prefer to maintain a harmonious classroom atmosphere, giving precedence to “wa” (harmony) in the group (Barnlund 1989; Sekiguchi 2002; Muller Mirza et al. 2009; Schwarz and Baker 2017, p. 222). Thus, conducting discussions or debates among members of a group is not preferable for students, and students go to great lengths to avoid argumentativeness, even compared to students in other Asian countries. Therefore, Toulmin’s model has not been widely adopted in Japanese secondary education (Muller Mirza et al. 2009).

However, to begin studying in undergraduate programs, first-year students must acquire the skill of argumentative reading and writing as a basis of cultivating professional and disciplinary knowledge in higher education (Nakano and Maruno 2013). Thus, we conducted this study with the software for Japanese freshman and sophomore students who were complete beginners in argumentative reading and writing and needed to acquire such skills.

We divided the students into two classes. One class (the knowledge-mapping class) used the software introduced above to create knowledge maps when reading, while the other class (the non-knowledge-mapping class) used the software when reading texts but did not create knowledge maps. Based on each applicant’s personal schedule, we aimed to balance the number of students and the grades and genders of the participants in each class. Finally, we obtained thirty-four (three females and thirty-one males; twenty-six freshmen and eight sophomores) participants in the knowledge-mapping class and twenty-nine (seven females and twenty-two males; sixteen freshmen and thirteen sophomores) participants in the non-knowledge-mapping class. No significant associations were found (Fisher’s exact tests: grade: p = .108, gender: p = .165). The students were compensated for their participation.
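The balance check reported above can be reproduced from the counts given (grade: 26/8 freshmen/sophomores vs. 16/13; gender: 3/31 female/male vs. 7/22). The following stdlib-only sketch implements the standard two-sided Fisher’s exact test; it is our reproduction of the check, not the authors’ analysis script.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 contingency table.

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table.
    """
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(x):  # P(cell (1,1) = x) under fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Rows: knowledge-mapping class, non-knowledge-mapping class.
p_grade = fisher_exact_two_sided([[26, 8], [16, 13]])   # reported as p = .108
p_gender = fisher_exact_two_sided([[3, 31], [7, 22]])   # reported as p = .165
```

Both p values exceed .05, consistent with the conclusion that class assignment was not significantly associated with grade or gender.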

Material

The content of the assigned text that was used for this study discussed changes in the current American employment system; the text consisted of 5423 Japanese characters organized into 19 paragraphs. The content regarded how, despite an increasingly fluid labor market in the modern United States, many workers do not fit the profile of the traditional “organization man” (i.e., an individual who desires to work for a large organization) and instead choose to work for companies on a project-by-project basis as “free agents.”

We identified the author’s main statement in the material as follows: free agents are becoming more common and are increasingly impacting the traditional “organization”-based employment system, and there is a need to understand the free agent economy. This statement rests on the following foundations in the manuscript: on average, fewer than one in ten American employees currently works for a Fortune 500 company; a temporary staffing company is becoming the largest private employer in the United States; in a strong trend, responsibility for career success is shifting from the organization to the individual (explained with some examples); and statistics show that seven or more out of ten Americans want to have their own business rather than work for a company. These foundations were used in parallel to support the central claim, and each of them is sufficient on its own to support the author’s main statement.

This material was chosen because we expected that the students participating in this study, which was conducted as a lesson, would be motivated to read the text: employment and job hunting are major topics of concern for students, and the students would likely possess some prior knowledge about them.

Procedure

The classes were conducted to evaluate the effectiveness of the software described above, and we distributed the software on a tablet PC to each student in each classroom. Before beginning to use the software, all the students were given the same lecture on a simplified Toulmin’s model (Toulmin 1958). We used Toulmin’s model, among a variety of models of argumentation (see Bentahar et al. 2010, for a review), because it is widely used to organize argumentation and can be applied in a simplified manner (e.g., Stegmann et al. 2007; Howell 2018). In essence, the rationale was that when learners read a given text, it is important for them to clearly understand the relationships between the author’s claims and the grounds for those claims. This understanding can also foster students’ ability to infer the relationships between their own ideas and what the author wrote, so that they respect the author’s text when forming their own opinions and questions about it. Due to the lack of argumentation education in primary and secondary education in Japan, we did not provide guidance on how to refute arguments, so that the students could simply and clearly comprehend and analyze the relationship between the author’s claims and the supporting details.

The students were then given a short paper-based manual for the software, received a short lecture on how to use the software on a tablet PC, and read a text that we created as an exercise. This approach allowed the students to familiarize themselves with the software on the tablet PC and practice analyzing the text based on the simplified Toulmin’s model. Because the students were not expected to have any experience using PCs or tablet PCs for learning, the entire exercise was allotted 100 min. Then, following a 15-min break, the students read and interpreted the target text. There were no time restrictions on this task; therefore, the students’ essay quality was not affected by time pressure or by their lack of experience in using PCs for learning. All the students finished writing their essays within 100 min. Thus, the total time of the session was 215 min.

The instructions were the same for both classes: “Accurately point out the author’s arguments and their grounds, and develop your own opinion in response.” Both classes were allowed to underline and/or highlight the text and annotate the underlined and/or highlighted portions. All the students used a stylus pen to write on the digital text; red underlining or highlighting was used to identify the author’s claims, blue for the evidence of these claims, green to mark interesting portions, and yellow to point out problems with the author’s logic; these color allocations were provided in the instructions.

The knowledge-mapping class received further instructions on how to create knowledge maps and how to use them for their essay writing as well as the quotation function of the maps. The instructor showed an example that was similar to Figs. 2 and 3 on the tablet PC to explain how to use the software. According to the instructions, the students were expected to develop a knowledge map that reflected the author’s argument and then to link their ideas to the corresponding idea nodes so that they could represent their built-on ideas.

Students in the non-knowledge-mapping class were asked to write their essays immediately after completing the underlining and/or highlighting task. Students in this class were allowed to write some text-based memoranda on the Editor (without creating graphic organizers) to prepare for developing their essays.

Both classes were allowed to use the underlining and/or highlighting function so that we could focus on the effectiveness of combining underlining and/or highlighting with the knowledge map functions; many existing studies have revealed that underlining and/or highlighting texts can enhance students’ understanding of a text as well as their inferences and critiques, as mentioned in the literature review. After finishing their essays, the students were asked to fill out a questionnaire regarding the software functions to assess its usability.

Analysis

Usability assessment

To examine the usability of the software, we asked several questions to assess how the students used it while underlining and/or highlighting the text, while developing knowledge maps based on the text (knowledge-mapping class students only), and while writing their essays. The students answered the questions on a 4-point Likert scale (1 = strongly disagree; 4 = strongly agree).

Counting and examining underlined and/or highlighted sections, quotation nodes, and personal idea nodes

Based on the literature review, we estimated that underlining/highlighting the text could affect students’ argumentative reading and writing. To examine how the students’ use of underlining and/or highlighting affected argumentative reading and writing, we counted the number of underlined and/or highlighted sections of the text in each student’s product. We also determined the number of students who could appropriately identify the main statement and at least one of the grounds, which were indicators of each student’s performance.

In addition, for the knowledge-mapping class, we counted the number of quotation nodes of each color and the number of personal idea nodes in each student’s product because the literature review also predicted the effectiveness of those functions for the students’ argumentative reading and writing. Similarly, we determined the number of students who used appropriate quotation nodes for the main statement and the ground(s) on the Knowledge Map pane.

Examining the quality of knowledge maps

We further examined how the students represented the relationships among the quotation nodes generated from the text and the personal idea nodes using four criteria: 1) whether the students linked red and blue quotation nodes, 2) whether the students linked appropriate quotation nodes representing the main statement and grounds, 3) whether the students linked their ideas or prior knowledge to the quotation node(s) of some elements of the text, and 4) whether the students linked their ideas or prior knowledge to the appropriate quotation node(s) representing the author’s main statement or grounds. An example of a knowledge map generated by one student is shown in Fig. 4 (we added the criteria numbers to the knowledge map). These four criteria were considered quality measures for the knowledge maps; the use of appropriate quotation nodes and links is important in understanding the target text and developing a constructive argumentative essay.
Fig. 4

An example of a knowledge map generated by a student (bold English explanations were added by the researchers)

We selected the first criterion because it is a basic indicator of whether a student read the text based on Toulmin’s model. However, this indicator alone does not provide sufficient evidence regarding whether the students could appropriately represent the structural elements based on the content of their knowledge maps. Therefore, we also examined whether each student linked the appropriate quotation nodes to represent the relationship between the main statement and the grounds; this linking indicated the quality of the process of argumentative reading while the students tried to understand the text by creating a map, as in the second criterion. The third criterion, derived from the knowledge-building aspect of argumentative writing, indicated whether the students tried to build their opinions on a variety of ideas from the text (including the author’s main statement or the grounds), as in the knowledge-building activity on the Knowledge Forum. We also investigated whether such opinion building was linked to the author’s main statement or grounds in the text to examine whether the students tried to build an argument directly on the author’s main statement or grounds, even though the maps satisfying this fourth criterion necessarily overlap with those satisfying the third. For each criterion, each knowledge map was coded in a dichotomous format according to whether it contained any corresponding representations.

Furthermore, some of the students spontaneously used strategies to group several quotation nodes by enclosing those nodes with rectangles or handwritten circles. Such external strategies can prompt more constructive thinking and learning (Chi and Wiley 2014). Thus, we examined the number of groups and the number of nodes shown in each group as well as the average number of nodes in each group.

Coding and scoring essays

Two of the authors independently provided overall scores to indicate the quality of each essay using 11-point Likert-type scales on the following two characteristics: logical coherence (0 = extremely nonlogical and incoherent, 10 = extremely logical and coherent) and constructiveness (0 = extremely nonconstructive, 10 = extremely constructive), in addition to the analytic coding described below. This multiple-assessment approach allowed us to evaluate the texts more accurately; as Elbow (2006) argued, “A single lens always hides or distorts aspects of what is being looked at” (p. 91). Logical coherence and constructiveness were used to assess the quality of the students’ products from the aspects of reading and writing, respectively; logical coherence is a criterion used in critical reading education (e.g., Daiek and Anter 2003; Spears 2006), and constructiveness is a criterion of knowledge building through learning from texts (Chan et al. 1992). We used these overall scores as holistic scoring because holistic scoring can identify each student’s performance level and is highly correlated with analytic scores (Bacha 2001).

The two researchers carefully read the abovementioned references and scored each essay independently, using the references that included each assessment scheme. The average intraclass correlation coefficient (ICC) of the logical coherence score and the constructiveness score was 0.86 with a 95% confidence interval (CI) from 0.79 to 0.92 (F (62, 62) = 13.746, p < .001) and 0.83 with a 95% CI from 0.74 to 0.89 (F (62, 62) = 10.852, p < .001), respectively, indicating a satisfactory level of agreement (Landis and Koch 1977). Finally, the two researchers discussed the discrepancies by checking the essays and by referring to the assessment schemes before determining the final scores.
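The reported coefficients are consistent with a two-way, single-measures consistency ICC for k = 2 raters, for which ICC = (F − 1)/(F + k − 1). The following is a minimal check against the reported F statistics (a sketch under that assumption, not the authors' analysis code):

```python
# Sketch: the reported ICCs are recoverable from the reported F statistics
# under a two-way consistency ICC with k = 2 raters (an assumption on our
# part, not stated explicitly by the authors).
def icc_from_f(f_stat: float, k: int = 2) -> float:
    """Single-measures consistency ICC: (F - 1) / (F + k - 1)."""
    return (f_stat - 1.0) / (f_stat + k - 1.0)

icc_coherence = icc_from_f(13.746)         # reported: .86
icc_constructiveness = icc_from_f(10.852)  # reported: .83
print(round(icc_coherence, 3), round(icc_constructiveness, 3))
```

The recovered values (.864 and .831) agree with the ICCs reported here and in Table 4 to three decimal places.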

In addition, we conducted analytic coding that examined whether each essay contained element(s) resulting from argumentative reading and writing. Table 1 shows our coding scheme: Criteria 1 to 3 were formulated based on the basic requirements for argumentative reading and writing; as mentioned above, we instructed the students to use the simplified Toulmin’s model (Toulmin 1958) and explained to the students that identifying the author’s thesis and the supporting details for explaining the claim(s) is important for using inference to identify the implied ideas in the text as well as understanding the logical structure (Daiek and Anter 2003; Spears 2006). As shown in the example, some students who used the author’s main statement as their opinion (C212) without explaining the author’s argument misunderstood the meaning of grounds (E203; the student misunderstood this concept as only an organizational issue) and used grounds only to explain their argument (C202); the two researchers coded these essays as not containing such elements. Criteria 4 to 6 were formulated based on the requirement of argumentative writing; the students needed to build on their opinions as a knowledge-building activity (Scardamalia and Bereiter 2014) that should address the author’s purpose or context (Daiek and Anter 2003; Spears 2006). Use of prior knowledge can be a catalyst for knowledge building (Chan et al. 1992; Kimmerle et al. 2011). Thus, for example, restating the author’s main statement as the student’s opinion (E222) and ignoring the text’s purpose or context were not appropriate approaches to argumentative reading and writing or a constructive way to build knowledge (C215, C225). The researchers coded such essays as having no elements of criteria 4 to 6.
Table 1

Coding scheme for essays

Criterion 1: Identifying the author’s main statement
Definition: The learner identifies the author’s main statement.
[Yes] Author claims that in American society, free agents will replace organization men. (C202)
[No] If this trend continues, people of the organization man era will decrease, and the ratio of free agents will increase. This is my analysis of the individual level. (C212)

Criterion 2: Identifying the author’s ground(s)
Definition: The learner identifies at least one of the four grounds in the text.
[Yes] As the shift was being made towards the free agent, the power also shifted from the organization to the individual. The mainstream approach now is to choose specific members for a specific purpose, and once that purpose is met, the team is resolved. (C222)
[No] Large-scale fixed organizations that carry many individuals are about to be replaced by small and flexible networks in which individuals constantly change. (E203)

Criterion 3: Explaining the statement and the grounds in relation to each other
Definition: The learner explains the author’s argument by connecting the author’s claim and ground(s); not necessarily a complete warrant, but it must include a reasonable explanation.
[Yes] The author points out that the largest private sector employer in the United States is Manpower—a staffing firm with over 1100 outlets across the country—and that tens of millions of Americans are working as free agents. (E204)
[No] The author introduces a statistic from the Department of Labor that states that “more than half of temporary workers desire to work full time,” but he does not discuss this issue deeply and refers to it only by stating that “temporary work is not all good.” This left an awkward feeling. (C202)

Criterion 4: Stating an opinion that builds on the author’s statement
Definition: The learner states his/her opinion(s), addressing the author’s claims.
[Yes] As mentioned in the main text, there will be people who desire a stable lifestyle and continue to work as organization men. However, apart from personal reasons, I believe there are reasons based on the survival of the entire society. (C216)
[No] Considering the current situation of the society, and searching for a social framework that is adaptable for workers, it is important to understand the underlying factors of the social changes mentioned in this paper. (E222)

Criterion 5: Stating an opinion addressing the author’s purpose or context
Definition: The learner states his/her opinion(s), addressing the author’s purpose or context.
[Yes] I would like to make an original proposal. There is a need to group the free agents into two categories based on why they chose that style of work. (E209)
[No] I do not think this “common sense” is a new concept. The United States values freedom and individuality. The old work style—which is based on mass production and requires uniformity in both the product and work style—in no way reflected American values. (C215)

Criterion 6: Prior knowledge use addressing the author’s purpose or context
Definition: The learner uses his/her prior knowledge to develop his/her argument to address the author’s purpose or context.
[Yes] The increase of “NEET” and temporary workers in Japan in recent years may be the beginning. Therefore, the free agent issue in the US is something we must not disregard as someone else’s problem. (C209)
[No] Therefore, I do not think a free agent workstyle is fit for the Japanese. (C225)

The two researchers carefully coded all the essays independently, referring to the coding scheme described in Table 1. Each essay was coded in a dichotomous format for each criterion. Interrater reliability was tested using Cohen’s kappa and was found to range from 0.73 to 0.97, with an average of 0.88 (SD = 0.08), indicating a satisfactory level of agreement (Landis and Koch 1977). Finally, the two researchers discussed the discrepancies by checking the essays and the coding scheme and then determined the final values.
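For dichotomous codes like those above, Cohen's kappa can be computed directly from two raters' decisions. The following sketch (not the authors' code; the codings are hypothetical, for illustration only) implements the standard observed-versus-expected agreement formula:

```python
# Sketch (not the authors' code): Cohen's kappa for one dichotomously coded
# criterion, from two raters' yes/no (1/0) decisions. The codings below are
# hypothetical, for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of agreement.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n)
                for c in set(rater_a) | set(rater_b))
    return (p_obs - p_exp) / (1.0 - p_exp)

rater_a = [1, 1, 1, 1, 0, 0, 0, 0]
rater_b = [1, 1, 1, 0, 0, 0, 0, 0]
kappa = cohens_kappa(rater_a, rater_b)
```

With these hypothetical codings, the two raters disagree on one of eight essays, giving a kappa of 0.75, within the 0.73–0.97 range reported above.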

In the scoring and coding procedure, the students’ essays were exported to rich text files without any formatting attached in the software, and each composition was given an anonymous identification code for blind coding and evaluation. The two researchers received the files in a randomized order both for scoring and for coding independently; in order to avoid bias between scoring and coding, the researchers scored each of the two criteria in two sessions separated by two weeks and then coded the essays one month after they had scored them. In the analytical procedure of scoring and coding the essays, the two researchers first read all the essays to become familiar with the data and then worked together on two sample essays (not included in this study) to achieve a certain level of agreement on the scoring and coding procedures. Then, they either scored or coded the rest of the data independently.

Results

Usability assessment

The students’ overall evaluation of the software’s functions is described in Table 2. The results indicate that the usability was moderately accepted, and many students in both groups evaluated the software as useful. The students noted that they could think deeply, especially while underlining and/or highlighting the text and creating maps. The non-knowledge-mapping students concentrated more on identifying the author’s argument than the knowledge-mapping students (t (61) = − 2.107, p = .039, d = 0.520), because their opportunities and strategies for identifying the author’s argument before writing their essays were limited to underlining and/or highlighting in addition to simply reading the text.
Table 2

Summative subjective evaluation (average and standard deviation) for the use of the software

Item: knowledge-mapping class / non-knowledge-mapping class (n/a = item not asked of that class)

I deeply thought about where the most important main statement and its ground in the text were while reading and underlining/highlighting the text: 3.15 (0.50) / 3.41 (0.50)
It was easy to underline/highlight the text: 2.18 (0.76) / 2.48 (0.83)
This software was useful when I underlined/highlighted the text: 2.79 (0.64) / 2.90 (0.72)
I deeply thought about the logical structure of the text while creating a knowledge map: 3.21 (0.48) / n/a
Creating a map clarified the relationship between my idea or question and the main statement or its grounds in the text: 2.79 (0.54) / n/a
It was easy to create a knowledge map: 2.61 (0.54) / n/a
This software was useful when I created a knowledge map: 2.94 (0.60) / n/a
I could write my essay based on the text’s main statement and its grounds: 2.82 (0.62) / 2.79 (0.56)
I could try to relate my idea or question to the text’s main statement and its ground while writing my essay: 3.03 (0.67) / 3.03 (0.63)
This software was useful when I developed my essay: 2.65 (0.88) / 2.62 (0.82)
I looked at my knowledge map again while writing my essay: 3.00 (0.60) / n/a

Some students felt uneasy using the underlining and/or highlighting function, which might be due to their lack of experience using a stylus prior to this classroom evaluation; none of the students were familiar with this Windows tablet PC interface.

Essay quality in terms of coherence and constructiveness

Table 3 shows the coding results for the students’ essays based on criteria 1 to 6. We conducted a Fisher’s exact test on each criterion to identify the differences between the two classes. There were significant associations for criteria 1 and 2 and a marginally significant association for criterion 3, meaning that the students in the knowledge-mapping class were better able to identify and explain the author’s main statement and grounds in a well-organized manner, even though the students in the other class concentrated more on identifying the main statement and grounds while underlining and/or highlighting the text.
Table 3

Results of coding the students’ essays

 

Criterion: knowledge-mapping class (Yes/No); non-knowledge-mapping class (Yes/No); Fisher’s exact test; Cohen’s κ

Identifying the author’s statement: 34/0; 24/5; p = .017; κ = .880
Identifying the author’s ground(s): 28/6; 15/14; p = .014; κ = .925
Explaining the statement and the grounds in relation to each other: 25/9; 14/15; p = .068; κ = .898
Stating an opinion that builds on the author’s statement: 27/7; 18/11; p = .166; κ = .732
Stating an opinion addressing the author’s purpose or context: 25/9; 15/14; p = .115; κ = .898
Prior knowledge use addressing the author’s purpose or context: 20/14; 12/17; p = .210; κ = .968

Regarding the essays’ logical coherence and constructiveness scores, we conducted a t test for each score to examine the differences between the classes, which yielded significant differences in both scores (Table 4): the average scores in the knowledge-mapping class were significantly higher than those in the non-knowledge-mapping class. The results showed a large effect size on the constructiveness score and a medium effect size on the logical coherence score.
Table 4

Average scores (and standard deviations) of the students’ essays

 

Measure: logical coherence; constructiveness

Knowledge-mapping class: 5.74 (2.25); 5.84 (2.41)
Non-knowledge-mapping class: 3.98 (2.35); 3.41 (2.08)
ICC: .864; .831
t test: t (61) = − 3.018, p = .004, d = 0.766; t (61) = − 4.232, p < .001, d = 1.073
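The t statistics and effect sizes in Table 4 can be recovered from the reported means, standard deviations, and group sizes alone, assuming a Student's t test with a pooled standard deviation (a sketch under that assumption, not the authors' code):

```python
# Sketch (not the authors' code): Table 4's t statistics and Cohen's d can be
# recovered from the reported means, SDs, and group sizes, assuming a
# Student's t test with a pooled standard deviation. The sign of t depends on
# the order in which the groups are entered.
import math

def pooled_t_and_d(m1, s1, n1, m2, s2, n2):
    """Independent-samples Student's t and pooled-SD Cohen's d."""
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    t = (m1 - m2) / (sp * math.sqrt(1.0 / n1 + 1.0 / n2))
    return t, d

# Knowledge-mapping class (n = 34) vs. non-knowledge-mapping class (n = 29).
t_coh, d_coh = pooled_t_and_d(5.74, 2.25, 34, 3.98, 2.35, 29)  # reported d = 0.766
t_con, d_con = pooled_t_and_d(5.84, 2.41, 34, 3.41, 2.08, 29)  # reported d = 1.073
```

The recovered effect sizes match the reported d = 0.766 and d = 1.073, and the t statistics agree with the reported values up to rounding of the summary statistics.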

Factors of external strategies impacting the quality of argumentative reading and writing

Underlining/highlighting/quotation nodes/personal idea nodes

Table 5 shows the number of highlighted and/or underlined portions of the text. Since the distributions were not normal, we conducted Mann–Whitney U-tests to examine the differences between the two classes for the number of underlined and/or highlighted portions of the text; the results did not yield significant differences (Table 5). We also investigated how many students identified the author’s main statement and grounds appropriately while underlining and/or highlighting and found no significant association between the two classes (χ2(1) = 0.009, n.s.).
Table 5

Analysis of portions where the students underlined/highlighted the text

 

Portions underlined/highlighted, average (SD): red; blue; green; yellow; total

Knowledge-mapping class: 7.44 (5.07); 5.50 (3.62); 5.79 (5.44); 0.50 (0.79); 19.24 (7.18)
Non-knowledge-mapping class: 6.93 (5.45); 4.90 (4.09); 5.03 (3.90); 0.52 (1.21); 17.38 (9.38)
Mann–Whitney U test: U = 443.500, Z = − 0.686, p = .498; U = 427.500, Z = − 0.908, p = .368; U = 485.000, Z = − 0.111, p = .915; U = 445.500, Z = − 0.811, p = .425; U = 373.000, Z = − 1.658, p = .098

Appropriate identifications: main statement; ground(s)

Knowledge-mapping class: 33 (97.1%); 25 (73.5%)
Non-knowledge-mapping class: 29 (100%); 18 (62.1%)
χ2 (1) = 0.009, n.s.

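The per-color class comparisons in Table 5 can be run with SciPy's Mann–Whitney U test. The sketch below (not the authors' code) uses hypothetical counts for illustration only, since the raw per-student counts are not published here:

```python
# Sketch (not the authors' code): the per-color class comparisons in Table 5
# use Mann-Whitney U tests; SciPy is assumed to be available. The counts
# below are hypothetical, for illustration only.
from scipy.stats import mannwhitneyu

knowledge_mapping = [1, 2, 3]  # hypothetical per-student counts
non_mapping = [4, 5, 6]        # hypothetical per-student counts
res = mannwhitneyu(knowledge_mapping, non_mapping, alternative="two-sided")
print(f"U = {res.statistic}, p = {res.pvalue:.3f}")
```

With these toy samples every value in the first group ranks below every value in the second, so U = 0, the minimum possible statistic.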
We also examined whether the students’ appropriate identification of the author’s main statement and grounds while underlining and/or highlighting the text affected their essays’ logical coherence scores and constructiveness scores. We found no significant differences (logical coherence: t (61) = 0.000, p = 1.00; constructiveness: t (61) = − 0.017, p = .986) between the students who identified the author’s main statement and grounds appropriately while underlining and/or highlighting the text (logical coherence M = 4.93, SD = 2.73; constructiveness M = 4.73, SD = 2.82) and those who did not (logical coherence M = 4.93, SD = 1.80; constructiveness M = 4.71, SD = 1.98). Therefore, we concluded that the identification of the appropriate statement and grounds while underlining/highlighting the text is not the most critical factor for fostering argumentative reading and writing on the software.

Table 6 shows the number of quotation nodes and personal idea nodes in the knowledge maps of students in the knowledge-mapping class and indicates that all the students in the knowledge-mapping class used at least one red and one blue quotation node.
Table 6

Analysis of quotation nodes and personal idea nodes in the students’ knowledge maps in the knowledge-mapping class

 

Node type: personal idea nodes (black); red quotation nodes; blue quotation nodes; green quotation nodes; yellow quotation nodes; total

Average number (SD): 2.79 (1.12); 5.94 (3.10); 5.03 (3.10); 3.47 (3.64); 1.71 (1.62); 19.12 (5.36)
Number of students who used node(s) of each type: 34 (100%); 34 (100%); 34 (100%); 25 (73.5%); 9 (26.5%)

1) Number of students who linked red and blue nodes: 29 (85.3%)
2) Number of students who linked appropriate nodes representing the author’s argument: 14 (41.2%)
3) Number of students who linked their personal idea nodes and quotation nodes: 28 (82.4%)
4) Number of students who linked their personal idea nodes and appropriate node(s) representing the author’s argument: 16 (47.1%)

We conducted correlation analyses to examine the relationship between the scores and the number of each colored underlined and/or highlighted section and each colored quotation node. There was a significant correlation only between the logical coherence score and the number of yellow quotation nodes used on the knowledge map (r = − .391). Using yellow quotation nodes implies that the student tends to pay attention to logical inconsistency in the author’s text. Therefore, focusing on logical inconsistencies might be a distracting factor that affects the quality of the essays from the aspect of the logical coherence score.
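The significance of the reported correlation can be checked from r and n alone via the usual t transformation (a sketch, not the authors' code; SciPy is assumed to be available for the t distribution):

```python
# Sketch (not the authors' code): significance of the reported correlation
# r = -.391 (n = 34) via t = r * sqrt(n - 2) / sqrt(1 - r^2); SciPy is
# assumed to be available for the t distribution.
import math
from scipy.stats import t as t_dist

r, n = -0.391, 34
t_stat = r * math.sqrt(n - 2) / math.sqrt(1.0 - r ** 2)
p_two_sided = 2.0 * t_dist.sf(abs(t_stat), df=n - 2)
print(f"t({n - 2}) = {t_stat:.3f}, p = {p_two_sided:.3f}")
```

The resulting two-sided p falls below .05, consistent with the correlation being reported as significant.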

Quality of knowledge maps

We analyzed how each student represented the relationships among the author’s main statement, the grounds, and the student’s own ideas in his or her knowledge map. The coding results for the quality of the knowledge maps revealed that twenty-nine students in the knowledge-mapping class linked red and blue quotation nodes (criterion 1); of these, fourteen linked the appropriate nodes representing the author’s main statement and grounds (criterion 2), while the other fifteen linked red and blue quotation nodes in different ways. Twenty-eight students linked their ideas or prior knowledge to quotation nodes (criterion 3); of these, sixteen linked them to the appropriate nodes representing the author’s main statement or grounds (criterion 4), while the other twelve linked their ideas or prior knowledge to other portions of the knowledge map. Thus, most of the students in the knowledge-mapping class developed their knowledge maps, and some of them appropriately linked elements of the author’s text to their own ideas while doing so. However, some students neither clearly linked the author’s main statement to the grounds nor linked their ideas to the author’s main statement or grounds on their knowledge maps.

We examined whether the essays’ logical coherence scores and constructiveness scores were affected by linking the appropriate author’s main statement and grounds (criterion 2) or by linking the students’ ideas to the appropriate author’s main statement or grounds (criterion 4). The analyses revealed a marginally significant difference for the logical coherence scores in the appropriate link(s) between the author’s main statement and grounds on the knowledge maps (with link (N = 14): M = 6.61, SD = 1.88; without link (N = 20): M = 5.13, SD = 2.32; t (32) = 1.974, p = .057, d = 0.688), while no significant difference was observed for the constructiveness scores (with link (N = 14): M = 6.57, SD = 2.02; without link (N = 20): M = 5.33, SD = 2.58; t (32) = 1.510, p = .141). Regarding linking the students’ ideas to the author’s main statement or grounds, there was a significant difference for the constructiveness scores (with link (N = 16): M = 6.78, SD = 1.71; without link (N = 18): M = 5.00, SD = 2.67; t (32) = 2.280, p = .029, d = 0.784), while no significant difference was observed for the logical coherence scores (with link (N = 16): M = 6.25, SD = 1.90; without link (N = 18): M = 5.28, SD = 2.48; t (32) = 1.271, p = .213). Therefore, we considered that either linking the appropriate red and blue quotation nodes or appropriately linking the students’ ideas to the author’s main statement or grounds is the most critical factor for fostering argumentative reading and writing.

Discussion

Digital argumentative reading and writing and the use of external strategies with the software

This study aimed to develop software to support students’ argumentative reading and writing and to explore the effectiveness of external strategies, such as underlining and/or highlighting and creating knowledge maps, for supporting these skills. We conceived two research questions through the literature review: whether the students would develop a constructive essay if they created a knowledge map based on their underlining and/or highlighting of the text using the software, and how the quality of the essays would be affected by various factors related to external strategies, such as underlining and/or highlighting, as well as the construction of a knowledge map. Although the number of participants in this study was relatively small, the following statements can still be made. First, creating a knowledge map could assist the students in developing their argumentative essays, even though underlining/highlighting the text alone was effective to some extent. The average constructiveness score in the knowledge-mapping class was significantly higher than that in the non-knowledge-mapping class, with a fairly large effect size (d = 1.073), and the difference in the logical coherence scores between the two classes was also significant, with a medium effect size (d = 0.766). Overall, we can conclude that using this software with knowledge mapping from underlined and/or highlighted text is effective for fostering argumentative reading and writing for novice students.

Knowledge mapping from underlined and/or highlighted text fostered students’ identification and understanding of the author’s argument (Table 3, criteria 1 to 3). This effect was powerful for students in the knowledge-mapping class, even though students in the non-knowledge-mapping class engaged more in capturing the author’s main statement and grounds. However, there were no significant differences between the two classes regarding representing the student’s opinion in an argumentative manner (i.e., criteria 4 and 5) or using prior knowledge to develop an opinion in an argumentative manner (i.e., criterion 6), which indicates that students in the non-knowledge-mapping group could also develop their arguments to some extent merely by underlining and/or highlighting the text. This finding is consistent with previous studies showing that underlining and/or highlighting the text helps students make inferences and critique information (e.g., Haenggi and Perfetti 1992), even though the students in both conditions could not juxtapose the Document pane and the Essay Editor pane (thus, all the students had limitations in using a variety of texts that were not on the knowledge map). Therefore, we argued that the students in the knowledge-mapping class could understand the author’s argument, enhancing their ability to draw inferences while developing the knowledge map and leading to a high level of constructiveness and logical coherence in their essays, especially when they linked their ideas and the author’s argument on the knowledge map. In contrast, the non-knowledge-mapping students, who only underlined and/or highlighted the text, concentrated much more on identifying the author’s argument in the text.

Second, we argued that the students should construct a map representation in order to develop their argumentative essays constructively. Some of the students could link neither the appropriate quotation nodes of the author's main statement and its grounds nor their own ideas and the appropriate quotation nodes of the author's argument, because constructing a map representing the argumentative structure of a text is a difficult task for novice students (cf. Chang et al. 2001, 2005). However, developing links between the author's argument and the student's ideas on the map was significantly effective in making the essays constructive. We found that the students needed to focus on linking their ideas only to the author's main statement or grounds in order to develop constructive essays. As O'Donnell et al. (2002) indicated, when the students placed personal idea nodes in their maps to include their own ideas, they were prompted to reflect on the relationships between the text and what they thought while reading, which allowed them to gain a greater awareness of their own positions and opinions. If the students focused only on the author's main statement and grounds while developing their own ideas, they could concentrate on developing their opinions without the distraction of other ideas and thus develop constructive essays. We also found that the use of more yellow quotation nodes (possible logical errors identified by the students) could be distracting in developing the essays and was related to a decrease in the logical coherence score. We need to further investigate this reasoning process.

In sum, combining underlining and/or highlighting a text and creating a knowledge map directly from those underlined and/or highlighted sections could be a good scaffold to bridge reading an electronic text and constructively generating an argumentative essay, especially when the students link their ideas to the author’s argument. We argued that creating a knowledge map from a text by using this software is an effective way to bridge students’ argumentative reading and writing practices, though some improvements to the software are still required.

Necessary functional improvements

We identified several necessary functional improvements to the software. For example, the results showed that all thirty-four students in the knowledge-mapping class created personal idea node(s) on their maps; however, six students (17.6%) did not link their personal idea node(s) to any of the quotation nodes. Furthermore, we noted the importance of directly linking the students' personal idea node(s) to the author's argument nodes, which was one of the critical factors for supporting argumentative reading and writing. Our findings indicate that students should be encouraged to link these nodes.

One possible method to encourage such linking is a prompting function, an instructional scaffold that encourages the learner to make the connection (cf. Wang et al. 2016) and thereby facilitates inference drawing. Such prompting would require the automatic identification of the author's argument (e.g., Lippi and Torroni 2015; Winne et al. 2017). This function could help students who struggle to capture the author's argument and who therefore cannot identify and use it in their knowledge maps or link their ideas to the quotation nodes.

A second promising approach is collaborative learning in which students critique the argumentative reading and writing created with the software; this method allows students to synthesize more knowledge and encourages mutual review (Bruillard and Baron 2000). A mutual review opportunity can especially foster students' linking between their own ideas and the author's ideas and enable them to discuss their ideas to enhance constructive quality. Regarding support for students' reading, Kwon and Cifuentes (2007) indicated that maps created in a group learning condition showed higher-quality propositions and relationships between concepts and links than individual maps, suggesting that collaborative learning promotes deeper conceptual understanding. Mochizuki et al. (2009) and Mochizuki and Tsubakimoto (2014) have already developed a function that allows readers to engage in a collaborative dialogue, especially on the map based on their underlining and/or highlighting, although past studies have mostly undertaken collaborative reading annotation without a graphic organizer (e.g., Chen and Chen 2014). We need to conduct further investigations to identify the effectiveness of collaborative learning that encourages learners to use appropriate strategies and functions to develop better constructive essays.

The usability improvements, especially for the underlining/highlighting function, are necessary for first users of tablet PC functions and interfaces, such as a stylus. When the students used a stylus to identify the start and end of a sentence to underline and/or highlight or to extend or shrink the underlining or highlighting, they needed to move the stylus concisely. Thus, the beginner users of tablet PCs felt that using the underlining/highlighting function was too difficult, while more experienced users did not feel uneasy in developing their knowledge maps with the stylus, according to Fitts’s law (Fitts 1954). The software allows the use of a mouse interface; thus, the students could have used the mouse to underline/highlight the text. Although a variety of software development studies for reading electronic texts currently use a stylus to underline/highlight the texts (e.g., Schilit et al. 1999; Wolfe 2002; Waycott and Kukulska-Hulme 2003), we need to consider providing options to enable users to adjust the input interface according to their individual purposes.
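The stylus difficulty described above can be stated quantitatively. Fitts's law (Fitts 1954; shown here in the later Shannon formulation rather than Fitts's original equation) predicts that movement time grows as the target gets farther away and, crucially here, as it gets narrower:

```latex
% Fitts's law, Shannon formulation (a later variant of Fitts 1954):
% MT: movement time to acquire the target
% D:  distance from the starting point to the target center
% W:  target width along the axis of motion (tiny for an underline endpoint)
% a, b: empirical constants fitted per device and user (stylus vs. mouse)
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The small effective width W of a character-level selection endpoint drives the index of difficulty, log2(D/W + 1), upward, which is consistent with the beginners' reported difficulty in placing underlines precisely.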

Limitations and future research

This research has several limitations and suggests directions for future research. The number of participants was relatively small, even though we believe that this study's findings can reasonably direct the future design of the software; more participants are clearly necessary for the future studies described below. First, this study does not demonstrate a lasting learning effect, that is, an improvement in argumentative reading and writing skills. To consider how we may support the learning process of argumentative reading and writing, we need to examine, in a long-term study, the fading process of this scaffold and the improvement of argumentative reading and writing skills without any scaffolds or without computers.

Second, the classroom evaluation study was designed using a text that the students should have had some prior knowledge of and interest in. Prior studies have shown that when students have no prior knowledge at all, it is difficult for them to reach the level of situational representation even if maps are created (Lowe 1996; Hofman and van Oostendorp 1999). For example, Hofman and van Oostendorp (1999) used four question types, two at the text structure level (microstructure/macrostructure) and two at the text representation level (text base/situation model), and found that maps prevented the construction of situation models in low-knowledge groups. Thus, we should consider that this result may be limited to students with some prior knowledge of the target text. Further research is needed to examine how this phenomenon unfolds for students with no prior knowledge of the text.

Third, although we examined the effect of combining underlining/highlighting a text and creating a map while reading, we could not isolate the exact factors that fostered argumentative reading and writing, for instance, how the automatic copying function from the underlined/highlighted text to a knowledge map and to an essay worked to avoid distortions of the students' understanding. Examining the detailed learning process using video data and protocol analysis is necessary to reveal how this combination of functions supports the argumentative reading process.

Fourth, this study focused on the effectiveness of the software for argumentative reading and writing only in the case of creating a knowledge map according to the simplified Toulmin’s model, even though the software provides multiple functions for a variety of possible graphic organizers and representations, including even linear note-taking. We need to further investigate and show a variety of possible effective ways to use this software so that teachers and students can understand its usefulness.

In addition to the abovementioned discussion regarding scaffolds for students' argumentative knowledge-building, further research is needed to investigate how this kind of software for supporting the reading of electronic texts could support integrated reading involving multiple texts, in line with recent studies (Kobayashi 2007; Goldman and Scardamalia 2013; Chinn and Rinehart 2016), which have indicated that argumentative reading and writing can be fostered through the integration of multiple texts. As mentioned earlier, this software can load several texts simultaneously and allows learners to create a knowledge map for integration purposes. The results of this study indicated that combining underlining/highlighting the texts and creating a nonlinear external representation, such as a knowledge map, based on the underlined or highlighted sections is a powerful strategy for enabling learners to make deep inferences and to develop arguments built on the ideas in the texts. This strategy will make it easy for learners to compare and relate the essences of multiple texts and will also help learners identify disagreements and commonalities; thus, they can achieve knowledge creation through inferring and resolving disagreements among multiple texts (Thomm and Bromme 2016).

Acknowledgement

We would like to thank Spiceworks Corporation, Silicon Studio Corporation, Microsoft Japan K.K., and Microsoft Development K.K. for their contributions to the design and development of the software described in this paper. This study was conducted as part of a research project of the Microsoft Chair of Educational Environment and Technology (MEET) at the University of Tokyo. Moreover, we are extremely grateful to Mr. Shin-ichi Watanabe, who supported the software development while he worked at Microsoft Development K.K. In addition, we would like to thank the anonymous reviewers for their constructive comments on earlier versions of this manuscript. This work was also supported in part by the Telecommunications Advancement Foundation in 2009 and 2019, the JSPS KAKENHI Grants-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (#JP20800018, #JP21700826, #JP16K12796, and #JP19H01720), and the Senshu Research Abroad Program in AY2015 for the first author.

References

  1. Abdollahzadeh, E., Farsani, M. A., & Beikmohammadi, M. (2017). Argumentative writing behavior of graduate EFL learners. Argumentation, 31, 641–661. https://doi.org/10.1007/s10503-016-9415-5.
  2. Adesope, O. O., Cavagnetto, A., Hunsu, N. J., Anguiano, C., & Lloyd, J. (2017). Comparative effects of computer-based concept maps, refutational texts, and expository texts on science learning. Journal of Educational Computing Research, 55(1), 46–69. https://doi.org/10.1177/0735633116654163.
  3. Anderson, L. W., & Krathwohl, D. R. (Eds.). (2000). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.
  4. Angeli, C., & Valanides, N. (2009). Instructional effects on critical thinking: Performance on ill-defined issues. Learning and Instruction, 19(4), 322–334. https://doi.org/10.1016/j.learninstruc.2008.06.010.
  5. Annis, L. F. (1985). Student-generated paragraph summaries and the information-processing of prose learning. Journal of Experimental Education, 54(1), 4–10. https://doi.org/10.1080/00220973.1985.10806390.
  6. Bacha, N. (2001). Writing evaluation: What can analytic versus holistic essay scoring tell us? System, 29(3), 371–383. https://doi.org/10.1016/S0346-251X(01)00025-2.
  7. Barnlund, D. C. (1989). Public and private self in Japan and the United States: Communicative styles of two cultures. Tokyo: Intercultural Press.
  8. Bentahar, J., Moulin, B., & Bélanger, M. (2010). A taxonomy of argumentation models used for knowledge representation. Artificial Intelligence Review, 33(3), 211–259. https://doi.org/10.1007/s10462-010-9154-1.
  9. Ben-Yehudah, G., & Eshet-Alkalai, Y. (2018). The contribution of text-highlighting to comprehension: A comparison of print and digital reading. Journal of Educational Multimedia and Hypermedia, 27(2), 153–178.
  10. Beyerbach, B. A., & Smith, J. M. (1990). Using a computerized concept mapping program to assess pre-service teachers' thinking about effective teaching. Journal of Research in Science Teaching, 27(10), 961–971. https://doi.org/10.1002/tea.3660271005.
  11. Boyle, J. R., & Weishaar, M. (1997). The effects of expert-generated versus student-generated cognitive organizers on the reading comprehension of students with learning disabilities. Learning Disabilities Research & Practice, 12(4), 228–235.
  12. Bråten, I., & Samuelstuen, M. S. (2004). Does the influence of reading purpose on reports of strategic text processing depend on students' topic knowledge? Journal of Educational Psychology, 96(2), 324–336. https://doi.org/10.1037/0022-0663.96.2.324.
  13. Bretzing, B. B., & Kulhavy, R. W. (1981). Note-taking and passage style. Journal of Educational Psychology, 73(2), 242–250. https://doi.org/10.1037/0022-0663.73.2.242.
  14. Britt, M. A., & Aglinskas, C. (2002). Improving students' ability to use source information. Cognition and Instruction, 20, 485–522. https://doi.org/10.1207/S1532690XCI2004_2.
  15. Britt, M. A., Kurby, C. A., Dandotkar, S., & Wolfe, C. R. (2008). I agreed with what? Memory for simple argument claims. Discourse Processes, 45(1), 52–84. https://doi.org/10.1080/01638530701739207.
  16. Bruillard, E., & Baron, G. L. (2000). Computer-based concept mapping: A review of a cognitive tool for students. In D. Benzie & D. Passey (Eds.), Proceedings of conference on educational uses of information and communication technologies (pp. 331–338). Beijing: Publishing House of Electronics Industry (PHEI).
  17. Buckingham Shum, S. (2002). The roots of computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 3–24). London: Springer.
  18. Chan, C. K. K., Burtis, P. J., Scardamalia, M., & Bereiter, C. (1992). Constructive activity in learning from text. American Educational Research Journal, 29(1), 97–118. https://doi.org/10.3102/00028312029001097.
  19. Chang, K.-E., Sung, Y.-T., Chang, R.-B., & Lin, S.-C. (2005). A new assessment for computer-based concept mapping. Educational Technology & Society, 8(3), 138–148.
  20. Chang, K.-E., Sung, Y.-T., & Chen, S.-F. (2001). Learning through computer-based concept mapping with scaffolding aid. Journal of Computer Assisted Learning, 17, 21–33. https://doi.org/10.1111/j.1365-2729.2001.00156.x.
  21. Chen, C.-M., & Chen, F.-Y. (2014). Enhancing digital reading performance with a collaborative reading annotation system. Computers & Education, 77, 67–81. https://doi.org/10.1016/j.compedu.2014.04.010.
  22. Chen, G., Cheng, W., Chang, T.-W., Zheng, X., & Huang, R. (2014). A comparison of reading comprehension across paper, computer screens, and tablets: Does tablet familiarity matter? Journal of Computers in Education, 1(2–3), 213–225. https://doi.org/10.1007/s40692-014-0012-z.
  23. Chen, C.-M., & Huang, S.-H. (2014). Web-based reading annotation system with an attention-based self-regulated learning mechanism for promoting reading performance. British Journal of Educational Technology, 45(5), 959–980. https://doi.org/10.1111/bjet.12119.
  24. Chen, Y.-C., Hwang, R.-H., & Wang, C.-Y. (2012). Development and evaluation of a Web 2.0 annotation system as a learning tool in an e-learning environment. Computers & Education, 58, 1094–1105. https://doi.org/10.1016/j.compedu.2011.12.017.
  25. Chen, C. M., & Lin, Y. J. (2014). Effects of different text display types on reading comprehension, sustained attention and cognitive load in mobile reading contexts. Interactive Learning Environments, 24(3), 553–571. https://doi.org/10.1080/10494820.2014.891526.
  26. Chen, N.-S., Teng, D. C.-E., Lee, C.-H., & Kinshuk (2011). Augmenting paper-based reading activity with direct access to digital materials and scaffolded questioning. Computers & Education, 57(2), 1705–1715. https://doi.org/10.1016/j.compedu.2011.03.013.
  27. Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243. https://doi.org/10.1080/00461520.2014.965823.
  28. Chinn, C. A., & Rinehart, R. (2016). Commentary: Advances in research on sourcing—source credibility and reliable processes for producing knowledge claims. Reading and Writing, 29(8), 1701–1717. https://doi.org/10.1007/s11145-016-9675-3.
  29. Chmielewski, T. L., & Dansereau, D. F. (1998). Enhancing the recall of text: Knowledge mapping training promotes implicit transfer. Journal of Educational Psychology, 90(3), 407–413. https://doi.org/10.1037/0022-0663.90.3.407.
  30. Colliot, T., & Jamet, É. (2018). Does self-generating a graphic organizer while reading improve students' learning? Computers & Education, 126, 13–22.
  31. Conklin, J. (2003). Dialog mapping: Reflections on an industrial strength case study. In P. A. Kirschner, S. J. Buckingham-Shum, & C. S. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 117–135). London: Springer.
  32. Conlon, T. (2008). Practical text concept mapping: New pedagogy, new technology. In A. J. Cañas, P. Reiska, M. K. Åhlberg, & J. D. Novak (Eds.), Concept mapping: Connecting educators. Tallinn, Estonia & Helsinki: Tallinn University.
  33. Council of Chief State School Officers, & National Governors Association. (2010). Common Core Standards for English language arts. Retrieved November 26, 2016, from http://www.corestandards.org/ELA-Literacy/.
  34. Crowhurst, M. (1990). Teaching and learning the writing of persuasive/argumentative discourse. Canadian Journal of Education, 15(4), 348–359. https://doi.org/10.2307/1495109.
  35. Czuchry, M., & Dansereau, D. (1996). Node-link mapping as an alternative to traditional writing assignments in undergraduate psychology courses. Teaching of Psychology, 23(2), 91–96. https://doi.org/10.1207/s15328023top2302_4.
  36. Daiek, D. B., & Anter, N. M. (2003). Critical reading for college and beyond. New York: McGraw-Hill.
  37. Davies, M. (2011). Concept mapping, mind mapping and argument mapping: What are the differences and do they matter? Higher Education, 62(3), 279–311. https://doi.org/10.1007/s10734-010-9387-6.
  38. Di Vesta, F. J., & Gray, G. S. (1972). Listening and note taking. Journal of Educational Psychology, 63(1), 8–14. https://doi.org/10.1037/h0032243.
  39. Douglas, N. L. (2000). Enemies of critical thinking: Lessons from social psychology research. Reading Psychology, 21(2), 129–144. https://doi.org/10.1080/02702710050084455.
  40. Dwyer, C., Hogan, M. J., & Stewart, I. (2013). An examination of the effects of argument mapping on students' memory and comprehension performance. Thinking Skills & Creativity, 8, 11–24. https://doi.org/10.1016/j.tsc.2012.12.002.
  41. Eftekhari, M., Sotoudehnama, E., & Marandi, S. S. (2016). Computer-aided argument mapping in an EFL setting: Does technology precede traditional paper and pencil approach in developing critical thinking? Educational Technology Research and Development, 64, 339–357. https://doi.org/10.1007/s11423-016-9431-z.
  42. Elbow, P. (2006). Do we need a single standard of value for institutional assessment? An essay response to Asao Inoue's "community-based assessment pedagogy". Assessing Writing, 11(2), 81–99. https://doi.org/10.1016/j.asw.2006.07.003.
  43. Ennis, R. H. (1993). Critical thinking assessment. Theory into Practice, 32(3), 179–186. https://doi.org/10.1080/00405849309543594.
  44. Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28, 717–741. https://doi.org/10.1007/s10648-015-9348-9.
  45. Fitts, P. M. (1954). The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6), 381–391. https://doi.org/10.1037/h0055392.
  46. Funaoi, H., Yamaguchi, E., & Inagaki, S. (2002). Collaborative concept mapping software to reconstruct learning processes. In Proceedings of the International Conference on Computers in Education 2002 (Vol. 1, pp. 306–310). Danvers, MA: IEEE Computer Society. https://doi.org/10.1109/cie.2002.1185933.
  47. Gárate, M., Melero, M. A., Tejerina, R., Echevarría, E., & Gutiérrez, R. (2007). Written argumentative abilities of 4th grade students of compulsory secondary education: An integrated educational intervention. Infancia y Aprendizaje, 30(4), 589–602. https://doi.org/10.1174/021037007782334346.
  48. Giombini, L. (2008). Concept maps and CmapTools: A cognitive writing system for the general development of thought in scholar age. In A. J. Cañas, P. Reiska, M. Åhlberg, & J. D. Novak (Eds.), Concept mapping: Connecting educators (pp. 218–226). Helsinki: University of Helsinki.
  49. Glover, I., Xu, Z., & Hardaker, G. (2007). Online annotation: Research and practices. Computers & Education, 49(4), 1308–1320. https://doi.org/10.1016/j.compedu.2006.02.006.
  50. Goldman, S. R. (2004). Cognitive aspects of constructing meaning through and across multiple texts. In N. Shuart-Ferris & D. M. Bloome (Eds.), Uses of intertextuality in classroom and educational research (pp. 317–351). Greenwich, CT: Information Age Publishing.
  51. Goldman, S. R., Lawless, K. A., & Manning, F. (2013). Research and development of multiple source comprehension assessment. In M. A. Britt, S. R. Goldman, & J. F. Rouet (Eds.), Reading: From words to multiple texts (pp. 180–199). New York: Routledge, Taylor & Francis Group.
  52. Goldman, S. R., & Scardamalia, M. (2013). Managing, understanding, applying, and creating knowledge in the information age: Next generation challenges and opportunities. Cognition and Instruction, 31(2), 255–269. https://doi.org/10.1080/10824669.2013.773217.
  53. Haenggi, D., & Perfetti, C. A. (1992). Individual differences in reprocessing of text. Journal of Educational Psychology, 84(2), 182–192. https://doi.org/10.1037/0022-0663.84.2.182.
  54. Harrell, M. (2011). Argument diagramming and critical thinking in introductory philosophy. Higher Education Research & Development, 30(3), 371–385. https://doi.org/10.1080/07294360.2010.502559.
  55. Harrell, M., & Wetzel, D. (2015). Using argument diagramming to teach critical thinking in a first-year writing course. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 213–232). New York: Palgrave Macmillan.
  56. Hermena, E. W., Sheen, M., AlJassmi, M., AlFalasi, K., AlMatroushi, M., & Jordan, T. R. (2017). Reading rate and comprehension for text presented on tablet and paper: Evidence from Arabic. Frontiers in Psychology, 8, 257. https://doi.org/10.3389/fpsyg.2017.00257.
  57. Hofman, R., & van Oostendorp, H. (1999). Cognitive effects of a structural overview in a hypertext. British Journal of Educational Technology, 30, 129–140. https://doi.org/10.1111/1467-8535.00101.
  58. Howell, E. (2018). Expanding argument instruction: Incorporating multimodality and digital tools. Journal of Adolescent & Adult Literacy, 61(5), 533–542. https://doi.org/10.1002/jaal.716.
  59. Hsieh, P.-H., & Dwyer, F. (2009). The instructional effect of online reading strategies and learning styles on student academic achievement. Educational Technology & Society, 12(2), 36–50.
  60. Hsu, P.-S., van Dyke, M., Chen, Y., & Smith, T. J. (2015). The effect of a graph-oriented computer-assisted project-based learning environment on argumentation skills. Journal of Computer Assisted Learning, 31(1), 32–58. https://doi.org/10.1111/jcal.12080.
  61. Juarez Collazo, N. A., Elen, J., & Clarebout, G. (2015). The multiple effects of combined tools in computer-based learning environments. Computers in Human Behavior, 51A, 82–95. https://doi.org/10.1016/j.chb.2015.04.050.
  62. Katayama, A. D., & Robinson, D. H. (2000). Getting students "partially" involved in note-taking using graphic organizers. Journal of Experimental Education, 68(2), 119–133. https://doi.org/10.1080/00220970009598498.
  63. Kiili, C., Coiro, J., & Hämäläinen, J. (2016). An online inquiry tool to support the exploration of controversial issues on the internet. Journal of Literacy and Technology, 17(1–2), 31–52.
  64. Kimmerle, J., Moskaliuk, J., & Cress, U. (2011). Using wikis for learning and knowledge building: Results of an experimental study. Educational Technology & Society, 14(4), 138–148.
  65. Kobayashi, K. (2007). The influence of critical reading orientation on external strategy use during expository text reading. Educational Psychology, 27(3), 363–375. https://doi.org/10.1080/01443410601104171.
  66. Kobayashi, K. (2014). Students' consideration of source information during the reading of multiple texts and its effect on intertextual conflict resolution. Instructional Science, 42, 183–205. https://doi.org/10.1007/s11251-013-9276-3.
  67. Kools, M., van de Wiel, M. W., Ruiter, R. A. C., Crüts, A., & Kok, G. (2006). The effect of graphic organizers on subjective and objective comprehension of a health education text. Health Education & Behavior, 33(6), 760–772. https://doi.org/10.1177/1090198106288950.
  68. Kozma, R. B. (1987). The implications of cognitive psychology for computer-based learning tools. Educational Technology, 27(11), 20–25.
  69. Kwon, S. Y., & Cifuentes, L. (2007). Using computers to individually-generate vs. collaboratively-generate concept maps. Educational Technology & Society, 10(4), 269–280.
  70. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.  https://doi.org/10.2307/2529310.Google Scholar
  71. Larson, A. A., Britt, M. A., & Kurby, C. A. (2009). Improving students’ evaluation of informal arguments. The Journal of Experimental Education, 77(4), 339–366.  https://doi.org/10.3200/JEXE.77.4.339-366.Google Scholar
  72. Lippi, M., & Torroni, P. (2015). Context-independent claim detection for argument mining. In Proceedings of the 24th International Conference on Artificial Intelligence (pp. 185–191). AAAI Press.Google Scholar
  73. Liu, P.-L. (2011). A study on the use of computerized concept mapping to assist ESL learners’ writing. Computers & Education, 57(4), 2548–2558.  https://doi.org/10.1016/j.compedu.2011.03.015.Google Scholar
  74. Liu, P. L., Chen, C. J., & Chang, Y. J. (2010). Effects of a computer-assisted concept mapping learning strategy on EFL college students’ English reading comprehension. Computer & Education, 54(2), 436–445.  https://doi.org/10.1016/j.compedu.2009.08.027.Google Scholar
  75. Lonka, K., Lindblom-Ylanne, S., & Maury, S. (1994). The effect of study strategies on learning from text. Learning and Instruction, 4(3), 253–271.  https://doi.org/10.1016/0959-4752(94)90026-4.Google Scholar
  76. Lowe, R. K. (1996). Background knowledge and the construction of a situational representation from a diagram. European Journal of Psychology of Education, 11(4), 377–397.  https://doi.org/10.1007/BF03173279.Google Scholar
  77. Makany, T., Kemp, J., & Dror, I. E. (2009). Optimising the use of note-taking as an external cognitive aid for increasing learning. British Journal of Educational Technology, 40(4), 619–635.  https://doi.org/10.1111/j.1467-8535.2008.00906.x.Google Scholar
  78. McCagg, E. C., & Dansereau, D. F. (1991). A convergent paradigm for examining knowledge mapping as a learning strategy. Journal of Educational Research, 84(6), 317–324.  https://doi.org/10.1080/00220671.1991.9941812.Google Scholar
  79. Mochizuki, T., Oura, H., Sato, T., Nishimori, T., Tsubakimoto, M., Nakahara, J., … Miyatani, T. (2009). eJournalPlus: Development of a collaborative learning system for constructive and critical reading skills. In A. Dimitracopoulou, C. O’Malley, D. Suthers, & P. Reimann (Eds.), Computer supported collaborative learning practices—CSCL 2009 community events proceedings, Vol. 2 (pp. 100–102). International Society of the Learning Sciences.Google Scholar
  80. Mochizuki, T., & Tsubakimoto, M. (2014). The effect of peer response using the critical reading software “eJournalPlus”. Research Report of JSET Conferences, 14(1), 225–232.Google Scholar
  81. Muller Mirza, N., & Perret-Clermont, A.-N. (Eds.). (2009). Argumentation and education: Theoretical foundations and practices (pp. 67–90). New York: Springer.Google Scholar
  82. Muller Mirza, N., Perret-Clermont, A.-N., Tartas, V., & Iannaccone, A. (2009). Psychosocial processes in argumentation. In N. Muller Mirza & A.-N. Perret-Clermont (Eds.), Argumentation and education: Theoretical foundations and practices (pp. 67–90). New York: Springer.Google Scholar
  83. Nakano, M., & Maruno, S. (2013). The effect of debate training on argumentation skills: The developmental process for Japanese college students. Studies for the Learning Society, 3(1–2), 4–12.  https://doi.org/10.2478/sls-2013-0001.Google Scholar
  84. Nesbit, J. C., & Adesope, O. O. (2006). Learning with concept and knowledge map: A meta-analysis. Review of Educational Research, 76(3), 413–448.  https://doi.org/10.3102/00346543076003413.Google Scholar
  85. Newell, G. E., Beach, R., Smith, J., & van der Heide, J. (2011). Teaching and learning argumentative reading and writing: A review of research. Reading Research Quarterly, 46(3), 273–304.  https://doi.org/10.3102/00346543076003413.Google Scholar
  86. Novak, J. D., & Gowin, B. (1984). Learning how to learn. New York: Cambridge University Press.Google Scholar
  87. Nussbaum, E. M., & Schraw, G. (2007). Promoting argument-counterargument integration in students’ writing. Journal of Experimental Education, 76(1), 59–92.  https://doi.org/10.3200/JEXE.76.1.59-92.Google Scholar
  88. O’Donell, A. M., Dansereau, D. F., & Hall, R. H. (2002). Knowledge maps as scaffolds for cognitive processing. Educational Psychology Review, 14(1), 71–86.  https://doi.org/10.1023/A:1013132527007.Google Scholar
  89. Park, S. M., & Kim, C. (2016). The effects of a virtual tutee system on academic reading engagement in a college classroom. Educational Technology Research and Development, 64(2), 195–218.  https://doi.org/10.1007/s11423-015-9416-3.Google Scholar
  90. Parodi, G. (2007). Reading-writing connections: Discourse-oriented research. Reading and Writing, 20(3), 225–250.  https://doi.org/10.1007/S11145-006-9029-7.Google Scholar
  91. Paolucci, M., Suthers, D., & Weiner, A. (1995). Belvedere: Stimulating students’ critical discussion. In CHI ’95 conference companion, interactive papers, May 7–11, Denver, CO (pp. 123–124).
  92. Pirnay-Dummer, P., & Ifenthaler, D. (2011). Reading guided by automated graphical representations: How model-based text visualizations facilitate learning in reading comprehension tasks. Instructional Science, 39(6), 901–919.  https://doi.org/10.1007/s11251-010-9153-2.
  93. Ponce, H. R., López, M. J., & Mayer, R. E. (2012). Instructional effectiveness of a computer-supported program for teaching reading comprehension strategies. Computers & Education, 59, 1170–1183.
  94. Ponce, H. R., & Mayer, R. E. (2014a). Qualitatively different cognitive processing during online reading primed by different study activities. Computers in Human Behavior, 30, 121–130.  https://doi.org/10.1016/j.chb.2013.07.054.
  95. Ponce, H. R., & Mayer, R. E. (2014b). An eye movement analysis of highlighting and graphic organizer study aids for learning from expository text. Computers in Human Behavior, 41, 21–32.  https://doi.org/10.1016/j.chb.2014.09.010.
  96. Ponce, H. R., Mayer, R. E., & López, M. J. (2013). A computer-based spatial learning strategy approach that improves reading comprehension and writing. Educational Technology Research and Development, 61(5), 819–840.  https://doi.org/10.1007/s11423-013-9310-9.
  97. Ponce, H. R., Mayer, R. E., Loyola, M. S., López, M. J., & Méndez, E. E. (2018). When two computer-supported learning strategies are better than one: An eye-tracking study. Computers & Education, 125, 376–388.  https://doi.org/10.1016/j.compedu.2018.06.024.
  98. Robinson, D. H. (1997). Graphic organizers as aids to text learning. Reading Research and Instruction, 37(2), 85–105.  https://doi.org/10.1080/19388079809558257.
  99. Robinson, D. H., Corliss, S. B., Bush, A. M., Bera, S. J., & Tomberlin, T. (2003). Optimal presentation of graphic organizers and text: A case for large bites? Educational Technology Research and Development, 51(4), 25–41.  https://doi.org/10.1007/BF02504542.
  100. Robinson, D. H., Katayama, A. D., Beth, A., Odom, S., Hsieh, Y.-P., & Vanderveen, A. (2006). Increasing text comprehension and graphic note taking using a partial graphic organizer. The Journal of Educational Research, 100(2), 103–111.  https://doi.org/10.3200/JOER.100.2.103-111.
  101. Robinson, D. H., Katayama, A. D., DuBois, N. F., & DeVaney, T. (1998). Interactive effects of graphic organizers and delayed review in concept acquisition. The Journal of Experimental Education, 67(1), 17–31.  https://doi.org/10.1080/00220979809598342.
  102. Scardamalia, M., & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. The Journal of the Learning Sciences, 1(1), 37–68.  https://doi.org/10.1207/s15327809jls0101_3.
  103. Scardamalia, M., & Bereiter, C. (2014). Knowledge building and knowledge creation: Theory, pedagogy, and technology. In K. Sawyer (Ed.), Cambridge handbook of the learning sciences (2nd ed., pp. 397–417). New York: Cambridge University Press.
  104. Scardamalia, M., Bransford, J., Kozma, R., & Quellmalz, E. (2012). New assessments and environments for knowledge building. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and teaching of 21st century skills (pp. 231–300). Dordrecht: Springer.
  105. Schilit, B. N., Price, M. N., Golovchinsky, G., Tanaka, K., & Marshall, C. C. (1999). As we may read: The reading appliance revolution. IEEE Computer, 32(1), 65–73.
  106. Schwarz, B. B., & Baker, M. J. (2017). Dialogue, argumentation and education: History, theory and practice. New York: Cambridge University Press.
  107. Sekiguchi, Y. (2002). Mathematical proof, argumentation, and classroom communication: From a cultural perspective. Tsukuba Journal of Educational Study in Mathematics, 21, 11–20.
  108. Shapiro, B. P., van den Broek, P., & Fletcher, C. R. (1995). Using story-based causal diagrams to analyze disagreements about complex events. Discourse Processes, 20, 51–77.  https://doi.org/10.1080/01638539509544931.
  109. Simper, N., Reeve, R., & Kirby, J. R. (2016). Effects of concept mapping on creativity in photo stories. Creativity Research Journal, 28(1), 46–51.  https://doi.org/10.1080/10400419.2016.1125263.
  110. Slotte, V., & Lonka, K. (1998). Using notes during essay-writing: Is it always helpful? Educational Psychology, 18(4), 445–459.  https://doi.org/10.1080/0144341980180406.
  111. Spears, D. (2006). Developing critical reading skills. New York: McGraw-Hill.
  112. Stadtler, M., & Bromme, R. (2008). Effects of the metacognitive computer-tool met.a.ware on the web search of laypersons. Computers in Human Behavior, 24(3), 716–737.  https://doi.org/10.1016/j.chb.2007.01.023.
  113. Stahl, N. A., King, J. R., & Henk, W. A. (1991). Enhancing students’ notetaking through training and evaluation. Journal of Reading, 34(8), 614–622.  https://doi.org/10.2307/40014606.
  114. Stegmann, K., Weinberger, A., & Fischer, F. (2007). Facilitating argumentative knowledge construction with computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(4), 421–447.  https://doi.org/10.1007/s11412-007-9028-y.
  115. Straubel, L. H. (2006). Creative concept mapping: From reverse engineering to writing inspiration. In A. J. Cañas, & J. D. Novak (Eds.), Concept maps: Theory, methodology, technology: Proceedings of the Second International Conference on Concept Mapping (Vol. 1, pp. 162–169). San Jose, Costa Rica: Universidad de Costa Rica.
  116. Thomm, E., & Bromme, R. (2016). How source information shapes lay interpretations of science conflicts: Interplay between sourcing, conflict explanation, source evaluation, and claim evaluation. Reading and Writing, 29(8), 1629–1652.  https://doi.org/10.1007/s11145-016-9638-8.
  117. Tomasek, T. (2009). Critical reading: Using reading prompts to promote active engagement with text. International Journal of Teaching and Learning in Higher Education, 21(1), 127–132.
  118. Toth, E. E., Suthers, D. D., & Lesgold, A. M. (2002). “Mapping to know”: The effects of representational guidance and reflective assessment on scientific inquiry. Science Education, 86(2), 264–286.  https://doi.org/10.1002/sce.10004.
  119. Toulmin, S. E. (1958). The uses of argument. Cambridge: Cambridge University Press.
  120. Tsubakimoto, M., Kamiya, T., Yashiro, K., Kubo, M., Mochizuki, T., & Yamauchi, Y. (2010). A practice of Japanese classes through the utilization of eJournalPlus: A learning support system for language skills. Research Report of JSET Conferences, 10(5), 89–96.
  121. van Amelsvoort, M., Andriessen, J., & Kanselaar, G. (2007). Representational tools in computer-supported collaborative argumentation-based learning: How dyads work with constructed and inspected argumentative diagrams. The Journal of the Learning Sciences, 16(4), 485–521.  https://doi.org/10.1080/10508400701524785.
  122. van Bruggen, J. M., Boshuizen, H. P. A., & Kirschner, P. A. (2003). A cognitive framework for cooperative problem solving with argument visualization. In P. A. Kirschner, S. J. Buckingham Shum, & C. S. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 25–47). London: Springer.
  123. van Gelder, T. (2015). Using argument mapping to improve critical thinking skills. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 183–192). New York: Palgrave Macmillan US.
  124. van Horne, S., Russell, J., & Schuh, K. L. (2016). The adoption of mark-up tools in an interactive e-textbook reader. Educational Technology Research and Development, 64(3), 407–433.  https://doi.org/10.1007/s11423-016-9425-x.
  125. Wang, H. Y., Huang, I., & Hwang, G. J. (2016). Effects of a question prompt-based concept mapping approach on students’ learning achievements, attitudes and 5C competences in project-based computer course activities. Educational Technology & Society, 19(3), 351–364.
  126. Waycott, J., & Kukulska-Hulme, A. (2003). Students’ experiences with PDAs for reading course materials. Personal and Ubiquitous Computing, 7(1), 30–43.  https://doi.org/10.1007/s00779-002-0211-x.
  127. Westby, C. (2004). A language perspective on executive functioning, metacognition, and self-regulation in reading. In C. A. Stone, E. R. Silliman, B. J. Ehren, & K. Apel (Eds.), Handbook of language and literacy (pp. 398–427). New York: Guilford.
  128. Wilson, K. (2016). Critical reading, critical thinking: Delicate scaffolding in English for academic purposes (EAP). Thinking Skills and Creativity, 22, 256–265.  https://doi.org/10.1016/j.tsc.2016.10.002.
  129. Winne, P. H., Nesbit, J. C., & Popowich, F. (2017). nStudy: A system for researching information problem solving. Technology, Knowledge, and Learning, 22(3), 369–376.  https://doi.org/10.1007/s10758-017-9327-y.
  130. Wolfe, J. L. (2000). Effects of annotations on student readers and writers. In K. Anderson (Ed.), Proceedings of the fifth ACM conference on digital libraries (pp. 19–26). New York: ACM Press.
  131. Wolfe, J. (2002). Annotation technologies: A software and research review. Computers and Composition, 19(4), 471–497.  https://doi.org/10.1016/S8755-4615(02)00144-5.
  132. Wolfe, C. R., Britt, M. A., & Butler, J. A. (2009). Argumentation schema and the myside bias in written argumentation. Written Communication, 26(2), 183–209.  https://doi.org/10.1177/0741088309333019.
  133. Yanikoglu, B., Gogus, A., & Inal, E. (2017). Use of handwriting recognition technologies in tablet-based learning modules for first grade education. Educational Technology Research and Development, 65(5), 1369–1388.  https://doi.org/10.1007/s11423-017-9532-3.

Copyright information

© The Author(s) 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Network and Information, Senshu University, Kawasaki, Japan
  2. Graduate School of Human Sciences, Osaka University, Suita, Osaka, Japan
  3. Komaba Organization for Educational Excellence, The University of Tokyo, Tokyo, Japan
  4. Center for Innovative Teaching and Learning, Tokyo Institute of Technology, Tokyo, Japan
  5. Faculty of Human Informatics, Aichi Shukutoku University, Nagakute, Aichi, Japan
  6. Formerly Silicon Studio Co., Ltd., Tokyo, Japan
  7. College of Business, Rikkyo University, Tokyo, Japan
  8. Interfaculty Initiative in Information Studies, The University of Tokyo, Tokyo, Japan