Introduction: “Dr Google” Challenges

The way we think, act, interact and communicate has been profoundly transformed as digital media has come to permeate our everyday lives (Boyd 2010; Madianou and Miller 2013). An increasing part of our waking hours is spent illuminated by the blue light from smartphones, tablets and laptop screens. In addition, the multiple opportunities for communication in a globalised world bring a never-ending flow of online information about health, well-being and lifestyle-related topics (Weber et al. 2009). Not so long ago, health information was almost exclusively accessible to medical professionals and health providers. However, due to digital media this type of information is now openly accessible for the lay public to read and communicate about (Ross 2014; West 2013). Along with the increased use of portable digital devices, the internet has become one of the main sources of health-related information (Bylund et al. 2007; van de Belt et al. 2013). Searching the web for health advice has reached such an extent that researchers even talk about individuals consulting “Dr Google” instead of healthcare professionals in real life (Hoving et al. 2010; Lee et al. 2015). People who surf the web in search of health information come across an overwhelming number of web pages, blogs, chat forums and social media platforms (Fergie et al. 2013). While medical practitioners and medical researchers are among the senders, a growing share of online health information originates from individuals sharing experiences, opinions, emotions and beliefs (Song et al. 2016). On the one hand, digital media has become a valuable starting point for individuals’ self-management of health-related questions. On the other hand, when navigating the broad range of sources and conflicting information, the capability to critically reason is put to the test (Song et al. 2016; Walraven et al. 2009; Weber et al. 2009).

The increasing need to confront and make meaning of conflicting online health information also challenges how critical reasoning may become relevant in science education (cf. Belova and Eilks 2016; Buckingham 2003). Recognising media awareness as important for participation in a democratic society (Chang Rundgren and Rundgren 2014), it has been suggested that aspects of accessing, evaluating, analysing and creating digital media ought to be included in science education (Klosterman et al. 2012). In Sweden, the digitisation of schools is reflected in curricular learning goals, as well as in investments to increase the availability of digital tools in education (The Swedish National Agency of Education 2011; The Swedish National Agency of Education 2016). However, Swedish science teachers lack support in using digital technologies (The Swedish National Agency of Education 2016), and a recent case study points towards communicative difficulties when integrating online learning situations in science education (Dohn and Dohn 2017).

The aim of this study is to explore how science education may be designed to create opportunities for students to develop the capability to critically reason about online health information.

Previous Research on Critical Reasoning in Science Classroom Practices

Around the world, the capability to critically reason is a central learning goal for science education, often discussed as important for qualifying students’ participation in society (Aikenhead 2007). In science education practices, critical reasoning is commonly treated as participation in socio-scientific reasoning about health or environmental issues (cf. Christenson et al. 2014; Van Poeck and Vandenabeele 2012). Learning activities tend to be centred around supporting students’ argumentation skills or use of scientific concepts for decision-making in current issues (Sadler 2004). However, a number of studies suggest that critical engagement with health-related information reaches beyond decision-making, as it also includes self-management, a need to clarify medical advice, a wish to gain emotional support from others, or pure curiosity (cf. Caiata-Zuffrey et al. 2010; Lee et al. 2014a).

In relation to the increased use of the internet for information retrieval, our capability to critically confront science-related issues is challenged in online environments (Weber et al. 2009). However, to date the use of media in science education has favoured print-based media, such as newspaper articles, over digital media (McClune and Jarman 2012). Also, when media is used in the science classroom, students are most often engaged in analysing science-related media already accessed and evaluated by the teacher (Klosterman et al. 2012). Consequently, approaching science in the media primarily becomes a way to contextualise scientific concepts, demonstrate the relevance of science or raise students’ interest in science (McClune and Jarman 2014). Using science-related media for such purposes can be described as learning science through media. However, several recent studies highlight the importance of also learning about the media being used (Belova and Eilks 2016; Chang Rundgren and Rundgren 2014; McClune and Jarman 2014). Learning through and about media is suggested as a way to develop students’ capabilities to critically approach and use science-related online information (Belova and Eilks 2016).

The capability to critically reason has been discussed as an important factor in preparing students for postsecondary education. The Swedish Schools Inspectorate (2015) emphasises that the development of a capability to critically reason in upper secondary school is crucial for future university studies, while acknowledging that critical reasoning is one of the most challenging aspects in relation to scientific inquiry. Through recurrent large-scale surveys, the Swedish National Agency of Education (2016) charts competences and the use of digital technology in Swedish schools. In a 2016 report, the agency points to students’ evaluation of user-generated social media as a new dimension of critical reasoning. It also finds that students tend to evaluate their own capability to critically reason as sufficient. However, the majority of teachers reported that source critique is only rarely and briefly included in education. The Swedish teachers also reported a lack of digital support and an insecurity about how to meaningfully integrate digital media with curricular learning goals.

In a recent study in upper secondary biology education, Dohn and Dohn (2017) introduced social media as part of a classroom activity. Their results indicate that integrating digital media can open up new learning situations and increase students’ interest in science. However, science teachers cannot assume young people to be digital natives who have already mastered the challenges of browsing the digital landscape (cf. Buckingham 2003).

In Dohn and Dohn’s (2017) study, the students struggled with communicative challenges in the blend between everyday use of digital media and science education practice. This gives further support to a suggestion that various ways of using digital tools may not be directly transferable across contexts (Lantz-Andersson 2016). The way students discuss something in the context of a science classroom may not necessarily reflect how they would confront it outside of the school situation. Instead, what is made possible to learn in an educational setting depends on how it is perceived by the students (Andrée and Lager-Nyqvist 2012). Accessing and evaluating digital information may have different meanings in different contexts, and there is a need to further explore tentative meanings in relation to science education.

Critical Reasoning in Health Literacy Research

When it comes to critical reasoning and online health information retrieval, a growing number of studies relate to the notion of health literacy. A common aim in these studies is to improve public health by supporting people’s capabilities to make well-informed health-related decisions (cf. Weber et al. 2009). A recent review of research in the field confirms earlier suggestions of critical reasoning as an important aspect of online health literacy (Diviani et al. 2016). Hence, from a perspective in which science education is considered important for participation in society, the way critical reasoning is discussed in the health literacy field is of great value for this study.

One of the conclusions drawn by Diviani et al. (2016) is that people with low health literacy tend to use less established or incorrect criteria for online source evaluation. Based on a summary of research in the communication and literacy field, Metzger (2007) concludes that the most common way to support, structure and improve people’s online source evaluation is the checklist approach. This means that people are educated to ask and answer a list of fixed questions designed to cover evaluation criteria, often created by an authority in the field. Commonly, such criteria deal with aspects of information content and authorship, such as accuracy, complexity, medical expertise and references to scientific publications (Diviani et al. 2016; Gauld and Williams 2009). However, checklists are rarely used in everyday practice, and the criteria lay people actually use for online source evaluation, such as web page design or information usefulness, tend to be regarded as less reliable markers of trustworthiness (Metzger 2007). As a response to the checklist approach, and in line with the later suggestion of including learning about media in science education (cf. Chang Rundgren and Rundgren 2014), a model that focuses on the social context of the information has been suggested (Meola 2004). However, this model in part encouraged people to use another kind of list, namely one of websites already approved by experts. Hence, an emerging question is to what extent such approaches develop people’s own capability to critically confront information.

In order to understand more about navigational needs from a user perspective, Lee et al. (2014a) interviewed Australian adults about their online health information-seeking behaviours. Regarding experienced barriers to online information retrieval, participants reported uncertainty about their own capability to evaluate the relevance and trustworthiness of inconsistent health sources. A later literature review by the same authors indicates that few interventions have been made to improve health literacy (Lee et al. 2014b). In line with earlier suggestions (cf. Diviani et al. 2016; Meola 2004) is a call for studies focusing on design interventions where websites are pre-labelled by researchers as either trustworthy or blacklisted (Lee et al. 2014b). A summary of various models proposed to support people’s evaluation of online sources shows a predominant focus on assessing the quality of online information (Metzger 2007). Metzger also concludes that the capability to critically reason reaches beyond evaluation of the information as such (Walraven et al. 2009). For instance, the presence of real-life examples and personal narratives seems to be important when evaluating health information online. Even though cultural differences exist, a recent cross-cultural study in an Asian-American context shows that personal narratives, experiences and peer-to-peer exchange become important for the trustworthiness of online health-related information among college students across cultural contexts (Song et al. 2016). Similar results were presented in a Scottish study on young people’s perception and evaluation of health-related digital media (Fergie et al. 2013). Fergie et al. conclude that personal narratives and expert-derived information are both valued sources of health information among young people; however, they are often judged according to different criteria.

Methodologically, recent health literacy research has targeted adult participants responding to questionnaires about their online information retrieval behaviours and critical approach. However, a study in a British context compared first- and third-year undergraduate students’ judgements of trust in online health information (Johnson et al. 2015). Johnson et al. asked students to rank the influence of multiple factors related to source evaluation, such as credibility, style, brand and usefulness, for evaluating the trustworthiness of information. The results show that older students used more evaluation criteria than younger ones. They also ranked reliability, quality and the extent to which the source contained facts, rather than opinions, as the factors affecting trustworthiness the most. The third-year students did not consider whether the information was tailored to them personally, and they also reported that the speed with which they found the information was of less importance. The results of this study can be compared to an earlier study by Walraven et al. (2009), in which students were asked to account for their source evaluation behaviours in a written questionnaire. Here, the questionnaire was followed by an intervention engaging students in an online information retrieval activity. The criteria students initially mentioned as important for source evaluation in the questionnaire matched the criteria later listed by Johnson et al. (2015). When students’ actual engagement with the online information retrieval task was analysed, however, Walraven et al. (2009) found that relevance and speed (in terms of easy access) were among the most influential factors for source evaluation. The fact that students can reproduce source evaluation criteria other than the ones they actually use supports research that takes its point of departure in the challenges students meet when encountering online information.
Nevertheless, previous research on people’s use of online sources in the health literacy field has primarily been based on participants’ retrospective written statements about their online information retrieval behaviours, quantitative measures of the influence or abundance of certain evaluation criteria, or the charting of information retrieval behaviours. To our knowledge, few studies have focused on the qualitative aspects of critical reasoning in digital contexts.

Critical Reasoning as Freedom of Choice

It has been suggested that open access to online information about issues that concern our bodies has great potential to strengthen individual autonomy and maintain public health (Cline and Haynes 2001; Hoving et al. 2010). By extension, this can be seen as upholding human freedom and the equal right to health and education in a democratic society (UN 1948). Acknowledging critical reasoning as important for participation in a democratic society, our theoretical point of departure is based on Martha Nussbaum’s framework The Capabilities Approach (Nussbaum 2011). Focusing on issues of human development, the Capabilities Approach is centred around the democratic values of social justice and freedom of choice. Here, we draw on the concepts of capabilities and functionings, and how they relate to each other. The two concepts were chosen since they allow capabilities to be understood in relation to a specific situation. This is valuable given that our aim is to create opportunities for students to develop capabilities to critically reason in science education practice.

Capabilities are described as the personal abilities or skills we develop while interacting. Capabilities thus become a condition for, as well as a consequence of, participation in social practices (Nussbaum 2011). This way of conceptualising learning, or more specifically the development of capabilities, as a process of participation in social practices has previously been suggested by, for instance, Lave and Wenger (1991). By acknowledging what a person is able to do, be and become, capabilities are recognised as important for expanding people’s repertoires for participating in various situations, and thus, for influencing their own quality of life. A fundamental question concerns the opportunities made available in specific situations. Nussbaum makes an important distinction between being able and being enabled. The freedom to choose ways of doing and being depends on a combination of our capabilities and what the specific situation opens up for. Thus, being able does not equal being enabled. As for education, Nussbaum’s framework allows us to consider students’ capabilities in relation to what learning is made possible in a given task or classroom activity.

In the Capabilities Approach, the notion of capabilities as freedom of choice is contrasted with the concept of functionings (Nussbaum 2011). A functioning is “an active realization of one or more capabilities” (op. cit., pp. 25–26). The notion of freedom of choice is built into the notion of capability as an opportunity to select. Nussbaum refers to an example of a person who is starving and a person who is fasting—both may have the same functioning in relation to nutrition, but they do not have the same capability; the person who fasts may choose not to fast, whereas the person who is starving has no choice. According to Nussbaum, promoting functionings is not the same as promoting capabilities. For instance, it is of both societal and individual concern that the citizens of a society function as healthy and vital human beings. However, to promote specific functionings would involve making people act in specific ways. In relation to public health, a functioning approach could be to allow authorities to prescribe worthwhile activities, nutritional regulations or even personal trainers to tell us how to exercise. In Sweden, such dietary advice has long been prescribed by the National Food Agency and its predecessors. From a capabilities approach, one would be more concerned with how to develop an individual’s own capability to make lifestyle choices. The capabilities approach asks what human beings might develop the capacity to do, and which of these capacities are the valuable ones. Thus, it is evaluative and ethical; and, in Nussbaum’s version, focused on the areas of freedom central to human dignity.

From an educational perspective, distinguishing between capabilities and functionings makes it possible to study how classroom situations may afford or constrain students’ development of certain capabilities. For instance, we may be good at developing students’ capability to critically reason, but then deny them the opportunity to actually participate in critical reasoning activities. In other cases, we may do well at creating situations in which students engage in critical reasoning in prescribed ways, but fail to promote the capability to participate. Given history, culture and local traditions, Nussbaum (2011) calls for capabilities to be further specified in relation to the situation in which they are to be practiced and developed. Nussbaum argues that three of the most central capabilities are related to life, bodily health and bodily integrity. These are all central to science education and the endeavour to create opportunities for students to develop the capability to critically reason about online health information. In this study, we focus on the following research questions:

  1. How is the capability to critically reason enacted in students’ encounters with conflicting online health information in science education?

  2. In what ways can an evaluation tool contribute to qualifying students’ capability to critically reason in science education?

Research Design

The study was carried out as design-based research (DBR). Methodologically, DBR resembles how teachers work in everyday practice and has become increasingly common in educational research (Anderson and Shattuck 2012). In the same way as teachers continuously evaluate and gradually adjust classroom activities, interventions include stages of design, implementation and analysis (Cobb et al. 2003). Since familiarity with the local school context as well as with theory is crucial for relevant problem formulation and successful implementation, DBR rests on close collaboration between teachers and researchers throughout the project (Sensevy et al. 2013).

Local Context

The study was conducted in close collaboration with two science teachers at a public upper secondary school in Sweden. The school had implemented a school programme centred around education for sustainable development (ESD), and the students were continuously confronted with issues such as global health, consumerism and environmental change. Even though media education was not a school subject of its own, digital tools were part of everyday school practice. Each student was equipped with an individual laptop and continually engaged in a variety of online activities as part of classroom practice, such as information retrieval and communication on social media platforms. The participating science teachers were both experienced and had been colleagues at the school for about 8 years. Close collaboration was ensured by the fact that the first author was employed as a teacher at the school and, as such, was familiar with the students, teachers, school practices and culture. Thus, the first author functioned as a teacher-researcher collaborating with colleagues (Wagner 1997). Altogether, 128 students aged 16–17 years took part in the study. All students, as well as their teachers, gave their informed consent to participate.

Intervention

The research process was initiated by a joint problem formulation, in which the teachers and the first author discussed local teaching- and learning-related challenges. Drawing on the participants’ joint teaching experiences, as well as previous research (as described in the introduction), students’ critical approach to online information was chosen as the learning goal in focus. The intervention that followed was set up between 2014 and 2015 in four biology classes from the natural science programme, and was implemented during eight lessons in each class. Groups of two to six students were assigned to jointly access online health information and to critically evaluate relevant sources about a physical or mental health issue of their choice, such as cancer, resistance to antibiotics, depression or eating disorders. The students were encouraged to engage in a topic that they found interesting and that was debated in the media in ways that actualised ethical dimensions of public and individual health. The outcome of the information retrieval was later used in the students’ own production of websites and podcasts, in which ethical dilemmas in health issues were presented to initiate discussion with peers. To enable students’ spontaneous navigation and critical approach to health information, the groups initially addressed the task without any further instructions or support from the teachers. After two lessons the teachers introduced the acronym GATOR as a tool for critical evaluation of online sources. The GATOR approach for “safer surfing” was originally designed by medical practitioners to educate patients about how to evaluate online healthcare information (Weber et al. 2009). Since health information was the focus here, we believed the tool could be useful to students in relation to the task.
Also, acronyms have been described as easy to remember, innovative and meaningful in the sense that they aid individuals in producing mental images that support the organisation and storage of information (Bednarz 1995). Drawing on the notion of an alliGATOR, the acronym here functions as a reminder of the dangers to look out for when surfing the web. Each letter in the acronym represents a distinct aspect of source critique:

  • Genuine: Deals with the sender’s authenticity. What are the goals, purposes and missions of the site? Does the site give a falsely authentic impression through misleading logos, etc.?

  • Accurate: Deals with the correctness of the information. Is the information free from error? Is the information up to date?

  • Trustworthy: Deals with information reliability and validity. Is the information peer-reviewed? Are references quoted? What are the author’s credentials and affiliations?

  • Origin: Deals with the producer or origination point of the information. Is the first-hand source reliable? Who manages the site? Is it possible to contact site administrators for clarification?

  • Readability: Deals with the reader’s possibility to understand and process the information. Is the information presented in a clear and concise way? Is it too technically advanced or too elementary?

Before being used in the student groups, GATOR was introduced in a whole-class session to help the students translate the acronym into productive critical questions.

Analytic Approach

The data consisted of audio and video recordings of whole-class and group discussions during the implementation lessons. All recordings were transcribed verbatim by the first author. The collected data were analysed drawing on Graneheim and Lundman’s (2003) framework for qualitative content analysis. The unit of analysis was participation as speech, focusing on the aspects of source critique that emerged during students’ discussions and how critical reasoning became practiced in the groups.

The analysis included four steps: (1) discerning meaning units, (2) condensing meaning units into codes, (3) grouping codes into categories and (4) framing themes in students’ discussions. Meaning units, defined as situations in which the students explicitly discussed or evaluated the relevance or trustworthiness of an online health-related source, were extracted from the transcripts. The meaning units were then condensed into codes, such as students discussing purpose, qualification or transparency when evaluating a source. Once the codes were established, they were grouped into categories of similar content. In Table 1, we illustrate how the first three steps of the analysis were related. From left to right is an example of a meaning unit (extract from a transcript), followed by the codes relating to this meaning unit and then the categories. As seen in Table 1, the codes not trustworthy per se and trustworthy per se both relate to unjustified source evaluations, which were sorted under the category no need to justify. The category that deals with the information is illustrated by students discussing information congruence in terms of comparing various sources.

Table 1 Illustration of how the first three steps of the QCA framework were applied (Graneheim and Lundman 2003)

The final step of the analysis involved discerning themes in students’ discussions. Drawing on Nussbaum’s (2011) notions of capabilities and functionings, the themes focused on the ways in which the students practiced critical reasoning in relation to what was made possible in the task.

Findings

Based on our analyses we discerned four themes running across students’ conversations; two with and two without the evaluation tool (GATOR). Each theme illustrates how the students participated in critical reasoning in the student groups. In the following, categories and codes in each theme will be illustrated in commented excerpts of students’ conversations. Each code represents an aspect of the capability to participate in source critique on health-related issues.

Without GATOR

The two initial themes were discerned in the transcripts before GATOR was introduced, illustrating students’ unsupported critical reasoning.

Theme 1: A Struggle to Get an Overview Despite the Information Overload

A theme running through students’ unsupported discussions was how to best navigate and get an overview of the vast amount of health information. Even though this theme occurred in all groups, it was far more frequently expressed at the beginning of the information retrieval process. The predominant category in this theme was no need to justify, including the codes trustworthy and not trustworthy per se, and students relying on their gut feeling for source evaluation. Another category in this theme was about the challenge of accessing relevant information. One of the students exclaimed “It’s like looking for a needle in a haystack... and it’s all about stress.” The amount of health information appeared to be overwhelming for the students as they sought to orient themselves towards trustworthy sources. The search engines (predominantly Google) generated hundreds or even thousands of hits in less than a second: “[sigh]... 386 hits on scb [short for Statistiska Centralbyrån, in English Statistics Sweden]... whohoo... let’s get started.”

Transcript 1 illustrates how a group of students searching for information about cannabis begin negotiating what they are searching for and where to start:

  • Transcript 1: Where to start?

Max: Hum what is it that we want?… we want scientific facts…

Ada: We want everything I guess.

Ted: But, get real. If we want facts about cannabis which of all the sites are we going to choose?

Ada: Well, I’ll start by checking Facebook [social media platform].

In line with science education classroom practice, Max suggested that they should start by searching for “scientific facts”. Ted seemed struck by the information overload, exclaiming “which of all the sites are we going to choose?” Ada suggested an inclusive approach, saying that they should be open to “everything”. She then stated that she would start with the information retrieval on Facebook. As a 16-year-old, Ada and her peers are likely to be frequent social media users. Turning to Facebook may have been a way to approach the task habitually. However, Facebook, as a social media platform, is used to create opinion or share personal stories and experiences, and is hardly recognised for its scientific credibility. Ada’s decision to start by browsing Facebook, however, may suggest that she sought information other than scientific facts. Her suggestion that they wanted everything substantiates such an interpretation. Within this specific science education context, many student groups accepted pages like Facebook during their initial work with the task. A second group, working with antibiotic resistance, was also struggling to generate an overview of the vast amount of online information. Similar to the group in transcript 1, the second group introduced the aspect of trustworthiness when negotiating what sources to use.

  • Transcript 2: Identifying and using non-trustworthy sources

Sam: Perfect. Let’s get some sort of basic knowledge about antibiotics… the thing is… I really BELIEVE that if you’re going to really do source critique you shouldn’t use…

Lin: Flashback [open access wiki]

[laughter]

Sam: Or hmm… Wikipedia.

[students browsing]

Hal: I just read the definition like…

Sam: Yeah, I saw that you were looking it up but think it’s much better to Wikipedia sources…

Lin: Mm… yeah exactly you can check the source.

Hal: Exactly, but if you only want like a small overview introduction to what antibiotics are…

Sam: Yeah but still… yeah but exactly but if you’re going to copy paste that overview definition you still have to be…

Hal: Yeah, yeah, but I won’t be using Wikipedia as a source later on.

Sam: No, but no… well and then these sources … we can check them out… oh shit, it’s so damn MUCH!

In line with the task instructions, Sam expressed a wish to get “basic knowledge” about antibiotics and highlighted the importance of source critique. Supported by Lin, he concluded that “if you’re going to really do source critique,” you shouldn’t go to sources like Flashback or Wikipedia. As the work of information retrieval proceeded, the scattered and vast amount of information made it hard for the students to live up to their own standards. It appeared that, even before understanding the information, the unexpected challenge was actually finding it. Despite his initial dismissal, Hal decided to use Wikipedia. Not to gain in-depth knowledge, though: “I just read the definition,” he explained. He further suggested that Wikipedia could be useful to get “a small overview introduction.” His peers seemed to agree that Wikipedia was not to be considered a trustworthy source. By adding that he would not use Wikipedia “as a source later on,” Hal acknowledged that Wikipedia is not always an accepted source within a school science context.

The students in transcript 2 took a stance on whether sources could be trusted or not, although the reasons behind these decisions were never made explicit. Sources like Wikipedia and Flashback were considered not trustworthy per se. In our analyses, we sorted a large number of codes under the category ‘no need to justify’, which may be seen to reflect students’ frequently unjustified statements about sources’ trustworthiness. In addition, there were sources judged as trustworthy per se: “I think that Socialstyrelsen [The Swedish National Board of Health and Welfare] is a trustworthy source” or “Läkartidningen [journal published by the Swedish Medical Association] ... that sounds pretty good.” The category ‘no need to justify’ thus includes codes of unjustified rejection as well as unjustified acceptance.

A while into the lesson, when the students had acquired an overview, individuals expressed a need to deepen their knowledge about their health issue, while also facing a frustrating lack of time. In this new phase, pages like Flashback and Wikipedia no longer provided enough information. Evaluating source trustworthiness and relevance was a time-consuming process, and, in addition, the students had to spend time grasping the complicated subject-related content. In transcript 3, the group from transcript 1 had advanced their information retrieval and were now struggling with how to allocate their time.

  • Transcript 3: Allocating the time wisely

Max: Well, I don’t know… it’s just that it’s so much of it… which bits are we going to use?

Ted: But if you go to a site, there are so many sub-sections like “you and cannabis” or like “using cannabis”, “how cannabis affects the brain”, “long-term effects”, “cannabis versus alcohol”. I mean… it’s like… what are you going to click on? I don’t know what we want to find out, so it feels like… reading a whole page of stuff and then just like… No!

Max: mm

Ted: Right. And at the same time perhaps that’s what you have to do to know… it gets a bit double-sided… you don’t know if you’re spending your time on something sensible.

In this conversation, the students seemed to lack a strategy for deciding whether continuing to read or continuing to search was the best way forward. Their previous approach, to label a source beforehand as trustworthy or not per se, no longer seemed to apply. Max asked “which bits are we going to use” and Ted wondered “what are you going to click on” in order to gain deeper knowledge about various aspects of cannabis abuse. Ted answered that reading the whole site might be necessary; however, “you don’t know if you’re spending your time on something sensible” until afterwards. A student from another group said: “There should be like a site that’s like Google or like Wikipedia but only with like research articles and where you can like search for like stress like negative... positive... that would be really nice.” Lacking such a resource, the students expressed that a strategy to shortcut the source evaluation was to rely on gut feeling.

  • Transcript 4: Trusting the gut feeling

Eva: Yeah, you can’t just think… “No, I’ve read it”… “This will do”... you can’t be lazy!

Jon: mm

Eva: It’s about actually making a bit of an effort to get...

Ali: Yeah, exactly.

Eva: But you can’t just doubt everything, can you. You have to trust your sources a bit too because you can’t spend half an hour searching every lesson to check up if this source really is OK.

Ali: No.

Eva: Then you have to trust your gut feeling a bit too.

The conversation illustrates how the students negotiate conflicting demands of judging fast and reading slow. On the one hand, Eva said that “you can’t be lazy” and that you have to make “a bit of an effort” to find out whether a source is trustworthy and relevant. On the other hand, due to lack of time you also have to “trust your sources” and rely on “gut feeling”. Before GATOR, all student groups referred to their gut feeling by using utterances such as “it feels”, “it seems” or “I believe” when discussing their sources.

In sum, when participating in the search of online health information, the students struggled to find ways to quickly navigate the vast amount of information. The students negotiated the conflicting demands of taking a critical stance and managing time by referring to sources as presupposedly trustworthy or not trustworthy per se and by relying on their “gut feeling”.

Theme 2: To Negotiate Trustworthiness

Another theme that ran through the unsupported discussions was the negotiation of what sources counted as trustworthy and why. The predominant codes and categories in this theme related to the purposes of both the senders and the readers. For instance, trustworthiness was discussed as depending on what the students were looking for, how they perceived the purpose of the task and what sender purposes they discerned. In relation to the sender, relevant competence, qualifications or experiences were discussed as affecting trustworthiness. Most groups seemed to share an idea that unbiased sources were possible to find, and that their online navigation was primarily directed towards identifying such sources.

Occasionally, students expressed that some sources are unnecessary to question:

  • Transcript 5: Questioning authorities

Sid: I’m thinking about this source’s business like if one... hmm… how we’re going to be able to check them… or if we like take for instance Vårdguiden [governmental online health care advice] sort of or like it’s the same… it’s like… it’s the government… same principle as school like…

Gry: Oh yeah.

Sid: It’s like… you just have to swallow it hook, line and sinker…

Moa: It feels like it’s difficult or unnecessary to question all this.

Sid: What did you say?

Moa: It doesn’t feel like it’s so easy to scrutinize…

Sid: No… scrutinize… no.

In most student groups, authorities such as government, universities or medical experts were considered trustworthy per se. In transcript 5, Sid expressed difficulties with questioning governmental sources: “you just have to swallow it hook, line and sinker.” The utterance may be interpreted as indicating that he considered the source trustworthy per se, and that there was no reason (or even possibility) to be sceptical. Moa agreed, saying that it felt “difficult or unnecessary to question all this.” The group of students thus concluded that certain sources may be regarded as inherently objective and not just difficult, but unnecessary, to question.

Groups of students sometimes articulated their source evaluations in more detail. In doing so, scepticism was often directed towards the sender of the online information, highlighting, for instance, the sender’s qualifications. Qualifications were then discussed as having adequate training, being a titled expert, and working at a hospital or university. “I mean, where do they work... Sahlgrenska [Swedish hospital] ... that’s an unbiased source.” The importance of sender qualifications is also illustrated in the following statement made by a student discussing Wikipedia: “Since anyone can write on it, any idiot can change like severely complicated things.” In the end, the group decided to turn to sources other than Wikipedia for information about how cannabis affects the brain. In addition to formal training or education, personal experience was also regarded as trustworthy by the students, even though it was sometimes difficult to evaluate.

The purposes of the senders were discussed in all groups, such as informing, advertising and giving advice. As illustrated in transcript 6, purposes aligned with “doing good”, such as improving public health, seemed to be taken as a guarantee for trustworthiness.

  • Transcript 6: Doing good

Sid: Well, you can think like, for example, Landstinget [Swedish governmental health care provider] has got public health as their goal… like Landstinget wants you to have good health so that they need to spend as little money as possible…so, then, perhaps, they aim at giving sort of trustworthy information or correct information.

Moa: Huh, you reckon?

The students identified organisations and agencies as senders with informative purposes, and then accepted them without further questioning. According to Sid, Landstinget “wants you to have good health,” so the information on their page should be regarded as trustworthy and correct. However, Sid also suggested that Landstinget might have additional economic interests, aiming “to spend as little money as possible.” Nevertheless, it seemed as though Landstinget’s position as an authority eliminated the risk that economic purposes would overshadow or influence their informative ones. Browsing the web for health information, the students continuously came across personal experiences, emotions and opinions on chat forums and user-driven encyclopaedias. When consulting sources like Wikipedia and Flashback, the students accepted them as user-driven sources. However, given their initial ambition to find objective sources, this became somewhat problematic. In the conflict between searching for unbiased sources and dealing with clearly subjective ones, the students started to discuss their own purposes of information retrieval. In transcript 7 the students discussed whether to include Flashback as a source or not.

  • Transcript 7: It depends on what you are looking for

Ted: But it can still be relatively trustworthy if you’re looking for someone’s experiences… or like…

Max: Yeah… if you mean like why you’re browsing… but if you want like scientific facts…

Ted: Yes.

Max: Well then, Flashback isn’t the best place to start.

Ted: Yeah, but like “how cannabis affects the brain” it’s not really…

Max: Then you don’t like check out Flashback.

Ted: Sure they might not be completely wrong, but it’s not like trustworthy.

Here, the students discussed trustworthiness as depending on what one is looking for, highlighting personal relevance. According to Ted, Flashback might be a trustworthy source if you are “looking for someone’s experiences,” and Max stated that Flashback is not “the best place to start” if you are looking for scientific facts. Even though Flashback was judged as not “completely wrong” about scientific facts, it was evaluated as a more trustworthy source for personal experiences. The students seemed to balance two different purposes at the same time: on the one hand, searching for objective sources, and on the other, legitimising the use of subjective sources by referring to relevance. The students considered individual health-related experiences and personal narratives as complementary to scientific facts: “I mean it doesn’t seem to be research based, but it’s still her experience.” Embracing personal and subjective dimensions of health, the students clearly used subjective sources to “give perspectives” on complex matters. However, after a while the above group of students started to question the trustworthiness of experience-based sources.

  • Transcript 8: Evaluating experiences, emotions and opinions

Max: But isn’t Flashback more trustworthy when it comes to experiences and opinions… because one can’t say that it isn’t trustworthy. If I ask you about your experience of something and then I say like “no, that’s not trustworthy”… but it doesn’t have to be either because one can bullshit on the internet too!

Ted: Yeah, yeah, like “I’ve tried cannabis and that was just nice. I saw elephants.”

Initially, Max concluded that questioning experiences and opinions is almost impossible, saying “... because one can’t say that it isn’t trustworthy. If I ask you about your experience of something...” Then it seemed to strike him that even personal narratives could be made up. Many groups identified a need to question experiences, emotions and opinions, and at the same time expressed difficulties in doing so. In some groups, a certain level of subjectivity was accepted: “You said that if you have cancer you may have like other emotions than me /.../ and then it also becomes more insecure.” In other groups, the students tried to evaluate trustworthiness by comparing personal narratives: “You have to make a general evaluation and highlight various like /.../ maybe one can look up to see if more people have the same kind of experience.”

In sum, when the students negotiated trustworthiness they questioned both the sender and the information. However, some groups of students expressed uncertainty about how to evaluate information provided by authorities. The students judged a wide range of sources, including experiences, emotions and opinions in light of their own purposes of information retrieval.

With GATOR

The following themes were discerned in the transcripts after GATOR had been introduced to the groups, thus illustrating how this specific tool for evaluation of health information afforded and constrained students’ critical reasoning.

Theme 3: To Unravel and Handle Subjectivity

When the groups of students were introduced to GATOR as an evaluation tool, the drive to quickly find objective sources was replaced with a systematic questioning of all sources. Theme 3 concerns how the students negotiated various source limitations. The predominant categories in this theme are related to the sender and the information. For instance, the sender’s purpose, qualifications, funding and transparency were highlighted, along with information specificity, timeliness, origin, correctness, readability, consensus and disagreement.

In the following excerpt, a group of students searched for stress-related information. Dag and Kim ended up at a page about stress administered by Stockholm University. Using GATOR encouraged the students to engage in a critical scrutiny of sources that had previously been identified as trustworthy.

  • Transcript 9: Suspicious minds

Kim: No… we won’t use GATOR for that [source]… or… I was thinking about Stressforskningsinstitutet [The Institute for Stress Related Research] we don’t really need to.

Dag: We can do it a bit quickly now, but I think Stressforskningsinstitutet belongs to Stockholm University…

Kim: Mm.

Dag: So, that makes it sort of trustworthy… but we could go through a bit of that…but I’m looking at an article just now and there’s the names [authors’] and that it’s Stockholm University…here’s his email address too…I can mail him straight away…it doesn’t say when it was published.

Kim: “Written by experts in the field…” [reading out loud] … I think it’s difficult to be certain…

To start, Kim suggested that there was no need to scrutinise the source “Stressforskningsinstitutet”: “we won’t use GATOR for that.” Dag agreed and stated that it appeared to be a genuine site since it “belongs to Stockholm University.” This approach to source critique is similar to what is illustrated in transcripts 5 and 6, where the students accepted information from authorities without questioning. Dag then changed his mind and suggested a quick evaluation. He then identified the sender and checked transparency, saying “there’s the names and that it’s Stockholm University… here’s his email address too.” By remarking “it doesn’t say when it was published,” he highlighted a limitation regarding information timeliness. Kim concluded that the page was “written by experts in the field,” but also suggested caution using the page, saying “it’s difficult to be certain.” Compared to the unsupported critical reasoning illustrated by the transcripts in theme 2, the students now evaluated all sources, including those previously framed as objective. With GATOR, only one student across all the groups referred to “gut feeling” when evaluating a source and later labelled a source as trustworthy per se.

Guided by GATOR, the students integrated a number of critical questions directed towards both the sender and the information. Transcript 10 illustrates how Dag and Kim combined critical questions on trustworthiness in relation to another stress-related website:

  • Transcript 10: Critical questions combined

Kim: But they wrote on the page that if it’s someone without qualifications who’s done the writing then they will state that clearly together with the facts. So, it should be her…

Dag: Yeah, but I think they should write who has… it only says when it was published, not who wrote it /…/ it’s good to have lots of pages so you can compare… I agree to that but I think we should be a bit critical towards this one /.../

Kim: On the other hand, one can’t question that they’re honest… I mean they say like everything.

Dag: They say everything, right?

Kim: Mm. I mean they do state all the terms and conditions of the site so then they’re relatively honest… correct… I don’t know how correct they are… they don’t always give you source critique and stuff.

Dag: But that’s a bit hard for us to say…

Kim highlighted the sender’s qualifications when saying “if it’s someone without qualifications who’s done the writing then they will state that clearly together with the facts.” By noting that the site administrator claimed to declare when authors were unqualified, Kim pointed towards the importance of source transparency. This was supported by Dag: “Yeah, but I think they should write who has… it only says when it was published not who wrote it.” In the excerpt, Kim and Dag integrated aspects of the sender’s transparency and information timeliness. When Dag added “it’s good to have lots of pages so you can compare,” he also pointed to congruency of information across sources as strengthening trustworthiness. However, some students identified weak points in such a supposition, saying, for instance: “it does not have to be true only because it’s unique. It can be an exception that proves the rule.” Due to the lack of information and relevant author competence, Dag expressed uncertainty about using the source as a reference: “we should be a bit critical towards this one.” This, however, appeared to bother Kim less: “On the other hand, one can’t question that they’re honest… I mean they say like everything.” The negotiation of weaknesses and strengths seemed to make Kim a bit doubtful about her previous dismissal, admitting that “they do state all the terms and conditions of the site so then they’re relatively honest.” Here, the question of correctness was additionally brought up, which seemed to make the source evaluation even more complex: “But that’s a bit hard for us to say.”

In the group discussions, the students unravelled a variety of source biases. A challenge that emerged concerned the task of deciding what to trust when nothing seemed reliable anymore. In transcript 11, Kim and Dag continued the conversation from transcript 10.

  • Transcript 11: Handling incomplete sources

Kim: Yeah, I know, but… “references to the primary sources that the information is based upon are always presented if possible” [reading out loud].

Dag: Ah… now we’ve got it!

Kim: Now we’ve got it!

Dag: I’ve got a negative feeling about their site… I’m gonna take it out of our sources actually.

Kim: No!

Dag: Yes!

Kim: But yeah if it… but I have to write…you know I’m going to write…

Dag: No, I put like brackets round it that we are going to use it like facts in our investigation.

Kim: It doesn’t have to be useless just because we’re critical about the site… that’s just good, isn’t it?

Dag: That’s good but I’ll write “uncertain”…

Kim: Yeah, yeah, it’s kinda good considering the GATOR investigation that we decided that it was… that we’re questioning it.

During the course of interaction, Dag and Kim identified different purposes with the critical evaluation. When Kim read out loud “references to the primary sources that the information is based upon are always presented if possible,” Dag responded “Ah… now we’ve got it.” Due to the possible lack of references to primary sources, he planned to “take it out of our sources…” Kim opposed Dag’s decision and suggested that scepticism does not necessarily mean that a source is useless: “It doesn’t have to be useless just because we’re critical about the site… that’s just good, isn’t it?” The interaction exemplifies how critical reasoning was practiced as a negotiation over how to handle source limitations.

In sum, GATOR opened up for a more extensive critical reasoning in the sense that students in all groups identified and combined various critical questions directed towards each source. The capability to critically reason was practiced as accessing, systematically evaluating and comparing all sources. In doing so the students jointly integrated various critical questions directed towards both the sender and the information. Unfolding source limitations, the students also expressed uncertainty about how to handle incomplete or biased sources. Most groups considered congruency as a sign of trustworthiness, whereas incongruency raised scepticism.

Theme 4: Accessing and Evaluating Scientific Facts

Another theme in the group discussions supported by GATOR concerned accessing and evaluating scientific facts. The predominant codes in this theme deal with information quality. The students discussed congruency, specificity, timeliness, origin, correctness and readability in relation to health information. The capability to critically reason was practiced as ways to access, understand and evaluate sources with a scientific content or origin:

  • Transcript 12: Searching for true facts

Max: But the information could be wrong.

Ted: But com’on. Are they going to be honest with their facts or honest about who they are?

Max: Yeah, the facts. Those facts are true.

Ted: But how can we know that [the fact is true]? That’s what we’re gonna find out?

Max: But isn’t that what trustworthiness is all about?

In transcript 12, Max and Ted discussed the trustworthiness of a webpage about cannabis abuse. Max suggested that the information on the webpage could be wrong. Ted, however, pointed to aspects other than the information as such, asking: “are they going to be honest with their facts or honest about who they are?” Max immediately answered that “the facts are true” and asked “isn’t that what trustworthiness is all about?” This short transcript illustrates a shift of focus that emerged in all group discussions when GATOR was introduced. Rather than the previous inclusion of a variety of sources (as expressed in themes 1 and 2), online navigation became narrowed down to the purpose of finding true facts.

A challenge that the groups identified concerned how they could verify that a fact was to be considered true. Ted asked “... how can we know that [the fact is true]? That’s what we’re gonna find out?” After a while Ada joined the discussion, and in transcript 13 she and her peers continued to negotiate what kind of information might qualify as “true”.

  • Transcript 13: Scientific research as generating true facts

Ada: Yeah, that’s what it has to be… it’s always like scientific reports, isn’t it?

Max: Mm… that’s what it has to be.

Ted: Yeah, but I mean like the effects of cannabis that should…

Max: You can’t answer that by just using sources.

Ted: You have to make an investigation.

Max: But… are you going to put people in a room to smoke …[mumble]?

Ted: Yeah, then it’s not like a source in the field, it’s like… mm…

Ada: Yeah, everyone says different things. There is no basic research about cannabis and what are the true facts then? I mean in that case it has to be a proper scientific investigation and there isn’t one, is there?

In the above transcript, Ada continued the discussion over how to decide what facts qualify as true by suggesting that “it’s always like scientific reports, isn’t it?” The students continued by equating scientific research with empirical investigations, saying that questions like the effect of cannabis cannot be answered “by just using sources.” Ted suggested that they could “make an investigation.” Max seemed to consider an experimental approach and wondered if a way forward was “to put people in a room to smoke?” Ted commented “then it’s not like a source in the field.” With the comment, Ted opened up for true facts to be generated and not only “found” in sources (cf. Transcript 12). At the end of transcript 13, Ada confronted the group with a dilemma: “There is no basic research about cannabis and what are the true facts then?” The group then decided to leave the question of how cannabis affects the brain unanswered. Instead, they proceeded with information retrieval and explored other aspects of cannabis abuse.

  • Transcript 14: Limitations of readability

Kim: Readable… very readable… perhaps a little bit too readable… do you think?

Dag: A little bit too readable… yeah, well it’s hard to say.

Kim: Nooo… I mean some of these articles are… or nooo…

Dag: I would say it’s not too hard to read and it’s not too easy.

Kim: No... no, it’s sort of just right actually.

Dag: But it’s got quite a lot of self-diagnosing.

Kim: “By activating via the autonomic nervous system…” [reading out loud].

Dag: Yeah, it’s definitely not too readable /… /

Kim: “The page is readable [spells out loud]… no how shall I put it… “anybody can read what’s on the page” or nobody could possibly read what’s on the page, but it’s meant for… I can imagine that if you’re younger like fourteen or thirteen [years old] then you might not get it.

Dag: No ...words, I mean words that they’ve… I mean [more] difficult words about health.

Discussing source readability, Kim wondered if the page might be “a little bit too readable”. Dag replied that “it’s hard to say,” but concluded that “it’s not too hard to read and it’s not too easy.” In transcript 14, a rather complex picture emerges of how source readability and, possibly, trustworthiness are related. Most groups mentioned information being clear, concise and not too advanced as markers of trustworthiness. In the above transcript, the students added oversimplification of complicated matters as negatively affecting trustworthiness. When Kim read out loud on the page about “the autonomic nervous system…,” Dag concluded that “it’s definitely not too readable.” Here, readability was primarily discussed in relation to the language used on the page: “[more] difficult words about health.” In other groups, readability sometimes included disposition and layout: “The web page is structured in a way that makes it understandable and there are clear headlines and like... you understand what it says.”

In sum, the capability to critically reason became practiced as finding true facts by consulting scientific digital resources. The information as such was at the centre of attention and the use of scientific concepts was considered a marker for trustworthiness.

Summary of Findings

The four resulting themes illustrate how students participated in critical reasoning at various stages of the information retrieval process. Without the evaluation tool, the capability to critically reason became practiced as ways to quickly navigate the vast amount of online health information. The students frequently made unjustified evaluations of sources, oftentimes based on “gut feeling”. Even though the students identified a variety of critical questions related to the sender, the reader and the information as such, these questions were seldom directed integratively towards each source. Further, trustworthiness did not seem to be something fixed or given beforehand. Instead, the students practiced critical reasoning as a joint negotiation of what might count as trustworthy in the specific situation. The students further acknowledged that sources could be used for multiple purposes, and that each purpose opens up for new definitions of what trustworthiness could be. This influenced students’ source evaluation in that both expert-generated and user-generated sites were consulted.

Introducing GATOR as an evaluation tool enabled the students to structure their critical evaluation in that each source was discussed using a combination of various critical questions. As the capability to critically reason became practiced as ways to articulate and justify statements about sources, “gut feeling” was no longer referred to. Every single source was considered potentially biased, and the critical reasoning focused on how to use sources despite biases, inconsistencies and inaccuracies. However, using GATOR as a tool also constrained students’ capability to critically reason. The primary purpose of online information retrieval came to be accessing scientific facts only. The students no longer included sources based on experiences, opinions and emotions, and ceased to refer to their own purposes or values.

Discussion

Given the increased use of the internet for online health information retrieval (cf. van de Belt et al. 2013), our aim with this study was to explore how opportunities for students to develop critical reasoning could be created in science education. Drawing on Nussbaum’s (2011) notion of capabilities and functionings, the four resulting themes in students’ critical reasoning are discussed as a consequence of what was made possible in the task. An underlying concern in the Capabilities Approach is individuals’ freedom of choice in accordance with personal values and beliefs (Nussbaum 2011). Hence, an emerging question is whether the learning situation enabled students to develop critical reasoning in such ways.

Altogether, the findings of this study confirm the view of young people as active and engaged, yet cautious, when approaching health information on the internet (Fergie et al. 2013). The students in this study initially relied on “gut feeling”, made quick and unjustified source evaluations, and consulted a broad range of sources, including online encyclopaedias and social media platforms. Similar approaches to online health information among young people, as well as adults, have been confirmed in a number of earlier studies in the health literacy field (cf. Flanagin and Metzger 2007; Walraven et al. 2009; Gauld and Williams 2009). A common conclusion in previous research is that young people lack a critical attitude in the sense that they do not know sufficient evaluation criteria, or are unable to use and apply expert-derived checklists correctly (Walraven et al. 2009; Diviani et al. 2016). In contrast to findings in previous research by Johnson et al. (2015) and Diviani et al. (2016), the students in this study directed roughly the same amount and type of critical questions towards online sources both with and without an evaluation tool. This indicates that the students may be able to identify, and to some extent apply, a broad range of evaluation criteria. In contrast to previous suggestions (cf. Johnson et al. 2015), less attention should perhaps be paid to the number of criteria used by the students, and more emphasis placed on how they are used.

Affordances and Constraints of an Evaluation Tool

Drawing on the results of this study and Nussbaum’s (2011) distinction between capabilities and functionings, we argue that there are important constraints associated with the introduction of evaluation tools, and that these need to be considered in order to qualify students’ critical reasoning in science education.

Taking a Capabilities Approach opens up consideration of the ways students are able to function as critical readers given a specific task. Just like in the study by Walraven et al. (2009), the students in this study initially expressed a need to make quick decisions about source trustworthiness and valued easy access. However, in line with a suggested broader societal concern to maintain medical information quality and reproduce expert knowledge online (Conrad and Stults 2010), the students also expressed uncertainty about consulting sources not commonly accepted in science education. Students’ initial strategies, to accept online encyclopaedias and to trust their “gut feeling”, appeared to be pragmatic and realistic responses to the struggle of cautiously accessing relevant sources amid the information overload. Just like in the study by Fergie et al. (2013), the students initially consulted a broad range of sources, including user-generated sites, as an account of other people’s experiences and personal narratives. However, implementing the evaluation tool changed expressions of critical reasoning in ways that cannot be attributed solely to students’ initial unsupported online navigation. For instance, using the tool, the students directed their online information retrieval towards finding scientific facts only. Song et al. (2016) and Caiata-Zuffrey et al. (2010) report that people commonly have purposes far beyond verifying medical facts when browsing the web for health information. When people approach health information in everyday life, purposes other than searching for the truth are at stake, and thus senders other than medical experts and information other than scientific facts may become important. It is commonly argued that addressing complex societal issues, such as health, has the potential to broaden perspectives in science education to include, for example, ethics, emotions and personal narratives (Andrée and Lager-Nyqvist 2017).

An emerging question is in what ways the suggested lists of trustworthy online sources or expert-derived evaluation criteria (Lee et al. 2015; Meola 2004) open up for people to evaluate sources according to their own values and beliefs. If critical reasoning is reduced to the cumulative use of already established evaluation criteria, purposes other than finding scientific facts risk being overshadowed. Acknowledging capabilities as freedom of choice (Nussbaum 2011), we argue that science education ought to open up for each student to confront and explore a diversity of voices and perspectives on issues of personal relevance. By inviting students to a science classroom where what counts as trustworthy is given beforehand, opportunities to introduce democratic values and to include multiple perspectives may be lost (Andrée and Lundegård 2013). A challenge may be that school science is often presented as holding certain and already established knowledge; hence, students might not expect to be invited to confront science-related issues in critical ways (McClune and Jarman 2014).

Implications

This study illustrates how approaching health information and negotiating trustworthiness are embedded in specific practices with different tools and purposes, and how the evaluation tool provided to the students mediated their critical reasoning. The results indicate that the capability to critically reason reaches beyond the use of ready-made lists of evaluation criteria. An evaluation tool such as GATOR may be useful for structuring students' discussions and illustrating possible ways to scrutinise sources. Integrating and comparing various lists can serve as a potential starting point for students' own negotiation of trustworthiness. However, GATOR also constrains the students' functionings in evaluating health information. Science education therefore needs to move beyond ready-made evaluation criteria in order to develop young people's capability to deal with health information in a way that does not limit their freedom to orient themselves, to know themselves, and to make decisions in personally relevant health issues.

In line with the design-based research approach, we formulate implications of this study in the form of two tentative principles for designing learning activities that may enable students’ development of the capability to critically reason in science education:

  1.

    To support students’ access to, as well as their evaluation of, online sources.

    When surfing the web for specific information, students will come across a variety of sources and conflicting information. Therefore, opportunities to practise online navigation become crucial. Such opportunities could, for instance, be created in activities centred around how to narrow down and filter search results, how to backtrack primary sources presented in online encyclopaedias, or how abstracts can help summarise the content of scientific articles.

  2.

    To open up for students’ critical scrutiny of a broad range of sources in relation to both scientific value and personal relevance.

    Working with societal issues with scientific content in science education means approaching issues with no right or wrong answer. Therefore, opportunities to include a variety of sources and perspectives become important. Such opportunities may, for example, be created when the students themselves are encouraged to negotiate trustworthiness. Inviting students to participate in discussions in which trustworthiness is not predefined can encourage reflection on how online information relates to science as well as to their own values and beliefs.