
1 Introduction

The internet has become indispensable as a source of information and news. Given the vast amount of information available, as well as the large number of information sites, it has become increasingly difficult to judge the credibility of online information [1]. Metzger argues that in the past, traditional publishing houses acted as gatekeepers of the information published: there was a cost barrier to printing, and the print process allowed for quality control. In the digital age, anyone can be an author of online content. Digital information and content can be published anonymously, and easily plagiarized and altered [1, 2]. Online news platforms are in a continual race against time to be the first to publish, and in the process they sacrifice quality control. As a result, the gatekeeping function of evaluating the credibility of online information has shifted to individual users.

To date, scholars in information literacy have developed checklists to assist users in assessing the credibility of online information, as well as various theories and models to describe how users evaluate information in practice [2–5]. These models highlight aspects such as the influence of the user's subjectivity in evaluating content, the process of evaluation, and the cognitive heuristics that users typically apply during evaluation. The models also recognize that in the era of social computing and social media, evaluation has a strong social component [6].

In an overview of studies on the assessment of the credibility of online information, it was found that neither the established normative guidelines for evaluating credibility nor the descriptive models of how credibility is evaluated consider the quality of reasoning and argumentation contained in the information that is evaluated. This is surprising, since critical thinking is generally regarded as an important information literacy skill, as well as an important 21st century workplace skill [7, 8].

In this paper, we present a case for the use of critical thinking as a means to assess the quality and credibility of online content, and we suggest how critical thinking could be used to enhance current credibility assessment practices. The existing approaches share with critical thinking a concern for the credibility of the evidence that is presented to substantiate the findings of an online article. Whereas credibility models mainly focus on presentation and content, critical thinking extends the evaluation of content by evaluating the quality of the argument presented. Admittedly, many fake (and other) news stories contain limited if any arguments to evaluate. While the absence of an argument is not enough to discredit an online article, its presence can be used as a quality indicator: a weak argument will reduce the perceived credibility of the claim or finding of an article, while a strong argument will enhance its credibility.

This paper commences with a short overview of existing guidelines and descriptive models for evaluating the credibility of online information. The common themes among these models are summarized. Next, the paper introduces the building blocks of critical thinking and proceeds to indicate how critical thinking is used for argument evaluation. A means to assess the credibility of online information is proposed that uses critical thinking in a way that recognizes and builds on previous work related to credibility assessment.

2 Existing Research on Assessing the Credibility of Online Information

Credibility refers to the believability of information [4]. Credibility is regarded as subjective: it is not an objective attribute of an information source, but the subjective perception of believability by the information receiver [4, 9]. As such, two different information receivers can arrive at different assessments of the credibility of the same piece of information.

Research on assessing the credibility of online information can be categorized into research on normative guidelines (what people should look at when they assess credibility) and research on descriptive models or theories (how people assess credibility in practice).

2.1 A Checklist for Information Credibility Assessment

The normative approach to the assessment of information credibility is promoted by the proponents of digital literacy, who aim to assist internet users in developing the skills required for evaluating online information. Their assumption is that online information can be evaluated in the same manner as information found elsewhere [1]. A checklist approach is usually followed, where the list covers the following five components: accuracy, authority, objectivity, currency, and coverage or scope [1]. Accuracy refers to the degree to which the content is free from errors and whether the information can be verified elsewhere; it is an indication of the reliability of the information on the website. Authority refers to the author of the website, and whether the website provides contact details of the author and the organisation. It is also concerned with whether the website is recommended or endorsed by a trusted source. Objectivity considers whether the content is opinion or fact, and whether there is commercial interest, indicated for example by a sponsored link. Currency refers to the frequency of updates, and whether the date is visible. Coverage refers to the depth and comprehensiveness of the information [1]. In a checklist approach, a user is given a list of questions or things to look out for. For example, in terms of currency, the user has to look for evidence of when the page was last updated.
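To make the checklist concrete, the sketch below (in Python) represents the five components as a simple data structure with a count of satisfied components. The class name, fields and yes/no scoring are our own illustrative assumptions and do not come from the checklist literature.

```python
from dataclasses import dataclass

# Hypothetical encoding of the five checklist components [1];
# field names and the simple yes/no scoring are illustrative only.
@dataclass
class CredibilityChecklist:
    accuracy: bool     # content free from errors and verifiable elsewhere
    authority: bool    # author/organisation identified, contact details provided
    objectivity: bool  # fact rather than opinion, no commercial interest
    currency: bool     # update date visible and recent
    coverage: bool     # depth and comprehensiveness of the information

    def satisfied_components(self) -> int:
        """Count how many of the five components the page satisfies."""
        return sum([self.accuracy, self.authority, self.objectivity,
                    self.currency, self.coverage])

# Example: a page with a named author and a visible update date,
# but with unverifiable, one-sided and shallow content.
page = CredibilityChecklist(accuracy=False, authority=True, objectivity=False,
                            currency=True, coverage=False)
print(page.satisfied_components())  # -> 2
```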

In a series of studies conducted by Metzger and her colleagues [1], it was found that even when supplied with a checklist, users rarely used it as intended. Currency, comprehensiveness and objectivity were checked occasionally, while checking an author's credentials was the step users were least likely to take. This correlates with findings by Eysenbach and Köhler [10], whose study participants did not search for the sources behind the website information or investigate how the information was compiled. This lack of thoroughness is ascribed to users' unwillingness to expend cognitive effort [6]. The apparent attempt by users to minimise cognitive effort has given rise to studies on how users apply cognitive heuristics and other means to assess credibility more quickly and with less effort. This research led to the development of a number of descriptive models and theories on how users assess credibility in practice.

2.2 Descriptive Models and Theories Related to Information Credibility Assessment

The Use of Cognitive Heuristics.

A number of studies indicate that internet users avoid laborious methods of information evaluation, and that they prefer superficial cues, such as using the look and feel of a website as a proxy for credibility rather than analyzing the content [5, 6, 11]. When evaluating credibility, people tend to apply cognitive heuristics: mental shortcuts or rules of thumb. Based on their previous experience, people respond to cues and act on them subconsciously, without the need to spend mental effort [6, 12, 13]. Five heuristics have been identified that users commonly apply to decide on the credibility of online content [6]. The reputation heuristic is applied when users recognize the source of the information as one they believe to be reputable, possibly because of brand familiarity or authority. The endorsement heuristic means that a source is believed to be credible if other people believe so too, whether people the user knows or people who have given it a good rating. The consistency heuristic means that if similar information about something appears on multiple websites, the information is deemed to be credible. The expectancy violation heuristic is a strong negative heuristic: information that is contrary to the user's own beliefs is not deemed to be credible. Lastly, when using the persuasive intent heuristic, users assess whether there is an attempt to persuade them or sell something to them; in this case, the information is perceived as not credible because there is a perceived ulterior motive or an attempt to manipulate the user.
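As an illustration only, the five heuristics can be encoded as a simple enumeration. The identifier names and one-line trigger descriptions below are our own shorthand paraphrases of the discussion above, not established terminology from [6].

```python
from enum import Enum

# The five credibility heuristics described in [6]; names and
# trigger descriptions are our own illustrative paraphrases.
class CredibilityHeuristic(Enum):
    REPUTATION = "source is recognized as reputable (brand familiarity, authority)"
    ENDORSEMENT = "other people (known contacts or raters) believe the source"
    CONSISTENCY = "similar information appears on multiple websites"
    EXPECTANCY_VIOLATION = "information contradicts the user's beliefs (negative cue)"
    PERSUASIVE_INTENT = "perceived attempt to persuade or sell (negative cue)"

for heuristic in CredibilityHeuristic:
    print(f"{heuristic.name}: {heuristic.value}")
```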

The Prominence-Interpretation Theory.

The Prominence-Interpretation theory comprises two interlinked components that describe what happens when a user assesses the credibility of a website [14]. First, a user notices something (prominence), and then they interpret what they see (interpretation). If either of the two components is missing, there is no credibility assessment. A user will notice existing and new elements of a website and interpret these elements for credibility in an iterative fashion until they are satisfied that a credibility assessment can be made. Alternatively, the user may stop when they reach a constraint, such as running out of time [14]. A visual representation of the Prominence-Interpretation theory is provided in Fig. 1.

Fig. 1. Prominence-interpretation theory [14]

Prominence refers to the likelihood that certain elements will be noticed or perceived by the user. The user must first notice an element for it to contribute to a judgement of the credibility of the information; if the user does not notice the element, it plays no role. Five factors are identified that influence prominence, namely involvement, topic, task, experience and individual differences. The most dominant influence is user involvement, referring to the user's motivation and ability to engage with content. Topic refers to the type of website the user visits. Task is the reason why the user is visiting the website. Experience refers to the experience of the user in relation to the subject or topic of the website. Individual differences refer to the user's learning style, literacy level or need for cognition. When a user's involvement is high and their experience is at expert level, they will notice more elements [14].

Interpretation refers to the user's judgement of the element under review. For example, a broken link on a website will be interpreted negatively and lead to a lower credibility assessment of the website. Interpretation of elements is affected by a user's assumptions, skills, knowledge and context.
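The notice-then-interpret cycle can be sketched as a simple loop. The sketch below is not part of the theory in [14]: the probabilistic treatment of prominence, the +1/-1 interpretations, the satisfaction threshold and the time budget are all invented here for illustration.

```python
import random

def assess_credibility(elements, satisfaction_threshold=3, time_budget=10):
    """Toy notice-then-interpret loop: stop once enough cues have been
    interpreted (satisfaction) or a constraint (time budget) is reached."""
    judgements = []
    for step, element in enumerate(elements):
        if step >= time_budget:                       # constraint reached, e.g. out of time
            break
        if random.random() > element["prominence"]:   # element not noticed:
            continue                                  # it plays no role
        judgements.append(element["interpretation"])  # +1 positive, -1 negative cue
        if len(judgements) >= satisfaction_threshold: # enough cues to decide
            break
    # overall judgement, or None if nothing was noticed (no assessment made)
    return (sum(judgements) >= 0) if judgements else None

elements = [
    {"name": "professional layout", "prominence": 0.9, "interpretation": +1},
    {"name": "broken link",         "prominence": 0.4, "interpretation": -1},
    {"name": "author credentials",  "prominence": 0.2, "interpretation": +1},
]
print(assess_credibility(elements))
```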

Consolidation.

When comparing the research on the use of heuristics [6] with the Prominence-Interpretation theory [14], one can see that the use of heuristics fits well into the “interpretation” component of Prominence-Interpretation theory.

A Web Credibility Framework.

Fogg’s web credibility framework [15] contains the categories of operator, content and design. Operator refers to the source of the website: the person or organisation that runs and maintains it, on which basis a user makes a credibility judgement. Content refers to what the site provides in terms of information and functionality. Of importance are the currency, accuracy and relevance of the content, as well as endorsements by a respected outside organisation. Design refers to the structure and layout of the website. Design has four elements, namely information design (the structure of the information), technical design (the functioning of the site on a technical level, including its search function), aesthetic design (the look, feel and professionalism of the design) and interaction design (user experience, user interaction and navigation) [15].

The web credibility framework was extended by Choi and Stvilia [3] who divided each of the three categories (operator, content and design) into the two dimensions of trustworthiness and expertise, thereby forming what is called the Measures of Web Credibility Assessment Framework.

Consolidation.

When consolidating the web credibility framework [15] and its extension [3] with the work on credibility assessment presented in the prior sections, one can say that the web credibility frameworks contribute to both prominence and interpretation. The web design contributes to the prominence, or noticeability, of the information. Further, the level of professionalism of the design can be interpreted by means of a heuristic such as the reputation heuristic. The website operator and content, when noticed, are interpreted by means of evaluation heuristics. Hence, the work presented in the preceding parts of Sect. 2.2 can be reconciled into different aspects of online information that, when noticed, are interpreted by means of heuristics.

Iterative Models on the Evaluation of Online Information.

According to the Prominence-Interpretation theory [14] the interpretation of information occurs in an iterative fashion until a credibility assessment can be made. Two other models also recognize the iterative nature of credibility assessment. These are the cognitive authority model [2] and Wathen and Burkell’s model [16].

With the cognitive authority model, the information seeker iteratively assesses the authority and credibility of online content by considering the author, document, institution and affiliations [2], and integrates these into a credibility judgement. The model is similar to the checklist [1], but proposes that users employ the technology available to them to make the judgement. Like the checklist, the cognitive authority model is normative.

Wathen and Burkell [16] also propose an iterative form of assessment. According to their research, users first perform a surface credibility check based on the appearance and presentation of the website. Secondly, the user looks for message credibility by assessing the source and the content of the message. Lastly, the content itself is evaluated. During this final stage, sense-making of the content occurs, depending on factors such as the user's prior knowledge of the topic. If, at any stage, the user becomes aware of a reason to doubt the credibility of the information, the iterative process stops. Wathen and Burkell's model [16] is normative but also incorporates descriptive research on information evaluation.

2.3 A Synthesised Summary of Existing Work on the Credibility Assessment Process

To synthesise the joint findings from previous work on credibility assessment of online information:

  • Credibility cues need to be noticed before they are processed [14].

  • The evaluation process is iterative and moves from surface level checks (such as look and feel of a website) through to engagement with the content [14, 16].

  • From the outset of the evaluation process, cognitive or judgmental heuristics are applied to assess credibility. This is especially true during the interpretation phase, when a user evaluates the content itself [1, 4–6]. Judgmental heuristics are used to reduce cognitive effort, as the user is inundated with information.

  • The evaluation process takes place in a social context, and some of the evaluation cues are socially generated, such as the number of website visitors, user recommendations or social rankings [6].

In the section that follows, the principles of critical thinking are introduced, in order to assess how critical thinking might be used to evaluate online content in the light of what is already known about credibility evaluation.

3 Critical Thinking

The Foundation for Critical Thinking describes critical thinking as “that mode of thinking - about any subject, content, or problem - in which the thinker improves the quality of his or her thinking by skillfully analyzing, assessing, and reconstructing it” [17]. Some authors consider it an indispensable skill in problem solving. Halpern suggests a taxonomy of critical thinking skills covering a broad range of skills: (1) verbal reasoning skills, (2) argument analysis skills, (3) skills in thinking as hypothesis testing, (4) dealing with likelihood and uncertainty, and (5) decision-making and problem-solving skills [18]. The aspect of critical thinking of interest in this paper relates to the analysis of arguments. A useful definition of critical thinking is therefore the one suggested by Tiruneh and his co-authors [19]: critical thinking is the ability to analyse and evaluate arguments according to their soundness and credibility, respond to arguments and reach conclusions through deduction from given information. Booth et al. [20], basing their work on ideas of Toulmin et al. [21], consider a basic argument to consist of a claim (or conclusion) backed by reasons, which in turn are supported by evidence. An argument is stronger if it acknowledges and responds to other views and, if necessary, shows how a reason is relevant to a claim by drawing on a general principle (referred to as a warrant).

The following argument, adapted from [20: 112], illustrates these components: “TV violence can have harmful psychological effects on children” (CLAIM), “because their constant exposure to violent images makes them unable to distinguish fantasy from reality” (REASON). “Smith (1997) found that children ages 5–7 who watched more than 3 h of violent television a day were 25% more likely to say that what they saw on television was ‘really happening’” (EVIDENCE). “Of course, some children who watch more violent entertainment might already be attracted to violence” (ACKNOWLEDGEMENT). “But Jones (1999) found that children with no predisposition to violence were as attracted to violent images as those with a violent history” (RESPONSE).

Booth and his co-authors [20: 114] use the following argument to illustrate the use of a warrant: “We are facing significantly higher health care costs in Europe and North America (CLAIM) because global warming is moving the line of extended hard freezes steadily northward” (REASON). In this case, the relevance of the reason to the claim is established by a general principle: “When an area has fewer hard freezes, it must pay more to combat new diseases carried by subtropical insects no longer killed by those freezes” (WARRANT).

Of course, good arguments often need more than one reason in support of their conclusions, and complex arguments contain sub-arguments. However, the main components remain the same. Figure 2 summarizes the main components of a basic argument.

Fig. 2. The core components of an argument [20: 116]

Critical thinking entails the identification of the core components of an argument (analysis) in order to judge its quality and credibility and to formulate a response to it. According to Butterworth and Thwaites [7], a good quality argument is one where the reasons are true or justified and where the conclusion follows from the reasons. By using these criteria in the evaluation of arguments, classical fallacies such as the post hoc fallacy or circular reasoning can be identified. In addition, the evaluation of an argument entails asking questions and finding counterexamples. A good quality argument will pre-empt objections or counterexamples and respond to them. Butterworth and Thwaites [7] consider a credible argument to be one that is plausible or believable (acknowledging that some highly improbable claims can be true) and that comes from a trusted source. Credibility is enhanced if the claim is corroborated by different sources with different kinds of evidence.
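To illustrate the components involved, the sketch below encodes a basic argument (claim, reasons, evidence, acknowledgement, response, warrant) as a data structure and flags missing components, loosely following the criteria above. The class, field names and checks are our own illustrative assumptions, not a method proposed in [7, 20, 21].

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical encoding of the core argument components discussed above.
@dataclass
class Argument:
    claim: str
    reasons: List[str]
    evidence: List[str]
    acknowledgement: Optional[str] = None
    response: Optional[str] = None
    warrant: Optional[str] = None

def quality_flags(arg: Argument) -> List[str]:
    """Rough quality indicators loosely based on the criteria in [7, 20]."""
    flags = []
    if not arg.reasons:
        flags.append("claim has no supporting reasons")
    if not arg.evidence:
        flags.append("reasons are not backed by evidence")
    if arg.acknowledgement and not arg.response:
        flags.append("objection acknowledged but not answered")
    return flags

# The TV-violence example from [20: 112], encoded in this structure.
tv_violence = Argument(
    claim="TV violence can have harmful psychological effects on children",
    reasons=["constant exposure to violent images blurs fantasy and reality"],
    evidence=["Smith (1997): heavy viewers more likely to say TV is 'really happening'"],
    acknowledgement="some children may already be attracted to violence",
    response="Jones (1999): non-predisposed children equally attracted to violent images",
)
print(quality_flags(tv_violence))  # -> [] (all core components present)
```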

3.1 The Use of Critical Thinking in the Context of Existing Credibility Assessment Models

It is suggested that critical thinking be included in the credibility assessment process as follows. With reference to the Prominence-Interpretation theory [14], critical thinking can be applied during the interpretation phase, where it can be used to assess the quality of the evidence as well as to evaluate the argument itself. It would only be used at a later stage in the iterative process of credibility assessment, possibly in the third phase of Wathen and Burkell's iterative model [16].

4 Discussion: Potential Challenges to Using Critical Thinking to Assess the Credibility of Online Information

When considering the use of critical thinking to evaluate the credibility of online information, some challenges are apparent.

First, as indicated earlier, users who are flooded with information apply judgmental heuristics as a coping mechanism to reduce cognitive effort. They therefore prefer cues that give them immediate reason to believe or disbelieve the information presented to them. Argument evaluation is an exercise that requires cognitive effort, especially when a complex claim is presented. Users will therefore not go to the effort of thoroughly evaluating an argument unless they are highly motivated to do so, for example when university students are looking for material to support the arguments in their essays.

A second challenge to the use of critical thinking in this context is that online news or other online content does not always contain an argument. A piece of news on social media may consist only of evidence. In that case, critical thinking would require the evaluation of the credibility of the evidence.

A third possible challenge is that, in an effort to mislead, the author of fake news may present a credible-looking argument on the basis of fake evidence that cannot readily be verified. Hence, while good argumentation is often associated with good quality content, this may not always be the case. However, the cognitive effort of second-guessing the veracity of a well-presented argument is so high that this is not a feasible task in everyday credibility assessment situations.

4.1 Addressing the Challenges

The above-mentioned challenges could be addressed as follows.

The challenge of the cognitive effort required for critical thinking may be addressed by means of training. As motivated earlier in the paper, critical thinking forms part of information literacy and is an important 21st century skill. Training and regular practice in argument evaluation will make it a habit, so that it can be applied with less effort. A number of universities have compulsory first-year information literacy courses, and this is where critical thinking can be introduced. The authors are involved in the teaching of critical thinking and problem solving to first-year IS students. A study with pre- and post-assessment exercises to determine the effect of the course was conducted during the first half of 2019. A total of 154 students participated in the pre-assessment and 166 students in the post-assessment. The objective of the course was not to train students to identify fake news, but to analyse and evaluate arguments and to cultivate a critical attitude towards reading and interpreting texts.

Findings from a Course on Critical Thinking.

Pre-assessment: During the pre-assessment, students were asked several questions to test their critical thinking skills and one question to determine the credibility of a piece of information found online. The information presented to them [22] was a piece of fake news presenting an argument against the use of prison inmates to provide laughter in CBS sitcoms. In the pre-assessment, only 16% of students could identify it as fake news. Students who identified it as fake news applied most of the cognitive heuristics listed in Sect. 2.2. For example, a few students knew that The Onion is a website known for its satirical articles (reputation heuristic). A handful of students applied the expectancy violation heuristic (“It just doesn’t make sense to me honestly”; “In today’s day and age, such practice would never be accepted seeing as people get offended by even the most futile things”; “In today’s age, laughter can be produced on computers or a group of laughs taken once and then played back whenever the producers feel”). The consistency heuristic was also used (“This is my first time hearing about it”). Quite a number of students pointed out the lack of credible evidence.

Post-assessment: In the post-assessment, questions were asked to assess critical thinking skills in general, and the last question focused on fake news. Two different pieces of information were provided, one fake news and the other not (see Table 1).

Table 1. Article 1 and Article 2

Both articles contained far-fetched claims. Article 1 [23] is an argument containing unsubstantiated claims, sweeping statements and emotional language. Article 2 [24] was sourced from a ‘strange but true’ Sky News site and is a report based on claims backed by credible evidence. Students were asked to determine which one was fake news and to provide an argument for their choice. The results are given in Table 2.

Table 2. Responses on question to identify articles as fake news

Article 1.

Students who correctly identified Article 1 as fake news (56% of students) typically mentioned the relative obscurity of the website and the absence of names of experts (“there is said that experts were used in the article but none of the so called “experts” names or institutions were called to show the research”). In other words, they applied the reputation heuristic. One student applied the expectancy violation heuristic to (incorrectly) identify Article 1 as real news: “Article 1 can be seen as real news because the facts are not absurd”. What was clearly noticeable was that, in their assessment, most students used the critical thinking skills taught during the semester: they pointed out that the claims are not supported by evidence (“they state that there are parents who burned?? the film but no numbers are provided it could be 2 out of 1000 but nothing is stated to prove this reason”). They further mentioned the subjective nature of the article (“The use of adjectives such as “arrogant”, “disrespectful”, “envious” makes the article sound extremely biased”) as well as its harsh language (“The article is also very opinionated and the language used is quite harsh”). They also found the reasoning to be faulty (“And the argument is unstructured”; “the ‘reasoning’ doesn’t lead up to a suitable conclusion.”).

Article 2.

Students who incorrectly identified Article 2 as fake news (56% of students) generally used the expectancy violation heuristic: they could not imagine sheep being school pupils (“Although article 2 comes from a reliable source the facts are absurd. [However] Article 1 can be seen as real news because the facts are not absurd”).

Discussion.

Article 1 is an argument, whereas Article 2 is a report. This explains why students were able to use critical thinking skills to evaluate Article 1. In Article 2, where no clear argument was present, critical thinking could only be applied to evaluate the evidence. Students found the evidence to be specific and traceable, which contributed to its credibility. Only 36% of students were able to classify both articles correctly. However, the fact that only 8% said that neither article was fake news was encouraging, compared to the pre-assessment, where 84% of students were not able to recognize the supplied article as fake news. The post-assessment results indicate that most students had developed a critical attitude towards the supplied texts.

Recommendations on Combining Critical Thinking and Cognitive Heuristics.

The use of critical thinking skills in identifying fake news can be complemented by applying the consistency heuristic [6] to search for other online sources that carry similar evidence.

Lastly, since the assessment of the credibility of online information has been found to be a socially interactive activity [6], the endorsement heuristic could be used to inquire on a social platform whether information is credible. For example, a hoax-debunking website can be visited to see whether the information has been exposed by other users as a hoax.

5 Conclusion

This paper considered the work that has been done to date on the assessment of the credibility of online information. A concise overview was presented of some of the major contributions in this domain, and these contributions were synthesized into a list of common attributes that represent the key characteristics of credibility assessment models. Following this, the elements of critical thinking were introduced, and suggestions were made as to how critical thinking could be used for credibility assessment. The challenges related to the use of critical thinking in practice were also considered, and suggestions were made to overcome these challenges. The effect of teaching critical thinking skills on IS students’ ability to identify fake news was discussed. Preliminary findings show that where fake news is presented as an argument, students use their skills of argument analysis and evaluation to identify it; where fake news takes the form of a report, students look for quality evidence.

This paper contributes to the literature on the assessment of the credibility of online information. It argues that, and suggests how, the important 21st century skill of critical thinking can be applied to assess the credibility of online information. In doing so, it makes a contribution towards the responsible design, implementation and use of present-day information and communication technology.