Introduction

Self-regulated learning rarely happens in isolation. Learners are, at least indirectly, often confronted with other people’s knowledge, opinions and judgments. This may include a fellow student’s task solutions or information that anonymous individuals post in social media, edit in online encyclopedias or answer in online learning environments. However, it is still largely unknown how such information affects self-regulated learning processes. In metacognition research, several indicators have been identified that individuals use when regulating their learning process (e.g., deciding what to learn; Nelson & Narens, 1990). One of those indicators is the subjective confidence learners have in their knowledge or assumptions, which is often connected to specific solutions to problems and thus closely linked to (self-)testing. Subjective confidence ratings are understood to be based on the learners’ metacognitive processing of internal information, but may also be influenced by external factors when learners interact with their social environment and perceive socio-cognitive information within this environment, such as other people’s knowledge or opinions. Via internal re-evaluations of one’s own knowledge, socio-cognitive information may ultimately influence the regulation of learning processes in individuals. Thus, although the focus of research on self-regulated learning and metacognition typically lies on the individual, regulating learning often has an inherent social component via social influence or social interaction processes (Järvelä & Hadwin, 2013).

With various information and opinions from more or less reliable sources available on the internet (e.g., in social media), possible social influences on self-regulated learning are increasingly important, especially in online-learning settings, where constraints of the communication medium might lead learners to give more weight to the available social cues (Walther, 2011). Thus, the underlying processes are of specific interest. Research on conformity (e.g., within social psychology) has shown effects of other people’s assumptions and opinions for decades (Bond & Smith, 1996), and it seems reasonable to assume that such processes influence learning as well. Yet, despite its relevance, little research has thoroughly looked into the underlying processes from a metacognitive point of view (although there are exceptions within metacognition research, e.g., Koriat et al., 2016; or social psychological research, e.g., McGarty et al., 1993). Thus, the purpose of this paper is to empirically contribute to research on the influence of social information on learners from a metacognitive perspective and to shed more light on the underlying processes that make learners regulate their learning within social settings.

The theoretical background section will be structured as follows: Firstly, we will describe how metacognition and confidence judgments shape self-regulated learning processes and how these judgments are formed. Secondly, we will describe how these judgments can be influenced by social information and how source and own confidence affect this process, before deriving and empirically testing our hypotheses.

Metacognitive regulation of learning processes

Theories on self-regulated learning involving metacognition usually suggest a cyclic model in which learners monitor their learning processes and exert control based on this information. Nelson and Narens (1990) define the cognitive processes that occur during learning as being split into at least two interrelated levels, the meta- and the object-level. According to this framework, learning processes and outcomes are on the object-level, and learners create a dynamic model of them on the meta-level. Two processes lead to a flow of information between these two levels: metacognitive monitoring and control (Nelson & Narens, 1990).

Monitoring is crucial as learners determine how well they understand certain material, how their learning is progressing, or how confident they are in their knowledge and—as a result—use this information as a basis to control their learning, for example, by allocating study time and deciding on appropriate learning strategies to be conducted within a given time frame (e.g., information searching, re-reading, help-seeking). Empirically, monitoring learning processes and outcomes has repeatedly been found to influence study decisions and therefore regulation of study (e.g., Metcalfe & Finn, 2008; Nelson & Leonesio, 1988; Nelson et al., 1994) and ultimately, learning outcomes (e.g., Dignath et al., 2008; Thiede, 1999; Thiede et al., 2003). To understand how self-regulated learning processes unfold under social influence, it is thus mandatory to understand what contributes to the development of learners’ monitoring judgments and how control decisions are influenced by social information.

One important indicator learners use when making study decisions is the confidence in the correctness of an assumption regarding a topic in question. Thus, in the following two subsections, we will first look at confidence and its role in making study decisions, before looking more closely into how confidence judgements are formed.

Confidence in learning and decision making

Confidence is a subjective indicator for the accuracy of learners’ actual knowledge on a subject (see Nelson & Narens, 1990). Confidence judgments are made after learners retrieve information from memory, usually prompted by a question requiring a response (Dunlosky & Metcalfe, 2009), and thus can be used as a judgment about a particular chunk of knowledge. These retrospective confidence judgments are thus based on subjective evaluations of the validity of one’s own knowledge and assumptions.

Confidence in knowledge can theoretically also be viewed as an inherent part of knowledge itself. Hunt (2003), for example, argues that the term “knowledge” does not refer to some objectively correct belief, but that this belief must be held with a certain amount of confidence. Confidence here represents the trust learners put into their own understanding of and knowledge about a subject. This link sets confidence apart from other metacognitive judgments about the learning process itself (see Nelson & Narens, 1990) and widens the relevance of these judgments beyond guiding learning processes (e.g., Schnaubert & Bodemer, 2017) to making other relevant life decisions. While learners may use this meta-level information to make adequate study decisions (Robey et al., 2017; Schnaubert & Bodemer, 2017), these judgments can also be linked to informed decision making (see Miller et al., 2015) and thus have been studied in other areas (like eyewitness research) as well.

When learners judge the confidence they have in the correctness of their response to a specific task or item, the judgment may rely on analytic and non-analytic processes and reflect the learners’ metacognitive experiences during task completion (Efklides, 2008). As they are linked to self-testing, confidence judgments are often assumed to be fairly accurate predictors of knowledge accuracy (e.g., Costermans et al., 1992; Maki, 1998), although, empirically, this is not always the case (e.g., Schnaubert & Bodemer, 2017). Because confidence judgments are linked to specific assumptions, they seem well suited to guide learners’ study efforts towards specific content that still needs to be learned (Rawson & Dunlosky, 2007), a link which has been empirically confirmed in various studies (e.g., Schnaubert & Bodemer, 2016, 2017).

In sum, response confidence judgments depict the learners’ metacognitive judgments regarding their accuracy of knowledge and may be used to guide learning, for example, the learners’ search for information, allocation of study time or the choice of specific learning strategies. Thus, it is vital to understand how these judgments are formed.

Sources of confidence judgments

Most research examining possible sources of metacognitive judgments focuses on internally generated information. One prevalent theory in research on metacognitive judgments is that learners base them on different cues (Brewer et al., 2005; Koriat, 1997; Koriat et al., 2008). Various aspects of the learning process, including cognitive processes and memory traces, can serve as cues for learners when judging their comprehension or memorization of information. For example, the ease with which information comes to mind (retrieval fluency) can serve as a cue for judging one’s confidence in an answer (Kelley & Lindsay, 1993), and likewise, other cues such as accessibility or the quality of retrieved information can be used to form metacognitive judgments (see Dunlosky & Nelson, 1992; Nelson et al., 1994). However, other information may also affect learners’ confidence. For example, if learners do not know the answer to a specific question, they may try to solve the task by inductive inference (Gigerenzer et al., 1991). In this scenario, learners may use probability cues to judge an answer as being more or less likely to be correct. The perceived validity of the cues used to infer the answer (cue validity) will then determine the confidence judgment, which represents the perceived probability of the answer being correct (Gigerenzer et al., 1991). Thus, in this model, learners look for cues that may help them answer the question. The amount and (perceived) validity of evidence supporting specific assumptions also empirically seem to affect confidence (Koriat et al., 1980). Such cues may be shared across a population, potentially adding to group consensus and positive correlations between consensuality and confidence (Prototypical Majority Effect; see Koriat, 2019; Koriat et al., 2016), but they mostly reflect internal information. However, available external (social) information such as other people’s opinions or assumptions might also be used as a cue when learners judge their knowledge and form confidence in specific assumptions. Thus, in the following section, we will take a closer look into the mechanisms of socio-cognitive influence on learning.

The influence of socio-cognitive information on learners

Learners can be influenced by environmental and behavioral events when they interact with a social context; for example, social information may influence learners’ cognitive processes and self-regulated learning (Carvalho Filho & Yuzawa, 2001; Järvelä et al., 2013; Zimmerman, 1989). This refers to situations where learners interact with each other physically (face-to-face communication) as well as digitally (computer-mediated communication). In the following, we will briefly describe how socio-cognitive information may influence learners’ assumptions about a topic, their metacognitive evaluations of these assumptions, and also their control decisions, tapping into various research areas. Further, we will discuss how one’s own and others’ metacognitive evaluations may influence these adjustments and how others’ metacognitive self-evaluations affect their perceived competence as a source. We will start with the influence of other learners’ assumptions on a learner’s assumptions and confidence.

Agreeing and disagreeing others

Probably the most widely known effect of socio-cognitive information on the assumptions of an individual concerns the assumptions of a social community or group. As prominently shown by Asch (1951, 1956), the opinions or assumptions of a group may not only create uncertainty, but also influence the assumptions of individuals, to the extent that they may conform to objectively incorrect answers (Bond & Smith, 1996). More specifically, agreement or disagreement with a reference group was shown to influence the confidence of individuals in their answers: In the study of McGarty and colleagues (1993), the participants’ confidence increased when they found out that a reference group agreed with their assumption, and likewise decreased when they disagreed with the reference group. Furthermore, this influence was moderated by the stimulus information at hand: with limited stimulus information available, the subjects were more likely to be influenced by others. Taking a metacognitive perspective, the assumptions of other individuals might be used as an external cue validating or contradicting one’s own initial assumptions, especially in the case of limited information requiring inductive inference (see Gigerenzer et al., 1991). Socio-cognitive information on assumptions should thus strengthen or weaken the confidence learners have in the correctness of their knowledge or assumptions, leading them to converge towards the assumptions put forward by others. Although holding majority views may not always be due to social influence, but to shared cues individuals consult when forming assumptions (see Koriat, 2019), agreement or disagreement with others seems to play a role in the individual formation of assumptions or opinions and the confidence in them (see also Koriat et al., 2018; Yaniv et al., 2009).

Such changes on the meta-level should also affect learning processes as part of the metacognitive self-regulation cycle. When discussing how social information may affect metacognitive self-regulation of learning processes, it is important to look beyond metacognitive monitoring and re-evaluation in light of external information, and more specifically at how learners use the information to steer further learning processes (i.e., regulate their learning). One area that is specifically concerned with potentially beneficial learning processes induced by the social situation is research on socio-cognitive conflicts.

Conflicting assumptions have received considerable attention in research on socio-cognitive conflicts, which is especially relevant here, as such conflicts were shown to influence regulatory processes (e.g., Schnaubert & Bodemer, 2016, 2019), although the exact metacognitive mechanisms remain unclear. Being confronted with opposing points of view from a social source (e.g., within a social environment or during social interaction) is commonly described as a socio-cognitive conflict (Bell et al., 1985; Buchs et al., 2004; Butera et al., 2010). Socio-cognitive conflicts can be beneficial for learning, because they may lead to a restructuring and reorganization of cognitions in individuals (Bell et al., 1985; Johnson et al., 1998). Furthermore, in a learning context, they are assumed to trigger regulatory processes, particularly by producing uncertainty (Schnaubert & Bodemer, 2016), making individuals reconsider their own assumptions and even integrate others’ assumptions (Butera & Buchs, 2005; Butera et al., 2010). Empirically, conflicting assumptions have been found to increase epistemic curiosity (Lowry & Johnson, 1981) and guide the search for additional information (Schnaubert & Bodemer, 2016, 2019).

Taken together, research in this area clearly suggests that socio-cognitive conflicts may alter learning processes of individuals—which is of particular relevance considering the ubiquity of potentially conflict-inducing information, especially during self-regulated online learning. However, looking at these processes from a metacognitive point of view, things get less obvious: How exactly does the external information, especially conflicting information, influence the learning processes of individuals? Do conflicts in the form of diverging assumptions trigger metacognitive control processes (e.g., search for additional information) directly, or do they merely act as a contradicting cue to induce doubt and uncertainty in learners (see also Quiamzade & Mugny, 2001)?

We argued above that social information may lead to adjustments in confidence and assumptions by providing a (social) cue supporting or weakening the learner’s preferred assumption. This adjusted confidence is then assumed to form the basis for study decisions, in line with metacognition theory. However, as of yet, there is no empirical research testing whether the conflict itself triggers additional learning processes that go beyond the influence of metacognitive re-evaluations.

While disagreeing others may evoke socio-cognitive conflicts and have a direct informational influence on individual assumptions, there are other socio-cognitive influences to consider which may affect individual learning processes. Thus, we will look into the role of metacognitive self-evaluations of others (i.e., source confidence) in affecting self-regulated learning.

Source confidence

Metacognitive self-evaluations of other learners, i.e., their confidence in assumptions, may provide additional information that can help learners to judge (1) the assumptions put forward by another individual, (2) the individual him-/herself, and (3) the task at hand. Through these various mechanisms, metacognitive self-evaluations of others may impact self-regulated learning.

(1) Judging assumptions

Given that learners assign higher confidence to assumptions they believe to be correct, it seems reasonable to assume they consider confident assumptions to be more valuable. As mentioned earlier, perceived cue validity is an important factor when using cues to judge one’s own knowledge or assumptions (Gigerenzer et al., 1991). Accordingly, various research suggests that source confidence affects the individual’s evaluation of incoming information (e.g., Brewer & Burke, 2002; Cutler et al., 1988; Whitley & Greenberg, 1986). Thus, assumptions presented with higher confidence may have a greater effect on conformity and re-evaluation processes. Yet, to our knowledge, this effect has not been studied in learning settings so far, so it is of interest whether and how it applies to this context.

(2) Judging the source

Source confidence not only has an effect on how incoming information is perceived, it might also influence the perception of the source itself. Advice-taking research highlights the role of source confidence in judging advisors (e.g., Lee & Dry, 2006; Price & Stone, 2004; Sniezek & Van Swol, 2001). Research in this area suggests that high confidence in assumptions leads to a source being perceived as more knowledgeable, credible and even competent (Anderson et al., 2012). Conceptually, this is closely related to research on source credibility, which emerged in the past century and has gained considerable attention since. A review on source credibility concluded, for example, that sources perceived as highly credible are preferred over sources low in credibility, and that highly credible sources have greater persuasive power (Pornpitakpan, 2004). Credibility in the learning context may largely be attributed to the expertise of a source, although other dimensions such as trustworthiness may also be relevant in other settings (see Hovland et al., 1953). In sum, it is important to acknowledge the role that source confidence might play in learning settings, not only with regard to valuing specific (confident) assumptions, but also because learners may perceive highly confident individuals as more competent, which may affect the dynamics of social learning.

(3) Judging the task

Apart from confidence being used to judge incoming information or the source of the information, it may also provide valuable information about the task at hand. Karabenick (1996) found that questions indicated by a co-learner influence learners’ self-judged comprehension of text material (metacomprehension) and self-reported confusion, with more co-learner questions leading to more confusion in learners. Karabenick (1996) attributes this shift in metacomprehension in reaction to others’ questions to a change in the perceived difficulty of the stimulus material. This effect may be similar if learners are informed that other learners are less confident in their responses to a question than they are themselves. According to Efklides (2008), metacognitive confidence can be described as a metacognitive experience that serves as a link between an individual’s knowledge and a task at hand. Accordingly, it seems reasonable that learners may use this information to adjust their understanding of task difficulty. This is in line with research by Zhao and Linderholm (2011), who found learners anchoring their metacognitive judgments to other learners’ performance. Thus, the confidence other learners assign to their assumption may affect an individual’s confidence independent of the assumption itself. In sum, socio-cognitive information has been shown to influence an individual’s assumptions as well as metacognitive evaluations.

Individual confidence

Last but not least, the impact of socio-cognitive information on learners may depend on characteristics of the learners in question. One important characteristic of individuals in learning settings (when confronted with external information) is their own confidence in assumptions. A common conception about the role of own confidence in conformity is that the lower one’s own confidence in an assumption is, the more prone one is to be influenced by others; in other words: to conform (e.g., Thorley & Kumar, 2017; E. Walther et al., 2002). This seems logical when assuming that confidence in assumptions reflects the amount and perceived validity of cues supporting initial assumptions, making well-supported assumptions harder to outweigh by incoming information rebutting them. Thus, the stronger the foundation is in the first place, the higher the confidence and the less relevant each piece of incoming information becomes.

However, an individual’s confidence can also influence how incoming information is perceived. Research in this area mainly comes from feedback research building on Kulhavy and Stock’s (1989) feedback model, which assumes that response confidence serves an internal feedback function resulting from self-monitoring of memory processes (Butler & Winne, 1995). This research suggests that one’s own confidence has an impact on how incoming information is processed (e.g., Fazio & Marsh, 2009; Kulhavy et al., 1990; Stock et al., 1992; Webb et al., 1997).

Thus, there is good reason to assume that social information (i.e., others’ assumptions and metacognitive evaluations) may lead to conformity and adaptation in terms of adjustment of assumptions as well as metacognitive re-evaluations (i.e., confidence)—however, it might also be perceived differently depending on the foundation of own initial assumptions, in other words: own confidence.

Hypotheses

Hypothesis 1: Adjustment of answers and convergence

We described that metacognitive processes, namely monitoring and control, are central for self-regulated learning (see also Winne & Hadwin, 1998). Learners are assumed to use their monitoring to control the learning process and thus metacognitively regulate their learning (Thiede, 1999; Thiede et al., 2003). Many researchers have examined possible internal influences on these metacognitive judgments (e.g., Gigerenzer et al., 1991; Koriat et al., 1980), yet despite ample evidence from several research areas supporting the idea that external information affects individual assumptions and their metacognitive evaluations, little research has looked specifically into how such external information on others’ assumptions affects learners’ cognitions and metacognitive evaluations within a learning context. However, these influences are increasingly important, as not only are forms of online learning becoming increasingly popular (Eurostat, 2019), but social learning environments are also gaining importance, confronting learners with an increasing amount of information about other learners’ assumptions and metacognitive self-evaluations online and offline. We argue that these external influences (e.g., other people’s assumptions) can serve as a cue for learners when they judge their knowledge or state of learning. Thus, we would expect learners to incorporate the views of other learners and converge towards their assumptions:

H1a. Learners adjust their own assumptions in the direction of other persons’ assumptions (convergence).

Further, it is reasonable to assume that individuals may be less prone to external influence when their internal cues are sufficient for a straightforward decision. Applied to a learning context, this would mean that if learners have ample evidence supporting their general assumption (leading to high confidence in their assumptions; Gigerenzer et al., 1991), they may be less reliant on external information and may thus refrain from adjusting their own assumption, whereas with rising levels of uncertainty (due to missing stimulus information and thus unreliable or conflicting cues) they may increasingly rely on this information (McGarty et al., 1993). Thus, we assume that:

H1b. Learners are more inclined to change the general direction of their assumption when they are uncertain.

Additionally, we assume the validity of the external information to play an important role when learners have to incorporate different assumptions. Eyewitness research as well as source credibility research suggests that source confidence may be an indicator used to validate information from a specific source (e.g., Brewer & Burke, 2002; Price & Stone, 2004). Thus, we assume that:

H1c. Learners are more inclined to change the general direction of their assumption in favor of a partner’s assumptions if the partner’s assumptions are presented with confidence.

Hypothesis 2: Perceived competence

Further, research on source credibility suggests that a source’s confidence might lead to perceiving the source as more credible, knowledgeable and competent. While research has shown the effect of source confidence on perceived competence and advice taking in various areas (Anderson et al., 2012; Sniezek & Van Swol, 2001) and it seems plausible that the same mechanisms apply to learning settings, this has not yet been empirically confirmed. It follows that when a learner is confronted with the assumption of a highly confident learning partner, the learner is more likely to perceive this person as credible and competent and thus more likely to consider the partner’s assumption when deciding on his/her own answer. Therefore, we put forward the following hypotheses:

H2a. The more confident persons are in their assumptions, the more competent they are perceived to be.

H2b. The more competent another person is perceived to be, the higher his/her influence on an individual’s change of assumption.

Hypothesis 3: Search for additional information

Apart from changing cognitions (i.e., knowledge and assumptions) as well as their metacognitive evaluation (i.e., confidence in assumptions), external information may impact an individual’s learning process. Metacognitive self-regulation processes build on metacognitive evaluations and thus changes on the meta-level should impact regulation attempts. The connection between metacognitive judgments and study attempts has been confirmed in various studies, especially with regard to re-study decisions (e.g., Metcalfe & Finn, 2008; Thiede et al., 2003) or the search for additional information (e.g., Schnaubert & Bodemer, 2016, 2017). Thus, we assume that:

H3a. Learners decide whether to get additional information regarding a subject based on their (current) confidence in assumptions, with uncertainties being more likely to trigger searches for information than certainties.

However, similar conclusions could be drawn from research on socio-cognitive conflicts or controversies, which are also supposed to trigger an additional search for information (Lowry & Johnson, 1981). Empirical research confirms this link (e.g., Schnaubert & Bodemer, 2016); however, it remains unclear whether this link is due to metacognitive changes resulting from conflicting evidence. Researchers hypothesize (Buchs et al., 2004) and have found empirical evidence (McGarty et al., 1993) that conflicts may induce uncertainty regarding the correct solution. Additionally, there is some evidence that learners integrate information on other learners’ assumptions and confidence when making study decisions (Schnaubert & Bodemer, 2016). Despite this evidence, the link between conflicting information, metacognitive judgments (e.g., confidence) and metacognitive control processes (e.g., study decisions) remains understudied. Socio-cognitive conflicts have been found to affect metacognitive control processes (e.g., search for information), but theoretical models of metacognitive self-regulation assume that this happens rather indirectly, through changes in metacognitive judgments. Thus, we examine how the effects of conflict on study decisions change when metacognitive re-evaluations are taken into account. In line with a metacognitive view on study regulation, which assumes that study decisions rely on metacognitive mechanisms that in turn may be affected by socio-cognitive conflict, we hypothesize that:

H3b. Conflict loses its unique impact on decisions to get additional information when learners are able to adjust their metacognitive evaluations.

Methods

Subjects

The study took place between the end of 2016 and early 2017 (December–February) at a German university. A total of 61 participants took part in the study, one of whom had to be excluded from the data due to a server error, leaving a final sample of N = 60 participants. Of these, 34 were female and 26 were male, with a median age of 23 (IQR = 5). The majority of the sample were students (86.67%). Advertisement focusing on the university’s Psychology and Applied Cognitive and Media Science programs and course credit offered to those students suggest that the majority came from these programs, which focus especially on psychology and computer science. Apart from course credit, no other compensation was offered. As the learning material was in German, language skills were measured by self-assessment on a six-point equidistant scale (from 0 to 5), resulting in medium to high assessments (M = 4.73, SD = 0.52) throughout the sample, with the empirical minimum of 3 given by only two participants. We therefore did not exclude any participants based on low language skills. Participants were mainly recruited via Facebook and the study program’s internet forum. The sample size was calculated using G*Power. We aimed for 95% power and medium effects for the main calculations (within-sample t-tests or Wilcoxon tests for two measurement points and one-way within-subject ANOVA for three data points), which gave us a minimum sample size of 42 to 54 (depending on calculation). To allow for some leeway and exclusion of data if necessary (e.g., outliers, compliance issues), we targeted 60 participants.
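The reported sample-size calculation can be approximated in code; the following minimal sketch uses Python’s statsmodels in place of G*Power for the t-test case, with the effect size, alpha, and power values given above (the rounding comment is our reading of the reported 42–54 range, not part of the original protocol).

```python
# Approximate reproduction of the sample-size calculation for the
# within-sample t-test case (d = 0.5, alpha = .05, power = .95).
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.95,
                             alternative='two-sided')
print(round(n))  # ~54, the upper end of the 42-54 range reported above
```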

Research design and independent variables

A within-subject design was chosen, so the same conditions applied to all participants. This design allowed us to study metacognitive changes induced by socio-cognitive information, and thus how learners react to varying social information, while controlling for interindividual differences between conditions. In our study, learners first read expository texts, provided their assumptions and confidence in these assumptions triggered by true–false statements, and were then confronted with data on the assumptions and confidence of a potential learning partner before being able to adjust their own assumptions and confidence ratings (more information on the introduction of the learning partner is provided below). Additionally, they were able to request additional information on each assumption. The data representing the assumptions and confidence ratings of the potential learning partner were computer-generated. We varied the assumptions of the (computer-generated) learning partner within subjects. More specifically, we varied the assumptions’ direction in comparison to the learner (agreement yes/no) and the confidence with which they were provided (response confidence: uncertain (0) to completely certain (5)). Confidence was varied across four occasions (represented by statements regarding different texts): confidence was low with regard to one text (0, 1), medium with regard to two texts (2, 3), and high with regard to one text (4, 5). Direction of assumption was varied within each occasion, with the partner agreeing and disagreeing roughly three times each with statements within each text. To avoid effects due to text topic, the topics were given in random order and were randomly assigned to partner confidence levels.

Material and procedure

The experiments were conducted in our research lab and all materials and procedures were presented via a computer. After welcoming and declaration of consent, participants were asked to fill out a questionnaire assessing their age, sex, and employment. They then started to read two of four German texts on different scientific disciplines, all ranging between 223 and 335 words (M = 297.5), in randomized order. The topics were decoherence (physics), electrophoresis (biology), economics of fertility (economics), and praxeology (social sciences). Afterwards, during the pre-test, they judged six statements regarding each read text as either true or false on a combined certainty-answer scale (see Fig. 1). An example item for physics is “Field strength E is directly proportional to ionic charge Q.”; an example for economics of fertility is “The new household economics by Becker assumes that couples decide on the number of children based on their availability of resources.” (both translated from German). To prevent fatigue, the procedure was split into two parts, so that participants first read two texts and judged statements on these texts, and then repeated the procedure for the other two texts (see Fig. 2). A two-part split was used rather than four separate blocks consisting of text and judgement, so that each judgment block was separated from reading about the specific topic by one intervening action, preventing information still held in working memory from interfering with the judgments. Thus, the procedure required the participants to focus on a different topic between reading a text about a topic and judging statements for said topic, either by reading a different text (judgments 1 and 3) or by judging statements from a different topic (judgments 2 and 4). We randomized the texts and partner confidence levels to avoid sequence effects.

Fig. 1 Judgement scale ranging from true/certain to untrue/certain; translated from German

Fig. 2 Experimental procedure

After having completed the last pre-test, the participants were presented with the socio-cognitive information of a learning partner, i.e., how this partner had judged each of the six statements of the first text. The learning partner’s answers were generated by an algorithm which yielded three certainty levels varied across texts: certain (4 to 5), neither certain nor uncertain (2 to 3), and uncertain (0 to 1). Within each text, half of the partner’s answers agreed and half disagreed with the learner’s initial response. The participants were asked to judge the competence of their learning partner on an 11-point endpoint-labelled equidistant scale (ranging from “not competent” (0) to “highly competent” (10)). Afterwards, they were asked to judge the same six statements from the pre-test again as either true or false, this time with the learning partner’s answers visible (post-test, see Fig. 3). They also had the opportunity to request additional information on each item (even though no actual additional information was provided). This procedure was repeated for all four texts. No time limits were set for either the reading parts or the tests, and most participants finished within 30 min (M = 24.63 min, SD = 4.60).

Fig. 3 Depiction of partner information (top), re-statement of own assumption (bottom), and request for additional information (right); translated from German
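The described behavior of the partner-answer algorithm can be sketched as follows; this is a hypothetical re-implementation based only on the description above (the handling of undecided (zero) initial answers is our assumption, as the text does not specify it).

```python
import random

CONFIDENCE_LEVELS = {"uncertain": (0, 1), "medium": (2, 3), "certain": (4, 5)}

def generate_partner_answers(learner_answers, level):
    """Generate six partner answers on the combined -5..+5 scale: half agree
    and half disagree with the learner's initial direction, with confidence
    drawn from the text's assigned certainty level."""
    agree_flags = [True] * 3 + [False] * 3
    random.shuffle(agree_flags)
    partner = []
    for own, agree in zip(learner_answers, agree_flags):
        direction = 1 if own >= 0 else -1   # assumption: treat 0 as "true"
        if not agree:
            direction = -direction          # disagreement: flip direction
        partner.append(direction * random.choice(CONFIDENCE_LEVELS[level]))
    return partner

# e.g., generate_partner_answers([3, -2, 5, -1, 4, -5], "certain")
```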

Introduction of the learning partner

Participants were told during the briefing that the study consisted of individual and cooperative learning tasks. During the instruction before receiving information on the answers of the (computer-generated) learning partner, they were explicitly told they would receive information from another participant of this study, who was then labelled “learning partner” throughout the study. We did not provide information on why this particular partner was chosen, instructions on how this information should be used, or further social cues, as even minimal cues may affect how participants judge others (e.g., due to gender; Matheson, 1991). Thus, the chosen scenario reflects social information received from an unknown other learner, devoid of specific social cues affecting data interpretation. While this scenario is rather minimalistic, it allowed us to focus on our variation and see how learners use the information without further instruction, while eliminating other influences contingent on a more elaborate and perhaps more authentic scenario (this is discussed in detail in the discussion section).

Dependent variables

Dependent variables were assessed in the four pre- and four post-test phases of the experiment (one for each topic). During the pre-tests, we assessed the learners’ answers (4 tests * 6 answers = 24 measures of direction of assumptions and confidence) prior to the presentation of partner information. We also included information on the partners’ answers (24 accounts of direction of assumptions and confidence of the partner) to assess initial discrepancies between participant and partner. In the first part of each of the four post-tests, we assessed how learners judged their partner’s competence (4 times). In the second part, we re-assessed the learners’ answers (24 measures of direction of assumptions and confidence) after the presentation of partner information, included the same information on the learning partner as before, and also assessed requests for additional information (24 times). The variables computed and used to test the hypotheses are described in detail below.

Convergence (hypothesis 1a)

To assess if learners adjusted their answers in the direction of the learning partner (convergence), we computed the absolute distance between the participant’s answer and the partner’s answer for each answer pre and post viewing the information and computed mean scores for each participant. Since answers were assessed on a combined confidence and assumption scale ranging from −5 (false, confident) via zero (not confident) to +5 (true, confident), possible absolute distance values ranged from 0 to 10. For further analyses, we also computed a mean adjustment score per participant. Positive scores mean that learners adjusted their answers in the direction of the learning partner (convergence: the distance becomes smaller) and negative scores mean that they moved away from their partner (divergence: the distance becomes greater).
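In code, these measures reduce to a few lines; the following sketch uses toy arrays in place of the real ratings (all names and values are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins for the real data: rows = participants, columns = 24 items,
# values on the combined -5..+5 confidence/assumption scale.
pre_self = rng.integers(-5, 6, size=(60, 24))
post_self = rng.integers(-5, 6, size=(60, 24))
partner = rng.integers(-5, 6, size=(60, 24))

dist_pre = np.abs(pre_self - partner).mean(axis=1)   # possible range: 0..10
dist_post = np.abs(post_self - partner).mean(axis=1)
adjustment = dist_pre - dist_post  # > 0: convergence, < 0: divergence
```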

Adjustment of general assumptions (hypotheses 1a, 1b, 1c and 2b)

We also assessed if learners changed the general direction of their assumptions, especially in case of disagreement (hypothesis 1a). Thus, we looked into all situations in which either participant or partner thought the statement to be true (+1 to +5) and the other thought it to be false (−1 to −5). Situations in which participants or partners were undecided in the pre-test were not viewed as disagreements and thus excluded. We then assessed in how many of these cases participants changed their answers to agree with the partner. Situations in which participants changed their answer to zero (undecided) were not counted as changes to agree, as participants then neither agreed nor disagreed. We calculated a percentage score representing the percentage of initial disagreements that ended in agreements. For the analysis of the impact of perceived partner competence (hypothesis 2b), we computed mean rates per person and text. To contrast changes in situations of disagreement with situations of agreement and to test if changes were independent of agreement or disagreement, we also assessed in how many cases participants changed their answers in the post-test when there was general agreement in the pre-test. Using the same basic procedure, we calculated a percentage score representing the percentage of situations in which participants changed their answers to disagree with the partner. Again, initial participant or partner responses containing a zero were excluded, and post-test zeros were not counted as changes to disagree.

In order to test if own or partner confidence differed between situations in which participants solved their disagreement via changes of answers and situations in which participants stuck to their initial response, we also computed further measures. We calculated mean initial confidence values for situations in which learners changed their assumption towards the partners’ assumption and for situations in which learners persisted in disagreement, ideally leading to two values for each learner. Additionally, we computed within-subject point-biserial rank correlations between confidence (0–5) and adjustment of answer in the face of disagreement (0, 1) (hypothesis 1b). Similarly, we computed mean partner confidence values for cases in which participants changed their answer in the face of disagreement and for cases in which participants persisted in disagreement, and additionally calculated the within-subject correlation values (hypothesis 1c).
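One plausible implementation of such a within-subject point-biserial rank correlation is to rank-transform the confidence ratings and correlate them with the binary change indicator (equivalent to a Spearman correlation with a dichotomous variable); a sketch with hypothetical names:

```python
import numpy as np
from scipy import stats

def pb_rank_corr(continuous, binary):
    """Point-biserial rank correlation: Pearson correlation between the
    rank-transformed continuous variable (e.g., confidence, 0-5) and a
    dichotomous variable (e.g., 1 = changed answer, 0 = persisted)."""
    r, _ = stats.pearsonr(stats.rankdata(continuous), np.asarray(binary))
    return r

# Applied per participant, over that participant's disagreement items:
# coef = pb_rank_corr(confidence_pre, changed_to_agree)
```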

Partner competence (hypotheses 2a, 2b)

To assess if learners judged their partner’s competence by the partner’s confidence in responses, we asked learners to judge their partner’s competence with regard to each specific topic after viewing the partner’s responses. This left us with four ratings per participant: one for the low (uncertain), two for the medium (neither certain nor uncertain) and one for the high (certain) confidence-level topic. For one analysis (regarding hypothesis 2a), we aggregated the two values for medium confidence. To assess the influence of perceived partner competence on the adjustment of assumptions (hypothesis 2b), we used these values without additionally considering the experimentally induced variation of confidence levels.

Confidence-based information requests (hypothesis 3a)

To assess if learners use their own confidence to decide where to request further information (confidence-based regulation), we computed within-learner point-biserial rank correlation coefficients between confidence ratings post adjustment (after viewing partner information; 0–5) and the request for more information (0, 1), leaving us with one coefficient (possibly ranging from −1 to +1) per participant. Low (negative) coefficients mean that learners mainly request information on uncertain items; high (positive) coefficients mean that they mainly request information on certain items.

Conflict-based information requests (hypothesis 3b)

To analyze if learners use conflict as a criterion to make study decisions after adjustment, we calculated conflict-based regulation by correlating conflict with the participants’ decision to request further information. Conflict was operationalized as the absolute distance between self- and partner judgments in case of diverging assumptions and set to zero in case of agreement. As for confidence-based regulation, we computed within-subject point-biserial rank correlations. Positive coefficients mean that participants mainly request information on items with a higher degree of conflict. To check if conflict-based regulation occurred at all within our scenario, we also computed the correlation for pre-adjustment conflict values. While we call this measure “conflict-based regulation” (in line with Schnaubert & Bodemer, 2016, 2019), the relationship between conflict and information requests is not necessarily a direct one.

For better comparison with existing studies, we additionally calculated phi-correlations between the presence or absence of conflict on a binary scale (indecision on either the participant’s or the partner’s side was coded as absence of conflict) and the request for information pre and post adjustment. This measure ignores confidence values and thus captures the correlation between the mere presence of conflicting assumptions and information requests rather than distances between ratings. While the main intention of these additional calculations is to ease comparisons, the procedure also yields a coarser and more stable indicator by defining conflict in absolute terms of conflicting responses to the same question (as, for example, in Buchs et al., 2004) rather than including discrepancies in metacognitive judgments.
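A sketch of the two conflict operationalizations (graded degree of conflict and binary presence of conflict) might look as follows; variable names and the toy data are illustrative only.

```python
import numpy as np
from scipy import stats

def conflict_degree(own, partner):
    """Graded conflict: absolute distance between the combined-scale answers
    when the assumptions diverge in direction, zero otherwise."""
    own, partner = np.asarray(own), np.asarray(partner)
    diverging = own * partner < 0          # opposite signs = disagreement
    return np.where(diverging, np.abs(own - partner), 0)

def binary_conflict(own, partner):
    """Absolute conflict: 1 only if both sides commit to opposite directions;
    indecision (zero) on either side counts as absence of conflict."""
    own, partner = np.asarray(own), np.asarray(partner)
    return (own * partner < 0).astype(int)

own = np.array([3, -2, 0, 5, -4, 1])
partner = np.array([-4, -3, 2, -1, -4, 4])
requests = np.array([1, 0, 0, 1, 0, 1])

# The phi coefficient between binary conflict and information requests is
# simply the Pearson correlation between the two binary variables.
phi, _ = stats.pearsonr(binary_conflict(own, partner), requests)
```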

Results

All analyses were carried out using either SPSS 24, R (TOSTER package) or Jamovi (v 0.9.1.7). The level of significance was set to 5%. If not otherwise reported, the prerequisites for the tests conducted were checked and met.

Hypothesis 1: Adjustment of answers and convergence

Hypothesis 1a

To test hypothesis 1a, whether learners adjust their own assumptions in the direction of other persons’ assumptions, we first compared the absolute distance between own and partner answers pre and post adjustment. The absolute distance between own and partner answers was higher pre adjustment than post, with a positive difference of 0.41. Descriptive results can be seen in Table 1. There was a high correlation between the distances pre and post adjustment (r = 0.569, p < 0.001), and a paired t-test revealed that the difference between the values was statistically significant (t(59) = 6.15, p < 0.001, d = 0.78).

Table 1 Means and standard deviations of the distance between own and partner answers pre and post adjustment

Additionally, we were interested in more global changes in assumptions, meaning changes in the direction of an assumption (changing one’s opinion from agreeing with a statement to disagreeing or vice versa). Overall, only 12 participants changed their answers more often to disagree than to agree, 36 changed more often to agree, and there were 12 ties. Overall, there were 412 conflicts pre adjustment, and in 138 of these cases (33.50%) learners changed their answer to agree with the partner. In contrast, learners changed their answers to disagree 68 times in total (out of 710; 9.40%).

To answer our research question, we compared the number of times participants changed the direction of their answer when initially agreeing with a partner and thus ended up disagreeing (change to disagree) to the number of times they changed the direction of their answer when initially disagreeing and thus ended up agreeing with the partner (change to agree). Within our sample, learners changed their assumptions on average 1.13 times to disagree and 2.30 times to agree with a partner, with a mean difference of 1.17 between the values. The full descriptive results can be seen in Table 2. The correlation between both values was low and positive, but not statistically significant (r = 0.087, p = 0.509). We conducted a t-test for paired samples to test for differences in occurrence and found a highly significant difference, with people changing their assumptions significantly more often to agree with a partner than to disagree (t(59) = 4.44, p < 0.001, d = 0.58).

Table 2 Means and standard deviations of the number of times learners changed their assumptions depending on direction of change

Hypothesis 1b

To test whether learners are more inclined to change the general direction of their assumption when they are uncertain (hypothesis 1b), and thus whether confidence had a potential impact on participants’ changing or persisting in the face of disagreement, we conducted further analyses. First, we computed within-subject point-biserial rank correlations between confidence and change in the face of disagreements and found the mean coefficient to be negative (M = −0.43, SD = 0.37). The coefficients were not normally distributed (W = 0.952, p = 0.047), with a positive skew of 0.64. We used a one-sample t-test to test the coefficient against zero and used percentile bootstrapping (10,000 samples) to account for the lack of normal distribution in the data. The result showed a significant deviation from zero (t(47) = −8.03, p < 0.001, d = 1.16).
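The exact bootstrap routine is not spelled out in the text; a common percentile variant, sketched here under that caveat, resamples the coefficients and checks whether the bootstrap confidence interval for the mean excludes zero.

```python
import numpy as np

def percentile_bootstrap_ci(coefficients, n_boot=10_000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for the mean of the within-subject
    coefficients; a CI excluding zero supports a deviation from zero."""
    rng = np.random.default_rng(seed)
    x = np.asarray(coefficients, dtype=float)
    means = [rng.choice(x, size=x.size, replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```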

Second, we tested if mean confidence values differed between cases in which participants changed or persisted with their answers. Mean confidence for answers that were changed was close to the middle of the confidence-scale, whereas mean confidence for answers that persisted was somewhat higher. Descriptive results can be seen in Table 3. There was a significant correlation between the values (r = 0.435, p = 0.001) and a t-test for paired samples revealed a significant difference between the measures (t(50) = − 7.71, p < 0.001, d = 1.01); mean difference was − 1.26 (see Table 3).

Table 3 Means and standard deviations of own and partner confidence for answers that were changed and persisted

Hypothesis 1c

We conducted similar analyses to test whether learners are more inclined to change the general direction of their assumption in favor of a partner’s assumptions if the partner’s assumptions are presented with confidence (hypothesis 1c), and thus whether partner confidence had a potential impact on participants’ changing or persisting in the face of disagreement. First, we computed within-subject point-biserial rank correlations between partner confidence and change in the face of disagreements. The mean coefficient was slightly positive (M = 0.04, SD = 0.43). We used a one-sample t-test to check the coefficient against zero. The result showed no significant deviation from zero (t(50) = 0.70, p = 0.490, d = 0.09).

Second, we checked if mean partner confidence values differed between cases in which participants changed or persisted in their answers. As shown in Table 3, mean partner confidence was rather similar for both cases and there was no significant correlation between mean partner confidence for answers that were changed and the mean partner confidence of answers that persisted (r = − 0.210, p = 0.139). A t-test for paired samples revealed no significant difference between the measures (t(50) = 0.83, p = 0.408, d = 0.12).

Hypothesis 2: Perception of partner competence

Hypothesis 2a

To test whether persons more confident in their assumptions are perceived as more competent (hypothesis 2a), and thus whether competence estimations varied according to the three partner confidence levels, we conducted an ANOVA and a planned comparison analysis with linear contrasts, assuming competence estimations to rise with rising confidence levels. Due to deviations from the normality assumption on the levels of the within-subject factor, we conducted a rank transformation beforehand (ranked over all data points; mean rank values were assigned to ties), as recommended in the literature (e.g., Baguley, 2012; Wilcox, 2012; Zumbo & Zimmerman, 1993). Due to violations of the sphericity assumption (Mauchly’s W(2) = 0.849, p = 0.009), we applied Huynh–Feldt corrections to the within-subjects effect and found a highly significant effect of the partner’s confidence level on the competence estimation (F(1.79, 105.32) = 73.49, p < 0.001, ηp2 = 0.56). The planned comparison analysis with linear contrasts showed that our data fit the anticipated pattern of a linear trend (F(1, 59) = 109.39, p < 0.001, ηp2 = 0.65), indicating that as the confidence of the partner increases, the competence estimation increases accordingly. Figure 4 depicts the mean ranks according to confidence level.

Fig. 4 Mean ranks of competence estimation according to confidence level
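For illustration, the rank transformation and the within-subject linear contrast can be sketched as follows (toy data; a one-sample t-test on per-participant contrast scores is one common way to test such a planned contrast, though it is not identical in every detail to the pooled-error ANOVA contrast reported above).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Toy competence ratings (0-10): one column per partner confidence level
# (low, medium, high), with the two medium-level ratings already aggregated.
ratings = rng.integers(0, 11, size=(60, 3)).astype(float)

# Rank over all data points; scipy assigns mean ranks to ties by default.
ranked = stats.rankdata(ratings).reshape(ratings.shape)

# Linear contrast across confidence levels, evaluated per participant.
contrast_scores = ranked @ np.array([-1.0, 0.0, 1.0])
t, p = stats.ttest_1samp(contrast_scores, 0.0)  # tests the linear trend
```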

Hypothesis 2b

To test whether persons perceived as more competent have a higher influence on an individual’s change of assumption (hypothesis 2b), we conducted mixed models accounting for the interdependence of the data per participant, with perceived competence as a fixed effect predicting the adjustment of answers. The results are shown in Table 4. Perceived partner competence had no significant impact on the percentage of changes of the general direction of the answer (β < 0.01, p = 0.246). A low pseudo R2 (as described in Kenny et al., 2006, pp. 94–95) suggests that adding information on perceived partner competence does not relevantly contribute to explaining variance (pseudo R2 = 0.02).

Table 4 Parameter estimates (fixed effects) of perceived competence on adjustments of answers
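A sketch of such a random-intercept model using Python’s statsmodels, with hypothetical column names and toy data standing in for the real long-format dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
# One row per participant x text (4 rows each): rate of answer changes and
# the competence rating given to the partner for that text.
data = pd.DataFrame({
    "participant": np.repeat(np.arange(60), 4),
    "competence": rng.integers(0, 11, size=240),
    "change_rate": rng.uniform(0, 1, size=240),
})

# Random intercepts per participant account for the interdependence of the
# four observations each person contributes; competence is a fixed effect.
model = smf.mixedlm("change_rate ~ competence", data,
                    groups=data["participant"]).fit()
print(model.summary())
```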

Hypothesis 3: Search for additional information

Hypothesis 3a

To test whether learners decide whether to get additional information regarding a subject based on their (current) confidence in assumptions (hypothesis 3a), and thus whether learners regulated metacognitively, we computed confidence-based regulation using within-subject point-biserial rank correlation coefficients between confidence after adjustment and information requests. The mean regulation coefficient across the sample was −0.44 (SD = 0.28). A one-sample t-test (against zero) showed a highly significant result (t(50) = −11.21, p < 0.001, d = 1.57), indicating that learners, on average, regulate metacognitively by requesting information more frequently when their confidence is comparably low.

Hypothesis 3b

Hypothesis 3b states that conflict loses its unique impact on decisions to get additional information when learners are able to adjust their metacognitive evaluations, meaning that conflict-based regulation dwindles after re-evaluation and that the degree of conflict has no impact on the decision to request additional information after confidence was adjusted. To test this, we computed within-subject point-biserial rank correlation coefficients between the degree of conflict after adjustment (absolute distance between disagreeing answers, with agreement coded as no conflict (zero)) and information requests. The mean correlation coefficient across the sample was 0.04 (SD = 0.20). Because we hypothesized the absence of a correlation, we used equivalence testing as proposed by Lakens (2017) to test its equivalence to zero. Specifically, we assessed if conflict-based regulation after adjustment was equivalent to zero, with the effect (Cohen’s d) lying within the bounds of a −0.20 to 0.20 interval, in line with Cohen’s (1988) interpretation of small effects. We used two one-sided t-tests (TOST, Welch tests) to determine if the coefficient was likely to be within these equivalence bounds (see Lakens, 2017). Tested against the lower bound of −0.20, the coefficients were significantly higher, but tested against the upper bound of 0.20, they were not significantly lower (see Table 5: conflict degree / adjusted). Testing the mean value against zero showed no significant deviation (t(50) = 1.32, p = 0.193, d = 0.20).

Table 5 Results for TOST (against upper and lower bound) for adjusted and unadjusted conflict values for conflict degree and absolute measures of conflict
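A minimal one-sample TOST sketch, assuming equivalence bounds given in Cohen’s d units and converted to raw units via the sample standard deviation (the reported analysis used the TOSTER package’s Welch variant, which may differ in detail):

```python
import numpy as np
from scipy import stats

def tost_one_sample(coefficients, d_bound=0.20):
    """Two one-sided t-tests for equivalence of a sample mean to zero, with
    equivalence bounds of +/- d_bound in Cohen's d units (small effect)."""
    x = np.asarray(coefficients, dtype=float)
    raw_bound = d_bound * x.std(ddof=1)   # convert d bounds to raw units
    _, p_lower = stats.ttest_1samp(x, -raw_bound, alternative="greater")
    _, p_upper = stats.ttest_1samp(x, raw_bound, alternative="less")
    return max(p_lower, p_upper)  # equivalence if this p is below alpha
```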

However, further analyses showed a similar picture for unadjusted conflict values (prior to viewing partner data). The mean correlation between degree of conflict pre adjustment and information requests was close to zero (M = 0.06, SD = 0.23). A Shapiro–Wilk test revealed deviations from normal distribution (W = 0.94, df = 52, p = 0.015). For comparability, we still conducted TOST equivalence testing, as Welch tests are somewhat robust against deviations from normality, but these results should be treated with caution. The results revealed a similar picture as for the adjusted values: tested against the upper bound of 0.20, the coefficients were not significantly lower, but tested against the lower bound of −0.20, they were significantly higher (see Table 5: conflict degree / unadjusted); tested against zero, there was no significant deviation (t(51) = 1.94, p = 0.059, d = 0.26). With a high correlation between the values pre and post adjustment (r = 0.704, p < 0.001), a t-test for paired samples using percentile bootstrapping (10,000 samples) revealed no differences in conflict-based regulation based on pre- or post-adjusted values (t(50) = 0.954, p = 0.345, d = 0.12).

For better comparison with existing studies (e.g., Schnaubert & Bodemer, 2016) and adopting a more stringent view on conflict (competing assumptions), we additionally repeated the analyses using binary measures of the presence or absence of conflict. We found the correlation coefficients to be in a similar range for unadjusted (M = 0.07, SD = 0.24, W = 0.95, p = 0.034) and adjusted (M = 0.06, SD = 0.20, W = 0.96, p = 0.130) values. Equivalence testing using two one-sided t-tests found both coefficients to be higher than the lower bound, but not lower than the upper bound, and thus not completely within the pre-set interval, again indicating possible non-equivalence to zero of conflict-based regulation before and after adjustment (see Table 5: absolute conflict). Standard null hypothesis testing indicated a statistically significant deviation from zero for pre-adjustment values (t(51) = 2.20, p = 0.032, d = 0.29), but narrowly missed the level of significance for post-adjustment values (t(50) = 2.01, p = 0.050, d = 0.30).

Discussion

Summary and interpretation

In our study, we investigated the impact of socio-cognitive information (others' assumptions and confidence in these assumptions) on learners' cognitions (assumptions) and their metacognitive evaluations (confidence). Further, we were interested in whether and how this information led to metacognitive changes and thus regulation attempts. Additionally, we briefly looked into the impact of partner confidence information on the perception of the partner's competence.

Hypothesis 1: Metacognitive re-evaluation and convergence

Hypothesis 1 was about learners adjusting assumptions and converging towards a learning partner. Results indicate that learners adjust their answers towards their learning partners and converge. Due to the nature of the research design chosen (repeated measures and partner information algorithm), this similarity in answers post intervention may not be explained by mere similarity in cue usage across populations (Prototypical Majority Effect; see Koriat et al., 2016). Results also show that learners may even change their assumptions—especially when they are uncertain about their assumptions in the first place. This is in line with our expectation derived from previous literature (e.g., Gigerenzer et al., 1991; McGarty et al., 1993) and reflects the implication of social comparison theory as proposed by Festinger (1954) that individuals are more likely to be influenced when they lack confidence. While some of the changes may be due to random variation (especially when learners were highly uncertain about their assumptions), the data suggest that the direction of change was influenced by the learning partner. Interestingly, however, partner confidence did not have an effect on this. While it would also have been interesting to see how perceived competence affects the integration of another learner's assumptions and convergence, we abstained from such analyses in this particular study due to the dependencies between competence perception and partner confidence. Since confidence was the more proximal, task-related indicator, we assumed it to be the more important cue and thus only estimated the impact of partner confidence on convergence, rather than including the perceived competence of the partner (a more distal concept, especially when considered as based on the partner's confidence judgments). While we assumed that confident assumptions of a partner would provide a higher-valued cue for deciding on an assumption and thus lead to more changes, this was not evident in the data. It is conceivable that the impact of an unknown partner's confidence was not enough to meet the threshold for changing assumptions, especially in this setting. Unfortunately, validly testing the impact of partner confidence on mere metacognitive changes towards the partner (i.e., convergence) poses methodological challenges, as the possibilities to converge are automatically much greater if the partner is more confident and his/her ratings are at the extremes of the scale. Thus, smaller changes like induced doubt may have been missed by our rather blunt-force approach and should be followed up in future research.

While our hypotheses were built on theoretical approaches to cue utilization and social influence, it is worth noting that mere anchoring could have similar effects. Anchoring effects are often unconscious influences on the formation of a person's judgment, exerted even by unrelated or random numbers (Tversky & Kahneman, 1974), and have also been found to impact behavior within social learning scenarios (Cress & Kimmerle, 2007). Anchoring effects occur especially when people make judgments under uncertainty. In our scenario, learners—especially when highly uncertain—could have used the partner's judgment as an anchor without consciously interpreting the partner's answer as a potentially helpful cue. However, research on anchoring usually asks participants to estimate some quantitative dimension, like the number of countries in the UN (see Furnham & Boo, 2011, for an overview), while our study asked for a binary judgment of a statement being either true or false. This is an important distinction, because both answers should be equally available to participants (especially since participants had answered the question before) and the boundaries of the judgments are thus finite, although some anchoring based on selective accessibility due to confirmatory hypothesis testing (see Strack & Mussweiler, 1997) might still occur. The quantitative dimension participants were to judge in our study was not the statement's truth value (a binary choice), but their own confidence attached to it. While confidence estimates are assumed to rely on available cues (see above), they still convey information about the learner's own metacognitive experience, information that may not easily be regarded as a judgment under uncertainty, because it is defined by its perception by the individual in question. Finally, without explicitly testing for it, disentangling social influences from influences of anchoring may not be entirely possible within most social scenarios, and while some of the effects observed might have been due to anchoring and insufficient adjustment rather than truly social impact, this may be said of most scenarios introducing social cues as information. Follow-up research could estimate the anchoring effect for various social scenarios to judge its impact on social influence research and social influence in general.

Hypothesis 2: Partner competence

For our second hypothesis, our results indicate that learners provided with confidence information on a learning partner's assumptions use this information to judge the other's competence. As hypothesized, the three confidence levels used by the learning partner (low, medium, and high) map onto the median ranks of the competence estimates: assumptions presented with low confidence led to low competence judgments, while medium and high partner confidence levels resulted in medium and high competence judgments, respectively. This is in line with other research connecting confidence and competence perception (e.g., Anderson et al., 2012). Our results thus indicate that these mechanisms of forming social judgments also apply in learning settings, which is of particular interest for learning processes in ill-structured social settings like social media. However, one particular methodological difficulty does not allow us to confidently generalize this result to learning settings outside the laboratory: although the results and theoretical assumptions strongly suggest that learners used the confidence information to judge the other's competence, this might be very different in less restricted settings. In our study, we used a very controlled environment, and while this allows us to validly ascribe the effects found to our manipulation, it remains unclear whether learners would have used the metacognitive information if other source information had been accessible. The only other information available was the assumptions of the other learner. While it was possible for participants to evaluate all these assumptions to judge competence, this would have been much more effortful and mentally challenging, because they would have needed to evaluate each statement and compare the partner's assumptions to a self-generated standard of varying reliability (depending on their own knowledge about the subject and thus on the assumed correctness of their assumptions and the confidence attached to them). Thus, by providing confidence information only (apart from assumptions) and in an easy-to-interpret way, we may have implicitly suggested the usage of this information. While this potential reactivity of the design is hard to rule out, prior research found similar effects of confidence on competence perception or credibility in less restricted settings (e.g., Brewer & Burke, 2002; Price & Stone, 2004). Even if information on other learners may at times be highly restricted, especially in open online settings, in most real-life settings somewhat more information is available. To draw conclusions for such settings, it is thus important to carefully compare the impact of different indicators when different types of information are available and compete for the attention of the participant. Such a scenario would allow conclusions for real-life settings in which various cues are available for estimating the competence and credibility of a source of information. In sum, our research suggests that confidence information positively affects competence perception, although this information might lose some relevance when competing with other information (e.g., formal education, degree, age).

With regard to source credibility, it is also important to mention that we limited our conceptualization of credibility to just one of the various facets usually seen as part of the concept (e.g., trustworthiness, expertise, or dynamism; see Pornpitakpan, 2004). We limited our view because, within a pure learning context, we argue that—as opposed to research on opinions and persuasion—competence is the main contributing factor. Trustworthiness, for example, may be important in marketing or opinion building (especially for topics controversially discussed in society, which we deliberately avoided in our study), and may even contravene our view on cue validity: confidence from a non-trustworthy source may be seen as a strong persuasion attempt and may even lead to reactance and the rejection of confidently presented ideas because people suspect an underlying agenda. In a very de-personalized context with a pure focus on learning and little room for debate (all assumptions could be objectively proven right or wrong), however, we do not expect trustworthiness to be a major factor. Thus, constraining the link between confidence and cue validity to changes in competence perception may fall short in other settings, especially when the issue at hand is opinion-based rather than knowledge-based, as is often the case in informal social media settings.

Apart from the effect of partner confidence on perceived partner competence, we also assumed said competence to affect the adjustment of answers. Our results did not support this hypothesis. While it is possible that learners use perceived competence as a cue to validate partner assumptions, any possible impact was not strong enough to effect a "change of heart", and the perception of competence thus cannot be regarded as a major contributor to cognitive changes.

Hypothesis 3: Impact on regulation

The last hypothesis referred to the impact of social influence on regulation, or rather whether metacognitive regulation was the main force within these scenarios. As expected, metacognitive regulation occurred to a similar extent as has been found in other studies using confidence judgments within social scenarios (Schnaubert & Bodemer, 2016), although to a smaller extent than observed in individual studies (Schnaubert & Bodemer, 2017). However, the coefficients and data used in these other studies (phi coefficients, binary data) differed considerably from those used here (point-biserial rank correlations, ordinal and binary data). Other studies using ordinal data, like Thiede and colleagues (2003), who used gamma correlations on comprehension ratings to assess regulation, also found similar coefficients. We expected the conflict-based regulation examined in other studies to be due to metacognitive adjustments to the newly available data, and thus expected conflict-based regulation to dwindle and be close to zero after adjustment; however, participants in our study showed only low levels of conflict-based regulation both before and after adjustment, with no significant difference between those occasions. This was surprising, since prior research had shown conflict to be a clear initiator of regulation attempts (Schnaubert & Bodemer, 2016). In our study, however, conflicts were presented via spatially separated ratings on an 11-point scale, while in the study by Schnaubert and Bodemer (2016), conflict perception had been supported by a visualization putting much focus on the absolute difference between assumptions rather than on the relative distance of assumptions integrating confidence. Such representational mechanisms (see also Suthers & Hundhausen, 2003) may have shifted the focus in the mentioned study, promoting the perception and thus the experience of socio-cognitive conflicts, while our study may have deflated such an experience by relativizing conflicting assumptions, placing them on a spectrum. Having said that, we did find indications of conflict-based regulation being non-equivalent to zero, and although the regulation coefficients were very small, it is worth noting that information on others' conflicting assumptions may be used to make study decisions beyond its effect on metacognitive evaluations, e.g., by provoking interest in specific assumptions. Considering the generally low coefficients of conflict-based regulation also prior to possible adjustments due to metacognitive re-evaluation, we cannot convincingly make a case for or against our third hypothesis (3b), which claimed that conflict-based regulation as a means of regulating learning is due to shifts in metacognitive evaluations of knowledge, which alter metacognitive experiences and lead to changes in learning via a mechanism of metacognitive regulation. While we could confirm the shifts in metacognitive evaluations of knowledge and the link between metacognitive experiences and learning processes (metacognitive regulation), without the prevalence of conflict-based regulation, the other claims could not be convincingly studied. Thus, there is still limited understanding of how socio-cognitive conflicts affect metamemory and how they trigger self-regulated learning processes. Merging research on (socio-)cognitive conflict with metamemory research would give more insight into the processes involved in knowledge construction in social settings.
Further research should use experimental designs that allow for (within-subject) mediation models to foster our understanding of how self-regulated learning processes unfold and to reveal the mechanisms by which conflicting assumptions affect individual learning in social settings. Additionally, it should take into account representational mechanisms and how the depiction of conflict may affect its perception and thus its impact within social learning scenarios.

General issues

Apart from the issues discussed above, some aspects concerning the specifics of the study and its design should be discussed in general. Probably the most critical issue concerns the social effects assumed in our rather depersonalized setting. While we opted for a very minimalistic setting, void of unnecessary social information, to avoid effects that may have hampered the internal validity of the design, it remains open whether the effects found were due to social influence or to other effects not related to the social aspect of the information. The absence of further source information may have had two effects: participants may have suspected that the information was computer- or experimenter-generated, and participants may have been influenced by the lack of social information. A third aspect concerns the presentation of the self- and social information and thus the scale used, as visualization techniques are assumed to affect the cognitive processing of (social) information (Schnaubert et al., 2020). Thus, the issues to discuss concern primarily the authenticity of the experimental setup, the social interpretation of the situation, and the scale used within the study.

Authenticity of the learning partner

Regarding the first issue, it is important to take a closer look at the sample and the instructions provided. The instructions told participants that the information came from other learners who had previously worked on the data, and referred to the other person as a "learning partner" throughout. The participants were mainly students in a course of study related to psychology and may thus have had some experience with psychological designs and been aware of the possibility of deception within such experiments. However, the department conducting the experiments is known to focus on social and collaborative learning and interaction processes, and we took care that the answer patterns of the learning partner gave a reasonable impression of real data. Using comparable instructions on the origin of the social information (prior participants) and a sample of psychology students, McGarty and colleagues (1993) found that roughly half of their participants doubted at some point during the experiment that the information was real. However, according to the authors, their re-analyses showed no effect of this post-experiment judgment on the results, indicating that some doubt may not affect how participants react to social information even if social cues are limited.

Research on social influence in general relies heavily on pseudo-collaborative settings to control for social cues that may confound the design. However, this raises the question of the validity of interpreting the found effects as being due to social influence, especially since we did not conduct a manipulation check. Thus, we will discuss the effect that doubts regarding the experimental manipulation may have had on our particular experiment. If participants did not believe the instructions given, they may have treated the partner as a hypothetical rather than a real person. This may especially affect the information gained for hypotheses 1 and 3. Hypothesis 2 should be less affected, as judgments of characteristics may also be made in hypothetical situations, as is frequently done within the social sciences (e.g., when using vignettes to judge persons or scenarios; Atzmüller & Steiner, 2010). While we did find clear effects in support of hypothesis 1 (convergence), a hypothetical partner may not have been taken seriously enough to initiate learning attempts and may have weakened our case for hypothesis 3: conflicts may have less impact in hypothetical situations than when dealing with a real partner. This leaves the question of possible alternative explanations for the effects on convergence (hypothesis 1), assuming participants doubted the design. As mentioned above, anchoring may have contributed to the effects, although it seems unlikely to be the sole contributor. Additionally, we know from social media research that, even in real-life settings, technical sources (e.g., primitive social bots; Ross et al., 2019) and elusive sources (e.g., regarding fake news; Moravec et al., 2019) may exert considerable social influence on individuals.

Taken together, even if participants' doubts may have weakened the effects within our study, the results still support (some of) the assumed effects on metacognition, and it seems reasonable to assume that social influence contributed largely to these effects.

Social interpretation of the situation

Apart from the authenticity of the learning partner, another effect of the minimal social contextual cues provided within our study may be that participants perceived the situation as not particularly social. Thus, the second issue refers to the problem that, even if participants believed the information to come from another person, effects may still have been distorted by the lack of available social context information. The laboratory setting and the minimal information on the supposed learning partner may have led to the situation being perceived as less social. While it is beyond the scope of this paper to dive deeply into the implications of restricted cues in media-based communication for the perception of others in general, it is worth noting that the selective availability of social cues is inherent to computer-mediated communication and online learning and is thus the focus of a broad array of research covering socio-emotional and cognitive processes (Kreijns et al., 2003). Within educational psychology and the learning sciences, group awareness research, for example, focuses on the perception and cognitive processing of social context information and has found such information to affect social learning and interaction processes (see Janssen & Bodemer, 2013). Social psychological and communication research also explicitly focuses on socio-emotional aspects affecting communication and compliance with group norms (e.g., Reicher et al., 1995). For example, social presence can be conceptualized as the degree of salience of another person and the interpersonal relationship, and thus the degree to which a person is perceived as real (Gunawardena, 1995); it has been found to affect not only socio-emotional well-being, but also the willingness to share information within a group (e.g., Cress, 2005). Thus, both areas study how social cues affect (inter-)action within digital environments. However, the proposed effects of social context information during social interaction are contradictory. Some researchers suggest that a lack of social information impairs goal-oriented interaction and regulation processes (e.g., Bodemer & Dehler, 2011) and may lead to rather self-focused thinking and behavior (e.g., Kiesler et al., 1984). Others propose that the lack of social information may even accentuate social processes and conformity to social norms (e.g., Reicher et al., 1995; J. B. Walther, 1996).

Thus, while it seems clear that social context information may profoundly affect reactions to social stimuli, and the minimal social cues we provided learners with may thus have affected the results, research dealing with social settings always has to find a middle ground between controlling potentially confounding variables to foster internal validity and using rich media and real partners or personas to potentially gain higher external validity. With such a trade-off being inevitable, pseudo-collaborative studies are often used in research on social influence and offer a way to control for the confounds of real data. We thus made the deliberate decision to avoid social context information in order to prevent effects of gender, age, physical appearance, or other factors from tainting the results of this study, thereby limiting the impact of specific source information on the study's results. While this may be regrettable from a holistic viewpoint, it allowed us to strictly control potentially confounding variables.

While our findings may thus not be fully generalizable to social settings in which learners know each other or have rich social information available, they may be transferable to situations in which social context information is restricted, such as on social media platforms and forums, where often minimal information on other users is available and learners are free to use the available information as they please. Similarly, the learning context within our study consisted of minimal instructions on how the partner information should be used and why. Again, this is consistent with informal learning situations in which learners are confronted with other information (e.g., on social media) and are free to use (or not use) this information. As the perception of a situation as social is non-binary, study design decisions will always affect the transferability of effects to different situations. Thus, the setup of our study limits the generalizability of our results to anonymous situations with limited social context information and should be complemented by additional research using richer social information in authentic learning settings.

Combined confidence-assumption scale

Last but not least, the depiction of social and self-information in our study may have affected processing. Although this is inherent to all study designs using external representations of information of any kind, it is important to discuss the implications the specific visualization may have had for the results. The integrated depiction of assumptions and confidence used in our study may have affected the learners. As opposed to other research dealing with visualizations of assumptions and confidence in these assumptions (e.g., Schnaubert & Bodemer, 2016, 2019), we opted to integrate confidence and assumption on one scale. While this may change the perception of conflict, as stated above (see discussion of hypothesis 3), it is in line with the interpretation of confidence as representing evidence for and against certain alternative positions (Gigerenzer et al., 1991). Some researchers, adding a validation component, have proposed such a view of knowledge, ranging from misinformed via uninformed to informed (e.g., Bruttomesso et al., 2003). However, this depiction of confidence and assumptions on an integrated scale may also have fostered an integrated perception of the concepts by the participants. It is unclear whether participants perceive these concepts in the same way when unprompted by visualizations, or whether their raw mental representation deviates from this. Asking learners to map their mental representation of their own metacognitions onto the given scale during assessment may thus not only trigger metacognitive evaluation processes (Schnaubert & Bodemer, 2017); the learners' own interpretation of said information may also change in reaction to the depiction of the scale. While visualizations that foster the perception of knowledge-related concepts in a certain way may be helpful assets in instructional design and are exploited by researchers working on implicit guidance through visualizations (see Janssen & Bodemer, 2013), this may give a false impression of how learners use such information in other contexts with different setups. Thus, it is important to point out that convergence, as measured by minimizing the distance between participant and partner on the integrated scale, may only have been triggered by said integrated representation in the first place. For generalization beyond specific external representations of the concepts involved, research would need to replicate the findings in different scenarios using different means of assessing and depicting the data. In general, research dealing with any kind of information provision needs to consider the effects of the way data is depicted, as this may not only hamper external validity but may in fact contribute to the effects in a way that questions the internal validity of the results as well.

Outlook

Our study is one piece of the puzzle of disentangling social influence on metacognition, and especially on confidence perception. While other researchers study these effects (e.g., Koriat et al., 2016, 2018), our research complements existing work by using within-subject designs, allowing us to study how learners adapt to socio-cognitive information and how this information impacts metacognitive monitoring and regulation, thereby providing insights into the dynamics of self-regulated learning in social settings. This may further our understanding of the complexity of social influences on individual learning and help us gain insight into the mechanisms relevant in social learning scenarios like web-based learning in open learning environments and social media contexts.

Apart from contributing basic knowledge about the relationship between socio-cognitive information, socio-metacognitive information, metacognitive evaluation, and regulation, this research can also contribute to instructional design. The issues discussed regarding the depiction of concepts (conflict representation vs. integration of assumption and confidence) in particular show how data assessment and representation may impact the interpretation and thus the handling of data when learners regulate their learning. The power of representational guidance mechanisms (see Suthers & Hundhausen, 2003) may be exploited especially within self-regulated learning, since implicit guidance mechanisms allow for agency while still supporting learning processes (Schnaubert & Bodemer, 2017). Thus, apart from studying social influence and metacognition, our research provides a basis for further exploring the possibilities and limitations of the instructional power of representational guidance mechanisms within self-regulated and even collaborative learning research.

While this research is especially relevant for individual learning settings embedded in an anonymous social environment with limited social cues, there are other layers of social influence to consider. While social information in web-based and social media learning may often be viewed merely from the perspective of an individual and his/her individual learning process, a key feature of such settings is the possibility of direct interaction and even collaboration. Thus, the distinction between recipient and provider of information may blur, and the dynamics may shift from individual self-regulated learning to collaborative knowledge construction. Such processes may involve argumentation, knowledge sharing, and directly addressing each other in an interactive or transactive manner (e.g., Cress et al., 2015). These interwoven designs are even more complex, and while there is increasing research addressing self-, co-, and shared regulation (see Järvelä & Hadwin, 2013), the impact of social interaction on metamemory processes is still understudied. Various research groups have recently begun bridging the gap between these research areas, especially metacognitive self-regulation and collaborative learning (e.g., Järvelä, Järvenoja, Malmberg, Isohätälä, & Sobocinski, 2016; Miller & Hadwin, 2015; Molenaar & Järvelä, 2014). Combining this work with the knowledge gained from highly structured research on social influences on metacognition (e.g., Koriat et al., 2016, 2018) and applied research on cognitive group awareness within the field of computer-supported collaborative learning (for a recent overview see Bodemer, Janssen, & Schnaubert, 2018) could be a major contribution to these fields. Transferring insights from individual to collaborative scenarios can contribute enormously to understanding the even more complex dynamics within groups and to empowering learners to deal with metacognitive challenges in social scenarios.