Because scenarios in IPCC assessments aim to convey scientifically credible information, we contrast the Intuitive Logics and CIB methods with respect to criteria of objectivity. We chose these criteria because objective methods in science are customarily taken to provide a path to understanding real and independently existing things, properties, and processes.
Different meanings of objectivity
There are several quite distinct meanings of the terms ‘objective’ and ‘objectivity’ that have been explored in detail by philosophers of science such as Helen Longino, Heather Douglas, and Elisabeth Lloyd. Thus, ‘objective’ may mean: (1) public, (2) detached, or (3) unbiased, all of which are methodologically oriented. Two additional meanings are more metaphysically oriented: (4) independently existing from us, and (5) real or “really real” (Lloyd 1995). Additionally, objectivity often makes reference to social operations and relations, which include (6) procedural objectivity and (7) interactive or structural objectivity (Douglas 2004; Longino 1990). We shall examine the CIB and Intuitive Logics approaches with respect to each of these meanings of ‘objective’ and ‘objectivity’ in Sect. 5.2, after first introducing each meaning in more detail here.
The term ‘objective’ sometimes means (1) public, publicly accessible, observable, or intersubjectively available for inspection, at least in principle. In other words, an object or action is ‘objective’ (1) when it is performed, perceived, or exists in open view, or is openly accessible. Scientists are often very concerned about this methodological form of objectivity, due to its importance in persuading others and its role in the public accessibility of scientific evidence.
Alternatively, ‘objective’ also sometimes means (2) detached, disinterested, independent of will or wishes, or impersonal (Douglas 2004; Lloyd 1995). In detached objectivity (2), one’s values should not blind one to the existence of unpleasant evidence simply because one is invested in a particular view (Douglas 2007, p. 133).
This meaning is closely related to another: (3) unbiased. To clarify, if a person is being objective (3) in the sense of ‘unbiased,’ she may still make mistakes, but they will fall randomly with respect to the outcome of interest, and not lean in a particular direction. The notion of being unbiased here is basically statistical. A person can have a stake in the outcome of events and still be unbiased. For example, a father can hope that it doesn’t rain at his daughter’s wedding, and thus fail to be detached or disinterested, but still be unbiased, i.e., objective (3), in his estimation of this result. Nevertheless, there are problems of cognitive biases, discussed earlier, which do routinely affect our judgment. Even though a scientist may not lean in a particular direction consciously, and may deliberately be committed to remaining unbiased, biases such as availability and overconfidence can still affect the outcomes of a study or experiment, because these mental processes operate below the conscious level. Such cognitive biases are systematic, particular, and undesired. Because these unconscious psychological biases ordinarily affect our judgment, they should be taken into account when setting up and evaluating judgments, as well as the methods used to evaluate judgments.
In sum, the first three meanings of ‘objective’—public, detached, and unbiased—pertain to methodology; they are understood as ways that scientists can conduct investigations in order to gain access to scientific facts or truths.
Another meaning of ‘objective’ has more to do with the shape of reality itself: (4) independently or separately existing from us. It is distinct from the final meaning: (5) real, or really existing (Lloyd 1995). You can tell the difference between independently existing and real by considering dreams. They are real, they really exist, but they do not exist independently from us. Scientists are usually in the business of trying to discover or explore that which is real, and that which exists independently from us, that is, things that are ‘out there in the world’: facts, events, mechanisms, and processes.
There are long-standing philosophical disputes about whether and how science may produce knowledge of independently existing things or processes—indeed, whether there exist any at all—but we shall lay them aside for the much more modest purposes of this paper (Boyd 1983; Fine 1986; Lipton 2004; Peirce 1878; van Fraassen 1980). In a more pedestrian fashion, there is customarily understood to be a very important but usually unspoken link between the first three meanings of ‘objective’ and the next two: the first three, methodological meanings—public (1), detached (2), and unbiased (3)—are believed to be the means and methods that lead to knowledge of the next two, independently existing things (4) (should there be such) and real events, mechanisms, and processes (5) (Lloyd 1995). Finally, the socially oriented meanings of ‘objectivity,’ procedural objectivity (6) and interactive or structural objectivity (7), can be used to reinforce the methodological support provided by the first three meanings of ‘objective.’
For example, consider procedural objectivity (6), which “occurs when a process is set up such that regardless of who is performing that process, the same outcome is always produced” (Douglas 2007, p. 134; drawn from Megill 1994; Porter 1992). Such socially organized processing of information, experiments, or devices will produce reliable outcomes no matter who is doing the processing or operating the method. This forced anonymity or interchangeability of the processor precludes the individual processors’ biases and wishes from influencing the outcome of the process, whatever those biases and wishes may be. Procedural objectivity thus embodies and reinforces two of the previously discussed forms of ‘objective’: detached (2), i.e., disengagement from the desired results, and unbiased (3), i.e., statistical neutrality in the mistakes made relative to a true value, owing to the lack of influence of an experimenter’s own conscious or unconscious biases. This ‘procedurally objective’ processing thus provides a set of extremely powerful virtues, which we will discuss below in comparing CIB with Intuitive Logics.
Procedural objectivity (6) is also closely related to replicability, i.e., the reproducibility of the same experiment, observational procedure, or process for the purposes of testing or confirming an idea, theory, model, or hypothesis. Replicability is a standard requirement in nearly every field of the empirical and some of the theoretical sciences. The notion that other researchers or observers should be able to repeat a set of instructions or procedures for measurement or observation (i.e., participate in procedural objectivity (6) regarding observations), and then reproduce results relevantly similar to those of previous observers, is built into the notion of public objectivity (1) and its central place in scientific methodology. Because replicability also involves having other scientists perform ‘the same’ procedures, there is an assumption that the scientists carrying out such a procedure will be objective in the sense of unbiased (3): it is assumed that they will conduct the procedures fairly, and that any mistakes will tend to fall randomly to either side rather than lean in one direction.
Finally, we consider ‘interactive,’ ‘structural,’ or ‘transformative’ objectivity (7): agreement achieved through intense debate or discussion among peers in the scientific community, where the emphasis is on the degree to which “both its procedures and its results are responsive to the kinds of criticisms described,” according to Longino (1990, p. 76; see also Hull 1988). These criticisms include critiques of evidence, experimental design, and theory, as well as of the background assumptions that underpin all these factors. “Interactive objectivity occurs when an appropriately constituted group of people meet and discuss what the outcome should be,” writes Douglas (2007, p. 135). Such a social vision of objectivity requires a community of interlocutors as well as standards for their engagement.
In sum, adherence to the first three meanings of ‘objective’—the methodological meanings, public, detached, and unbiased [as well as replicability, which is made possible by the social-methodological meaning, procedural objectivity (6)]—is taken to be important in the practice of most of the sciences; it is a promise to gain real knowledge of reality itself. Whether adherence to meaning (7), interactive or structural objectivity, is methodologically effective gets at the core of the issues relating to CIB versus Intuitive Logics methods in scenario building, which we discuss in the next section.
Comparison of Intuitive Logics and CIB with respect to objectivity
Significantly, the CIB method is more objective than Intuitive Logics in several of the distinct ways mentioned above. First, the CIB method has significantly more (6) procedural objectivity, which at last makes the development of storyline scenarios replicable. As discussed in the previous section, replicability refers to the notion that other researchers or observers should be able to repeat a set of instructions or procedures for measurement or observation, and then reproduce results relevantly similar to those of previous observers. Here CIB, which is procedurally objective (6) and thereby virtually completely replicable, wins out hands down over Intuitive Logics, which, because it involves the vagaries of group interactions among human beings, is not replicable with regard to its results. This procedural objectivity also enables the CIB method to be more ‘objective’ under other meanings of the term.
For instance, CIB is more ‘objective’ in the sense of being objective (1): public, accessible, intersubjective, and publicly available for inspection. The Intuitive Logics methods involve convening a small set of people, usually experts in various fields, to build scenarios. In contrast, when one builds scenarios using a CIB methodology, there could in theory be hundreds of experts providing documented input for scenarios. The two methodologies differ sharply in the access that others not present during the building of the scenarios, such as outside policy-makers and scientists, have to the information used to develop the scenarios and to the processing of that information. More specifically, in the case of CIB methods, all of the expert judgment values, as well as the final estimations of the internal consistency of the scenarios, are objective (1), available for public access and inspection at any time.
In the Intuitive Logics case, in contrast, the final scenarios are available for public inspection, as are possibly some or all of the discussions leading to those final scenarios, perhaps through a transcript offering a narrative account, but no calculations or estimations involving any complex correlations. In Intuitive Logics, the complex expert judgments regarding the aggregate effects of the variables shaping the final scenarios, and especially their combinations and correlations, remain in the experts’ minds, completely inaccessible to the public, because they are never made explicit or explicitly calculated while narrative versions of the scenarios are fleshed out.
Recall here that the CIB methods involve eliciting expert judgments only about pairwise correlations between two variables at a time, thereby avoiding the complex, concluding multi-variable judgments of the sort made in Intuitive Logics contexts, which can involve as many as ten, fifteen, or more correlated variables simultaneously. As we discussed above, the thoughts and judgments concerning complex, aggregate sets of correlations among variables made in Intuitive Logics contexts are not explicitly calculated by anyone in the Intuitive Logics discussions; they thus remain unavailable for public inspection at any time, and so are not objective (1), accessible and public, even if a meticulous narrative account of the Intuitive Logics process is made available. The contrast between Intuitive Logics and the CIB methods is a stark one: all of the expert judgments used in CIB calculations of the complex correlations, the interrelations between variables, and the grand conclusions regarding the consistency of a particular set of variables are recorded explicitly and are completely accessible and available for public inspection, and thus objective in the public (1) sense. Meanwhile, almost none of these are available in Intuitive Logics contexts, except for a few key variables and the aggregate, general conclusions of the full storyline scenarios at the end.
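To make these mechanics concrete, the following is a minimal sketch, in Python, of how a CIB-style consistency calculation can proceed from recorded pairwise judgments. The descriptor names, state labels, and scores are our own illustrative assumptions, not values from any published elicitation; the balance rule encoded here (a scenario counts as internally consistent when no descriptor has an alternative state receiving a strictly higher total impact from the other chosen states) follows the general logic of cross-impact balance analysis.

```python
# Illustrative sketch of a cross-impact balance (CIB) consistency check.
# All descriptors, states, and scores below are hypothetical examples,
# not judgments from any actual expert elicitation.
from itertools import product

def impact_balance(scenario, descriptors, impact):
    """Total impact each candidate state receives from the states
    chosen for the *other* descriptors in the scenario."""
    balance = {}
    for d, states in descriptors.items():
        for s in states:
            balance[(d, s)] = sum(
                impact[(d2, scenario[d2])][(d, s)]
                for d2 in descriptors if d2 != d
            )
    return balance

def is_consistent(scenario, descriptors, impact):
    """A scenario is internally consistent when no descriptor has an
    alternative state scoring strictly higher than the chosen state."""
    balance = impact_balance(scenario, descriptors, impact)
    return all(
        balance[(d, scenario[d])] >= max(balance[(d, s)] for s in states)
        for d, states in descriptors.items()
    )

def consistent_scenarios(descriptors, impact):
    """Mechanically test every combination of states: the same inputs
    always yield the same scenarios, whoever runs the procedure."""
    names = list(descriptors)
    for combo in product(*(descriptors[d] for d in names)):
        scenario = dict(zip(names, combo))
        if is_consistent(scenario, descriptors, impact):
            yield scenario

# A toy two-descriptor example: each entry records one pairwise judgment,
# e.g. economic growth promotes (+2) strict climate policy.
descriptors = {"economy": ["growth", "stagnation"],
               "policy": ["strict", "lax"]}
impact = {
    ("economy", "growth"):     {("policy", "strict"):  2, ("policy", "lax"): -2},
    ("economy", "stagnation"): {("policy", "strict"): -2, ("policy", "lax"):  2},
    ("policy", "strict"):      {("economy", "growth"):  1, ("economy", "stagnation"): -1},
    ("policy", "lax"):         {("economy", "growth"): -1, ("economy", "stagnation"):  1},
}
results = list(consistent_scenarios(descriptors, impact))
# Two of the four combinations pass the balance test:
# {growth, strict} and {stagnation, lax}.
```

Because every judgment occupies a named cell of the matrix, any single entry can be inspected, challenged, or revised, and the consistency test simply rerun.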
We would like to note at this point a significant virtue of the CIB methods that is unavailable to Intuitive Logics methods. In addition to being available for public inspection, the expert pairwise judgments that contribute to the CIB analyses are also easily challengeable and revisable at any time. That is, any judgments regarding pairwise variable values recorded for a CIB analysis can be updated and modified piecemeal as new information or analyses develop, unlike in Intuitive Logics situations, which would require re-convening the entire expert panels. This makes the CIB methods more flexible and responsive to improvements in both data and theory. The ease and openness of revising the judgments help establish the ‘public’ objectivity (1) of the CIB method, while also boosting the scientific credibility and power of the method through its ability to handle new scientific evidence. Note that this ease of revision also enhances structural objectivity (7). The community’s responsiveness to criticism plays the vital role in this form of objectivity, and here we can see how the CIB method facilitates the updating and criticism needed for objectivity (7) (Longino 1990).
Procedural objectivity (6) also enables CIB to be more objective under meaning (2), more detached, as well as under meaning (3), less biased. A given expert’s conscious attachment or unconscious cognitive bias towards a given result or variable value is muted by this method, through the procedure of eliciting their expert opinion only about pairwise correlations between individual variables, and not about trends or massive correlations between interdisciplinary collections of variables. The particular procedural objectivity of CIB bolsters the detached and less biased meanings of objectivity in two ways. First, the expert’s possibly biased opinions about the direction of change of the overall scenarios due to some particular factor(s) are not counted or surveyed, or, if untoward biases are included, they are visible and modifiable by others. Second, the requirement that judgments used to arrive at full scenarios be recorded pairwise is a disaggregation technique, and such techniques have been shown to improve the calibration of judgments of quantities unknown to participants in psychological experiments but known in reality, such as “How many packs of Polaroid color film were used in the USA in 1970?” (Armstrong 1978 cited in Morgan and Keith 2008; MacGregor and Armstrong 1994 cited in Morgan and Keith 2008). The implication of these studies is that for judgments of large, unknown quantities with high uncertainty (and here it should be noted that socioeconomic scenarios include judgments about ranges and rates of change for future, unknown quantities spanning the globe as well as large continental regions and sometimes countries), disaggregation of judgments corrects for more of the individual cognitive biases discussed in Sect. 3.3 than does eliciting such judgments from individuals holistically.
With the Intuitive Logics methodologies, which elicit unaided or minimally aided (and always aggregate or holistic) judgment, biases may even be seen to be encouraged. With CIB, it is the opposite. Through the requirement to record each expert’s judgments in each judgment cell, unconscious cognitive bias is counteracted through disaggregation. Additionally, through the public display of judgments in judgment cells, any personal and theoretical biases are made more obvious. Thus, any untoward biases, whether unconscious or conscious, can be revealed and managed through the process of recording disaggregated judgments and through comparisons with other expert judgments. Under the Intuitive Logics methods, such biases are simply incorporated into the resulting scenarios without any chance for piecemeal revision or improvement. Through procedural objectivity (6), the CIB approach thus leads to more detached and less biased scenarios, that is, to more ‘objective’ (2) and (3) scenarios.
Despite Intuitive Logics’ clear weaknesses with respect to ‘objective’ meanings (1) public, (2) detached, (3) unbiased, and (6) procedurally objective, Intuitive Logics’ proponents may attempt to claim superiority with respect to ‘objective’ meaning (7), interactive or structural objectivity, because the approach invites a variety of experts to take part, as a group, in the scenario building process. As mentioned above, the details of group membership and interaction are crucial for understanding interactive or structural objectivity (7). For example, how diverse should the group be, and with what expertise? Both Intuitive Logics and CIB are subject to the worry, “Whose judgments are behind the scenarios?” Both methods also require the input of judgments from a variety of disciplines. A big difference between the Intuitive Logics and CIB methods is that CIB makes these judgments objective (1), publicly accessible, and able to be updated piecemeal, as just discussed, while Intuitive Logics masks or precludes this ability by design. The case for structural objectivity (7) is thus more compelling for CIB, given its public accessibility (1).
A further issue facing any claims of structural objectivity (7) for Intuitive Logics is that we have abundant reason to expect the ‘groupthink’ biases discussed in Sect. 3.3 to arise in the context of Intuitive Logics scenario building. This is because Intuitive Logics, unlike CIB, requires the direct, face-to-face interactions of a small group of experts (Janis 1972).
Solomon’s analysis of why the groupthink biases discussed in Sect. 3.3 lead results astray is very instructive. She first summarizes Surowiecki’s finding of three important conditions for a group to make an epistemically good aggregate judgment: independence, diversity, and decentralization. ‘Independence’ means that each person makes a judgment on his or her own, while ‘diversity’ requires that the individuals making the judgments be sufficiently diverse in both knowledge and perspective, which varies case by case. Finally, ‘decentralization’ means that the actual process of aggregating information treats each person’s judgment equally: no expert or authority is weighted more heavily in the group.
Solomon then analyzes why this set of conditions is effective at producing the epistemically superior outcomes that it does. Significantly, she notes that the various pressures that are typical in group settings, reviewed above, as well as the salience of vocal group members, can lead to the suppression of important information. Consider the fact, she says, that aggregated individual judgments are often the best predictors, and the fact that such aggregates include all of the opinions of the individuals in the group, in their full diversity (2006, pp. 35–36). Such individual opinions “are often based on particular pieces of information that may not be generally known,” she writes (2006, p. 36). It is when these individuals in their full diversity are overruled or suppressed by groupthink, Solomon argues, that information is lost to the group. It is in just this way that simple and plain aggregation preserves information that is lost by group interaction, the latter of which therefore becomes ‘less intelligent.’ Solomon sums up: “Dissent (i.e., different assessments by different individuals) is valuable because it preserves and makes use of all the information available to the community” (2006, pp. 36–37).
Now consider this sort of mechanism of aggregate knowledge production with regard to the Intuitive Logics and CIB methods. Surowiecki’s three key requirements for epistemically superior outcomes are independence, diversity, and decentralization. Our concern is with the disappearance or neglect, during analysis and deliberation, of what we can call ‘marginalized facts or judgments’ (in brief, ‘marginalized information’): facts known by people in the group who are discouraged, for whatever reason and through the means listed above, from voicing their knowledge. This concern applies to nearly all Intuitive Logics scenario building methods.
These would be situations in which the requirements of ‘decentralization’ and diversity have failed. Intuitive Logics methods are vulnerable to the various groupthink biases mentioned above precisely because experts cannot relay truly independent judgments, unless groupthink biases are successfully mitigated by some counter-method, a topic we will discuss in a moment. The consequence is that not all knowledge relevant to the outcome or judgments actually gets taken into consideration by the group, which then amounts to a clear failure of ‘decentralization.’ Contrast this with the situation in CIB, where even these ‘marginalized facts and judgments’ can get taken into consideration; for example, even the most unpopular facts or judgments, those most obscure to or disfavored by the leading voices and views within a controversial field, can be equally weighted and compared with other, more popular views in the processing of the information most relevant to the outcome being analyzed.
One might ask whether Intuitive Logics approaches could correct for the dangers of groupthink with modifications such as the inclusion of an impartial moderator, which could also have the potential to protect the crucial ‘marginalized information.’ In this regard, there are a variety of methods and suggestions designed to help groups overcome or manage groupthink and related disabling biases, all of which hark back to the three conditions that group judgments must satisfy to attain epistemically superior outcomes. However, it appears to be difficult to apply successfully the suggestions for making group interactions most effective. Unless the group follows some kind of structured procedure designed to avoid groupthink and the other cognitive biasing phenomena discussed earlier, its results are doubtful.
Such procedures include focusing on increasing the ‘diversity’ of group membership—which is supposed to bring a diversity of distinct perspectives to the discussions in the group—and actively encouraging, rather than merely tolerating, dissent (see Sunstein 2003; Surowiecki 2004). In a sense, dissent could be seen as a particular type of ‘diversity’ within the group. However, this last measure can backfire, as Janis and Solomon discuss. Some members of the group who are recognized as opinion leaders may not be psychologically able to tolerate criticism or dissent; similarly, dissent may demoralize or anger others, all of which may distract a group from its original goals (Janis 1982, p. 252; Solomon 2006, pp. 32–33).
Recognizing that all deliberative efforts value group ‘diversity,’ let us consider other methods that attempt to avoid groupthink biases by aiming to enhance other epistemically important conditions. First, there is the Delphi method, where participants do not initially meet face-to-face, but rather interact through an exchange of anonymous assessments, sometimes supplemented by reasons for their opinions. Individual group members are then given the opportunity to adjust their judgments in light of anonymous information collected from their peers. After a few iterations of collecting and adjusting judgments from group members independently in this way, the group is convened to react collectively to the information previously collected anonymously and to arrive at a consensus (Dalkey 1969; Morgan and Henrion 1990 p. 165). Under the Delphi method, ‘independence’ is elevated by initially preventing the interaction of group members; however, ‘independence’ is progressively diluted by asking experts to adjust their judgments in light of their peers’ responses and by pursuing a consensus opinion.
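The progressive dilution of ‘independence’ under iterated anonymous feedback can be illustrated with a toy numerical model. This is our own illustrative construction, not a model drawn from the Delphi literature, and all the numbers are assumptions: suppose that each expert, on seeing the previous round’s anonymous group mean, moves some fixed fraction of the way toward it.

```python
# Toy model (hypothetical) of Delphi-style iteration: each round, every
# expert moves a fraction `pull` of the way toward the previous round's
# anonymous group mean, diluting the independence of the judgments.

def delphi_rounds(estimates, pull=0.5, rounds=3):
    for _ in range(rounds):
        mean = sum(estimates) / len(estimates)
        estimates = [e + pull * (mean - e) for e in estimates]
    return estimates

initial = [10.0, 20.0, 60.0]    # three independent first estimates
final = delphi_rounds(initial)  # [27.5, 28.75, 33.75]
# The group mean stays at 30.0 throughout, but the spread of opinion
# shrinks by the factor (1 - pull) each round: dissent is compressed
# toward consensus without any new information entering the process.
```

The point of the sketch is that the iteration changes only who agrees with whom, not what the group collectively knows: the dissenting estimate is progressively absorbed rather than examined.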
The Nominal Group Technique is also available, in which participants, as a group, begin deliberation by generating judgments silently and independently, but then quickly move to a group discussion that follows a structured format carefully designed to prevent any one person from dominating the proceedings. The final assessments of the alternatives discussed by the group are then made individually and aggregated, for instance by having each group member individually rank the alternatives and then having the moderator record which alternatives received the highest total ranks (Gustafson et al. 1973). Under the Nominal Group Technique, there is also a place for ‘independence,’ but more effort is focused on aiming for ‘decentralization’ in processing the judgments of the group. However, it is questionable how well ‘decentralization’ is retained, since each member’s contribution to the group deliberation naturally aims to persuade other group members to see the situation as the speaker does.
Such procedures, and variations on them, have been formally tested and evaluated by social psychologists seeking solutions to groupthink biases, but they have proven ineffective, with little difference among them. In the experiments, interaction among the participants “of any kind seems to have increased overconfidence and so worsened calibration” (Morgan and Henrion 1990, p. 165).
All of this explains why common modifications to natural group processes may not be enough when they focus, to varying degrees, on ‘diversity’ (e.g., by encouraging dissent), ‘independence’ (e.g., the Delphi method), or ‘decentralization’ (e.g., the Nominal Group Technique). Instead, we would likely need methods that provide all three conditions simultaneously and invariably, because emphasizing each condition to different degrees at different stages of the deliberation process may not overcome the epistemic harms and disadvantages of groupthink biases. Although we summarize here only a few doubts about the effectiveness of existing techniques to combat groupthink, it should be noted that the Intuitive Logics groups currently contributing to scenarios used by the IPCC are not using any of the aforementioned techniques to combat groupthink and its attendant loss of marginalized information.
Contrast the persistent problems of groupthink biases faced by Intuitive Logics with the results of Seaver, cited on the same page of Uncertainty: “Seaver (1978) found that simple mathematical aggregation with no interaction at all produced the best results, although he points out that the experts have more faith in assessments achieved through face-to-face interaction” (Morgan and Henrion 1990, p. 165). This finding is similar to that of Grove and Meehl, discussed in Sect. 3.3, who compared clinical versus mechanical methods, i.e., decisions based on subjective human judgments (possibly combined with discussion) versus algorithmic, objective procedures. Thus, the CIB methods, with their solicitation of pairwise variable judgments from diverse experts on a one-by-one basis, with complete independence (i.e., no need for face-to-face interactions among experts), and with their fully decentralized, mechanical aggregation of all judgments, parallel Seaver’s zero-interaction results, and we might similarly expect them to produce epistemically superior outcomes, all things being equal, as Grove and Meehl, as well as Surowiecki, have found, and as Solomon argues.
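A minimal sketch of what such ‘simple mathematical aggregation with no interaction’ amounts to (the scores below are our illustrative assumptions, not Seaver’s data): each expert’s estimate is recorded independently and combined with equal weights, so that no judgment, however vocal its author might be in a meeting, counts for more than any other.

```python
# Hypothetical sketch of Seaver-style mechanical aggregation: independent
# estimates are combined with equal weights, with no group interaction.

def aggregate(estimates):
    """Equal-weight mean: every judgment counts once ('decentralization'),
    and no estimate is adjusted in light of the others ('independence')."""
    return sum(estimates) / len(estimates)

# Five experts independently score the same pairwise cross-impact;
# the dissenting score of -1 is preserved in the aggregate rather
# than being talked down in discussion.
scores = [2, 3, 2, -1, 2]
combined = aggregate(scores)   # 1.6
```

The dissenter’s information survives into the aggregate, which is exactly the preservation of ‘marginalized information’ that Solomon credits to plain aggregation over group interaction.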
It can be seen, then, that a kind of structural objectivity (7), sometimes claimed or implied by Intuitive Logics in virtue of its inclusion of a variety of diverse experts, cannot be taken for granted—in fact, it may not be available to a meaningful extent at all, because of the groupthink biases discussed above, which have been shown to be difficult to mitigate by the variety of techniques so far offered.
Helen Longino set out standards for the interactions of scientific communities: such communities are objective (7), i.e., structurally or transformatively objective, to greater degrees to the extent that they meet these four standards: (a) recognized avenues for criticism; (b) shared standards of evaluation; (c) community response to criticisms; and (d) equality of intellectual authority. Let us consider whether an Intuitive Logics group is ordinarily set up along these Longinian lines, and can therefore partake in objectivity (7), structural or transformative objectivity.
For Longinian objectivity (7), one of the keys to attaining scientific objectivity is the structure of community standards and the transformative nature of criticism through those channels, expressed not in one decision or from one angle but in several or many, though still usually within one set of disciplinary practices. In addition, we need equality of intellectual authority among a diversity of participants in order to truly have a diversity of perspectives. Note that Longino’s structural objectivity (7) is essentially a within-discipline notion of objectivity, and does not extend very well or very clearly to a cross-disciplinary matrix or collection without some careful work, analysis, or reflection (see, e.g., Hackett et al. 2008; Hackett and Rhoten 2009). The CIB methods introduced in this paper are process- and procedure-based methods that neutralize individual biases as well as the groupthink biases threatening the Longinian group-mediated objectivity (7) she defends.
In sum, those promoting subjective methods such as Intuitive Logics have not proposed a strategy to overcome the individual cognitive biases, and especially the social groupthink biases, that cripple the consideration of significantly different scenarios. These facts alone raise questions about their claims of the superiority of their less structured approaches. Thus, the proposed abandonment of the usual, unspoken connection in scientific inference between objective methods [(1) public, (2) detached, (3) unbiased, and (6) procedurally objective and therefore replicable procedures] and increased knowledge of reality [(4) independently existing and (5) really real] seems to have little current evidence behind it on the part of the Intuitive Logics methodologists, once CIB methods are considered for scenario building in climate change assessments.
Uncovering futures (or future states) that are both challenging for scenario modelers and distant from their present circumstances is a primary goal of scenario building, and it is essential to the success of surveying a range of emissions futures for the scientific assessment process of the IPCC. However, a perennial and very grave problem in scenario development is that participants have great difficulty imagining outcomes that are very different from present experience (see the discussion of cognitive limitations in Sect. 3.3). The CIB method effectively neutralizes these cognitive limitations and produces internally consistent futures that are sometimes surprising, simply by filling in the judgment cells in a matrix and then running the model. The CIB method is therefore an extremely helpful tool in scenario development, since it overcomes the psychological and social limitations under which people tend to see familiar scenarios as more plausible (and potentially more likely) than they actually are. It neutralizes these biases by dividing the labor of scenario construction between human judgments and logical tests of the internal consistency of scenarios.
Altogether, as Solomon argued, methods such as those used in Intuitive Logics make what we call ‘marginalized information’ disappear: observations, data, and judgments, whether made unconsciously or made by particular individuals, that are not represented in the final group opinions or judgments in Intuitive Logics. The loss of such information is the fundamental reason that such methods are, demonstrably and in principle, epistemically inferior (Solomon 2006, pp. 36–37).
In addition, as discussed in Sects. 3.1 and 3.2, Intuitive Logics opposes, by design, the assignment of probability estimates to the various outcomes or futures. In contrast, by explicitly measuring the degrees of internal consistency of large numbers of scenarios, CIB potentially offers such an option for those to whom such information would be useful, especially in planning. Such facts about cognitive and social methods, as compared with mechanical methods like CIB, show that methods like CIB are epistemically superior simply in terms of information elicitation and processing, as we have reviewed. We have discussed, moreover, how such epistemic superiority and related features are associated with the various meanings of ‘objective’ and ‘objectivity,’ very significant concepts tied to scientific virtues that matter to the IPCC. For this variety of reasons, the IPCC is urged to prefer and adopt methods such as the CIB methodology for creating socioeconomic scenarios.