1 Introduction

It is a common, albeit not uncontroversial, practice to distinguish between epistemic, cognitive, and social values when scientific representations like theories and models are to be evaluated. For a long time, there has been debate on whether social values, e.g. prudential and moral considerations, can legitimately enter into the evaluation of a scientific representation. In this paper, I want to show that it is just as important to distinguish the functions of cognitive values like simplicity, broad scope, explanatory power, and easy handling from the function of familiar epistemic values like empirical accuracy and consistency. Cognitive values are properties of scientific representations that result from the idealization which is involved in the construction of a scientific representation.

In more concrete terms, I defend the claim that the aim of evaluating a scientific representation consists in providing information not only about the credibility but also about the usefulness of the representation. In my conception, assessments of usefulness include the question of whether the representation is relevant to investigating a certain problem, i.e. a given hypothesis in its context, and the question of how practicable it is to use the representation. The usefulness of a representation is thus relative to the problem and the user. Considering the relevance and the practicability of a representation is important both in basic and in applied research. One might object to distinguishing credibility from usefulness since a representation that lacks credibility seems not very useful. Still, whether a representation is relevant to a task like describing, explaining, or predicting a certain phenomenon depends on its structure, i.e. on the properties that result from idealization. Hence, cognitive values contribute to the relevance of a representation to investigating a problem. They are also important regarding aspects like the easy handling of a numerical model or the time needed to run a simulation, which, together with further properties, make up its practicability for users.

For structuring the relation between specific functions of epistemic, cognitive, and social values on the one hand and credibility and usefulness as their rationales on the other hand, I distinguish between first-order, second-order, and auxiliary functions. Values used in a first-order function provide information about how a scientific representation performs in terms of credibility and usefulness when applied to the problem in hand, i.e. to the hypothesis and the context to be investigated by means of the representation. Values used in a second-order function specify and weigh values with a first-order function relative to the given problem whereas values used in an auxiliary function facilitate the application of values with a first-order function. Values for assessing credibility and usefulness are both applied in a first-order function. Values which specify or weigh values with a first-order function for the problem in hand and for the user perform a second-order function. Values which simply facilitate the application of other types of values in credibility or usefulness assessments serve an auxiliary function.

The focus of this paper is on rationales for cognitive values. The paper is structured as follows: Sect. 2 provides some conceptual clarifications and introduces a conceptual framework that structures uses of values in the evaluation of scientific representations. This serves as a basis for discussing the specific functions of cognitive values and their relations to the functions of epistemic and of social values. Section 3 sets out why it is reasonable to distinguish assessments of the usefulness of scientific representations as a rationale for cognitive values with a first-order function from credibility assessments. Thereafter, Sect. 4 deals with relevance assessments. It points out that cognitive values, because they are abstract, need to be specified relative to the problem under investigation. Section 5 looks at the practicability of a representation and distinguishes between the first-order function of cognitive values in practicability assessments and the auxiliary function of cognitive values in facilitating credibility assessments. Here, I also refer to the second-order functions of social values, i.e. to specifying and weighing cognitive and epistemic values with respect to the problem under investigation. Section 6 comments on two general implications for evaluations of scientific representations, while Sect. 7 provides a summary and draws a conclusion.

2 Types of Values and Their Functions

Legitimate and illegitimate applications of values have been discussed in connection with various problems concerning the justification of scientific results, among them the inductive risk involved in accepting empirical hypotheses (Rudner 1953), theory choice (Kuhn 1977), and the evaluation of computer models and simulations (Intemann 2015). Sometimes, I shall refer to these debates individually and sometimes collectively. In the latter case, I use “scientific representation” (“representation” for short) as an umbrella term for hypotheses, theories, and models.

The debate on values in the assessment of scientific representations involves a variety of distinctions and differing uses of terms, which calls for some clarifications. To start with, even the use of the term “value” itself is ambiguous. The reason for this ambiguity can be found in the structure of the activity that results in a value judgment. When someone evaluates or assesses something, (1) a standard or criterion (value1)—e.g., accuracy, consistency, or scope—is used to assess (2) the performance or value-relation (value2) of (3) some valuable object (value3)—e.g., theories. A theory (value3), for instance, may work better (value2) than an alternative one in terms of simplicity (value1), or it may meet a certain threshold (value2), e.g. regarding empirical accuracy (value1). This paper focuses on values in the sense of standards or criteria for assessment. I call any kind of consideration that is used as a standard or as a criterion in an evaluative judgment a “value,” “standard,” or “criterion”. Thus, I use “value” as a functional concept.

I adopt a narrow use of the term “epistemic value” for values like empirical accuracy, robustness, and consistency. The extent to which a value of this type is instantiated by a theory warrants the belief in the theory, be this the assumption that the theory is true, empirically adequate, or confirmed. Hence, epistemic values serve the purpose of assessing credibility.

Furthermore, I use the term “cognitive value,” which frequently refers to properties of scientific representations like comprehensiveness, simplicity, spatiotemporal resolution, or broad scope (e.g., Douglas 2009, 2013; Laudan 2004; Longino 2004). Cognitive values result from the idealization which is involved in the construction of a scientific representation. Some authors dispense with the distinction between epistemic and cognitive values and, instead, regard the terms “epistemic value” and “cognitive value” in a broader sense as synonyms for “scientific value” (Kuhn 1977; Hempel 1988/2000). What epistemic and cognitive values have in common is that they are characteristic of what some have called the aim of science (Hempel 1988/2000, 216) or a good theory (Kuhn 1977, 321 f.; Laudan 2004, 19). It seems reasonable, however, to draw an explicit distinction. In contrast to epistemic values, the extent to which a cognitive value, such as complexity, is instantiated by a representation does not, by itself, warrant belief in the representation. It may, however, affect the degree of empirical accuracy and hence the credibility of the representation. Used in a first-order function, cognitive values serve the purpose of assessing the usefulness of a representation. Although epistemic and cognitive values are conceptually different, their simultaneous application as values with a first-order function in a multi-criteria assessment may lead to value conflicts. This is not the case if cognitive values are used for facilitating the determination of the extent to which a representation instantiates epistemic values, which is an auxiliary function.

Finally, I distinguish cognitive values from considerations relating to societal goals such as economic or ethical ones—e.g. efficiency or justice—and call them “social values.” Thus, I do not use “value” as a synonym for what I call “social values,” as proponents of value-free science do (e.g., Betz 2013). Also, I do not agree with the argument that one cannot reasonably distinguish social from other values because the use of any value is based on social conventions (Barnes and Bloor 1982; Steel 2010, 22 f.). This argument uses “social” to highlight that intersubjective agreement provides the normative grounds for regarding certain considerations as values while I use “social” for a certain kind of consideration. In Sect. 5, I shall argue for restricting the legitimate use of social values to a second-order function. A second-order function consists in specifying and weighing values with a first-order function relative to the problem and its context which are investigated by means of the representation. A case in point would be values used for specifying the level of significance of empirical results which is required for the acceptance of a general hypothesis and its subsequent practical application (see Sect. 5). Table 1 presents the conceptual framework that structures the functions of different types of values in a matrix.

Table 1 Matrix for structuring legitimate uses of values in the evaluation of scientific representations

In the remainder of this paper, I discuss the specific functions of cognitive values in more detail and set out how they relate to the functions of epistemic and social values when scientific representations are to be evaluated.

3 Usefulness of a Scientific Representation

Considering usefulness is common practice in the evaluation of computer models. Models are constructed as representational tools for investigating a hypothesis about the target system. They are idealized representations of their target, i.e. deliberate simplifications which distort features of their target (McMullin 1985). In his review of earth system models, Flato (2011, 797) describes this practice as follows: “Of course not everything that is known, nor all the detailed knowledge that is available about each of the individual processes involved, can be included in […] a model. Compromises and approximations must be made in order to yield a model that is simple enough to be run on available supercomputers, yet is comprehensive enough, and of high enough resolution, to provide useful and reliable results. The choices regarding just what compromises to make, what processes will be included, which will be neglected, the complexity to be retained … these constitute the ‘art’ of climate modeling and they rely on the scientific judgment of an interdisciplinary team of researchers.” In this quote, Flato emphasizes the twofold aim of an evaluation. On the one hand, a model should be a reliable tool that produces credible results, and, on the other hand, it should be useful. Usefulness is specified by a set of cognitive criteria that already need to be considered in the construction of a model and its implementation on a computer.

For a long time, the aims of a scientific evaluation of theories were looked at solely from an empiricist perspective and restricted to the assurance of credibility. Two critiques of the empiricist philosophy of science have paved the way for acknowledging the assurance of usefulness as a proper aim as well. Both start from the observation that typical scientific practice considers additional values like cognitive values in theory choice.

One critique of the empiricist philosophy of science refers to the so-called underdetermination thesis or Duhem–Quine thesis. This thesis claims that theories are underdetermined by the relevant data because those data are in principle also compatible with other theories or models that are incompatible with the ones under consideration. This is interpreted as a gap and a deficiency in the justification of theories. Therefore, the argument goes, it is legitimate to fill this gap with further criteria in order to decide whether a theory suits the intended purpose (McMullin 1983, 14). According to Longino (2002, 127), these additional criteria are derived from substantive content-related and methodological background assumptions, and they also “concern what we might call the form of knowledge […]. These [additional criteria] include such properties of theories or models as simplicity and unification” but also social values and political interests (see also Longino 2004, 131–136; Douglas 2009, 96).

Pointing to scientific practice as such, however, does not answer the normative question of whether it is justified to use these cognitive (and social) values in scientific assessments. Claiming that they fill a gap does not provide a justification for why cognitive values—in general as well as the specific ones used—should be appropriate for this task. Explanations of why and how these additional criteria should count in the assessment of theories have to rest on other reasons. According to Carrier (2011, 203), the “non-empirical criteria uncover the features of experience we consider worth knowing.” While this answer leaves open why these features of experience are worth knowing, it indicates that the legitimacy of cognitive values in assessments of scientific representations must be considered independently of the issue of underdetermination (Hirsch Hadorn and Baumberger, forthcoming).

A different critique of the empiricist philosophy of science has been put forward by Kuhn (1977), who takes the typically used values as a set of standard criteria which characterize what is deemed to be a good theory. According to Hempel (1988/2000, 216), these values can be regarded as desiderata which jointly characterize the aim of science. Kuhn (1977, 322) too refers to scientific practice for claiming that the “five characteristics–accuracy, consistency, scope, simplicity, and fruitfulness—are all standard criteria for evaluating the adequacy of a theory. […] Together with others of much the same sort, they provide the shared basis for theory choice.” While Kuhn does not provide normative reasons for justifying scientific practice, Hempel (1988/2000, 216) argues that these criteria taken together “reflect a profound and widely shared human concern whose satisfaction is the principal goal of scientific research–namely, the formation of a general account of the world which is as accurate, comprehensive, systematic and simple as possible and which affords us both understanding and foresight.” Like Kuhn, Hempel (1988/2000, 226) rejects taking the empiricist criteria to be superior to the other criteria in principle because he regards the goal of science not as a search for truth but rather as a search for epistemically optimal theories, which is an ideal characterized by all the pertinent criteria.

Still, viewing epistemic and cognitive values prima facie on a par does not contradict the introduction of a distinction between their functions if this seems to be required. Suárez (2004, 776) pointed out that a representation of a target may fail in two ways: it may be inaccurate and it may be “mistargeted for intended use,” meaning that the target is misrepresented in a way that is relevant to investigating the hypothesis about the target in question. This is the case when a representational model ignores aspects that are relevant to the behavior of the target to be investigated by means of this very model. Contessa (2014, 130) distinguishes between the faithfulness of a representational model, which depends on how the target is idealized, and its predictive or explanatory success, noting that “faithfulness and successfulness need not go hand in hand.” This is for instance the case if several misrepresentations of features of the target in a model compensate for each other and, as a consequence, lead to accurate results. Hence, Giere (2004) has advocated a procedure which assesses a model specifically in relation to its adequacy for the given purpose, that is to say for investigating a particular hypothesis. The question of whether a model is adequate for the intended purpose is at the center of an ongoing debate on the evaluation of climate models (for an overview see Baumberger et al. 2017, 3–5). Parker’s (2009, 239) concept of “adequacy for purpose” refers to the kind of empirical evidence and to the degree of accuracy of simulation results, both of which are required when a model is supposed to serve a certain purpose like predicting. I, by contrast, prefer the term “relevance” to point to the importance of cognitive values.

Furthermore, in order to assess the relevance of a representation with respect to a particular problem, the abstract cognitive criteria need to be specified accordingly. Since this can be done in several ways, the choice of how criteria are specified for application is in need of justification. Rochefort-Maranda (2016), for instance, has described five ways in which the criterion “simplicity” is involved in model selection. Referring to different goals like devising a good predictive model or constructing a model under given computational and time constraints, he argues for a view “according to which different goals will justify the importance of different notions of simplicity” (Rochefort-Maranda 2016, 269). He gives examples not only for what I call the relevance of the cognitive values of a representation in investigating a given problem but also for the practicability of a representation for a user. I discuss both as two dimensions of the usefulness of a representation which depend on the properties of a representation that result from idealization.

4 Cognitive Values Assessing the Relevance of a Representation

Properties of a model are relevant to solving a given problem if they are necessary for addressing the problem appropriately. For instance, if the task is to project the impacts of climate change on biodiversity in European alpine forests 20 years ahead, which is a task in an applied context, the spatiotemporal grid of the model should have the requisite degree of resolution for simulating the development of biodiversity in these forests. Further examples are properties like simplicity of model structure, elegance of equations (e.g. symmetric equations), and explanatory power of functions, all of which are relevant to understanding the dynamics of the global climate system. If, however, the purpose of a model is to predict future regional climates, relevant properties include comprehensiveness, complexity, spatiotemporal resolution, easy handling of technical aspects, or explanatory power of functions that describe sub-grid processes to be parameterized.

Proposing a general account, I distinguish types of problems along two dimensions. Firstly, I distinguish between kinds of hypotheses to be investigated with the help of a representation, e.g. descriptions, explanations, or predictions. They are typically used for achieving further aims like the unification of theories or the manipulation of processes. I therefore, secondly, distinguish between different contexts in which these hypotheses are investigated, e.g. basic or applied research. Hypotheses may be distinguished in further respects which pertain, among others, to the variables or events to be investigated, the temporal or spatial scales of interest, their specificity, and the allowed margin of error (Baumberger et al. 2017, 4).

If a property of a representation proves to be useful for investigating a particular problem, this is a sufficient condition for including this property as a cognitive value in the assessment of the relevance of the representation. A case in point would be the simulation of the development of biodiversity in alpine forests with the intention of yielding accurate information for forest policy. If, for instance, a spatiotemporal grid with high resolution is required for this purpose, the degree of resolution is a cognitive value that is relevant to the assessment of models to be used in such simulations. In connection with relevance, three aspects have to be distinguished. Modelers typically think of relevance in terms of what features of the target system need to be included in order to construct a representation that is appropriate to answer the question about the target. Still, questions of relevance also arise with regard to how what needs to be represented is in fact represented: what is an appropriate idealization of the target if the model is to be used for investigating a given hypothesis such as regional long-term predictions of certain variables? Besides, issues of relevance also arise in connection with empirical data and relate to the question of whether the data available are relevant to a certain hypothesis, i.e. whether the data contribute to the confirmation of the hypothesis (Peschard and van Fraassen 2014; Intemann 2015).

Moreover, it should be kept in mind that while assurance of credibility and assurance of usefulness are two conceptually different aims, assessments and improvements concerning the extent to which representations instantiate the pertinent values are often not independent of one another. Cartwright’s (2006) proposal for assessing the credibility of results in applied contexts with regard to whether this is appropriate “evidence for use” is an example of how relevance needs to be taken into consideration. Cartwright (2012) argues that hypotheses which relate to practical purposes have to account for the causal complexity and the variability of those conditions which are at work in specific contexts of application. These requirements pertain to the properties of the hypotheses that are relevant to investigating the case in hand. Standardized controlled trials necessitate idealizations and, therefore, cannot account for important features of a concrete context and its specific complexity. Hence, hypotheses which have been well-confirmed in standardized controlled trials cannot be assumed to be valid in and applicable to a certain real-world context without further investigations into their relevance (see, e.g., Shrader-Frechette 1997; Cartwright and Munro 2010; Carrier and Finzer 2011).

One of the few systematic analyses of values in the context of relevance assessments is Weisberg’s (2007, 2013) account of different kinds of idealization and representational ideals. A representational ideal, according to Weisberg, consists of inclusion rules, which determine what aspects of the target should be represented by the model, and fidelity rules, which specify the requisite degree of accuracy (Weisberg 2013, 105 ff., 135 ff.). Weisberg differentiates five representational ideals. For the purpose of illustration, I briefly mention two of them, namely COMPLETENESS and SIMPLICITY. The inclusion rules for COMPLETENESS state that every property of the target system and the causal relationships between these properties as well as their exogenous causes must be included while the fidelity rules stipulate that all properties and exogenous causes must be represented “with an arbitrarily high degree of precision and accuracy” (Weisberg 2007, 649; see also 2013, 106). The inclusion and fidelity rules for SIMPLICITY call for a minimal inclusion of properties, which nevertheless have to ensure a qualitative fit between model and target (Weisberg 2007, 650; see also 2013, 107). Weisberg assumes that different intended uses—e.g. explanation or prediction (Weisberg 2007, 635)—or considerations of practicability like simple handling (Weisberg 2007, 641) require different representational ideals which then guide different kinds of idealization.

5 Cognitive Values in Assessing the Practicability of a Representation for a User

How practicable it is to use a representation in investigating a hypothesis depends on the properties of the representation considered relative to the knowledge and the infrastructure available to the users as well as on their interests. For instance, the knowledge and the infrastructure of a research group may influence decisions like those as to what physical processes are to be included in a climate model and how this is to be achieved (Parker 2014, 27). As a matter of fact, however, practicability is typically discussed with respect to applications in practice where the needs and, again, the interests of users—that is to say social values—may influence decisions concerning scientific methodology. Elliott and McKaughan (2014, 5), for instance, list the following considerations: “‘Is it easy enough to use this model?’, ‘Is this hypothesis accurate enough for our present purposes?’, ‘Can this theory provide results in a timely fashion?’ and ‘Is this model relatively inexpensive to use?’.” They advocate the position that non-epistemic values, including cognitive and social values, may legitimately trump empirical accuracy in applied contexts. According to the authors, these trade-offs are legitimate if the application of non-epistemic values serves the goal of the assessment, and if this goal and the function of non-epistemic values are made explicit (Elliott and McKaughan 2014, 15). This account seems questionable, however, because if a model is supposed to fit the needs or the interests of users, which include social values like efficiency, the function of social values consists in specifying and weighing the relevant cognitive values (e.g., easy handling and yielding results within a short time; see also Intemann 2015, 230; Steel 2010, 27). Therefore, the proposed legitimate function of social values is the second-order function of specifying and weighing cognitive values with a first-order function.

The question of whether scientists should use social values in the second-order function of specifying and weighing epistemic values is at the center of a prominent and still ongoing debate in the context of what has become known as “inductive risk” (Hempel 1960/1965, 92; for an analysis of the debate on inductive risk see Wilholt 2009). Generalizations of empirical findings from a sample involve a twofold risk of error. The risk of false positives consists in accepting a hypothesis which does not hold true in general whereas the risk of false negatives, conversely, consists in rejecting a hypothesis that holds true in general. Rudner (1953, 2) and others have claimed that accepting or rejecting a hypothesis is a task of the scientist qua scientist and that scientists, when doing so, should consider potential societal consequences of errors, giving due regard to ethical criteria. The debate has been extended to further issues of scientific methodology, e.g. to considerations pertaining to what should count as relevant evidence and to how to structure and classify the data in order to account for the social consequences of errors (Douglas 2000, 559). To such deliberations it has been objected that the task of the scientist qua scientist is not to accept hypotheses but rather to characterize their uncertainty (Betz 2013). This, however, merely pushes the problem back one step, to the question of whether social values can really be excluded from uncertainty characterizations (Steele 2012).
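The trade-off at the heart of inductive risk can be made vivid with a small numerical sketch. The following Python snippet is purely illustrative: the one-sided test on a sample mean, the effect size, the sample size, and the two thresholds are all invented for the example, and the Monte Carlo estimate stands in for a proper power analysis.

```python
import random

# Illustrative Monte Carlo sketch of inductive risk: a one-sided test on
# a sample mean, where the choice of acceptance threshold trades off
# false positives against false negatives. All numbers are invented.

def error_rates(threshold, n=50, trials=2000, effect=0.3, seed=0):
    rng = random.Random(seed)
    false_pos = false_neg = 0
    for _ in range(trials):
        # World A: the hypothesis is false (true mean 0); accepting it
        # whenever the sample mean exceeds the threshold is a false positive.
        if sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n > threshold:
            false_pos += 1
        # World B: the hypothesis is true (true mean = effect); rejecting
        # it when the sample mean stays at or below the threshold is a
        # false negative.
        if sum(rng.gauss(effect, 1.0) for _ in range(n)) / n <= threshold:
            false_neg += 1
    return false_pos / trials, false_neg / trials

# A stricter threshold lowers the false-positive rate but raises the
# false-negative rate. Which point on this curve to pick is exactly the
# decision that, on Rudner's view, calls for weighing societal costs.
lenient = error_rates(threshold=0.10)
strict = error_rates(threshold=0.25)
```

The point of the sketch is only that no threshold eliminates both risks at once; any choice distributes them, which is why the debate turns on who is entitled to make that choice and on what grounds.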

There have been proposals to use cognitive values in an auxiliary function, which means that cognitive values facilitate the determination of how well a representation instantiates epistemic values. Elliott and McKaughan (2014, 2) use the term “secondary consideration” for what I call “auxiliary function.” Contrary to uses in a first-order function, cognitive values with an auxiliary function cannot conflict with epistemic values. Steel (2010, 18), for instance, calls cognitive values that are used for “promot[ing] the attainment of truth without themselves being indicators or requirements of truth” “extrinsic epistemic values” and distinguishes them from intrinsic epistemic values, which are themselves indicators or requirements of truth. Broadening the scope of a theory, for instance, may be a means of minimizing error, or a technically simpler model may allow a more efficient handling of the model for test purposes. Douglas (2009, 107; 2013, 800) largely agrees with Steel on the use of cognitive values in this auxiliary function. While the auxiliary function of cognitive values is a sufficient condition for a legitimate use of cognitive values, it is not a necessary condition because that would exclude first-order functions of cognitive values.

6 General Implications for Evaluations of Scientific Representations

The assumption that cognitive values can perform the first-order function of evaluating usefulness has several implications for an account of how scientific representations can be evaluated. Here, I just want to highlight two general points. Firstly, since which values are suitable and how they are to be applied depends on the type of problem, the usefulness of a representation is relative to the given problem, i.e. to the kind of hypothesis and the context of the investigation. Hence, there is no good representation as such or in general but only relative to the type of problem which is up for investigation. Properties like complexity, high resolution, and easy handling, for instance, may be required for a model-based prediction of regional climate impacts but not for explaining how the global climate system works.

Secondly, in order for an evaluation of scientific representations to qualify as a procedure with epistemic objectivity, it is in need of an explicit systematic structure which guides the application of pertinent values to representations. Such a structure should encompass (1) a specification of the abstract values relative to the problem to be investigated, (2) the determination of the extent to which a representation instantiates each of these values, and (3) a weighting of the pertinent values relative to the problem so as to decide on trade-offs in case of conflicting values.
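The three steps of such a structure can be sketched schematically. The following Python fragment is a toy illustration, not a proposal for an actual evaluation procedure: the values, the metrics that specify them, the weights, and the model data are all invented for the example.

```python
# Toy illustration of the three-step structure: (1) specify abstract
# values for the problem, (2) determine how far a representation
# instantiates each, (3) weigh them to settle trade-offs. All
# specifications, weights, and model data are invented.

# Step 1: specify the abstract values as concrete, problem-relative metrics.
specified_values = {
    "empirical accuracy": lambda m: m["hindcast_skill"],            # already in [0, 1]
    "resolution":         lambda m: min(1.0, 10.0 / m["grid_km"]),  # finer grid scores higher
    "easy handling":      lambda m: min(1.0, 1.0 / m["runtime_hours"]),
}

# Step 3 (prepared here): problem-relative weights. A regional prediction
# task might weight resolution heavily; an explanatory task would not.
weights = {"empirical accuracy": 0.5, "resolution": 0.3, "easy handling": 0.2}

def evaluate(model):
    # Step 2: the extent to which the model instantiates each specified value.
    scores = {v: f(model) for v, f in specified_values.items()}
    # Step 3: weigh the values against each other into an overall assessment.
    overall = sum(weights[v] * s for v, s in scores.items())
    return scores, overall

model_a = {"hindcast_skill": 0.8, "grid_km": 25.0, "runtime_hours": 2.0}
scores_a, overall_a = evaluate(model_a)
```

A different problem would call for different specifications and different weights; on the account defended here, fixing these problem-relatively is precisely what the second-order function consists in.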

Both Hempel and Kuhn object to the idea of using algorithmic procedures for establishing the extent to which alternative theories instantiate epistemic and cognitive values and for determining their comparative weightings for theory choice. They regard this approach as misleading. Instead, they opt for intersubjective agreement on expert judgements about how to apply the values, i.e. on how to specify and weigh them (Kuhn 1977, 330; Hempel 1988/2000, 221). Laudan (1984, 62–68) has argued for a systematic procedure and proposed what he calls a “reticulated” approach which places scientific justification in a triadic network that includes aims like those characterized by scientific values as well as theories and methods that mutually depend on each other.

Moreover, the potential of decision theory has also been explored in connection with the assessment of scientific representations. Gaertner and Wüthrich (2015), for instance, propose scoring rules for determining the extent to which pertinent criteria are instantiated by a model, which provides the basis for inter-criteria comparability. Aggregated scores can thereafter be used for establishing a weak order of alternative models. These scoring rules can be invoked to account for the importance of values in a given research field. I call this the relevance of a model regarding a problem. It thus seems that there is a way to determine formally which of the considered models works best for a given problem if all criteria are taken into consideration.
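A minimal sketch of such a scoring procedure might look as follows. It is not a reconstruction of Gaertner and Wüthrich's actual proposal; the criteria, the scoring rules, the weights, and the model data are invented for illustration.

```python
# Illustrative scoring-rule aggregation: each criterion's raw measurement
# is mapped onto a common [0, 1] scale, weighted scores are aggregated
# and rounded to coarse grades, and models with equal grades tie,
# yielding a weak rather than a total order. All data are invented.

def score(raw, lo, hi):
    # Scoring rule: map a raw measurement into [0, 1] for inter-criteria
    # comparability (clipped at the endpoints).
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

criteria = {  # criterion -> (lo, hi, weight reflecting its importance)
    "accuracy":   (0.0, 1.0, 0.6),
    "resolution": (0.0, 100.0, 0.4),
}

def aggregate(raw_measurements):
    total = sum(w * score(raw_measurements[c], lo, hi)
                for c, (lo, hi, w) in criteria.items())
    return round(total, 1)  # coarse grades: equal grades count as ties

models = {
    "M1": {"accuracy": 0.82, "resolution": 50.0},
    "M2": {"accuracy": 0.78, "resolution": 60.0},
    "M3": {"accuracy": 0.60, "resolution": 30.0},
}
grades = {name: aggregate(m) for name, m in models.items()}
# Weak order: M1 and M2 end up in the same grade and tie, while both
# outrank M3.
ranking = sorted(grades, key=grades.get, reverse=True)
```

Note that once grades are formed, the per-criterion profile of each model is discarded; only the aggregate survives into the ranking.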

However, when scores are aggregated for ranking purposes, information about the extent to which each of the values is instantiated by a particular model gets lost or remains hidden. The assessment of computer models and simulations is often regarded as an iterative process that consists of model construction, application of the model to the problem in a simulation, and evaluation of the results for improving the model. In this regard, paying attention to the extent to which each of the values is instantiated by a model may be what is required to avoid potential misrepresentations or inaccuracy.

7 Conclusion

In this paper, I have defended the claim that evaluating a scientific representation should aim to provide information about the credibility and about the usefulness of a representation. Cognitive values like simplicity, broad scope, or easy handling are properties of scientific representations that result from the idealization which is necessary whenever a representation of the target is constructed. Cognitive values provide information about the usefulness of a representation. Usefulness includes the relevance of the representation to investigating a hypothesis in a given context (i.e. a problem) as well as the practicability for a user who intends to work with the representation. As for the problems to be investigated, I draw a distinction between different types which is based on the kind of hypothesis (e.g., description, explanation, or prediction), and on the context in which the problems are dealt with (e.g., basic or applied research). Since usefulness and credibility of a scientific representation are always relative to a given problem (i.e. the type of hypothesis and its context) as well as to the user, there is no good theory or model as such.

Cognitive values which provide information about the usefulness of a representation and epistemic values like accuracy and consistency which provide information about its credibility both perform a first-order function and may thus conflict. When cognitive values are used in an auxiliary function, by contrast, they facilitate the application of epistemic values so that value conflicts cannot arise. As regards social values, there are no conflicts with cognitive values if social values are used in the second-order function of specifying the cognitive and the epistemic values relative to the problem under investigation.

Cognitive and epistemic values may be abstract, ambiguous, and vague, however, and they may be in need of appropriate specification and weighing in case of value conflicts. Specifying and weighing cognitive and epistemic values with respect to a given problem and applying these values to the theory, model, or hypothesis in question involves a variety of complex normative problems so that I can only briefly touch on some of them. Besides a clarification of the issues of ambiguity and vagueness of values, a detailed analysis is called for as regards the conceptual interdependence of values and the trade-offs between the varying extents to which a representation instantiates pertinent values. These few examples should suffice to indicate that the aim of assuring objectivity requires much more work on the complex conceptual and normative questions that are connected with the application of values in the evaluation of scientific representations.