Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing and persistently recalcitrant problem, often called the frame problem and sometimes the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity.
The more specific difficulty originally called the frame problem in early logic-based artificial intelligence is described by Shanahan (2004) as “the challenge of representing the effects of action in logic without having to represent explicitly a large number of intuitively obvious non-effects” (p. 1). This was just one instance of the generic problem that nowadays is usually called the frame problem.
We speak as though there are two distinctive kinds of features—descriptive and moral-normative—merely for convenience. We mean to be neutral in this paper about whether there are moral properties at all, and also about whether, if there are such properties, they are identical to certain properties characterizable in non-normative language. Also, it should be noted that according to the conception of moral normativity just described, although an overall system of moral normativity might include certain general principles linking so-called ‘thick’ moral features like moral justice to ‘thin’ moral features like moral obligation—e.g., W. D. Ross’s principle that there is a prima facie duty to behave in a manner that accords with moral justice—nonetheless any such moral principles would have to be derivable themselves from more fundamental, exceptionless, principles that link purely descriptive features to (thick and/or thin) moral features.
The expression ‘tractably computable’ is a term of art in fields like computer science, artificial intelligence, and cognitive science. It means ‘computable by a physical computational device of the relevant kind’. This makes the expression both somewhat context-dependent (since the relevant kind of computational device varies with context) and vague (since in general it is not clear exactly what functions can be computed by a particular kind of computational device). In the present context, tractable computability essentially means computability by a physical device with roughly the same physical structure and roughly the same resource-capacity as a human brain.
For further elaboration of the notion of morphological content, and for discussion of how such content could get structurally embodied within cognitive systems whose temporal evolution is described formally in terms of the high-dimensional mathematics of dynamical systems theory, see Horgan and Tienson (1996).
This is not to deny that within classical computational systems too, information can get implicitly accommodated rather than being explicitly represented. Indeed, at least some of the rules executed by a computational system need to be “hardwired in,” even though usually various specific items of information (often including certain more specific rules) do get explicitly represented.
The distinction between general-purpose rules and special-purpose rules is orthogonal to the distinction between rules that are explicitly represented and rules that are automatically executed by virtue of a system’s architecture (its “hard-wiring”). Some rules need to be executed architecturally in a computational system, whereas others can be explicitly represented and then explicitly consulted during processing. But wired-in rules can be special-purpose (as they often are in special-purpose devices like hand-held calculators) or general-purpose, and explicitly represented rules also can be either special-purpose or general-purpose.
The term ‘implementation’ is used in computer science, artificial intelligence, and cognitive science in much the same way that the term ‘realization’ is used in philosophy of mind. The idea is that normally there are multiple distinct ways in which states or processes, as characterized at some “higher” level of description, can be carried out by states or processes characterized at some “lower” descriptive level.
Note that rules can be exceptionless without being deterministic. For instance, the rules could instruct a system to “throw the dice” in certain specific situations (e.g., by consulting a table of random numbers), and could specify different follow-up actions for different potential outcomes of the dice-throw.
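The structure of such a rule can be sketched in code. This is a minimal illustration, with the situation name, the outcomes, and the follow-up actions all invented for the example (a pseudo-random draw stands in for consulting a table of random numbers):

```python
import random

# Hypothetical illustration: an exceptionless yet indeterministic rule.
# In the (invented) situation "tie", the rule always mandates a
# dice-throw, and maps each possible outcome to a definite follow-up.
FOLLOW_UPS = {
    1: "choose-left", 2: "choose-left", 3: "choose-left",
    4: "choose-right", 5: "choose-right", 6: "choose-right",
}

def apply_rule(situation):
    if situation == "tie":
        outcome = random.randint(1, 6)  # the "dice-throw"
        return FOLLOW_UPS[outcome]      # every outcome has a fixed follow-up
    return "proceed-normally"           # all other situations: one fixed action
```

The rule admits no exceptions—every “tie” situation triggers the dice-throw, and every outcome is covered—yet which follow-up action results is not determined in advance.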
Larger issues loom in the vicinity of this one, however, and are not merely verbal. Among them are these: (1) Do those who embrace labels like “the computational theory of mind” make the foundational assumption, at least implicitly, that human cognition conforms to PRL rules? (2) How good is the evidence for the sociological claim that they do make this assumption? Our own view is that the actual practice of researchers in cognitive science and in artificial intelligence is ultimately more important in addressing such questions than are the answers the researchers might give if queried about their own foundational assumptions—and that actual practice reflects a strong (if sometimes only implicit) commitment to PRL rules. Suppose, however, that many practicing researchers doing computational modeling would say “We don’t assume that cognition conforms to PRL rules, because our models deploy default rules that then get implemented computationally.” This would only mean that many researchers are somewhat confused about the underlying assumptions that inform their actual model-constructing practice.
One might think that the kinds of connectionist networks that are simulable by standard computers—networks that update in discrete time-steps, have discrete node-activation values, and are discrete in other pertinent ways too—could not implement non-computational cognitive processing. But that would be a mistake, because conformity to programmable rules is a property that need not “transfer upward” from the subcognitive level to the cognitive level; cf. Horgan and Tienson (1996), Chapter 4, section 4.4, and also Horgan (1997b).
One could allow for different forms of cognitive competence, capturable by different ECP’s; and one could allow the ECP to be indeterministic, with some items on the right of the arrow being disjunctions of TCS’s; but we ignore such complications for simplicity.
Actually, the moral as so expressed is logically stronger than the earlier moral that human cognitive processes are non-computational. For, the earlier moral amounts to the assertion that the ECP cannot be fully systematized by any tractably executable exceptionless general principles expressible as a computer program. This assertion is logically consistent with the thesis that the ECP nonetheless can be fully systematized by certain exceptionless general rules collectively constituting a computer program that is not tractably executable. However, the stronger moral too seems well supported abductively by the frame problem, as follows. First, nobody has any idea how the Quineian/isotropic features exhibited in human cognition could be managed via any kinds of exceptionless, general, programmable rules—regardless of whether or not the rules in question are tractably implementable. Second, the best available explanation for this fact is that the Quineian/isotropic features exhibited in human cognition cannot be so managed—not by general programmable rules that are tractably implementable, and not even by general programmable rules that are not tractably implementable.
It is worth noting that the ECP could be a computable function, in the mathematical sense described by Church and Turing, even if the ECP is not tractably computable—i.e., even if it does not conform to any set of rules which both (i) are expressible as a computer program, and (ii) can be executed by a physical system with physical resources roughly comparable to those of a human brain. For instance, the ECP would be a computable function in the Church/Turing sense as long as it is finite, even if the number of state-transitions it includes is gargantuan. This is because the full set of concrete state-transitions constituting the ECP would itself count as a set of programmable rules (by virtue of being a finite set). Yet, since each of these rules would be completely specific in content—would be a single entry in a “lookup table”—these kinds of rules would lack any form of generality. Such non-general transition-rules therefore would not systematize the ECP, but instead would merely collectively constitute the ECP.
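The lookup-table point can be made concrete with a schematic sketch; the state names and transitions below are invented placeholders, and the table imagined in the text would of course be gargantuan:

```python
# Schematic illustration: a finite "ECP" written out as a lookup table.
# Each entry pairs one fully specific total cognitive state (TCS) with
# its successor; each entry is thus a maximally specific rule with no
# generality at all. The table does not systematize the state-transition
# function -- it merely constitutes it.
ECP_TABLE = {
    "tcs-0001": "tcs-0002",
    "tcs-0002": "tcs-0007",
    "tcs-0007": "tcs-0001",
    # in the imagined case, a gargantuan number of further entries
}

def next_state(tcs):
    # A pure table lookup: no general rule is consulted.
    return ECP_TABLE[tcs]
```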
These remarks about soft laws are closely related to the points we made in the final paragraph of section 3 above. If a cognitive system executes “default rules” that are not implemented in the system by classical refinement, then these rules are soft in the sense just specified.
Cognitive architecture might, for instance, be evolutionarily designed to deploy certain heuristics that sometimes generate biased judgments that fail to be epistemically appropriate. Work in cognitive science that is relevant to this possibility includes Kahneman and Tversky (1972), Kahneman, Slovic, and Tversky (1982), Kahneman and Frederick (2005), Gigerenzer and Selten (2002), Gigerenzer (2007).
We speak of human epistemic normativity because we find it plausible that what counts as epistemically appropriate belief-formation for a given kind of creature depends in part on what kinds of belief-forming processes that creature is capable of executing. Epistemic competence constrains epistemic normativity, whether or not the two coincide.
For possible examples of such failures, see Haidt (2001) and Greene (2007). Haidt points to cases in which subjects are morally ‘dumbfounded’ in not being able to give reasons for the putative wrongness of certain actions that they immediately judge to be wrong (e.g., harmless brother-sister incest). Greene argues that moral intuitions associated with deontology—e.g., retributive responses to wrong-doing—are (contrary to what defenders of this view suppose) largely a matter of emotion-based gut responses that can be traced to humans’ evolutionary past, and that reflect morally arbitrary differences. For a response to Greene’s critique of deontology, see Timmons (2007).
Value pluralists, on the other hand, can plausibly maintain that moral properties like net overall goodness/rightness are often subject to holistic, Quineian/isotropic, determinants—and thus that moral normativity itself is not fully systematizable by exceptionless descriptive-to-normative principles. Certain value monists can plausibly maintain this too—e.g., utilitarians of the kind who construe social utility as itself a moral-normative feature rather than a purely descriptive one.
For a cognitive-scientific defense of the idea of a universal “moral grammar,” see Hauser (2006). Of course, this idea is also compatible with (without presupposing) robustly objective moral normativity.
There might be certain innate cognitive-architectural features that influence moral-judgment formation in ways that the agent could not reflectively endorse (and that do not fit well with the rest of the agent’s subjective moral-normativity profile), but that persist anyway. Examples might include judgments of the kind mentioned in note 20 above, such as the intuitive judgment that harmless brother-sister incest is morally wrong, or certain types of deontological judgments reflecting retributivist reactions to wrong-doing.
Both W. D. Ross and A. C. Ewing accepted soft principles of this sort, though they used the term ‘prima facie’ instead of ‘ceteris paribus’ in articulating them. In addition, for each basic soft principle of rightness or duty, they accepted a corresponding exceptionless contributory principle. In discussing the soft principle of fidelity, Ross remarks: “It remains a hard fact that an act of promise-breaking is morally unsuitable in so far as it is an act of promise-breaking, even when we decide that in spite of this it is the act that we ought to do” (1939: 86, our emphasis; see also 134). Ewing is even clearer on the matter: “‘It is a prima facie duty not to tell lies’ does not mean ‘we ought usually to avoid telling lies’, but ‘that X would be a lie is always a valid reason, though not necessarily a conclusive reason, against saying X’” (1959: 110); “prima facie principles are still in a sense universal: the fact, e.g., that something gives pain is always a reason against doing it…” (1959: 126). One point we are calling attention to here is that embracing soft ceteris paribus principles does not commit one to corresponding exceptionless contributory principles; with a soft principle of rightness, one way in which cetera may fail to be paria is that the right-making feature mentioned in the principle fails in some instance to be relevant.
See, for instance, the “threshold default theory” of default reasons described in Horty (2007). This approach posits default generalizations with variable priorities assigned to them, where the priorities themselves have default values. In the context of a specific situation, the default priorities first get adjusted in light of the details of the situation, and then any default generalization whose post-adjustment priority is above a certain threshold gets “triggered.” The reason-providing default generalizations, in a specific situation, are the triggered ones. As Horty points out, on this account, cases of silencing can be accommodated despite the central reason-providing role played by default generalizations. Silencing occurs when an otherwise applicable default generalization (e.g., “Ceteris paribus, causing pleasure is a good-making characteristic of an action”) with an above-threshold default priority gets a below-threshold contextual priority, and hence does not get triggered. (Horty contests the putative cases of reversal advanced by Jonathan Dancy (1993, 2004). But genuine cases of reversal, if such there be, presumably would be a matter of a default generalization getting thus silenced, together with some additional attribute of the situation. Such an attribute might be the triggering of some other default generalization involving the feature in question, e.g., “Ceteris paribus, pleasure obtained from causing someone else pain is a bad-making characteristic.” Or the additional attribute might be the presence of a “reverse default generalization” that has a below-threshold default priority but in certain special contexts gets an above-threshold priority—e.g., “Lying is a good-making characteristic,” which could get an above-threshold priority in unusual situations such as games where lying is both acceptable and good strategy.) 
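The triggering mechanism just described can be rendered as a toy sketch; the threshold value, the priorities, and the adjustment scheme are all invented for illustration, and Horty’s own formal apparatus is considerably richer:

```python
# Toy sketch of a threshold-style default mechanism (all numbers invented).
THRESHOLD = 5

DEFAULTS = {
    "causing pleasure is good-making": 7,  # ordinarily above threshold
    "lying is good-making": 2,             # a "reverse" default, ordinarily dormant
}

def triggered(defaults, adjustments):
    """Return the generalizations whose contextually adjusted priority
    clears the threshold; the rest are silenced in this situation."""
    result = []
    for gen, default_priority in defaults.items():
        if default_priority + adjustments.get(gen, 0) >= THRESHOLD:
            result.append(gen)
    return result

# Ordinary context: only the pleasure default provides a reason.
print(triggered(DEFAULTS, {}))  # → ['causing pleasure is good-making']
# Sadistic-pleasure context: the pleasure default is silenced.
print(triggered(DEFAULTS, {"causing pleasure is good-making": -4}))  # → []
# Bluffing-game context: the reverse default gets an above-threshold priority.
print(triggered(DEFAULTS, {"lying is good-making": 4}))
```

On this rendering, silencing is simply a contextual adjustment that drops an otherwise above-threshold priority below the threshold, and a “reverse default” is one whose ordinarily below-threshold priority gets boosted above it in special contexts.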
Jackson, Pettit, and Smith (2000) also purport to show how a generalist moral theory—specifically, a utilitarian theory—can accommodate silencing and reversal, but unfortunately their putative examples turn on a failure to distinguish fundamental principles from derivative ones. (In their account, features like the pleasure that an action causes the agent can sometimes get silenced or reversed, but only because in the specific situation, the presence of such features diminishes the net overall utility of the action. Maximal overall utility is here being treated as a descriptive feature that (i) is normatively basic rather than derivative, and (ii) is not itself subject to silencing or reversal.)
Aristotle, who arguably repudiated the idea that morality is fully systematizable by exceptionless general principles, held that adultery is always morally vicious (Nicomachean Ethics 1107a 10-20).
Partially systematizing generalizations also could play important roles in moral judgment-formation, including in ways other than by being consciously rehearsed and consciously applied to concrete cases; cf. Horgan and Timmons (2007). In this connection, we note that Hubert and Stuart Dreyfus (1990) argue on phenomenological grounds that much spontaneous moral behavior (“comportment,” as they call it) occurs without any use of moral principles by the agent. But in our view this phenomenology-based conclusion is too quick, because (as argued in Horgan and Timmons 2007) there are good reasons to hold that spontaneous moral judgments often rest in part on moral principles that operate morphologically and implicitly in the cognitive system, rather than by being consciously rehearsed by the agent. (We call this view morphological rationalism.) Indeed, some of our arguments in support of morphological rationalism appeal explicitly to certain features of moral phenomenology itself.
McKeever and Ridge (2006, pp. 12-14) call this the truth-maker conception of moral principles. Note that the truth-maker conception is orthogonal to the dispute between moral realists and moral irrealists: an irrealist, using the truth predicate disquotationally and deflationarily, can endorse the truth-maker conception—i.e., can maintain that whenever a concrete moral statement is true, the explanation for its truth is that it follows from some true conjunction of moral principles and descriptive claims. Note too that this sort of authority should be distinguished from questions about the epistemic priority (‘authority’) of moral principles vis-à-vis particular cases in the order of justification or knowledge. This epistemic issue, as we are here understanding it, has to do with whether being justified in believing, or knowing, a specific moral claim requires being justified in believing, or knowing, a moral principle, which then serves as (or is part of) an epistemically justificatory basis for belief in the specific moral claim. It might be the case, for example, that one can be epistemically justified in holding a specific moral claim on the basis of concrete moral experience (and thus without one’s belief being based on moral principles), even if moral principles are what ‘ground’ in the sense of make true (together with relevant nonmoral facts) specific moral claims.
The principles might be self-evident truths or they might themselves be groundable by some further non-moral (though not necessarily non-normative) considerations.
One structural possibility concerning soft principles, analogous to the structure proposed by Horgan and Tienson (1990) for deductive explanations adverting to soft laws, is a deductive justificatory argument containing an explicit premise to the effect that cetera are paria vis-à-vis the soft principle(s) invoked in the argument.
We ourselves would maintain that there is plenty of grounding/justificatory work that moral principles in fact do, but that is not a claim we seek to defend here. For some defense, see Horgan and Timmons (2007).
For some useful clearing of the conceptual terrain by way of helpful distinctions, see Chapter 1 of McKeever and Ridge (2006).
Dancy (1999) says, in the course of replying (promptly!) to Jackson, Pettit, and Smith (2000), “Particularism is the claim that there neither is nor needs to be such a pattern, not the claim that every such pattern is uncodifiable” (p. 60). Others might choose to use ‘pattern’ in such a way that any basis for projectibility counts as a pattern.
Thanks to Michael Gill, Neil Levy, Shaun Nichols, and two anonymous referees for helpful comments.
Aristotle (2000) Nicomachean Ethics. Roger Crisp (transl and ed). Cambridge, Cambridge University Press
Dancy J (1982) Ethical Particularism and Morally Relevant Properties. Mind 92:530–547
Dancy J (1993) Moral Reasons. Oxford, Basil Blackwell
Dancy J (1999) ‘Can the Particularist Learn the Difference Between Right and Wrong?’. In: Brinkmann K (ed) The Proceedings of the Twentieth World Congress of Philosophy, vol 1: Ethics. Bowling Green, OH, Philosophy Documentation Center
Dancy J (2004) Ethics without Principles. Oxford, Oxford University Press
Dreyfus H, Dreyfus S (1990) ‘What Is Morality? A Phenomenological Account of the Development of Ethics Expertise’. In: Rasmussen D (ed) Universalism vs. Communitarianism. Cambridge, MA, MIT Press
Ewing AC (1959) Second Thoughts in Moral Philosophy. London, Routledge & Kegan Paul, LTD
Fodor J (1983) The Modularity of Mind: An Essay in Faculty Psychology. Cambridge, MA, MIT Press
Fodor J (2001) The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA, MIT Press
Fodor J, Pylyshyn Z (1988) ‘Connectionism and Cognitive Architecture’. In: Pinker S, Mehler J (eds) Connections and Symbols. Cambridge, MA, MIT Press
Gigerenzer G (2007) Gut Feelings: The Intelligence of the Unconscious. London, Viking Publishing
Gigerenzer G, Selten R (eds) (2002) Bounded Rationality: The Adaptive Toolbox. Cambridge, MA, MIT Press
Greene J (2007) ‘The Secret Joke of Kant’s Soul’. In: Sinnott-Armstrong W (ed) Moral Psychology, vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA, The MIT Press
Guarini M (2006) Particularism and the Classification and Reclassification of Moral Cases. IEEE Intell Syst 21(4):22–28 doi:10.1109/MIS.2006.76
Haidt J (2001) ‘The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment’. Psychol Rev 108:814–834 doi:10.1037/0033-295X.108.4.814
Hauser MD (2006) Moral Minds. New York, Harper Collins Publishers
Horgan T (1997a) Connectionism and the Philosophical Foundations of Cognitive Science. Metaphilosophy 28:1–30 doi:10.1111/1467-9973.00039
Horgan T (1997b) Modelling the Non-computational Mind: Reply to Litch. Philos Psychol 10:365–371 doi:10.1080/09515089708573226
Horgan T, Tienson J (1990) Soft Laws. Midwest Stud Philos 15:256–279 doi:10.1111/j.1475-4975.1990.tb00217.x
Horgan T, Tienson J (1994) A Nonclassical Framework for Cognitive Science. Synthese 101:304–345 doi:10.1007/BF01063893
Horgan T, Tienson J (1996) Connectionism and the Philosophy of Psychology. Cambridge, MA, MIT Press
Horgan T, Timmons M (2006a) ‘Cognitivist Expressivism’. In: Horgan T, Timmons M (eds) Metaethics after Moore. Oxford, Oxford University Press
Horgan T, Timmons M (2006b) ‘Expressivism, Yes! Relativism, No!’. In: Shafer-Landau R (ed) Oxford Studies in Metaethics, Vol. 1. Oxford and New York, Oxford University Press
Horgan T, Timmons M (2007) Morphological Rationalism and the Psychology of Moral Judgment. Ethical Theory Pract 10:279–295 doi:10.1007/s10677-007-9068-4
Horty J (2007) ‘Reasons as Defaults’. Philosophers’ Imprint 7(3):1–28
Jackson F, Pettit P, Smith M (2000) ‘Ethical Particularism and Patterns’. In: Hooker B, Little M (eds) Moral Particularism. Oxford, Oxford University Press
Kahneman D, Tversky A (1972) Subjective Probability: A Judgment of Representativeness. Cognit Psychol 3:430–454 doi:10.1016/0010-0285(72)90016-3
Kahneman D, Frederick S (2005) ‘A Model of Heuristic Judgment’. In: Holyoak K, Morrison R (eds) The Cambridge Handbook of Thinking and Reasoning. Cambridge, UK, Cambridge University Press
Kahneman D, Slovic P, Tversky A (1982) Judgment Under Uncertainty: Heuristics and Biases. Cambridge, UK, Cambridge University Press
Little MO (2000) ‘Moral Generalities Revisited’. In: Hooker B, Little M (eds) Moral Particularism. Oxford, Oxford University Press
McDowell J (1979) Virtue and Reason. Monist 62:331–350
McKeever S, Ridge M (2006) Principled Ethics: Generalism as a Regulative Ideal. Oxford, Oxford University Press
McNaughton D (1988) Moral Vision. Oxford, Basil Blackwell
Ross WD (1939) Foundations of Ethics. Oxford, Oxford University Press
Shanahan M (2004) ‘The Frame Problem’, Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/entries/frame-problem/)
Timmons M (2007) ‘Toward a Sentimentalist Deontology’. In: Sinnott-Armstrong W (ed) Moral Psychology, vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development. Cambridge, MA, The MIT Press
Horgan, T., Timmons, M. What Does the Frame Problem Tell us About Moral Normativity?. Ethic Theory Moral Prac 12, 25–51 (2009). https://doi.org/10.1007/s10677-008-9142-6
- Frame problem
- Relevance problem
- Computational cognitive science
- Moral normativity