• Hannes Leitgeb
Part of the Applied Logic Series book series (APLS, volume 30)


Both philosophers of mind and epistemologists sometimes develop their theories on a level of generality so high that they lose sight of the ground. In order to avoid a similar pitfall, we will develop our thoughts on inferences and their justification only for a specific class of cognitive agents and for a specific class of epistemic situations; both of these classes are defined by certain (simplifying) assumptions which we will state explicitly. Our theoretical account of inference is thus only meant to hold if these assumptions are met. We will put the constraints in terms of a little story, or a sandtable exercise, by which we will try to illustrate the more abstract issues. We will try to keep things as simple as possible, but at the same time as precise as necessary.






  1. The distinction between ⇀ and ⇉ is explained below. ⇀ and ⇉ are metavariables ranging over binary connectives. In a context where we consider both general sentences and sentences in L, we will denote the latter by explicit reference to the constant a, saying, e.g., α[a] instead of just α.
  2. We always adopt the convention that the connectives of propositional logic bind more strongly than the additionally introduced implication signs.
  3. A is a metalinguistic individual variable ranging over all cognitive agents which meet the constraints stated in the following chapters. Thus, the claims that we will make about A are actually universally quantified claims about cognitive agents of a certain type.
  4. Just as the string Bx is not a well-formed formula of the object language, since B may only be applied to formulas and not to singular terms.
  5. When we say that our cognitive agent A is a system of parameters, we do so in the same manner of speaking in which a physicist might say that the sun and planets taken together are a dynamical system. More precisely, we should say: real-world objects (the sun, the planets, our cognitive agent A) instantiate concrete systems (the solar system, the cognitive system of A) that consist of variables which are features of the real world and which change in real time in accordance with natural laws; concrete systems realize abstract systems, i.e., sets of abstract variables governed by mathematical rules or laws, where the “time” variable ‘t’ ranges over the natural or real numbers. In the following we will simplify matters by identifying the cognitive agent A with an abstract system of parameters that is realized by a concrete cognitive system instantiated by A (for more on this see van Gelder [173], pp. 616f). For our purposes, the question of whether we understand such phrases in a referentially opaque or in a referentially transparent way will not be important at all. But since a is actually used as a definite description here and not as a proper name, the referentially opaque reading would be the more appropriate one.

Copyright information

© Springer Science+Business Media Dordrecht 2004

Authors and Affiliations

  • Hannes Leitgeb
  1. Department of Philosophy, University of Salzburg, Austria
