Abstract
B.K. Matilal, and earlier J.F. Staal, have suggested a reading of the ‘Nyāya five limb schema’ (also sometimes referred to as the Indian Schema or Hindu Syllogism) from Gotama’s Nyāya-Sūtra in terms of a binary occurrence relation. In this paper we provide a rational justification of a version of this reading as Analogical Reasoning within the framework of Polyadic Pure Inductive Logic.
J.B. Paris—Supported by a UK Engineering and Physical Sciences Research Council (EPSRC) Research Grant.
A. Vencovská—Supported by a UK Engineering and Physical Sciences Research Council Research Grant.
Notes

1. It has been suggested that under such a perspective the role of the example may be to ensure existential import, see e.g. [4, p. 16].
2. Notice that we are taking the evidence as a single instance of a kitchen, hence the switch from ‘whenever’ on line 1 to ‘when’.
3. In place of \(a_i\) we sometimes use other letters to avoid subscripts or double subscripts.
4. In our view this makes it an obvious logic in which to investigate ‘analogical arguments’, where it is subjective probability that is being propagated by considerations of rationality.
5. This formulation of Ex is equivalent to that given in, say, [10], and avoids introducing extra notation.
6. Of course one has a vast background knowledge about fires, kitchens etc., none of which is alluded to in these premises.
7. In other words, such reasoning is appropriate only in so far as one is content to apply a principle of ceteris paribus.
8. To avoid problems with zero denominators we identify \(w(\theta \,|\,\phi ) \ge w(\psi \,|\,\eta )\) with \(w(\theta \wedge \phi )\cdot w(\eta ) \ge w(\psi \wedge \eta )\cdot w(\phi )\).
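The identification in note 8 is just the cross-multiplied form of the usual comparison of conditional probabilities; assuming the standard ratio definition of conditional probability, a one-line check shows the two agree whenever the denominators are nonzero:

```latex
w(\theta \mid \phi) \ \ge\ w(\psi \mid \eta)
\iff
\frac{w(\theta \wedge \phi)}{w(\phi)} \ \ge\ \frac{w(\psi \wedge \eta)}{w(\eta)}
\iff
w(\theta \wedge \phi)\cdot w(\eta) \ \ge\ w(\psi \wedge \eta)\cdot w(\phi)
\qquad \bigl(w(\phi),\, w(\eta) > 0\bigr),
```

and the last form remains meaningful when \(w(\phi)\) or \(w(\eta)\) is 0.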
References
1. Gaifman, H.: Concerning measures on first order calculi. Israel J. Math. 2, 1–18 (1964)
2. Ganeri, J.: Indian Logic: A Reader. Routledge, London, New York (2001)
3. Ganeri, J.: Ancient Indian logic as a theory of case-based reasoning. J. Indian Philos. 31, 33–45 (2003)
4. Matilal, B.K.: The Character of Logic in India. SUNY Series in Indian Thought (ed. Halbfass, W.). State University of New York Press, Albany (1998)
5. Matilal, B.K.: Introducing Indian logic. In: Ganeri, J. (ed.) Indian Logic: A Reader. Routledge, London, New York (2001)
6. Oetke, C.: Ancient Indian logic as a theory of non-monotonic reasoning. J. Indian Philos. 24, 447–539 (1996)
7. Paris, J.B., Vencovská, A.: The Indian schema as analogical reasoning. http://eprints.ma.man.ac.uk/2436/01/covered/MIMS_ep2016_10.pdf
8. Paris, J.B., Vencovská, A.: The Indian schema analogy principles. IfCoLog J. Logics Appl. http://eprints.ma.man.ac.uk/2436/01/covered/MIMS_ep2016_8.pdf
9. Paris, J.B., Vencovská, A.: Ancient Indian logic, pakṣa and analogy. In: Proceedings of the Joint Conference of the 3rd Asian Workshop on Philosophical Logic (AWPL 2016) and the 3rd Taiwan Philosophical Logic Colloquium (TPLC 2016), Taipei, October 2016 (to appear)
10. Paris, J.B., Vencovská, A.: Pure Inductive Logic. Association for Symbolic Logic Perspectives in Mathematical Logic Series. Cambridge University Press, New York (2015)
11. Schayer, S.: On the method of research into Nyāya (trans. Tuske, J.). In: Ganeri, J. (ed.) Indian Logic: A Reader, pp. 102–109. Routledge, London, New York (2001)
12. Staal, J.F.: The concept of pakṣa in Indian logic. In: Ganeri, J. (ed.) Indian Logic: A Reader, pp. 151–161. Routledge, London, New York (2001)
Appendix
To prove the theorem we need to appeal to a representation theorem for probability functions on L satisfying Ex. First we introduce some notation.
For the language L as above a state description for \(a_1,\ldots , a_n\) is a sentence of L of the form
\[ \bigwedge_{i,j \le n} R(a_i,a_j)^{\epsilon_{i,j}} \]
where the \(\epsilon _{i,j} \in \{0,1\}\) and \(R(a_i,a_j)^1 =R(a_i,a_j)\), \(R(a_i,a_j)^0=\lnot R(a_i,a_j)\). By a theorem of Gaifman, see [1] or [10, Chap. 7], a probability function on SL is determined by its values on the state descriptions.
Let \(D=(d_{i,j})\) be an \(N \times N\) \(\{0,1\}\)-matrix. Define a probability function \(w^D\) on SL by setting
\[ w^D\Bigl(\,\bigwedge_{i,j \le n} R(a_i,a_j)^{\epsilon_{i,j}}\Bigr) \]
to be the probability of (uniformly) randomly picking, with replacement, \(h(1), h(2),\ldots ,h(n)\) from \(\{1,2, \ldots , N\}\) such that \(d_{h(i),h(j)}= \epsilon _{i,j}\) for each \(i,j \le n\). This uniquely determines a probability function on SL satisfying Ex (for details see e.g. [10, Chap. 7]).
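This counting definition of \(w^D\) can be mirrored directly in code. The sketch below (illustrative only; the helper name `w_D` is ours) enumerates the maps \(h:\{1,\ldots ,n\}\rightarrow \{1,\ldots ,N\}\) and, as a sanity check, confirms that the values assigned to all \(2^{n^2}\) state descriptions for \(a_1,\ldots ,a_n\) sum to 1, as they must for a probability function.

```python
from itertools import product

def w_D(D, eps):
    """Value of w^D on the state description with exponent matrix eps.

    D   : N x N 0/1 matrix (list of lists), the matrix from the text.
    eps : n x n 0/1 matrix, eps[i][j] playing the role of epsilon_{i+1,j+1}.

    Counts the maps h : {1,...,n} -> {1,...,N} (picked uniformly, with
    replacement) such that d_{h(i),h(j)} = epsilon_{i,j} for all i, j,
    then divides by the N^n possible maps.
    """
    N, n = len(D), len(eps)
    hits = sum(
        all(D[h[i]][h[j]] == eps[i][j] for i in range(n) for j in range(n))
        for h in product(range(N), repeat=n)
    )
    return hits / N ** n

# Sanity check: the w^D-values of all state descriptions for a_1, ..., a_n
# sum to 1, since every map h realises exactly one exponent matrix.
D = [[1, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
n = 2
total = sum(
    w_D(D, [list(flat[i * n:(i + 1) * n]) for i in range(n)])
    for flat in product([0, 1], repeat=n * n)
)
print(total)  # sums to 1 (up to floating point)
```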
Clearly convex mixtures of these \(w^D\) also satisfy Ex. Indeed by the proof of [10, Theorem 25.1] it follows that any probability function w satisfying Ex can be approximated arbitrarily closely on QFSL by such convex mixtures. More precisely:
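Ex for these functions (and their convex mixtures) can likewise be checked numerically: permuting the constants \(a_1,\ldots ,a_n\) by \(\sigma \) replaces the exponent matrix \(\epsilon \) by \(\epsilon'_{i,j}=\epsilon _{\sigma (i),\sigma (j)}\), and since the maps h can be re-indexed by the same permutation, the probability is unchanged. A sketch under the counting definition above (helper names are ours):

```python
from itertools import permutations, product

def w_D(D, eps):
    """w^D on the state description with exponent matrix eps, by counting
    the maps h with d_{h(i),h(j)} = eps_{i,j}."""
    N, n = len(D), len(eps)
    hits = sum(
        all(D[h[i]][h[j]] == eps[i][j] for i in range(n) for j in range(n))
        for h in product(range(N), repeat=n)
    )
    return hits / N ** n

def mix(D1, D2, lam, eps):
    """A convex mixture lam * w^{D1} + (1 - lam) * w^{D2}."""
    return lam * w_D(D1, eps) + (1 - lam) * w_D(D2, eps)

def permuted(eps, sigma):
    """Exponent matrix of the state description with constants permuted by sigma."""
    n = len(eps)
    return [[eps[sigma[i]][sigma[j]] for j in range(n)] for i in range(n)]

D1 = [[1, 0, 1], [0, 0, 1], [1, 1, 0]]
D2 = [[0, 1], [1, 1]]
n = 3
# Every state description gets the same value as each of its permuted versions.
ex_holds = all(
    abs(mix(D1, D2, 0.3, eps) - mix(D1, D2, 0.3, permuted(eps, sigma))) < 1e-12
    for sigma in permutations(range(n))
    for flat in product([0, 1], repeat=n * n)
    for eps in [[list(flat[i * n:(i + 1) * n]) for i in range(n)]]
)
print(ex_holds)
```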
Lemma 2
For a probability function w on SL satisfying Ex, \(\theta _1, \ldots , \theta _m \in QFSL\) and \(\epsilon >0\) there is an \(N\) and \(\lambda _D \ge 0\) for each \(N\times N\) \(\{0,1\}\)-matrix D such that \(\sum _D \lambda _D =1\) and for \(j=1,\ldots , m\),
\[ \Bigl|\, w(\theta _j) - \sum_D \lambda _D\, w^D(\theta _j) \,\Bigr| < \epsilon . \]
We can extend this representation result to probability functions satisfying additionally SN as follows.
For \(\theta \in SL\) let \(\theta ^\lnot \) be the result of replacing each occurrence of R in \(\theta \) by \(\lnot R\), and similarly for a matrix D as above let \(D^\lnot \) be the result of replacing each occurrence of 0/1 in D by 1/0 respectively. For w a probability function on SL set \(w^\lnot \) to be the function on SL defined by
\[ w^\lnot (\theta ) = w(\theta ^\lnot ). \]
Then \(w^\lnot \) satisfies Ex and the probability function \(2^{-1}(w+ w^\lnot )\) satisfies Ex+SN. Conversely if w satisfies Ex+SN then \(w=w^\lnot \) so
\[ w = 2^{-1}(w + w^\lnot ). \]
Thus every probability function satisfying Ex+SN is of the form \(2^{-1}(v + v^\lnot )\) for some probability function v satisfying Ex, and conversely every such probability function satisfies Ex+SN.
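The symmetrisation \(2^{-1}(w^D + w^{D^\lnot })\) can be checked on state descriptions: flipping every entry of D gives \(D^\lnot \), flipping every exponent \(\epsilon _{i,j}\) gives the exponent matrix of \(\theta ^\lnot \), and the mixture assigns \(\theta \) and \(\theta ^\lnot \) the same value. A sketch assuming the counting definition of \(w^D\) above (helper names are ours):

```python
from itertools import product

def w_D(D, eps):
    """w^D on the state description with exponent matrix eps, by counting
    the maps h with d_{h(i),h(j)} = eps_{i,j}."""
    N, n = len(D), len(eps)
    hits = sum(
        all(D[h[i]][h[j]] == eps[i][j] for i in range(n) for j in range(n))
        for h in product(range(N), repeat=n)
    )
    return hits / N ** n

def neg(M):
    """Flip every 0/1 entry: gives D^neg for a matrix D, and the exponent
    matrix of theta^neg for a state description theta."""
    return [[1 - x for x in row] for row in M]

def u(D, eps):
    """The SN-symmetrised mixture 2^{-1}(w^D + w^{D^neg})."""
    return 0.5 * (w_D(D, eps) + w_D(neg(D), eps))

# Check SN on state descriptions: u gives every state description the same
# value as its R / not-R swapped version.
D = [[1, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
n = 2
ok = all(
    abs(u(D, eps) - u(D, neg(eps))) < 1e-12
    for flat in product([0, 1], repeat=n * n)
    for eps in [[list(flat[i * n:(i + 1) * n]) for i in range(n)]]
)
print(ok)
```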
Notice that if
\[ \Bigl|\,v(\theta _j) - \sum _D \lambda _D\, w^D(\theta _j)\Bigr| < \epsilon \quad \text{for } \theta _1,\ldots ,\theta _m,\theta _1^\lnot ,\ldots ,\theta _m^\lnot \]
then, since \(v^\lnot (\theta _j)=v(\theta _j^\lnot )\) and \(w^{D^\lnot }(\theta _j)=w^D(\theta _j^\lnot )\),
\[ \Bigl|\,v^\lnot (\theta _j) - \sum _D \lambda _D\, w^{D^\lnot }(\theta _j)\Bigr| < \epsilon \]
and
\[ \Bigl|\,2^{-1}(v+v^\lnot )(\theta _j) - \sum _D \lambda _D\, 2^{-1}\bigl(w^D + w^{D^\lnot }\bigr)(\theta _j)\Bigr| < \epsilon . \]
In particular then by Lemma 2,
Lemma 3
For a probability function w on SL satisfying Ex+SN, \(\theta _1, \ldots , \theta _m \in QFSL\) and \(\epsilon >0\) there is an \(N\) and \(\lambda _D \ge 0\) for each \(N\times N\) \(\{0,1\}\)-matrix D such that \(\sum _D \lambda _D =1\) and for \(j=1,\ldots , m\),
\[ \Bigl|\, w(\theta _j) - \sum_D \lambda _D\, 2^{-1}\bigl(w^D + w^{D^\lnot }\bigr)(\theta _j) \,\Bigr| < \epsilon . \]
Let w be a probability function on SL satisfying Ex and for a \(2 \times 2\) \(\{0,1\}\)-matrix
\[ E = \begin{pmatrix} e_{1,1} & e_{1,2} \\ e_{2,1} & e_{2,2} \end{pmatrix} \]
let
\[ |E|_w = w\Bigl(\,\bigwedge_{i,j \in \{1,2\}} R(a_i,a_j)^{e_{i,j}}\Bigr). \]
We will omit the subscript w if it is clear from the context. Notice that when \(D=(d_{i,j})\) is an \(N \times N\) \(\{0,1\}\)-matrix, then for E as above we have
\[ |E|_{w^D} = N^{-2} \sum_{r,s=1}^{N} d_{r,r}^{\,e_{1,1}}\, d_{r,s}^{\,e_{1,2}}\, d_{s,r}^{\,e_{2,1}}\, d_{s,s}^{\,e_{2,2}} \]
where \(x^1=x\), \(x^0= 1-x\). We will write \(|E|_D\) in place of \(|E|_{w^D}\).
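Assuming \(|E|_{w^D}\) denotes \(w^D\) applied to the state description for two constants with exponent matrix \(E=(e_{i,j})\), the closed form \(N^{-2}\sum_{r,s} d_{r,r}^{\,e_{1,1}} d_{r,s}^{\,e_{1,2}} d_{s,r}^{\,e_{2,1}} d_{s,s}^{\,e_{2,2}}\) (with \(x^1=x\), \(x^0=1-x\)) can be checked against direct counting, since for 0/1 entries each factor \(x^e\) is just the indicator of \(x=e\). An illustrative sketch with hypothetical names:

```python
from itertools import product

def count_E(D, E):
    """|E|_{w^D} from the definition of w^D: the probability that a
    uniformly chosen pair r, s (with replacement) from {1,...,N} satisfies
    d_{r,r}=e_{1,1}, d_{r,s}=e_{1,2}, d_{s,r}=e_{2,1}, d_{s,s}=e_{2,2}."""
    N = len(D)
    hits = sum(
        D[r][r] == E[0][0] and D[r][s] == E[0][1]
        and D[s][r] == E[1][0] and D[s][s] == E[1][1]
        for r in range(N) for s in range(N)
    )
    return hits / N ** 2

def formula_E(D, E):
    """The same quantity via the product formula, with x^1 = x, x^0 = 1 - x."""
    def p(x, e):
        return x if e == 1 else 1 - x
    N = len(D)
    return sum(
        p(D[r][r], E[0][0]) * p(D[r][s], E[0][1])
        * p(D[s][r], E[1][0]) * p(D[s][s], E[1][1])
        for r in range(N) for s in range(N)
    ) / N ** 2

# The two computations agree for every 2 x 2 exponent matrix E.
D = [[1, 0, 1],
     [0, 0, 1],
     [1, 1, 0]]
agree = all(
    abs(count_E(D, [[e11, e12], [e21, e22]])
        - formula_E(D, [[e11, e12], [e21, e22]])) < 1e-12
    for e11, e12, e21, e22 in product([0, 1], repeat=4)
)
print(agree)
```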
A useful observation is that for any probability function w satisfying Ex, |E| is invariant under simultaneously permuting the rows and columns of E (by Ex, since this amounts to permuting \(a_1\) and \(a_2\)), so for example
\[ \begin{vmatrix} e_{1,1} & e_{1,2} \\ e_{2,1} & e_{2,2} \end{vmatrix} = \begin{vmatrix} e_{2,2} & e_{2,1} \\ e_{1,2} & e_{1,1} \end{vmatrix} \]
etc. We will use this observation frequently in what follows.
Let
Lemma 4
For any probability function w satisfying Ex we have \(T,Z \ge U\) and \(X \ge 2Z, 2T\).
Proof. We shall prove that \(T \ge U\); the other inequalities follow similarly. Let \(D=(d_{i,j})\) be an \(N \times N\) \(\{0,1\}\)-matrix and assume first that \(w= w^D \). By the above observation,
so \(T \ge U\) is the inequality
which is equivalent to the sum over r, s of
being nonnegative, and hence clearly true. From this it follows that the result holds for convex combinations of the \(w^D\) and hence by Lemma 2 for general w satisfying Ex.
Proof of Theorem 1. We start with the left hand side inequality. Let w be a probability function satisfying Ex+SN. If \(w(R(s,h) \wedge (R(s,k) \rightarrow R(f,k)))\) and/or w(R(s, h)) equals 0 then (2) holds by our convention, so assume that these values are nonzero. Consider an approximation \(2^{-1}\sum _D \lambda _D (w^D + w^{D^\lnot })\) of w for the \(\theta \) of the form
with small \(\epsilon \) and as guaranteed by Lemma 3.
For an \(N \times N\) \(\{0,1\}\)-matrix \(D=(d_{i,j})\), write u for \(2^{-1}(w^D + w^{D^\lnot })\). We have
Let \(\hat{D}\) be another (not necessarily distinct) \(N \times N\) \(\{0,1\}\)-matrix. Working with approximations of w for arbitrarily small \(\epsilon \), it can be seen that to show (2) for w it suffices to demonstrate that for any pair \(D, \hat{D}\) we have
This simplifies to
and since by Lemma 4 we have \(Z_D \ge U_D\), \( Z_{\hat{D}}\ge U_{\hat{D}} \), it suffices to show that
We have
where
Similarly
where
and, using (5),
Similarly for \(\hat{D}= (\hat{d}_{i,j})\). Writing \(u_{i,j}\) for \(x_{i,j}+y_{i,j}\) etc., the inequality (6) becomes
which holds since for any particular pairs i, j and g, h,
Turning to the right hand side inequality it is enough to show that
equivalently
Proceeding as above (but much simpler since it does not need to involve the \(\hat{D}\)) it is sufficient to show that
and indeed this holds by Lemma 4. \(\Box \)
Theorem 5
Let w be a probability function on SL satisfying Ex+SN. Let h, k, s, f be distinct constants from amongst the \(a_1, a_2, a_3, \ldots \).
Then
Proof. Starting with the bi-implication case and proceeding as in the proof of the second inequality in Theorem 1 it is enough to show that
To this end notice that
Writing
the required inequality becomes
which clearly holds.
The second inequality in the theorem can likewise be reduced to showing that \(X_D \ge Y_D\) and this follows from (10) and Lemma 4. \(\Box \)
Copyright information

© 2017 Springer-Verlag GmbH Germany

Cite this paper

Paris, J.B., Vencovská, A. (2017). Ancient Indian Logic and Analogy. In: Ghosh, S., Prasad, S. (eds.) Logic and Its Applications. ICLA 2017. Lecture Notes in Computer Science, vol. 10119. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-54069-5_15

Print ISBN: 978-3-662-54068-8. Online ISBN: 978-3-662-54069-5.