Juliet: If they do see thee, they will murder thee. A satisficing algorithm for pragmatic conditionals
In a recent Mind & Society article, Evans (2005) argues for the social and communicative function of conditional statements. In a related article, we argue for satisficing algorithms for mapping conditional statements onto social domains (Eur J Cogn Psychol 16:807–823, 2004). The purpose of the present commentary is to integrate these two arguments by proposing a revised pragmatic cues algorithm for pragmatic conditionals.
Keywords: Conditionals · Pragmatics · Satisficing algorithms
According to Evans (2005), speakers use conditional statements of the form “if P, then Q” to influence the actions and beliefs of listeners. They do so in context, by having listeners imagine the actual possibility of P and the practical consequence of Q before deciding how to act or what to believe. Take his example of an editor telling an author: “if you submit your paper to our journal, we will publish it”. For Evans, this is an instance of a promise, because it strongly encourages the act of submission by the listener, as the reward of publication is controlled by the speaker. More broadly, it is a statement by the speaker meant to induce an action by the listener.
We basically agree with Evans’ (2005) analysis of conditionals, except for his distinction between inducements and advice in terms of influence strength. According to him, an inducement is stronger than advice, because in an inducement the speaker controls the consequent event, whereas in advice the speaker does not. Take his above example of a promise and compare it to his other example of a colleague telling an author: “If you submit your paper to their journal, they will publish it”. For him, this is an instance of a tip, because it only weakly encourages the act of submission by the listener, as the reward of publication is controlled not by the speaker but by others.
We do believe that the speaker’s control of the consequences is the feature that discriminates between an inducement and advice, but we do not believe that this feature makes an inducement necessarily stronger than advice. Take the example of a modern Juliet telling her Romeo: “if my brothers see you, they will kill you”. According to Evans (2005) and ourselves, this is a warning, because it seeks to deter the act by Romeo, and the punishment is controlled not by Juliet but by her brothers. However, this warning is stronger, not weaker, than the threat of Juliet telling Romeo: “if my brothers see you, I will kill you”, even though now the punishment is controlled by Juliet, not her brothers. Or take a pharmacist’s advice: “if you take this pill, it will calm you”, and compare it to a pharmacist’s promise: “if you take this pill, I will calm you”. Evidently, an inducement is not necessarily stronger than advice. In fact, it is context that determines the strength of a conditional. For example, a medical warning is stronger when made by an expert doctor than by a novice.
2 A satisficing algorithm for pragmatic conditionals
However, the point of this commentary is a different one, namely, to integrate Evans’ (2005) detailed analysis of pragmatic conditionals in his recent article with a satisficing algorithm for pragmatic conditionals that we advanced in a related article (López-Rousseau and Ketelaar 2004), particularly because, in his suppositional approach, Evans does not address the possibility that conditional reasoning is driven by satisficing processes.
Take the example of Kirsten telling Julia:

Kirsten: If you fail me, there will be consequences.

Julia: Are you threatening me?

It is not clear whether Kirsten’s conditional is a threat to Julia, and this is why Julia asks Kirsten whether she is threatening her. Apparently, people’s cognitive algorithm for classifying conditionals is not optimal but satisficing, namely, a simple serial procedure that suffices for satisfactory classifications in most cases, but not in all cases. So what exactly is this cognitive algorithm?
The pragmatic cues algorithm is meant to simulate people’s cognitive algorithm for classifying conditionals. The algorithm is restricted to six pragmatic conditionals and three linguistic cues. In fact, the algorithm is meant to be maximally simple, including the minimum of three cues needed to classify those six conditionals. It is also meant to be serial, adopting the sequential form of a decision tree, which simplifies classification by discarding three conditionals after the first cue and two more conditionals after the second cue. And it is meant to be satisficing, producing correct classifications in most but not all cases. In this regard, the pragmatic cues algorithm would misclassify any excluded conditional (e.g., requests) or any included conditional based on excluded cues (e.g., gestures). Evidently, people’s cognitive algorithm would include all conditionals and all cues (for details, see López-Rousseau and Ketelaar 2004).
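The decision-tree form described above can be sketched as follows. The specific cues and their ordering here (a benefit to the listener, a deontic marker such as “may” or “must”, and speaker control of the consequence) are our illustrative reading of this description, not the exact formulation of López-Rousseau and Ketelaar (2004):

```python
def classify_conditional(benefit_to_listener, deontic_marker, speaker_controls):
    """Satisficing decision tree over three pragmatic cues (illustrative ordering).

    The first cue discards three of the six categories, the second cue
    discards two more, and the third cue settles the remaining pair.
    """
    if benefit_to_listener:      # cue 1: consequent benefits the listener
        if deontic_marker:       # cue 2: a "may"-style marker -> permission
            return "permission"
        # cue 3: speaker controls the consequence -> inducement, else advice
        return "promise" if speaker_controls else "tip"
    else:
        if deontic_marker:       # cue 2: a "must"-style marker -> obligation
            return "obligation"
        return "threat" if speaker_controls else "warning"

# Juliet's "if my brothers see you, they will kill you": no benefit for the
# listener, no deontic marker, consequence not controlled by the speaker.
print(classify_conditional(False, False, False))  # warning
```

With these three binary cues the tree yields exactly one of the six categories per path, which is what makes it fast and serial; its satisficing character lies in everything the cues ignore.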
Given that a number of complex, parallel or optimizing algorithms could be used for classifying conditionals, an empirical test was run on how well the pragmatic cues algorithm approximates the performance of people’s cognitive algorithm. Briefly, conditional promises, threats, advice, warnings, permissions and obligations were collected from people, and given to other people and to the algorithm for classification. Their corresponding performances were then compared. The results show that people classified most conditionals correctly, and that the pragmatic cues algorithm did almost as well as people. Both the algorithm’s and people’s classifications were far better than chance, and their misclassifications were randomly distributed. These findings indicate that the pragmatic cues algorithm approximates well the performance of people’s cognitive algorithm for classifying conditionals, and suggest that this satisficing algorithm might be an integral part of that cognitive algorithm (see López-Rousseau and Ketelaar 2004).
Now take again the example of Kirsten telling Julia: “if you fail me, there will be consequences”. According to the pragmatic cues algorithm, it is not clear whether Kirsten’s conditional is a threat to Julia. Actually, it is unclear to Julia herself as well, and this is why she asks Kirsten whether she is threatening her. To the algorithm, it is unclear, first, whether the stated consequences are meant as a benefit for the listener (Julia) or not, and second, whether these consequences involve an act of the speaker (Kirsten) or not. The conditional’s context suggests that the consequences would not be a benefit for the listener (Julia) and could involve an act of the speaker (Kirsten). Thus, Kirsten’s conditional is probably a threat to Julia.
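The ambiguity the algorithm faces here can be illustrated by enumerating the classifications consistent with the cues that context does supply. The cue names and the three-cue tree below are our illustrative assumptions about the algorithm’s form, not the authors’ exact formulation:

```python
from itertools import product

def classify(benefit, deontic, speaker_controls):
    """Illustrative three-cue decision tree over six pragmatic conditionals."""
    if benefit:
        return "permission" if deontic else ("promise" if speaker_controls else "tip")
    return "obligation" if deontic else ("threat" if speaker_controls else "warning")

def candidates(benefit=None, deontic=None, speaker_controls=None):
    """Classifications consistent with the known cues; None marks an undecided cue."""
    span = lambda v: (True, False) if v is None else (v,)
    return {classify(b, d, s)
            for b, d, s in product(span(benefit), span(deontic), span(speaker_controls))}

# Kirsten's "if you fail me, there will be consequences": context suggests
# no benefit and no deontic marker, but speaker involvement is left open.
print(sorted(candidates(benefit=False, deontic=False)))  # ['threat', 'warning']
```

Under these assumptions, context narrows Kirsten’s conditional to a threat or a warning; it is the suggestion that Kirsten herself could act that tips the classification toward a threat.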