This special issue of the International Journal of Game Theory contains a selection of papers presented at the 9th Conference on Logic and the Foundations of the Theory of Games and Decisions (LOFT9), which took place in Toulouse, July 5–7, 2010. While this special issue collects the papers with a stronger game-theoretic content, a second set of papers that are more logic-oriented can be found in a special issue of the Journal of Applied Non-Classical Logics. The LOFT conferences, which have been a regular biennial event since 1994, are interdisciplinary events that bring together researchers from a variety of fields: cognitive psychology, computer science and artificial intelligence, economics, game theory, linguistics, logic, mind sciences, philosophy, social choice and statistics.

In its original conception, LOFT had as its central theme the application of logic, in particular modal epistemic logic, to foundational issues in the theory of games and individual decision-making. Epistemic considerations have been central to game theory for a long time. The expression “interactive epistemology” has been used in the game-theory literature to refer to the analysis of strategic interaction based on an explicit modeling of the players’ beliefs about each other’s beliefs and rationality. The LOFT conferences arose from the realization that the tools and methodology that were used in game theory were closely related to those used in other fields, notably computer science, logic and philosophy. Modal logic turned out to be the common language that made it possible to bring together different professional communities.

It became apparent that the insights gained and the methodologies employed in one field could benefit researchers in other fields. Indeed, new and active areas of research have sprung from the interdisciplinary exposure provided by the LOFT events. Over time the scope of the LOFT conferences has broadened to encompass a wider range of topics, while the general issues of rationality and agency have remained the unifying focus. Topics that have fallen under the LOFT umbrella include epistemic and temporal logic, theories of information processing and belief revision, models of bounded rationality, non-monotonic reasoning, theories of learning and evolution, and social choice theory. This special issue contains papers that have a clear focus on game theory and at the same time reflect the general interests and interdisciplinary scope of the LOFT community.

The paper “AGM-consistency and perfect Bayesian equilibrium. Part I: definition and properties” by Giacomo Bonanno introduces a new notion of equilibrium for general extensive-form games. Its main ingredient is a purely qualitative condition, called “AGM-consistency”, which can be given an epistemic foundation based on the so-called AGM theory of belief revision introduced by Alchourrón, Gärdenfors and Makinson. An assessment (consisting of a strategy profile and a system of beliefs) is AGM-consistent if there is a “plausibility” order on the set of histories such that, for every information set, (1) the histories that are assigned positive probability by the system of beliefs are precisely the most plausible histories in that information set and (2) the choices that are assigned positive probability (by the relevant strategy) are precisely those that “preserve plausibility”. The author shows that the proposed notion of equilibrium is a refinement of subgame-perfect equilibrium but weaker than sequential equilibrium.
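In schematic form (the notation here is ours, not the paper's), the two conditions on an assessment $(\sigma, \mu)$ can be written as follows: there is a total pre-order $\precsim$ on histories (read $h \precsim h'$ as "$h$ is at least as plausible as $h'$", with $\sim$ its symmetric part) such that, for every information set $I$, every history $h \in I$ and every action $a$ available at $I$,

\[
(1)\quad \mu(h) > 0 \;\Longleftrightarrow\; h \precsim h' \ \text{for all } h' \in I,
\qquad
(2)\quad \sigma(a) > 0 \;\Longleftrightarrow\; ha \sim h .
\]

Condition (2) is one natural reading of "preserving plausibility": following $a$ leaves the resulting history exactly as plausible as the history it extends.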

The motivation behind Hubie Chen’s paper “Bounded rationality, strategy simplification and equilibrium” is to address the criticism that game-theoretic solution concepts assume unbounded rationality. He introduces the notion of “lean equilibrium”: a Nash equilibrium in which each player’s strategy is maximally simplified, with respect to a notion of simplification that accounts for potential deviations by other players. He studies lean equilibrium in two-player repeated games where strategies are represented as machines, or automata; the computational power of a player is expressed by the complexity of his automaton. In this setting, the author presents techniques for establishing that outcomes are at lean equilibrium, and illustrates their use with a number of examples. He also presents results on the structure of machines that are at equilibrium: if the complexity of a machine is measured by the number of transitions, a precise characterization of the structure of equilibria can be obtained.
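To make the machine representation concrete, here is a minimal sketch (our illustration, not the paper's formalism: the data structure, names and helper functions are assumptions) of a repeated-game strategy as a finite automaton, with the transition count serving as the complexity measure mentioned above.

```python
# A minimal sketch (ours) of a repeated-game strategy as a finite
# automaton (Moore machine): states are labeled with the action they
# play, and transitions are driven by the opponent's last action.

COOPERATE, DEFECT = "C", "D"

# Grim trigger: cooperate until the opponent defects once, then defect forever.
GRIM_TRIGGER = {
    "start": "coop",
    "output": {"coop": COOPERATE, "punish": DEFECT},
    "delta": {  # (state, opponent's action) -> next state
        ("coop", COOPERATE): "coop",
        ("coop", DEFECT): "punish",
        ("punish", COOPERATE): "punish",
        ("punish", DEFECT): "punish",
    },
}

def run(machine, opponent_actions):
    """Play the machine against a fixed sequence of opponent actions."""
    state, history = machine["start"], []
    for opp in opponent_actions:
        history.append(machine["output"][state])
        state = machine["delta"][(state, opp)]
    return history

def num_transitions(machine):
    """Transition count, one possible complexity measure for a machine."""
    return len(machine["delta"])

print(run(GRIM_TRIGGER, [COOPERATE, DEFECT, COOPERATE]))  # ['C', 'C', 'D']
print(num_transitions(GRIM_TRIGGER))                      # 4
```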

The paper “Where do preferences come from?” by Franz Dietrich and Christian List describes a framework for conceptualizing preference formation and preference change. In the proposed model, an agent’s preferences are based on certain “motivationally salient” properties of the alternatives over which the preferences are held. The model allows one to analyze preference change resulting from new properties of the alternatives becoming salient, or previously salient ones ceasing to be so, and it captures endogenous preferences in various contexts. It thus helps to illuminate the distinction between formal and substantive concepts of rationality, as well as the role of perception in rational choice.
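In schematic form (notation ours), the basic mechanism can be rendered as follows: if $M(x)$ denotes the set of motivationally salient properties possessed by alternative $x$, and $\geq$ is the agent's weighing relation over property sets, then

\[
x \succsim y \;\Longleftrightarrow\; M(x) \geq M(y),
\]

so that preferences can change not because the weighing relation changes, but because the set of salient properties does.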

In their contribution “Program equilibrium—A program reasoning approach” Wiebe van der Hoek, Cees Witteveen and Michael Wooldridge revisit the notion of program equilibrium, in which a player selects a strategy by submitting a program whose behavior may be conditioned on the programs submitted by the other players. Thus, for example, in the prisoner’s dilemma a player can submit a program saying “I cooperate if and only if, on comparison, his program is the same as mine”. The authors investigate an approach in which the comparison between programs, or strategies, is based on model checking. They then study a notion of coherent outcome: a strategy profile in which every decision by every player is justified by the conditions put forward in his program.
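The cooperation device can be made concrete with a minimal sketch (ours: the function names are assumptions, and syntactic equality of source code stands in for the richer model-checking comparison studied in the paper).

```python
import inspect

# A minimal sketch (ours) of the idea behind program equilibrium in the
# prisoner's dilemma: each player submits a program that can inspect the
# other player's program.  Here "comparison" is plain syntactic equality
# of source code; the paper studies comparison via model checking.

def mirror_program(opponent_source: str) -> str:
    """Cooperate if and only if the opponent's program is identical to mine."""
    my_source = inspect.getsource(mirror_program)
    return "C" if opponent_source == my_source else "D"

def always_defect(opponent_source: str) -> str:
    return "D"

# If both players submit mirror_program, each sees the other's
# (identical) source and cooperates; a unilateral switch to
# always_defect is met with defection, sustaining mutual cooperation.
src_mirror = inspect.getsource(mirror_program)
src_defect = inspect.getsource(always_defect)

print(mirror_program(src_mirror))  # C  (mirror vs. mirror)
print(mirror_program(src_defect))  # D  (mirror vs. defector)
```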

“Stability and fairness in models with a multiple membership” by Michel Le Breton, Juan Moreno-Ternero, Alexei Savvateev and Shlomo Weber falls within the area of cooperative game theory. It studies a model of coalition formation for the joint production and financing of public projects, allowing agents to belong to multiple coalitions. The main focus is on the existence of a budget-balanced, minimum-cost solution to a project location problem where the projects being located are excludable public goods. The authors show that, when projects are divisible (in the sense that individuals may use more than one project to satisfy their demand), the minimum-cost solution is secession-proof, that is, in the core. In the case of indivisibility, stable allocations may fail to exist and, for those cases, the authors resort to the least core in order to estimate the degree of instability.
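For readers less familiar with the terminology, secession-proofness is the familiar core condition, written here in our notation: a budget-balanced cost allocation $(x_i)_{i \in N}$ with $\sum_{i \in N} x_i = c(N)$ is secession-proof if

\[
\sum_{i \in S} x_i \;\leq\; c(S) \qquad \text{for every coalition } S \subseteq N,
\]

where $c(S)$ is the minimum cost at which coalition $S$ could serve its own members, so that no group can gain by seceding and building its own projects.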

The paper “Wisdom of the crowds vs. groupthink: learning in groups and in isolation” by Conor Mayo-Wilson, Kevin Zollman and David Danks revisits reinforcement-learning strategies in multi-armed bandit problems. It evaluates the performance of boundedly rational strategies, either in isolation or in networks of learners, where performance is measured in terms of the asymptotic tendency to play optimal actions (a schematic simulation of this setting is sketched after the next summary).

“Awareness-dependent subjective expected utility” by Burkhard Schipper is a contribution to the recent and growing literature on the modeling of (un)awareness. The author addresses the issue of how to distinguish between events an individual is aware of but assigns zero probability to, and events the individual is not aware of. The analysis is carried out in a framework that integrates the syntax-free structures introduced by Heifetz, Meier and Schipper with the Anscombe-Aumann approach to subjective expected utility. The author proposes a notion of “awareness-dependent subjective expected utility”, provides a characterization of it, and shows that unawareness has behavioral implications different from those of zero-probability belief.
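As flagged above, here is a minimal simulation sketch of the multi-armed bandit setting studied by Mayo-Wilson, Zollman and Danks (ours: the epsilon-greedy learner, the payoff probabilities and all names are assumptions; the paper's learners and performance criterion are more refined).

```python
import random

# A minimal sketch (ours) of reinforcement learning on a two-armed
# Bernoulli bandit: an epsilon-greedy learner tracks the empirical mean
# payoff of each arm and mostly plays the best arm so far.  The question
# studied in the paper is asymptotic: how often does a learner (alone or
# in a network) end up playing the optimal action?

ARM_PROBS = [0.4, 0.6]   # assumed payoff probabilities; arm 1 is optimal
EPSILON = 0.1            # exploration rate

def epsilon_greedy(num_rounds: int, rng: random.Random) -> float:
    """Return the fraction of rounds in which the optimal arm was played."""
    counts = [0, 0]
    means = [0.0, 0.0]
    optimal_plays = 0
    for _ in range(num_rounds):
        if rng.random() < EPSILON or counts[0] == 0 or counts[1] == 0:
            arm = rng.randrange(2)                     # explore
        else:
            arm = max((0, 1), key=lambda a: means[a])  # exploit
        reward = 1.0 if rng.random() < ARM_PROBS[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running mean
        optimal_plays += (arm == 1)
    return optimal_plays / num_rounds

print(epsilon_greedy(10_000, random.Random(0)))  # typically near 1 - EPSILON/2
```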

The paper “Epistemic characterizations of iterated deletion of inferior strategy profiles in preference-based type spaces” by Michael Trost provides an epistemic characterization of an iterative deletion procedure, called strong rationalizability, proposed by Stalnaker as a solution concept for strategic-form games. An epistemic characterization of the pure-strategy version of that solution concept was proposed by Bonanno in the context of qualitative Kripke frames. Trost, on the other hand, follows the event-based approach and considers type-space models where each type is associated with a preference relation on the state space. Within this context, Trost provides an epistemic characterization of both the qualitative, pure-strategy version and the probabilistic version of the iterative deletion algorithm.
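At the algorithmic level, any such procedure shares a simple fixpoint skeleton, sketched below (our scaffold: the inferiority test used here is a deliberately crude placeholder, since pinning down the right test and its epistemic content is precisely the subject of the paper).

```python
from itertools import product

# A schematic fixpoint skeleton (ours) for an iterated deletion procedure
# over strategy PROFILES: repeatedly remove profiles judged "inferior"
# relative to the surviving set, until nothing more can be removed.

def iterated_deletion(profiles, is_inferior):
    """Iterate: drop profiles inferior relative to the current survivors."""
    survivors = set(profiles)
    while True:
        inferior = {p for p in survivors if is_inferior(p, survivors)}
        if not inferior:
            return survivors
        survivors -= inferior

# Tiny example: a 2x2 game as payoff dictionaries (numbers assumed), with
# the placeholder test "some player has a strategy doing strictly better
# against every surviving strategy of the other player".
payoffs = {  # (row, col) -> (row payoff, col payoff)
    ("T", "L"): (2, 2), ("T", "R"): (0, 0),
    ("B", "L"): (3, 0), ("B", "R"): (1, 1),
}

def crude_inferiority(profile, survivors):
    row, col = profile
    rows = {r for r, _ in survivors}
    cols = {c for _, c in survivors}
    better_row = any(
        all(payoffs[(r, c)][0] > payoffs[(row, c)][0] for c in cols)
        for r in rows
    )
    better_col = any(
        all(payoffs[(r, c)][1] > payoffs[(r, col)][1] for r in rows)
        for c in cols
    )
    return better_row or better_col

all_profiles = set(product(["T", "B"], ["L", "R"]))
print(iterated_deletion(all_profiles, crude_inferiority))  # {('B', 'R')}
```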

The editors of the special issue would like to thank the authors for their submissions, the LOFT participants for their lively discussions and the many reviewers for their invaluable help during the thorough reviewing and editorial process. Last but not least, our thanks go to Bernhard von Stengel, co-Editor of the International Journal of Game Theory, for making this special issue possible.