1 Introduction

Formal epistemologists, much like non-formal epistemologists, are interested in an agent’s knowledge and its rational and/or justified beliefs. Much of formal epistemology has been devoted to the idea that a rational agent’s beliefs come in degrees. The most popular explication of graded beliefs is in terms of formal models of belief employing probabilities. Arguments to justify these models have been devised using Dutch Book Arguments or scoring rules.

Economists and computer scientists have long been interested in properties of the allocation of goods and services to economic agents. Much work has been devoted to the notions of an efficient market and profit maximisation. A key insight is that allocations of goods and services do not appear magically but are part of a market that can be designed. In order to construct a market with desirable properties, economists and computer scientists created incentive structures for economic agents, a process referred to as mechanism design.

This article draws parallels between formal epistemology and mechanism design. Analysing key arguments in both fields, we discover the hitherto unobserved fact that researchers in both fields employ the same argument structure. This is a priori somewhat surprising, given that the fields are isolated from each other, as evidenced by an absence of cross-referencing. Owing to this unexpected closeness, philosophical thinking can have immediate intellectual and practical impact. Furthermore, by engaging with the relevant literatures, philosophers can discover concepts, techniques and considerations which are useful to their own philosophical endeavours. Conversely, researchers working on mechanism design may apply their advanced mechanisms fruitfully in formal epistemology.

Close connections and potential cross-fertilisation between formal epistemology and other branches of computer science (machine learning and artificial intelligence) have previously been identified (Ortner and Leitgeb 2011; Williamson 2004).

The rest of the paper is organised as follows. The next two sections are used to introduce mechanism design and formal epistemology (Sects. 2 and 3). Section 4 explicitly traces parallels between mechanism design and formal epistemology. Section 5 offers some conclusions.

2 Mechanism Design

The basic idea underlying mechanism design is familiar to all of us: to bring about preferable circumstances. Next, we show how this basic idea has given rise to the field of mechanism design. By discussing one truly remarkable mechanism, we delineate an argument structure which is key in mechanism design and which we re-discover in three areas of formal epistemology.

2.1 Idea, Challenge and the Internet

Economists have a long-standing interest in analysing markets in all their facets (equilibria, actors, collapses, cycles, etc.). In the 1990s, the idea took hold that markets do not simply exist; markets can also be created. The creator of a market can hence influence the behaviour of market actors through the way the market operates. It is not uncommon for a market creator to also be a market actor; such creators hence have strong preferences about how the market operates. Consider for example an auction house, which designs its own specific rules to further its own interests. The challenge arises to create a market, or design a market mechanism, that satisfies (as much as possible) the designer’s preferences.

The main difficulty the designer faces is that—within the set boundaries—actors are free to act any way they choose. That is, once the market has been designed, the designer does not have means to force actors to behave a certain way. For example, the auction house cannot force potential bidders to submit (high) bids.

Ceteris paribus, economic actors prefer lower coordination costs. Information technologies, such as the internet, have long been recognised to have enormous potential for reducing coordination costs, see for example Malone et al. (1987). Unsurprisingly, computer scientists hence also became involved in mechanism design. Nowadays, much effort is expended on the design of tailor-made web-based market mechanisms.

Two successful applications of designed web-based mechanisms are reported in Hohner et al. (2003) and Sandholm (2007). The former reports how tailor-made auctions at Mars Incorporated benefited Mars and its suppliers. The latter is an impressive story of selling an eye-popping \(\$35\) billion through specially designed auctions. Mechanism design done well can be really beneficial to the designer and others. However, not all designed mechanisms are successful; the fitting titles of Cramton (2003) and Klemperer (2002) are “Electricity market design: the good, the bad, and the ugly” and “How (not) to run auctions”.

Overviews of scholarly work on mechanism design can be found in Cramton et al. (2006), Klemperer (1999), Nisan (2014), Roth (2002), Dash et al. (2003). The first two chart auctions while the last two are more concerned with computational issues. A recent area of exploration is computational issues in auction design, which is surveyed in Chawla and Sivan (2014), Hartline (2012).

We now visit a landmark mechanism, the Vickrey Auction.

2.2 Vickrey Auction

Let us consider a salesman wishing to sell one indivisible good or one service, e.g., a painting. Clearly, he has a preference to sell at a high price, but there are a number of other desires concerning a transaction mechanism that he may have. He might prefer to do business:

  1. (i)

    quickly (time is money),

  2. (ii)

    simply (so that everyone clearly understands how the transaction mechanism works),

  3. (iii)

    fairly (treating all potential trading partners equally),

  4. (iv)

    ensuring that no actor revealing true preferences loses utility (keeping future potential trading partners happy and honest),

  5. (v)

    allocating the good/service to the actor with the highest private valuation, assuming all actors are rational (most beneficial to the economy as a whole, according to many economic theories),

  6. (vi)

    yielding an outcome of the transaction mechanism that is consistent with the private information of every actor (to quell worries about fair play) and

  7. (vii)

    keeping the private valuations of market actors private.

The Vickrey Auction, introduced in Vickrey (1961), satisfies all these desiderata. A Vickrey Auction is an ingenious single-round game for N players interested in buying one item from the salesman:

  1. (A)

    Rules Every player privately transmits one single bid (some positive number in \({\mathbb {R}}\)) to the salesman.

  2. (B)

    Outcome The item is awarded to the player with the highest bid who publicly pays the second highest bid submitted.

  3. (C)

    Utility The utility of the highest bidder is the difference between her utility of the item and the price. The utility of all other players is zero.

  4. (D)

    Decision norm Players ought to bid such that their bid is not weakly dominated by another bid: avoid bids for which there exists another bid which never returns less utility and sometimes more utility.

  5. (E)

    Mathematical theorem The only way for a player to avoid a weakly dominated bid is to bid her private true valuation of the item. If all players bid true valuations, then (i)–(vii) obtain.

Let us briefly discuss why (i)–(vii) obtain. The game is

  1. (i)

    single-round (only one bid),

  2. (ii)

    it does not involve higher maths,

  3. (iii)

    its outcome only depends on the amount agents bid,

  4. (iv)

    neglecting the time and effort spent putting in a bid: the utility of actors submitting unsuccessful bids is zero, while the utility of the highest bidder is strictly greater than zero; it is the difference between her bid (equalling her true valuation) and the second highest bid (demonstrated below),

  5. (v)

    holds (demonstrated below) and

  6. (vi)

    also holds.

  7. (vii)

    What becomes public knowledge is that the agent allocated the item put in the highest bid and that the announced price was the second highest bid, nothing more.

For the following discussion we assume that no two players submit the exact same bid. Couldn’t a player do better by bidding less than her true valuation, in case she won the auction? If her new lower bid is not the highest bid any more, she is not allocated the item and her utility drops to zero. If she is still the highest bidder, then she still has to pay the same price (the second highest bid), in which case her utility remains unchanged. So, putting in a lower bid is not a good idea. What about a player bidding more than her true valuation, in case she was not the highest bidder? If the new higher bid is still below the maximal bid, then her utility is still zero. If the new higher bid becomes the highest bid, then the player has to pay a price which exceeds her valuation of the item, yielding negative utility. All other cases are similar.

Summing up, bidding one’s true valuation is guaranteed to be at least as good as every other bidding strategy. Any other bidding strategy yields lower utility in certain cases and is never strictly better. This holds independently of what all the other bidding actors do.
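The weak-dominance claim can be checked numerically. The following sketch (the helper `vickrey_utility`, the random setup and the deviation factors are illustrative choices, not part of the original argument) simulates many second-price auctions and confirms that no over- or under-bid ever yields strictly more utility than bidding the true valuation:

```python
import random

def vickrey_utility(my_bid, my_value, other_bids):
    """My utility in a second-price auction: zero if I do not submit the
    highest bid; otherwise my valuation minus the highest competing bid."""
    if my_bid <= max(other_bids):
        return 0.0
    return my_value - max(other_bids)

random.seed(0)
for _ in range(10_000):
    value = random.uniform(0, 100)
    others = [random.uniform(0, 100) for _ in range(4)]
    truthful = vickrey_utility(value, value, others)
    # no deviation from truthful bidding ever does strictly better
    for factor in (0.5, 0.9, 1.1, 2.0):
        assert vickrey_utility(value * factor, value, others) <= truthful + 1e-12
print("bidding the true valuation was never strictly beaten")
```

Note that the simulation only illustrates the theorem; the case analysis above is what establishes weak dominance in general.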

But what about the salesman and his preference to sell at a high price? The item is traded for the second highest bid which equals, if all players are rational, the second highest true valuation. The item is hence sold for a high price if and only if at least two agents value it highly. Noting that any price greater than the highest true valuation is a salesman’s pipe-dream, the second highest true valuation appears like one reasonable way to reflect the salesman’s negotiation power. Clearly, if the salesman knows that there is only one actor who highly values the good or service, then it might make more economic sense to directly negotiate with this actor.Footnote 1

A lot of work has gone into modifying the Vickrey auction. For example, the celebrated Vickrey-Clarke-Groves (VCG) auction (Clarke 1971; Groves 1973; Vickrey 1961) generalises the above game to simultaneously selling multiple items; see Abrache et al. (2007) for an overview. This generalisation also satisfies (i), (ii), (iii), (iv), (vi) and (vii). As for (v): the distribution of the multiple items maximises the sum of private valuations.

Other directions of work are auctions designed for an agent seeking to buy—rather than sell—from a group of salesmen. Such auctions are called ‘procurement’ or ‘reverse’ auctions. One issue in such auctions is that items available may differ in quality and price simultaneously, refer to Branco (1997), Che (1993) for dated overviews and see Buettner and Landes (2012) for an online reverse VCG auction with differing qualities and prices, multiple bidders and multiple sellers as well as bids consisting of multiple components.

2.3 Structure of Mechanism Design

Economic actors (e.g., the salesman, procurer) have different preferences on how they want to do business. They hence want to use a mechanism tailored to their own preferences.

To achieve a particular set of economic goals in a particular economic situation, P, a mechanism designer follows a road map: design a game G specifying

  1. (A)

    rules,

  2. (B)

    outcomes,

  3. (C)

    utility and a

  4. (D)

    decision norm characterising rational action for agents playing game G.

  5. (E)

    Mathematical theorem: if all agents playing game G are rational in the sense of (D), then P obtains.

We already saw that the design of the Vickrey auction follows this road map to ensure (i)–(vii), a designer’s preferences in a particular economic situation.

2.4 Mechanism Design–Up Close and Personal

At this point, the philosophical reader might wonder what all of this has to do with them. Well, mobile-phone licenses are sold by governments through auctions which are particularly tailored to the occasion. Every time you make a call on a mobile phone, you use a license bought through an auction (Klemperer 2002).Footnote 2

Then there is eBay, a rather successful application of mechanism design. Reputation systems on eBay, designed to take over the function reputation plays in real-world markets, are effective, as reported in Houser and Wooders (2006), Resnick et al. (2006).

If the reader is at this point not persuaded that mechanism design is worth knowing about, let me briefly tell you that some have thought about how (not) to sell the monstrosities known as ‘nuclear weapons’ by designing a clever mechanism. The brave reader who wants to venture on such dangerous grounds is referred to Jehiel et al. (1996).

With this rough guide of the lands of mechanism design in mind, we now turn to philosophical shores.

3 Formal Epistemology

3.1 Some Background

Formal epistemologists, much like non-formal epistemologists, are interested in an agent’s knowledge and its rational and/or justified beliefs. Much of formal epistemology has been devoted to the idea that a rational agent’s beliefs come in degrees: a proposition F can be believed to a degree. The leading account of degrees of belief (often called credences), Bayesianism, has become a paradigm in the philosophy of science (Easwaran 2011a; 2011b; Sprenger and Hartmann 2019; Landes 2021b; Joyce 2011; Huber 2016; Weisberg 2015; Radzvilas et al. 2022; Skipper and Bjerring 2022; Schupbach 2022).

One core tenet of Bayesian epistemology is Probabilism, the norm that credences ought to satisfy the axioms of probability and all such credences are, in principle, admissible. There is considerable debate as to which other—if any—further constraints apply to a rational agent’s degrees of belief.

Another popular thought among formal epistemologists is that credences should not contradict known chances (sometimes referred to as objective probabilities). For example, if it is known that a coin is fair, then it seems unwise to believe to a degree \(\frac{1}{10}\) that the coin to be tossed shortly will show ‘heads’. This idea underlies the Principal Principle of Lewis (1980).

Many writers have also held that a symmetry in an agent’s body of evidence ought to entail a corresponding symmetry in credence. There are a number of formalisations of this principle, called the Principle of Indifference, on the market. In its weakest form this principle says: ‘Over a given and fixed finite set \(\Omega \) of N mutually exclusive and jointly exhaustive atomic propositions, a rational agent’s credence in every \(\omega \in \Omega \) ought to be \(\frac{1}{N}\), if the agent does not possess any evidence concerning the \(\omega \in \Omega \).’ Arguments in favour of this principle have recently been offered in Novack (2010), Paris (2014), Pettigrew (2016b), White (2010), Eva (2019), Decock et al. (2016). Somewhat surprisingly, some Bayesians argue that the Principal Principle implies the Principle of Indifference (Hawthorne et al. 2017; Landes et al. 2021b); others have disagreed (Pettigrew 2020; Titelbaum and Hart 2020; Gyenis and Wroński 2017).

Undoubtedly, Probabilism is more widely accepted than all rival formalisations of graded beliefs. Nevertheless, there are alternative ways to capture the idea that rational belief comes in degrees. One objection to Probabilism is that it is sometimes impossible to precisely specify degrees of belief in a proposition given the available evidence. In particular, the Principle of Indifference cannot be accepted. Rather, the agent’s ignorance about an atomic proposition \(\omega \in \Omega \) ought to be reflected by not assigning any precise credence to it. One such alternative is Dempster-Shafer-Theory, see Shafer (2011) for a modern treatment, and imprecise probabilities more generally, see Bradley (2015) for an overview. Alternatively, ranking functions have been suggested as a model for rationality (Spohn 2012).

Summing up, there are a number of (mutually conflicting) epistemic attitudes a rational agent may have. A philosopher may think that (in a particular epistemic situation) a rational agent possesses some specific subset of epistemic attitudes. It is incumbent on the philosopher to give an argument for her thoughts.

Unbeknownst to them, formal epistemologists employ the exact same argument structure used by designers of mechanisms in economics and computer science, which I outlined in Sect. 2.3: formal epistemologists design mechanisms to satisfy their epistemological goals.

3.2 Dutch Book Arguments

Arguably the most famous argument in rational belief formation is de Finetti’s Dutch Book Argument for Probabilism (de Finetti 1937; 1980; Pettigrew 2020b). De Finetti designed a single-round, two-player, zero-sum game between a bettor and a book-maker as follows:Footnote 3

  1. (A)

    Rules On a finite set of possible worlds, \(\Omega \), containing the actual world \(\omega ^*\in \Omega \), the bettor has to assign a betting rate to all \(F\subseteq \Omega \), \(b_F\in {\mathbb {R}}\). The book-maker chooses a stake \(s_{F}\in \{-1,0,+1\}\) for every \(F\subseteq \Omega \); selling or buying the bet at the betting rate \(b_F.\)

  2. (B)

    Outcome At the world \(\omega \in \Omega \), the bet on F returns \(r_{\omega }(b_F)=-s_{F}\cdot b_F\), if \(\omega \notin F\) and \(r_{\omega }(b_F)=s_{F}\cdot (1-b_F)\), if \(\omega \in F\).

  3. (C)

    Utility The bettor’s utility at world \(\omega \) is the return of all bets, \(\sum _{F\subseteq \Omega }r_{\omega }(b_F)\). The book-maker’s utility is \(-\sum _{F\subseteq \Omega }r_{\omega }(b_F)\).

  4. (D)

    Decision norm The bettor ought to avoid sure loss, i.e., do not adopt credences b such that there exist stakes \(s_F\) for all \(F\subseteq \Omega \) such that for all \(\omega \in \Omega \) it holds that \(\sum _{F\subseteq \Omega }r_{\omega }(b_F)< 0\).

  5. (E)

    Mathematical theorem All probabilistic and only probabilistic credences avoid sure loss. Furthermore, no probabilistic credences are loss dominated: for all different probabilistic credences \(b,b'\) there exist different \(\omega ,\omega '\in \Omega \) such that \(\sum _{F\subseteq \Omega }r_{\omega }(b_F)>\sum _{F\subseteq \Omega }r_{\omega }(b_F')\) and \(\sum _{F\subseteq \Omega }r_{\omega '}(b_F)<\sum _{F\subseteq \Omega }r_{\omega '}(b_F')\).
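A small computation illustrates the theorem. In the sketch below (the two-world setup and the function names are illustrative assumptions; the bettor’s return on F at world \(\omega \) is taken to be \(s_F\cdot (\mathbb {1}_F(\omega )-b_F)\)), all stake choices in \(\{-1,0,+1\}\) are enumerated: non-additive credences admit a sure-loss book, probabilistic credences do not:

```python
from itertools import product

OMEGA = (0, 1)  # two possible worlds, e.g. heads and tails
EVENTS = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def sure_loss(credences):
    """True iff some choice of stakes in {-1, 0, +1} per event guarantees
    the bettor a negative total return at every world, where the return on
    event F at world w with stake s is s * (1_F(w) - b_F)."""
    for stakes in product((-1, 0, 1), repeat=len(EVENTS)):
        if all(sum(s * ((1 if w in F else 0) - credences[F])
                   for F, s in zip(EVENTS, stakes)) < 0
               for w in OMEGA):
            return True
    return False

# probabilistic credences: additive, P(heads) + P(tails) = 1 -> no sure loss
prob = {EVENTS[0]: 0.0, EVENTS[1]: 0.3, EVENTS[2]: 0.7, EVENTS[3]: 1.0}
assert not sure_loss(prob)

# non-additive credences: P(heads) = P(tails) = 0.4 -> Dutch-bookable
nonprob = {EVENTS[0]: 0.0, EVENTS[1]: 0.4, EVENTS[2]: 0.4, EVENTS[3]: 1.0}
assert sure_loss(nonprob)
print("only the non-additive credences admit a sure-loss book")
```

Against the non-additive credences, for instance, selling both single-world bets at rate 0.4 loses the bettor 0.2 in every world.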

By altering the argument structure a philosopher can bring it about that different sets of credences appear to be rational. For example, the Dutch Book Argument has been modified such that the set of ‘optimal’ credence functions is the set of intuitionistic probability functions, see Weatherson (2003), or the set of Dempster-Shafer belief functions, see Shafer and Vovk (2001). Worth pointing out is that de Finetti and Weatherson designed a mechanism for two players while Shafer and Vovk devised a three-player game.

A further variant of the Dutch Book is the Czech Book in which the decision norm is altered to seek bets which ‘guarantee financial gain’. In this set-up, the purported rational credences are all those which are non-probabilistic, see Hájek (2008, 796-797). Extensions of the Dutch Book to groups of agents have been considered, see Kopec (2017) for some of the latest words. Dutch Book Arguments for different underlying logics were studied in Paris (2001).

There are two further sets of credences which can be argued for by modifications of the Dutch Book Argument. These modifications sprang to my mind by taking a mechanism design perspective, in which one is free to create. Firstly, a mantra in non-formal epistemology is: ‘seek truth and avoid error’. There is a natural sense in which credences with \(b_F=0\) whenever \({2}|F|<{|\Omega |}\) and \(b_F=1\) whenever \({2}|F|>{|\Omega |}\) best comply with the mantra. In Appendix A, these credences, as well as the credences complying with the Principle of Indifference, are shown to follow from modifications of de Finetti’s argument.

An assessment (of the epistemic force) of all these Dutch Book Arguments is outside the scope of this paper. What we think about these arguments turns, in part, on the design of the games. It is, however, clear that different writers will have (very) different assessments.

De Finetti also wanted his argument to elicit an agent’s degrees of belief. In his later writings, de Finetti turned against the game-theoretic approach, noting that a bettor playing his game will have some ideas as to how the book-maker will choose stakes. These ideas inform the bettor’s betting rates, which raises the pragmatic worry that operationalising the Dutch Book Argument to elicit probabilistic credences from the bettor may fail, see de Finetti (1974, Section 3.6.3). The designer of the game, de Finetti, hence did not satisfy his own preferences by designing the Dutch Book Argument.

3.3 Epistemic Utility Theory

Unsatisfied with his Dutch Book Argument, de Finetti (1974) gave a new argument for Probabilism. It only concerns one agent and is hence not susceptible to strategic game-theoretic considerations. This argument was made technically more sophisticated in Joyce (1998) and Predd et al. (2009). The innovative approach taken by Joyce is to construe utilities as ‘epistemic’ utilities. The resulting argument is hence an epistemic, i.e., non-pragmatic, argument for Probabilism. This led to the creation of a new area of formal epistemology, epistemic utility theory, see further Pettigrew (2013b; 2016).

Pettigrew (2013b; 2013a; 2015; 2016) charted Joyce’s single-agent argument for Probabilism as follows:

  1. (A)

    Rules On a finite set of possible worlds, \(\Omega \), containing the actual world \(\omega ^*\in \Omega \), fix a scoring ruleFootnote 4 within a particular class of scoring rules. The scoring rule measures epistemic utility of credences. The agent has to assign a credence to all \(F\subseteq \Omega \), \(c_F\in {\mathbb {R}}.\)

  2. (B)

    Outcome At the world \(\omega \in \Omega \), the outcome is the score of the assigned credences.

  3. (C)

    Utility The agent’s utility at world \(\omega \) is the score of the assigned credences c.

  4. (D)

    Decision norm The agent ought to avoid dominated credences: do not adopt any credences c such that there exist some other credences \(c'\) which have a better score at all possible worlds. All credences which are not score-dominated are admissible: it is admissible to adopt c if for all other credences \(c'\) there exists some world \(\omega \) at which c has a strictly better score than \(c'\).

  5. (E)

    Mathematical theorem All non-probabilistic credences are score-dominated. Furthermore, no probabilistic credences are score-dominated.
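For the Brier score, one of the standard scoring rules in this literature, the dominance claim can be illustrated directly. In the following sketch (the two-world coin setup, the coarse grid and the function names are illustrative assumptions), the non-probabilistic credences (0.6, 0.6) in ‘heads’ and ‘tails’ are score-dominated by the probabilistic (0.5, 0.5), while the probabilistic credences (0.3, 0.7) are dominated by no pair on the grid:

```python
def brier(credence_H, credence_T, world):
    """Brier score (lower is better) of credences in 'heads'/'tails' at a world."""
    truth_H = 1.0 if world == "H" else 0.0
    return (truth_H - credence_H) ** 2 + ((1.0 - truth_H) - credence_T) ** 2

def dominates(cH1, cT1, cH2, cT2):
    """First pair scores at least as well in every world, strictly better in one."""
    at_H = brier(cH1, cT1, "H") <= brier(cH2, cT2, "H")
    at_T = brier(cH1, cT1, "T") <= brier(cH2, cT2, "T")
    strict = (brier(cH1, cT1, "H") < brier(cH2, cT2, "H")
              or brier(cH1, cT1, "T") < brier(cH2, cT2, "T"))
    return at_H and at_T and strict

# the non-probabilistic credences (0.6, 0.6) are dominated by (0.5, 0.5)
assert dominates(0.5, 0.5, 0.6, 0.6)

# no credence pair on a coarse grid dominates the probabilistic (0.3, 0.7)
grid = [i / 20 for i in range(21)]
assert not any(dominates(a, b, 0.3, 0.7) for a in grid for b in grid)
print("(0.6, 0.6) is score-dominated; (0.3, 0.7) is not")
```

The grid search is, of course, only a spot check; the theorem covers all credences and all worlds.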

Following this map, Pettigrew and others have devised epistemic justifications of a number of well-known principles of rational belief formation: Bayesianism (Leitgeb and Pettigrew 2010a; 2010b), the Principle of Indifference (Pettigrew 2016b), the Principal Principle (Pettigrew 2012; 2013a), Probabilism (Joyce 2009), Probabilism for a many-valued logic (Janda 2016), and imprecise probabilities (Konek 2016).

It is a well-kept secret that all these philosophers have been engaged in one-round single-player mechanism design. That they have indeed engaged in this enterprise should be apparent to the reader. It is perhaps not surprising that the connection to game theory has not been flagged up before; after all, epistemic utility theory was based on a move away from game-theoretic territory. So, while epistemic utility theory provides a new interpretation, it continues to make use of the argument structure of de Finetti’s Dutch Book delineated in Sect. 2.3.

An assessment (of the epistemic force) of all these epistemic utility arguments is outside the scope of this paper. What we think about these arguments turns, in part, on the design of the games. In particular, there are disagreements about the choice of the appropriate scoring rule, e.g., Pettigrew (2016), McCutcheon (2019). It is, however, clear that different writers will have (very) different assessments.

3.4 Objective Bayesian Epistemology

Objective Bayesians—as opposed to subjective Bayesians—are not satisfied by adopting any old probabilistic credence function. They claim that the holy grail credences are given by the probability function which is calibrated to the evidence and has maximum entropy otherwise (Williamson 2010; 2022). In other words, objective Bayesians apply the Maximum Entropy Principle. See Paris and Vencovská (1989; 1990; 1997) for the classical justifications for the maximum entropy principle in terms of so-called ‘common sense principles’. Other classic works are Shore and Johnson (1980), Jaynes (2003), Shannon (1948), Rosenkrantz (1977). Recent works on objective Bayesianism consider multi-agent settings (Wilmers 2015), infinite domains (Landes et al. 2022; 2021a; Landes 2021a; 2021c; Williamson 2017) and computational aspects (Landes and Williamson 2016; 2022).

Recent works building on Topsøe (1979), see for example Grünwald and Dawid (2004), Landes and Williamson (2013; 2015), also argue for this seemingly outlandish claim. This work can be mapped out as a one-round two-player game between an agent and nature:

  1. (A)

    Rules On a finite set of possible worlds, \(\Omega \), containing the actual world, \(\omega ^*\in \Omega \), fix a set of probability functions defined on \(\Omega \), \({\mathbb {E}}\), and also fix a logarithmic scoring rule. The agent has to assign a credence to all \(F\subseteq \Omega \), \(d_F\in {\mathbb {R}}\) – or to all worlds \(\omega \in \Omega \). Afterwards, nature chooses some probability function in \({\mathbb {E}}\).

  2. (B)

    Outcome The credences chosen by the agent and nature’s choice of a probability function.

  3. (C)

    Utility The agent receives the expected logarithmic score of the assigned credences, where expectations are taken with respect to nature’s choice in \({\mathbb {E}}\).

  4. (D)

    Decision norm The agent ought to adopt credences with maximal worst-case expected logarithmic score.

  5. (E)

    Mathematical theorem The credences which are calibrated to the evidence (i.e., the credences which are in \({\mathbb {E}}\)) and which otherwise have maximum entropy are the only credences with maximal worst-case expected logarithmic score.
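This theorem can be illustrated numerically. In the following sketch (the three-world setup, the calibration constraint \(p(\omega _1)=\frac{1}{2}\) and the grid search are illustrative assumptions), the grid point maximising worst-case expected logarithmic score coincides, up to grid resolution, with the maximum entropy point of \({\mathbb {E}}\), namely (0.5, 0.25, 0.25):

```python
import math

# three possible worlds; the evidence E is, for illustration, the set of
# probability functions p on these worlds with p(w1) = 0.5
step = 0.05
grid = [i * step for i in range(21)]
simplex = [(a, b, 1 - a - b) for a in grid for b in grid if 1 - a - b > -1e-9]
candidates = [q for q in simplex if min(q) > 1e-9]  # log q must be defined
E = [p for p in simplex if abs(p[0] - 0.5) < 1e-9]

def worst_case_score(q):
    """min over p in E of the expected logarithmic score sum_w p(w) * log q(w)."""
    return min(sum(pw * math.log(qw) for pw, qw in zip(p, q)) for p in E)

best = max(candidates, key=worst_case_score)
print(best)  # close to the maximum entropy point (0.5, 0.25, 0.25)
```

Intuitively, nature punishes any credence function that strays from \({\mathbb {E}}\)’s maximum entropy point by shifting probability mass towards the worlds the agent underweights.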

While such a map for objective Bayesianism has never been drawn before, objective Bayesians openly state that they are using game theory in their argumentation.

An assessment (of the epistemic force) of all these arguments from objective Bayesians is outside the scope of this paper. What we think about these arguments turns, in part, on the design of the games. In particular, there are disagreements about the implementation of a logarithmic scoring rule (Landes and Williamson 2013; Landes and Masterton 2017; Crupi et al. 2018). It is, however, clear that different writers will have (very) different assessments.

While all three types of arguments in formal epistemology employ rather similar games (agents adopting credences), they stipulate different utilities and appeal to different decision norms (sure-loss avoidance, avoidance of dominated credences and maximisation of worst-case expected utility). Although the arguments were devised to support different norms of rationality, all of them can be construed as epistemological mechanism design.

4 Mutual Relevance

One might wonder why mechanism design and formal epistemology are mutually relevant; one may think that formal epistemology is concerned with the inescapable aspects of cognitive life, rational belief formation. On the other hand, mechanisms are designed and implemented with the sole purpose of furthering the designer’s economic (or social) aims. Designed mechanisms are hence easily avoided. And so, the beautifully crafted mechanisms lack relevance for formal epistemology.

Here are some reasons to think that these fields are mutually relevant.

  1. (i)

    While it is true that market actors have great freedom to choose their trading partners, it is not always the case that a market actor can avoid transactions with the designer of a mechanism. For example, those who require a monopolist’s goods have no choice; they have to trade with the monopolist. The mobile phone service providers discussed in Sect. 2.4 must take part in licensing auctions designed by states.

  2. (ii)

    Clever epistemic agents may elect not to play games featuring in Dutch Book Arguments on the grounds that they promise them very little in return. Although avoidable in principle, Dutch Book Arguments have been hugely influential. So, formal epistemology is also concerned with games which may be avoidable to some degree.

  3. (iii)

    As we saw in Sect. 3.2 on Dutch Book Arguments, there exists a number of different such arguments, set in different situations, which have fundamentally opposed purported rational credences. This makes one question whether there really is one single game against nature that a rational agent cannot avoid, and that is so important that credences must be optimal for this particular game and (possibly) very bad for other games. Indeed, taking a mechanism design perspective naturally led to the discovery of the DBAs detailed in Appendix A.Footnote 5 But if there is no such game, then formal epistemology and mechanism design are also close in spirit and hence mutually relevant.

  4. (iv)

    Epistemologists are part of a publish-or-perish environment in which they ‘have to’ devise novel justifications of rationality norms. The designed game (including rules, outcomes, utilities and the decision norm) becomes almost a free parameter. The philosophical value of the designed game is then assessed in terms of how well it captures rationality and thus furthers the epistemologist’s career. A designed mechanism is, from the outset, a free parameter. The economic (or social) value of the designed mechanism is assessed in terms of how strongly it furthers the designer’s aims. The parallels between the epistemologists and the designers of mechanisms are stark; the mutual relevance of the sub-fields follows.

5 Conclusions

This paper laid bare hitherto unnoticed analogies between mechanism design and formal epistemology. The study of computer science and economics literature may thus be helpful for, say, our understanding of boundedly rational agents (Simon 1959) and/or enlarging the area of applicability (of justifications) of norms of rationality. The expertise of mechanism designers can significantly advance our grasp of concepts and techniques in formal epistemology.

For example, economists are well aware that economic agents bend and break rules at times. Such behaviour renders all the mathematical theorems moot. Designers of auctions have investigated mechanisms for dealing with such shady agents (Klemperer 2002; Laffont and Martimort 2000). Agents which can bend the rules are only just entering the philosophical literature; see Greaves (2013) for shady epistemic agents and Konek and Levinstein (2019) for a first proposal of how to deal with these agents. It seems sensible to hope that such shared problems possess shared solutions.

Furthermore, philosophers are well-equipped to critically appraise current work in economics and computer science; after all, they have experience designing mechanisms.