1 Introduction

This paper highlights the significance of financial practice in establishing mathematical criteria relating to judgement in the presence of uncertainty. When applied to physical systems, mathematics passively represents those systems whereas when mathematics is applied to social systems it can actively direct those systems. Finance provides an example of this effect and this paper considers the role of finance in the development of mathematical approaches to judgement under uncertainty.

The paper draws on recent research on the role of commercial ethics in the development of probability up to the eighteenth century, including Hadden (1994), Crosby (1997), Kaye (1998), Franklin (2001), Sylla (2003, 2006) and Bellhouse (2005). Developing Johnson (2015), this is combined with an analysis of contemporary theories of financial mathematics, developed between Harrison and Kreps (1979) and Delbaen and Schachermayer (1998), and provides insights into the Black–Scholes–Merton (BSM) model, which is of interest in the history and philosophy of mathematical practice (Wagner 2017, pp. 6–10). The paper finds similarities between the Fundamental Theorem of Asset Pricing (FTAP) and scholastic attitudes to commercial ethics that become clear when the abstract mathematics of the FTAP are considered with an understanding of the scholastic perspective. This highlights normative aspects of an ostensibly positive theory and reflects this paper’s motivation in Putnam’s criticism of the ‘fact-value dichotomy’ (Putnam 2002). The analysis of the FTAP leads to some novel insights on the role of financial practice in justifying the Dutch Book Argument (DBA), which is itself the most popular justification for subjective probability (Skyrms 1984; van Fraassen 1984; Hájek 2009).

The DBA is explained in terms of the practice of ‘dual quoting’ in financial markets and how this practice imposes sincerity on market participants. On this basis, the final section explains how markets are deliberative and how the FTAP’s basis in measure theoretic probability means that it can accommodate the dynamic nature of markets, and so is not subject to some of the criticisms of the DBA when it is considered in terms of subjective probability. Building on Habermas (1984) and Misak (2002) the case is made that for financial markets to be successful, they must adhere to a principle of benevolence, or charity. The main conclusion is that judgement in social systems must consider the subjective truthfulness and social rightness, not just the objective truth, associated with scientific representations.

The approach taken in the paper is abductive theoretical analysis (Peirce 1957, pp. 236–237; Swedberg 2015) that explores the relationship of financial practice, mathematics and ethics to judgement under uncertainty. Abductive reasoning is based on gathering observations as widely as possible and the argument presented here uses scholarship from mathematics, finance, history, sociology, religion, literature and philosophy. The novelty of the argument is based on its broad basis in scholarship and a focus on financial practice. This means it is distinct from Thicke (2017), for example. An abductive argument should resemble a cable rather than a chain, meaning that different threads work coherently together rather than relying on a sequence of irrefutable deductive links.

The paper is empirical in Locke’s sense of exploring the origins and evolution of ideas, specifically the development of mathematical ideas and financial theory in the context of financial practice. Consequently, the paper might appear to be history: it is not. It uses historical research as strands in the overall argument. Much of this historical research might be unfamiliar and so summaries are presented.

Care should be taken in reading this paper not to confuse the financial practices described in relation to markets with economic theories of markets. Economics can exist without commercial exchange—in pure command economies—while commerce can exist without money—in barter exchange. Finance, in the context of this paper, is the use of money to enable temporally or spatially separated transactions: an immediate cash purchase is commercial not financial in this construction. Finance provides interesting problems for science because the temporal and spatial separation of transactions creates radical uncertainty (Knight 1921; King 2016) that defies prediction, so that there are few ‘matters of fact’ relating to financial transactions. The radical uncertainty of finance creates the connection to mathematical probability. While the ‘philosophy of economics’ is an established field, though relatively small compared to the ‘philosophy of mathematics’, little attention has been paid to the role of finance in generating norms that more theoretical fields rely on. We do not explore the reasons for this lack of attention but offer two tentative explanations. Firstly, finance is practical, rather than theoretical as economics and mathematics are. Secondly, finance is widely perceived as essentially corrupting, though this was not always the case (Fourcade and Healy 2007), and so cannot be considered as the basis of legitimate fields of study. This paper challenges this view.

The next section puts the paper in context and explains the approach it will take. Following this, the main argument, that the ethical assessment of financial contracts had an impact on the mathematical representation of chance, begins by observing that the financialisation of Medieval European commerce created a need to understand fluctuating, abstract relationships, and this stimulated the widespread use of mathematics. This occurred as Aristotelian philosophy began to dominate Catholic thinking and, on this basis, the Church developed a doctrine on usury. A synthesis of commercial practice, Catholic doctrine and mathematics resulted in Albert the Great realising that Accidents, not Substances, were measured, initiating the mathematisation of physics. In this context the idea of mathematical probability emerged out of the study of financial ethics, which was regarded as involving subjective and ethical judgement. These ideas were central in the development of probability through to the eighteenth century, when the subject began to be understood in terms of repeatable, objective experiments.

On this basis, the fourth section describes the Fundamental Theorem of Asset Pricing (FTAP), which is the foundation of contemporary financial mathematics. The FTAP is a consequence of Kolmogorov’s axiomatisation of probability using measure theory and was formulated to provide a Grundbegriffe for asset pricing based on the Black–Scholes–Merton option pricing model, which was a product of academic finance. The FTAP only guarantees precise prices if markets are complete, and the causes and consequences of incompleteness are discussed. The economic justification for how financial markets ensure that prices conform to the FTAP is explained. On the basis of the discussion in the third section, the FTAP is related to scholastic ethics. The section ends by observing that the FTAP can be viewed as an abstract mathematical result, a description of financial practice or an expression of financial morality.

The FTAP is closely related to the Dutch Book Argument (DBA), regarded as the most popular justification for modern approaches to subjective probability and Bayesian Inference. This is discussed in the fifth section of the paper. An objection raised about the DBA is that it is based on an assumption that there are ‘bookies’ that can compel an agent to bet on their beliefs. The main contribution of the paper is in a description of how this compulsion is accomplished in modern financial markets, based on the institution of ‘jobbers’ or ‘market makers’.

Jobbing is made possible because of the abstraction of physical assets into contracts through financialisation. This enables those without property to engage in market speculation, and soon after the practice emerged there were attempts by property owners to suppress it. In order to address concerns that jobbers were able to manipulate markets, the policy developed of requiring jobbers to simultaneously quote prices at which they would buy and sell an asset, allowing the counterparty to choose which side to take. This ‘dual quoting’ imposes subjective truthfulness on the jobbers, since a jobber is required to act on their statements. Despite being sincere, a jobber’s price should not be assumed to represent the jobber’s belief. This toleration of falsehoods facilitates the reliability of jobbers’ quotes. While jobbers deliver subjective valuations, objective evaluation of a price is delivered through the FTAP. Utility plays an important role in economic valuation, by modelling individual preferences, whereas it plays only a peripheral role in the FTAP or in jobbers’ practice. The FTAP and DBA price by selecting a probability measure. A discussion of the historic relationship between utility-based and probability-based approaches to pricing is given.

The DBA relates to establishing the coherence of a single statement. Jobber-mediated financial markets involve a sequence of statements—prices—being made by a community. Because the FTAP is based on Kolmogorov’s measure theoretic probability it accommodates the dynamic nature of markets and avoids some criticisms of the DBA, which is usually considered in terms of subjective probability. These points are made in the sixth section, which also discusses the role of the abstracting process of financialisation in enabling jobber-mediated markets and the emasculation of power. There is a discussion of the idea that markets are centres of communicative action and the observation that this implies the social correctness of a price needs to be accommodated. An argument is presented that the classical virtue charity/caritas/ἀγάπη/ihsan (إحسان)/ren (仁) covers this criterion, just as the mathematics of the FTAP covers the objective criterion and the institution of jobbers the subjective criterion for effective market deliberation. This argument is made with reference to Shakespeare’s The Merchant of Venice, the failure of the hedge fund Long-Term Capital Management in 1998 and the role of British Quakers in funding the ‘Industrial Revolution’. This section addresses concerns that markets are intrinsically corrupting and so should not be considered as enabling mathematics. It relates to discussions about the moral status of markets (Fourcade and Healy 2007) and challenges the conventional view that financial economics is ‘under-socialised’ (Granovetter 1985).

2 Context and fundamental ideas

The topic this paper addresses is that of rational methods of dealing with uncertainty. In classical Greece, the application of knowledge, techne, was seen as counterbalancing the problems of Tyche (Nussbaum 2001, pp. xvii–xviii). Later, Aristotle distinguished phronesis, episteme and techne. Techne was the ability to produce something material and, by the nature of material objects, was a means to some other end (Aristotle 2011, p. VI.4). Episteme was related to understanding universal ideas, so could be taught, and provided the basis for techne. Aristotle did not think different ethical goods were commensurable (Nussbaum 2001, pp. 294–297). This implied that there could not be a universal episteme for morality and, consequently, morality had to be understood on the basis of individual experience rather than universal ideas. This experiential knowledge was called phronesis (Aristotle 2011, p. 109b15; Long 2006, p. 162), and enabled people to “manage well the circumstances which they encounter day by day, and who possess a judgment which is accurate in meeting occasions as they arise” (Paul 2014, p. 11).

Aristotle’s definition of phronesis, as a characteristic founded in experience rather than episteme, meant it was incompatible with mathematics, which related to abstract generalisations, not to specific circumstances (Aristotle 2011, p. VI.8). The context of this paper is how mathematical approaches to dealing with uncertainty, being universal and indubitable, replaced approaches rooted in personal experience, which were contingent and specific. This transformation began in the mid-seventeenth century, for example in Hobbes’ claim that ethics, relating to personal preferences, could not be part of philosophy, which dealt with universal truths (Hobbes 2017, pp. VIII, XII, XLVI). By the mid-nineteenth century the process was complete, with consequentialist morality being defined in terms of prudential calculation.

The idea that provides the analytical framework for this paper is Locke’s distinction of physica, practica and semeiotika (Locke 1690, pp. 21.1–4). Physica related to the knowledge of things (including spirits); practica to the attainment of the ‘right’ (ethics); and semeiotika referred to the signs used to understand and convey ideas to others (logic, language, mathematics). The distinction between physica and practica is evident in Kant’s separation of the Kritik der reinen Vernunft and the Kritik der praktischen Vernunft. The distinction is also echoed in Laplace’s separation of mathematics into Mécanique Céleste (1799–1825)/Exposition du système du monde (1796) and Théorie analytique des probabilités (1812)/Théorie des probabilités (1819), suggesting that there was a mathematics pertinent to ‘things’ and a mathematics relevant to judgements. This paper is concerned with the use of mathematics as applied to judgements in the attainment of the right. Locke’s distinction of physica, practica and semeiotika was formulated, and remained influential, throughout the period when mathematical approaches to dealing with chance were replacing those founded on personal phronesis.

Hacking (2006) stimulated a revival of interest in the history of mathematical approaches to dealing with uncertainty by arguing that before the Renaissance, probability related to matters of opinion, with opinions being founded on authority. During the Renaissance the idea of non-deductive inference based on ‘internal evidence’ emerged, and Hacking argued that the older, subjective ‘probable opinion’ co-existed with a more modern objective, aleatory and ‘statistical’ approach until the former was supplanted by the latter in the nineteenth century. Daston (1980) adapted Hacking’s epistemic-aleatory distinction into a ‘moral’ approach to probability, centred on equity and justice, and a ‘prudential’ approach, which “weighed individual possibilities for profit or loss with an eye to securing an advantage” (1980, pp. 241–242). The ‘prudential’ approach to probability replaced the ‘moral’ approach with “the advent of a new model of explanation for the social sciences which emphasized social regularities rather than individual rationality, coupled with recognition of the independence of mathematical probability from its applications” (Daston 1980, p. 235). In Daston (1998) the argument becomes one of a transformation of the subjectively reasoning l’homme éclairé into the objectively calculating l’homme moyen.

The connection between the emergent mathematical ideas around chance and Locke’s philosophy was evident in the description of probability given in the 1765 edition of the Encyclopédie (Lubières 2008). There, a physical approach to probability is described based on the “nature of things” while the practical approach to probability is founded on experience of the past, which is used to predict the future. In this paper the two approaches are labelled as objective and subjective, respectively. Classical probability, as developed before 1838, made little distinction between objective and subjective probability. However, by the first half of the twentieth century, there were clear demarcations between objective and subjective approaches (Kendall M. G. 1949; Aldrich 2008). Objective probability encompasses the empirical approaches associated with Montmort, de Moivre, Venn, Fisher and von Mises. In this paper, it also includes the logical approach to probability, touched on by Keynes and developed by Carnap. Subjective probability is implicit in Bernoulli (Hacking 1971) and explicit in Bayes, De Morgan, Ramsey, de Finetti and Savage.

Daston created an association between the objective and prudential approaches to probability in contrast to the moral and subjective approaches. The canonical origin of mathematical probability is the 1654 Pascal and Fermat solution to the Problem of Points, which is considered to be part of the ‘moral’ approach to probability (Sylla 2003, 2006). However, the model Fermat and Pascal employed is formally identical, as a multi-period binomial model, to the 1979 Cox–Ross–Rubinstein (CRR) option pricing model (Cox et al. 1979), which is regarded as an objective and prudential model of finance. While the formal equivalence is clear, what is less obvious is whether there is a correspondence between Pascal and Fermat’s conception of probability and that used by CRR. On the basis of scholarship originating in this observation, the paper makes the assumption that the material nature of financial contracts, their physica as ‘things’, has not changed significantly between medieval times and today, but that there have been significant changes in the understanding of the mathematics that describe contracts—semeiotika—and in society’s ethical understanding of them—practica. This assumption is based on comparing medieval and contemporary financial instruments. For example, the paper explains that the underlying structure of modern securitisation is present in the medieval ‘Triple Contract’; the tranching of securities, associated with modern Collateralised Debt Obligations, is present in medieval corpo/supra corpo structures; Bills of Exchange, forward contracts and options all existed by 1650. Historians are wary of making this type of comparison across time. The approach is justified here on the basis that the argument is that the nature of the contracts as material ‘things’ has not changed, but the understanding of them in terms of mathematics and ethics has changed radically. The paper argues that pre-modern ethics are vestigial in the contemporary financial mathematics of the FTAP.

On the basis of this observation about mathematical probability, the paper focuses on the role of finance in stimulating mathematical ideas. Financial practice is closely related to quantification and, subsequently, to the mathematisation of society that resulted in the replacement of the moral/subjective approach to handling uncertainty with the prudential/objective approach. The relationship began in classical Greece. Netz (2002) discusses how money, as a counter, stimulated the development of numeracy and how numeracy impacted political decision making. Seaford (2004) makes a more extensive argument that the widespread use of money by the Greeks enabled them to abstract from the concrete to the imaginary and promoted individualism, while a communal attitude to money ensured that society remained cohesive. A consequence of these ideas is found in the Platonic view of justice that was employed in Aristotle’s consideration of commercial exchange in Nicomachean Ethics.

The impact that finance had on the development of western science is a fundamental idea underpinning this paper. The general idea can be traced to Franz Borkenau’s Der Übergang vom feudalen zum bürgerlichen Weltbild of 1934, in the middle of a decade that saw a series of important works on the development of science. Hadden (1994) presents a summary of the origins of Borkenau’s theory, its decline and re-emergence, in the introduction to his development of Borkenau’s thesis. In summary, Hadden writes that Borkenau took a Marxist perspective and argued

The proliferation of commodity exchange in early modern Europe – the comparison of dissimilar goods and of the different labours contained in each for the purpose of reckoning up value – provided a model [Vorbild] for what [Borkenau] terms “the mathematical mechanistic world picture.” The reduction of social relations to the value of commodities and the calculation of this value paralleled, and was extended to, the reduction of nature to body and the calculation of the motion of bodies. (Hadden 1994, p. xi)

Part of the liberal response to fascism and communism from the late 1930s was the ‘Whig’ interpretation that presented the development of western science as a consequence of Protestant reason, to counter Marxist explanations, and, as a result, Borkenau’s thesis was largely ignored until it was republished in 1976. Kaye (1998) took a non-Marxist perspective on the thesis by focusing on a synthesis of medieval finance with the scholastic interpretations of Greek ideas carried in Nicomachean Ethics. This was in response to earlier work on the role of medieval universities in the emergence of western science by John E. Murdoch and Edith Dudley Sylla.

An important justification of the role of finance in stimulating modern science is that it provides an answer to Needham’s question as to why European science overtook Islamic and Asian sciences in the early modern period. The claim is that in medieval continental Europe there was, uniquely, a heterogeneity of money and prohibitions on usury. Together, these required merchants to model mathematically, particularly for the long-distance trade of ‘merchant adventurers’ and the Kaufmann (as distinct from the Händler), and this mathematical world-view provided the basis for the mathematisation of science (Restivo 1982, p. 128; Hadden 1994, p. 84; Crosby 1997, pp. 69–74; Kaye 1998, pp. 1–10; Goetzmann 2004, pp. 203–275).

The work of Hacking, Daston and Kaye has been followed by work investigating the ethical origins of probability. Bellhouse (2005) considers Cardano’s Liber de Ludo Aleae, of the mid-sixteenth century, by observing that earlier historians of probability have found the work incoherent. Bellhouse argues that this apparent incoherence is a consequence of those historians examining the work in the context of modern probability theory. He finds it coherent if it is read as an attempt to establish the ethical basis of gambling. Sylla (2003, 2006) investigates the works of Pascal and Fermat, Huygens and Bernoulli in the seventeenth century. She notes that all of these works are underpinned by ethics. Like Bellhouse, she observes that the important final part of Ars Conjectandi, which introduces the Law of Large Numbers, seems incoherent in the context of the modern understanding of probability. As well as arguing that the work was inspired by Bernoulli’s religious beliefs, Sylla (2006, p. 27) highlights its problematic nature in that it discusses situations where the sum of probabilities is greater than one. This is a problem for objective probability but has a clear ethical interpretation, as this paper explains. The practical approach to probability is present in Laplace and Poisson but became obscured by objective probability, beginning with the widely read and influential works of de Moivre and Montmort, in the early eighteenth century, which developed probability independently of ethical considerations, as a problem of physica.

This paper does not aim to develop the paradigm that medieval finance stimulated the development of modern science or that commercial ethics inspired mathematical probability. Rather, in terms of Locke’s differentiation of knowledge, it seeks to address a problem implicit in Borkenau’s original thesis connected to the fact-value dichotomy. Specifically: does a reliance on mechanistic interpretations of phenomena, in general, create problems in understanding social phenomena; does a focus on physica mean there is an inadequate understanding of practica?

The paper addresses this question based on the following understanding. Hirschman has argued that the achievement of Adam Smith, and the creation of capitalism, is in Smith’s synthesis of an incompatible pursuit of passions and interests into the monomania of wealth accumulation (Hirschman 1997, pp. 110–113). Smith’s synthesis can be seen as addressing Hume’s law, that what ‘is’ cannot imply what ‘ought’ to be, because material money, what ‘is’, can measure the ‘right’, what ‘ought’ to be. This is related to utilitarian/consequentialist morality and is the basis of the orthodox economic paradigm, and principal moral imperative, of maximising expected utility. Smith’s observation, when taken in isolation, conforms to the teleological materialism of both Marxism and liberal scientific positivism. However, it diverges from Aristotle’s teleology, the pursuit of eudaimonia, because in Aristotle’s conception moral goods are not commensurable, hence morality is not part of episteme. A consequence of this divergence is the post-Enlightenment focus on the materiality of physica while neglecting the social relations underpinning practica.

A conventional refutation of Hume’s Law is in discursive ethics (MacIntyre 1959). This is reflected in this paper employing Habermas’ Theory of Communicative Action (1985), which directly addresses the problem that the scientific advances of the eighteenth century resulted in the catastrophes of twentieth century totalitarianism. As a result, central to the paper is that financial markets are viewed as discursive arenas aimed at agreeing on the price of an asset in a radically uncertain world. In this sense, a quoted price is a statement and must conform to the principles of communicative action. This points to thinking of money as being part of semeiotika rather than physica: it is a language rather than a thing. This view is rooted in Montesquieu’s conception of ‘ideal’ money as the sign and representative of things while everything is a sign and representative of money (Montesquieu 1752, p. 408). William James, making a connection between money and language, argued that “Our thoughts and beliefs ‘pass’, so long as nothing challenges them, just as bank-notes pass so long as nobody refuses them.” (James 1907, p. 207). Another observation supporting this approach is in de Finetti’s use of “Pr” to represent ‘probability’, ‘price’ or ‘prevision’, signifying a correspondence between probabilities, which according to de Finetti do not exist, and prices. More recently, Shell has examined the connections between money and language (Shell 1982), but the focus on literature, as distinct from science, is not particularly relevant here.

The underlying thesis of the paper is regressive in the sense that it rejects the modern assumption that moral right can be justified through the anticipated accumulation of money. Implicit in the paper is the assumption that judgement must be based as much on practica as on physica, as recognised in an Aristotelian framework, and the overall argument relates to Anscombe’s (1958) observation that modern, consequentialist, ethics are inadequate in the face of radical uncertainty, since consequences cannot be foreseen. However, the paper recognises that the Aristotelian framework, which excludes mathematics from ethical judgement, is also inadequate, hence the reliance on Locke’s physica/practica/semeiotika in order to create room for mathematics. The paper is interested in how finance is represented mathematically and how the mathematical tools are socially constructed based on the practica of financial contracts; how norms—rules that guide behaviour: part of practica—emerge out of practice and become formulated as explicit rules or principles—expressed through semeiotika—because they work (Brandom 1994, p. 21).

Combining this approach with Locke’s categorisation of knowledge means that the paper understands financial contracts in three dimensions. They are ‘things’ that are known through physica. They are also understood through semeiotika, the mathematical models that represent value and the prices quoted. Finally, financial contracts should be understood in terms of practica. It needs to be emphasised that financial contracts should not be considered exclusively in terms of either physica, practica or semeiotika, rather the different perspectives interact: to quote Locke “All that can fall within the range of human understanding is in three categories. The nature of things as they are in themselves, their relations, and their manner of operation.” (Locke 1690, p. 21.1)

The contribution of the paper is to employ the idea that ethics are embedded in the mathematics used to value financial contracts in order to understand the relationship between markets employing ‘dual quoting’ and the Dutch Book Argument. In this case it is again argued that financial practices inform mathematical representations and, furthermore, that the ethics underpinning those practices are important in contemporary applications of the mathematics. This argument is based on the idea that ethics are implicit in the Fundamental Theorem of Asset Pricing, which is a consequence of Kolmogorov’s abstract axiomatisation of probability.

Kolmogorov’s axiomatisation of probability (Kolmogorov 1933) was formulated with little reference to the application of either objective or subjective probability to phenomena. After Kolmogorov received his doctorate, he was given permission to visit France and Germany, returning to Moscow in 1931. Before he left Russia, Kolmogorov had become interested in probability, which had been an important topic in Russian mathematics at the end of the nineteenth century but was peripheral in France and Germany. However, it is possible that his decision to focus on probability on his return had been motivated as much by politics as by an interest in practical applications of mathematics. By the 1930s, Cantor’s theories dominated mathematics but most Marxist mathematicians, committed to materialism, rejected proofs that relied on ‘ideal’ entities that had no physical manifestation, such as trans-finite numbers. In 1930, Kolmogorov’s doctoral supervisor, Nikolai Luzin, was criticised for being too abstract and bourgeois in this context, and he would be publicly condemned in 1936 (Lorentz 2001, pp. 29–30). Kolmogorov’s decision to focus on probability might have satisfied his personal preference to develop an abstract Grundbegriffe, which would have been admired in France and Germany, under the constraints of ‘Soviet’ mathematics, which required material foundations (Kendall et al. 1990).

It was quickly realised that, in abstracting probability away from applications, Kolmogorov had resolved the objective/subjective distinction, because Bayes’ Theorem, associated with subjective probability, could be deduced from Kolmogorov’s axioms, while the Grundbegriffe directly addressed the specific problem of managing infinite sets, an issue in the objective approach to probability (Reitz 1934). However, Kolmogorov’s abstract approach came in for criticism from a range of applied probabilists. Von Mises criticised Kolmogorov’s generalised framework as unnecessarily complex (von Mises 1982, p. 99) while Kendall argued that abstract measure theory failed “to found a theory of probability as a branch of scientific method” (Kendall M. G. 1949, p. 102). De Finetti was suspicious of trans-finite mathematics, from a Machian empiricist perspective rather than a Marxist one, and preferred probability founded on finite additivity rather than countable additivity (Bingham 2010, p. 5). More recently, Jaynes has championed Savage’s subjectivist approach in comparison to Kolmogorov’s measure theoretic approach as having a “deeper conceptual foundation which allows it to be extended to a wider class of applications, required by current problems of science” (Jaynes 2003, p. 655). The objections of empirical scientists to the abstract nature of measure theoretic probability can be accounted for by its lack of physicality, its not being concerned with the ‘nature of things’. However, Kolmogorov’s approach is more general, and this lack of physicality means that probability can be applied to abstract concepts. Kolmogorov had separated the mathematical representation of probability from specific applications, whether they related to physica or practica (Snell 1997, pp. 304–305). On this basis, the paper categorises Kolmogorov’s approach to probability as neither subjective nor objective but as semeiotika.
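Since later sections repeatedly appeal to ‘Kolmogorov’s Axioms’, it is worth recording the standard textbook statement (a summary of the usual formulation, not a quotation from the Grundbegriffe): given a sample space \(\Omega\) with a collection of events \(\mathcal{F}\), a probability measure \(\Pr\) satisfies

$$\Pr \left( E \right) \ge 0\;{\text{for all}}\;E \in \mathcal{F},\quad \Pr \left( \Omega \right) = 1,\quad \Pr \left( {\bigcup\limits_{i = 1}^{\infty } {E_{i} } } \right) = \sum\limits_{i = 1}^{\infty } {\Pr \left( {E_{i} } \right)}$$

for pairwise disjoint events \(E_{i}\). It is the third axiom, countable additivity, that distinguishes the measure theoretic approach from de Finetti’s finitely additive alternative.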

3 The foundations of mathematical probability in commercial ethics

In Catholic Europe, between 950 and 1250 CE, the population doubled while the amount of coin circulating increased six-fold (Pounds 1994, pp. 40–124; Kaye 1998, pp. 15–16; Nicholas 2006, p. 72). This monetisation signified a shift in commercial practice as transactions became abstracted from commodity for commodity exchange to commodity for money. In Italy, twenty-eight cities issued their own currency at one time or another (Goetzmann 2004, p. 18), while the French King minted the livre tournois and livre parisis in competition. Typical commercial projects, such as an Italian cloth merchant buying wool and selling cloth, could involve at least five currencies (Crosby 1997, p. 201). Coins were often debased, by counterfeiters or states, so merchants had to deal with the variability of the value of currency as well as fluctuations in the supply and demand of commodities. Hence, the heterogeneous monetisation of Medieval Europe not only involved abstraction but also created a commercial environment where there were no stable relationships that could be relied on by merchants (Crosby 1997, pp. 205–210; Goetzmann 2004, p. 19).

To facilitate long-distance trade, monetisation developed into financialisation as specie was replaced by contract. The Bill of Exchange, combining contracts for forward delivery and foreign exchange, emerged and would remain the principal commercial contract well into the twentieth century. Practices associated with the financial crises of 2007–2009 are evident in medieval finance. For example, the basic structure of securitisation (the basis of ‘Mortgage Backed Securities’), consisting of a loan backed by assets, the transformation of a variable cash flow into a constant one, and facilities for guaranteeing the constant cash flows, featured in the ‘triple’ or ‘German’ contract that was banned by the Catholic Church in 1586 (Decock 2012). Collateralised Debt Obligations (CDOs) existed in the corpo/supra corpo structures that medieval merchant-bankers, such as the Fuggers, employed (Parker 1974, p. 554).

In 1202 CE, Fibonacci published the Liber Abaci, a manual providing merchants with the mathematical tools to enable them to manage complex financial transactions. The Liber provided a focus for the Abaco, or ‘reckoning’, schools that emerged to teach Europeans mathematical techniques that had been developed to the east and south of the Mediterranean (Høyrup 2014). These taught merchants mathematical techniques that supported abstracting from concrete commodities into quantified prices and helped them to understand the changing relationships between the different currencies (Crosby 1997, pp. 72–74). Abaco-trained merchants created a reservoir of mathematically literate people on which the scientific developments of the seventeenth century were built (Poitras 2000, pp. 22–29; Fibonacci and Sigler 2003, pp. 1–11; Heeffer 2008).

A related consequence of the monetisation of society was the Catholic Church’s elevation of usury into a mortal sin in 1179 CE, at the same time as exchange was being monetised and Aristotle’s philosophy began to dominate Catholic thought. Usury and interest are distinct; usury is charging for the use of money whereas interest originates in the compensation for breaking a contract (a poena/ποινή) (Poitras 2000, p. 87). Aristotle had condemned usury on the basis that money is ‘barren’ (Politics, Book I, 10, no. 5, 1258b) unlike grain or livestock, which are productive. Hence a grain loan could always charge interest based on damnum emergens, lost production, or lucrum cessans, lost opportunities to profit, but a money loan would have to justify any interest charged. Around 1200, the scholastic Peter the Chanter argued that “a buyer or a seller may be excused from usury if they expose themselves to the risk of receiving more or less” and some 40 years later Alanus Anglicus determined that turpe lucrum, the shameful profit of usury, did not exist if the future price of the goods was uncertain in the mind of the merchant (Rothbard 1996, pp. 41, 45; Franklin 2001, p. 263). The Church’s attention to the issue of usury placed a constraint on financial practice and necessitated a careful examination of uncertainty, since the charging of interest was only legitimate in the presence of uncertainty.

Albert the Great’s study of Nicomachean Ethics involved a synthesis of commercial practice, mathematics and scholastic scholarship that would have important consequences for European scientific thought. In Book V of Ethics Aristotle considered ‘Justice’ within a Platonic framework wherein it was considered to be the virtue that ensured a functionally differentiated system, such as a society, worked well (Plato 1969, pp. 4.434a–c). Aristotle argued that exchange was essential in binding society together and, in order to deliver social cohesion, there needed to be equality in what was exchanged; it had to be a fair and clearly reciprocal arrangement: “there is no giving in exchange” (Aristotle 2011, pp. 1133a1–1133a5; Aristotle 2011, pp. 1133a15–1133a30; Judson 1997). Aristotle then observed that “there would be no association without exchange, no exchange without equality and no equality without commensurability” (Aristotle 2011, pp. 1133b15–1133b20). Aristotle had discussed measurement, necessary for commensurability, in the Organon and claimed that a measure shared the same Substance as the subject of measurement. Albert realised that money, the measure employed in commerce, only occasionally shared the Substance of the measured and it was the Accidents, not the Substance, that were being quantified. This implied that different Substances could be commensurable; length and duration could be handled together just as arms of cloth, rolls of cotton and Pisan pounds were commensurable. These observations enabled a revolution in science that saw a novel integration of physics and mathematics, initiated by the Merton Calculators and then developed by Buridan and Oresme prior to its full expression in Europe during the seventeenth century (Hadden 1994; Crosby 1997; Kaye 1998).

Medieval Europe’s commercial circumstances of heterogeneous currencies constantly changing their relationships, in a financialised society that abstracted physical commodities into contracts, under Church scrutiny monitoring the usury laws, provided a vital environment for the development of European mathematics. Thomas Bradwardine, of the Merton Calculators, Copernicus, Mercator, Stevin, da Vinci and Galileo all had an abaco-based mathematical training, which had an impact on how they understood mathematics. Fibonacci introduced the use of Hindu/Arabic numerals to aid merchants. Stevin popularised the use of decimals, which led to the idea of a polynomial. The identification of the number \(e\) originated in Bernoulli’s study of the growth rate of bank accounts. These are all examples of the impact of the synthesis of mathematics and finance in the abaco culture.
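The last point can be illustrated with a standard reconstruction (not Bernoulli’s own notation): a unit deposit earning 100% interest compounded \(n\) times a year grows to \(\left( {1 + 1/n} \right)^{n}\), and Bernoulli asked what happens as compounding becomes continuous,

$$\mathop {\lim }\limits_{n \to \infty } \left( {1 + \frac{1}{n}} \right)^{n} = e \approx 2.71828,$$

so that continuous compounding at a rate \(r\) over a time \(t\) produces the growth factor \(e^{rt}\) that is ubiquitous in modern financial mathematics.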

The financial practice of money abstracting from the concrete commodities combined with Albert’s realisation that money provided a universal measure also gave rise to the concept of mathematical probability. Albert’s student, Thomas Aquinas, examined the morality of exchange in the ‘Second part of the Second part’ of the Summa Theologica. Aquinas began by asking the question ‘Whether it is lawful to sell something for more than it is worth?’ and deduced that, as long as there is no fraud involved and an equality, required by Aristotle, between what is being exchanged is established, then there is no ethical problem (Aquinas 1947, pp. Q77,1). Aquinas then discussed a specific question: ‘Whether the seller is bound to state the defects of the thing sold?’ and presented a well-known problem from stoic philosophy.

A grain merchant from Alexandria arrives at Rhodes, which is gripped by famine. The merchant knows that other merchants are following with plentiful supplies of grain, though the town’s inhabitants do not know this. How should the merchant price the grain they have?

The classical argument was that the merchant should not charge the ‘market price’, given the knowledge of more supplies coming. Aquinas disagreed.

in the case cited, the goods are expected to be of less value at a future time, on account of the arrival of other merchants, which was not foreseen by the buyers. Wherefore the seller, since he sells his goods at the price actually offered him, does not seem to act contrary to justice through not stating what is going to happen. If, however, he were to do so, or if he lowered his price, it would be exceedingly virtuous on his part: although he does not seem to be bound to do this as a debt of justice. (Aquinas 1947, pp. Q77,3)

Aquinas’ justification was based on the observation that while the merchant may believe there are more grain shipments on the way, they do not know; the future is uncertain and this uncertainty creates the opportunity for profit in accordance with usury doctrine (Rothbard 1996, p. 53).

The ‘Spiritual Franciscan’ Pierre-Jean Olivi disagreed with Aquinas’ position. Olivi argued that the metaphysical probability of more grain arriving in Rhodes created an abstract expectation that was as important in market exchange as the concrete facts of the market prices. This was the basis of a significant conceptual leap: since these expectations were expressed as quantified prices, the implication was that probability, itself, was quantifiable (Kaye 1998, p. 119; Franklin 2001, pp. 265–267).

Olivi and Aquinas were united in recognising that if a price was simply the ‘market price’ or based on a calculation then personal responsibility was removed from economic activity. They recognised that merchants had to employ their reason to guide their actions, meaning that a price needed to be just (Kaye 1998, p. 25). The ‘just price’ was a nebulous ideal that guaranteed fairness in exchange and was determined as much by morality as either market sentiment or calculation (Monsalve 2014).

This approach to probability, as a moral science concerned with establishing equality in exchange, persisted through to the early eighteenth century. Cardano’s mid-sixteenth century work on probability, Liber de Ludo Aleae, examined the morality of gambling on the basis of Nicomachean Ethics (Bellhouse 2005). Cardano sought to identify the equal conditions that ensured a gamble was just and realised that this could be done by counting the ways a player could win and comparing that number to the ways a player would lose. On this basis, he noted that the chance of rolling a six with a fair die was one-in-six, and so the winnings on a roll of six should equal six times the stake. If this equality was not maintained, then the gamble was unjust. After coming to these conclusions, Cardano noted that

These facts contribute a great deal to understanding but hardly anything to practical play

since his reasoning provided no concrete predictions.
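Cardano’s equality can be restated in modern terms (a reconstruction, not his notation): a gamble staking \(s\) on an event with \(m\) winning outcomes out of \(n\) equally likely outcomes is just when the gross payout \(w\) on a win satisfies

$$\frac{w}{s} = \frac{n}{m},\quad {\text{so that}}\quad {\text{E}}\left[ {\text{profit}} \right] = \frac{m}{n}w - s = 0.$$

For a fair die with a single winning face, \(m = 1\) and \(n = 6\), recovering the payout of six times the stake.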

The canonical origin of mathematical probability is in the correspondence of 1654 between Pascal and Fermat on the ‘Problem of Points’. This problem originated in the abaco tradition and related to understanding the just, ethically right, distribution of capital amongst partners if the partnership was forced to dissolve prematurely. In the context of contemporary mathematical finance, the Pascal-Fermat solution is the same as using the Cox–Ross–Rubinstein model to price a digital call. The Problem of Points had been considered by Cardano, who miscounted the possible combinations of outcomes. Huygens also addressed it in the first published text on probability, Van Rekeningh in Spelen van Geluk (1657). Huygens highlighted the normative aspect of probability when he opened the text with the axiom,

I take as fundamental for such games that the chance to gain something is worth so much that, if one had it, one could get the same in a fair game, that is a game in which nobody stands to lose. (Hald 1990, p. 69)
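The formal equivalence between the Pascal–Fermat solution and the CRR model, claimed above, can be made concrete with a short sketch (a modern reconstruction, not the historical calculation): the stakes of an interrupted first-to-three game are divided by counting equally likely continuations, in the manner of Fermat, and the same number arises as the value of a digital claim computed by backward induction with \(q = 1/2\) on a zero-interest binomial lattice.

```python
from itertools import product

def points_share(a_needs, b_needs):
    """Fermat's division for the Problem of Points: player A's share of
    the stakes when A needs a_needs more wins and B needs b_needs more,
    found by counting equally likely continuations of the game."""
    rounds = a_needs + b_needs - 1  # at most this many rounds remain
    wins = sum(1 for seq in product("AB", repeat=rounds)
               if seq.count("A") >= a_needs)
    return wins / 2 ** rounds

def digital_value(a_needs, b_needs, q=0.5):
    """Value of a claim paying 1 if A wins the match, by backward
    induction on a binomial lattice with zero interest rate."""
    if a_needs == 0:
        return 1.0  # A has won: the claim pays out
    if b_needs == 0:
        return 0.0  # B has won: the claim expires worthless
    return (q * digital_value(a_needs - 1, b_needs, q)
            + (1 - q) * digital_value(a_needs, b_needs - 1, q))

# A first-to-three game interrupted at 2-1: A needs one win, B needs two.
print(points_share(1, 2))   # 0.75: A is due three quarters of the stakes
print(digital_value(1, 2))  # 0.75: the same number as a 'digital call'
```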

Bernoulli’s Ars Conjectandi (1713) considered probability in terms of fairness (Sylla 2006) and the ‘moral’ approach to probability is evident in Laplace’s Essai philosophique sur les probabilités (1814) and Poisson’s Recherches sur la probabilité des jugements en matière criminelle et en matière civile (1837).

The ‘objective’ approach to probability began to eclipse the ‘moral’ approach after de Moivre and Montmort introduced it in the second decade of the eighteenth century. While the early texts on probability were rooted in ideas of fairness and degrees of belief, objective approaches originated in gaming and focused on repeatable experiments in finite state spaces. The merchant at Rhodes had a degree of belief; they could not ascertain the ‘physical’ probability of additional grain deliveries arriving.

The overshadowing of the practical approach to probability, addressing problems of moral judgement, by the physical approach, related to repeatable experiments, is demonstrated in Todhunter’s A History of the Mathematical Theory of Probability from the time of Pascal to Laplace of 1865. Todhunter ignored Cardano’s 1564 discussion of probability and raised a problem with Ars Conjectandi, where Bernoulli discussed probabilities that did not sum to one. This is illogical in the objective conception of probability (Hald 1990, pp. 220–250) but meaningful, as will become clear in the next section, in the context of fair exchange (Sylla 2006, p. 28).

As late as 1975, Ian Hacking claimed that probability emerged ‘suddenly’ around 1650 (Hacking, The emergence of probability 2006, p. 1). This fitted into Foucault’s theories that there was a radical break in European history at that time that saw an end to determinism (Hacking 2006, pp. x–xi). The problem, as James Franklin points out (Franklin 2001, pp. 330–331), is that there had to have been a working theory of probability before 1650 because communities of merchants had been pricing aleatory contracts. The lack of evidence for the theory before 1650 is indicative of several factors. Firstly, Olivi’s works were suppressed after 1326 and his observations on probability and commerce only came to light in the twentieth century. In addition, merchants needed to be wary of broadcasting their methods for pricing contracts in case they revealed profits that would have been declared usurious, as happened to the ‘triple contract’. Calvinism’s tolerance of usury enabled more open discussion of probability in Calvinist jurisdictions (Poitras 2000, p. 30; Rothbard 1996, pp. 140–143; Daston 1998, pp. 172–174). Finally, Cardano’s work on probability was presented during a period, marked by the careers of the abaco-trained Luca Pacioli and Simon Stevin, when ideas were being transferred from the vernacular into the academic, where they were recorded and disseminated through printing (Bellhouse 2005, p. 184; Poitras 2000, p. 132; Dear 2009, p. 17).

4 The Fundamental Theorem of Asset Pricing

A consequence of Kolmogorov’s theory of probability is the Fundamental Theorem of Asset Pricing. It consists of two statements (Shreve 2004, Sect. 5.4):

1. A market admits no arbitrage, if and only if, the market has a martingale measure.

2. Every contingent claim can be hedged, if and only if, the martingale measure is unique.

The theorem emerged between 1979 and 1983 as Michael Harrison (Harrison and Kreps 1979; Harrison and Pliska 1981, 1983) sought to establish a Grundbegriffe for the Black–Scholes–Merton (BSM) methodology for pricing options (MacKenzie 2008, pp. 140–141), which had been introduced in 1973 (Black and Scholes 1973; Merton 1973).

BSM had been developed at a time when derivative (option) pricing was a relatively unimportant activity. Gambling legislation in the United States meant that derivatives were only traded on ‘deliverable’ assets, principally agricultural commodities, and these markets were stagnant (MacKenzie 2008, pp. 142–145). However, following fundamental shifts in global economic relationships, the Bretton Woods system of fixed exchange rates collapsed in August 1971. In order to control fluctuating currency values, governments adjusted interest rates, while the fluctuating currencies resulted in volatile commodity prices. Options, which have been a feature of financial practice since the seventeenth century and were widely traded before the suspension of the European financial markets during the First World War, re-emerged as a tool to manage the risks associated with randomly fluctuating prices.

Despite the financial rationale for options, their legitimacy with regard to gambling legislation was still ambiguous. The introduction of BSM in 1973 delivered a mathematical equation that defined the price of an option in terms of parameters that could be known, in the sense of statistical confidence. This implied that option prices could be inferred and hence that trading in them was not a form of gambling (MacKenzie 2008, p. 158).

The effort by Harrison and his collaborators to create a Grundbegriffe was successful and opened finance to investigation by pure mathematicians (Schachermayer 1984; Delbaen and Schachermayer 1994, 1998) and by 2000, any mathematician working on asset pricing did so within the context of the FTAP. While the FTAP is important in mathematics, it is not well known in finance or economics. Practitioners focus on the models that are a consequence of the Theorem while social scientists focus on the original Black–Scholes–Merton approach as an exemplar.

From the mid-1980s, some practitioners had become sceptical as to the validity of the prices produced by their models (Miyazaki 2007, pp. 409–410; MacKenzie 2008, p. 248). The market crash of 1987 was seen as confirming this scepticism and today the BSM equation is used to measure market volatility, a proxy for uncertainty, from prices rather than to calculate prices using observed parameters.
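This inversion can be illustrated with a minimal sketch (using the textbook Black–Scholes call formula with invented parameter values; it is not drawn from any source cited here): the observed market price is the input, and the volatility is the output, recovered by bisection.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert bs_call for sigma by bisection: the market price is the
    input and the volatility, a proxy for uncertainty, is the output."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical quote: an at-the-money one-year call trading at 10.45
# with S = 100, K = 100, r = 5% implies a volatility of about 20%.
print(implied_vol(10.45, S=100, K=100, T=1.0, r=0.05))
```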

Despite its decline in relevance in financial practice, the status of the BSM model as an exemplar in financial economics was enhanced by the development of the FTAP. This was because the FTAP, which originated in BSM, unifies different approaches in financial economics. The clearest example of this synthesis was that the Radon–Nikodym derivative, a mathematical object employed in the FTAP, connected the stochastic calculus Merton had employed in his proof and the market price of risk (Sharpe ratio) underpinning the approach Black and Scholes had taken. Without the FTAP, the two approaches appeared incongruous (MacKenzie 2003a, b, p. 834). Overall, the FTAP brings together a number of different financial methods or theories: Merton’s approach employing stochastic calculus, advocated by Samuelson; CAPM, developed by Treynor and Sharpe; martingales, a mathematical concept employed by Fama in the development of the Efficient Markets Hypothesis; and the idea of incomplete markets, introduced by Arrow and Debreu.

While the proof of the FTAP in its full expression (Delbaen and Schachermayer 1998) is sophisticated, its essence can be understood in terms of a simple, single period, binomial model of a market, which Pascal and Fermat would have comprehended, given that their solution to the ‘Problem of Points’ was based on a multi-period binomial model.

The concept of ‘no arbitrage’ is at the heart of the FTAP. The word ‘arbitrage’ derives from ‘arbitration’ (Oxford English Dictionary) and the idea was presented in Fibonacci’s Liber Abaci

20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had for 50 arms of cloth. (Fibonacci and Sigler 2003, p. 180)

The equivalence of rolls of cotton to arms of cloth is established through the arbitration, or ‘mediation’ (Aristotle 2011, pp. 1133a19–20) of Pisan pounds and by applying Euclid’s First Common Notion (if A = B and A = C then B = C). In the context of finance, from as early as the late seventeenth century and de Witt’s Waerdye van Lyf-renten Naer Proportie van Los-Renten of 1671, this has been known as the ‘Law of One Price’, which states that two assets delivering identical cash-flows in the future must have the same price today.
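Working Fibonacci’s example through makes the arbitration explicit (the arithmetic is implicit in the quotation above): 50 arms of cloth are worth \(50 \times 3/20 = 7.5\) Pisan pounds, and each pound buys \(42/5 = 8.4\) rolls of cotton, so

$$50\;{\text{arms}} = 7.5\;{\text{pounds}} = 7.5 \times \frac{42}{5}\;{\text{rolls}} = 63\;{\text{rolls of cotton}}.$$

A quote of anything other than 63 rolls would allow a riskless profit by exchanging cloth for pounds and pounds for cotton, in breach of the Law of One Price.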

When there is a temporal separation of cash-flows in finance, as there will be in a single period model, which starts at time t = 0 and ends at time t = T > 0, the ‘time value of money’ needs to be accounted for. The value of money declines as a consequence of its devaluation, for example through its dilution by the minting authority. This creates a distinction between the nominal value of a unit of currency (which decays) and its real value (which is constant).

If an asset is guaranteed to pay out, at time T, an amount X (real value) then it must cost X today, otherwise the Law of One Price is breached or, logically, it is being argued that \(X \ne X\). If it was the case that the asset could be bought for \(y < X\), the logical course of action would be to buy the asset at time t = 0 in order to make a profit of \(X - y > 0\), at time T. This would be an ‘arbitrage profit’ and its existence means the market admits arbitrage. Alternatively, if the asset was being traded for \(z > X\), the logical course of action would be to sell the asset, at time t = 0, and make an arbitrage profit of \(z - X > 0\), at time T. The ability to make an arbitrage profit assumes that the market allows anyone to either buy or sell any quantity of any asset in the market. This is a significant assumption, which will be discussed in the next section, and means that the market is ‘liquid’.

The binomial model is more complex. The initial asset price is still the amount \(X\) but now the asset is guaranteed to be worth (pay-out) either \(X^{u}\) or \(X^{d}\) (in real terms), where, without loss of generality, \(X^{u} > X^{d}\). Both outcomes are possible but each with an unknown probability. If \(X^{u} = X\), then selling the asset at time t = 0 for \(X\) would result in a profit of 0 if the asset turned out to be worth \(X^{u}\) and a profit of \(X - X^{d} = X^{u} - X^{d} > 0\) if the asset turned out to be worth \(X^{d}\). Since the value \(X^{d}\) is possible, however small the probability, a strategy of selling the asset at time t = 0 and then buying it back in the future will provide a positive expected profit, with there being a guarantee of no loss. This represents an arbitrage. Similar arguments imply that, for there to be no arbitrages, \(X^{d} < X < X^{u}\). An arbitrage presents a riskless profit and, therefore, would have been regarded as turpe lucrum by the scholastics.

Into the market, free of arbitrage opportunities, a derivative is introduced. A derivative is another liquid asset whose value, at time T, is determined by the price of the underlying asset (\(X^{d}\) or \(X^{u}\)); if the asset price is \(X^{u}\) then the derivative is guaranteed to be worth \(f^{u}\) (real terms); if the asset price is \(X^{d}\) then the derivative is guaranteed to be worth \(f^{d}\) (real terms). Since \(X^{d}\) and \(X^{u}\) are known at time t = 0, then so are \(f^{d}\) and \(f^{u}\). The problem that the FTAP addresses is: what is the correct price for the derivative at time t = 0, when the asset price is \(X\)?

The original, monomial, model was solved on the basis that there was certainty at time T; to resolve the binomial model the approach is to make the outcome, at time T, certain. This can be done if all the assets are liquid so that any quantity of any asset can be bought or sold in the market. This means that we can construct a portfolio made up of one unit of the derivative and \(\delta\) units of the underlying such that

$$f^{d} + \delta X^{d} = f^{u} + \delta X^{u} .$$

This implies that, for there to be certainty at time T,

$$\delta = \frac{{f^{u} - f^{d} }}{{X^{d} - X^{u} }}.$$

Since there is certainty at time T, the principle of no arbitrage/Law of One Price requires that

$$f + \delta X = f^{d} + \delta X^{d} \left( { = f^{u} + \delta X^{u} } \right)$$

This expression can be solved for the one unknown, \(f\), to yield

$$f = \frac{{X^{u} - X}}{{X^{u} - X^{d} }}f^{d} + \frac{{X - X^{d} }}{{X^{u} - X^{d} }}f^{u} .$$

Writing

$$\frac{{X^{u} - X}}{{X^{u} - X^{d} }} = q^{d} \,{\text{and}}\,\frac{{X - X^{d} }}{{X^{u} - X^{d} }} = q^{u}$$

it is clear that

$$q^{u} + q^{d} = 1$$

while the no arbitrage condition means that \(0 < q^{u} , q^{d} < 1\). On this basis we have

$$\begin{aligned} f & = q^{u} f^{u} + q^{d} f^{d} \\ & \equiv {\text{E}}_{Q} \left[ f \right]. \\ \end{aligned}$$

We can regard \(q^{u}\) as representing the probability, in the sense of Kolmogorov’s Axioms, of the asset price being \(X^{u}\) at time T, while \(q^{d}\) represents the probability that the asset price becomes \(X^{d}\). The probability system defined by \(q^{u}\) and \(q^{d}\) is the martingale measure, sometimes known as the risk neutral pricing measure. Furthermore

$$X = q^{u} X^{u} + q^{d} X^{d}$$

or

$$X = X^{d} + q^{u} \left( {X^{u} - X^{d} } \right)$$

and \(q^{u}\) represents the location of \(X\) between \(X^{d}\) and \(X^{u}\). If \(q^{u} \notin \left( {0,1} \right)\) then \(X \le X^{d}\) or \(X \ge X^{u}\). The martingale measure only exists as a probability measure, satisfying Kolmogorov’s Axioms, if the no arbitrage condition, \(X^{d} < X < X^{u}\), holds true. This is the meaning of the first statement of the FTAP.
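The whole construction can be checked numerically. The following is a minimal sketch, not from the source; the function name and the illustrative values \(X = 2\), \(X^{u} = 3\), \(X^{d} = 1\), \(f^{u} = 1\), \(f^{d} = 0\) are assumptions chosen to anticipate the example used below.

```python
# One-period binomial pricing via the martingale measure. A sketch, not from
# the source; the function name and the chosen values are illustrative.

def binomial_price(X, Xu, Xd, fu, fd):
    """Return the hedge ratio delta, the martingale measure and the price f."""
    assert Xd < X < Xu, "no-arbitrage condition violated"
    delta = (fu - fd) / (Xd - Xu)   # makes the portfolio f + delta*X riskless at T
    qu = (X - Xd) / (Xu - Xd)       # martingale probability of the 'up' state
    qd = (Xu - X) / (Xu - Xd)       # martingale probability of the 'down' state
    f = qu * fu + qd * fd           # f = E_Q[f]
    return delta, qu, qd, f

# Illustrative values: X = 2, Xu = 3, Xd = 1, and a call with strike 2
# paying fu = 1, fd = 0.
print(binomial_price(2, 3, 1, 1, 0))   # (-0.5, 0.5, 0.5, 0.5)
```

Note that the real-world probabilities of the two states never enter the computation; only the martingale measure does.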

The second statement of the FTAP can be understood by extending the single period binomial model into a single period trinomial model and using a specific example. Consider the situation where the asset’s initial price is \(X = 2\) and in the future, at time T, the value of the asset will take on one of three values: \(X^{u} = 3, X^{m} = 2, X^{d} = 1\). Into this market a derivative is introduced that pays out: \(f^{u} = 1, f^{m} = 0, f^{d} = 0\) at time T (in finance this represents a “call option with a strike of 2”). The question remains: what is the correct price of \(f\) at time t = 0, when \(X = 2\)?

The problem the trinomial model presents is that there is not a unique value of \(\delta\), the holding in the underlying asset \(X\), that will make the value of a portfolio consisting of the derivative and the holding at time t = T the same across all three states. We have

$$f^{u} + \delta X^{u} = 1 + 3\delta , \;f^{m} + \delta X^{m} = 2\delta , \;f^{d} + \delta X^{d} = \delta$$

On this basis, equating the ‘up’ and ‘middle’ states yields \(\delta = - 1\), equating the ‘middle’ and ‘down’ states yields \(\delta = 0\), while equating the ‘up’ and ‘down’ states yields \(\delta = - 1/2\). This problem is known as ‘incompleteness’.

The solution is to maintain the essential idea that the market should preclude arbitrages; equivalently, there must be a martingale measure. The martingale measure represents the probability of being in a specific state in the future, and so applies to all assets in the market, including the underlying asset. This means that

$$q^{u} + q^{m} + q^{d} = 1,$$

since the probabilities of each state should satisfy Kolmogorov’s Axioms. For the underlying asset

$$q^{u} X^{u} + q^{m} X^{m} + q^{d} X^{d} = X,$$

since the martingale measure applies to assets, specifically

$$3q^{u} + 2q^{m} + q^{d} = 2.$$

Combining the two equations yields

$$q^{m} = 1 - 2q^{d} \,{\text{and}}\,\;0 < q^{u} = q^{d} < 1/2.$$

The derivative, \(f\), can be priced by choosing any value for \(q^{u} = q^{d}\) providing \(q^{d} \in \left( {0,\frac{1}{2}} \right)\). For example, if \(q^{u} = q^{d} = 1/4\), then \(f = q^{u} f^{u} + q^{m} f^{m} + q^{d} f^{d} = 1/4\); if \(q^{u} = q^{d} = 0.35\), then \(f = 0.35\). The consequence of incompleteness is that derivatives cannot be priced precisely; the theorem can only provide a range of possible prices.
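A short numerical sketch (an illustration, not from the source; the function name is assumed) shows how each admissible measure produces a different price:

```python
# Trinomial incompleteness: every admissible martingale measure gives a
# different price for the call. Uses the example's values:
# Xu, Xm, Xd = 3, 2, 1; X = 2; fu, fm, fd = 1, 0, 0.

def trinomial_price(qd):
    """Price of the call under the martingale measure indexed by qd in (0, 1/2)."""
    assert 0 < qd < 0.5, "measure must satisfy Kolmogorov's Axioms"
    qu = qd              # forced by combining the two market equations
    qm = 1 - 2 * qd      # forced by qu + qm + qd = 1
    return qu * 1 + qm * 0 + qd * 0

for qd in (0.1, 0.25, 0.35, 0.45):
    print(qd, trinomial_price(qd))
# The no-arbitrage prices sweep out the whole interval (0, 1/2): the theorem
# delivers a range, not a point.
```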

Introducing, and pricing, the asset \(f\) ‘completes’ the market: there are now three equations (one relating to Kolmogorov’s Axioms, one relating to \(X\) and one relating to \(f\)) in three unknowns, \(q^{u}, q^{m}, q^{d}\), and so Cramer’s Rule ensures that there is a unique solution for the three probabilities and assets can be priced precisely.
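The completion can be made concrete with a small linear-algebra sketch, assuming (illustratively, not from the source) that the derivative trades at \(f = 1/4\):

```python
import numpy as np

# Completing the trinomial market: once f is traded at, say, f = 1/4, there
# are three pricing equations in the three unknowns (qu, qm, qd).
A = np.array([[1.0, 1.0, 1.0],   # Kolmogorov: qu + qm + qd = 1
              [3.0, 2.0, 1.0],   # underlying: qu*Xu + qm*Xm + qd*Xd = X = 2
              [1.0, 0.0, 0.0]])  # derivative: qu*fu + qm*fm + qd*fd = f = 1/4
b = np.array([1.0, 2.0, 0.25])

qu, qm, qd = np.linalg.solve(A, b)  # unique, since det(A) != 0 (Cramer's Rule)
print(qu, qm, qd)                   # 0.25 0.5 0.25
```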

These arguments have been made with reference only to real values. In the more realistic situation where there is ‘time value of money’, an asset is chosen to be the ‘numeraire’ and all other assets are priced in terms of the numeraire. The nominal price of any asset in any state is divided by the price of the numeraire asset in that particular state, giving ‘real’ prices. The condition that \(q^{u} + q^{m} + q^{d} = 1\) arises because the real price of the numeraire asset in all states is one. On this basis, in a single period model, as long as there are as many assets in a market as uncertain states of the world, the market is complete and all assets can be precisely priced. Completeness and the absence of arbitrage are independent properties: a market, whether complete or incomplete, may or may not admit arbitrages.

The idea of complete and incomplete markets was introduced into economics in the 1950s in a series of papers by Arrow, Debreu and McKenzie (Arrow and Debreu 1954; McKenzie 1959; Arrow 1964). The arguments centred on linear algebra and the number of traded assets relative to the number of random future states of the market. This led to the attitude amongst policy makers that creating more traded assets would improve economic welfare and played a role in justifying the ‘financialisation’ of advanced economies from the 1980s. In the context of the FTAP, the principal source of incompleteness is the existence of transaction costs, known as ‘frictions’, that mean, for example, that buying and selling prices are unequal. As well as assuming that the market being considered is liquid and frictionless, an assumption is also made that the market is efficient, meaning that all parties in the market have access to the same information. Financial regulation can be designed to promote market efficiency and, to a lesser extent, ensure liquidity, but there will always be frictions and so derivatives, in practice, can never be priced precisely.

The justification for the FTAP is explained in terms of trading activity in a liquid market. Say that, in the preceding example, the derivative is introduced and priced at \(f = \frac{1}{4}\), setting \(q^{u} = q^{d} = \frac{1}{4}, q^{m} = \frac{1}{2}\). A third asset is introduced to the market that pays out: \(g^{u} = 4, g^{m} = 2, g^{d} = 1\) at time T. The no-arbitrage price of this asset is given by

$$g = q^{u} g^{u} + q^{m} g^{m} + q^{d} g^{d} = 2\frac{1}{4}.$$

If the party who introduces this asset prices it at \(g = 2\frac{1}{2}\) instead, above the no-arbitrage, ‘fair’, price, it has the potential to offer excess profits. The explanation why this would not occur is that, since the market is liquid, other traders in the market can sell this new asset, to the person who has introduced it, at price \(g = 2\frac{1}{2}\), yielding cash of \(2\frac{1}{2}\). They can then use this money to buy the underlying asset, \(X\), and derivative, \(f\), for a total cost of \(2\frac{1}{4}\), meaning they are left with a net income of \(\frac{1}{4}\). At the end of the period, this portfolio will deliver a pay-out in the ‘up’ state of 4; in the ‘middle’ state, 2; and in the ‘down’ state, 1. However, by selling the new asset they are obliged to pay the person to whom they sold it 4 in the ‘up’ state, 2 in the ‘middle’ state and 1 in the ‘down’ state. These payments cancel each other but the trader still holds the \(\frac{1}{4}\) earned in creating the initial portfolio, or strategy. This is the arbitrage profit the trader has gained, at the expense of the person who sought to make an arbitrage profit by mis-pricing in the first place. If the market were as economists expect, the traders selling the new asset, at \(g = 2\frac{1}{2}\), and buying the original assets would move the relative prices until there is a coherent set of probabilities, \(q^{u}, q^{m}, q^{d}\), that defines the martingale measure.
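The book-keeping of this strategy can be checked directly. The sketch below (illustrative, not from the source) verifies that the obligations on \(g\) and the pay-outs of the portfolio cancel in every state, leaving the trader with the initial \(\frac{1}{4}\):

```python
# Arbitrage against a mispriced g = 2.5 (fair value 2.25): sell one unit of g,
# buy one unit of X (cost 2) and one unit of f (cost 0.25).
states = {"up": (3, 1, 4), "middle": (2, 0, 2), "down": (1, 0, 1)}  # (X, f, g) pay-offs

cash_now = 2.5 - (2 + 0.25)  # proceeds from selling g less the cost of the portfolio
for name, (x_T, f_T, g_T) in states.items():
    net_at_T = (x_T + f_T) - g_T  # portfolio pay-out minus the obligation on g
    print(name, net_at_T)          # 0 in every state
print(cash_now)                    # 0.25, kept whatever the outcome
```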

The account presented relates to the most trivial markets imaginable but the FTAP has been shown to hold in the most complex situations that are amenable to stochastic analysis. While the proof is based on a hedging argument and the Law of One Price, employed in the original 1973 papers, it is now understood that there are situations where the hedging argument cannot be employed, such as when a price process is given by a jump diffusion, yet the FTAP holds and the existence of the martingale measure ensures there are no arbitrages (Cont and Tankov 2004, Sect. 10.5.2). This implies that in the mathematics (semeiotika) there is something beyond the material act of hedging (physica) underpinning the theory.

Vestiges of scholastic financial ethics can be found in the FTAP. In the first statement, the martingale measure creates an equality between the price paid and the future worth of the asset, making the exchange reciprocal. The no arbitrage condition, which is equivalent to the existence of the martingale measure, prohibits the earning of a guaranteed, risk-less, profit that would have been regarded as usurious, turpe lucrum, by scholastics. The opening statement of the Black and Scholes paper, and the conception of the FTAP, is the observation that “it should not be possible to make sure profits” (Black and Scholes 1973). This was rooted in Knight’s (1921) argument that profits were derived from uncertainty, which had been presented as an amoral fact. The religious injunction against usury has morphed into a statement of scientific impossibility. The second statement of the FTAP establishes that manifested markets are incomplete. This means that a price of an asset in a market can never be known precisely and market participants must always employ judgement, as advocated by the scholastics.

The observation that the FTAP can be mapped onto scholastic doctrine relating to the ethics of commerce does not mean that the FTAP is an expression of those ethics. Rather, it highlights how the contemporary mathematical representation of exchange still carries vestiges of ethical ideas that had originally generated probability, the means through which mathematics represents uncertainty. This illuminates two points: firstly, that semeiotika is constructed from practica, as well as physica; secondly, that mathematics can represent moral principles.

On this basis, the FTAP can be understood in several ways. It can be proved mathematically as a result of stochastic analysis independent of a financial context (semeiotika), though the financial motivation is important. It can be explained more heuristically as a consequence of rational traders operating in a competitive financial market (physica), as is usually done amongst practitioners. Or, it can be understood from the perspective of scholastic ethics (practica), where usury is prohibited and merchants must exercise their judgement in manifested, incomplete, markets. The ethical argument is as irrelevant to the mathematician as a mathematical argument is to an ethicist. Only when the financial argument is made are both the mathematical and ethical aspects present. The utility of including the ethical perspective is that it explains mathematical results that cannot be accounted for in terms of the, material, hedging argument (such as in the case of jump-diffusions).

5 Jobbers: justifying theorems

The first statement of the FTAP is related to the Dutch Book Argument (DBA) (Skyrms 1984, p. 21; Hàjek 2009). The DBA was introduced into philosophy in a passing remark in Ramsey’s Truth and Probability (Ramsey 1931), which was a response to Keynes’ A Treatise on Probability. Keynes (1921) had observed that in some cases cardinal probabilities of events could be deduced, in others, ordinal probabilities—one event was more likely than another—could be inferred, but there was a large class of problems that was not reducible to the concept of probability. This tripartite separation echoes Aristotle’s understanding that there were three classes of phenomena: events that were determined (the development of a bird embryo); those that were predictable (the weather); and those not amenable to science (the discovery of buried treasure). Keynes’ argument was challenged by Ramsey, who argued that probability relations between a premise and a conclusion could always exist (Ramsey 1931; Ramsey and Mellor 1980; Davis 2004; Edgington 2012).

Ramsey defined ‘probability’ as ‘a degree of belief’ and noted that a standard way of measuring these was through betting odds (Ramsey 1931, p. 171). On this basis, he formulated laws of probability, finishing with the observation that

These are the laws of probability, … If anyone’s mental condition violated these laws, his choice would depend on the precise form in which the options were offered him, which would be absurd. He could have a book made against him by a cunning better and would then stand to lose in any event. (Ramsey 1931, p. 182)

While Ramsey did not develop this statement, de Finetti did (de Finetti 1980) and it became the most famous justification of the Bayesian/subjective thesis that rational beliefs should conform to the axioms of probability (Hàjek 2009, pp. 173–174).

Through the 1980s and 1990s, as the FTAP was being developed, the DBA was stimulating discussion. An objection to the DBA is that, in practice, there are no ‘bookies’ that can compel someone to bet on their beliefs (Armendt 1993, p. 3; Christensen 1996, p. 451). This requirement is equivalent to the liquid markets assumption in the FTAP. In the FTAP, the assumption is based on the reality of financial market practice, and this practice plays a fundamental role in enabling Bayesian/subjective probability by justifying the DBA.

Financial markets are built on two distinctive institutions: brokers and ‘jobbers’ (in the UK), known as ‘market makers’ or ‘dealers’ in the US. Broker-mediated markets are the standard form of market exchange and are the focus of most economic theories. Retail shops are brokers bringing consumers and producers together and typically charge a commission of 100%; auctioneers charge both sellers’ and buyers’ commissions running at 10–20% each; real estate agents charge 0.5–3%; modern financial exchanges charge a fraction of a percent in commission. Brokers act on behalf of property owners. In financial markets, they work for investors who make prudent assessments as to the value of different assets and on that basis hold them for the long term. Current financial exchanges are essentially computer systems that take investors’ buy and sell orders and electronically match them at the best price through a ‘double auction’; they act as mechanical brokers. Thicke’s argument (2017) that market beliefs should be considered a type of collective belief is based on broker-mediated markets.

During the development of modern financial markets, in early seventeenth century Amsterdam and later seventeenth century London, property owners found that they could not rely on there being enough activity in the market to ensure they could buy or sell an asset when they wanted to; the market was illiquid. This problem of illiquidity was addressed by jobbers who ‘made the market’ (Poitras 2000, pp. 288–293). Whereas brokers derive an income from the commission they charge to property owners, jobbers make their money by trading on their own account, looking to buy an asset for less than they can sell it. This is made possible by in blanco trading, trading in abstracted contracts rather than physical commodities.

The practice of abstracting assets to contracts that stipulated ‘cash on delivery’ at a future date had become common in the thirteenth century and was standard by the seventeenth century. It enables a jobber, who anticipates that a commodity’s price will fall, to enter into a contract to sell the asset at a specific location at a pre-determined price on a specific date. The jobber need not be in possession of the asset; they only need to be able to deliver according to the terms of the contract. This is now known as ‘short selling’. Having ‘sold short’, the jobber will endeavour to enter into a symmetric contract to take delivery of, to buy, an identical asset at the same time and place before the agreed delivery date. This would be with another party but, hopefully, at a lower price.

Having entered into symmetric contracts, the jobber does not have to handle the physical asset, only the cash difference between the prices agreed on the date agreed. Financial markets developed based on jobbers ‘making the market’ by constantly trading in blanco amongst themselves with brokers coming to the market as and when a property owner wished to trade the physical asset.

Since jobbers derive their income by buying and selling at different prices without handling any physical commodity, they have developed a reputation for promoting uncertainty, and jobbing quickly acquired a dubious position in society. In 1719, Daniel Defoe described stock-jobbing in The Anatomy of Exchange Alley as

a trade founded in fraud, born of deceit, and nourished by trick, cheat, wheedle, forgeries, falsehoods, and all sorts of delusions; coining false news, this way good, this way bad; whispering imaginary terrors, frights hopes, expectations, and then preying upon the weakness of those whose imaginations they have wrought upon. (Poitras 2000, p. 290)

Defoe also mentioned the diversity of jobbers, which was portrayed in Colley Cibber’s 1720 play, The Refusal

There you’ll see a duke dangling after a director; here a peer and ‘prentice haggling for an eighth; there a Jew and a parson making up the differences; there a young woman of quality buying bears of a Quaker; and there an old one selling refusals to a lieutenant of grenadiers. (Ackroyd 2001, p. 308)

In 1761, Thomas Mortimer made the point that there are different types of jobber: foreigners, gentry, merchants and tradesmen; and, “by far the greatest number”, people

with very little, and often, no property at all in the funds, who job in them on credit, and transact more business in several government securities in one hour, without having a shilling of property in any of them, than the real proprietors of thousand transact in several years. (Poitras 2000, p. 291)

One important aspect of the way jobbers have been perceived comes from the fact that the financialisation of commodities into contracts enabled those without property to engage in market speculation, often against the interests of the property owners.

For example, ‘ducaton’ shares appeared in the Netherlands in the early seventeenth century (Poitras 2000, pp. 276–277). These contracts had a nominal value of one tenth of a Dutch East India Company (VOC) share, but it was always understood that holding ten ducatons would not entitle someone to a VOC share. Ducatons emerged because, at the time, it was impossible for the general public to own VOC shares, which were held exclusively by the Dutch elite and whose trading incurred substantial transaction costs. Ducatons provided a means through which the wider public could challenge the VOC owners’ assessment of the value of the firm. In response to these opinions being voiced, the VOC board petitioned the Dutch government to prohibit all in blanco trading in 1610. The ban was ineffective and had to be repeated in 1624, 1630, 1636 and 1677.

A similar phenomenon, ‘bucketshops’, appeared in the USA in the late nineteenth century. Bucketshops essentially traded ducatons based on prices quoted on the Chicago Board of Trade (CBOT) or the New York Stock Exchange, again without involving any claim on the actual asset. The bucketshops took trade away from the exchanges and drew comparisons between the ‘reputable’ CBOT and the ‘disreputable’ bucketshops in the context of illegal gambling. Between 1900 and 1905, CBOT was engaged in several court cases attempting to suppress bucketshops (de Goede 2005, pp. 70–71). In the first of a series of cases against a Missouri firm, a Chicago judge ruled that the bucketshops were enabling gambling. In 1903, CBOT went to court again, in Missouri, and lost. The judge ruled that there was little difference between bucketshops and speculation on CBOT, apart from the wealth of CBOT members. CBOT took the case to the US Supreme Court in 1905, which ultimately ruled in favour of the CBOT, making a distinction between ‘competent’ men—those who had paid to be members of CBOT—and ‘irresponsible gamblers’ serviced by the bucketshops (de Goede 2005, p. 71).

To limit the ability of jobbers to manipulate markets, the practice emerged of jobbers being required to simultaneously quote ‘bid’ prices, at which they would buy an asset, and ‘offer’ or ‘ask’ prices, at which they would sell, without knowing if the counter-party was seeking to buy or sell the asset—though the quantity would affect the quoted price. On this basis, the role of jobbers in the London markets became an established part of the financial system from the late eighteenth century. While their role was regularised, jobbers were still associated with outsiders of limited resources (Attard 2000, pp. 13–14; Mackenzie and Millo 2001, pp. 19–22). Following the ‘Big Bang’ reforms in the UK in 1986, the legally recognised distinctive role of jobbers disappeared, though the practices of dual-quoting and market-making still persist, particularly for ‘over the counter’ (OTC), specialised, trades conducted amongst investment banks. Today, however, the bulk of trading is conducted on electronic exchanges, broker-mediated markets, and regulators prefer centralising trades on public exchanges rather than in OTC markets based on dual quoting, which are regarded as ‘opaque’ on account of their involving private bi-lateral agreements.

The effect of dual quoting can be appreciated by developing the problem of the Rhodean merchant. Say the merchant arrives on Rhodes during a thick sea-fog, which means inhabitants of the island can only see a few metres, whereas those at sea can see for kilometres. When the merchant arrives at Rhodes, they know that other ships are a few hours away, but these cannot be seen by the Rhodeans on land. The merchant immediately goes to the market where they meet a Rhodean who asks for a price quote for grain for delivery in a few hours’ time. By this time, the other ships will have arrived. What price does the merchant offer? If the merchant were seeking to maximise their profit, as modern economic theory would argue, they would ask a high price; it would be the Rhodean’s problem that they did not have the information on the coming cargoes. However, if the merchant were required to quote as a jobber and give a price at which they would buy grain as well as the price at which they would sell grain in a few hours, they would have to act differently. If the merchant quoted a high ask price with a small spread, so that the bid price was also high, they would expose themselves to the risk that the Rhodean was aware of the coming shipments and would take the bid price, agreeing to sell the grain at a high price knowing that they would be able to deliver it out of stock from one of the arriving cargoes. The merchant is forced to quote a price that reflects what they know, in this case that in a few hours’ time the grain price will be low. They could do this by giving a large spread, bidding much lower than asking, or a small spread at a low ask price. In modern markets, a large spread can indicate either uncertainty or that the jobber is not interested in trading, while a narrow spread indicates confidence in the price. The belief that jobbers promote uncertainty originates in the association of large spreads with large profits for jobbers.
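A small sketch makes the merchant’s predicament explicit. The numbers and the function are hypothetical, assuming the post-arrival grain price is known to be 5:

```python
# A sketch of the Rhodean merchant's exposure under dual quoting. The numbers
# and the function are hypothetical: suppose the merchant knows the grain
# price will fall to 5 once the other ships arrive.
future_price = 5

def informed_counterparty_profit(bid, ask):
    """Best profit per unit for a Rhodean who also knows the future price."""
    sell_to_merchant = bid - future_price   # hit the bid now, deliver from the new cargo
    buy_from_merchant = future_price - ask  # lift the ask now, resell after the ships arrive
    return max(sell_to_merchant, buy_from_merchant, 0)

for bid, ask in [(9, 10), (4, 10), (4, 6)]:
    print(bid, ask, informed_counterparty_profit(bid, ask))
# A high, narrow quote (9/10) hands an informed counterparty 4 per unit at the
# merchant's expense; a wide spread (4/10) or a low, narrow quote (4/6) does not.
```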

In respect of the DBA, the practice of dual quoting also means that Dutch Books and Czech Books (Hàjek 2008), sets of bets that guarantee a loss and sets that guarantee a gain respectively, are identical.

Jobbers do not like to hold positions (contracts to buy or sell in the future) for the long term. A jobber who has contractually agreed to buy an asset, taken a long position, will try to enter into a contract to sell, take a short position, in the same asset as soon as they can. However, they may find the market has moved against them; for example, the price may have fallen after they have entered a long position to buy at a pre-agreed fixed price. In order to close this disadvantageous position, they will quote an attractive price relative to the market and far from their originally contracted price. This has nothing to do with their valuation of the asset but rather is motivated by their wish to exit a bad position. It means that jobbers do not necessarily believe the prices they quote but, nevertheless, the quotes are ‘reliable’. Dual-quoting guarantees this since the jobber’s “manifest intention is meant as it is expressed” (Habermas 1984, p. 99) because they are required to act on the prices they quote. Because jobbers are trading contracts, not physical assets, they can give price quotes without exposing themselves to dire consequences; the ability to close a disadvantageous position provides a means of forgiving mistaken beliefs.

The fact that jobbers might offer prices, make statements, that they do not believe in undermines the idea of scientific realism and contributes to the dubious reputation of jobbers. However, the issue of a falsehood playing a role in guaranteeing reliability has been raised in connection with mathematical modelling of physical systems (Winsberg 2006), and the falsehood of jobbers’ quotes, which helps deliver reliability, can be seen as analogous to the pragmatic falsehoods found in mathematical models.

Jobbers regard it as a sign of unprofessionalism to talk of ‘buying’ or ‘selling’ assets, since buying and selling implies a commitment to a physical asset rather than to the abstract process of pricing (Beunza and Stark 2012, p. 394). In effect, they should demonstrate a version of the virtue apatheia (ἀπάθεια). While this disinterest in the asset is often perceived as cynical, the dual-quoting mechanism ensures the sincerity of their prices.

Evaluation is distinct from valuation (Aspers 2018). Evaluation is an objective assessment against a recognised standard. On this basis, pricing in the context of the FTAP is evaluative, assessing against the requirement to exclude arbitrage and so ensure reciprocity. Valuation is about individual preferences rather than uniform standards. Jobbers are engaged in subjective valuation. The subjective nature of valuation makes it susceptible to problems arising out of differences in status, which the practice of dual-quoting resolves. For example, Charlemagne set the price at which agricultural commodities could settle tax debts; the state was a fixed-price buyer. This meant that shortages would not be corrected by the market, since a merchant would face a certain loss in buying a commodity in an area of excess and then paying for it to be shipped to an area of shortage where they could only sell it at the purchase price. Had Charlemagne been required to both buy and sell at the price he set, the problem of distribution would have been solved. A contemporary luxury goods manufacturer would not be able to charge a ‘brand premium’ if they were required to buy identical products at the prices they charged.

The concept of utility presents itself in some discussions of the DBA (Armendt 1993; Baccelli 2017). In the FTAP, utility plays only a peripheral role, being confined to problems relating to the choice of a single martingale measure in an incomplete market. The martingale measure, often referred to as a ‘risk neutral pricing measure’, removes the need to make adjustments for risk preferences, which is the role of utility functions in economics. One of the initial attractions of the BSM approach to pricing derivatives was that it does not require the use of an unobservable utility function. In practice, the apatheia of jobbers, combined with their habit of holding positions for short periods such that price fluctuations are small, implies that jobbers’ utility functions, in relation to their trading activities, are linear. Economics retains utility functions because it is based on broker-mediated markets where property owners determine the prices they will accept on the basis of their utility functions. In the FTAP, a jobber selects a probability measure that defines how they price.

Replacing the need to choose a utility function with the need to choose the correct measure is not original. The origin of utility theory in finance and economics is in the Petersburg Game, introduced in 1713 in some correspondence between de Montmort and Nikolaus Bernoulli (Jorland 1987). The game is based on tossing a fair coin. The pot starts with 1. If the coin comes up heads, the player wins the pot; if it comes up tails, the pot is doubled and the coin tossed again. This rule is repeated until a head comes up.

The problem for Bernoulli and Montmort was that mathematical probability argued that a game should be valued by calculating the sum, over all possible outcomes, of the product of the pay-off and its probability, the mathematical expectation. The Petersburg Game is designed so that the expectation is infinite, being a sum of infinitely many halves. However, it was observed that nobody would stake more than 20 coins to play the game and typically they would only offer 4–6 coins to play.
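Making the calculation explicit: the first head occurs on toss \(k\) with probability \(2^{-k}\) and pays \(2^{k - 1}\), so

$${\text{E}}\left[ {{\text{pay-off}}} \right] = \sum\limits_{k = 1}^{\infty } {2^{ - k} \cdot 2^{k - 1} } = \sum\limits_{k = 1}^{\infty } {\frac{1}{2}} = \infty .$$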

Nikolaus Bernoulli argued that it was a ‘moral impossibility’ to win large sums. In 1728, Cramer offered an alternative explanation and argued that the root of the problem “comes from this; that the mathematicians estimate money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it” (Pulskamp 1999, p. 4). In this sense, jobbers, whose stock in trade is money, “estimate money in proportion to its quantity” whereas investors estimate it “in proportion to the usage that they may make of it”, in terms of its utility. Cramer suggested that the marginal utility of money should diminish, an idea that Daniel Bernoulli then used to explain why people took out insurance in a paper published in 1738 (Bernoulli 1954), and the concept became widespread with the growth of Utilitarianism.

Daniel’s approach was largely ignored in the eighteenth century. D’Alembert suggested that the game should end after the person doubling the pot was bankrupted. In 1777, Buffon took a practical approach to the problem and asked a young boy to conduct 2048 plays of the Game, tabulating the results. He found that the total pay-out of the 2048 games was a little over 10,057 coins, suggesting a fair price for the Game of around 5, close to the original observations of Nikolaus Bernoulli and Montmort. In 1781, Condorcet worked out that the value of the game was a function of the maximum number of times the gambler considered a head could come up in a row. If the gambler thought it was a ‘moral impossibility’ for \(n\) heads in a row then the expected value of the game was \(\frac{n}{2}\) (Jorland 1987, pp. 169–170). This meant that if a gambler considered events with chances of less than 1 in 10,000 to be ‘morally impossible’, such as seeing 14 heads in a row, they would value the game at around 6.5–7. The Petersburg ‘paradox’ can be explained in terms of subjective probability measures as coherently as through subjective utility functions.
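Buffon’s experiment is easy to re-run today. The following sketch (illustrative, not from the source; the seed is arbitrary and the trial count simply echoes Buffon’s) simulates the Game and averages the pay-outs:

```python
import random

# Re-running Buffon's 1777 experiment: simulate 2048 plays of the Petersburg
# Game and average the pay-outs.
random.seed(1777)

def petersburg_game():
    """Pot starts at 1 and doubles on each tail; the player wins on a head."""
    pot = 1
    while random.random() < 0.5:  # tails: double the pot and toss again
        pot *= 2
    return pot

trials = 2048
total = sum(petersburg_game() for _ in range(trials))
print(total, total / trials)
# The average is typically a modest number of coins per game, despite the
# infinite mathematical expectation; rare long runs of tails dominate the
# total, which is why the sample mean is so unstable.
```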

The flexibility of measure theoretic probability in addressing difficult problems in economics has also been shown in Brown and Rogers (2012). Here, the ideas underpinning the FTAP are used to argue that problems based on information asymmetries might be more tractable by approaching them through measure changes rather than by modelling information.

Subjective probability is an important topic of mathematics since it provides a basis for modelling people’s beliefs about the future. The DBA has played a significant role in justifying the connection between coherent (rational) beliefs and probability theory, while the most famous justification for the DBA is founded on how financial markets operate.

6 The communal nature of markets

A difference in discussions of Dutch Book Arguments compared to those relating to the FTAP is that the DBA focuses on an individual engaging with a ‘cunning bookie’ while the FTAP assumes individuals are part of a dynamic, jobber-mediated, market. A jobber makes an assertion as to the future value of an asset by giving the market a bid and offer price. If other jobbers agree with the bid-offer, they let the quote pass and do nothing. If, however, another market-maker felt the jobber had mispriced the asset, they would challenge the assertion by buying at the quoted price if they thought the price was too low, or selling at the quoted price if they believed it was too high. This process continues until there is a consensus in the market on the asset’s price, at which point ‘silence implies consent’ and the jobbers cease trading the asset. The process of how market-makers engage in reflexive modelling, such that dissonance gives way to resonance, is described in detail in Beunza and Stark (2012).

The financial institution of jobbing addresses issues relating to the practicality of someone being compelled to bet on their beliefs, central to the DBA. The FTAP employs the technology of measure theory to resolve additional issues raised in connection with the DBA that have not been resolved through subjective approaches to probability, such as de Finetti’s or Jaynes’. For example, the problem of additivity (Armendt 1993; Williamson 1999) is solved by employing sigma-algebras, partitions and the concept of measurability. The FTAP is concerned with the dynamics of prices whereas the DBA is focused on one or two step decisions, as is much of Bayesian analysis. As a result, the FTAP handles issues relating to diachronic Dutch Books and van Fraassen’s ‘reflection’ (van Fraassen 1984, pp. 244–246) through the technology of filtrations and the Tower property of conditional expectation.
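The Tower property makes the last point concrete: for a filtration with \({\mathcal{F}}_{s} \subseteq {\mathcal{F}}_{t}\) whenever \(s \le t\),

$${\text{E}}_{Q} \left[ {{\text{E}}_{Q} \left[ {X|{\mathcal{F}}_{t} } \right]|{\mathcal{F}}_{s} } \right] = {\text{E}}_{Q} \left[ {X|{\mathcal{F}}_{s} } \right],$$

so today’s expectation of tomorrow’s expectation coincides with today’s expectation, which is precisely the consistency through time that ‘reflection’ demands of an agent’s beliefs.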

The dynamic nature of financial markets and the FTAP shed light on another issue raised in connection with the DBA, that of an agent assigning a probability of less than one to a known logical, or tautological, truth (van Fraassen 1984, pp. 239–242). This is possible in the basic DBA, which requires beliefs to be coherent but not necessarily correct. A financial market addresses both coherence and a form of correctness. If a jobber did not price a logical truth as having probability one, they would create an arbitrage by representing a monomial situation with a binomial model. If this were the case, other jobbers would immediately perceive the quote as a mispricing. The demand for these contracts would indicate to the agent offering the arbitrage quotes that they had made an error in their assessment and that, if they wished to carry on participating in the market, they should revise their beliefs, or be bankrupted.

The dynamic nature of asset pricing has played a role in the development of modern probability. Bachelier’s 1900 thesis, The Theory of Speculation, is often associated with Einstein’s discussion of Brownian motion of 1905. However, since Bachelier and Einstein were concerned with very different questions, it is not reasonable to suggest Bachelier pre-empted Einstein. Bachelier’s thesis was important, however, as it introduced Rayonnement de la probabilité (radiation of probability) (Bachelier 2006, p. 40), the idea that a probability distribution could change in time, anticipating the Fokker–Planck/Kolmogorov Forward Equation (Taqqu 2001; Courtault et al. 2000, p. 344).

A jobber-mediated market can be regarded as a dynamic social process wherein market participants are seeking “The opinion which is fated to be ultimately agreed to by all who investigate” (Peirce 1934, p. 407). This pragmatic conception of truth rests on the idea of a ‘community’ that stands for the ‘all’ that comes to an agreement (Peirce 1934, p. 311). While the DBA, and the FTAP, are concerned with establishing the objective coherence of an individual’s beliefs, and the institution of jobbers delivers subjective reliability, a financial market is seeking to arrive at communal agreement. This highlights that an individual’s beliefs can only be confirmed, or refuted, through discussion with others, which is the essence of jobber-mediated markets (Muniesa 2007). This points to the idea that these markets operate as places where opinions—expressed as prices—are discussed and markets are ‘centres of communicative action’.

Two essential components of deliberation, whether in markets, democracies or scientific research, are that there is a plurality of views, to maximise the chance that the best solution is identified, and that views are challenged, and defended, without resorting to authority (Misak 2002). Financialisation enables these two features.

Financialisation involves the quantification of the commodity into a price. Money, the measure of price, must be fungible so that it does not take on the characteristics of the person who holds it. This impersonality of money means that it is universal and makes no distinctions; it is used by rich and poor. Money has the power to transform objects; it can turn a cow into a car. These properties enable money to perform multiple functions simultaneously and its myriad uses mean that it becomes a universal aim of all the members of the community using it (Seaford 2004, pp. 149–172). On this basis, Adam Smith argued that all passions and interests can be represented by money (Hirschman 1997, pp. 110–113) and capitalism emerges, focussing on the accumulation of money as a store of wealth and unit of account.

The de-personalisation that comes about with the idealisation of money, away from a material ‘thing’, militates against power and status imbalances that are a potential problem in the valuation process (Aspers 2018, p. 141). Abstracting from the physical commodity into the abstract contract should, in principle, enable those without property to challenge the opinions, the price quotes, of those with power. This has been demonstrated by ducaton shares and bucketshops and the relative status of jobbers compared to investors. More generally, scholastics had observed in 1305 that the power of the French king, Philip IV, could not force decrees on the market (Kaye 1998, pp. 24–26). Montesquieu, who understood the concept of arbitrage (Montesquieu 1752, pp. 407–425), noted that ideal money enabled people to avoid the ‘violence’ of the church and state and forced rulers to govern with greater prudence (Montesquieu 1752, p. 392). Financialisation has enabled minorities, such as European Jews or British Quakers, to prosper.

Financialisation also enables dual quoting, since a jobber can sell a representation of something they do not have and buy a representation of something they do not want. The ‘dual-quoting’ requirement forces jobbers to be sincere in their pricing while emasculating any power they might have through accumulated wealth. The jobber has little status or authority, only beliefs represented by traded contracts, and so disagreements cannot be resolved by force of authority.

Markets do not always deliver prices that accurately represent asset values. The most common manifestation of this failure is in the formation of ‘bubbles’. This can happen when there is irrational optimism regarding a particular asset, or class of assets, or because there is rational pessimism about all other asset classes. The problem of bubbles is still open and the jobbers’ maxim that the market can stay irrational longer than they can stay solvent will be relevant so long as a correct minority cannot persuade a majority of their error. This is not just a feature of markets but is also evident in science and politics.

A broker-mediated market is concerned with the exchange of property, whereas a jobber-mediated market is concerned with price discovery. Financialisation enables jobber-mediated markets, though it is not a necessary requirement of broker-mediated markets. Brokers deliver collective belief, based on agreement of a price at which an exchange takes place; money accounts for the concrete values of the commodities exchanged. Jobbers trade at a price that represents disagreement. Furthermore, since a jobber might not quote the price they believe represents the value of the asset, and they never justify how they have come to offer their price, they cannot be said to be conveying knowledge as it is usually defined. This means that Thicke’s analysis of broker-mediated markets does not extend to jobber-mediated markets. Jobbers’ behaviour is closer to what Thicke describes as ‘rejectionist’ than to the ‘believers’ who are the focus of his analysis of broker-mediated markets (Thicke 2017, p. 4).

The failure of Thicke’s analysis to translate from broker- to jobber-mediated markets can be explained in terms of uncertainty. Broker-mediated markets are concerned with the physical exchange of assets; jobber-mediated markets are concerned with the pricing of abstract contracts relating to uncertain future events. A jobber-mediated market should not exist for an asset where there is general agreement on its price. Such an asset would still be traded in a broker-mediated market, where one person’s utility for the asset, with a known price, might differ from another person’s: jobbers do not make the market in fresh milk or television subscriptions, brokers do.

Jobbers are engaged in financial practice. Norms emerge out of practice and become formulated as explicit rules or principles because they work (Brandom 1994, p. 21). Jobbers are seeking to converge on understanding and Habermas (1984) has explored the norms necessary for effective deliberation. Statements need to be comprehensible and, where appropriate, conform to matters of fact. They must be objectively true. Statements must also represent the honest intention of the speaker; they must be truthful. Finally, they must conform to what the community believes is right and be ethically or morally acceptable.

The comprehensibility of statements is accounted for by everyone engaged in the market having been indoctrinated into the grammar of price quoting and how prices imply beliefs. Meanwhile, the practice of jobbers holding limited positions for short periods of time implies they have linear utility functions that avoid ambiguity. However, in a jobber-mediated market there are no ‘matters of fact’. In these circumstances, the objective truth of a statement is addressed by it conforming to the evaluative standard that a price precludes arbitrage. The institution of dual-quoting ensures the truthfulness of the jobber, while ensuring reliability by allowing a jobber to present a falsehood to correct an earlier incorrect price quotation. Having addressed the objective and subjective validity of a price quote, there remains the norm that addresses its social rightness.

The rightness of the statements in a jobber-mediated market is partially addressed by the reciprocity embedded in the FTAP. Reciprocity is essential because it delivers justice in exchange that supports social cohesion. The DBA has been associated with the ‘Golden Rule’—“Do to others as you would have them do to you” (Slater 1993; Wattles 1996), since dual-quoting ensures the jobber cannot exploit a counterparty and delivers jobbers’ sincerity. However, if financial markets are to be regarded as centres of communicative action the ‘rightness’ of prices must be addressed more explicitly.

Shakespeare’s The Merchant of Venice characterises the virtue of charity in the form of Antonio, a merchant of Venice. The play is popular, although problematic in interpretation (Midgley 1960, p. 119), with contemporary audiences finding it incoherent and the last scene redundant. However, if the play is interpreted as highlighting the necessity of charity in human affairs (Gollancz 1931; Coghill 1950; Lewalski 1962), it appears coherent.

The play is motivated by a young Venetian, Bassanio, who wishes to marry a wealthy heiress, Portia, but needs three thousand ducats to fund the courtship. He approaches his friend, Antonio, a wealthy merchant. Because all Antonio’s funds are tied up in long-distance commercial ventures, the merchant approaches the Jew, Shylock, to arrange a loan of cash. Shylock does not charge usury but imposes a legitimate poena on the loan: if Antonio fails to pay, he must forfeit a ‘pound of flesh’. All Antonio’s investments in trading ventures are lost and he is unable to repay Shylock and so must forfeit the poena. Portia, having been successfully courted by Bassanio, disguises herself as a lawyer and saves Antonio by noting that Shylock is entitled to a pound of flesh, but not to any blood, making it impossible for Shylock to receive the poena. This is where many feel the play should end, but there is one last scene in which there is an exchange of rings and Portia delivers a letter that reports the safe return of Antonio’s trading vessels. This final scene represents the repayment of Antonio’s original loan by its ultimate beneficiary, Portia.

The play is about commerce, in its broadest sense, and thirteen exchanges occur in the play. The religious aspect of the play is in its use of Antonio as a metaphor for Christ who brings Bassanio, as Everyman, to Portia, as Grace or Mercy. Shylock personifies Judaism’s commitment to ‘the Law’ (Lewalski 1962, p. 331; Merchant of Venice, IV.i.104; IV.i.144; Heb 2:17–18; Phil 2:7). A secular interpretation is that the play explores the problem of deontological ethics in an uncertain environment. Both Antonio and Shylock are confident of the future: Antonio believes his investments can be liquidated, Shylock believes he can legally kill Antonio. However, neither prediction comes true, emphasising the unpredictability of the world. Antonio, representing Christian charity, is supported in this uncertain environment by a social network that rescues him from disaster; Shylock’s commitment to the law does not help him. The play shows that, in an uncertain world, judgements cannot be based solely on established law but, to be ‘wise’, should relate to mercy and judgement (Coolidge 1976, p. 256). The play emphasises that an individual’s experience, knowledge and judgement will be insufficient in identifying the best actions when the future is unpredictable. Robust solutions need to be developed through a communal, deliberative process, such as a jobber-mediated market.

An absence of the norm of charity played a part in the 1998 failure of Long Term Capital Management (LTCM), the most significant financial failure of the second half of the twentieth century. LTCM constructed trading strategies that would take advantage of small price discrepancies and were funded by short-term loans backed by securities, so-called repurchase, or ‘repo’, agreements, which are the high-finance equivalent of low-finance pawning. The strategy proved highly successful and provided LTCM’s investors with exceptionally high returns at, apparently, little risk. This seemed to confound the established financial belief that high returns are only possible at high risk.

In 1997, there was a collapse in Asian financial markets that had ramifications across the globe but had little impact on LTCM’s performance. On 17 August 1998, the Russian government defaulted on its debt, a scenario LTCM had considered and was, in theory, immunised against. However, while LTCM had considered the risk that Russia would default, others had not, and in the aftermath of the default investors exchanged their riskier assets for more secure ones, such as US Government bonds (MacKenzie 2008, p. 230). This ‘flight to quality’ presented LTCM with an opportunity to make greater profits by selling the overpriced government bonds and buying the under-valued, riskier, assets.

On 2 September, LTCM faxed its investors to inform them of some losses experienced in August but went on to highlight the opportunities that the market volatility presented and asked for more money to exploit them. Within five minutes of the fax being sent out, it had been posted on the internet (MacKenzie 2003a, b, p. 365). This had two effects. The market anticipated that LTCM would sell assets to raise money, and so the price of any asset LTCM was rumoured to hold fell. More critically for LTCM, counterparties noted that the firm was asking investors for more money and questioned its creditworthiness; they were focusing on the first and last parts of the fax and ignoring the middle of the message, which identified opportunities. As a result, LTCM was forced to deposit more collateral to support the repurchase agreements funding its trading strategies. This was perfectly reasonable behaviour by LTCM’s counterparties.

As LTCM no longer had access, at advantageous rates, to the repo market that funded its positions, it was forced to sell off its sound investments in US Government bonds. The apparent arbitrage that underpinned LTCM’s success was based on the continued existence of the repo market. This assumption proved unreliable and represented the inherent risk of the strategy, which explained the strategy’s high returns. A sense of schadenfreude developed in the markets as less successful firms bet against LTCM, further undermining it. At the end of August, LTCM had had around $2 billion available to cover its trading activities. This quickly evaporated and on 20 September the US government brokered a deal whereby a consortium of banks would provide the hedge fund with $3.6 billion in exchange for 90% of the company. The original shareholders were left with only a fraction of what they thought they had had at the end of August (MacKenzie 2008, pp. 225–231).

In the aftermath of the failure of LTCM it became popular to accuse the firm of recklessness. A more accurate explanation is that LTCM’s failure was an example of hubris followed by nemesis, similar to Shakespeare’s portrayal of Shylock. LTCM employed scientific, critical, thinking in developing its trading strategies that were based on mathematical models. On this basis they imagined that they could earn risk-less profits, unlike their competitors who were bound by the established conventions. This arrogance isolated them and so at the first sign of weakness, competitors acted in a way that destroyed, rather than supported, the firm.

An alternative to the competitive approach to finance is given by the experience of the Quakers between the late seventeenth and the middle of the nineteenth centuries. While relatively small in number, Quakers came to dominate English finance. Like Antonio, the Quakers were scrupulous in repaying debts during a time characterised by high levels of default (Prior and Kirby 2006, pp. 121–129; Walvin 1998, pp. 55–57), highlighting their reciprocity. They “detested that which is common, to ask for more goods than the market price, or what they may be afforded for; but usually set the price at one word” (Walvin 1998, p. 32), while there have been reports—in the context of a discussion of the future of science, relevant here—of Quaker shop-keepers ‘dual quoting’ in everyday commerce (Russell 2005, p. 60), emphasising their sincerity. Quakers were also renowned for their charity (Cookson 2003; Walvin 1998, pp. 81–90), and their attitudes to lending were encapsulated in their proverb:

“Well, Friend”, said the Quaker Banker, “Tell me the answers to these questions so that I may help you in your projects, for you have opportunities: Firstly, how much do you seek to borrow? For how long? And how will you repay the loan plus its interest?” These are the issues all good bankers must explore. (Phillips, n.d.)

Adhering to these three moral norms ensured that Quakers were trusted, which was the foundation of their commercial success, when all around them there were “usurious contracts, false chevisance and other crafty deceits” (Murphy 2009, p. 83).

Charity, reciprocity and sincerity are fundamental to the trust that lays the foundation of finance. The decline in financial ethics is not endogenous: ideas external to finance have been imposed on financial practice. An important precedent in English civil law is Buttle v Saunders ([1950] 2 All ER 193), where it was judged that commercial morality was subordinate to maximising profits. The ascendency of economic positivism and the doctrines of efficient markets and expected utility maximisation originated in academic theory and were then established through the courts, not endogenously in market practice.

Unlike reciprocity and sincerity, charity does not play a significant part in enabling mathematics. ‘Social rightness’ does, however, highlight that communities need to be held together by shared beliefs, a factor important in mathematics as a social enterprise. The charitable aspect of finance is useful to understand in relation to mathematical practice in order to address the concerns of mathematicians about engaging with finance in the aftermath of successive financial crises (Rogalski 2010; Korman 2011). Some respond to these criticisms by arguing that mathematicians have an essential role in redeeming finance, for example by guiding better regulation (Ekeland 2010; Haggstrom 2012). The Merchant of Venice highlights the inadequacy of these, deontological, approaches when faced with uncertainty, and there are similar arguments against calculating consequences in such circumstances (Anscombe 1958, pp. 9–16). In the face of unpredictability, reliance needs to be founded on social cohesion that enables mutual support so that the best solution to an unfamiliar problem can emerge. Good science is necessary for finance, but not sufficient.

7 Conclusions

The abstraction of a physical commodity into a quantified financial contract has played an important role in the mathematisation of western society. In particular, financial ethics has provided the prototype on which mathematical probability has been developed.

Probability was originally conceived in relation to subjective ethical judgement but began to take on an objective character in Cardano’s investigation of the ethics of gambling. The objective nature of probability came to dominate as it was used to address questions of uncertainty in the physical sciences. There was a renaissance of subjective probability in the twentieth century associated with problems related to finance and other social domains. Ramsey introduced the Dutch Book Argument to counter Keynes’ assertion that there were phenomena not amenable to probability. The work of de Finetti and L. J. Savage led to a revolution in Bayesian/subjective approaches to inference.

The most popular modern justification for subjective probability is the Dutch Book Argument, which only works if the agents involved are required to act as jobbers and dual quote. Less well known is the Fundamental Theorem of Asset Pricing, which is the foundational theorem of financial mathematics. As a synthesis of financial practice and Kolmogorov’s abstract approach to probability, the FTAP resolves many of the problems that have been raised in respect of the DBA. As a result, finance has enabled the development and practical understanding of subjective probability.

The paper identifies that the institution of jobbers allows market-makers to offer prices—make statements—that they do not believe are true. This is done to enable errors in assertions to be corrected and aids the reliability of the jobber-mediated market. The idea that falsehoods support reliability has been observed in mathematical modelling of physical systems.

Issues with the application of utility functions have also been discussed in the paper. Utility is not a core concept in mathematical finance, where the emphasis is on choosing a pricing (subjective probability) measure. This has implications since utility theory is still central to economic theory, for example in providing insurers with a justification for charging customers premiums that will prove profitable.

In a broader context, subjective probability relates to social, as distinct from physical, systems. As such, it represents the use of mathematics to aid practical, as distinct from pure, reasoning. This reflects Locke’s division of understanding into practica and physica and Kant’s separation of practical and pure reasoning. The application of mathematics to finance highlights that, in regard to practical reasoning, decisions cannot be based solely on objective criteria since there are few ‘matters of fact’ that will persist in relation to social systems. This is important because, while mathematics plays a passive role in describing physical systems, as soon as mathematics is applied to social systems it has the potential to actively direct them.

Relying solely on objective criteria when representing social phenomena, like finance, is fraught with problems. Firstly, by their very nature, objective criteria can be manipulated. A fraudster can adopt the dress of the clergy to gain trust. Social media ‘likes’ and ‘retweets’ can be mechanised, giving a false impression of widespread support. Trust in Bitcoin, and other crypto-currencies, is founded on the block-chain and the way Bitcoins are minted. These algorithms are supposed to offer an objective basis for trust in the currency that circumvents the need for subjective and social foundations (Christopher 2016). However, the ‘objective’ criteria on which crypto-currencies are built give a misplaced perception of privacy (they leave a digital trail that can only be disrupted through the use of ‘tumblers’), reliability (transactions are dropped when traffic is high) and security (an agent who controls 50% + 1 of the nodes controls the ledger).

Current experiments with electronic money and privatised ‘tokens’ invite renewed consideration of the question ‘what is money?’ Traditionally, money has had to be universal and fungible. This is changing with new forms of money, presenting a risk that inequalities within society are reinforced as the communality of money is lost. Hence financialisation can be beneficial or harmful depending on how it is deployed. More generally, the arguments of this paper suggest that money acts like a language and so is open to a ‘linguistic turn’ in its investigation. This observation invites comparison with efforts to understand the nature of mathematics as a language. Skovsmose et al. (2016) discuss mathematics as language by investigating the relationship between mathematics and objective description, subjective inscription and social prescription, culminating in communities subscribing to mathematical models. This appears to relate to the objective reciprocity of a price, its subjective sincerity, and the social charity leading to communal trust presented here. These connections between financial markets and mathematics as language suggest deep affinities between financial and mathematical cultures that might be explored further.

This paper has approached markets through discourse ethics to address radical uncertainty. Discourse ethics, the paper argues, addresses the problem of Hume’s Law, which is associated with the emergence of capitalism. Sotiropoulos, Milios and Lapatsioras (2013) present a contemporary Marxist re-evaluation of finance that recognises the significance of uncertainty and makes observations compatible with those presented here, specifically the necessity of a democratic finance, owned and controlled by the users of money. This suggests that, by approaching finance as a radically uncertain domain, commonality between radically different social theories can be found.

More broadly, there is a current perception (Heimans and Timms 2018) that the nature of power is changing. This is based on the belief that the advent of internet-based modes of communication means that power is shifting from those who own resources to those who can manipulate social networks. Such a change would not be novel. Seaford (2004) distinguishes Mesopotamian hierarchical society from Greek monetised and deliberative society. The medieval monetisation of Catholic Europe saw sovereigns challenged by commercial networks. Financialisation from the seventeenth century has enabled persecuted minorities with strong communal networks, such as Jews, Dutch Calvinists and Quakers, to exert extraordinary influence. Understanding how financial technology and influence have emerged out of communities, rather than from ex cathedra theories of finance and society, might help in understanding current fluxes in society.

These observations are relevant to the increasingly important topic of algorithmic decision-making, which is beginning to affect people’s daily lives as it is applied to social media and online retailing (Chen et al. 2016). Algorithmic pricing, which exists only in electronic, broker-mediated markets, is justified on the basis that it improves competitiveness and efficiency in markets. Unfortunately, it can also have unintended consequences, such as ‘flash crashes’, or might even be designed to manipulate markets to the benefit of the algorithm’s designer. The crude assumption that all market activity is beneficial (Foresight 2012, Section 8.2) presupposes that liquidity is a utility service and not a more complex consequence of market trust built on the sincerity required in ‘dual quoting’.
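How locally sensible pricing rules can interact perversely is easily illustrated. The toy simulation below uses hypothetical repricing rules, reminiscent of documented online-retail repricing incidents, in which two algorithms drive each other’s prices upwards without bound.

```python
# An illustrative sketch of unintended algorithmic-pricing feedback
# (hypothetical rules): seller A undercuts seller B slightly, while
# seller B prices at a fixed markup over seller A.
price_a, price_b = 10.00, 10.00

for day in range(10):
    price_a = 0.998 * price_b   # A: undercut the rival marginally
    price_b = 1.27 * price_a    # B: apply a fixed markup over A
    print(f"day {day}: A = {price_a:,.2f}  B = {price_b:,.2f}")

# Each rule is locally sensible, yet jointly they inflate prices without
# bound: 'competitive' algorithms need not produce orderly markets.
```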

There is a risk, identified in finance, that algorithmic decision-making can result in ‘super-portfolios’ that give an illusion of diversification and so precipitate crises (MacKenzie 2003a, b). With the growth of algorithmic decision-making, the danger of ‘unthinking’ algorithms delivering a similar monism of opinions needs careful consideration. A problem sometimes observed in finance is that outputs from different models using the same input data are contradictory. These differences are usually understood by expert modellers but can sow doubt in the minds of decision-makers, who believe a model should present an objective representation. A common response to this uncertainty is to build more complex models that integrate more statistical data, creating a Laplacian Demon, rather than to accept the differences and make a judgement.
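The point can be illustrated, as the sketch below does with hypothetical market data, by pricing the same European call option under two textbook models, Black–Scholes and Bachelier: identical inputs yield different ‘objective’ prices, leaving a judgement to be made.

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical inputs shared by both models (zero interest for simplicity).
S, K, T = 100.0, 110.0, 1.0    # spot, strike, time to expiry
sigma_bs = 0.20                 # Black-Scholes lognormal volatility
sigma_b = sigma_bs * S          # a naive normal-volatility calibration

# Black-Scholes price of a European call.
d1 = (math.log(S / K) + 0.5 * sigma_bs**2 * T) / (sigma_bs * math.sqrt(T))
d2 = d1 - sigma_bs * math.sqrt(T)
call_bs = S * Phi(d1) - K * Phi(d2)

# Bachelier (normal) price of the same call.
d = (S - K) / (sigma_b * math.sqrt(T))
pdf_d = math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)
call_bachelier = (S - K) * Phi(d) + sigma_b * math.sqrt(T) * pdf_d

print(f"Black-Scholes: {call_bs:.2f}")      # about 4.29
print(f"Bachelier:     {call_bachelier:.2f}")  # about 3.96
# Same inputs, two respectable models, two different prices: the
# decision-maker must exercise judgement rather than read off a 'fact'.
```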

An alternative to building ever more accurate representational models is to view the models as signifiers, each presenting a slightly different perspective on an unknowable future. In this conception a collection of models represents a ‘college’ rather than a ‘toolbox’, and the decision-maker must integrate the different perspectives to make a judgement. This is possible through deliberation even when views are based on implicit intuition rather than explicit deduction. It matters beyond finance, where the results of machine learning and agent-based modelling are often not amenable to audit. Establishing rhetorical conventions that can accommodate the intuition of machines, just as deliberation accommodates the intuition of humans, would be revolutionary. This relates to questions about the deployment of Artificial Intelligence within organisations and how to make AI more trustworthy (Lei et al. 2016; Tarafdar et al. 2017).

Knowing about the role of finance in stimulating the development of the practical mathematics of judgement under uncertainty is useful because it highlights that practical mathematics has a different genealogy from the pure mathematics that represents the physical world. An important distinction highlighted by finance is that, in uncertain environments, relying on objective validity alone is insufficient and account must be taken of subjective and social validity. These points matter when thinking about the role of mathematics applied to social systems more generally, where mathematical models can actively affect the system. The risk of not clearly understanding the distinction between pure and practical mathematics is not that computers will come to behave like humans, but that humans will come to behave like computers.