International Currencies in the Lens of History
What makes for an international currency? What explains the predominant use of one or more national units in cross-border transactions? What explains changes in their popularity in absolute terms and relative to rivals? In the second half of the twentieth century, when the US dollar was the world’s international currency par excellence, these questions appeared to have obvious answers. The United States was far and away the largest economy in the world, and it engaged in the largest volume of international transactions. Doing international business in dollars was logical and attractive insofar as the dollar was stable and the United States had the largest and most liquid financial markets in the world. Finally, the United States had the capacity to project military and diplomatic power. As a country with a strong military, it was less vulnerable to attack from abroad of a sort that can destabilize its finances and economy and undermine confidence in its currency. Other countries held its currency as reserves as a way of signaling their allegiance or, equivalently, of offering hostages. But does this same answer to the question of what makes for an international currency – size, stability, liquidity, and ability to project military power – also explain the rise and fall of international currencies earlier in world history? This chapter reviews almost two millennia of international monetary history and concludes that the answer to this further question is, to a surprising extent, yes.
Keywords: International currency · Reserve currency · Gold standard · Dollar
What makes for an international currency? What explains the predominant use of one or more national units by individuals, firms, and governments undertaking cross-border transactions? What explains their rise and fall – that is to say, changes in their popularity in absolute terms and relative to rivals? And how many are there likely to be? Is international currency status a natural monopoly, in which a single leading national unit will always dominate international transactions, or can multiple currencies simultaneously play consequential roles in the international domain?
In the second half of the twentieth century, when the US dollar was the world’s international currency par excellence, these questions appeared to have obvious answers. The United States was far and away the largest economy in the world, and it engaged in the largest volume of international transactions. It was logical and convenient for American banks, firms, and individuals, when engaged in cross-border transactions, to expect payment to be made in dollars. And what made sense for them made sense equally for foreign banks, firms, and individuals seeking to attract their business.
Doing international business in dollars was logical and attractive insofar as the dollar was stable. Under the Bretton Woods System of the 1950s and 1960s, the dollar was pegged to gold at a fixed price of $35 an ounce, and other currencies were effectively pegged to the dollar. Subsequently the dollar fluctuated more widely on foreign exchange markets, but without obvious trend. Aside from a brief period in the late 1970s, the United States did not experience the kind of chronic high inflation that tends to undermine confidence in a currency. Nominal stability in the United States was not absolute, but it was impressive relative to that of countries whose currencies had been widely used in international transactions in earlier periods – Britain and France, for example.
Utilizing dollars in international transactions was also attractive because the United States had the largest and most liquid financial markets in the world. Dollars could be bought and sold at low cost, subject to minimal spreads. US banks with extensive foreign operations could make payments and extend loans to counterparties in virtually every corner of the globe. The liquidity of the market meant that dollars could be bought and sold in substantial quantities without moving prices against the investor initiating the transaction. And the fact that US treasury bonds were traded in the single largest and most liquid financial market made dollar securities the logical form in which central banks could hold their foreign reserves and financial and nonfinancial firms their working balances.
A fourth and final prerequisite for international currency status is the capacity to project military and diplomatic power. A country with a strong military will be less vulnerable to attack from abroad of a sort that can destabilize its finances and economy and undermine confidence in its currency. Other countries will want to hold its currency as reserves as a way of signaling their allegiance or, equivalently, of offering hostages.
These answers to the question of what makes for an international currency – size, stability, liquidity, and the ability to project military and diplomatic power – seemed obvious in the second half of the twentieth century, when the dollar dominated international transactions and only the United States possessed these attributes in abundance, and in some cases at all. Moreover, that the dollar far and away dominated in international transactions, in virtually all parts of the world, strongly pointed to the conclusion that international currency status is a natural monopoly and that only one national unit will play a consequential international role at a point in time.
Today in the twenty-first century, the answers to these questions are less obvious. America no longer accounts for as large a share of global GDP as in the dollar’s heyday after World War II. The United States has been overtaken by China as the world’s largest exporter. China has negotiated currency swap arrangements with foreign central banks and designated official renminbi clearing banks for financial centers around the world. It is prepared to challenge the United States in the geopolitical sphere, through foreign investments by its state-owned enterprises in Africa and by the Asian Infrastructure Investment Bank in East, South, and Central Asia (foreign investments, it should be noted, that are denominated in Chinese renminbi) and by building islands, complete with runways, in the South China Sea. Last but not least, the global financial crisis of 2007–2008, centered on the United States, understandably raised questions about the stability and liquidity of US financial markets.
But with what implications is unclear. For the moment, the dollar still remains the dominant currency in the international monetary and financial sphere. Perhaps this is an indication that the network effects (“it pays to do what everyone else is doing”) supporting dollar dominance in the past are not just powerful but persistent. Perhaps it means that large shocks like the two world wars that caused the dollar to finally supplant the pound sterling as the leading international currency, half a century and more after the United States became the leading exporter and overtook Britain as the world’s largest economy, will be required for the baton to again be passed. Alternatively, perhaps as we navigate the transition from a post-World War II era dominated by the United States to a more multipolar world in which economic leadership is provided simultaneously by several powers, we will similarly experience a transition to a more multipolar monetary and financial world where several national currencies play consequential international roles. We may just have to wait a bit longer to see it.
These are hypotheses and questions on which history can presumably shed light. The rise first of sterling and then of the dollar and the transition between them is the most immediate such history. But there is also a longer prehistory of international currencies, prior to the period of sterling and dollar dominance, on which informed observers might usefully draw.
A Byzantine Arrangement
The silver drachma coined in ancient Athens in the fifth century B.C. is sometimes referred to as the first international currency since it circulated beyond the borders of the Athenian Empire (Chown 1994; Dwyer and Lothian 2002). Its successors, the Roman gold aureus and silver denarius, circulated more widely, reflecting the greater geographical scope and greater military and administrative capacity of the Roman Empire. In practice the silver drachma continued to circulate alongside these Roman units for an extended period, an observation consistent with the “new view” of international currency status that multiple international currencies can coexist (Cohen 1998; Eichengreen et al. 2017).
These gold and silver coins were used by the Romans to pay their legions. Their soldiers of course had to transform those high-value gold and silver coins into smaller units practical for use in everyday transactions. It followed that the gold and silver coins passed into other hands and came to be used in a range of transactions in a range of places.
Acceptance of the aureus and especially the silver denarius declined from the first to fourth centuries AD with the mounting financial challenges and declining power of the Roman Empire. These problems were met in part by the imperial authorities through traditional methods, namely, currency debasement and inflation. The aureus was then supplanted by the Byzantine solidus introduced by Constantine the Great as part of a set of administrative, financial, and economic reforms starting in 306 AD, which were designed to combat inflation and strengthen the Eastern Roman Empire, what came to be known as Byzantium. The solidus had actually been introduced by Diocletian in 301 but was minted only on a small scale; it entered into widespread circulation 11 years later under Constantine. It was also Diocletian who had divided the increasingly ungovernable Roman Empire into western and eastern halves, the latter of which was inherited by Constantine.
The solidus, like the aureus and denarius before it, was used to pay imperial soldiers. Indeed the word “soldier” derives from solidus, referring to the solidi with which soldiers were paid. In the fifth century, a soldier received an allowance of four solidi a year instead of the food that his predecessors received when taxes had been paid in kind (Spufford 1988, p. 8). The solidus was of necessity used in large-value transactions; its gold content (4.5 g at a purity or fineness of 23 carats, or 95.8%) is today worth about US$185. Solidi were the heaviest gold coins circulating anywhere in the world, despite the fact that they were only the size of a modern American dime. As with the aureus and denarius before it, soldiers paid in solidi exchanged them for smaller monetary units, food, and other goods. In practice this meant that the coin was used mainly by merchants and aristocrats. Cipolla (1967, p. 26) refers to the solidus as an “aristocratic coin.”
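The arithmetic behind such valuations is simple: weight times fineness gives the pure gold content, which is then priced at the market rate for gold. A minimal sketch, in which the per-ounce gold price is an assumption (roughly the level prevailing when estimates of this kind were made) and the coin weights are the commonly cited figures:

```python
# Rough melt value of a historical gold coin from its weight, fineness,
# and an assumed gold price. The price of ~USD 1,330 per troy ounce is an
# assumption; at today's price the dollar figures scale proportionally.

GRAMS_PER_TROY_OZ = 31.1035

def melt_value(weight_g: float, fineness: float, price_per_oz: float) -> float:
    """Value of the pure gold in a coin, in the currency of the gold price."""
    pure_gold_g = weight_g * fineness          # actual gold content
    return pure_gold_g / GRAMS_PER_TROY_OZ * price_per_oz

# Solidus: ~4.5 g at 23 carats (95.8% fine); dinar: ~4.0 g at 20 carats.
solidus = melt_value(4.5, 23 / 24, 1330.0)    # roughly USD 185
dinar = melt_value(4.0, 20 / 24, 1330.0)
```

At the assumed price, the solidus comes out near the US$185 cited in the text, and the slightly lighter, lower-fineness dinar somewhat below it.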
The solidus circulated through much of Europe and Asia for the better part of a millennium. Hoards have been found in Central Europe, Russia, Georgia, Syria, and other Arab countries. That the imperial seat was moved from Rome to Byzantium, later Constantinople, elevated the stature of the Eastern Empire and encouraged use of the solidus in the Near East. Cipolla (1967, p. 16) writes that, from the fifth to seventh centuries, the coin circulated in the rich towns of the Near East, in the markets of North Africa, in the ports of Italy, and around the monasteries and castles of France and Spain. He quotes Kosmas Indicopleustes, a Greek monk and traveler, to the effect that the solidus was accepted “from end to end of the Earth.” Lopez (1951) summons archeological and numismatic evidence to conclude that the solidus was used in transactions everywhere from England to India.
With which hypothesis or hypotheses about international currency status is this historical experience consistent? The answer, it would appear, is all of them. First, international use of the solidus was encouraged by the relatively large economic size of the Byzantine Empire and the substantial volume of its international transactions. Lopez (1951, p. 224) argues that Byzantium had “one of the highest average standards of living in the early Middle Ages.” It was the richest and most cultured part of the Roman Empire once this was split into western and eastern halves by Diocletian. It engaged in a significant amount of foreign trade, which Lopez goes on to describe as “fairly large and well balanced,” at least prior to the eighth century, when that trade, especially that following land routes, went into steep decline (Brownworth 2009). Other authors like Herrin (2007) portray the attitude of the Byzantine aristocracy toward trade in less positive terms. The Byzantines put together a collection of regulations governing naval contracts, the so-called Rhodian Sea Law, to guarantee local merchants compensation from shipowners for damage or loss (Herrin 2007, p. 150). It is logical that their currency, and not that of the rump Roman Empire, now descending into the Dark Ages, should have been the successor to the aureus and the denarius.
Second, historians emphasize the stability of the unit as key to its attractiveness as an international currency. The gold content of the solidus remained the same from its introduction in the fourth century until well into the tenth, a strikingly long period of time. Lopez writes that “its record [in stability and intrinsic value] has never been equaled or even approached by any other currency.” The coin was minted mainly in Constantinople, under the close watch of the emperor. Subsidiary mints in other cities such as Rome, Milan, Syracuse, Alexandria, and Jerusalem were, by contrast, subject to problems of quality control.
Byzantium, and Constantinople in particular, was able to avoid currency debasement because it maintained a balanced budget, seemingly for more than 300 years. Starting with Diocletian’s own reforms, the government built up an impressive administrative apparatus with which to raise revenues through taxes on land, persons, and trade. And even when the budget was not balanced, prudent emperors accumulated a reserve of coin in good times on which they could draw in bad times (in military or political emergencies).
These policies were rooted in public support. The Byzantine Empire was characterized by a powerful landed aristocracy that supported the emperor. A landed aristocracy that depended on fixed dues paid by serfs and tenants would not have been pleased by inflation. The emperor, on the other hand, was largely indifferent to the interests of those whose obligations would have been eroded by currency debasement.
At a symbolic level, the solidus was identified with the power of the emperor, and this was one factor encouraging the government to maintain its value. If the solidus was stable and strong, thinking went, then the empire would be regarded as stable and strong. As Herrin (2007) puts it, the coin had “propaganda value.”
Finally, international use of the solidus was supported by Byzantium’s strong military. The solidus was the currency of an empire that for centuries after Constantine succeeded in controlling large swathes of territory and repelling invaders. The Byzantines’ ability to mobilize tax revenues had more than a little to do with this.
But the solidus, like the drachma, had rivals, again consistent with the so-called new view of international currency competition. From the late seventh century, cross-border transactions were also undertaken using the dinar, a solidus-like gold coin introduced by Abd al-Malik, the fifth caliph of the Syrian Umayyad dynasty, who sought to unify the Moslem lands. The Byzantines were forced to import most of the gold from which their coins were struck; al-Malik, by contrast, had an indigenous supply of gold, obtained from the Upper Nile, an advantage in overcoming barriers to entry.
Al-Malik’s motivation was to put an end to the monetary disorder that prevailed in the Moslem lands, where myriad coins of different weights and fineness circulated, causing confusion and complicating economic development. In modern terminology, he sought to create a “uniform currency.” In addition, this was a period of discord between the Umayyads and Byzantines over the merits of Islam and Christianity. The Umayyads were less than pleased by the circulation of coins bearing Christian religious symbols. Where the Byzantine cross appeared on the front of the solidus, al-Malik substituted a column placed on three steps topped by a sphere and the phrase “In the name of God, there is no deity but God; He is One; Muhammad is the messenger of God.” Justinian II’s response was to strike a new solidus with the head of Christ on the front and himself, robed and holding a cross, on the back (Ali 1999). The result was rising tensions between the Byzantine and Umayyad empires, culminating in war.
When al-Malik defeated Byzantine forces at the Battle of Sebastopolis in 692, the dinar became the sole circulating medium in the Moslem world, in a clear example of the association of international currency status with military power. It circulated not just in Syrian lands but in North Africa and Spain when these came under Umayyad control. Al-Malik issued a decree requiring that all Byzantine coins circulating in Umayyad territory be handed over to the treasury, which would melt them down and restrike them. Anyone failing to comply was subject to the death penalty. This, evidently, is one way that an existing international currency can be rapidly replaced by another.
Initially, the dinar had only a slightly lower gold content than the solidus (20 carats and 4.0 g) and was therefore used mainly in large-value, long-distance transactions. Al-Malik’s successors sought to generalize his success by introducing one-half and one-third dinar coins. The dinar’s gold content remained unchanged for fully two centuries following its introduction, again underscoring the association between currency stability and international use.
The subsequent period provided proof by counterexample, as both Byzantium and the Umayyads encountered problems of imperial overstretch not unlike those of the Greeks and Romans before them. Fiscal strains led to money creation and inflation achieved through debasement (in the Byzantine case mainly in the period ending around 1080). Reductions in the weight of the solidus (also now known as the nomisma) started in the late tenth century and continued into the eleventh. These reductions in size and weight, which were the principal method of debasement initially, made relatively little difference for acceptance, since in larger transactions coins were weighed rather than counted (Lopez 1951). Reductions in fineness, in contrast, were harder to detect and more corrosive of confidence. These followed in the late eleventh and twelfth centuries. “Byzantium’s prestige plummeted as international merchants abandoned the worthless coins” (Brownworth 2009, p. 221). For the Byzantine Empire, the arrival of the Fourth Crusade at the beginning of the thirteenth century (organized by the Republic of Venice, more on which below) was then a political and economic disaster. The empire was dismembered and forced to pay a costly tribute to the victorious Crusaders (Harris 2014).
The Italian Job
The principal coins to circulate internationally in the subsequent period were the Genoese genoin and the Florentine florin, introduced in the mid-thirteenth century. Genoa, which went first in 1252, was on “bad terms” (Lopez 1951, p. 213) with the emperors of Nicaea who presided over what remained of the Byzantine Empire and continued to mint gold coins. It therefore sought a circulating medium of its own. Venice followed, minting the ducat in the image of the florin in the late thirteenth century, prompted by the continued debasement of the solidus by Emperor Andronicus II. (Andronicus was in a battle to the death with the Ottomans, entailing costly expenditures on mercenaries. The Venetians for their part continued to conduct a good deal of trade and business with Byzantium, making for their reliance on the solidus and now their creation of an alternative.) These 24 carat gold coins were state of the art; they were as pure as the technology of the time allowed, encouraging their acceptance. They circulated side by side with one another and alongside the token coinage and silver coins used for smaller-value, mainly local, transactions, again consistent with the so-called new view.
The three Italian city-states had important mercantile connections; Genoa and Venice were entrepôt centers between the Levant and Western Europe. Florence had links with the Champagne fairs and traded with the east using ships leased from the Genoese and Venetians (Najemy 2006). The Venetian ducat in particular acquired and maintained its international role on this basis. Genoa and Venice had significant military prowess and were able to maintain enclaves abroad useful for their mercantile activities. They possessed a degree of natural protection by virtue of their geography, which made for security and political stability. They had relatively advanced fiscal administrations capable of raising the revenues required to finance the military operations needed to defend the state. They were technologically advanced by the standards of the day: Venetian merchant ships could carry as much as 700 tons of cargo already in the twelfth century (Madden 2012), helping to explain the Venetians’ prominence not only in commerce but also in the Crusades.
But as city-states they lacked the scale one might expect of the issuer of a leading international currency. For a time they controlled far-flung empires, but even then their territorial scope was limited. One can argue that what mattered was trade connections more than productive capacity, in a period when there were no other unitary states with extensive industrial, agricultural, or even pastoral potential. And like other international currency issuers before them, their units benefited from having a stable value. The florin was particularly stable, with no change in its design or metallic content for nearly 300 years, from the mid-thirteenth through the mid-sixteenth centuries. It was a high-value coin containing 3.5 g of gold of relatively high (99 percent plus) purity, meaning that it was used mainly for large commercial and financial transactions.
The “financial” part of this last observation is key. What mattered most, surely, for the three Italian currencies’ international roles was their development of large and liquid financial sectors, by the standards of the time. Venice and Genoa developed elaborate contracts, supported by a body of contract law, to facilitate collective finance of expensive commercial voyages (Epstein 1996). Venice invented double-entry bookkeeping. Florence and Venice were the inventors of deposit banking, which the Venetians claim grew out of the activities of their money changers at Rialto. The state required bankers to obtain a license, and as a condition for a license, the banker had to deposit with the state a sum of money that would be used to pay off depositors if the bank failed (see Madden 2012, p. 207 on the Venetian case, although the point is more general). These, then, were the early modern equivalent of capital requirements. They were an indication that for a financial system to acquire the depth and liquidity needed to support an international currency, it required more than the spontaneous stimulus of commerce. It depended in addition on the existence of a strong state capable of adequate regulation.
In time, these bankers became major providers and brokers of bills of exchange and letters of credit that were in practice denominated in and convertible into gold ducats, florins, and genoins. This class of credit instruments was first issued and changed hands among merchants and then others at the Champagne fairs, subsequently in settlements between Champagne and the banking houses of Tuscany and Venice, and ultimately more widely. They could be reliably converted into gold at a fixed rate of exchange. When large imbalances in bills of exchange built up in Florence, Genoa, or Venice, they were settled through transfers of bullion and coin. In this way bills of exchange became interchangeable, from the point of view of merchants and bankers, with bullion and coin (Spufford 1988, pp. 254–255).
Thus, bills on Venice, Genoa, and Florence substituted for and supplemented gold coins issued by these city-states. Effectively, Venice, Genoa, and Florence met the needs of international trade and finance with more than just coin, just as the United States today meets those needs with more than physical dollar bills. The Italian city-states pioneered the large-scale provision of financial services, first to local merchants, then to the fairs where large amounts of mercantile activity took place, and finally more broadly. Because Florentine banks had branches and did business throughout Europe (the Peruzzi company’s bank had branches everywhere from Rhodes to Paris), the florin in particular became the dominant trade coin and unit of denomination for large-scale transactions across the western part of the continent. At the beginning of the fifteenth century, Florence then experienced some monetary and financial troubles, and the florin was debased. Before long the Venetian ducat had overtaken the florin as the leading international currency (Cipolla 1967, p. 21).
It is interesting to contrast the widespread use of the three Italian currencies in international trade and finance with the much more limited use of the Spanish dollar, especially toward the end of the period. Spain accumulated massive troves of gold, silver, and copper when it acquired its colonial possessions in the Western Hemisphere starting in the late fifteenth century. Mints in Seville and Burgos began striking high-quality silver and copper coins as early as 1505. Spanish coins were then minted in the Americas, starting in Mexico City and later in Santo Domingo, Lima, Panama City, Potosi, Cartagena, and Bogota. These coins circulated throughout the Western Hemisphere; the Spanish dollar, or “piece of eight,” and its subunits were the dominant form of coin in the British North American colonies for two centuries prior to the Revolutionary War, since the British Parliament prohibited minting by the colonies themselves, accounting for their importation of and reliance on Spanish dollars. The coins in question were supplemented by paper money issued by colonial governments (Rothbard (2002) provides additional detail). These New World coins were also exported to Spain, where they were used as collateral and in settlement of the crown’s debts, and they circulated in Florence, Genoa, the Low Countries, and the Baltics. They were accepted in payment for imports from China, India, and other parts of Asia. As Stein and Stein (2000, p. 201) put it, “Europe’s internal exchanges were multiplying [in the period after 1720], matched by trade expansion to South India and on to the South China coast. Both were based on the reliable, mutually acceptable monetary foundation that only Spanish American silver pesos fuertes provided.” (Pesos fuertes were pesos imported into Asia via the Spanish East Indies, the Philippine Islands in particular.)
Spanish money does not appear to have been used as widely as Florentine, Venetian, or Genoese coinage, however, or for as extended a period. One explanation for this may be questions about stability and uniformity. From the seventeenth century, there were complaints about the weight and fineness of newly minted Spanish coins. An investigation organized under Philip IV in the mid-seventeenth century documented several decades of fraud and abuse at the Potosi mint. In the early eighteenth century, there were then widespread complaints about the silver content of coins produced by the Mexico City mint, where the business was farmed out to a group of wealthy merchants, who in turn bankrolled local mine owners. Although the substandard coins were recalled and, in the Mexico City case, the state assumed direct control of the mint, the reputational damage had been done.
In addition, the chronic fiscal difficulties of the crown, met in part by forced levies on taxpayers and intermittent debt restructuring (forced conversion of the monarch’s liabilities), slowed the development of Spanish financial markets (Drelichman and Voth 2014). Finally, it is argued that the very foundations of the dollar’s international currency status, namely, the silver “invasion,” created Dutch disease problems for the economy, slowing the growth of merchandise exports. These Dutch disease problems are discussed by Kindleberger (1996), who argues that Spanish industry and commerce did not revive until the eighteenth century. A recent empirical treatment is Drelichman (2003), who documents a strong and persistent increase in the relative price of non-traded goods, implying a decline in the production of exportables, following the silver discoveries.
Evidently, three of the key foundations that might have led to wider international use of the Spanish dollar – stability, extensive mercantile links, and financial development – were missing or at least inadequately provided. Thus, there is evidence of Spanish silver dollars being used in China from the sixteenth century, mostly in coastal provinces like Fukien and Kwangtung disproportionately engaged in cross-border transactions (Yang 1952, p. 48). But those coins circulated by weight rather than by face value, indicating problems of standardization that reflected the aforementioned political and technical problems at the mints and the resulting lack of uniformity.
Widespread international use of the three Italian currencies rested on a high level of mercantile activity – in modern terms, on the complementarity between trade and finance. It followed that, as commercial leadership shifted from the Mediterranean to the Low Countries, Italian monies were replaced as the leading international currency by the Dutch guilder (also referred to as the florin, reflecting the positive reputation and circulation there of Florentine coins). Starting in the early seventeenth century, the Dutch Republic became not just the leading commercial power but also the leading source of credit and international finance for trade-related activity. Once it acquired this role, the guilder remained the leading currency used in cross-border transactions for the balance of the seventeenth and eighteenth centuries, reflecting the substantial value of Dutch trade and financial transactions and the Dutch Republic’s retention of its commercial predominance.
A further factor, once again, was the currency’s stable value: there were no debasements of the guilder for more than 150 years after 1630. The guilder banco, deposit entries on the books of the Bank of Amsterdam (more on which below), was the prevailing unit of account. The physical guilder coin was the medium of exchange, at least initially. The guilder banco was traded at a slight and strikingly stable premium, known as the agio, relative to the physical guilder.
Here too financial innovation and development were key. The Bank of Amsterdam, established in 1609 by the governing council of the city, was central to the operation of this financial infrastructure. The Bank of Amsterdam is sometimes regarded as a proto-central bank (Quinn and Roberds 2005). It provided clearing and settlement services; converted specie into bank deposits at stable, standardized rates of exchange (after first deducting a small management fee); and held both gold and silver coin and bullion as reserves. It accepted and converted foreign as well as domestic coin, thereby supporting the growth of Amsterdam’s international financial connections (Dehing and t’Hart 1997, pp. 46–47). It paid large bills drawn on Amsterdam in bank money, that is, through the transfer of bank deposits. It came to dominate the market, both in Amsterdam, where city council regulation required all large bills of exchange to be settled through transfers of Bank of Amsterdam balances, and elsewhere on the basis of reputation.
In the prior period, a wide variety of different coins, foreign as well as domestic, clipped and worn as well as full bodied, had circulated in the Dutch Republic. This created a reluctance on the part of foreigners to accept payment in coin, or for that matter to settle transactions in Amsterdam, since they were uncertain in which coin exactly they would be paid. With the substitution of bank money for this heterogeneous circulation, this confidence problem was solved. From the establishment of the Bank of Amsterdam, and especially after 1700, bills on Amsterdam were accepted by merchants and bankers throughout the Baltic region and in Russia. Just as it has been argued that the establishment of the Federal Reserve System in 1913, and its subsequent support for the development of a market in trade credits, was a key condition for the subsequent emergence of the US dollar as an international currency (Eichengreen 2011), so the establishment of the Bank of Amsterdam was critical for wide international acceptance of the guilder.
This emphasis on finance is not to deny the importance of commerce. The Dutch Republic was the commercial superpower of the time. Technological advances in shipbuilding – design of the “fluyt” as a dedicated cargo vessel and new industrial methods for its construction – supported the explosive growth of the Dutch merchant marine, which accounted for fully half of all European shipping tonnage by the middle of the seventeenth century. The lion’s share of the merchandise they carried passed through Dutch ports, encouraging use of the guilder. Still, it can be argued that the Dutch case, coming on the heels of the Italian city-states, illustrates the growing importance of financial as opposed to commercial prerequisites for international currency status, just as today US financial development dominates China’s export prowess as a factor in the rivalry between the dollar and the renminbi.
In addition, the stability of the guilder, like the stability of the Italian currencies before it, raises interesting questions of political economy with relevance to modern experience. In Byzantium, as noted, the stability of the unit and avoidance of debasement were rooted in the support that the emperor derived from large landowners, who as creditors dependent on rents and dues fixed in nominal terms had a natural aversion to inflation. Support for the governing institutions of the Dutch Republic derived not so much from landed interests as from bankers and merchants. For those bankers and merchants, what was important was not so much avoiding modest inflation as avoiding serious volatility, which might be a source of high uncertainty that disrupted trade and finance. In fact, the guilder was allowed to depreciate modestly, notably in the period before 1650. What were successfully avoided were major outbreaks of volatility that would be bad for banking and trade. There is an obvious parallel with the dollar in the second half of the twentieth century: the currency could decline at least modestly on the foreign exchange market without eroding its international currency status, but only so long as serious spikes in volatility were avoided.
Conductor of the International Orchestra
Sterling’s rise to prominence as international currency primus inter pares can similarly be dated from the establishment of the Bank of England in 1694. In contrast to the Federal Reserve, in whose case it has been argued that the desire to internationalize the dollar was one of the motivations for founding the central bank (Broz 1997), enhancing sterling’s international role was not one of the immediate objectives of the founders of the Bank of England. Rather, the bank was established in the wake of England’s defeat by France in a series of naval encounters ending in 1690. William III had exhausted his credit in the unsuccessful campaign, leaving him few resources with which to rebuild the navy. The solution, following a plan devised by William Paterson, a Scottish merchant banker, was a bank to organize a loan. The Bank of England would manage balances generated through investor subscriptions, at some cost to itself, in return for specified monopoly privileges, notably the exclusive right, as a joint-stock company, to issue bank notes.
This was a hybrid public/private institution, like the Bank of Amsterdam before it and like the Swedish Riksbank established in 1668. It was also an institution, like the Bank of Amsterdam, whose responsibilities evolved over time. The Bank of England’s duties as debt manager expanded over the course of the eighteenth century, mirroring the expansion of the English national debt. Its role in overseeing the operation of both the gold standard and the British financial system was acknowledged by the Bank Charter Act of 1844, which divided the bank into an Issue Department responsible for the convertibility of the currency and a Banking Department responsible for the stability and operation of the financial system. When liquidity grew scarce, other banks could now turn to the Bank of England to rediscount their bills. The bank intervened as a lender of last resort starting in the Overend, Gurney Crisis of 1866 (Flandreau and Ugolini 2011), reassuring international investors that the market in sterling-denominated claims would remain liquid. It could adjust its discount rate to attract gold from abroad (7%, according to the popular maxim, was enough to “draw gold from the moon”), stabilizing its reserve and buttressing confidence in the sterling parity and the gold standard generally. Operations like these were what led Keynes (1930) to dub the bank “conductor of the international orchestra.”
Entire books have been written about the early history of the Bank of England. But this is enough to establish the essential point that the existence of a central bank to provide liquidity to the market was a necessary but not sufficient condition for sterling’s rise to international prominence.
“Necessary but not sufficient” because, historical experience suggests, other elements, plausibly four in number, had to fall into place to cement sterling’s international role. First, the financial system had to develop further to enhance the stability and liquidity of the market. The middle decades of the nineteenth century, in particular, were a period of rapid growth and structural change in the banking system. A formerly fragmented financial sector underwent significant consolidation, creating a more stable and confidence-inspiring banking system. Overseas lending expanded, increasing the exposure of foreign investors to sterling and raising London’s profile as an international financial center. The period after 1870 also saw growing circulation of treasury bills and bonds. These provided the banking system with a liquid asset in which to invest and the Bank of England with a convenient instrument with which to conduct open market operations, further enhancing the stability and attractions of the London market. Increasingly, the Bank of England bought and sold treasury securities on the market as a way of stabilizing interest rates at desired levels (Bloomfield 1959, p. 45). Others like Wood (1939) and Clapham (1944) note that the Bank of England engaged in sporadic open market purchases and sales in earlier periods, such as the 1830s, but such operations only became commonplace toward the end of the nineteenth century. Be this as it may, with interest rates and security prices relatively stable, treasury securities now became an increasingly attractive alternative to bank deposits for foreign governments and central banks seeking safe and liquid foreign assets.
Second, the economy had to develop so as to stimulate the volume of cross-border transactions. Britain’s position as the first industrial nation was intimately related to sterling’s position as the leading international currency. Britain was the world’s leading exporter throughout the nineteenth century, with manufactured goods accounting for 80% of its exports at the century’s end. Liverpool was a leading entrepôt center for the import and reexport of raw materials, starting with the cotton that was an essential input into the textile industry at the center of the first Industrial Revolution but extending eventually to a wide range of other commodities. London was the leading gold market, where a majority of the world’s newly mined gold was priced and traded. Britain had the world’s largest merchant fleet, a status it maintained well into the twentieth century. These developments in the real economy made for a large volume of overseas transactions (overseas rather than foreign because the Empire and Commonwealth were intimately involved), and a substantial fraction was naturally denominated in sterling.
Third, there had to be a consensus favoring currency stability. Some will trace this to the Glorious Revolution of 1688, which strengthened the political position of the large landowners who were the natural opponents of inflation and debasement (North and Weingast 1989). From these political changes limiting the arbitrary power of the crown and eliminating confiscatory government, it is argued, flowed the need to create the Bank of England, the deepening of British financial markets, and support for maintenance of the gold parity, established first by the Master of the Mint under royal authority in 1717 and then by Parliamentary Act in 1816. Gold convertibility might still be suspended under duress, as during the Napoleonic Wars and the financial crises of 1847, 1857, and 1866. But on each occasion it was restored subsequently at the earlier statutory price. The maintenance of convertibility was supported not just by the landed interests but by bankers, who saw it as central to London’s status as an international financial center, and by merchants and industrialists, who saw it as critical to Britain’s success as an exporter. By the late nineteenth century, as Frank Fetter (1965) put it, sterling’s fixed gold parity had attained almost constitutional status. “A suspension of the gold standard would have been almost inconceivable in Britain except under the most extraordinary circumstances,” as David Glasner (1989, p. 42) writes. And foreigners knew it.
Fourth and finally, the country had to be militarily secure. Napoleon had contemplated invading the British Isles, but this maneuver was beyond even his very considerable reach. Not only did Britain enjoy the natural protection of the channel, but Britannia ruled the waves: its extensive merchant marine was complemented by an equally extensive naval fleet. Eventually, in the run-up to World War I, its naval preeminence, like its economic preeminence, would be challenged by other rising powers, notably Germany. This in turn raised questions in the minds of contemporaries about Britain’s capacity to anchor the international gold standard (de Cecco 1974). But for much of the nineteenth century, sterling’s heyday as an international currency, these questions were remote.
The preceding makes it seem all but inevitable that sterling should have been the leading international currency in the second half of the nineteenth century, when these five forces so fortuitously combined. It is important therefore to emphasize that it did not monopolize this function. Both the French franc and German mark were consequential rivals, especially toward the end of the period. Data gathered by Flandreau and Jobst (2009) show that while sterling was quoted and actively traded on every foreign exchange market worldwide circa 1900, the French franc was also traded on 80% of those markets and the German mark on 60%. While sterling accounted for half of global foreign exchange reserves at the turn of the century, the French franc accounted for fully 30% and the German mark 15%, according to the estimates of Lindert (1969). The franc was backed by the Bank of France, established in 1800, and by the creation of important new deposit banks starting in the 1860s, including big banks that engaged in a considerable volume of foreign lending. The international role of the mark was supported by the Reichsbank, founded in 1876, 5 years after the creation of Imperial Germany, and a rapidly developing financial sector, including some institutions like Deutsche Bank, founded in 1870 with the express purpose of financing foreign trade (“to promote trade relations between Germany, other European countries and overseas markets” as stated in its 1870 statute).
To be sure, sterling had a head start as an international currency. The relevant political changes (the French Revolution, German unification) came later than the Glorious Revolution. The Industrial Revolution took time to diffuse from England and Wales to the European continent. But by the end of the nineteenth century, it is clear that sterling had nothing resembling a natural monopoly.
The currency that is prominent by its absence from this list is, of course, the US dollar. The United States had already overtaken the United Kingdom as the single largest economy by the 1870s. It overtook the United Kingdom as the single largest exporter on the eve of World War I. And as the war made clear, it now had the most powerful military, backed by the largest industrial sector, of any country.
At first sight, it thus seems paradoxical that the dollar played essentially no role as a currency in which to invoice and settle export and import transactions, as a unit for denominating international bonds, and as a form in which central banks and governments held their foreign reserves. On closer look, however, the paradox dissolves. The United States lacked a central bank to act as lender and liquidity provider of last resort prior to the establishment of the Federal Reserve System in 1913. In the absence of an elastic currency, the US financial system was prone to periodic bouts of financial stringency and crisis, a pattern that did nothing to attract foreign business to New York or enhance the attractions of the dollar. US banks were essentially prohibited from branching abroad under the National Banking Act put in place during the American Civil War. That internecine conflict was hardly a shining example of political stability and solidarity of the sort needed to engender confidence on the part of foreign investors. And the country’s commitment to the gold standard was a perennial question, at least prior to William Jennings Bryan’s defeat in the 1896 presidential election and passage of the Gold Standard Act of 1900, given its system of universal male suffrage, which extended the vote to small farmers and other debtors, and a Populist Movement that associated gold with deflation and hardship.
As in the case of the Bank of England before it, the creation of a central bank was a necessary condition for altering these conditions. A variety of factors and considerations came together to prompt the establishment first of the National Monetary Commission and then passage of the Federal Reserve Act. The most prominent was surely the perceived need for an “elastic currency,” as the concept was put in the act, provided by a central bank that could utilize its discount rate, in the manner of the Bank of England, to modify money and credit conditions as needed to prevent the seasonal spikes in interest rates that were a source of financial dislocation and distress.
But an important subsidiary motive was to internationalize the dollar: to create an institutional framework in which America’s currency could play a larger international role. Under the provisions of the Federal Reserve Act, US banks were permitted to open foreign branches and originate foreign business. The reserve banks were encouraged to discount and purchase trade acceptances – essentially, promissory notes financing import and export transactions – denominated in dollars, both on their own account and on behalf of foreign central banks. Paul Warburg, the German-American financier who testified before the National Monetary Commission and subsequently became a founding member of the Federal Reserve Board, was intimately familiar with the advantages accruing to European exporters from the existence of markets in trade acceptances denominated in European currencies. He actively sought the same advantages for his adopted country. While Warburg’s stint on the Federal Reserve Board was short, his influence was enduring. The System was quick to enter the market in trade acceptances, on which, for the better part of two decades, it was the dominant player.
The impact of the establishment of the Fed on the international role of the dollar is hard to pinpoint because the new central bank’s opening for business coincided with the outbreak of World War I. The European belligerents embargoed gold exports and placed financial flows under government control. Credit that had previously financed import and export transactions in third countries was now directed exclusively toward domestic needs. Previously stable European exchange rates began to fluctuate. In some cases they would fluctuate even more violently after the war, when administrative controls on capital flows were removed but normalcy in its other aspects was not yet restored.
From this monetary and financial turmoil, the United States stood apart, just as it stood apart from the war itself until 1917. Only the dollar exchange rate against gold did not move. Only US gold exports were officially free of embargo. US banks, having been liberated by the Federal Reserve Act, could step in and fill the vacuum created by the absence of European banks. Already during the war, they established branches in Latin America and Asia, followed after 1919 by branches in Europe itself. They were now well positioned to originate trade credits and underwrite foreign loans, all denominated in dollars.
It was not as if London and sterling were prepared to abdicate the throne. Maintaining London’s position as an international financial center was an important factor in Winston Churchill’s decision, as Chancellor of the Exchequer, to restore the prewar gold parity and $4.86 exchange rate against the dollar in 1925. Following earlier suspensions, resumption had always been at the earlier gold parity, as we saw above. To do otherwise now would diminish confidence in sterling and undermine the international position of the City (Moggridge 1971; Boyce 2004).
That decision was and remains controversial. It did not succeed in rejuvenating the British economy, if anything having the opposite effect. It did not even succeed in stabilizing the exchange rate for more than a brief period; Britain was forced to abandon the gold standard and once again allow the currency to depreciate in 1931. But it did allow sterling to regain its position as the leading international currency and to maintain it until after World War II, or so it is widely asserted (see, e.g., Chinn and Frankel 2007). This historical experience, so interpreted, thus lends support to the traditional interpretation of international currency status in network-effect and natural-monopoly terms.
Yet the fact that the sterling parity restored in 1925 was now referred to in dollar terms (“the Norman conquest of $4.86” after Montagu Norman, governor of the Bank of England), terms of reference that would not have been used before World War I, sits uneasily with this traditional interpretation. Those terms of reference are a clear indication that something had changed, that something being a more prominent dollar. By the mid-1920s, central banks already held as many reserves in dollars as sterling (Eichengreen and Flandreau 2009). Trade acceptances denominated in dollars were already as important as trade credits denominated in sterling (Eichengreen and Flandreau 2012). More international bonds were denominated in dollars than sterling, reflecting the rise of New York as an international financial center and the impact of controls and moral suasion utilized in an effort to limit long-term foreign investment by Britons and strengthen the country’s balance of payments (Eichengreen et al. 2013). All this is more easily reconciled with the “new view” questioning the power of network effects and increasing returns and suggesting that multiple international currencies can coexist.
Two explanations suggest themselves for why these facts were not better appreciated until recently. One is data limitations. Data on the currency composition of foreign exchange reserves in the 1920s and 1930s were only fragmentary until new evidence was extracted from the archives in the last decade. And data on the composition of reserves in the second half of the 1940s seemingly suggested – misleadingly, it will be argued below – that sterling remained the dominant reserve currency even in the aftermath of World War II.
The other explanation is the retreat of the dollar in the 1930s – its declining share in various international markets, as mentioned above. In part this reflected the retreat of international currencies more generally. Having suffered capital losses on their foreign balances due to currency devaluation, central banks liquidated their foreign deposits and bonds, shifting instead into gold. The collapse of world trade in the depression of the 1930s reduced the demand for trade credit whether sourced in New York, London, or other financial centers. Sovereign defaults demoralized the market and depressed the value of new bond issues marketed to international investors, independent of currency of denomination. The proliferation of capital controls repressed cross-border financial flows more generally.
But the decline in international use of the dollar was even more pronounced than the decline in the use of sterling on several of these metrics. The liquidation of dollar reserves was more complete. There was no dollar analog to the Sterling Area, a group of countries that continued to peg their currencies to sterling, with varying degrees of rigidity, and to hold the bulk of their foreign exchange reserves in London. The cohesion of the Sterling Area reflected ties of Commonwealth and Empire; for reasons of tradition and affiliation, London was the logical place to maintain reserves for countries like Australia and New Zealand. The Ottawa Agreements and Imperial Preference gave these and other countries preferential access to the British market, again making London the logical place to do international financial business. When the United States took its protectionist turn in the 1930s, it did not extend analogous preferences.
In addition, sovereign default rates were higher on dollar bonds than sterling bonds, discouraging new dollar issuance. The US banking and financial crisis was more severe, causing American banks to disproportionately curtail their security-market operations. Even Federal Reserve support for the market in trade acceptances became a liability when the central bank, preoccupied by other matters, withdrew from the market starting in 1931, revealing the extent to which its active acceptance purchase and discounting program had slowed the entry of other investors.
The 1930s, a period of high turbulence and economic and financial crisis for the United States, thus marked a pause in the dollar’s rise to the status of leading international and reserve currency. But there would be no analogous pause after World War II.
The second half of the twentieth century was the period of dollar dominance. The United States dominated the post-World War II international monetary system, just as it dominated the wartime Bretton Woods Conference where the institutional contours of that system were forged. The United States was the world’s sole monetary superpower. The Soviet Union and the Republic of China were present at Bretton Woods, but following the conclusion of World War II, Stalinist Russia withdrew from the international economy, taking the rest of the Soviet Bloc with it. Chiang Kai-shek’s Republican government withdrew to the island of Taiwan, leaving the Chinese mainland in Communist hands.
Of the countries that remained active participants in the international system, only the United States possessed the geopolitical clout associated with widespread international use of its currency for trade, investment, and reserve-holding purposes. It had the largest GDP of any country, and for a brief period after World War II, it accounted for fully half of the free world’s industrial production. It was far and away the leading exporter. Other countries, in particular European countries grappling with the challenges of postwar reconstruction, were desperate to get their hands on US-produced capital goods, resulting in chronic US trade surpluses. This made dollars a precious commodity. US financial markets survived the war intact, not something that could be said of many other countries. The United States was the leading source of foreign investment, mainly direct foreign investment in the immediate postwar years. It was the leading source of foreign aid, most famously extended through the Marshall Plan. The US investment and aid in question, along with eventually their own exports, enabled other countries to accumulate dollar balances. And the heavy weight of the United States in global current- and capital-account transactions, not to mention the security umbrella it provided its allies, gave other countries an incentive not just to hold dollars but to use them in their international transactions.
The mirror image of US economic and financial strength was weakness abroad. Germany and Japan had both demonstrated considerable industrial muscle during the war, and Germany, if not Japan, had been a significant international monetary player in an earlier era, as argued in the previous sections of this chapter. But both countries were now under foreign occupation, and Germany was partitioned. They were reluctant to relax the exchange and capital controls whose removal was the sine qua non for widespread international use of a currency. Germany only restored full convertibility for transactions on current account at the end of 1958, while Japan delayed that step until 1964. Japan only finally removed the last of its capital controls in the 1980s. Germany moved more quickly to restore capital-account convertibility, but it was not afraid to reintroduce controls when free international capital mobility and domestic policy priorities proved to be at odds, as was the case in the early 1970s. Free capital mobility was incompatible with the industrial policy operated by the Japanese Ministry of International Trade and Industry (MITI), which directed financial resources to priority uses. It was incompatible with the West German desire for a competitive exchange rate to fuel the postwar growth miracle (the Wirtschaftswunder), and the now deep-seated German aversion to inflation, insofar as the combination of a pegged exchange rate and capital mobility made an independent monetary policy and domestic inflation control effectively impossible. Thus, Tokyo and Bonn both resisted currency internationalization. They took regulatory and other measures to discourage wider international use of their currencies (Eichengreen et al. 2016).
But, in any meaningful economic sense, this image of sterling’s continued dominance was an illusion. The British economy was one of the most slowly growing in Europe. The United Kingdom was no longer able to finance substantial foreign military commitments, a fact made clear by its 1947 withdrawal from Greece and its acquiescence to colonial independence movements. Its balance of payments was in chronic deficit, as underscored by the failed attempt to restore current-account convertibility in 1947, devaluation in 1949, and then ballooning external deficits in 1951–1952. This was not the kind of stability expected of a leading international currency – to the contrary.
Foreign central banks and governments thus had a clear incentive to liquidate their sterling balances – to convert them into merchandise or dollars – while they still had value. The loyalty and sense of camaraderie that had been felt so powerfully during the war, among members of the British Commonwealth in particular, was no longer so pervasive following its conclusion (Schenk 2010). As a result, the British government was forced to take measures to limit the conversion of sterling balances. Those balances were held in London: hence they were subject to UK regulation on their withdrawal (in the case of bank deposits) or sale (in the case of bonds). A mitigating factor was that the United Kingdom itself desired market access and, in negotiations with the United States, access to technology and financial assistance. The result was a negotiated settlement in which the sterling balances of different countries were treated differently. The balances of Sterling Area countries could be used for purchases of merchandise and other financial assets within the Sterling Area itself but not elsewhere. The so-called Transferable Account countries, mainly European countries in practice, were permitted to use their sterling reserves for payments between Transferable Accounts and Sterling Area accounts but not for payments to so-called American account countries (members of the Dollar Area).
The effect was to limit opportunities for converting sterling into dollars and using it to purchase merchandise in the Dollar Area. Sterling could be redistributed among Sterling Area countries, but residents could liquidate their sterling reserves only by using them in settlements with the United Kingdom itself. The British government fostered the practice by maintaining trade and capital controls.
The consequences are evident in Fig. 1, which shows not the sudden liquidation of sterling balances but an ongoing decline in its relative position (i.e., its share of global total foreign exchange reserves) stretching over several decades. In particular, there was a relatively rapid decline in the share of sterling in identified global foreign exchange reserves in the first post-World War II decade, a somewhat slower decline in the second postwar decade, and then an accelerating decline again in the third, following another sterling devaluation in 1967 and intensifying British balance-of-payments problems. Numerically, sterling was supplanted first by the dollar, for all the reasons described above, and then by the deutschemark, the yen, and a variety of subsidiary currencies, as these and other countries liberalized their financial markets and opened their capital accounts.
Also evident from Fig. 1 is that the dollar was never the sole international reserve currency at any point in the second half of the twentieth century. When currency holdings are valued at current exchange rates, the dollar’s share peaked at the end of the 1970s at around 80% of the global total. There was then an undulating decline in that share toward a relatively stable 60%. Thus, the dollar’s share of identified foreign exchange reserves at the beginning of the twenty-first century was roughly the same as sterling’s share at the beginning of the twentieth. Like sterling a century before, the dollar now accounted for the largest single fraction of official foreign currency holdings, but not the entirety.
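The qualifier "valued at current exchange rates" matters for such comparisons: a currency's reserve share is computed from the dollar value of holdings, so exchange-rate movements alone can shift measured shares even when no central bank buys or sells anything. A minimal sketch of the calculation, using hypothetical holdings and rates (illustrative only, not data from Fig. 1):

```python
def reserve_shares(holdings: dict[str, float],
                   usd_rates: dict[str, float]) -> dict[str, float]:
    """Share of each currency in total reserves, valued in dollars.

    holdings:  amounts held, in each currency's own units.
    usd_rates: dollars per unit of each currency.
    """
    usd_values = {ccy: holdings[ccy] * usd_rates[ccy] for ccy in holdings}
    total = sum(usd_values.values())
    return {ccy: value / total for ccy, value in usd_values.items()}


# Hypothetical holdings and exchange rates (illustrative only):
holdings = {"USD": 600.0, "DEM": 250.0, "JPY": 20000.0}
rates = {"USD": 1.0, "DEM": 0.5, "JPY": 0.01}
shares = reserve_shares(holdings, rates)

# A pure DEM appreciation raises the mark's measured share
# even though no holdings changed:
shares_after = reserve_shares(holdings, {**rates, "DEM": 0.6})
```

This valuation effect is one reason the dollar's measured share "undulated" rather than moving smoothly: dollar depreciation mechanically raises the shares of the other reserve currencies.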
The other international reserve currency in this period was not, in fact, a currency: it was the IMF’s Special Drawing Rights. SDRs are accounting units credited to IMF members, upon agreement by members holding 85% of the institution’s voting power, which members are obliged to accept in official transactions with the IMF itself and one another. The idea of a multilateral source of international liquidity to supplement national currencies had been mooted by Keynes at Bretton Woods, but serious negotiations only got underway in the second half of the 1960s. Agreement was reached in 1969, and the first allocation of SDRs occurred on January 1, 1970. Following additional allocations in 1971 and 1972, SDRs accounted for approximately 10% of global non-gold reserve assets.
Additional SDR allocations followed periodically. But the overall trend in the SDR’s share was downward, to the point where it accounted for little more than 2% of non-gold reserve assets in 2015. This trend reflected constraints on both the supply and demand sides. On the supply side, the approval of members holding 85% of IMF voting power, as also required to amend the institution’s Articles of Agreement, was a formidable hurdle. To clear it, it was necessary to agree on the distribution of additional drawing rights across countries: whether to allocate them in proportion to countries’ quota shares in the institution, in which case the lion’s share would accrue to high-income countries, or instead to distribute them to low-income countries, in which case a different, ad hoc formula would have to be devised. Insofar as the purpose of creating additional SDRs was to supplement and eventually supplant the dollar, the fact that the United States alone possessed more than 15% of voting power in the Fund constituted a further obstacle.
On the demand side, there was the fact that the SDR lacked a number of the essential attributes of the dollar and other national currencies used in international transactions. The SDR could not be used to settle private transactions, only official transactions among governments. There was limited demand by private investors for international bonds denominated in the basket of currencies comprising the SDR, and there was no liquid secondary market in SDR-denominated instruments. A few token SDR bonds were issued in the 1970s, and the World Bank sold SDR-denominated bonds in the Chinese market in 2016, but no more. Banks and other private financial institutions might have issued SDR-linked or SDR-denominated securities (private financial institutions having shown themselves as good, sometimes too good, at concocting innovative financial instruments), but there was no demand. With the progressive liberalization of domestic and international financial markets, investors could construct a bond portfolio with currency weights of their choice that dominated the IMF’s fixed-weight basket of national currencies.
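The valuation mechanics behind such a basket unit can be sketched as follows: the unit is defined as a fixed amount of each constituent currency, and its dollar value on any day is the sum of those amounts converted into dollars at market exchange rates. The currency amounts and exchange rates below are hypothetical, chosen purely for illustration; they are not official IMF figures.

```python
# Illustrative valuation of a fixed-weight basket unit such as the SDR:
# one unit is defined as fixed amounts of several currencies, and its
# dollar value is the sum of those amounts at market exchange rates.
# All amounts and rates below are hypothetical, for illustration only.

basket_amounts = {   # units of each currency per one basket unit (illustrative)
    "USD": 0.58,
    "EUR": 0.39,
    "CNY": 1.02,
    "JPY": 11.90,
    "GBP": 0.086,
}

usd_per_unit = {     # hypothetical market exchange rates (USD per currency unit)
    "USD": 1.00,
    "EUR": 1.10,
    "CNY": 0.15,
    "JPY": 0.009,
    "GBP": 1.30,
}

def sdr_value_in_usd(amounts: dict, rates: dict) -> float:
    """Dollar value of one basket unit: sum of currency amounts times USD rates."""
    return sum(amounts[c] * rates[c] for c in amounts)

value = sdr_value_in_usd(basket_amounts, usd_per_unit)
print(f"1 basket unit = {value:.4f} USD")
```

Because the amounts are fixed while exchange rates move, the unit's dollar value fluctuates less than any single non-dollar constituent, which is precisely why investors who wanted different weights could replicate or dominate it with portfolios of their own design.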
The SDR was created in response to the idea that the Bretton Woods System was unstable. There was a contradiction at the heart of a system predicated on a stable dollar price of gold that also depended on dollars as the incremental source of international liquidity, a point made by Triffin (1960) and his followers. Absent rapid progress on substituting SDRs for dollars, it might become impossible to keep the dollar price of gold stable. The dollar might have to be devalued, undermining confidence in the principal source of international liquidity and throwing the global economy and international financial system into chaos.
In the event, the forecast of dollar devaluation was right, but the prediction of international financial chaos was wrong. The Bretton Woods System collapsed in 1971–1973, when first the dollar was devalued and then other currencies were floated. But there was no sharp reduction in the demand for international reserves, as some experts had predicted, and no sharp shift away from the dollar as a form in which to hold them. From the time of the Roman aureus through the post-World War II dollar, international currencies had been minted from or otherwise linked to precious metal. The architects of the Bretton Woods System carried on in this tradition when they obliged IMF member countries to declare par values for their currencies “in dollars of constant gold content” and singled out the dollar because it alone was convertible into gold at a fixed price by official foreign holders. Many observers therefore concluded that with the end of the dollar’s fixed link to gold, the currency’s international role would be significantly attenuated if not eliminated.
That this was not the case is again evident from Fig. 1. As had been true for two millennia, the acceptability of a national currency in international transactions still depended on the size, stability, and security of the issuer and the liquidity of its financial markets. But now stability was gauged by more than the stability of its value in terms of gold. It was gauged rather by the stability of its finances, its policies, and, ultimately, its economy.
Looking to the Future
While history provides no road map for the future, it does offer hints for how to think about it, in this case about the future of international currencies. Many historical periods have seen a dominant unit, in which a significant fraction of cross-border transactions were conducted in a particular national currency, network increasing returns serving as a powerful attractor. But any such dominance has regularly fallen short of being absolute. Working in the other direction is the desire of central banks, governments, and other investors not to put all their eggs in one basket but rather to hold a diversified portfolio of foreign currencies and to conduct cross-border transactions using settlement mechanisms that utilize different national units. That said, the desired degree of portfolio diversification can generally be achieved by accumulating and utilizing a limited number of currencies. Nor is there reason to expect the international currencies in question to be held in equal amounts; one currency (the guilder, then sterling, and most recently the dollar) has always been disproportionately important in international markets, reflecting the continuing sway of those network effects.
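The eggs-in-one-basket logic can be made concrete with a toy calculation: a reserve portfolio spread across a few imperfectly correlated currencies has lower volatility than a single-currency holding, and most of the gain arrives after only a handful of currencies. The volatilities and the common correlation below are hypothetical, for illustration only.

```python
# Toy illustration of reserve diversification: spreading holdings across
# imperfectly correlated currencies lowers portfolio volatility relative
# to a single-currency position. All parameters are hypothetical.
import math

def portfolio_vol(weights, vols, corr):
    """Standard deviation of a portfolio assuming one common pairwise correlation."""
    var = 0.0
    for i, (wi, si) in enumerate(zip(weights, vols)):
        for j, (wj, sj) in enumerate(zip(weights, vols)):
            rho = 1.0 if i == j else corr
            var += wi * wj * si * sj * rho
    return math.sqrt(var)

vols = [0.10, 0.10, 0.10]   # hypothetical annual volatility of each currency
single = portfolio_vol([1.0, 0.0, 0.0], vols, 0.3)   # all eggs in one basket
spread = portfolio_vol([1/3, 1/3, 1/3], vols, 0.3)   # equally weighted basket

print(f"single-currency vol: {single:.4f}")   # prints 0.1000
print(f"three-currency vol:  {spread:.4f}")   # prints 0.0730
```

Note that going from three currencies to thirty would shave off comparatively little further volatility, which is consistent with the observation that the desired diversification can be achieved with a limited number of units.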
The long sweep of history suggests that the balance is tipping, if slightly and gradually, away from network effects and toward portfolio diversification. With the development of modern financial markets and instruments, it becomes less necessary to utilize the same national unit as one’s trading partners when engaging in international transactions, since it is easier to exchange one currency for another. With advances in information technology, it becomes easier to compare prices denominated in different currencies. And with the development of national financial markets, it becomes more attractive to diversify portfolios, that is, to hold and transact in a variety of different currencies. This historical trend, if it continues, suggests that the dollar will have more rivals, and perhaps more consequential rivals, in the future than in the past.
The euro and the Chinese renminbi are the most obvious such rivals, since only the Euro Area and China compare to the United States in economic size. But historical experience also raises questions about whether these economies possess the other attributes required of the issuer of an international currency. In the case of the Euro Area, these center on whether the monetary union can develop the strong state typically associated with a sound and stable currency that is widely accepted and utilized internationally. This means a state apparatus with the capacity to efficiently regulate financial markets – banking union, in other words. It means a state with the fiscal capacity to execute its core functions in noninflationary ways – fiscal union or at least significant fiscal integration at the level of the union. It means a state capable of defending its integrity and dispelling doubts about its future. Historically this has meant a state with a strong military. In the European case, it probably means a union of states capable of collectively securing their common external borders and sufficiently committed to their monetary-union project to eliminate doubts about its permanence. This is not a project that will be completed overnight.
In China’s case, in contrast, what is needed is not a stronger state but a more limited state whose authority is subject to checks and balances. Just as political change acknowledging the interests of landowners, merchants, and other investors was needed to secure the status of international currencies from the Byzantine solidus and the Venetian ducat to the Dutch guilder and the British pound, so too political change limiting the arbitrary exercise of power by the Standing Committee of the Communist Party will be needed to secure widespread international acceptance of the renminbi. Limits on the arbitrary exercise of political power and confidence in the arm’s-length adjudication of contract disputes, including disputes involving foreign investors, will be needed for the development of deep and liquid financial markets on which there is active foreign participation. How far these political changes will have to go is uncertain. What is certain, once again, is that they will not be completed overnight.
But then, as any issuer of the aureus could have told you, Rome was not built in a day.
- Ali W (1999) The Arab contribution to Islamic art from the 7th–15th centuries. American University in Cairo Press and Royal Society of Fine Arts, Amman
- Bloomfield A (1959) Monetary policy under the international gold standard, 1880–1914. Federal Reserve Bank of New York, New York
- Brownworth L (2009) Lost to the west: the forgotten Byzantine Empire that rescued Western civilization. Crown Publishers, New York
- Broz L (1997) The international origins of the Federal Reserve System. Cornell University Press, Ithaca
- Chown J (1994) The history of money from AD 800. Routledge, London
- Cipolla C (1967) Money, prices and civilization in the Mediterranean world: fifth to seventeenth century. Gordian Press, New York
- Clapham J (1944) The Bank of England: a history, 2 volumes. Cambridge University Press, Cambridge
- Cohen B (1998) The geography of money. Cornell University Press, Ithaca
- De Cecco M (1974) Money and empire: the international gold standard. Blackwell, Oxford
- Drelichman M (2003) The curse of Moctezuma: American silver and the Dutch disease. Unpublished manuscript. University of British Columbia (November)
- Drelichman M, Voth H-J (2014) Lending to the borrower from hell: debt, taxes and default in the age of Philip II. Princeton University Press, Princeton
- Dwyer G Jr, Lothian J (2002) International and common currencies in historical perspective. Unpublished manuscript. Federal Reserve Bank of Atlanta and Fordham University (May)
- Eichengreen B (2011) Exorbitant privilege: the rise and fall of the dollar and the future of the international monetary system. Oxford University Press, New York
- Eichengreen B, Mehl A, Chiţu L (2013) When did the dollar overtake sterling as the leading international currency? Evidence from the bond markets. J Dev Econ 111:225–245
- Eichengreen B, Mehl A, Chiţu L (2017) International currencies past, present and future: two views from economic history. Princeton University Press, Princeton
- Epstein S (1996) Genoa and the Genoese 958–1528. University of North Carolina Press, Chapel Hill
- Fetter F (1965) The development of British monetary orthodoxy 1797–1873. Harvard University Press, Cambridge, MA
- Flandreau M, Ugolini S (2011) Where it all began: lending of last resort and the Bank of England in the Overend-Gurney Panic of 1866. Norges Bank Working Paper no. 2011/03 (March)
- Harris J (2014) Byzantium and the crusades, 2nd edn. Bloomsbury Academic, London
- Herrin J (2007) Byzantium: the surprising life of a medieval empire. Princeton University Press, Princeton
- Keynes JM (1930) A treatise on money. Macmillan, London
- Kindleberger C (1996) World economic primacy, 1500–1990. Oxford University Press, New York
- Lindert P (1969) Key currencies and gold, 1900–1913. Princeton Studies in International Finance, vol 24. International Finance Section, Department of Economics, Princeton University, Princeton
- Madden T (2012) Venice: a new history. Viking, New York
- Moggridge D (1971) British controls on long-term capital movements, 1924–31. In: McCloskey D (ed) Essays on a mature economy: Britain since 1840. Princeton University Press, Princeton, pp 113–138
- Quinn S, Roberds W (2005) The big problem of large bills: the Bank of Amsterdam and the origins of central banking. Federal Reserve Bank of Atlanta working paper no. 2005–16 (August)
- Rothbard M (2002) A history of money and banking in the United States: the colonial era to World War II. Ludwig von Mises Institute, Auburn
- Stein S, Stein B (2000) Silver, trade and war: Spain and America in the making of early modern Europe. Johns Hopkins University Press, Baltimore
- Triffin R (1960) Gold and the dollar crisis: the future of convertibility. Yale University Press, New Haven