Abstract
I introduce a general equilibrium model of non-optimizing agents that respond to aggregate variables (prices and the average demand profile of agent types) by putting a “prior” on their demand. An interim equilibrium is defined by the posterior demand distribution of agent types conditional on market clearing. A Bayesian general equilibrium (BGE) is an interim equilibrium such that aggregate variables are correctly anticipated. Under weak conditions, I prove the existence and the informational efficiency of BGE. I discuss the conditions under which the set of Bayesian and Walrasian equilibria coincide and show that the Walrasian equilibrium arises from a large class of non-optimizing behavior.
Notes
The support of a Borel measure \(\mu \) on a second countable topological space \(X\), denoted by \(S=\mathrm{supp }\mu \), consists of the points that do not have a neighborhood with measure zero. Let \(\fancyscript{U}\) be a countable base of the topology (for the case \(X=\mathbb {R}^C\), it suffices to take the family of all open balls with rational radii and centers with rational coordinates) and let \(S=X\backslash \bigcup _{U\in \fancyscript{U}:\mu (U)=0}U\). Since \(\fancyscript{U}\) is a countable base of the topology, it follows that \(S\) is closed, \(\mu (S)=\mu (X)\), and \(\mu (U\cap S)>0\) whenever \(U\) is open and \(U\cap S\ne \emptyset \). Hence, \(S=\mathrm{supp }\mu \).
I follow Foley (1994) in calling \(X_i(p,x)\) the offer set, but a more appropriate term might be offer correspondence, since sets are parametrized by the price vector \(p\) and the average demand profile \(x\).
This definition can be understood as follows. Take the prior measure \(\mu _i(p,x)\) as the reference measure \(\mu \) in (7.2) and note that \(f_i\) corresponds to the posterior density; then \(q=1\), and hence the Kullback–Leibler information of a single agent of type \(i\) is
$$\begin{aligned} H_i=H(f_i;1)=\int f_i\log f_i\mathrm {d}\mu _i(p,x). \end{aligned}$$Now suppose that there are \(N_i\) agents of type \(i\), let the total number of agents be \(N=\sum \nolimits _i N_i\), and let the proportion be \(n_i=N_i/N\). In general, the entropy of the joint distribution of two independent random variables is the sum of their entropies (see, for example, the excellent introductory textbook of Cover and Thomas 2006). This additivity carries over to the Kullback–Leibler information. Therefore, the economy-wide information is
$$\begin{aligned} H=\sum _{i=1}^I N_iH_i=\sum _{i=1}^I N_i\int f_i\log f_i\mathrm {d}\mu _i(p,x). \end{aligned}$$Dividing this expression by \(N\), we obtain the per capita information (2.2).
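This additivity is easy to verify numerically. A minimal Python sketch (with made-up priors and posteriors for two agent types; the function names and numbers are illustrative only) checks that the Kullback–Leibler information of the product distribution equals the sum of the individual informations, and computes the per capita information in the spirit of (2.2):

```python
import math

def kl(posterior, prior):
    """Kullback-Leibler information sum p_k log(p_k / q_k) for discrete distributions."""
    return sum(p * math.log(p / q) for p, q in zip(posterior, prior) if p > 0)

# Two independent "agent types" with illustrative priors and posteriors.
f1, mu1 = [0.7, 0.3], [0.5, 0.5]
f2, mu2 = [0.2, 0.8], [0.4, 0.6]

# Joint (product) distributions for independent draws, in matching order.
f12 = [a * b for a in f1 for b in f2]
mu12 = [a * b for a in mu1 for b in mu2]

# Additivity: information of the joint equals the sum of the marginals.
assert abs(kl(f12, mu12) - (kl(f1, mu1) + kl(f2, mu2))) < 1e-9

# Per capita information with N1, N2 agents of each type.
N1, N2 = 30, 70
per_capita = (N1 * kl(f1, mu1) + N2 * kl(f2, mu2)) / (N1 + N2)
print(per_capita)
```
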
\(\mathrm{cl }A\) and \(\mathrm{co }A\) denote the closure and the convex hull of \(A\), respectively.
Here, I am using the term ‘arbitrage’ informally to refer to a situation in which agents execute trades that yield higher values than they had expected. Thus, ‘arbitrage’ here is synonymous with “getting a good deal by luck.”
See, for example, Equation (6.3.3) in Ljungqvist and Sargent (2004), p. 144.
In the literature this quantity is also known as the relative entropy, cross-entropy, information gain, \(I\)-divergence, etc.
\(\mathrm{dom }f=\left\{ x\in X|f(x)<\infty \right\} \) is the domain of \(f\).
For a subset \(A\) of a vector space, \(\dim A\) denotes the dimension of the smallest affine space that contains \(A\).
For example, \(f_n\rightarrow _c f\) if \(f_n\rightarrow f\) uniformly on compact sets and \(f\) is continuous. To see this, let \(K\) be a compact neighborhood of \(x\) and take \(N\) such that \(n>N\) implies \(x_n\in K\). Then
$$\begin{aligned} \left|f_n(x_n)-f(x)\right|\le \left|f_n(x_n)-f(x_n)\right|+\left|f(x_n)-f(x)\right|\rightarrow 0. \end{aligned}$$
References
Becker, G.S.: Irrational behavior and economic theory. J. Polit. Econ. 70(1), 1–13 (1962)
Becker, R.A., Chakrabarti, S.K.: Satisficing behavior, Brouwer’s fixed point theorem and Nash equilibrium. Econ. Theory 26(1), 63–83 (2005). doi:10.1007/s00199-004-0519-z
Bewley, T.F.: Why Wages Don’t Fall During a Recession. Harvard University Press, Cambridge (1999)
Borwein, J.M., Lewis, A.S.: Duality relationships for entropy-like minimization problems. SIAM J. Control Optim. 29(2), 325–338 (1991). doi:10.1137/0329017
Borwein, J.M., Lewis, A.S.: Partially finite convex programming, part I: Quasi relative interiors and duality theory. Math. Program. 57(1–3), 15–48 (1992). doi:10.1007/BF01581072
Cabrales, A., Gossner, O., Serrano, R.: Entropy and the value of information for investors. Am. Econ. Rev. 103(1), 360–377 (2013). doi:10.1257/aer.103.1.360
Caticha, A., Giffin, A.: Updating probabilities. In: Mohammad-Djafari, A. (ed.) Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Volume 872 of AIP Conference Proceedings, pp. 31–42 (2006). doi:10.1063/1.2423258
Chakrabarti, S.K.: On the robustness of the competitive equilibrium: utility-improvements and equilibrium points. J. Math. Econ. (2014). doi:10.1016/j.jmateco.2014.08.008
Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley, Hoboken (2006)
Csiszár, I.: Sanov property, generalized \(I\)-projection and a conditional limit theorem. Ann. Prob. 12(3), 768–793 (1984)
Drèze, J.H.: Existence of an exchange equilibrium under price rigidities. Int. Econ. Rev. 16(2), 301–320 (1975). doi:10.2307/2525813
Duffie, D., Geanakoplos, J., Mas-Colell, A., McLennan, A.: Stationary Markov equilibria. Econometrica 62(4), 745–781 (1994). doi:10.2307/2951731
Foley, D.K.: A statistical equilibrium theory of markets. J. Econ. Theory 62(2), 321–345 (1994). doi:10.1006/jeth.1994.1018
Foley, D.K.: Statistical equilibrium in a simple labor market. Metroeconomica 47(2), 125–147 (1996). doi:10.1111/j.1467-999X.1996.tb00792.x
Foley, D.K.: Statistical equilibrium in economics: method, interpretation, and an example. In: Petri, F., Hahn, F. (eds.) General Equilibrium: Problems and Prospects, Chapter 4. Routledge, London and New York (2003)
Geanakoplos, J.: Nash and Walras equilibrium via Brouwer. Econ. Theory 21(2–3), 585–603 (2003). doi:10.1007/s001990000076
Herings, P.J.-J.: Static and Dynamic Aspects of General Disequilibrium Theory, Volume 13 of Theory and Decision Library C: Game Theory, Mathematical Programming and Operations Research. Kluwer Academic Publishers, Dordrecht (1996)
Jaynes, E.T.: Information theory and statistical mechanics. Phys. Rev. 106(4), 620–630 (1957). doi:10.1103/PhysRev.106.620
Jaynes, E.T.: Where do we stand on maximum entropy. In: Levine, R.D., Tribus, M. (eds.) The Maximum Entropy Formalism, Chapter 1, pp. 15–118. MIT Press, Cambridge (1978)
Jaynes, E.T.: Probability Theory: The Logic of Science (Bretthorst, G.L., ed.). Cambridge University Press, Cambridge (2003)
Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk. Econometrica 47(2), 263–292 (1979). doi:10.2307/1914185
Knuth, K.H., Skilling, J.: Foundations of inference. Axioms 1(1), 38–73 (2012). doi:10.3390/axioms1010038
Krebs, T.: Statistical equilibrium in one-step forward looking economic models. J. Econ. Theory 73(2), 365–394 (1997). doi:10.1006/jeth.1996.2231
Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Stat. 22(1), 79–86 (1951)
Ljungqvist, L., Sargent, T.J.: Recursive Macroeconomic Theory, 2nd edn. MIT Press, Cambridge (2004)
Luenberger, D.G.: Optimization by Vector Space Methods. Wiley, New York (1969)
McCall, J.J.: Economics of information and job search. Q. J. Econ. 84(1), 113–126 (1970)
McKelvey, R.D., Palfrey, T.R.: Quantal response equilibria for normal form games. Games Econ. Behav. 10(1), 6–38 (1995). doi:10.1006/game.1995.1023
Serfozo, R.: Convergence of Lebesgue integrals with varying measures. Sankhyā: Indian J. Stat. Ser. A 44(3), 380–402 (1982)
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 623–656 (1948)
Shore, J.E., Johnson, R.W.: Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy. IEEE Trans. Inf. Theory 26(1), 26–37 (1980). doi:10.1109/TIT.1980.1056144
Simon, H.A.: Theories of decision-making in economics and behavioral science. Am. Econ. Rev. 49(3), 253–283 (1959)
Sims, C.A.: Implications of rational inattention. J. Monet. Econ. 50(3), 665–690 (2003). doi:10.1016/S0304-3932(03)00029-1
Toda, A.A.: Existence of a statistical equilibrium for an economy with endogenous offer sets. Econ. Theory 45(3), 379–415 (2010). doi:10.1007/s00199-009-0493-6
Uhlig, H.: A law of large numbers for large economies. Econ. Theory 8(1), 41–50 (1996). doi:10.1007/BF01212011
Van Campenhout, J.M., Cover, T.M.: Maximum entropy and conditional probability. IEEE Trans. Inf. Theory 27(4), 483–489 (1981). doi:10.1109/TIT.1981.1056374
Veldkamp, L.L.: Information Choice in Macroeconomics and Finance. Princeton University Press, Princeton (2011)
Acknowledgments
This paper was first drafted in 2009 and has circulated under various titles. I thank Sylvain Barde, Andrew Barron, Truman Bewley, Gaetano Bloise, Donald Brown, Alessandro Citanna, Paul Feldman, Duncan Foley, Xavier Gabaix, John Geanakoplos, Sander Heinsalu, Johannes Hörner, Kazuya Kamiya, Mishael Milaković, Kohta Mori, Herakles Polemarchakis, Larry Samuelson, Paolo Siconolfi, and seminar participants at Yale and ESHIA 2010 for comments and feedback. I am grateful to two anonymous referees for comments and suggestions that significantly improved the paper. Financial support from the Cowles Foundation, the Nakajima Foundation, and Yale University is gratefully acknowledged.
Appendices
Appendix 1: Bayesian inference and maximum entropy
Given a multinomial distribution \(\mathbf {p}=(p_1,\dots ,p_K)\), where \(p_k\ge 0\) and \(\sum \nolimits _{k=1}^Kp_k=1\), Shannon (1948) defined its entropy by
Jaynes (1957) proposed that when we want to assign probabilities \(\mathbf {p}=(p_1,\dots ,p_K)\) given some background information (such as moment constraints), we should maximize the Shannon entropy (7.1) subject to the constraints imposed by the background information. This is the original maximum entropy principle (MaxEnt). A prototypical example of such an inference problem is the die problem in Jaynes (1978), which dates back to a 1962 lecture:
Suppose a die has been tossed \(N\) times and we are told only that the average number of spots up was not 3.5 as one might expect for an “honest” die but 4.5. Given this information, and nothing else, what probability should we assign to \(i\) spots in the next toss?
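The die problem can be solved numerically: the maximum entropy distribution on the faces subject to a mean constraint is exponential in the number of spots, \(p_i\propto \mathrm{e}^{\lambda i}\), with \(\lambda \) pinned down by the constraint. A minimal Python sketch (the function name, bracket, and tolerance are my own choices):

```python
import math

def maxent_die(target_mean, faces=range(1, 7), tol=1e-12):
    """Maximum-entropy distribution on the die faces subject to a mean constraint.
    The solution is exponential, p_i proportional to exp(lam * i);
    solve for lam by bisection, since the mean is increasing in lam."""
    def mean(lam):
        w = [math.exp(lam * i) for i in faces]
        return sum(i * wi for i, wi in zip(faces, w)) / sum(w)
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * i) for i in faces]
    Z = sum(w)
    return [wi / Z for wi in w]

p = maxent_die(4.5)
print([round(pi, 4) for pi in p])  # weights increase toward face 6, since lam > 0
```
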
One drawback of the Shannon entropy is that it is not clear how to define it for distributions on a continuous space (say, Euclidean space). To circumvent this difficulty, Kullback and Leibler (1951) introduced the information measure
where \(q(x)\) is the “prior” and \(p(x)\) is the “posterior” density. Here we are implicitly using the Lebesgue measure \(\mathrm {d}x\) and density functions with respect to the Lebesgue measure, but this need not be the case. A major advantage of the Kullback–Leibler information is that it is invariant to the choice of the reference measure: if the measures \(P,Q,\mu _1,\mu _2\) are mutually absolutely continuous and \(p_i=\mathrm {d}P/\mathrm {d}\mu _i, q_i=\mathrm {d}Q/\mathrm {d}\mu _i\) are the Radon–Nikodym derivatives (which correspond to density functions of probability distributions), then
so the choice of \(\mu _1,\mu _2\) is irrelevant. Thus, given any “prior” measure \(Q\) and “posterior” measure \(P\), we can define the Kullback–Leibler informationFootnote 7 of \(P\) with respect to \(Q\) by
where \(\mu \) is any reference measure and \(p=\mathrm {d}P/\mathrm {d}\mu \), \(q=\mathrm {d}Q/\mathrm {d}\mu \) are Radon–Nikodym derivatives (density functions).
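The invariance to the reference measure can be checked directly in the discrete case, where an integral against \(\mu \) is a weighted sum. A short Python sketch (all measures are made-up point masses on three states):

```python
import math

P = [0.2, 0.5, 0.3]   # "posterior" probability measure
Q = [0.4, 0.4, 0.2]   # "prior" probability measure

def kl_with_reference(P, Q, mu):
    """Integral of p log(p/q) dmu with p = dP/dmu, q = dQ/dmu (discrete case)."""
    total = 0.0
    for Pk, Qk, mk in zip(P, Q, mu):
        p, q = Pk / mk, Qk / mk   # Radon-Nikodym derivatives w.r.t. mu
        total += p * math.log(p / q) * mk
    return total

mu1 = [1.0, 1.0, 1.0]   # counting measure as reference
mu2 = [0.3, 0.5, 0.2]   # a different reference measure
assert abs(kl_with_reference(P, Q, mu1) - kl_with_reference(P, Q, mu2)) < 1e-9
print(kl_with_reference(P, Q, mu1))
```

The reference weights cancel algebraically, leaving \(\sum _k P_k\log (P_k/Q_k)\) in either case, which is what the invariance asserts.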
The Shannon entropy (7.1) corresponds to the Kullback–Leibler information (7.2) with respect to the uniform prior on a discrete set, modulo the sign and an additive constant. Thus, the maximum entropy principle of Jaynes (1957) can be generalized to what I refer to as the minimum information principle, which prescribes minimizing the Kullback–Leibler information (7.2) subject to the given constraints. Axiomatizations of the minimum information principle have been obtained by Shore and Johnson (1980), Caticha and Giffin (2006), and Knuth and Skilling (2012).
Returning to Bayesian inference, Van Campenhout and Cover (1981) showed that Bayes’s theorem implies the minimum information principle in the following sense: the conditional distribution of a random variable \(X_n\) given the empirical observation
where \(X_n\)’s are i.i.d. with prior density \(g\), converges to \(f_\lambda (x)=\mathrm {e}^{\lambda 'T(x)}g(x)\) (suitably normalized) as \(N\rightarrow \infty \), where \(\lambda \) is chosen to satisfy the population moment constraint
This \(f_\lambda (x)\) turns out to be the solution to
i.e., the minimum information problem, and \(\lambda \) is the corresponding Lagrange multiplier.Footnote 8 Although Van Campenhout and Cover (1981) proved the statement only for the case in which \(T\) is a real-valued function, Csiszár (1984) showed that the same conclusion holds even if \(T\) is vector-valued and the sample moment constraints \(\frac{1}{N}\sum \nolimits _{n=1}^NT(X_n)=\bar{T}\) are replaced by the condition that the sample moments belong to a specified convex set, in particular allowing inequality constraints. Thus, computing the posterior distribution (in the Bayesian sense) reduces to solving the minimum Kullback–Leibler information problem, at least in the large sample limit.
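The exponential tilting \(f_\lambda (x)=\mathrm {e}^{\lambda 'T(x)}g(x)\) and its minimality can be checked numerically. The following Python sketch (a discrete analogue with \(T(x)=x\), an illustrative nonuniform prior \(g\) on six states, and a made-up comparison distribution) solves the moment condition for \(\lambda \) by bisection and verifies that the tilted density carries less Kullback–Leibler information than another feasible distribution with the same mean:

```python
import math

faces = range(1, 7)
g = [0.1, 0.1, 0.2, 0.2, 0.2, 0.2]   # illustrative nonuniform "prior"
T_bar = 4.5                           # moment constraint: mean equals 4.5

def tilt(lam):
    """Exponentially tilted prior, normalized: f_lam(i) = exp(lam*i) g(i) / Z."""
    w = [math.exp(lam * i) * gi for i, gi in zip(faces, g)]
    Z = sum(w)
    return [wi / Z for wi in w]

def mean(p):
    return sum(i * pi for i, pi in zip(faces, p))

# Solve mean(tilt(lam)) = T_bar by bisection (the mean is increasing in lam).
lo, hi = -10.0, 10.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean(tilt(mid)) < T_bar else (lo, mid)
f_lam = tilt((lo + hi) / 2)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Any other distribution satisfying the same moment constraint
# carries at least as much information relative to the prior.
other = [0.0, 0.0, 0.0, 0.5, 0.5, 0.0]   # mean is also 4.5
assert abs(mean(f_lam) - T_bar) < 1e-9
assert kl(f_lam, g) <= kl(other, g)
print(kl(f_lam, g), kl(other, g))
```
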
Appendix 2: Proof of equilibrium existence
Proof of Step 1
That \(H^*(\xi ;p,x)\) is \(C^1\) in \(\xi \) and is differentiable under the integral sign follows from Assumption 1 and Lebesgue’s dominated convergence theorem. I only show that \(H^*(\xi ;p,x)\) is continuous in \((p,x,\xi )\) since the case for its first derivatives is similar. Let \((p_n,x_n,\xi _n)\rightarrow (p,x,\xi )\) as \(n\rightarrow \infty \), \(f_n(y)=\mathrm {e}^{-\xi _n'y}\), and \(f(y)=\mathrm {e}^{-\xi 'y}\) for \(y\in \mathbb {R}_+^C\). Then for any sequence such that \(y_n\rightarrow y\), we have \(f_n(y_n)\rightarrow f(y)\). (This property is referred to as “\(f_n\) continuously converges to \(f\).”) Since \(f_n\le 1\), the sequence \(\left\{ f_n\right\} \) is uniformly \(\mu _i(p_n,x_n)\)-integrable, i.e.,
(Just take \(\alpha \ge 1\).) Therefore, by Theorem 9.2, we have
Hence, \(\int \mathrm {e}^{-\xi 'y}\mu _i(\mathrm {d}y;p,x)\) is continuous in \((p,x,\xi )\), and so is \(H^*(\xi ;p,x)\). \(\square \)
Proof of Proposition 3.2
\(\fancyscript{E}\) has a Bayesian general equilibrium because (3.14) is stronger than Assumption 2. Suppose that \(\fancyscript{E}\) has a non-degenerate equilibrium \((p,x,(f_i))\). Then
By market clearing and the nature of Lagrange multipliers, we have
and \(p\cdot \sum _{i=1}^In_i(x_i-e_i)=0\). Then for any \(y=(y_i)\) with \(y_i\in X_i(p,x)\), by (3.14) we get
so \(p\cdot y\ge p\cdot x_i\) for all \(i\) and \(y\in X_i(p,x)\). Hence, by Definition 2.4, \((p,x)\) is a degenerate equilibrium.
Let \(\fancyscript{E}^n=\left\{ I,\left\{ n_i\right\} ,\left\{ e_i\right\} ,\left\{ \mu _i^n\right\} \right\} \) be an economy such that
for any Borel set \(B\); that is, the prior \(\mu _i^n(p,x)\) is obtained by shrinking the support of \(\mu _i(p,x)\) by a factor of \(1-1/n\) toward the origin. Then the offer set is \(X_i^n(p,x)=\mathrm{supp }\mu _i^n(p,x)=(1-1/n)X_i(p,x)\), so
Hence, by (3.14) we get
By Theorem 3.2, the economy \(\fancyscript{E}^n\) has a non-degenerate equilibrium \((p^n,(x_i^n),(f_i^n))\), where
Then by market clearing and the nature of Lagrange multipliers, after some algebra we obtain
where \(\bar{e}=\sum _{i=1}^I n_ie_i\) is the aggregate endowment. Since \(x_i^n\in \mathrm{cl }\mathrm{co }X_i(p^n,x^n)\subset X\subset \mathbb {R}_+^C\) and \(\left\{ x_i^n\right\} \) is bounded (by market clearing), by taking a subsequence if necessary, we may assume that \(x_i^n\) converges to some \(x_i\) as \(n\rightarrow \infty \). Since \(\varDelta ^{C-1}\) is compact, we may also assume \(p^n\rightarrow p\). Letting \(n\rightarrow \infty \) in (8.1), we get \(\sum _{i=1}^In_i(x_i-e_i)\le 0\) and \(p\cdot \sum _{i=1}^In_i(x_i-e_i)=0\). By Assumption 4, we have \(x_i\in \mathrm{cl }\mathrm{co }X_i(p,x)\), where \(x=(x_i)\). By the same argument as above, \((p,x)\) is a degenerate equilibrium. \(\square \)
Appendix 3: Mathematical results
Lemma 9.1
(Chebyshev’s inequality) If \(f,g:\mathbb {R}\rightarrow \mathbb {R}\) are increasing (decreasing) functions and \(X\) is a random variable, then
Proof
Let \(X'\) be an i.i.d. copy of \(X\). Since \(f,g\) are monotone, we have
Taking expectations of both sides, noting that \(X,X'\) are i.i.d., and rearranging terms, we obtain
\(\square \)
Clearly if one of \(f,g\) is increasing and the other is decreasing, the reverse inequality holds.
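The symmetrization argument in the proof can be checked numerically. A short Python sketch (with an arbitrary discrete distribution and monotone test functions of my own choosing) verifies both the inequality for two increasing functions and its reversal when one function is decreasing:

```python
import math

# Discrete random variable X: values and probabilities (illustrative).
xs = [0.0, 1.0, 2.0, 3.0]
ps = [0.1, 0.4, 0.3, 0.2]

f = lambda x: x ** 2         # increasing on the (nonnegative) support
g = lambda x: math.log1p(x)  # increasing

E = lambda h: sum(p * h(x) for x, p in zip(xs, ps))

# Chebyshev's inequality: E[f(X)g(X)] >= E[f(X)] E[g(X)] for increasing f, g.
assert E(lambda x: f(x) * g(x)) >= E(f) * E(g)

# Reverse inequality when one function is decreasing.
h = lambda x: -x
assert E(lambda x: f(x) * h(x)) <= E(f) * E(h)
print(E(lambda x: f(x) * g(x)) - E(f) * E(g))  # nonnegative gap
```
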
Theorem 9.1
(Generalized Kuhn–Tucker) Let \(X\) be a linear vector space, \(Z_1,Z_2\) normed spaces, \(\Omega \) a convex subset of \(X\), and \(P\) the positive cone in \(Z_1\). Assume that \(P\) contains an interior point.
Let \(f\) be a real-valued convex functional on \(\Omega \), \(G_1:\Omega \rightarrow Z_1\) a convex mapping, and \(G_2:X\rightarrow Z_2\) an affine mapping. Assume the existence of a point \(x_1\in \Omega \) for which \(G_1(x_1)<0\) (i.e., \(G_1(x_1 )\) is an interior point of \(N=-P\)) and \(G_2(x_1)=0\), and that 0 is an interior point of \(G_2(\Omega )\). Let
and assume \(\mu _0\) is finite. Then there exist \(z_1^*\ge 0\) in \(Z_1^*\) and \(z_2^*\in Z_2^*\) such that
Furthermore, if the infimum is achieved in (9.1) by \(x_0\in \Omega \), it is achieved by \(x_0\) in (9.2) and \(\left\langle G_2(x_0),z_2^*\right\rangle =0\).
Proof
Similar to Luenberger (1969), Theorem 1, p. 217. \(\square \)
Proposition 9.1
Let \((X,\fancyscript{B},\mu )\) be a measure space, where \(X\) is a topological space, \(\fancyscript{B}\) is the Borel \(\sigma \)-algebra, and \(\mu (X)>0\). Let \(T:X\rightarrow \mathbb {R}^C\) be measurable. Then,
is convex and lower semi-continuous on \(\mathrm{dom }f\).Footnote 9 Furthermore, \(f\) is strictly convex if \(\dim T(\mathrm{supp }\mu )=C\).Footnote 10
Proof
See Proposition B.4 in Toda (2010). \(\square \)
Proposition 9.2
Let \((X,\fancyscript{B},\mu )\) be as in Proposition 9.1 and \(\phi :X\rightarrow \mathbb {R}\) be measurable. If \(\int \mathrm {e}^{t\phi (x)}\mu (\mathrm {d}x)<\infty \) for some \(t>0\), then
If \(\phi \) is upper semi-continuous, then (9.3) is equal to \(\sup \left\{ \phi (x)|x\in \mathrm{supp }\mu \right\} \).
Proof
Let \(E_n=\left\{ x\in X|\phi (x)\ge -n\right\} \). If \(\mu (E_n)=0\) for all \(n\), then
which contradicts \(\mathrm{supp }\mu \ne \emptyset \). Therefore, \(\mu (E_n)>0\) for some \(n\). Letting
we have \(v>-\infty \).
Let us first prove (9.3) when \(v<\infty \). Define
Since \(X_+=\bigcup _{n=1}^\infty X_n\) and \(\mu (X_n)=0\) by the definition of \(v\), we have \(\mu (X_+)=0\). Obviously, \(X_\pm \) are disjoint and \(X_+\cup X_-=X\), so \(\mu (X_-)=\mu (X)>0\). Fix \(t_0>0\) such that \(\int \mathrm {e}^{t_0\phi (x)}\mathrm {d}\mu <\infty \). Then, for all \(t>0\) we obtain
Denote the integral over \(X_-\) in (9.4) by \(I(t)\). Since \(\phi (x)\le v\) for \(x\in X_-\), for each \(x\in X_-\) the integrand \(\mathrm {e}^{t(\phi (x)-v)}\) is decreasing in \(t\), so \(I(t)\) is decreasing in \(t\). (In particular, \(0<I(t)<\infty \) for \(t\ge t_0\).) Hence, for \(t\ge t_0\) we obtain
Letting \(t\rightarrow \infty \) in (9.5), we obtain
To show the reverse inequality, take any \(\epsilon >0\) and let
By assumption and the definition of \(X_\pm \), we have
By taking a compact subset of \(A\) if necessary, we may assume \(0<\mu (A)<\infty \) since \(\mu \) is regular. Therefore, we obtain
Letting \(t\rightarrow \infty \) in (9.7) and then \(\epsilon \rightarrow 0\), we obtain
(9.3) follows by (9.6) and (9.8).
If \(v=\infty \), let \(F_n=\left\{ x\in X|\phi (x)\ge n\right\} \). By the definition of \(v\), we have \(\mu (F_n)>0\). Then we obtain the same result as (9.7) with \(A\) replaced by \(F_n\) and \(v-\epsilon \) replaced by \(n\). Letting \(n\rightarrow \infty \) we get (9.3).
Finally, let us show that \(\mathrm{ess }\sup \phi =\sup \left\{ \phi (x)|x\in \mathrm{supp }\mu \right\} \) if \(\phi \) is upper semi-continuous. Let \(u=\sup \left\{ \phi (x)|x\in \mathrm{supp }\mu \right\} \). If \(u<\infty \), for all \(\epsilon >0\) there exists an \(x_0\in \mathrm{supp }\mu \) such that \(u-\epsilon <\phi (x_0)\). Since \(\phi \) is upper semi-continuous, there exists an open neighborhood \(U\) of \(x_0\) such that \(x\in U\) implies \(\phi (x)>u-\epsilon \). Since \(\mu (U)>0\) (because \(x_0\in \mathrm{supp }\mu \)), it follows that \(v\ge u-\epsilon \). Since \(\epsilon >0\) is arbitrary, we obtain \(v\ge u\). Similar reasoning applies to the case \(u=\infty \).
To show the reverse inequality, take any \(\epsilon >0\). By the definition of \(v\), we have \(\mu (\left\{ x\in X|\phi (x)\ge v-\epsilon \right\} )>0\). In particular, there exists an \(x_0\in \mathrm{supp }\mu \) such that \(\phi (x_0)\ge v-\epsilon \). Therefore,
Since \(\epsilon >0\) is arbitrary, we obtain \(u\ge v\). Therefore, \(u=v\). \(\square \)
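The conclusion of Proposition 9.2 can be illustrated numerically: as \(t\rightarrow \infty \), the scaled log integral converges to the essential supremum of \(\phi \). A Python sketch (with made-up values of \(\phi \) on a four-point measure; the maximum is subtracted before exponentiating to avoid overflow for large \(t\)):

```python
import math

# Finite measure mu on four points, with illustrative values of phi.
phi = [0.3, 1.7, -2.0, 1.2]
mu = [0.25, 0.25, 0.25, 0.25]

def scaled_log_integral(t):
    """(1/t) log integral of exp(t*phi) dmu, computed in a log-sum-exp
    fashion so that large t does not overflow the exponential."""
    M = max(phi)
    s = sum(m * math.exp(t * (v - M)) for v, m in zip(phi, mu))
    return M + math.log(s) / t

ess_sup = max(phi)  # every point has positive mass, so ess sup = max
for t in [1, 10, 100, 1000]:
    print(t, scaled_log_integral(t))
assert abs(scaled_log_integral(1000) - ess_sup) < 1e-2
```
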
Finally, I need a convergence theorem of Lebesgue integrals with varying measures. Let \(X\) be a locally compact second countable Hausdorff space (e.g., Euclidean space) with Borel \(\sigma \)-algebra \(\fancyscript{B}\). We say that \(f_n\) continuously converges to \(f\), denoted by \(f_n\rightarrow _c f\), if \(f_n(x_n)\rightarrow f(x)\) whenever \(x_n\rightarrow x\).Footnote 11
Theorem 9.2
Let \(\mu , \left\{ \mu _n\right\} \) be finite Borel measures on \(X\). Suppose that \(f_n\ge 0, f_n\rightarrow _c f, \mu _n\rightarrow \mu \) weakly, and \(\int f_n\mathrm {d}\mu _n<\infty \) for all \(n\). Then
if and only if \(\left\{ f_n\right\} \) is uniformly \(\left\{ \mu _n\right\} \)-integrable, i.e.,
Proof
See Serfozo (1982), Theorem 3.5. \(\square \)
Toda, A.A. Bayesian general equilibrium. Econ Theory 58, 375–411 (2015). https://doi.org/10.1007/s00199-014-0849-4