1 Introduction

Kinetic wealth exchange models (KWEMs) constitute a popular class of econophysical models in which agents exchange their wealth according to stochastic rules, always preserving the total amount of wealth in the economy. The aim is to understand important properties of the dynamics of wealth distribution, such as wealth concentration, stationary distributions and time-dependent correlation functions. For a recent review of KWEMs, we refer to [3]. The apparently economically strong assumption of wealth conservation, which also rules out the possibility of (endogenous) growth, is justifiable by choosing the appropriate time scale (or time unit) for the economy. An interesting feature of KWEMs is their similarity with another family of models, known as (generalized) KMP processes [1]. Introduced in [7], KMP models are microscopic models of heat conduction, meant to provide a microscopic foundation of the Fourier law; in those models the exchanged quantity represents energy. As shown in [1], duality is a powerful tool to study the properties of these KMP models. Thanks to duality it is possible to investigate invariant measures, ergodic results, and important macroscopic properties such as hydrodynamic limits, the propagation of local equilibrium, and the local equilibrium of boundary-driven non-equilibrium states.

In [4], the authors show that duality can also be fruitfully applied to kinetic wealth exchange models, obtaining relevant information about the stationary distributions of a model with saving propensities.

In this paper we aim to extend the use of duality techniques in the field of KWEMs, by focusing our attention on a recent model, the so-called “Immediate Exchange Model”. The model was first proposed in [5], where it is studied via simulations, and it has later been analytically explored in [6]. In that model, upon exchange, each agent gives a fraction of his/her wealth to the other. In [6] it is proved that, if this fraction is a uniformly distributed random variable with support [0, 1], then the exchange process has a product invariant measure, which is a product of Gamma(2) distributions. It is worth noticing that an invariant measure which is a product of Gammas also occurs in the redistribution models presented in [1]. In those models duality is characterized by duality polynomials that are naturally associated with the Gamma distribution, and it is shown that these polynomials are also the duality functions linking a discrete particle system, the symmetric inclusion process SIP(k), with a diffusion process, the Brownian energy process BEP(k). It is therefore natural to conjecture that these polynomials also occur as duality functions in the Immediate Exchange Model of [5], relating this model to a simpler discrete dual model. In this paper we show that this is indeed the case, and we generalize the Immediate Exchange Model to the case in which the random fraction of wealth the agents exchange is Beta(s, t) distributed. In this more general setting, the invariant measure turns out to be a product of Gamma(s+t) distributions. As in [4], using duality we are able to directly infer basic properties of the time-dependent expected wealth, together with an ergodic result.

The rest of our paper is organized as follows: in Sect. 2 we describe the Immediate Exchange Model when the economy is made up of just two agents and prove duality with a discrete two-agent model. In Sect. 3 we extend the model to the case of many agents and give some relevant consequences of duality. In Sect. 4 a further generalization is proposed, by assuming Beta(s, t)-distributed exchanged fractions of wealth; also for this generalized model we obtain duality with a discrete model and stationary product measures which are Gamma with shape parameter \(s+t\). In Sect. 5 we study various properties of the discrete dual process, which is an interesting model in itself. We characterize its reversible product measures and prove that in an appropriate scaling limit it converges to a simple variation of the original continuum model. Finally, in Sect. 6 we show self-duality of the discrete model for the general case via a Lie algebraic approach, where we actually obtain the full SU(1, 1) symmetry of the discrete model and, as a further consequence, of the continuous model too. Self-duality then follows by acting with an appropriate symmetry on the so-called cheap duality function obtained from the reversible product measure [2].

2 The Immediate Exchange Model with Two Agents and Its Dual

2.1 Definition of the Model

We start by considering a toy economy with just two agents, as given in [5] and [6]. More complex models can be built by adding two-agent generators along the edges of a graph. Most properties, such as duality and self-duality, transfer immediately from the two-agent model to the many-agent models. We will define the processes in terms of their infinitesimal generators, and refer to [10, 11] for general background on Markov processes, generators, ergodicity and duality. More formally, we write \((x,y)\in \Omega \), with \(\Omega =[0,\infty )^2\). With \(s=x+y\) we indicate the total wealth in the economy. The dynamics of the two agents is then described as follows: starting from an initial state \((X_0,Y_0)=(x,y)\), after an exponential waiting time (with mean one) an exchange of wealth occurs, whereby the wealth configuration \((x,y)\) is updated to \((x',y')\), with

$$\begin{aligned} x'= & {} x(1-U)+yV \nonumber \\ y'= & {} y(1-V)+xU, \end{aligned}$$
(1)

where U and V are two i.i.d. Uniform(0, 1) random variables. This gives a continuous-time Markov jump process \((X_t,Y_t)\) for which the total wealth \(X_t+Y_t= X_0+Y_0=x+y\) is conserved.
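As an illustration, the two-agent dynamics can be simulated directly from the update rule (1). The following sketch (function name and parameter values are our own illustrative choices) runs the jump process and exhibits the conservation of total wealth.

```python
import random

def simulate_two_agents(x, y, t_max, rng):
    """Run the two-agent Immediate Exchange Model up to time t_max.

    Jumps occur after Exp(1) waiting times; at each jump the update rule (1)
    is applied with U, V i.i.d. Uniform(0, 1).
    """
    t = 0.0
    while True:
        t += rng.expovariate(1.0)          # exponential waiting time with mean one
        if t > t_max:
            return x, y
        u, v = rng.random(), rng.random()  # U, V ~ Uniform(0, 1)
        x, y = x * (1 - u) + y * v, y * (1 - v) + x * u

rng = random.Random(0)
x0, y0 = 3.0, 1.0
x, y = simulate_two_agents(x0, y0, t_max=50.0, rng=rng)
# the total wealth x + y stays equal to x0 + y0 along the trajectory
```

Note that each update keeps both components nonnegative, since \(x'\) and \(y'\) are sums of nonnegative terms.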

The infinitesimal generator of this exchange process is defined on bounded continuous functions f via

$$\begin{aligned} L f(x,y)= & {} \lim _{t\rightarrow 0} \frac{\mathbb E_{x,y} f(X_t,Y_t) - f(x,y)}{t}\nonumber \\= & {} \int _0^1\int _0^1 \left( f(x(1-u)+yv,y(1-v)+xu)-f(x,y)\right) \text {d}u\text {d}v. \end{aligned}$$
(2)

Notice that L can be rewritten as \(P-I\), where P is the discrete-time Markov transition operator

$$\begin{aligned} Pf(x,y)= \int _0^1\int _0^1 f(x(1-u)+yv,y(1-v)+xu) \text {d}u\text {d}v, \end{aligned}$$

and I is the identity.

We denote \((X_0,Y_0)=(x,y)\) to be the initial wealth configuration of the two agents, and \((X_t,Y_t)\) indicates the wealth of the two agents at time \(t\ge 0\).

2.2 Duality for the Two-Agent Model

We first define a discrete wealth distribution model, i.e., one in which wealth can only be a nonnegative integer quantity (see Fig. 1 for the continuous model and its discrete dual). This model will be related to the original one via a duality relation.

Fig. 1 The continuous model and its discrete dual

In the discrete model the couple \((x,y)\in \Omega \) is replaced by a couple \((n,m)\in \mathbb N^2\), where \(\mathbb N\) denotes the set of non-negative integers (including zero).

On this couple we define a continuous-time Markov process with generator

$$\begin{aligned} {\mathcal L}f(n,m) = \sum _{k=0}^n\sum _{l=0}^m \frac{1}{n+1}\frac{1}{m+1}(f(n-k+l, m-l+k)-f(n,m)). \end{aligned}$$
(3)

In this process, when initiated at \((n,m)\), for a given pair \((k,l)\) with \(0\le k\le n, 0\le l\le m\), the wealth configuration changes from \((n,m)\) to \((n-k+l,m-l+k)\) at rate \(\frac{1}{(n+1)(m+1)}\). We denote this discrete state space continuous-time Markov process by \((N_t,M_t)\), with \((N_0,M_0)=(n,m)\). It follows from an easy detailed balance computation that for \(0<\theta <1\) the product of discrete Gamma(2) measures given by

$$\begin{aligned} \nu _\theta (k,l)=(1-\theta )^4\left( \theta ^k (k+1)\theta ^l (l+1)\right) , \ k,l\in \mathbb N\end{aligned}$$
(4)

is reversible for the process with generator (3) (cf. also Proposition 5.1 below for a more general case).
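The detailed balance computation behind (4) can also be verified numerically: for each jump the balance relation holds channel by channel, since both sides reduce to \(\theta^{n+m}\). A short sketch (all names and the value of \(\theta\) are ours):

```python
import math

theta = 0.4

def nu(k):
    """Unnormalized weight of the discrete Gamma(2) measure (4): theta^k (k + 1)."""
    return theta**k * (k + 1)

# Detailed balance for the jump (n, m) -> (n - k + l, m - l + k), which occurs
# at rate 1 / ((n + 1)(m + 1)), against the reverse jump driven by the pair (l, k).
ok = True
for n in range(7):
    for m in range(7):
        for k in range(n + 1):
            for l in range(m + 1):
                n2, m2 = n - k + l, m - l + k
                lhs = nu(n) * nu(m) / ((n + 1) * (m + 1))
                rhs = nu(n2) * nu(m2) / ((n2 + 1) * (m2 + 1))
                ok = ok and math.isclose(lhs, rhs, rel_tol=1e-12)
```

Both sides indeed equal \(\theta^{n+m}\), because the factors \((n+1)(m+1)\) cancel against the rates.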

We now show that the processes \((X_t,Y_t)\) and \((N_t,M_t)\) are related via duality. To introduce this, we need some further notation.

Define, for \(x\in [0,\infty ), n\in \mathbb N\), the polynomial

$$\begin{aligned} d(n,x)= x^n\frac{\Gamma (2)}{\Gamma (2+n)}= \frac{x^n}{(n+1)!} \end{aligned}$$
(5)

and

$$\begin{aligned} D(n,m;x,y)= d(n,x)d(m,y). \end{aligned}$$
(6)

The \(d(n,\cdot )\) polynomials are naturally associated to the Gamma distribution \(\nu _\theta \) with shape parameter 2 and scale parameter \(\theta \), i.e.

$$\begin{aligned} \nu _\theta (\text {d}x) = \frac{1}{\theta ^2}xe^{-x/\theta } \text {d}x \end{aligned}$$

by

$$\begin{aligned} \int d(n,x) \nu _\theta (\text {d}x) = \theta ^n \end{aligned}$$

for all \(n\in \mathbb N\).
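The moment identity above lends itself to a quick Monte Carlo check; the sketch below (sample size, seed and the value of \(\theta\) are our own choices) estimates \(\int d(n,x)\,\nu_\theta(\text{d}x)\) from Gamma(2, \(\theta\)) samples.

```python
import math
import random

rng = random.Random(1)
theta, samples = 0.5, 200_000

def d(n, x):
    """Duality polynomial (5): x^n / (n + 1)!."""
    return x**n / math.factorial(n + 1)

# Gamma(2, theta) samples; random.gammavariate takes (shape, scale)
xs = [rng.gammavariate(2.0, theta) for _ in range(samples)]
estimates = [sum(d(n, x) for x in xs) / samples for n in range(4)]
# each estimate should be close to theta^n
```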

With a slight abuse of notation, we will denote by \(\nu _\theta (\text {d}x\text {d}y)\) the product measure with marginals \(\nu _\theta \).

We are now ready to state the first main result.

Theorem 2.1

The processes \((X_t,Y_t)\) and \((N_t,M_t)\) are each other's dual, with duality function given by (6).

More precisely, for all \((x,y)\in [0,\infty )^2, (n,m)\in \mathbb N^2\), and for all \(t>0\), we have

$$\begin{aligned} \mathbb E_{x,y} D(n,m; X_t,Y_t)= \widehat{\mathbb E}_{n,m} D(N_t,M_t;x,y), \end{aligned}$$
(7)

where \(\mathbb E_{x,y}\) and \(\widehat{\mathbb E}_{n,m}\) are the expectations in the path-space measures started from \((X_0,Y_0)=(x,y)\) and \((N_0,M_0)=(n,m)\) respectively.

Proof

To prove (7) it is sufficient to show the same relation at the level of the generators. In other words, we have to show that

$$\begin{aligned} L D(n,m;x,y)= {\mathcal L}D(n,m;x,y), \end{aligned}$$
(8)

for all \((x,y)\in [0,\infty )^2\) and \((n,m)\in \mathbb N^2\), where L acts on \((x,y)\) and \({\mathcal L}\) acts on \((n,m)\).

We compute

$$\begin{aligned}&P D(n,m;x,y) \\&=\int _0^1\int _0^1 \frac{1}{(n+1)!(m+1)!}(x(1-u)+yv)^n(y(1-v)+ux)^m\ \text {d}u\text {d}v \\&= \frac{1}{(n+1)!(m+1)!}\sum _{k=0}^n\sum _{l=0}^m {n\atopwithdelims ()k}{m\atopwithdelims ()l} x^{n-k} y^{k} y^{m-l} x^l \int _0^1\int _0^1 (1-u)^{n-k} v^{k} (1-v)^{m-l} u^l\ \text {d}u\text {d}v \\&= \frac{1}{(n+1)!(m+1)!}\sum _{k=0}^n\sum _{l=0}^m {n\atopwithdelims ()k}{m\atopwithdelims ()l} x^{n-k+l}y^{m-l+k} \frac{k!(m-l)!}{(k+m-l+1)!}\frac{l!(n-k)!}{(n-k+l+1)!} \\&= \frac{1}{(n+1)!(m+1)!}\\&\times \left( \sum _{k=0}^n\sum _{l=0}^m \frac{n!}{(n-k)!k!}\frac{m!}{(m-l)!l!}\frac{k!(m-l)!}{(k+m-l+1)!}\frac{l!(n-k)!}{(n-k+l+1)!}x^{n-k+l}y^{m-l+k} \right) \\&= \sum _{k=0}^n\sum _{l=0}^m\frac{1}{(n+1)(m+1)}D(n-k+l,m-l+k;x,y). \end{aligned}$$

Now we have

$$\begin{aligned} \sum _{k=0}^n\sum _{l=0}^m\frac{1}{n+1}\frac{1}{m+1}=1. \end{aligned}$$
(9)

Therefore, we indeed find that

$$\begin{aligned} L D(n,m;x,y)= & {} \sum _{k=0}^n\sum _{l=0}^m\frac{1}{n+1}\frac{1}{m+1} \left( D(n-k+l,m-l+k;x,y) - D(n,m;x,y)\right) \nonumber \\= & {} {\mathcal L}D(n,m;x,y). \end{aligned}$$
(10)

\(\square \)
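The key step of the proof, namely that \(P D(n,m;x,y)\) equals the uniform mixture of dual polynomials, can be checked numerically by quadrature. A minimal sketch (grid size, test point and names are ours; the midpoint rule suffices because the integrand is a polynomial in u and v):

```python
import math

def D(n, m, x, y):
    """Duality polynomial (6)."""
    return x**n / math.factorial(n + 1) * y**m / math.factorial(m + 1)

def PD_quadrature(n, m, x, y, grid=400):
    """Midpoint-rule approximation of P D(n, m; x, y), with P as in (2)."""
    h = 1.0 / grid
    total = 0.0
    for i in range(grid):
        u = (i + 0.5) * h
        for j in range(grid):
            v = (j + 0.5) * h
            total += D(n, m, x * (1 - u) + y * v, y * (1 - v) + x * u)
    return total * h * h

def PD_mixture(n, m, x, y):
    """Uniform mixture over dual jumps, as derived in the proof."""
    return sum(D(n - k + l, m - l + k, x, y)
               for k in range(n + 1) for l in range(m + 1)) / ((n + 1) * (m + 1))

lhs = PD_quadrature(2, 3, 1.5, 0.7)
rhs = PD_mixture(2, 3, 1.5, 0.7)
```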

As a consequence of duality, and thanks to the relation between the duality functions and the measure \(\nu _\theta \), we obtain relevant information about the invariant measures. Let us denote by \({\mathcal P}_f\) the set of probability measures on \([0,\infty )^2\) which have finite moments of all orders and which are uniquely determined by these moments, i.e., two measures in \({\mathcal P}_f\) with identical moments are equal. We say that such a measure satisfies the “finite moments condition”. Similarly, a probability measure on \([0,\infty )\) satisfies the “finite moments condition” if it has finite moments of all orders and is uniquely determined by them. This is, e.g., assured by Carleman's moment growth condition. From now on we will focus only on probability measures in this set \({\mathcal P}_f\).

Theorem 2.2

A probability measure \(\nu \in {\mathcal P}_f\) is invariant if and only if its D-transform

$$\begin{aligned} \hat{\nu } (n,m)= \int D(n,m;x,y) \nu (\text {d}x \text {d}y) \end{aligned}$$

is harmonic for the dual process, i.e., if and only if

$$\begin{aligned} \widehat{\mathbb E}_{n,m}\hat{\nu } (N_t,M_t)= \hat{\nu } (n,m). \end{aligned}$$

for all \(n,m\in \mathbb N\). In particular the product measures \(\nu _\theta (\text {d}x \text {d}y)\) are invariant for the process \((X_t,Y_t)\).

Proof

To have invariance of \(\nu \in {\mathcal P}_f\), it is sufficient to have, for all \((n,m)\in \mathbb N^2\)

$$\begin{aligned} \int \mathbb E_{x,y} D(n,m;X_t,Y_t) \nu (\text {d}x\text {d}y)= \int D(n,m;x,y) \nu (\text {d}x\text {d}y) = \hat{\nu } (n,m). \end{aligned}$$
(11)

Combining this with duality and Fubini’s theorem we obtain

$$\begin{aligned} \hat{\nu } (n,m)= & {} \int \mathbb E_{x,y} D(n,m;X_t,Y_t) \nu (\text {d}x\text {d}y) \\= & {} \int \widehat{\mathbb E}_{n,m} D(N_t,M_t;x,y) \nu (\text {d}x\text {d}y)= \widehat{\mathbb E}_{n,m} \hat{\nu } (N_t,M_t). \end{aligned}$$

As a result, we find that \(\nu \) is invariant if and only if

$$\begin{aligned} \widehat{\mathbb E}_{n,m}\hat{\nu } (N_t,M_t)= \hat{\nu } (n,m). \end{aligned}$$

To show the invariance of the \(\nu _\theta \) measures, just notice that

$$\begin{aligned} \hat{\nu _\theta } (n,m)= \theta ^{n+m}, \end{aligned}$$

and recall that in the process \((N_t,M_t)\) the sum \(N_t+M_t\) is conserved. \(\square \)

Another consequence of duality is the ergodicity of the process \((X_t,Y_t)\): starting from any initial condition \((x,y)\), the process converges to a unique stationary distribution determined by the conserved sum \(x+y\). Indeed, the dual process starting from \((n,m)\) is an irreducible continuous-time Markov chain on the finite set \(\Sigma _{n+m}:=\{(k,l)\in \mathbb N^2: k+l=n+m\}\) and therefore converges to a unique stationary distribution on the set \(\Sigma _{n+m}\), denoted by \(\nu _{n+m}\), and given by

$$\begin{aligned} \nu _{n+m}(k,l)= \frac{(k+1)(l+1)}{{\mathcal Z}_{n+m}},\ (k,l)\in \Sigma _{n+m} \end{aligned}$$
(12)

where

$$\begin{aligned} {\mathcal Z}_{n+m}=\sum _{k,l: k+l=n+m} (k+1)(l+1). \end{aligned}$$
(13)

This follows from the reversibility (for the dual process) of the product measure given in (4), and the fact that conditioning this product measure on the sum \(k+l\) being equal to \(n+m\) gives exactly the “micro-canonical” measure (12).

For all \((n,m)\in \mathbb N^2\) we can therefore obtain

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb E_{x,y} D(n,m;X_t,Y_t)= & {} \lim _{t\rightarrow \infty }\widehat{\mathbb E}_{n,m}( D(N_t,M_t;x,y)) \nonumber \\= & {} \sum _{k,l: k+l=n+m} D(k,l;x,y)\nu _{n+m}(k,l) \end{aligned}$$
(14)

It then follows from an easy computation using (12) that

$$\begin{aligned} \sum _{k,l: k+l=n+m} D(k,l;x,y)\nu _{n+m}(k,l) = \frac{(x+y)^{n+m}}{(n+m)!{\mathcal Z}_{n+m}}. \end{aligned}$$
(15)

where \({\mathcal Z}_{n+m}\) is given by (13); i.e., the r.h.s. of (14) depends only on \(x+y\). On the other hand, in the process \((X_t,Y_t)\) the sum \(X_t+Y_t\) is conserved. Therefore, the conditional measure obtained by conditioning the stationary product measure \(\nu _\theta \) on the sum being equal to s is an invariant measure concentrated on the set \(\{(u,v)\in [0,\infty )^2: u+v=s\}\). This measure is exactly the distribution of \((s\epsilon , s(1-\epsilon ))\), with \(\epsilon \) Beta(2, 2) distributed. Combining this fact with (14), we obtain the following ergodic theorem and a complete characterization of the set of invariant measures satisfying the finite moments condition.
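Identity (15) is a finite sum and can be verified directly; a short sketch (the test point \((x,y)\) and the value of \(n+m\) are arbitrary choices of ours):

```python
import math

def D(k, l, x, y):
    """Duality polynomial (6)."""
    return x**k / math.factorial(k + 1) * y**l / math.factorial(l + 1)

x, y, N = 1.2, 0.8, 5                                    # N plays the role of n + m
Z = sum((k + 1) * (N - k + 1) for k in range(N + 1))     # normalization (13)
lhs = sum(D(k, N - k, x, y) * (k + 1) * (N - k + 1)      # average of D under (12)
          for k in range(N + 1)) / Z
rhs = (x + y)**N / (math.factorial(N) * Z)               # right-hand side of (15)
```

The cancellation is exact: the factors \((k+1)(l+1)\) of the measure cancel against the denominators \((k+1)!(l+1)!\) of D, leaving the binomial expansion of \((x+y)^N/N!\).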

Theorem 2.3

  (a) The process \((X_t,Y_t)\) is ergodic, i.e., \((X_t,Y_t)\) converges in distribution to \((S\epsilon , S(1-\epsilon ))\), with \(\epsilon \sim Beta(2,2)\) and \(S=X_0+Y_0\).

  (b) The set of invariant measures contained in \({\mathcal P}_f\) is given by the distributions of couples of the form \((S\epsilon , S(1-\epsilon ))\), where S is an arbitrary random variable on \([0,\infty )\) satisfying the finite moments condition and \(\epsilon \) is independent of S and Beta(2, 2) distributed.

3 Generalization to Many Agents

Consider now an economy populated by many agents. Let us assume that the economy can be represented by a countable set of agents V, i.e., each element (vertex) \(i\in V\) represents an agent. Consider an irreducible symmetric random walk kernel p(i, j) on V, i.e., such that \(p(i,i)=0\), \(p(i,j)=p(j,i)\ge 0, \sum _{j} p(i,j)=1\), and for all \(i,j\in V\) there exists n with \(p^{(n)} (i,j)>0\).

In this setting, the wealth configuration of the economy is an element of the set \(\Omega =[0,\infty )^V\). For \(\mathbf {x}\in \Omega \) (from now on simply x), we denote by \(x_i\) the wealth of agent i, that is, of vertex i.

We then define the generator of the model via

$$\begin{aligned} L f(x) = \sum _{ij} p(i,j) L_{ij} f(x), \end{aligned}$$
(16)

with

$$\begin{aligned} L_{ij} f(x) = \int _0^1\int _0^1 \left( f(x^{ij;uv})- f(x)\right) \ \text {d}u\text {d}v, \end{aligned}$$

where

$$\begin{aligned} x^{ij;uv}_k= {\left\{ \begin{array}{ll} x_k &{} \text {if}\ k\not \in \{i,j\}\\ x_i(1-u)+x_jv &{} \text {if}\ k=i\\ x_j(1-v)+x_iu &{} \text {if}\ k=j \end{array}\right. }. \end{aligned}$$

Accordingly, the dual process has state space \(\mathbb N^V\) and the elements of this state space are denoted by \(\varvec{\xi }\) (from now on just \(\xi \)), where \(\xi _i\) is the number of “dual units” at vertex i. A configuration \(\xi \) is called finite if \(|\xi |=\sum _i \xi _i\) is finite.

The generator of the dual process is then

$$\begin{aligned} {\mathcal L}f(\xi ) = \sum _{ij} p(i,j) {\mathcal L}_{ij} f(\xi ), \end{aligned}$$
(17)

with

$$\begin{aligned} {\mathcal L}_{ij} f(\xi ) = \sum _{K=0}^{\xi _i}\sum _{L=0}^{\xi _j}\frac{1}{(\xi _i+1)(\xi _j+1)} \left( f\left( \xi ^{ij;KL}\right) - f(\xi )\right) , \end{aligned}$$

where

$$\begin{aligned} \xi ^{ij;KL}_k= {\left\{ \begin{array}{ll} \xi _k &{} \text {if}\ k\not \in \{i,j\}\\ \xi _i - K+L &{} \text {if}\ k=i\\ \xi _j - L+K &{} \text {if}\ k=j \end{array}\right. }. \end{aligned}$$

Now, for \(\xi \in \mathbb N^V\) and \(x\in \Omega \), define

$$\begin{aligned} D(\xi ,x)= \prod _{i\in V} d(\xi _i, x_i) \end{aligned}$$
(18)

The relation between these duality polynomials and the product measure \(\nu _\theta :=\otimes _{i\in V} \nu _\theta (\text {d}x_i)\) is

$$\begin{aligned} \int D(\xi ,x) \nu _\theta (\text {d}x)= \theta ^{|\xi |} \end{aligned}$$
(19)

with

$$\begin{aligned} |\xi |=\sum _{i\in V} \xi _i \end{aligned}$$

the number of dual particles.

In the many-agent economy, the duality relation between the two processes is then given by the following theorem. Its proof follows directly from the two-agent case, because the generator is a sum of two-agent generators.

Theorem 3.1

Let \(\xi \in \mathbb N^V\) be a finite configuration. For all \(x\in \Omega \) and for all \(t>0\), we have

$$\begin{aligned} \mathbb E_x D(\xi ,x_t) = \widehat{\mathbb E}_\xi D(\xi _t, x). \end{aligned}$$
(20)

As a consequence, the product measures \(\nu _\theta =\otimes _{i\in V} \nu _\theta (\text {d}x_i)\) are invariant.

Notice that when V is finite, the product measures \(\otimes _{i\in V} \nu _\theta (\text {d}x_i)\) can never be ergodic, because the total wealth is conserved. However, for infinite V, we have ergodicity under an additional condition. Let us denote by \(p_t(\xi ,\xi ')\) the probability to go from the finite configuration \(\xi \in \mathbb N^V\) to the finite configuration \(\xi '\) in time \(t>0\), in the dual process with generator (17). Assume that

$$\begin{aligned} \lim _{t\rightarrow \infty } p_t(\xi ,\xi ')=0 \end{aligned}$$
(21)

for all \(\xi ,\xi '\in \mathbb N^V\). As an example, take \(V=\mathbb Z^d\) with p(i, j) the kernel of symmetric nearest-neighbor random walk.

Proposition 3.1

Let V be infinite and let p(i, j) be such that (21) holds. Then the product measure \(\otimes _{i\in V} \nu _\theta (\text {d}x_i)\) is ergodic.

Proof

Abbreviate \(\nu :=\otimes _{i\in V} \nu _\theta (\text {d}x_i)\). Because ergodicity is implied by mixing, it suffices to show that

$$\begin{aligned} \lim _{t\rightarrow \infty }\int \mathbb E_x D(\xi , x_t) D(\xi ', x) \nu (\text {d}x)= \int D(\xi , x) \nu (\text {d}x)\int D(\xi ', x) \nu (\text {d}x)= \theta ^{|\xi |+|\xi '|} \end{aligned}$$
(22)

because linear combinations of the polynomials \(D(\xi ,x)\) are dense in \(L^2(\nu _\theta )\). To prove (22) denote \(\xi \perp \xi '\) if the support of \(\xi \) and \(\xi '\) are disjoint, i.e., if there are no vertices \(i\in V\) which contain both particles from \(\xi \) and \(\xi '\). If \(\xi \perp \xi '\) then under the measure \(\nu _\theta \), the polynomials \(D(\xi ,\cdot )\) and \(D(\xi ',\cdot )\) are independent. Because of (21) it then follows, using duality and conservation of the total number of particles in the dual process:

$$\begin{aligned}&\lim _{t\rightarrow \infty }\int \mathbb E_x D(\xi , x_t) D(\xi ', x) \nu (\text {d}x) \\&\quad = \lim _{t\rightarrow \infty }\sum _{\zeta } p_t(\xi ,\zeta )\int D(\zeta , x) D(\xi ', x) \nu (\text {d}x) \\&\quad = \lim _{t\rightarrow \infty }\sum _{\zeta \perp \xi '} p_t(\xi ,\zeta )\int D(\zeta , x) D(\xi ', x) \nu (\text {d}x) \\&\quad = \lim _{t\rightarrow \infty }\sum _{\zeta \perp \xi '} p_t(\xi ,\zeta )\int D(\zeta , x) \ \nu (\text {d}x)\int D(\xi ', x)\ \nu (\text {d}x) \\&\quad = \lim _{t\rightarrow \infty }\sum _{\zeta \perp \xi '} p_t(\xi ,\zeta )\theta ^{|\xi |+|\xi '|} \\&\quad = \lim _{t\rightarrow \infty }\sum _{\zeta } p_t(\xi ,\zeta )\theta ^{|\xi |+|\xi '|} \\&\quad =\theta ^{|\xi |+|\xi '|} \end{aligned}$$

\(\square \)

Notice that, for a single dual particle, that is to say when \(\xi =\delta _i\), we have

$$\begin{aligned} D(\xi ,x) = \frac{x_i}{2}. \end{aligned}$$

In the dual process, the motion of a single dual particle is simply a continuous-time random walk jumping with rate \(\frac{p(i,j)}{2}\) from i to j.

If we denote by \(p_t(i,j)\) the time \(t>0\) transition probability of this walk, then duality with a single dual particle implies the following “random walk” spread of the expected wealth at time \(t>0\).

Proposition 3.2

In the model with generator (16), for all \(x\in \Omega \) and \(i\in V\) we have

$$\begin{aligned} \mathbb E_x (x_i(t))= \sum _j p_t(i,j) x_j. \end{aligned}$$
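A minimal simulation sketch of the many-agent dynamics (the cycle geometry, the nearest-neighbour kernel and all names are our own illustrative choices): each edge carries an independent Exp(1) clock and, when a clock rings, the two-agent update (1) is applied to that edge. Total wealth conservation is the property exhibited.

```python
import random

def simulate_cycle(x, t_max, rng):
    """Immediate Exchange Model on a cycle of len(x) agents.

    Each edge (i, i+1) carries an independent Exp(1) clock, i.e. p(i, j) is
    taken (up to normalization) nearest-neighbour on the cycle; at each ring
    the two-agent update rule (1) is applied to the chosen edge.
    """
    V = len(x)
    t = 0.0
    while True:
        t += rng.expovariate(V)      # superposition of V edge clocks
        if t > t_max:
            return x
        i = rng.randrange(V)         # the edge whose clock rang
        j = (i + 1) % V
        u, v = rng.random(), rng.random()
        x[i], x[j] = x[i] * (1 - u) + x[j] * v, x[j] * (1 - v) + x[i] * u

rng = random.Random(7)
x0 = [rng.random() * 10 for _ in range(20)]
total0 = sum(x0)
x = simulate_cycle(list(x0), t_max=10.0, rng=rng)
# the total wealth sum(x) equals total0, and every x[i] stays nonnegative
```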

4 Generalized Immediate Exchange Model

Consider the update rule (1) and assume that U and V are now independent and Beta(s, t) distributed (the original model is then recovered for \(s=t=1\)). In other words, we consider the generator

$$\begin{aligned} L_{s,t} f(x,y)= \int _0^1\int _0^1 \left( f(x(1-u)+yv,y(1-v)+xu)-f(x,y)\right) \phi _{s,t}(u,v) \text {d}u\text {d}v,\nonumber \\ \end{aligned}$$
(23)

where

$$\begin{aligned} \phi _{s,t}(u,v)= \left( \frac{1}{B(s,t)}\right) ^2u^{s-1} (1-u)^{t-1} v^{s-1} (1-v)^{t-1}. \end{aligned}$$

is the joint probability density of two independent Beta(s, t) distributed random variables.

As before, the generator can be rewritten as \(L_{s,t}= P_{s,t}-I\), where I is the identity and \(P_{s,t}\) the discrete-time Markov transition operator

$$\begin{aligned} P_{s,t}f(x,y)=\int _0^1\int _0^1 f(x(1-u)+yv,y(1-v)+xu)\phi _{s,t}(u,v)\text {d}u\text {d}v. \end{aligned}$$

In this generalized setting, the polynomials which we need for duality are now given by

$$\begin{aligned} d_{s,t}(k,x)= \frac{x^k\Gamma (s+t)}{\Gamma (s+t+k)} \end{aligned}$$
(24)

and

$$\begin{aligned} D_{s,t}(n,m;x,y)= d_{s,t}(n,x) d_{s,t}(m,y). \end{aligned}$$
(25)

These polynomials are associated to the Gamma distribution \(\nu ^{s+t}_\theta (\text {d}x)\) with shape parameter \(s+t\) and scale parameter \(\theta \),

$$\begin{aligned} \nu _\theta ^{s+t}(\text {d}x) = x^{s+t-1} e^{-x/\theta }\frac{1}{\Gamma (s+t)\theta ^{s+t}} dx \end{aligned}$$

via

$$\begin{aligned} \int d_{s,t}(k,x) \nu _\theta ^{s+t} (\text {d}x) = \theta ^k. \end{aligned}$$
(26)

As before, with a slight abuse of notation we also denote \(\nu ^{s+t}_\theta (dx dy)\) the product measure with marginals \(\nu ^{s+t}_\theta (\text {d}x)\).

The same computation as the one following (8) now yields that, for a given pair \((k,l)\) with \(0\le k\le n, 0\le l\le m\), the dual process jumps from \((n,m)\) to \((n-k+l,m-l+k)\) at rate

$$\begin{aligned} r_{s,t}(n,m;k,l)= \frac{n!m!}{B(s,t)^2}\frac{(k+s-1)!(m-l+t-1)!(n-k+t-1)!(l+s-1)!}{(s+t+n-1)!(s+t+m-1)! (n-k)! k! (m-l)! l!} \end{aligned}$$
(27)

where the factorials are to be interpreted as \(x!=\Gamma (x+1)\) when x is non-integer. Notice that, as in (9), the rates sum up to one:

$$\begin{aligned} \sum _{k=0}^n\sum _{l=0}^m r_{s,t}(n,m;k,l)=1. \end{aligned}$$
(28)

This follows via rewriting

$$\begin{aligned} r_{s,t}(n,m; k,l)= w_{s,t}(n,k) w_{s,t}(m,l) \end{aligned}$$

with

$$\begin{aligned} w_{s,t}(n,k)= \frac{n! (k+s-1)! (n-k+t-1)!}{B(s,t) (s+t+n-1)! k! (n-k)!} \end{aligned}$$

and recognizing the probability mass function of the Beta binomial distribution with parameters (n, s, t), given by

$$\begin{aligned} \text {BetaBin}(n,s,t) (k)= {n\atopwithdelims ()k} \frac{1}{B(s,t)}\left( \int _0^1 p^k (1-p)^{n-k} p^{s-1} (1-p)^{t-1} dp\right) \end{aligned}$$

As a consequence one has

$$\begin{aligned} \sum _{k=0}^n w_{s,t}(n,k)=1 \end{aligned}$$
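The normalization (28) can be checked numerically using the extended factorial \(x!=\Gamma(x+1)\); a short sketch (parameter values are our own choices):

```python
import math

def g(a):
    """Extended factorial: x! = Gamma(x + 1), valid for non-integer x."""
    return math.gamma(a + 1)

def w(s, t, n, k):
    """The factor w_{s,t}(n, k) of the rate (27); a Beta binomial pmf."""
    B = math.gamma(s) * math.gamma(t) / math.gamma(s + t)   # Beta function B(s, t)
    return g(n) * g(k + s - 1) * g(n - k + t - 1) / (
        B * g(s + t + n - 1) * g(k) * g(n - k))

def r(s, t, n, m, k, l):
    """Jump rate (27) in the factorized form r = w(n, k) w(m, l)."""
    return w(s, t, n, k) * w(s, t, m, l)

s, t, n, m = 2.5, 1.5, 6, 4
total = sum(r(s, t, n, m, k, l) for k in range(n + 1) for l in range(m + 1))
# total should equal 1, as stated in (28)
```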

We can then state the generalized duality result, and its consequences, as in Theorem 2.1. The dual process, when initiated at \((n,m)\), is once more an irreducible continuous-time Markov chain on the finite set \(\{(k,l): k+l=n+m\}\), which converges to a unique stationary distribution, denoted by \(\nu ^{s+t}_{n+m}(k,l)\) and given by

$$\begin{aligned} \nu ^{s+t}_{n+m}(k,l)= \frac{\Gamma (s+t+k)}{\Gamma (s+t) k!}\frac{\Gamma (s+t+l)}{\Gamma (s+t) l!}\frac{1}{{\mathcal Z}^{s+t}_{n+m}} \end{aligned}$$
(29)

where

$$\begin{aligned} {\mathcal Z}^{s+t}_{n+m}= \sum _{k,l: k+l=n+m} \frac{\Gamma (s+t+k)}{\Gamma (s+t) k!}\frac{\Gamma (s+t+l)}{\Gamma (s+t) l!} \end{aligned}$$
(30)

Notice now that we have the analogue of (15), i.e., if we consider the product measure with marginals \(\nu ^{s+t}_\theta \) conditioned on \(k+l=n+m\), then

$$\begin{aligned} \sum _{k,l: k+l=n+m} D(k,l;x,y) \nu ^{s+t}_{n+m}(k,l)= \frac{(x+y)^{n+m}}{(n+m)! {\mathcal Z}^{s+t}_{n+m}} \end{aligned}$$
(31)

is only a function of \(x+y\). As a consequence, we obtain the following result in the generalized model.

Theorem 4.1

  1. The process \((X_t,Y_t)\) with generator (23) and the process \((N_t,M_t)\) with rates (27) are dual with duality function (25). This means that, for all \((n,m)\in \mathbb N^2\) and \( (x,y)\in [0,\infty )^2\), we have

    $$\begin{aligned} \mathbb E^{s,t}_{x,y} D_{s,t}(n,m; X_t,Y_t) = \widehat{\mathbb E}^{s,t}_{n,m} D_{s,t}(N_t,M_t; x,y). \end{aligned}$$

  2. As a consequence, the product measure \(\nu _\theta ^{s+t} (\text {d}x\text {d}y)\) is invariant.

  3. Moreover, starting from any initial state \((x,y)\), the process \((X_t, Y_t)\) converges in distribution to \((S\epsilon , S(1-\epsilon ))\), where \(\epsilon \) is \(Beta(s+t,s+t)\)-distributed and \(S=x+y=X_0+Y_0\).

  4. The invariant measures with finite moments are the distributions of \((S\epsilon , S(1-\epsilon ))\), with S an arbitrary random variable on \([0,\infty )\) satisfying the finite moments condition and \(\epsilon \) independent of S and \(Beta(s+t,s+t)\)-distributed.

We can then build the analogue of this model for many agents associated to the vertices of a graph V, as in equations (16) and (17). First notice that for a single dual particle, when \(\xi =\delta _i\), we get

$$\begin{aligned} D(\xi ,x) = \frac{x_i}{s+t}. \end{aligned}$$

Just as before, the motion of a single dual particle in the dual process is a continuous-time random walk, jumping with rate \(p(i,j)\frac{s}{s+t}\) from i to j. If we denote by \(p_t(i,j)\) the time \(t>0\) transition probability of this walk, we then have the following result.

Proposition 4.1

In the model with generator (16), for all \(x\in \Omega \), \(i\in V\) we have, for all \(r>0\)

$$\begin{aligned} \mathbb E_x^{s,t} (x_i(r))= \sum _j p_r(i,j) x_j(0). \end{aligned}$$

5 Properties of the Discrete Dual Process

The discrete dual process is a redistribution model of independent interest. In the case of the KMP process, introduced in [7], it was already found that the discrete dual process is a natural discrete analogue of the original process, in the sense that the total mass of the two vertices (continuous in the original KMP process, discrete in its dual) is uniformly redistributed over the two vertices. The same holds for the one-parameter family of KMP-like processes, the thermalized Brownian energy processes, and their discrete duals, the thermalized SIP processes, in [1]. There the redistribution of the total mass is Beta(s, s) distributed.

In our context, the dual of the generalized Immediate Exchange Model is a discrete redistribution model of the same type as the original continuous model, exactly as in the context of the KMP process and its generalizations in [1]. It is therefore useful here too to understand more about the discrete dual process and its connection to the original process.

5.1 Reversible Measures

Define the discrete Gamma distribution with shape parameter \(s+t\) and scale parameter \(0<\theta <1\) as the probability measure on \(\mathbb N\) with probability mass function

$$\begin{aligned} \nu ^{s+t}_\theta (n)= \frac{1}{Z_\theta }\frac{\theta ^n}{n!} \frac{\Gamma (s+t+n)}{\Gamma (s+t)} \end{aligned}$$
(32)

where \(Z_\theta = (1-\theta )^{-s-t}\) is the normalizing factor. We first recall that the dual process has generator

$$\begin{aligned} \mathcal {L}f(n,m) = \sum _{k=0}^n\sum _{l=0}^m r_{s,t}(n,m;k,l) \left( f(n-k+l, m-l+k)- f(n,m)\right) \end{aligned}$$
(33)

where the rates are given by (27). It is important to notice here that this generator can be rewritten as follows

$$\begin{aligned} \mathcal {L}f(n,m) = \mathbb Ef(n-X_1+X_2, m-X_2+X_1)- f(n,m) \end{aligned}$$
(34)

where \(X_1=X_1^{(n)}\) is Beta binomial distributed with parameters (n, s, t), \(X_2=X_2^{(m)}\) is an independent Beta binomial with parameters (m, s, t), and \(\mathbb E\) denotes expectation w.r.t. these variables.
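The representation (34) suggests a direct way to simulate the dual process: a Beta binomial variable can be sampled by first drawing p from a Beta(s, t) distribution and then tossing n coins with success probability p. A minimal sketch (names are ours); note that \(n+m\) is conserved at every jump.

```python
import random

def beta_binomial(n, s, t, rng):
    """Sample BetaBin(n, s, t): draw p ~ Beta(s, t), then a Binomial(n, p)."""
    p = rng.betavariate(s, t)
    return sum(rng.random() < p for _ in range(n))

def dual_step(n, m, s, t, rng):
    """One jump of the dual generator in the form (34)."""
    x1 = beta_binomial(n, s, t, rng)   # X1 ~ BetaBin(n, s, t)
    x2 = beta_binomial(m, s, t, rng)   # X2 ~ BetaBin(m, s, t)
    return n - x1 + x2, m - x2 + x1

rng = random.Random(3)
n, m = 7, 5
for _ in range(1000):
    n, m = dual_step(n, m, 2.0, 3.0, rng)
# n + m equals its initial value 12 after any number of jumps
```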

Proposition 5.1

For all \(\theta \in (0,1)\), the product probability measures with marginals \( \nu ^{s+t}_\theta (n)\) are reversible for the process with generator (33).

Proof

The reversibility of \(\nu ^{s+t}_\theta \) for the generator \(\mathcal {L}\) follows from a standard detailed balance computation. Indeed, fix two configurations (n, m) and \((n',m') \in \mathbb N^2\) with \(n+m=n'+m'\); now, for any \( 0 \le k \le n\) and \(0 \le l \le m\) such that \(n'=n-k+l\) and \(m'=m-l+k\), it trivially follows that \(l \le n'= n-k+l\), \(k \le m'= m-l+k\), \(n= n'-l+k\) and \(m= m'-k+l\). In other words, for each redistribution of (n, m) according to \(r_{s,t}(n,m;k,l)\), we can find a “reverse” redistribution of \((n',m')\) according to \(r_{s,t}(n',m';l,k)\). Furthermore, these two redistributions are indeed in detailed balance, as one may see by explicit computation combining (27) and (32):

$$\begin{aligned} r(n,m;k,l) \nu ^{s+t}_{\theta }(n) \nu ^{s+t}_{\theta } (m)= r(n+l-k,m+k-l;l,k) \nu ^{s+t}_{\theta }(n+l-k) \nu ^{s+t}_{\theta } (m+k-l) \end{aligned}$$

which implies detailed balance and thus reversibility. \(\square \)
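The detailed balance relation displayed above can also be verified numerically over a range of configurations; a short sketch (parameter values are ours, and the normalizing factor \(Z_\theta\) is omitted since it cancels on both sides):

```python
import math

def g(a):
    return math.gamma(a + 1)     # extended factorial x! = Gamma(x + 1)

def w(s, t, n, k):
    """Beta binomial factor of the rate (27)."""
    B = math.gamma(s) * math.gamma(t) / math.gamma(s + t)
    return g(n) * g(k + s - 1) * g(n - k + t - 1) / (
        B * g(s + t + n - 1) * g(k) * g(n - k))

def nu(s, t, theta, n):
    """Unnormalized weight of the discrete Gamma distribution (32)."""
    return theta**n / math.factorial(n) * math.gamma(s + t + n) / math.gamma(s + t)

s, t, theta = 1.7, 2.3, 0.35
ok = True
for n in range(5):
    for m in range(5):
        for k in range(n + 1):
            for l in range(m + 1):
                n2, m2 = n - k + l, m - l + k
                lhs = w(s, t, n, k) * w(s, t, m, l) * nu(s, t, theta, n) * nu(s, t, theta, m)
                rhs = w(s, t, n2, l) * w(s, t, m2, k) * nu(s, t, theta, n2) * nu(s, t, theta, m2)
                ok = ok and math.isclose(lhs, rhs, rel_tol=1e-9)
```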

5.2 Scaling Limit

The fact that the rescaled Beta Binomial converges to the Beta distribution (by the law of large numbers) provides a connection between the discrete dual process and the continuous process. The continuous process arises as a limit of the discrete dual process where the number of initial “coins” is suitably rescaled to infinity. This is expressed in the following result.

Theorem 5.1

Let \(n_K, m_K\) be a sequence of integers indexed by \(K\in \mathbb N\), and such that

$$\begin{aligned} \frac{n_K}{K}\rightarrow x, \ \frac{m_K}{K}\rightarrow y \end{aligned}$$

as \(K\rightarrow \infty \). Then the corresponding processes \((n_K(t)/K, m_K(t)/K)\), with generator (33), converge to the continuous process with generator (23), starting from \((x,y)\).

Proof

Define a number \(A> x+y\). Because convergence of generators on a core implies convergence of the processes, it suffices to show that for smooth \(f: [0,A]^2\rightarrow \mathbb R\)

$$\begin{aligned} \lim _{K\rightarrow \infty } (\mathcal {L}f_K) (n_K, m_K) = L_{s,t} f(x,y) \end{aligned}$$
(35)

where \(f_K(n,m)= f(n/K, m/K)\), \(\mathcal {L}\) is given by (33), and \(L_{s,t}\) by (23). Consider \(X^{(n_K)}\) Beta binomial with parameters \(n_K, s,t\), and \(X^{(m_K)}\) independent Beta binomial with parameters \(m_K, s,t\). By the law of large numbers it follows that

$$\begin{aligned} \frac{X^{(n_K)}}{K}\rightarrow xY_{s,t}, \ \frac{X^{(m_K)}}{K}\rightarrow yY'_{s,t} \end{aligned}$$

with \(Y_{s,t}\), \(Y'_{s,t}\) being independent Beta(st) distributed. Therefore, by smoothness of f and dominated convergence, as \(K\rightarrow \infty \) we have

$$\begin{aligned}&\lim _{K\rightarrow \infty }\mathbb E( f_K (n_K-X^{(n_K)}+X^{(m_K)},m_K-X^{(m_K)}+X^{(n_K)}))\\&\quad = \mathbb E( f(x-xY_{s,t}+yY'_{s,t}, y-yY'_{s,t}+xY_{s,t}))\\&\quad = L_{s,t} f(x,y)+ f(x,y) \end{aligned}$$

which shows (35). \(\square \)
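The convergence in Theorem 5.1 can be illustrated numerically for the test function \(f(x,y)=x^2\), for which both the discrete expectation (an exact double sum over Beta-binomial weights) and the continuous limit (computed from Beta moments) are available in closed form. A sketch, with arbitrary parameter values:

```python
# Sketch: for f(x,y) = x^2, compare the exact discrete expectation
# E f_K(n_K - X + X', m_K - X' + X) with its continuous limit
# E f(x - x Y + y Y', ...), Y, Y' i.i.d. Beta(s, t).
from math import lgamma, exp, comb

s, t = 1.5, 2.0
x, y = 0.5, 0.25  # chosen so that n_K = K x, m_K = K y are integers below

def w(n, k):  # Beta-binomial(n; s, t) pmf
    return comb(n, k) * exp(lgamma(k + s) + lgamma(n - k + t) - lgamma(n + s + t)
                            + lgamma(s + t) - lgamma(s) - lgamma(t))

# continuous limit from the first two Beta(s, t) moments
m1 = s / (s + t)                            # E[Y]
m2 = s * (s + 1) / ((s + t) * (s + t + 1))  # E[Y^2]
target = x**2 * (1 - 2*m1 + m2) + 2*x*y*(1 - m1)*m1 + y**2 * m2

def discrete(K):
    n, m = int(K * x), int(K * y)
    return sum(w(n, k) * w(m, l) * ((n - k + l) / K) ** 2
               for k in range(n + 1) for l in range(m + 1))

errs = [abs(discrete(K) - target) for K in (20, 80, 320)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-2
```

For this quadratic f the error decays like \(1/K\), coming entirely from the variance of the Beta-binomial exceeding the rescaled Beta variance at finite K.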

6 Self Duality and SU(1, 1) Symmetry of the Dual Process

In this section we show self-duality with the self-duality polynomials which are naturally associated to the reversible discrete Gamma distributions. More precisely, we define the following discrete polynomials:

$$\begin{aligned} d_{s,t}(k,n)= \frac{n!}{(n-k)!}\frac{\Gamma (s+t)}{\Gamma (s+t+k)} \end{aligned}$$
(36)

where negative factorials are defined to be infinite. These polynomials are naturally connected to the discrete reversible Gamma distribution via

$$\begin{aligned} \sum _{n} d_{s,t}(k,n) \nu ^{s+t}_\theta (n)= \rho (\theta )^k \end{aligned}$$
(37)

with \(\rho (\theta ) = \theta /(1-\theta )\). Next we have the associated polynomial in two variables:

$$\begin{aligned} D_{s,t}(k,l;n,m)= d_{s,t}(k,n) d_{s,t}(l,m) \end{aligned}$$
(38)
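Identity (37) is the factorial-moment formula for the discrete Gamma (negative-binomial) marginal \(\nu ^{s+t}_\theta \), and can be checked numerically; a sketch, with arbitrary parameter values and a truncated sum:

```python
# Sketch: verify sum_n d_{s,t}(k, n) nu_theta^{s+t}(n) = (theta/(1-theta))^k,
# with the sum truncated at a point where the negative-binomial tail is negligible.
from math import lgamma, exp

s, t, theta = 1.5, 2.0, 0.3

def nu(n):  # normalized marginal: (1-theta)^{s+t} theta^n Gamma(s+t+n) / (n! Gamma(s+t))
    return (1 - theta) ** (s + t) * theta ** n * exp(
        lgamma(s + t + n) - lgamma(s + t) - lgamma(n + 1))

def d(k, n):  # duality polynomial (36); vanishes for n < k
    if n < k:
        return 0.0
    return exp(lgamma(n + 1) - lgamma(n - k + 1) + lgamma(s + t) - lgamma(s + t + k))

for k in range(5):
    lhs = sum(d(k, n) * nu(n) for n in range(200))
    rhs = (theta / (1 - theta)) ** k
    assert abs(lhs - rhs) < 1e-10 * max(1.0, rhs)
```
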

Notice that in the case \(n=\lfloor Nx\rfloor , m=\lfloor Ny\rfloor \), after division by \(N^{k+l}\) and in the limit \(N\rightarrow \infty \), these discrete polynomials converge to the duality polynomials (25). We recall that the dual process has a generator of the form

$$\begin{aligned} \mathcal {L}f(n,m) = \sum _{k=0}^n\sum _{l=0}^m r_{s,t} (n,m; k,l) (f(n-k+l, m+k-l)-f(n,m))= (P-I)f (n,m) \end{aligned}$$

where the discrete transition operator

$$\begin{aligned} Pf(n,m)= \sum _{k=0}^n\sum _{l=0}^m r_{s,t} (n,m; k,l) f(n-k+l, m+k-l) \end{aligned}$$

is indeed a Markov transition operator because, as we showed before,

$$\begin{aligned} \sum _{k=0}^n\sum _{l=0}^m r_{s,t} (n,m; k,l)=1. \end{aligned}$$
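The normalization of the rates can also be verified directly; a minimal sketch (arbitrary s, t), using the Beta-binomial form of \(w_{s,t}\):

```python
# Sketch: the rates r_{s,t}(n,m;k,l) = w(n,k) w(m,l) sum to 1,
# since each factor is a Beta-binomial pmf.
from math import lgamma, exp, comb

s, t = 0.7, 2.3

def w(n, k):  # Beta-binomial(n; s, t) pmf
    return comb(n, k) * exp(lgamma(k + s) + lgamma(n - k + t) - lgamma(n + s + t)
                            + lgamma(s + t) - lgamma(s) - lgamma(t))

for n in range(8):
    for m in range(8):
        total = sum(w(n, k) * w(m, l) for k in range(n + 1) for l in range(m + 1))
        assert abs(total - 1.0) < 1e-12
```
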

To prove self-duality of the process with generator (33), we show that it commutes with an SU(1, 1) raising operator \(K_1^++ K_2^+\), from which we can generate the self-duality function via the strategy described in [2], namely by acting with \(e^{K_1^++K_2^+}\) on a cheap self-duality function coming from the reversible product measure.

In order to proceed with this, we introduce the SU(1, 1) raising operators [9],

$$\begin{aligned} K^+ f(n)= (s+t+n) f(n+1). \end{aligned}$$
(39)

For a function \(f(n,m)\) of two discrete variables, we denote by \(K_1^+\) (resp. \(K_2^+\)) the operator \(K^+\) defined in (39) acting on the first (resp. second) variable. Similarly we have the lowering and diagonal operators

$$\begin{aligned} K^- f(n)= nf(n-1), \qquad K^0 f(n)= \left( \tfrac{s+t}{2} + n\right) f(n). \end{aligned}$$
(40)

Together, the \(K^-, K^+, K^0\) generate a discrete (left) representation of SU(1, 1); i.e. they satisfy the SU(1, 1) commutation relations

$$\begin{aligned}{}[K^+, K^-]= 2K^0, \qquad [ K^\pm , K^0]=\pm K^\pm . \end{aligned}$$
(41)

where \([A,B]= AB-BA\) denotes the commutator. We will show in this subsection that the generator \(\mathcal {L}\) defined in (33) has SU(1, 1) symmetry and that the self-duality follows as a consequence, in the spirit of [1, 9]. We start by noticing that by reversibility of the measure \(\nu ^{s+t}_\theta \), the function

$$\begin{aligned} {\mathcal D}(n',m'; n,m)= \delta _{n',n}\delta _{m',m} \frac{n!\Gamma (s+t)}{\Gamma (s+t+n)} \frac{m!\Gamma (s+t)}{\Gamma (s+t+m)} \end{aligned}$$

is a “cheap” self-duality function [2, 9]. Furthermore, we remark that the claimed self-duality polynomials can be obtained via

$$\begin{aligned} D(n',m';n,m)= e^{K_1^++K_2^+} {\mathcal D}(n',m'; n,m) \end{aligned}$$

where the operator \(e^{K_1^+ + K_2^+} \) is working on the \(n'\), \(m'\) variables. Therefore, in order to prove that self-duality holds with the claimed polynomials (36), (38), it suffices to prove that \(K_1^++ K_2^+\) commutes with the generator. Indeed, then from the general theory developed in [9], see also [2], it follows that \(e^{K_1^++K_2^+} {\mathcal D}(k,l; n,m)\), which arises from the action of a symmetry (an operator commuting with the generator) on a self-duality function, is again a self-duality function.
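Before turning to the theorem, the commutation relations (41) for the discrete operators (39), (40) can be checked numerically by treating them as maps on functions \(f:\mathbb N\rightarrow \mathbb R\); a sketch with arbitrary values of s and t:

```python
# Sketch: check [K^+, K^-] = 2 K^0 and [K^{+/-}, K^0] = +/- K^{+/-}
# pointwise, for the discrete operators (39)-(40).
s, t = 1.3, 0.8
c = s + t

def Kp(f):  # raising operator (39)
    return lambda n: (c + n) * f(n + 1)

def Km(f):  # lowering operator (40)
    return lambda n: n * f(n - 1) if n > 0 else 0.0

def K0(f):  # diagonal operator (40)
    return lambda n: (c / 2 + n) * f(n)

def comm(A, B, f, n):  # commutator [A, B] applied to f, evaluated at n
    return A(B(f))(n) - B(A(f))(n)

f = lambda n: (n + 1.0) ** 2 / (n + 2.0)  # arbitrary test function
for n in range(10):
    assert abs(comm(Kp, Km, f, n) - 2 * K0(f)(n)) < 1e-9
    assert abs(comm(Kp, K0, f, n) - Kp(f)(n)) < 1e-9
    assert abs(comm(Km, K0, f, n) + Km(f)(n)) < 1e-9
```
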

Theorem 6.1

The generator \(\mathcal {L}\) in (33) and the operator \(K_1^++K_2^+\) commute, i.e., for all \(f: \mathbb N^2\rightarrow \mathbb R\) we have

$$\begin{aligned} \mathcal {L}(K_1^++K_2^+) f= (K_1^++K_2^+)\mathcal {L}f. \end{aligned}$$
(42)

Remark 6.1

(Hypergeometric Functions) We briefly recall some definitions and properties of hypergeometric functions that we will need in the proof of Theorem 6.1. On a suitable subdomain of \(\{z \in \mathbb {C}: \mathfrak {R}{(z)}> 0\}\), the Gauss hypergeometric function is defined via the series expansion

$$\begin{aligned} {}_2F_1(a,b;c;z)= \sum _{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k}\frac{z^k}{k!}, \end{aligned}$$

where \((a)_k= a(a+1)\cdots (a+k-1)= \Gamma (a+k)/\Gamma (a)\) denotes the Pochhammer symbol.

Note that for all \(n, k \in \mathbb N\) and \(t \in \mathbb R_+\),

$$\begin{aligned} (-n)_k = (-1)^k n \cdot (n-1) \cdots (n-k+1) = (-1)^k \frac{\Gamma (n+1)}{\Gamma (n-k+1)} \end{aligned}$$

and

$$\begin{aligned} (1-n-t)_k= (-1)^k \frac{\Gamma (n+t)}{\Gamma (n-k+t)}. \end{aligned}$$
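Both Pochhammer identities can be verified numerically (a quick sketch, with an arbitrary value of t):

```python
# Sketch: check (-n)_k = (-1)^k Gamma(n+1)/Gamma(n-k+1) and
# (1-n-t)_k = (-1)^k Gamma(n+t)/Gamma(n-k+t), for k <= n.
from math import gamma

def poch(a, k):  # rising factorial (a)_k = a (a+1) ... (a+k-1)
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

t = 0.6
for n in range(1, 8):
    for k in range(n + 1):
        rhs1 = (-1) ** k * gamma(n + 1) / gamma(n - k + 1)
        assert abs(poch(-n, k) - rhs1) < 1e-9 * max(1.0, abs(rhs1))
        rhs2 = (-1) ** k * gamma(n + t) / gamma(n - k + t)
        assert abs(poch(1 - n - t, k) - rhs2) < 1e-9 * max(1.0, abs(rhs2))
```
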

Moreover, we will use Gauss’s summation theorem ([8, Theorem 2]):

$$\begin{aligned} {}_2F_1(a,b;c;1)= \frac{\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}, \qquad \mathfrak {R}(c-a-b)>0. \end{aligned}$$

Further standard identities for hypergeometric functions will be used in the computations below.

Proof

Let us prove that for all functions \(f: \mathbb N^2 \rightarrow \mathbb R\) and \((n,m) \in \mathbb N^2\)

$$\begin{aligned} P \left( K^+_1 + K^+_2 \right) f(n,m) = \left( K^+_1 + K^+_2 \right) P f(n,m). \end{aligned}$$
(43)

By straightforward computations and substitutions, if we adopt the notation

$$\begin{aligned} \begin{bmatrix} a \\ b \end{bmatrix}_{s,t}:= \frac{\Gamma (a+s+t)}{\Gamma (b+s) \Gamma (a-b+t)}, \quad a \ge b \ge 0, \quad s, t > 0, \end{aligned}$$

the l.h.s. of (43) becomes (writing \(K^+:= K^+_1 + K^+_2\))

$$\begin{aligned} P K^+ f(n,m)&=\sum _{k=0}^n \sum _{l=0}^m w_{s,t}(n,k) w_{s,t}(m,l) \left( \left( K^+_1 + K^+_2 \right) f \right) (n-k+l, m-l+k)\\&=\frac{1}{B(s,t)^2}\sum _{k=0}^n \sum _{l=0}^m \left( \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( {\begin{array}{c}m\\ l\end{array}}\right) \right) \bigg / \left( \begin{bmatrix} n \\ k \end{bmatrix}_{s,t} \begin{bmatrix} m \\l \end{bmatrix}_{s,t}\right) \\&\qquad \times \bigg ( (s+t+(n-k+l)) f(n-k+l+1, m-l+k)\\&\qquad + (s+t+(m-l+k))f(n-k+l, m-l+k+1)\bigg ), \end{aligned}$$

while the r.h.s. becomes

$$\begin{aligned} K^+ P f(n,m)&= (s+t+n) Pf (n+1, m) + (s+t+m) Pf(n, m+1)\\&=\frac{s+t+n}{B(s,t)^2} \sum _{k'=0}^{n+1} \sum _{l'=0}^m \left( {\begin{array}{c}n+1\\ k'\end{array}}\right) \left( {\begin{array}{c}m\\ l'\end{array}}\right) \bigg / \left( \begin{bmatrix} n+1 \\ k' \end{bmatrix}_{s,t} \begin{bmatrix} m \\ l'\end{bmatrix}_{s,t} \right) \\&\qquad \times f(n+1-k'+l', m-l'+k')\\&\qquad + \frac{s+t+m}{B(s,t)^2} \sum _{k''=0}^n \sum _{l''=0}^{m+1} \left( {\begin{array}{c}n\\ k''\end{array}}\right) \left( {\begin{array}{c}m+1\\ l''\end{array}}\right) \bigg / \left( \begin{bmatrix} n \\ k'' \end{bmatrix}_{s,t} \begin{bmatrix} m+1 \\ l'' \end{bmatrix}_{s,t} \right) \\&\qquad \times f(n-k''+l'', m+1 -l''+k''), \end{aligned}$$

Let us introduce another shortcut:

$$\begin{aligned} z_s(k):= \frac{\Gamma (k+s)}{\Gamma (k+1)}, \quad k \in \mathbb N, \quad s > 0. \end{aligned}$$

Since it is enough to show the identity for functions \(f: \mathbb N^2 \rightarrow \mathbb R\) of the form

$$\begin{aligned} f(n,m):= \theta _1^n \theta _2^m, \quad \theta _1, \theta _2 \in (0,1), \quad (n,m) \in \mathbb N^2, \end{aligned}$$

we can recast (43) as follows:

$$\begin{aligned}&\frac{n! m!}{\Gamma (n+s+t) \Gamma (m+s+t)} \sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \cdot \\&\quad \left( (s+t+(n-k+l)) \theta _1^{n-k+l}\theta _2^{m-l+k} \theta _1 + (s+t+(m-l+k)) \theta _1^{n-k+l} \theta _2^{m-l+k} \theta _2 \right) \\&\qquad =\frac{(n+s+t) (n+1)! m!}{\Gamma (n+1+s+t) \Gamma (m+s+t)} \sum _{k=0}^{n+1} \sum _{l=0}^m z_s(k) z_t(n+1-k) z_s(l) z_t(m-l) \theta _1^{n-k+l} \theta _2^{m-l+k} \theta _1 \\&\quad \qquad +\frac{(m+s+t) n! (m+1)!}{\Gamma (n+s+t) \Gamma (m+1+s+t)} \sum _{k=0}^n \sum _{l=0}^{m+1} z_s(k) z_t(n-k) z_s(l) z_t(m+1-l) \theta _1^{n-k+l} \theta _2^{m-l+k} \theta _2 \end{aligned}$$
$$\begin{aligned} \Longleftrightarrow \end{aligned}$$
$$\begin{aligned}&\sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2}\right) ^{l-k} \cdot \\&\quad \cdot \bigg \{\theta _1 \bigg [(n+s+t)-(n+1)\frac{n-k+t}{n-k+1} \bigg ] + \theta _2 \bigg [m+s+t - (m+1) \frac{m-l+t}{m-l+1} \bigg ] \bigg \}\\&\qquad + (\theta _1 - \theta _2) \sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2} \right) ^{l-k} (l-k)\\&\quad =\theta _1 (n+1) z_s(n+1) z_t(0) \left( \frac{\theta _1}{\theta _2} \right) ^{-(n+1)} \sum _{l=0}^m z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2}\right) ^{l} + \\&\qquad + \theta _2 (m+1) z_s(m+1) z_t(0) \left( \frac{\theta _1}{\theta _2} \right) ^{m+1} \sum _{k=0}^n z_s(k) z_t(n-k) \left( \frac{\theta _1}{\theta _2} \right) ^{-k}. \end{aligned}$$

Since

$$\begin{aligned} n+s+t-(n+1) \frac{n-k+t}{n-k+1}= s - (1-t) \frac{ k}{n-k+1}, \end{aligned}$$

we can further simplify

$$\begin{aligned}&s(\theta _1 + \theta _2) \sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2} \right) ^{l-k} \\&\qquad +(1-t) \sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2} \right) ^{l-k} \cdot \bigg \{\theta _1 \frac{k}{n-k+1} + \theta _2 \frac{l}{m-l+1} \bigg \} \\&\qquad + (\theta _1 - \theta _2) \sum _{k=0}^n \sum _{l=0}^m z_s(k) z_t(n-k) z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2} \right) ^{l-k} (l-k)\\&\quad = \theta _1 (n+1) z_s(n+1) z_t(0) \left( \frac{\theta _1}{\theta _2} \right) ^{-(n+1)} \sum _{l=0}^m z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2}\right) ^{l} \\&\qquad + \theta _2 (m+1) z_s(m+1) z_t(0) \left( \frac{\theta _1}{\theta _2} \right) ^{m+1} \sum _{k=0}^n z_s(k) z_t(n-k) \left( \frac{\theta _1}{\theta _2} \right) ^{-k}. \end{aligned}$$

Now, by noting that

$$\begin{aligned} \frac{k}{\Gamma (k+1)}=\frac{1}{\Gamma (k)} \quad \text {and} \quad \frac{1}{\Gamma (n-k+1) (n-k+1)}= \frac{1}{\Gamma (n-k+2)}, \end{aligned}$$

and by using the shortcuts

$$\begin{aligned} N:= \sum _{k=0}^n z_s(k) z_t(n-k) \left( \frac{\theta _1}{\theta _2}\right) ^{-k} \quad \text {and} \quad M:= \sum _{l=0}^m z_s(l) z_t(m-l) \left( \frac{\theta _1}{\theta _2}\right) ^{l}, \end{aligned}$$
$$\begin{aligned} \hat{N}:= \sum _{k=0}^n \frac{\Gamma (k+s)}{\Gamma (k)} \frac{\Gamma (n-k+t)}{\Gamma (n-k+1)} \left( \frac{\theta _1}{\theta _2} \right) ^{-k} \quad \text {and} \quad \hat{\hat{N}}:= \sum _{k=0}^n \frac{\Gamma (k+s)}{\Gamma (k)} \frac{\Gamma (n-k+t)}{\Gamma (n-k+2)}\left( \frac{\theta _1}{\theta _2} \right) ^{-k}, \end{aligned}$$

and similarly for \(\hat{M}\) and \(\hat{\hat{M}}\), we can continue with

$$\begin{aligned}&M \left( s \theta _1 N+ (1-t) \theta _1 \hat{\hat{N}} - (\theta _1 - \theta _2) \hat{N} - \theta _2 \frac{\Gamma (n+1+s) \Gamma (t)}{\Gamma (n+1)} \left( \frac{\theta _1}{\theta _2} \right) ^{-n} \right) \nonumber \\&\quad = N \left( -s \theta _2 M - (1-t) \theta _2 \hat{\hat{M}} - (\theta _1 - \theta _2) \hat{M} + \theta _1 \frac{\Gamma (m+1+s) \Gamma (t)}{\Gamma (m+1)} \left( \frac{\theta _1}{\theta _2} \right) ^m \right) . \end{aligned}$$
(44)

Note that, as in Remark 6.1, we can rewrite these quantities N, \(\hat{N}\) etc., in terms of hypergeometric functions as follows

and

Therefore, the expression

$$\begin{aligned} s \theta _1 N+ (1-t) \theta _1 \hat{\hat{N}} - (\theta _1 - \theta _2) \hat{N} - \theta _2 \frac{\Gamma (n+1+s) \Gamma (t)}{\Gamma (n+1)} \left( \frac{\theta _1}{\theta _2} \right) ^{-n} \end{aligned}$$

simplifies to

(45)

By some standard manipulations of hypergeometric functions, the expression

reduces to

In conclusion, if we go back and plug the latter expression into (45), we can rewrite the l.h.s. in (44) as

Replacing n by m, \(\theta _1\) by \(\theta _2\), etc., and changing the sign in the latter expression, one obtains the explicit form of the r.h.s. of (44), which proves identity (43). \(\square \)
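Theorem 6.1, in the equivalent form (43), can be spot-checked numerically with the Beta-binomial transition operator P and an exponential test function; a sketch with arbitrary parameter values:

```python
# Sketch: check P (K_1^+ + K_2^+) f = (K_1^+ + K_2^+) P f on a grid
# of configurations, for an exponential test function.
from math import lgamma, exp, comb

s, t = 1.2, 0.9
c = s + t

def w(n, k):  # Beta-binomial(n; s, t) pmf
    return comb(n, k) * exp(lgamma(k + s) + lgamma(n - k + t) - lgamma(n + s + t)
                            + lgamma(s + t) - lgamma(s) - lgamma(t))

def P(f):  # transition operator: average over Beta-binomial redistributions
    def Pf(n, m):
        return sum(w(n, k) * w(m, l) * f(n - k + l, m - l + k)
                   for k in range(n + 1) for l in range(m + 1))
    return Pf

def Kplus(f):  # K_1^+ + K_2^+
    return lambda n, m: (c + n) * f(n + 1, m) + (c + m) * f(n, m + 1)

th1, th2 = 0.4, 0.7
f = lambda n, m: th1 ** n * th2 ** m
for n in range(6):
    for m in range(6):
        assert abs(P(Kplus(f))(n, m) - Kplus(P(f))(n, m)) < 1e-9
```
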

We extend now the commutation of the generator with \(K_1^++K_2^+\) to full SU(1, 1) symmetry of both the discrete and the continuous model. For this we need some additional notation. Define the operators \({\mathcal K}^\alpha \) (for \(\alpha \in \{+,-,0\}\)) acting on functions \(f:[0,\infty )\rightarrow \mathbb R\) by

$$\begin{aligned} {\mathcal K}^+ f(x)= & {} xf(x)\end{aligned}$$
(46)
$$\begin{aligned} {\mathcal K}^- f(x)= & {} \left( x\partial ^2_x+ (s+t)\partial _x\right) f(x)\end{aligned}$$
(47)
$$\begin{aligned} {\mathcal K}^0 f(x)= & {} \left( x\partial _x+\tfrac{s+t}{2}\right) f(x) \end{aligned}$$
(48)

The algebra generated by the \({\mathcal K}^\alpha \) forms a (right) representation of SU(1, 1); i.e., these operators satisfy the commutation relations (41) with opposite sign. Moreover, this continuous right representation is linked with the discrete left representation used before via the duality polynomials (24), i.e.,

$$\begin{aligned} {\mathcal K}^\alpha d_{s,t}(n,x)= K^\alpha d_{s,t} (n,x), \quad \alpha \in \{+,-,0\} \end{aligned}$$
(49)

where \({\mathcal K}\) works on x, and K on n (see e.g. [2] for the proof).
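Relation (49) can be checked for \(\alpha \in \{+,-\}\) assuming the explicit form \(d_{s,t}(n,x)= x^n\,\Gamma (s+t)/\Gamma (s+t+n)\) for the continuous duality polynomials (24); this form is the natural continuum analogue of (36) and is an assumption of the following sketch:

```python
# Sketch (assumes d_{s,t}(n, x) = x^n Gamma(s+t)/Gamma(s+t+n)):
# check curly-K^alpha d = K^alpha d for alpha in {+, -}, where the
# curly operators act on x and the discrete ones on n.
from math import gamma

s, t = 1.4, 0.8
c = s + t

def d(n, xx):  # assumed explicit duality polynomial
    return xx ** n * gamma(c) / gamma(c + n)

def d1(n, xx):  # first derivative in x
    return n * xx ** (n - 1) * gamma(c) / gamma(c + n) if n >= 1 else 0.0

def d2(n, xx):  # second derivative in x
    return n * (n - 1) * xx ** (n - 2) * gamma(c) / gamma(c + n) if n >= 2 else 0.0

for n in range(6):
    for xx in (0.3, 1.0, 2.5):
        # (46) vs (39): x d(n, x) = (s+t+n) d(n+1, x)
        assert abs(xx * d(n, xx) - (c + n) * d(n + 1, xx)) < 1e-12
        # (47) vs (40): x d'' + (s+t) d' = n d(n-1, x)
        lhs = xx * d2(n, xx) + c * d1(n, xx)
        rhs = n * d(n - 1, xx) if n >= 1 else 0.0
        assert abs(lhs - rhs) < 1e-12
```
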

We first formulate a simple lemma, showing that \(\theta ^{-1} K^-\) is the adjoint of \(K^+\) in \(L^2(\nu ^{s+t}_\theta )\).

Lemma 6.1

Let \(\nu ^{s+t}_\theta \) be the reversible measure for the discrete dual process, defined in (32). We have in \(L^2(\nu ^{s+t}_\theta )\)

$$\begin{aligned} (K^+)^*= \frac{1}{\theta } K^- \end{aligned}$$

where \(K^\alpha \) are the operators introduced in (39),(40).

Proof

Let \(f,g:\mathbb N\rightarrow \mathbb R\) be functions with compact support, then we compute

$$\begin{aligned}&\sum _{n \ge 0} f(n) K^+ g(n) \nu ^{s+t}_\theta (n)\\&\quad = \frac{1}{Z_\theta }\sum _{n \ge 0} f(n) (n+s+t) g(n+1)\frac{\theta ^n}{n!} \frac{\Gamma (s+t+n)}{\Gamma (s+t)} \\&\quad = \frac{1}{Z_\theta }\sum _{n \ge 0} f(n) g(n+1)\frac{\theta ^n}{n!} \frac{\Gamma (s+t+n+1)}{\Gamma (s+t)} \\&\quad = \frac{1}{\theta }\frac{1}{Z_\theta }\sum _{n \ge 1} nf(n-1) g(n)\frac{\theta ^n}{n!} \frac{\Gamma (s+t+n)}{\Gamma (s+t)} \\&\quad = \frac{1}{\theta } \sum _{n \ge 0} K^-f(n) g(n) \nu ^{s+t}_\theta (n) \end{aligned}$$

\(\square \)
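The adjoint relation of Lemma 6.1 can be verified numerically for compactly supported test functions; a sketch with arbitrary parameter values:

```python
# Sketch: check <f, K^+ g> = (1/theta) <K^- f, g> in L^2(nu_theta^{s+t})
# for compactly supported f, g.
from math import lgamma, exp

s, t, theta = 1.1, 1.7, 0.25
c = s + t

def nu(n):  # normalized nu_theta^{s+t}
    return (1 - theta) ** c * theta ** n * exp(lgamma(c + n) - lgamma(c) - lgamma(n + 1))

f = lambda n: 1.0 / (1 + n) if n <= 15 else 0.0   # compactly supported
g = lambda n: (-0.5) ** n if n <= 15 else 0.0

Kp_g = lambda n: (c + n) * g(n + 1)                # K^+ g
Km_f = lambda n: n * f(n - 1) if n > 0 else 0.0    # K^- f

N = 40  # truncation; both test functions vanish beyond n = 15
inner1 = sum(f(n) * Kp_g(n) * nu(n) for n in range(N))
inner2 = sum(Km_f(n) * g(n) * nu(n) for n in range(N))
assert abs(inner1 - inner2 / theta) < 1e-12
```
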

We are now ready to prove the full SU(1, 1) symmetry of both the original continuous process and the discrete dual process. To explain this, we denote the coproduct

$$\begin{aligned} \Delta : {\mathcal U}(SU(1,1))\rightarrow {\mathcal U}(SU(1,1))\otimes {\mathcal U}(SU(1,1)) \end{aligned}$$

which is defined on the generators as \(\Delta (K^\alpha ):= K^\alpha _1+K^\alpha _2\) and extended to the algebra as a homomorphism. We then say that the process with generator L has full SU(1, 1) symmetry if it commutes with every element of the form \(\Delta (A)\), \(A\in {\mathcal U}(SU(1,1))\). This in turn follows if it holds for the generators \(K^\alpha \), by linearity and the Leibniz rule for commutators, \([L,AB]=A[L,B]+[L,A]B\).

Theorem 6.2

Let \(\mathcal {L}\) denote the generator of the discrete dual process, defined in (33), and L the generator of the continuous process defined in (23). Then we have for \(\alpha \in \{+,-,0\}\) the commutation properties

$$\begin{aligned}{}[\mathcal {L}, K^\alpha _1+K^\alpha _2] = [L, {\mathcal K}^\alpha _1+{\mathcal K}^\alpha _2]=0 \end{aligned}$$
(50)

As a consequence both \({\mathcal L}\) and L have full SU(1, 1) symmetry.

Proof

We start with the discrete process. Because the sum of the wealths is conserved, \(\mathcal {L}\) trivially commutes with \(K^0_1+K^0_2\). We showed in (42) that it commutes with \(K^+_1+K^+_2\). To show that it commutes with \(K^-_1+K^-_2\), we use Lemma 6.1 and the fact that \(\mathcal {L}\) is self-adjoint in \(L^2(\nu ^{s+t}_\theta )\) by the reversibility of \(\nu ^{s+t}_\theta \):

$$\begin{aligned}{}[\mathcal {L}, K^-_1+ K^-_2]= \theta [\mathcal {L}^*, (K^+_1+ K^+_2)^*]=-\theta \left( [\mathcal {L}, (K^+_1+ K^+_2)]\right) ^*=0 \end{aligned}$$

We then turn to the continuous model, using (49). We show the commutation with \({\mathcal K}^+_1+{\mathcal K}^+_2\); the other cases are similar. We consider \(D_{s,t}(n,m;x,y)\), the duality polynomial defined in (25), (26), and abbreviate it simply by D, where in what follows we tacitly understand that operators of the form \({\mathcal K}\) act on x, y and those of the form K on n, m. In this notation, operators acting on different variables always commute (e.g. \({\mathcal K}\) commutes with \(\mathcal {L}\), etc.). We can then proceed as follows, using duality, which reads \(\mathcal {L}D= LD\):

$$\begin{aligned} {\mathcal L}({\mathcal K}^+_1+{\mathcal K}^+_2) D= & {} ({\mathcal K}^+_1+{\mathcal K}^+_2){\mathcal L}D \\= & {} ({\mathcal K}^+_1+{\mathcal K}^+_2)LD \end{aligned}$$

On the other hand, via (49)

$$\begin{aligned} L ({\mathcal K}^+_1+{\mathcal K}^+_2)D= & {} L(K^+_1+K^+_2) D\\= & {} (K^+_1+K^+_2) \mathcal {L}D \\= & {} \mathcal {L}(K^+_1+K^+_2) D \\= & {} \mathcal {L}({\mathcal K}^+_1+{\mathcal K}^+_2) D \end{aligned}$$

where in the third equality we used the commutation of \(\mathcal {L}\) with \(K^+_1+ K^+_2\). Combination of these computations then gives indeed

$$\begin{aligned} ({\mathcal K}^+_1+{\mathcal K}^+_2)L= L ({\mathcal K}^+_1+{\mathcal K}^+_2) \end{aligned}$$

on the functions D, and then by standard arguments on all f in \(L^2 (\nu ^{s+t}_\theta )\). \(\square \)
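The discrete half of Theorem 6.2 can be spot-checked numerically: the generator (33) should commute with \(K^0_1+K^0_2\) (trivially, by conservation of \(n+m\)) and with \(K^-_1+K^-_2\). A sketch with arbitrary parameters and an exponential test function:

```python
# Sketch: check [L, K_1^0 + K_2^0] = 0 and [L, K_1^- + K_2^-] = 0
# for the discrete generator (33) on a grid of configurations.
from math import lgamma, exp, comb

s, t = 0.9, 1.6
c = s + t

def w(n, k):  # Beta-binomial(n; s, t) pmf
    return comb(n, k) * exp(lgamma(k + s) + lgamma(n - k + t) - lgamma(n + s + t)
                            + lgamma(s + t) - lgamma(s) - lgamma(t))

def L(f):  # generator (33)
    def Lf(n, m):
        return sum(w(n, k) * w(m, l) * (f(n - k + l, m - l + k) - f(n, m))
                   for k in range(n + 1) for l in range(m + 1))
    return Lf

def Kminus(f):  # K_1^- + K_2^-
    return lambda n, m: (n * f(n - 1, m) if n > 0 else 0.0) + \
                        (m * f(n, m - 1) if m > 0 else 0.0)

def K0(f):  # K_1^0 + K_2^0: multiplication by (s + t + n + m)
    return lambda n, m: (c + n + m) * f(n, m)

f = lambda n, m: 0.3 ** n * 0.6 ** m
for n in range(5):
    for m in range(5):
        assert abs(L(Kminus(f))(n, m) - Kminus(L(f))(n, m)) < 1e-10
        assert abs(L(K0(f))(n, m) - K0(L(f))(n, m)) < 1e-12
```
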