
Inverse Problems in Models of Resource Distribution

The Journal of Geometric Analysis

Abstract

We continue the study of the problem of modeling the substitution of production factors, motivated by the need for computable mathematical models of economics that could serve as a basis for applied developments. This problem has been studied for several decades, and several connections to complex analysis and geometry have been established. We describe several models of resource distribution and discuss the inverse problems for the generalized Radon transform arising in these models. We give a simple explicit range characterization for a particular case of the generalized Radon transform, and we apply it to show that the most popular production functions are compatible with these models. In addition, we give a necessary condition and a sufficient condition for the solvability of the model identification problem in the form of an appropriate moment problem. These conditions are formulated in terms of rhombic tilings.


References

  1. Agaltsov, A.D.: A characterization theorem for a generalized Radon transform arising in a model of mathematical economics. Funct. Anal. Appl. 49(3), 201–204 (2015)

  2. Agaltsov, A.D.: On the injectivity of the generalized Radon transform arising in a model of mathematical economics. Inverse Probl. 32(11), 115022 (2016)

  3. Alekseev, V.M., Tikhomirov, V.M., Fomin, S.V.: Optimal control. Consultants Bureau, New York (1987)

  4. Aubin, J.P.: Analyse non linéaire et ses motivations économiques. Masson, Paris (1984)

  5. Bateman, H., Erdélyi, A.: Tables of integral transforms, vol. 1. McGraw-Hill, New York (1954)

  6. Bochner, S.: Harmonic analysis and the theory of probability. University of California Press, Berkeley (1955)

  7. Cornwall, R.: A note on using profit functions. Int. Econ. Rev. 14(2), 211–214 (1973)

  8. Danilov, V.I., Karzanov, A.V., Koshevoy, G.A.: Separated set-systems and their geometric models. Russ. Math. Surv. 65(4), 659–740 (2010)

  9. Goodman, J.E., Pollack, R.: Proof of Grünbaum’s conjecture on the stretchability of certain arrangements of pseudolines. J. Comb. Theory Ser. A 29(3), 385–390 (1980)

  10. Henkin, G.M., Shananin, A.A.: Bernstein theorems and Radon transform. Application to the theory of production functions. Transl. Math. Monogr. 81, 189–223 (1990)

  11. Henkin, G.M., Shananin, A.A.: \({\mathbb{C}}^n\)-capacity and multidimensional moment problem. In: Stoll, W. (ed.) Proceedings Symposium on Value Distribution Theory in Several Complex Variables, Ser. Notre Dame Mathematical Lectures, vol. 12, pp. 69–85. University of Notre Dame, Notre Dame (1990)

  12. Henkin, G.M., Shananin, A.A.: The Bernstein Theorems for the Fantappiè Indicatrix and Their Applications to Mathematical Economics. Ser. Lecture Notes in Pure and Applied Mathematics, vol. 132, pp. 221–227 (1991)

  13. Hildenbrand, W.: Short-run production functions based on micro-data. Econometrica 49(5), 1095–1125 (1981)

  14. Houthakker, H.S.: The Pareto distribution and the Cobb–Douglas production function in activity analysis. Rev. Econ. Stud. 23(1), 27–31 (1955–1956)

  15. Johansen, L.: Production functions. North-Holland, Amsterdam (1972)

  16. Karzanov, A.V., Shananin, A.A.: On stable correspondences of finite sets of the Euclidean space and their applications. Ekonomika i Matematicheskie Metody 41(2), 111–112 (2005)

  17. Molchanov, E.G.: Combinatorial properties of polyhedral cone classes in the resource distribution problem. Proc. Moscow Inst. Phys. Technol. 5(3), 67–74 (2013)

  18. Molchanov, E.G.: Rhomboidal tiling’s modifications in the resource distribution problem. Proc. Moscow Inst. Phys. Technol. 5(4), 87–95 (2013)

  19. Petrov, A.A., Pospelov, I.G., Shananin, A.A.: Experience of mathematical modeling for economy. Energoatomizdat, Moscow (1996)

  20. Sato, K.: A two-level constant-elasticity of substitution production function. Rev. Econ. Stud. 34, 201–218 (1967)

  21. Shananin, A.A.: Investigation of a class of production functions arising in a macrodescription of economic systems. USSR Comput. Math. Math. Phys. 24(6), 127–134 (1984)

  22. Shananin, A.A.: Study of a class of profit functions arising in a macro description of economic systems. USSR Comput. Math. Math. Phys. 24(1), 34–42 (1985)

  23. Shananin, A.A.: The generalized model of a pure industry. Matem. Mod. 9(9), 117–127 (1997)

  24. Shananin, A.A.: The investigation of the generalized model of a pure industry. Matem. Mod. 9(10), 73–82 (1997)

  25. Shananin, A.A.: Duality for generalized programming problems and variational principles in models of economic equilibrium. Doklady Akademii Nauk 366(4), 462–464 (1999)

  26. Shananin, A.A.: Non-parametric method for the analysis of industry technological structure. Matem. Mod. 11(9), 116–122 (1999)

  27. Shananin, A.A.: Integrability problem and the generalized non-parametric method for the consumer demand analysis. Proc. Moscow Inst. Phys. Technol. 1(4), 84–98 (2009)

  28. Shor, P.W.: Stretchability of pseudoline arrangements is NP-hard. In: Applied Geometry and Discrete Mathematics: The Victor Klee Festschrift. Ser. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 4, pp. 531–554. AMS, Providence (1991)


Acknowledgements

The present work is supported by the Russian Science Foundation (project number 16-11-10246).

Author information

Corresponding author

Correspondence to A. D. Agaltsov.

Additional information

This article is dedicated to G. M. Henkin.

Appendices

Appendix 1: Proofs of the Results of Sect. 3

Proof of Proposition 3.1

One can show that if a group of industries satisfies (3.7) and \(l = (l_1,\ldots ,l_n)>0\), then the optimization problem (3.3)–(3.6) satisfies Slater’s condition (see [3] for the definition).

The Lagrange function \(L = L(X^0,X^1,\ldots ,X^m,l^1,\ldots ,l^m,p_0,q,s)\) for convex programming problem (3.3)–(3.6) is given by

$$\begin{aligned} \begin{aligned} L&= p_0 F_0(X^0) + \sum _{j=1}^m q_j \biggl ( F_j(X^j,l^j)- \sum _{i=0}^m X^i_j \biggr ) + \sum _{k=1}^n s_k \biggl ( l_k - \sum _{j=1}^m l_k^j \biggr ) \\&= s \cdot l + \bigl ( p_0 F_0(X^0) - q \cdot X^0) + \sum _{j=1}^m \bigl ( q_j F_j(X^j,l^j) - q \cdot X^j - s \cdot l^j \bigr ). \end{aligned} \end{aligned}$$
(7.1)

By the Kuhn–Tucker theorem (see, e.g., [3]), a set of vectors \(\{ \widehat{X}^0, \widehat{X}^1,\ldots ,\widehat{X}^m,\widehat{l}^1,\ldots ,\widehat{l}^m\}\) satisfying (3.4)–(3.6) is a solution of the convex programming problem (3.3)–(3.6) satisfying Slater’s condition if and only if:

  1. (i)

    There exist Lagrange multipliers \(p_0 > 0\), \(q = (q_1,\ldots ,q_m) \ge 0\), \(s = (s_1,\ldots ,s_n) \ge 0\), such that the complementary slackness conditions (3.9), (3.10) are satisfied.

  2. (ii)

    The maximum of Lagrange function L on the set (3.6) is attained at the set of vectors \(\{ \widehat{X}^0, \widehat{X}^1,\ldots ,\widehat{X}^m,\widehat{l}^1,\ldots ,\widehat{l}^m\}\).

It follows from formula (7.1) that the above condition (ii) is equivalent to (3.8), (3.11). Proposition 3.1 is proved. \(\square \)

Proof of Proposition 3.3

Using the Fenchel duality theorem (see, e.g., [4]) we have that

$$\begin{aligned} F^A(l) = \tfrac{1}{p_0} \min \left\{ s \cdot l + \sum _{j=1}^m \Pi _j(q,s) \biggm | q_0(q) \ge p_0, \; q \ge 0, \; s \ge 0 \right\} , \quad p_0>0. \end{aligned}$$
(7.2)

Formulas (3.15), (7.2) imply (3.16).

Using the Fenchel–Moreau theorem (see, e.g., [4]) and formula (3.16), we obtain (3.17). Proposition 3.3 is proved. \(\square \)

Appendix 2: Proofs of the results of Sect. 4

Proof of Proposition 4.1

Necessity. Let \(\Pi (p,p_0)\) be defined by (4.25). It follows from (4.24) that the definition is correct. Put \(P_r(x_1,\ldots ,x_n) = (x_1^{-r},\ldots ,x_n^{-r})\). Then

$$\begin{aligned} \begin{aligned} \tfrac{\partial \Pi }{\partial p_0}(p,p_0)&= \int _{\mathbb R^n_+} \theta \bigl ( p_0 - h(p \circ x) \bigr ) \varphi (x) \, \text {d}x \\&= -r^{-1} \int _{\mathbb R^n_+} \int _0^{+\infty } \theta ( p_0 - h(p \circ x)) t^{n-1}\text {e}^{-t} f(t P_r(x) ) \text {d}t \text {d}x_1^{-r} \ldots \text {d}x_n^{-r} \\&= -r^{-1} \int _{\mathbb R^n_+}\int _0^{+\infty } \theta \bigl (p_0 - h( p \circ P_{1/r}(y) ) \bigr ) t^{n-1} \text {e}^{-t} f(ty) \, \text {d}t \text {d}y \\&= -r^{-1} \int _{\mathbb R^n_+}\int _0^{+\infty } \theta \bigl ( p_0^{-r} - P_r(p) \cdot y \bigr ) t^{n-1} \text {e}^{-t} f(ty) \, \text {d}t \text {d}y \\&= -r^{-1} \int _{\mathbb R^n_+} \int _0^{+\infty } \theta (t - p_0^r P_r(p) \cdot w) \tfrac{\text {e}^{-t}}{t} f(w) \text {d}t \text {d}w \\&= -r^{-1} \int _{\mathbb R^n_+} f(w) \int _{p_0^r P_r(p) \cdot w}^{+\infty } \tfrac{\text {e}^{-t}}{t} \text {d}t \, \text {d}w. \end{aligned} \end{aligned}$$
(8.1)

Using (8.1), we obtain

$$\begin{aligned} \tfrac{\partial ^2 \Pi }{\partial p_0^2}(p,p_0) = p_0^{-1} \int _{\mathbb R^n_+} \text {e}^{-p_0^r P_r(p) \cdot w} f(w) \, \text {d}w. \end{aligned}$$
(8.2)

Formula (4.28) follows from (8.2). Formulas (4.26), (4.27) follow from (4.25), (8.1).

Sufficiency. Let \(\widetilde{\Pi }(p,p_0)\) be a function satisfying (4.26)–(4.28) and let \(\Pi (p,p_0)\) be the function defined by (4.25).

It follows from formulas (4.27), (4.28) for \(\widetilde{\Pi }\) and \(\Pi \) that

$$\begin{aligned} \tfrac{\partial ^2 \widetilde{\Pi }}{\partial p_0^2}(p,p_0) = \tfrac{\partial ^2 \Pi }{\partial p_0^2}(p,p_0), \quad (p,p_0) > 0. \end{aligned}$$
(8.3)

Using formulas (4.27), (4.28) for \(\widetilde{\Pi }\) and \(\Pi \) and formula (8.3), we obtain that \(\widetilde{\Pi }(p,p_0) = \Pi (p,p_0)\), \((p,p_0)>0\). \(\square \)

Demonstration of Example 4.2

It is shown in [14] that \(F_\text {CD}(l_1,l_2)\) is the production function for the resource distribution problem (2.1)–(2.3) with \(\mu \) defined in (2.12). Using formulas (2.7), (2.12), we compute the profit function \(\Pi _\text {CD}(p_1,p_2,p_0)\) corresponding to \(F_\text {CD}(l_1,l_2)\):

$$\begin{aligned} \Pi _\text {CD}(p_1,p_2,p_0) = A \tfrac{B(\alpha _1,\alpha _2)}{(\alpha _1+\alpha _2)(\alpha _1+\alpha _2+1)} p_0^{\alpha _1+\alpha _2+1} p_1^{-\alpha _1} p_2^{-\alpha _2}. \end{aligned}$$
(8.4)

Taking the second derivative of (8.4) with respect to \(p_0\), we get

$$\begin{aligned} \tfrac{\partial ^2 \Pi _\text {CD}}{\partial p_0^2}(p_1,p_2,1)= & {} A B(\alpha _1,\alpha _2) p_1^{-\alpha _1} p_2^{-\alpha _2}, \nonumber \\ \tfrac{\partial ^2 \Pi _\text {CD}}{\partial p_0^2}(p_1^{-\frac{1}{r}},p_2^{-\frac{1}{r}},1)= & {} AB(\alpha _1,\alpha _2) p_1^{\frac{\alpha _1}{r}} p_2^{\frac{\alpha _2}{r}}. \end{aligned}$$
(8.5)

One can see that

$$\begin{aligned} \tfrac{\partial ^2 \Pi _\text {CD}}{\partial p_0^2}(p_1^{-\frac{1}{r}},p_2^{-\frac{1}{r}},1)= & {} \int _{\mathbb R^2_+} \text {e}^{-p \cdot x} H_\text {CD}(x_1,x_2) \text {d}x_1 \text {d}x_2,\nonumber \\ H_\text {CD}(x_1,x_2)= & {} \tfrac{AB(\alpha _1,\alpha _2)}{\Gamma (-\frac{\alpha _1}{r})\Gamma (-\frac{\alpha _2}{r})} x_1^{-\frac{\alpha _1}{r}-1} x_2^{-\frac{\alpha _2}{r}-1}. \end{aligned}$$
(8.6)

Using Proposition 4.1, we obtain that

$$\begin{aligned} \Pi _\text {CD}(p_1,p_2,p_0)= & {} \int _{\mathbb R^2_+} \bigl ( p_0 - ( p_1^{-r} x_1^{-r} + p_2^{-r} x_2^{-r} )^{-\frac{1}{r}} \bigr )_+ \, \varphi _\text {CD}(x_1,x_2) \, \text {d}x_1 \text {d}x_2,\nonumber \\ \varphi _\text {CD}(x_1,x_2)= & {} (-r)A \tfrac{B(\alpha _1,\alpha _2)}{B(-\frac{\alpha _1}{r},-\frac{\alpha _2}{r})} x_1^{\alpha _1-1} x_2^{\alpha _2-1}. \end{aligned}$$
(8.7)

Formula (8.7) implies that \(\Pi _\text {CD}(p_1,p_2,p_0)\) is the profit function for the resource distribution problem (4.3)–(4.5) with (4.29), (4.30). As a corollary, \(F_\text {CD}(l_1,l_2)\) is the production function for the same problem. \(\square \)
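The identity (8.6) behind this computation is a standard Laplace-transform formula and can be spot-checked numerically. The sketch below (parameter values and variable names are ours, chosen arbitrarily with \(\alpha _1, \alpha _2 > 0\) and \(r < 0\); it is an illustration, not part of the paper) compares the integral of \(H_\text {CD}\) against the closed form in the second line of (8.5):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, gamma

# hypothetical parameter values; the identity requires alpha_i > 0 and r < 0
A, a1, a2, r = 1.0, 0.5, 0.7, -1.5
p1, p2 = 2.0, 3.0

# the double integral of exp(-p.x) * H_CD factorizes into two 1-D integrals
I1, _ = quad(lambda x: np.exp(-p1 * x) * x ** (-a1 / r - 1), 0, np.inf)
I2, _ = quad(lambda x: np.exp(-p2 * x) * x ** (-a2 / r - 1), 0, np.inf)
lhs = A * beta(a1, a2) / (gamma(-a1 / r) * gamma(-a2 / r)) * I1 * I2

# closed form from the second line of (8.5)
rhs = A * beta(a1, a2) * p1 ** (a1 / r) * p2 ** (a2 / r)
print(abs(lhs - rhs) < 1e-6)  # True
```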

Demonstration of Example 4.3

The profit function \(\Pi _\text {CES}(p,p_0)\) corresponding to \(F_\text {CES}(l_1,l_2)\) is given by formula (2.21). We rewrite formula (2.21) as follows:

$$\begin{aligned} \Pi _\text {CES}(p_1,p_2,p_0)= & {} \gamma ^{\frac{1}{1-\gamma }}(1-\gamma ) p_0^{\frac{1}{1-\gamma }} \bigl ( \beta _1 p_1^{-\frac{r}{2}} + \beta _2 p_2^{-\frac{r}{2}} \bigr )^{-b}, \end{aligned}$$
(8.8)
$$\begin{aligned} \beta _1= & {} \alpha _1^{\frac{1}{1+\rho }}, \; \beta _2 = \alpha _2^{\frac{1}{1+\rho }}, \; r = -\tfrac{2\rho }{1+\rho }, \; b = \tfrac{\gamma }{1-\gamma } \tfrac{1+\rho }{\rho }.\qquad \end{aligned}$$
(8.9)

Taking the second derivative of (8.8) with respect to \(p_0\), we get

$$\begin{aligned} \tfrac{\partial ^2 \Pi _\text {CES}}{\partial p_0^2}(p_1,p_2,1)= & {} \tfrac{\gamma ^{\frac{1}{1-\gamma }}}{1-\gamma } \bigl ( \beta _1 p_1^{-\frac{r}{2}} + \beta _2 p_2^{-\frac{r}{2}} \bigr )^{-b}, \nonumber \\ \tfrac{\partial ^2 \Pi _\text {CES}}{\partial p_0^2}(p_1^{-\frac{1}{r}},p_2^{-\frac{1}{r}},1)= & {} \tfrac{\gamma ^{\frac{1}{1-\gamma }}}{1-\gamma } \bigl ( \beta _1 \sqrt{p_1} + \beta _2 \sqrt{p_2} \bigr )^{-b}. \end{aligned}$$
(8.10)

Using Lemma 8.2, formulated below, we have that

$$\begin{aligned} \tfrac{\partial ^2 \Pi _\text {CES}}{\partial p_0^2}(p_1^{-\frac{1}{r}},p_2^{-\frac{1}{r}},1)= & {} \int _{\mathbb R^2_+} \text {e}^{-p \cdot x} H_\text {CES}(x_1,x_2) \, \text {d}x_1 \text {d}x_2,\nonumber \\ H_\text {CES}(x_1,x_2)= & {} \tfrac{\gamma ^{\frac{1}{1-\gamma }}}{1-\gamma } \tfrac{2^{b-1}}{\pi } \beta _1 \beta _2 \tfrac{\Gamma (\frac{b}{2} +1)}{\Gamma (b)} (x_1 x_2)^{-\frac{3}{2}} \bigl ( \tfrac{\beta _1^2}{x_1} + \tfrac{\beta _2^2}{x_2} \bigr )^{-\frac{b}{2} - 1}.\qquad \qquad \end{aligned}$$
(8.11)

Using Proposition 4.1, we obtain that

$$\begin{aligned} \Pi _\text {CES}(p_1,p_2,p_0)= & {} \int _{\mathbb R^n_+} \bigl ( p_0 - ( p_1^{-r} x_1^{-r} + p_2^{-r} x_2^{-r} )^{-\frac{1}{r}} \bigr )_+ \varphi _\text {CES}(x_1,x_2) \, \text {d}x_1 \text {d}x_2, \nonumber \\ \varphi _\text {CES}(x_1,x_2)= & {} \tfrac{\gamma ^{\frac{1}{1-\gamma }}}{1-\gamma } \tfrac{2^{b-1}}{\pi } \beta _1 \beta _2 \tfrac{\Gamma (\frac{b}{2} +1)\Gamma (\frac{b}{2})}{\Gamma (b)} (x_1 x_2)^{\frac{r}{2}-1} \bigl ( \beta _1^2 x_1^r + \beta _2^2 x_2^r \bigr )^{-\frac{b}{2} - 1}.\nonumber \\ \end{aligned}$$
(8.12)

Formula (8.12) and definitions (8.9) imply that \(\Pi _\text {CES}(p_1,p_2,p_0)\) is the profit function for the resource distribution problem (4.3)–(4.5) with (4.31), (4.32). As a corollary, \(F_\text {CES}(l_1,l_2)\) is the production function for the same problem. \(\square \)

Lemma 8.1

Let f(s) be a function such that \(\int _0^{+\infty } \text {e}^{-As} |f(s)| \, \text {d}s < \infty \) for any \(A>0\). Then

$$\begin{aligned} \int _0^{+\infty } \text {e}^{-s(\sqrt{p_1} + \sqrt{p_2})} f(s) \, \text {d}s= & {} \int _{\mathbb R^2_+} \text {e}^{-p \cdot x} H(x) \, \text {d}x, \quad p = (p_1,p_2)>0, \nonumber \\ H(x_1,x_2)= & {} \tfrac{1}{8\pi } (x_1 x_2)^{-\frac{3}{2}} G( \tfrac{1}{4x_1} + \tfrac{1}{4x_2} ),\nonumber \\ G(s)= & {} \int _0^{+\infty } \text {e}^{-st} \sqrt{t} f(\sqrt{t}) \, \text {d}t. \end{aligned}$$
(8.13)

Proof

The following formula is well known, see, e.g., [5]:

$$\begin{aligned} \frac{1}{4\pi } \int _{\mathbb R^2_+} \text {e}^{-p\cdot x} \frac{s_1 s_2}{(x_1 x_2)^{\frac{3}{2}}} \text {e}^{-\frac{s_1^2}{4x_1}-\frac{s_2^2}{4x_2}} \text {d}x_1 \text {d}x_2 = \text {e}^{-s_1 \sqrt{p_1} - s_2 \sqrt{p_2}}, \end{aligned}$$

where \(s = (s_1,s_2) > 0\), \(p = (p_1,p_2) > 0\). We set \(s_1 = s_2 = s\), multiply this equation by \(f(s)\), and integrate over \(s \in [0,+\infty )\):

$$\begin{aligned}&\frac{1}{4\pi } \int \frac{\text {e}^{-p \cdot x}}{(x_1 x_2)^{\frac{3}{2}}} \int _0^{+\infty } s^2 \text {e}^{-s^2\bigl ( \frac{1}{4x_1}+\frac{1}{4x_2} \bigr )} f(s) \text {d}s \, \text {d}x_1 \text {d}x_2 \nonumber \\&\quad = \int _0^{+\infty } \text {e}^{-s(\sqrt{p_1}+\sqrt{p_2})} f(s) \, \text {d}s. \end{aligned}$$

Making the change of variable \(s^2 = t\) in the inner integral on the left, we get (8.13). \(\square \)

Lemma 8.2

Let \(\beta _1\), \(\beta _2\), \(b > 0\). The following formula is valid:

$$\begin{aligned}&( \beta _1 \sqrt{p_1} + \beta _2 \sqrt{p_2})^{-b} = \int _{\mathbb R^2_+} \text {e}^{-p\cdot x} \widetilde{H}(x_1,x_2) \mathrm{d}x_1 \mathrm{d}x_2, \quad p = (p_1,p_2)>0,\nonumber \\&\quad \widetilde{H}(x_1,x_2) = \tfrac{2^{b-1}}{\pi } \beta _1 \beta _2 \tfrac{\Gamma (\frac{b}{2} +1)}{\Gamma (b)} (x_1 x_2)^{-\frac{3}{2}} \bigl ( \tfrac{\beta _1^2}{x_1} + \tfrac{\beta _2^2}{x_2} \bigr )^{-\frac{b}{2} - 1}, \quad (x_1,x_2) > 0,\qquad \qquad \end{aligned}$$
(8.14)

where \(\Gamma \) is the gamma function.

Proof

Consider the case \(\beta _1 = \beta _2 = 1\). Put \(f(s) = \frac{s^{b-1}}{\Gamma (b)}\). We have

$$\begin{aligned} \int _0^{+\infty } \text {e}^{-s(\sqrt{p_1}+\sqrt{p_2})} f(s) \, \text {d}s = (\sqrt{p_1}+\sqrt{p_2} )^{-b}. \end{aligned}$$
(8.15)

We define functions G(s) and \(H(x_1,x_2)\) according to (8.13). Then

$$\begin{aligned} G(s)= & {} \int _0^{+\infty } \text {e}^{-st} \frac{t^{\frac{b}{2}}}{\Gamma (b)} \text {d}t = s^{-\frac{b}{2} - 1} \tfrac{\Gamma (\frac{b}{2} + 1)}{\Gamma (b)}, \nonumber \\ H(x_1,x_2)= & {} \tfrac{2^{b-1}}{\pi } \tfrac{\Gamma (\frac{b}{2} + 1)}{\Gamma (b)} (x_1 x_2)^{-\frac{3}{2}} \bigl ( \tfrac{1}{x_1} + \tfrac{1}{x_2} \bigr )^{-\frac{b}{2} - 1}. \end{aligned}$$
(8.16)

Using (8.13), (8.15), (8.16), we get (8.14) with \(\widetilde{H} = H\) for \(\beta _1 = \beta _2 = 1\).

Besides, recall the scaling property of the Laplace transform:

$$\begin{aligned} \int _{\mathbb R^2_+} \text {e}^{-\beta _1^2 p_1 x_1 - \beta _2^2 p_2 x_2} H(x) \text {d}x = \frac{1}{\beta _1^2 \beta _2^2} \int _{\mathbb R^2_+} \text {e}^{-p\cdot x} H(\tfrac{x_1}{\beta _1^2},\tfrac{x_2}{\beta _2^2})\,\text {d}x. \end{aligned}$$

It follows from this scaling property that for arbitrary \(\beta _1\), \(\beta _2\) we have (8.14) with \(\widetilde{H}(x_1,x_2) = (\beta _1 \beta _2)^{-2} H(\tfrac{x_1}{\beta _1^2},\tfrac{x_2}{\beta _2^2})\).

Lemma 8.2 is proved. \(\square \)
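Formula (8.14) can also be verified numerically. A minimal sketch (the parameter values below are chosen arbitrarily for the spot check and are not from the paper):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

# arbitrary parameters for a spot check of (8.14); requires beta_i, b > 0
b1, b2, b = 1.5, 0.5, 2.0
p1, p2 = 1.0, 1.0

def H_tilde(x1, x2):
    # the density from the statement of Lemma 8.2
    return (2 ** (b - 1) / np.pi * b1 * b2 * gamma(b / 2 + 1) / gamma(b)
            * (x1 * x2) ** (-1.5) * (b1 ** 2 / x1 + b2 ** 2 / x2) ** (-b / 2 - 1))

# left-hand side: two-dimensional Laplace transform of H_tilde
val, _ = dblquad(lambda x2, x1: np.exp(-p1 * x1 - p2 * x2) * H_tilde(x1, x2),
                 0, np.inf, 0, np.inf)
target = (b1 * np.sqrt(p1) + b2 * np.sqrt(p2)) ** (-b)
print(abs(val - target) < 1e-4)  # True
```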

Appendix 3: Proofs of the results of Sect. 5

Proof of Proposition 5.2

It follows from (5.9) that

$$\begin{aligned} \sum _{t \in \Omega _i} w_G(t)= & {} N_i(G), \quad G \in \Lambda _h ( \widehat{p} ) \quad (i =1,2),\nonumber \\ N_i(G)= & {} \bigl | \bigl \{ t \in \Omega _i \mid G \subseteq \{ x \mid p_0(t) > h( p(t) \circ x ) \} \bigr \} \bigr |, \end{aligned}$$
(9.1)

where |S| denotes the number of elements of set S.

Suppose that Problem 5.1 is solvable and \(\mu \) is a solution. Then using (5.3) we get

$$\begin{aligned} \sum _{t \in \Omega _i} y(t) = \sum _{G \in \Lambda _h ( \widehat{p} )} \mu (G) N_i(G) \quad (i = 1,2), \end{aligned}$$
(9.2)

where \(N_i(G)\) is defined in (9.1).

Formulas (5.10), (9.1), (9.2) and non-negativity of measure \(\mu \) imply (5.11). \(\square \)

Proof of Proposition 5.3

Consider an arbitrary face of cone \(\Gamma _h(\widehat{p}) = \Gamma _h\{ \widehat{p}(t) \mid t = 1,\ldots , T\}\). This face is the linear span of some linearly independent spectra \(Z(G_1)\), ..., \(Z(G_{T-1})\) of \(G_1\), ..., \(G_{T-1} \in \Lambda _h ( \widehat{p} )\). Thus, it can be described by the following equation for \(Z = (Z_1,\ldots ,Z_T)\):

$$\begin{aligned} \det \begin{pmatrix} Z \\ Z(G_1) \\ \dots \\ Z(G_{T-1}) \end{pmatrix} = 0. \end{aligned}$$
(9.3)

Since the coordinates of \(Z(G_1)\), ..., \(Z(G_{T-1})\) are integers, the coefficients of Eq. (9.3) are also integers. Besides, this vector of coefficients is a normal vector to the face described by (9.3). \(\square \)

Proof of Proposition 5.4. Sufficiency

Suppose that \(\Gamma _h(\widehat{p}) = \Gamma _h\{ \widehat{p}(t) \mid t = 1,\ldots ,T \}\) is discretely convex. Let \(y = ( y(1), \ldots , y(T) )\) be a vector satisfying the necessary condition of Proposition 5.2. In view of Proposition 5.1, it is sufficient to show that \(y \in \Gamma _h (\widehat{p} )\).

In turn, in order to show that \(y \in \Gamma _h ( \widehat{p} )\), it is sufficient to check that for any inner normal \(\nu = (\nu _1,\ldots ,\nu _T)\) to \(\Gamma _h(\widehat{p})\), whose coordinates belong to \(\{-1,0,1\}\), we have \(y \cdot \nu \ge 0\).

Put

$$\begin{aligned} \Omega _1 = \{ j \mid \nu _j = 1\}, \quad \Omega _2 = \{j \mid \nu _j = -1 \}. \end{aligned}$$
(9.4)

Using that \(Z(G) \in \Gamma _h(\widehat{p})\), \(G \in \Lambda _h(\widehat{p})\), we have that

$$\begin{aligned} \sum _{t \in \Omega _1} w_G(t) - \sum _{t \in \Omega _2} w_G(t) = Z(G) \cdot \nu \ge 0, \quad G \in \Lambda _h ( \widehat{p} ). \end{aligned}$$
(9.5)

By hypothesis, y satisfies the necessary condition of Proposition 5.2. Thus, (9.5) implies

$$\begin{aligned} y \cdot \nu = \sum _{t \in \Omega _1} y(t) - \sum _{t \in \Omega _2} y(t) \ge 0. \end{aligned}$$
(9.6)

Hence, \(y \in \Gamma _h ( \widehat{p} )\).

Necessity. For a given pair \((\Omega _1,\Omega _2)\) of subsets \(\Omega _1\), \(\Omega _2 \subset \{1,\ldots ,T\}\) such that \(\Omega _1 \cap \Omega _2 = \varnothing \), \(\Omega _1 \cup \Omega _2 \ne \varnothing \), we set

$$\begin{aligned} \nu _{\Omega _1,\Omega _2}= & {} \bigl ( \nu _{\Omega _1,\Omega _2}(1), \ldots , \nu _{\Omega _1,\Omega _2}(T) \bigr ),\nonumber \\ \nu _{\Omega _1,\Omega _2}(t)= & {} {\left\{ \begin{array}{ll} 1, &{} t \in \Omega _1, \\ -1, &{} t \in \Omega _2, \\ 0, &{} t \not \in \Omega _1 \cup \Omega _2. \end{array}\right. } \end{aligned}$$
(9.7)

Let

$$\begin{aligned} M= & {} \{ (\Omega _1,\Omega _2) \mid \Omega _1 \cap \Omega _2 = \varnothing , \Omega _1 \cup \Omega _2 \ne \varnothing , \; (5.10)\text { holds} \}, \nonumber \nonumber \\ N= & {} \bigl \{ \nu _{\Omega _1,\Omega _2} \mid (\Omega _1,\Omega _2) \in M \bigr \},\nonumber \\ \Gamma= & {} \{ x \mid \forall \nu \in N \;\; x \cdot \nu \ge 0 \}. \end{aligned}$$
(9.8)

Using these notations, Proposition 5.2 can be reformulated as \(\Gamma _h(\widehat{p}) \subseteq \Gamma \). Suppose that the necessary condition of Proposition 5.2 is also sufficient. Then

$$\begin{aligned} \Gamma _h (\widehat{p}) = \Gamma . \end{aligned}$$
(9.9)

By construction, the cone \(\Gamma _h(\widehat{p})\) is spanned by vectors with coordinates in \(\{-1,0,1\}\). It also follows from (9.7), (9.8), (9.9) that each face of \(\Gamma _h ( \widehat{p} )\) admits a non-zero normal vector with coordinates in \(\{-1,0,1\}\). It follows that \(\Gamma _h (\widehat{p} )\) is discretely convex. \(\square \)
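Computationally, the condition \(y \in \Gamma _h ( \widehat{p} )\) at the heart of this proof is a cone-membership test: \(y\) must be a non-negative combination of the spectra \(Z(G)\). Assuming the generators are available as explicit integer vectors, this can be checked via linear-programming feasibility; the function name and toy generators below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(y, generators):
    # test whether y = sum_G mu(G) * Z(G) for some mu >= 0
    # via feasibility of a linear program with zero objective
    A = np.array(generators, dtype=float).T  # columns are the generators Z(G)
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=np.asarray(y, dtype=float),
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.status == 0  # status 0 means a feasible mu was found

# toy spectra: [1, 3, 2] = Z1 + 2*Z2 lies in the cone; [1, 0, 2] does not
Z1, Z2 = [1, 1, 0], [0, 1, 1]
print(in_cone([1, 3, 2], [Z1, Z2]))  # True
print(in_cone([1, 0, 2], [Z1, Z2]))  # False
```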

Appendix 4: Proofs of the results of Sect. 6

Proof of Proposition 6.2

We prove that using moves (6.23a), (6.23b) one can transform \(\omega _1\) and \(\omega _2\) to a fixed word depending only on \(\Sigma _\rho = \Sigma \{ \tilde{p}(t,\rho ) \mid t = 1,\ldots ,T \} \in S_T\). One can see that this is true for \(T=2\). For arbitrary \(T \ge 3\) we proceed by induction.

Let \(t^* \in \{1,\ldots ,T\}\) be such that

$$\begin{aligned} \widetilde{p}_2(t^*,\rho )=\max _{t=1,\ldots ,T}\left\{ \widetilde{p}_2(t,\rho )\right\} . \end{aligned}$$
(10.1)

We continuously increase \(\widetilde{p}_2(t^*,\rho )\), leaving the parameters \(\widetilde{p}_1(t,\rho )\), \(t = 1\), ..., T, and \(\widetilde{p}_2(t,\rho )\), \(t = 1\), ..., \(t^*-1\), \(t^*+1\), ..., T, unchanged. This leads to transformations of the word (6.22) associated with the partition of \(\mathbb R^2_+\) by the lines of (6.13).

As we increase \(\alpha \) from 0 to \(\tfrac{\pi }{2}\), the ray \(R_\alpha \) consecutively meets the intersection points (6.19) of the pairs of lines of (6.13). Variations of \(\widetilde{p}_2(t^*,\rho )\) can change the order in which \(R_\alpha \) meets these points. This situation corresponds to an application of move (6.23a) to the word (6.22) associated with the partition.

Besides, if one increases \(\widetilde{p}_2(t^*,\rho )\), the \(t^*\)th line of (6.13) can meet the intersection point of a pair of other lines of (6.13). This situation corresponds to an application of move (6.23b) to the word (6.22) associated with the partition.

One can see that the \(t^*\)th line of (6.13) intersects the lines with numbers \(t = t^*+1\), ..., T and does not intersect the other lines of (6.13). Let \(\alpha _{tt^*}\), \(t = t^*+1\), ..., T, be the values of \(\alpha \) for which \(R_\alpha \) meets the intersection point of the \(t^*\)th line with the tth line. Note that

$$\begin{aligned} \alpha _{tt^*} \rightarrow 0, \quad \text {as }\tilde{p}_2(t^*,\rho )\rightarrow +\infty , \quad t = t^*+1,\dots ,T. \end{aligned}$$
(10.2)

Choose \(\tilde{p}_2(t^*,\rho )\) in such a way that \(\alpha _{tt^*} < \alpha ^*\), \(t=t^*+1\), ..., T, where \(\alpha ^*\) is the minimal angle for which the ray \(R_\alpha \) meets an intersection point of a pair of lines of (6.13) with numbers \(t \ne t^*\).

As \(\alpha \) increases from 0 to \(\tfrac{\pi }{2}\), the ray \(R_\alpha \) consecutively meets the intersection points of the pairs of lines of (6.13) with numbers \((t^*,t^*+1)\), ..., \((t^*,T)\) and then the intersection points of the lines of (6.13) different from the \(t^*\)th line. As a corollary, the word (6.22) corresponding to this partition of \(\mathbb R^2_+\) by the lines of (6.13) starts as

$$\begin{aligned} \sigma _{t^*}\sigma _{t^*+1}\ldots \sigma _{T-2}\sigma _{T-1}. \end{aligned}$$
(10.3)

Note that this subword is completely determined by \(\Sigma _\rho = \Sigma \{ \widetilde{p}(t,\rho ) \mid t = 1,\ldots ,T\} \in S_T\).

Also note that if \(\alpha >\alpha ^*\) then \(\pi _T(\alpha )=t^*\) and the \(t^*\)th line does not appear in intersections of \(R_\alpha \) with the pairs of lines of (6.13). Thus, the symbol \(\sigma _{T-1}\) does not appear in the remaining part of the word (6.22).

We then remove the \(t^*\)th line from the family (6.13), renumber the remaining lines, and define the new permutation \(\Sigma ' \in S_{T-1}\) according to (6.31). The new permutation \(\Sigma '\) is uniquely determined by \(\Sigma _\rho \).

The word (6.22) corresponding to the new partition is obtained from the old word by removing the beginning (10.3).

It remains to apply the induction hypothesis to the new partition and corresponding formal word (6.22).

Proposition 6.2 is proved. \(\square \)
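The proof above manipulates words in the generators \(\sigma _i\) via the moves (6.23a), (6.23b). Assuming these are the usual commutation move \(\sigma _i\sigma _j = \sigma _j\sigma _i\) for \(|i-j|\ge 2\) and the braid move \(\sigma _i\sigma _{i+1}\sigma _i = \sigma _{i+1}\sigma _i\sigma _{i+1}\) familiar from wiring diagrams (our reading of the moves defined in Sect. 6), connectivity of two words under such moves can be decided by a breadth-first search over the finite set of words of the given length; the helper names and toy words are illustrative:

```python
from collections import deque

def neighbors(word):
    # assumed moves: (a) commutation s_i s_j = s_j s_i for |i - j| >= 2,
    # (b) braid s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}
    w = list(word)
    for k in range(len(w) - 1):
        i, j = w[k], w[k + 1]
        if abs(i - j) >= 2:
            yield tuple(w[:k] + [j, i] + w[k + 2:])
    for k in range(len(w) - 2):
        i, j, l = w[k], w[k + 1], w[k + 2]
        if i == l and abs(i - j) == 1:
            yield tuple(w[:k] + [j, i, j] + w[k + 3:])

def connected(w1, w2):
    # breadth-first search from w1 over words reachable by the two moves
    w1, w2 = tuple(w1), tuple(w2)
    seen, queue = {w1}, deque([w1])
    while queue:
        w = queue.popleft()
        if w == w2:
            return True
        for n in neighbors(w):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return False

print(connected([1, 2, 1], [2, 1, 2]))  # True (one braid move)
print(connected([1, 3], [3, 1]))        # True (one commutation move)
print(connected([1, 2], [2, 1]))        # False (no move applies)
```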

Proof of Proposition 6.3

Let \(\alpha \in (0,\tfrac{\pi }{2})\) be such that \(\pi (\alpha )=\lambda \), where \(\pi (\alpha )\) is defined in (6.20). Let \(G_0,\ldots ,G_T\) be the domains of partition \(\Lambda \{ \widetilde{p}(t,\rho ) \mid t = 1,\ldots ,T \}\) consecutively traversed by the point \((z_1,z_2(z_1)) \in R_\alpha \) as \(z_1\) goes from \(+\infty \) to 0.

Let \(\zeta _j \in \mathbb R^2_+\), \(r_j > 0\), \(j = 1\), ..., T, be such that

$$\begin{aligned} B_{r_j}(\zeta _j)= \bigl \{ x\in \mathbb R^2 \mid | x - \zeta _j | < r_j \bigr \} \subset G_j \quad (j=1,\ldots ,T). \end{aligned}$$
(10.4)

Put

$$\begin{aligned} f(x)= & {} \frac{y(\lambda (T))}{\pi r_T^2}, \quad x \in B_{r_T}(\zeta _T), \nonumber \\ f(x)= & {} \frac{ y(\lambda (T-j))-y(\lambda (T-j+1))}{\pi r_{T-j}^2}, \quad x\in B_{r_{T-j}}(\zeta _{T-j}), \; 1 \le j \le T-1, \nonumber \\ f(x)= & {} 0, \quad x \not \in \bigcup _{j=1}^T B_{r_j}(\zeta _j). \end{aligned}$$
(10.5)

Then \(\mu (\text {d}x) = f(x) \text {d}x\) is a non-negative absolutely continuous measure which solves the moment problem (6.1). \(\square \)

Proof of Proposition 6.4

We begin by showing that if \(Sn(\lambda )\) does not belong to the closed region bounded by \(Sn({{\mathrm{id}}}_{S_T})\) and \(Sn(\Sigma _\rho )\), then

$$\begin{aligned}&\text {there exist }t_1,\,t_2 \in \{1,\ldots ,T\}\text { such that}\nonumber \\&\quad t_1< t_2, \quad \widetilde{p}_2(t_1,\rho )< \widetilde{p}_2(t_2,\rho ), \quad y(t_1)<y(t_2). \end{aligned}$$
(10.6)

Suppose that (10.6) is not true. Let \(Y_t = (y(t),y(t))\), \(t = 1\), ..., T. We join \(Y_t\) by line segments to \((\tfrac{1}{\widetilde{p}_1(t,\rho )},0)\) and \((0,\tfrac{1}{\widetilde{p}_2(t,\rho )})\). We call this pair of segments a wire.

If (10.6) does not hold, each pair of wires has at most one intersection point, and thus the set of these wires is a strict wiring diagram. The boundaries of the closed domain bounded by this diagram coincide with boundaries of the closed domain bounded by the rhombic tiling corresponding to partition \(\Lambda \{ \widetilde{p}(t,\rho ) \mid t = 1,\ldots ,T \}\). This closed domain contains \(Sn(\lambda )\), and this contradicts the hypothesis.

Next, suppose that \(\mu \) is a solution of the moment problem (6.1), and suppose that \(Sn(\lambda )\) is not contained in the closed region bounded by \(Sn({{\mathrm{id}}}_{S_T})\) and \(Sn(\Sigma _\rho )\). As shown above, we have (10.6).

The condition \(\widetilde{p}_2(t_1,\rho ) < \widetilde{p}_2(t_2,\rho )\) implies that

$$\begin{aligned} \bigl \{ (z_1,z_2)\in \mathbb R^2_+ \mid \widetilde{p}_1(t_2,\rho )z_1 + \widetilde{p}_2(t_2,\rho )z_2< 1 \bigr \} \qquad \qquad \qquad \nonumber \\ \qquad \qquad \qquad \subset \bigl \{ (z_1,z_2) \in \mathbb R^2_+ \mid \widetilde{p}_1(t_1,\rho )z_1 + \widetilde{p}_2(t_1,\rho )z_2 < 1 \}. \end{aligned}$$
(10.7)

Applying Proposition 5.2 with \(\Omega _1 = \{t_1\}\), \(\Omega _2 = \{t_2\}\) and using (10.7), we have that \(y(t_2) \le y(t_1)\). This contradicts (10.6). Thus, the moment problem (6.1) is not solvable.

Proposition 6.4 is proved. \(\square \)


About this article


Cite this article

Agaltsov, A.D., Molchanov, E.G. & Shananin, A.A. Inverse Problems in Models of Resource Distribution. J Geom Anal 28, 726–765 (2018). https://doi.org/10.1007/s12220-017-9840-1

