
Bivariate densities in Bayes spaces: orthogonal decomposition and spline representation

Regular Article · Published in Statistical Papers

Abstract

A new orthogonal decomposition for bivariate probability densities embedded in Bayes Hilbert spaces is derived. It allows a density to be decomposed into independent and interactive parts, the former being built as the product of revised definitions of marginal densities and the latter capturing the dependence between the two random variables under study. The developed framework opens new perspectives for dependence modelling (e.g., through copulas) and for the analysis of datasets of bivariate densities from a Functional Data Analysis perspective. A spline representation of bivariate densities is also proposed, providing a computational cornerstone for the developed theory.


Figures 1–6 appear in the full article.


References

  • Bigot J, Gouet R, Klein T, López A (2019) Geodesic PCA in the Wasserstein space by convex PCA. Ann Inst Henri Poincaré Probab Stat 53(1):1–26

  • de Boor C (1978) A practical guide to splines. Springer, New York

  • Delicado P (2011) Dimensionality reduction when data are density functions. Comput Stat Data Anal 55:401–420

  • Dierckx P (1993) Curve and surface fitting with splines. Oxford University Press, New York

  • Egozcue JJ, Diaz-Barrero JL, Pawlowsky-Glahn V (2008) Compositional analysis of bivariate discrete probabilities. In: Proceedings of CODAWORK 08

  • Egozcue JJ, Pawlowsky-Glahn V (2016) Changing the reference measure in the simplex and its weighting effects. Aust J Stat 45(4):25–44

  • Egozcue JJ, Pawlowsky-Glahn V, Templ M, Hron K (2015) Independence in contingency tables using simplicial geometry. Commun Stat Theory Methods 44:3978–3996

  • Freedman D, Lane D (1983) A nonstochastic interpretation of reported significance levels. J Bus Econ Stat 1(4):292–298

  • Gába A, Přidalová M (2014) Age-related changes in body composition in a sample of Czech women aged 18–89 years: a cross-sectional study. Eur J Nutr 53(1):167–176

  • Gába A, Přidalová M (2016) Diagnostic performance of body mass index to identify adiposity in women. Eur J Clin Nutr 70:898–903

  • Guégan D, Iacopini M (2019) Nonparametric forecasting of multivariate probability density functions. arXiv preprint arXiv:1803.06823v1

  • Hron K, Menafoglio A, Templ M, Hrůzová K, Filzmoser P (2016) Simplicial principal component analysis for density functions in Bayes spaces. Comput Stat Data Anal 94:330–350

  • Kokoszka P, Miao H, Petersen A, Shang HL (2019) Forecasting of density functions with an application to cross-sectional and intraday returns. Int J Forecast 35(4):1304–1317

  • Kwiatkowski D, Phillips PCB, Schmidt P, Shin Y (1992) Testing the null hypothesis of stationarity against the alternative of a unit root. J Econom 54:159–178

  • Machalová J (2002a) Optimal interpolatory splines using B-spline representation. Acta Univ Palacki Olomuc Fac Rer Nat Math 41:105–118

  • Machalová J (2002b) Optimal interpolatory and optimal smoothing spline. J Electr Eng 53(12/s):79–82

  • Machalová J, Hron K, Monti GS (2016) Preprocessing of centred logratio transformed density functions using smoothing splines. J Appl Stat 43(8):1419–1435

  • Machalová J, Talská R, Hron K, Gába A (2020) Compositional splines for representation of density functions. Comput Stat. https://doi.org/10.1007/s00180-020-01042-7

  • Martín-Fernández JA, Hron K, Templ M, Filzmoser P, Palarea-Albaladejo J (2015) Bayesian-multiplicative treatment of count zeros in compositional data sets. Stat Model 15(2):134–158

  • Menafoglio A, Guadagnini A, Secchi P (2014) A kriging approach based on Aitchison geometry for the characterization of particle-size curves in heterogeneous aquifers. Stoch Environ Res Risk Assess 28(7):1835–1851

  • Menafoglio A, Grasso M, Secchi P, Colosimo BM (2016) A class-kriging predictor for functional compositions with application to particle-size curves in heterogeneous aquifers. Math Geosci 48(4):463–485

  • Menafoglio A, Grasso M, Secchi P, Colosimo BM (2018) Monitoring of probability density functions via simplicial functional PCA with application to image data. Technometrics 60(4):497–510

  • Menafoglio A, Gaetani G, Secchi P (2018) Random domain decompositions for object-oriented kriging over complex domains. Stoch Environ Res Risk Assess

  • Nelsen RB (2006) An introduction to copulas. Springer, New York

  • Nerini D, Ghattas B (2007) Classifying densities using functional regression trees: applications in oceanology. Comput Stat Data Anal 51(10):4984–4993

  • Panaretos VM, Zemel Y (2019) Statistical aspects of Wasserstein distances. Annu Rev Stat Appl 6(1):405–431

  • Pawlowsky-Glahn V, Egozcue JJ, Tolosana-Delgado R (2015) Modeling and analysis of compositional data. Wiley, Chichester

  • Petersen A, Müller HG (2016) Functional data analysis for density functions by transformation to a Hilbert space. Ann Stat 44(1):183–218

  • Petersen A, Xi L, Divani AA (2019) Wasserstein F-tests and confidence bands for the Fréchet regression of density response curves. arXiv preprint arXiv:1910.1341

  • Pini A, Stamm A, Vantini S (2018) Hotelling’s T2 in functional Hilbert spaces. J Multivar Anal 167:284–305

  • Ramsay J, Silverman BW (2005) Functional data analysis. Springer, New York

  • Schumaker L (2007) Spline functions: basic theory. Cambridge University Press, Cambridge

  • Seo WK, Beare BK (2019) Cointegrated linear processes in Bayes Hilbert space. Stat Probab Lett 147:90–95

  • Sklar A (1959) Fonctions de répartition à n dimensions et leurs marges. Publ Inst Stat Univ Paris 8:229–231

  • Srivastava A, Jermyn I, Joshi S (2007) Riemannian analysis of probability density functions with applications in vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/CVPR.2007.383188

  • Talská R, Menafoglio A, Machalová J, Hron K, Fišerová E (2018) Compositional regression with functional response. Comput Stat Data Anal 123:66–85

  • Talská R, Menafoglio A, Hron K, Egozcue JJ, Palarea-Albaladejo J (2020) Weighting the domain of probability densities in functional data analysis. Stat. https://doi.org/10.1002/sta4.283

  • Tran HD, Pham UH, Ly S, Vo-Duy T (2015) A new measure of monotone dependence by using Sobolev norms for copula. In: Huynh V-N, Inuiguchi M, Demoeux T (eds) Integrated uncertainty in knowledge modelling and decision making. Springer, Cham, pp 126–137

  • van den Boogaart KG, Egozcue JJ, Pawlowsky-Glahn V (2010) Bayes linear spaces. Stat Oper Res Trans 34(2):201–222

  • van den Boogaart KG, Egozcue JJ, Pawlowsky-Glahn V (2014) Bayes Hilbert spaces. Aust NZ J Stat 54(2):171–194

  • WHO (2020) Adolescent health. https://www.who.int/southeastasia/health-topics/adolescent-health. Accessed 27 Nov 2020

  • Yule GU (1912) On the methods of measuring association between two attributes. J R Stat Soc 75(6):579–642


Acknowledgements

The authors were supported by the Czech Science Foundation (GAČR), grant GA22-15684L.

Author information


Corresponding author

Correspondence to Karel Hron.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Spline representation of univariate clr transformed densities

In this appendix, the terminology and basic results for the spline representation of clr transformed univariate densities, viewed as \(L^2\) functions with zero integral, are recalled. Let a sequence of knots \(\Delta \lambda \, := \, \left\{ \lambda _i \right\} _{i=0}^{g+1}\), \(\lambda _{0}=a<\lambda _{1}<\ldots<\lambda _{g}<b=\lambda _{g+1}\), be given. The symbol \({{{\mathcal {S}}}}_{k}^{\Delta \lambda }[a,b]\) denotes the vector space of polynomial splines of degree \(k>0\), defined on the finite interval \([a,b]\) with the sequence of knots \(\Delta \lambda \). It is known that \(\dim \left( \mathcal{S}_{k}^{\Delta \lambda }[a,b]\right) =g+k+1\). Then every spline \(s_{k}(x)\in {{{\mathcal {S}}}}_{k}^{\Delta \lambda }[a,b]\) has a unique representation

$$\begin{aligned} s_{k}\left( x\right) =\sum _{i=-k}^{g}b_{i}B_{i}^{k+1}\left( x\right) . \end{aligned}$$

For the generalization of splines to the bivariate density case, the following theorem, published in Talská et al. (2018), is of paramount importance.

Theorem 10

For a spline \(s_{k}(x)\in \mathcal{S}_{k}^{\Delta \lambda }[a,b]\), \(s_{k}\left( x\right) =\sum \limits _{i=-k}^{g}b_{i}B_{i}^{k+1}\left( x\right) \), the condition \(\int \limits _{a}^{b}s_{k}(x)\,\text{ d }x=0\) is fulfilled if and only if \(\sum \limits _{i=-k}^{g}\;b_i\left( \lambda _{i+k+1}-\lambda _i\right) \;=\;0.\)

Proof

From spline theory, it is known that \(\int s_k(x) \, \text{ d }x \, = \, s_{k+1}(x)\). Writing \(s_{k}(x) = \sum \limits _{i=-k}^{g}b_{i}B_{i}^{k+1}\left( x\right) \) and \(s_{k+1}(x) = \sum \limits _{i=-k-1}^{g} c_{i} B_{i}^{k+2}\left( x\right) \), the relationship between their B-spline coefficients is known to be

$$\begin{aligned} b_i \; = \; (k+1) \dfrac{c_i-c_{i-1}}{\lambda _{i+k+1}-\lambda _i}, \quad \forall i=-k,\ldots ,g. \end{aligned}$$

Thus the coefficients \(c_i\) can be expressed as

$$\begin{aligned} c_i \; = \; c_{i-1}+ \dfrac{b_i}{d_i}, \quad \forall i=-k,\ldots ,g \end{aligned}$$

with \(d_i=\dfrac{k+1}{\lambda _{i+k+1}-\lambda _i}\), which means that

$$\begin{aligned} c_g \, = \, \dfrac{b_g}{d_g} \, + \, \cdots \, + \, \dfrac{b_{-k}}{d_{-k}} \, + \, c_{-k-1}. \end{aligned}$$

Owing to the coincident additional knots (see Machalová et al. 2016 for details), it holds that

$$\begin{aligned} \int \limits _{a}^{b} s_{k}(x) \,\text{ d }x \; = \; \left[ s_{k+1}(x) \right] _a^b \; = \; s_{k+1}(b) - s_{k+1}(a) \; = \; c_{g} - c_{-k-1}, \end{aligned}$$
(A1)

and it is obvious that

$$\begin{aligned} 0 \, = \, \int \limits _{a}^{b} s_{k}(x) \, \text{ d }x \quad \Leftrightarrow \quad c_g = c_{-k-1} \quad \Leftrightarrow \quad \dfrac{b_g}{d_g} \, + \, \cdots \, + \, \dfrac{b_{-k}}{d_{-k}} \, = \, 0. \end{aligned}$$

Finally, the definition of \(d_i\) implies that the following sequence of equivalences can be formulated,

$$\begin{aligned} 0 \, = \, \int \limits _{a}^{b} s_{k}(x) \, \text{ d }x \quad \Leftrightarrow \quad \sum \limits _{i=-k}^{g} \dfrac{b_i}{d_i} \, = \, 0 \quad \Leftrightarrow \quad \sum \limits _{i=-k}^{g} b_i\left( \lambda _{i+k+1}-\lambda _i \right) \, = \, 0. \end{aligned}$$

\(\square \)
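The condition of Theorem 10 is easy to check numerically. The following sketch (an illustration only, not the authors' code; it assumes SciPy's B-spline conventions with \((k+1)\)-fold coincident boundary knots) constructs a cubic spline whose coefficients satisfy \(\sum _i b_i(\lambda _{i+k+1}-\lambda _i)=0\) and confirms that its integral vanishes:

```python
# Numerical check of Theorem 10 (a sketch; SciPy's knot/coefficient
# convention is assumed to match the paper's basis B_i^{k+1}).
import numpy as np
from scipy.interpolate import BSpline

k = 3                                  # spline degree
a, b = 0.0, 1.0
inner = np.array([0.25, 0.5, 0.75])    # lambda_1, ..., lambda_g with g = 3
t = np.r_[[a] * (k + 1), inner, [b] * (k + 1)]   # extended knot vector

n = len(t) - k - 1                     # number of basis functions = g + k + 1
rng = np.random.default_rng(0)
c = rng.standard_normal(n)

# Enforce sum_i c_i (lambda_{i+k+1} - lambda_i) = 0 via the last coefficient.
w = t[k + 1:] - t[:n]                  # knot differences lambda_{i+k+1} - lambda_i
c[-1] = -np.dot(c[:-1], w[:-1]) / w[-1]

s = BSpline(t, c, k)
print(abs(s.integrate(a, b)))          # ~ 0 up to floating-point error
```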

Algorithm

The algorithm to find a spline \(s_{k}(x)\in \mathcal{S}_{k}^{\Delta \lambda }[a,b]\) with zero integral, i.e., the respective vector \({\mathbf {b}}=(b_{-k}, \cdots , b_g)^{\top }\), can be summarized as follows:

1. choose \(g+k\) arbitrary B-spline coefficients \(b_i\in {\mathbb {R}}\), \(i=-k,\ldots ,j-1,j+1,\ldots ,g\),

2. compute

$$\begin{aligned} b_j \; = \; \dfrac{-1}{\lambda _{j+k+1}-\lambda _j} \; \sum \limits _{\begin{array}{c} i=-k\\ i\ne j \end{array}}^{g}\;b_i\left( \lambda _{i+k+1}-\lambda _i\right) . \end{aligned}$$
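A minimal sketch of the two-step algorithm in Python (the helper name `zero_integral_coefs` is hypothetical; `knots` denotes the knot vector extended by coincident additional knots at the endpoints):

```python
import numpy as np

def zero_integral_coefs(b_free, knots, k, j):
    """Step 2 of the algorithm: given the g+k freely chosen coefficients
    (all except index j), solve for b_j so that the degree-k spline on the
    extended knot vector has zero integral."""
    n = len(b_free) + 1                      # g + k + 1 coefficients in total
    w = knots[k + 1:k + 1 + n] - knots[:n]   # lambda_{i+k+1} - lambda_i
    b = np.empty(n)
    b[:j], b[j + 1:] = b_free[:j], b_free[j:]
    b[j] = -(b[:j] @ w[:j] + b[j + 1:] @ w[j + 1:]) / w[j]
    return b

# Demo: cubic spline (k = 3) on [0, 1] with inner knots 0.25, 0.5, 0.75
# and coincident boundary knots; the coefficient with index j = 2 is solved for.
knots = np.r_[[0.0] * 4, [0.25, 0.5, 0.75], [1.0] * 4]
b = zero_integral_coefs(np.arange(6.0), knots, 3, 2)
w = knots[4:11] - knots[:7]
print(b @ w)   # ~ 0
```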

Appendix B: Proofs

Proof of Theorem 1

The clr transformation of the independence density \(f_{\mathrm {ind}}(x,y)\) can be written as

$$\begin{aligned} {\mathrm {clr}(f_{\mathrm {ind}})}(x,y)=\ln [f_{X,g}(x)f_{Y,g}(y)] - \frac{1}{{\mathsf {P}}(\Omega )} \int \limits _{\Omega _X} \int \limits _{\Omega _Y}\ln [f_{X,g}(x)f_{Y,g}(y)]\,d{\mathsf {P}}_X d{\mathsf {P}}_Y. \end{aligned}$$
(B2)

This is invariant under rescaling of the product \(f_{X,{g}}(x)f_{Y,{g}}(y)\). By choosing the following representations of \(f_{X,{g}}(x)\) and \(f_{Y,{g}}(y)\),

$$\begin{aligned} f_{X,{g}}(x)=\exp [{\mathrm {clr}(f_{X,g})}(x)],\quad f_{Y,{g}}(y)=\exp [{\mathrm {clr}(f_{Y,g})}(y)], \end{aligned}$$

the second term in (B2) equals zero. Thus (B2) can be rewritten as

$$\begin{aligned} {\mathrm {clr}(f_{\mathrm {ind}})}(x,y) = \ln \{\exp [{\mathrm {clr}(f_{X,g})}(x)+{\mathrm {clr}(f_{Y,g})}(y)]\}={\mathrm {clr}(f_{X,g})}(x)+{\mathrm {clr}(f_{Y,g})}(y). \end{aligned}$$

For the sake of simplicity in notation, arguments are hereafter omitted. Consider

$$\begin{aligned} {\mathrm {clr}(f_{\mathrm {int}})} = {\mathrm {clr}(f)} - {\mathrm {clr}(f_{\mathrm {ind}})} = {\mathrm {clr}(f)} -{\mathrm {clr}(f_{X,g})} - {\mathrm {clr}(f_{Y,g})}, \end{aligned}$$

then

$$\begin{aligned} \langle {\mathrm {clr}(f_{\mathrm {int}})}, {\mathrm {clr}(f_{\mathrm {ind}})} \rangle _{L_0^2({\mathsf {P}})}&=\langle {\mathrm {clr}(f)} - {\mathrm {clr}(f_{X,g})} - {\mathrm {clr}(f_{Y,g})}, {\mathrm {clr}(f_{X,g})} + {\mathrm {clr}(f_{Y,g})} \rangle _{L_0^2({\mathsf {P}})}\\&= \langle {\mathrm {clr}(f)},{\mathrm {clr}(f_{X,g})} \rangle _{L_0^2({\mathsf {P}})} + \langle {\mathrm {clr}(f)},{\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})} - \Vert {\mathrm {clr}(f_{X,g})} \Vert _{L_0^2({\mathsf {P}})}^2 \\&\quad - \Vert {\mathrm {clr}(f_{Y,g})} \Vert _{L_0^2({\mathsf {P}})}^2 - 2\langle {\mathrm {clr}(f_{X,g})},{\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})}. \end{aligned}$$

For the first scalar product one has

$$\begin{aligned} \langle {\mathrm {clr}(f)},{\mathrm {clr}(f_{X,g})}\rangle _{L_0^2({\mathsf {P}})}&= \int \limits _{\Omega _X} \int \limits _{\Omega _Y} {\mathrm {clr}(f)}(x,y) {\mathrm {clr}(f_{X,g})}(x)\,d{\mathsf {P}}_Y d{\mathsf {P}}_X\\&= \int \limits _{\Omega _X} {\mathrm {clr}(f_{X,g})}(x) \left[ \int \limits _{\Omega _Y} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_Y\right] d{\mathsf {P}}_X \\&= \int \limits _{\Omega _X}[{\mathrm {clr}(f_{X,g})}(x)]^2\,d{\mathsf {P}}_X = \Vert {\mathrm {clr}(f_{X,g})}\Vert _{L_0^2({\mathsf {P}})}^2, \end{aligned}$$

and similarly \(\langle {\mathrm {clr}(f)},{\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})} = \Vert {\mathrm {clr}(f_{Y,g})}\Vert _{L_0^2({\mathsf {P}})}^2\). Finally,

$$\begin{aligned} \langle {\mathrm {clr}(f_{X,g})},{\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})}&= \int \limits _{\Omega _X} \int \limits _{\Omega _Y} {\mathrm {clr}(f_{X,g})}(x) {\mathrm {clr}(f_{Y,g})}(y) \,d{\mathsf {P}}_X d{\mathsf {P}}_Y = \\&= \int \limits _{\Omega _X} {\mathrm {clr}(f_{X,g})}(x)\,d{\mathsf {P}}_X \cdot \int \limits _{\Omega _Y} {\mathrm {clr}(f_{Y,g})}(y)\,d{\mathsf {P}}_Y=0, \end{aligned}$$

which completes the proof. \(\square \)
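The computations in this proof can be illustrated numerically. The following sketch (an illustration, not part of the paper; it assumes a uniform reference measure on a rectangular grid, so that the \(L_0^2({\mathsf {P}})\) inner product becomes a grid mean) builds the clr interaction part of a dependent density and checks its orthogonality to the independent part:

```python
# Numerical illustration of Theorem 1 on a finite grid with uniform
# reference measure (integrals become grid means). Assumed setup only.
import numpy as np

x = np.linspace(0.1, 1.0, 40)
y = np.linspace(0.1, 1.0, 50)
X, Y = np.meshgrid(x, y, indexing="ij")
f = np.exp(-((X - 0.5) ** 2 + (Y - 0.4) ** 2 + X * Y))  # dependent, unnormalized

C = np.log(f) - np.log(f).mean()        # clr(f), zero grid mean
Cx = C.mean(axis=1, keepdims=True)      # clr of geometric X-marginal
Cy = C.mean(axis=0, keepdims=True)      # clr of geometric Y-marginal
C_ind = Cx + Cy                         # clr(f_ind)
C_int = C - C_ind                       # clr(f_int)

inner = (C_int * C_ind).mean()          # <clr(f_int), clr(f_ind)>
print(abs(inner))                       # ~ 0 up to floating-point error
```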

Proof of Theorem 2

In the case of independence, a bivariate density decomposes as the product of its arithmetic marginals, \(f(x,y)=f_{X,a}(x)f_{Y,a}(y)\). In Bayes spaces, this is reformulated as in (11). Call \({\mathrm {clr}(f_{X,a})}(x)\), \({\mathrm {clr}(f_{Y,a})}(y)\) the clr representations of the marginals, i.e., \(f_{X,a}(x)=\exp [{\mathrm {clr}(f_{X,a})}(x)]\) and similarly \(f_{Y,a}(y)=\exp [{\mathrm {clr}(f_{Y,a})}(y)]\). Using (11), one may build the independence component as \({\mathrm {clr}(f_{\mathrm {ind}})}(x,y) = {\mathrm {clr}(f_{X,a})}(x)+{\mathrm {clr}(f_{Y,a})}(y)\), which clearly coincides with the clr transform of f itself. The clr representation of the geometric X-marginal is derived—by definition (5)—as

$$\begin{aligned} \int _{\Omega _Y}{\mathrm {clr}(f_{\mathrm {ind}})}(x,y)\,d{\mathsf {P}}_Y=\int _{\Omega _Y} \left[ {\mathrm {clr}(f_{X,a})}(x)+{\mathrm {clr}(f_{Y,a})}(y)\right] d{\mathsf {P}}_Y={\mathsf {P}}_Y(\Omega _Y)\,{\mathrm {clr}(f_{X,a})}(x). \end{aligned}$$

By considering that \({\mathsf {P}}_Y(\Omega _Y)=1\) and that \({\mathrm {clr}(f_{Y,a})}\) has zero integral over \(\Omega _Y\), the geometric X-marginal is obtained by applying the exponential, \(f_{X,{g}}(x)=\exp [{\mathrm {clr}(f_{X,a})}(x)]\); i.e., it coincides with the arithmetic marginal \(f_{X,a}(x)\). The case of the Y-marginal is proven analogously. \(\square \)

Proof of Theorem 3

The orthogonality of the marginals is easily proven in the clr space. Specifically,

$$\begin{aligned}&\langle {\mathrm {clr}(f_{X,g})}, {\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})} = \left\langle \int _{\Omega _Y} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_Y,\int _{\Omega _X} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_X\right\rangle _{L_0^2({\mathsf {P}})}\\&\quad = \int _{\Omega _X}\int _{\Omega _Y}\left[ \int _{\Omega _Y} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_Y\right] \left[ \int _{\Omega _X}{\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_X \right] d{\mathsf {P}}_Yd{\mathsf {P}}_X\\&\quad = \int _{\Omega _X}\left[ \int _{\Omega _Y} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_Y\right] d{\mathsf {P}}_X\cdot \int _{\Omega _Y}\left[ \int _{\Omega _X} {\mathrm {clr}(f)}(x,y)\,d{\mathsf {P}}_X\right] d{\mathsf {P}}_Y=0 \end{aligned}$$

from the fact that \({\mathrm {clr}(f_{X,g})}\in L_0^2(\Omega _X)\) and \({\mathrm {clr}(f_{Y,g})}\in L_0^2(\Omega _Y)\). In the next step, the orthogonality between \(f_{\mathrm {int}}\equiv f_{\mathrm {int}}(x,y)\) and the X-marginal is proven. Using the first part of this theorem and the relation \(\langle {\mathrm {clr}(f)},{\mathrm {clr}(f_{X,g})}\rangle _{L_0^2({\mathsf {P}})} = \Vert {\mathrm {clr}(f_{X,g})}\Vert ^2_{L_0^2({\mathsf {P}})}\) from the proof of Theorem 1, it holds that

$$\begin{aligned}&\langle {\mathrm {clr}(f_{\mathrm {int}})},{\mathrm {clr}(f_{X,g})}\rangle _{L_0^2({\mathsf {P}})} =\langle {\mathrm {clr}(f)} - {\mathrm {clr}(f_{\mathrm {ind}})}, {\mathrm {clr}(f_{X,g})}\rangle _{L_0^2({\mathsf {P}})}\\&\quad = \langle {\mathrm {clr}(f)}- {\mathrm {clr}(f_{X,g})} - {\mathrm {clr}(f_{Y,g})}, {\mathrm {clr}(f_{X,g})} \rangle _{L_0^2({\mathsf {P}})}\\&\quad = \langle {\mathrm {clr}(f)}, {\mathrm {clr}(f_{X,g})} \rangle _{L_0^2({\mathsf {P}})} - \Vert {\mathrm {clr}(f_{X,g})} \Vert _{L_0^2({\mathsf {P}})}^2 - \langle {\mathrm {clr}(f_{X,g})}, {\mathrm {clr}(f_{Y,g})}\rangle _{L_0^2({\mathsf {P}})}\\&\quad =\Vert {\mathrm {clr}(f_{X,g})} \Vert _{L_0^2({\mathsf {P}})}^2 - \Vert {\mathrm {clr}(f_{X,g})}\Vert _{L_0^2({\mathsf {P}})}^2=0. \end{aligned}$$

\(\square \)

Proof of Theorem 4

Equation (12) can be equivalently stated in terms of the clr marginals as

$$\begin{aligned} {\mathrm {clr}(f)} + {\mathrm {clr}(f_{\mathrm {int},X,g})} = {\mathrm {clr}(f)}; \quad {\mathrm {clr}(f)} + {\mathrm {clr}(f_{\mathrm {int},Y,g})} = {\mathrm {clr}(f)}.\end{aligned}$$
(3)

In this case, using that \({\mathrm {clr}(f_{X,g})}\) has zero integral over \(\Omega _X\), one has

$$\begin{aligned}&{\mathrm {clr}(f)} + {\mathrm {clr}(f_{\mathrm {int},X,g})} = {\mathrm {clr}(f)} + \int _{\Omega _X} {\mathrm {clr}(f_{\mathrm {int}})} d{\mathsf {P}}_X =\\&\quad = {\mathrm {clr}(f)} + \int _{\Omega _X} {\mathrm {clr}(f)}\, d{\mathsf {P}}_X - \int _{\Omega _X} {\mathrm {clr}(f_{X,g})} d{\mathsf {P}}_X - \int _{\Omega _X} {\mathrm {clr}(f_{Y,g})} d{\mathsf {P}}_X = \\&\quad = {\mathrm {clr}(f)} + {\mathrm {clr}(f_{Y,g})} - {\mathrm {clr}(f_{Y,g})} \cdot {\mathsf {P}}_X(\Omega _X) = {\mathrm {clr}(f)}, \end{aligned}$$

where the last equality holds true since the measure \({\mathsf {P}}_X\) is normalized, i.e., \({\mathsf {P}}_X(\Omega _X)=1\). With an analogous argument, the same equality is proven for \(f_{\mathrm {int},Y,{g}}\). \(\square \)

Proof of Theorem 5

From (11) and the expression \(g_{\mathrm {ind}}=(g_{X,{g}}\oplus f_{X,{g}}) \oplus (g_{Y,{g}} \oplus f_{Y,{g}})\) it follows that \(g_{\mathrm {ind}}\) is an independence density of g. Therefore

$$\begin{aligned} g_{\mathrm {int}}=g\ominus g_{\mathrm {ind}}=f\ominus f_{\mathrm {ind}}=f_{\mathrm {int}}. \end{aligned}$$

\(\square \)

Proof of Theorem 6

Let the first term in (21) be denoted as

$$\begin{aligned} J_1 \; = \; \alpha \sum \limits _{i=1}^{n} \sum \limits _{j=1}^m \, \left[ f_{i j}-s_{k l}(x_{i},y_j)\right] ^{2} \end{aligned}$$
(4)

and the second one as

$$\begin{aligned} J_2 \; = \; (1-\alpha )\iint \limits _{\Omega }\left[ s_{k l}^{(u,v)}(x,y)\right] ^{2}\, \text{ d }x\,\text{ d }y. \end{aligned}$$
(5)

We can express the functional \(J_1\) from (4) in matrix notation as

$$\begin{aligned} \begin{aligned} J_1&= \, \alpha \sum \limits _{i=1}^{n} \sum \limits _{j=1}^m \, \left[ f_{i j}-s_{k l}(x_{i},y_j)\right] ^{2} \, = \, \alpha \left[ cs({\mathbf {F}}) - {\mathbb {B}} \, cs({\mathbf {B}})\right] ^{\top } \left[ cs({\mathbf {F}}) - {\mathbb {B}} \, cs({\mathbf {B}}) \right] \, = \\&= \, \alpha \left( cs({\mathbf {F}})\right) ^{\top } cs({\mathbf {F}}) - 2 \alpha \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {B}}^{\top } \, cs({\mathbf {F}}) + \alpha \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {B}}^{\top } \, {\mathbb {B}} \, cs({\mathbf {B}}), \end{aligned} \end{aligned}$$

where \({\mathbf {F}}=(f_{i j})\), \({\mathbb {B}} \, := \, {\mathbf {B}}_{l+1}({\mathbf {y}}) \otimes {\mathbf {B}}_{k+1}({\mathbf {x}})\), \({\mathbf {y}}=(y_1,\cdots ,y_m)\), \({\mathbf {x}}=(x_1,\cdots ,x_n)\). Now we consider the derivative of the spline. Similarly to the case of one-dimensional splines (Machalová et al. 2016; Machalová 2002a), the derivative can be expressed by using (23), (24) as

$$\begin{aligned} \begin{aligned} s_{k l}^{(u,v)}(x,y)&= \, \frac{\partial ^{u}}{\partial x^{u}} \, \frac{\partial ^{v}}{\partial y^{v}} \, \sum \limits _{i=-k}^{g} \sum \limits _{j=-l}^{h} \, b_{ij} \, B_i^{k+1}(x) \, B_j^{l+1}(y) \\&= \, \frac{\partial ^{u}}{\partial x^{u}} \, \frac{\partial ^{v}}{\partial y^{v}} \, \left( {\mathbf {B}}_{l+1}(y) \otimes {\mathbf {B}}_{k+1}(x) \right) \, cs({\mathbf {B}}) \\&= \left[ {\mathbf {B}}_{l+1-v}(y){\mathbf {S}}_{v} \otimes {\mathbf {B}}_{k+1-u}(x){\mathbf {S}}_{u} \right] \, cs({\mathbf {B}}). \end{aligned} \end{aligned}$$
(6)

With respect to the properties of the tensor product, and using the notation \({\mathbb {B}}^{u,v}(x,y) := {\mathbf {B}}_{l+1-v}(y) \otimes {\mathbf {B}}_{k+1-u}(x)\), the derivative given in (6) can be reformulated as \( s_{k l}^{(u,v)}(x,y) = {\mathbb {B}}^{u,v}(x,y) \, {\mathbb {S}} \, cs({\mathbf {B}}). \) Note that the flexibility in the choice of the orders \(u,\, v\) of the derivatives \(s_{k l}^{(u,v)}(x,y)\) can be considered an element of novelty with respect to the classical tensor-product smoothing spline approach (Dierckx 1993). The functional \(J_2\) from (5) can then be rewritten as

$$\begin{aligned} \begin{aligned} J_2&= \, (1-\alpha )\int _{\Omega }\left[ s_{k l}^{(u,v)}(x,y)\right] ^{2} \, \text{ d }x\,\text{ d }y \, = \\&= \, (1-\alpha )\int \limits _a^b \int \limits _c^d \left[ {\mathbb {B}}^{u,v}(x,y) \, {\mathbb {S}} \, cs({\mathbf {B}})\right] ^{\top } {\mathbb {B}}^{u,v}(x,y) \, {\mathbb {S}} \, cs({\mathbf {B}}) \, \text{ d }y \,\text{ d }x=\\&= \, (1-\alpha ) \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {S}}^{\top } \int \limits _a^b \int \limits _c^d \left( {\mathbb {B}}^{u,v}(x,y)\right) ^{\top } {\mathbb {B}}^{u,v}(x,y) \, \text{ d }y\,\text{ d }x \; {\mathbb {S}} \, cs({\mathbf {B}}). \end{aligned} \end{aligned}$$

Furthermore,

$$\begin{aligned}&\int \limits _a^b \int \limits _c^d \left( {\mathbb {B}}^{u,v}(x,y)\right) ^{\top } {\mathbb {B}}^{u,v}(x,y) \, \text{ d }y\, \text{ d }x \, =\\&\quad = \int \limits _a^b \int \limits _c^d \left[ {\mathbf {B}}_{l+1-v}(y) \otimes {\mathbf {B}}_{k+1-u}(x) \right] ^{\top } \left[ {\mathbf {B}}_{l+1-v}(y) \otimes {\mathbf {B}}_{k+1-u}(x) \right] \, \text{ d }y\,\text{ d }x \, = \\&\quad = \int \limits _a^b \int \limits _c^d \left[ {\mathbf {B}}_{l+1-v}^{\top }(y){\mathbf {B}}_{l+1-v}(y)\right] \otimes \left[ {\mathbf {B}}_{k+1-u}^{\top }(x){\mathbf {B}}_{k+1-u}(x)\right] \text{ d }y \,\text{ d }x \, =\\&\quad = {\mathbf {M}}_{l,v}^{y} \otimes {\mathbf {M}}_{k,u}^{x}. \end{aligned}$$

This yields \( J_2 \; = \; (1-\alpha ) \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {S}}^{\top } {\mathbb {M}} \, {\mathbb {S}} \, cs({\mathbf {B}}).\) By putting together the matrix forms of \(J_1\) and \(J_2\), the functional \(J_{uv}(s_{k l}(x, y))\) from (21) can be expressed as a function of the unknown B-spline coefficients \(b_{ij}\), specifically

$$\begin{aligned} \begin{aligned} J_{uv}\left( cs({\mathbf {B}})\right) \; =&\; \alpha \left( cs({\mathbf {F}})\right) ^{\top } cs({\mathbf {F}})- 2\alpha \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {B}}^{\top } \, cs({\mathbf {F}}) \\&+ \alpha \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {B}}^{\top } \, {\mathbb {B}} \, cs({\mathbf {B}}) \, + \, (1-\alpha ) \left( cs({\mathbf {B}})\right) ^{\top } {\mathbb {S}}^{\top } {\mathbb {M}} \, {\mathbb {S}} \, cs({\mathbf {B}}). \end{aligned} \end{aligned}$$
(7)

The fulfilment of the zero integral condition (22) relies on relation (26). Using it, the function \(J_{uv}(cs({\mathbf {B}}))\) can be reformulated as

$$\begin{aligned} \begin{aligned} J_{uv}\left( cs(\widetilde{{\mathbf {C}}})\right) \, =&\, \alpha \left( cs({\mathbf {F}})\right) ^{\top } cs({\mathbf {F}}) \, - \, 2\alpha \left( {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}}) \right) ^{\top } {\mathbb {B}}^{\top } \, cs({\mathbf {F}}) \, + \\&+ \, \alpha \left( {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}})\right) ^{\top } {\mathbb {B}}^{\top } \, {\mathbb {B}} \, {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}})\, +\\&\, + (1-\alpha ) \left( {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}})\right) ^{\top } {\mathbb {S}}^{\top } {\mathbb {M}} \, {\mathbb {S}} \, {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}}). \end{aligned} \end{aligned}$$
(8)

Thus, the necessary and sufficient condition for the minimum of the function \(J_{uv}\left( cs(\widetilde{{\mathbf {C}}})\right) \) is \( \dfrac{\partial \, J_{uv}\left( cs(\widetilde{{\mathbf {C}}})\right) }{\partial \, cs(\widetilde{{\mathbf {C}}})} \, = \, 0. \) By applying this condition to (8), the following equation is obtained,

$$\begin{aligned} \widetilde{{\mathbb {K}}}^{\top } \, {\mathbb {D}}^{\top } \, \left[ (1-\alpha ) {\mathbb {S}}^{\top } \, {\mathbb {M}} \, {\mathbb {S}} \, + \, \alpha \, {\mathbb {B}}^{\top } \, {\mathbb {B}}\right] {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}}) \, = \, \alpha \, \widetilde{{\mathbb {K}}}^{\top } \, {\mathbb {D}}^{\top } {\mathbb {B}}^{\top } \, cs({\mathbf {F}}). \end{aligned}$$

Then the solution to this system is given by

$$\begin{aligned} cs(\widetilde{{\mathbf {C}}^*}) \; = \; \left[ \widetilde{{\mathbb {K}}}^{\top } {\mathbb {D}}^{\top } \left[ (1-\alpha ) \, {\mathbb {S}}^{\top } {\mathbb {M}} \, {\mathbb {S}} + \alpha \, {\mathbb {B}}^{\top } \, {\mathbb {B}}\right] {\mathbb {D}} \widetilde{{\mathbb {K}}} \right] ^{+} \alpha \,\widetilde{{\mathbb {K}}}^{\top }{\mathbb {D}}^{\top } {\mathbb {B}}^{\top } \, cs({\mathbf {F}}). \end{aligned}$$
(9)

Finally, the matrix \({\mathbf {B}}^*\) of coefficients of the resulting smoothing spline with zero integral is obtained as

$$\begin{aligned} cs({\mathbf {B}}^*) \, = \, {\mathbb {D}} \, \widetilde{{\mathbb {K}}} \, cs(\widetilde{{\mathbf {C}}^*}). \end{aligned}$$
(10)

\(\square \)
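Computationally, the proof boils down to assembling a penalized normal system and applying a Moore–Penrose inverse. A minimal sketch, assuming the matrices \({\mathbb {B}}\), \({\mathbb {S}}\), \({\mathbb {M}}\), \({\mathbb {D}}\), \(\widetilde{{\mathbb {K}}}\) have been built as in the proof (here replaced by random placeholders of compatible size; the helper name `smoothing_coefs` is hypothetical):

```python
import numpy as np

def smoothing_coefs(BB, SS, MM, DD, KK, F_vec, alpha):
    """Solve the penalized normal system by a Moore-Penrose inverse
    and back-substitute to the full coefficient vector cs(B*)."""
    A = KK.T @ DD.T @ ((1 - alpha) * SS.T @ MM @ SS
                       + alpha * BB.T @ BB) @ DD @ KK
    rhs = alpha * KK.T @ DD.T @ BB.T @ F_vec
    c_star = np.linalg.pinv(A) @ rhs   # reduced coefficients cs(C*)
    return DD @ KK @ c_star            # back-substitution to cs(B*)

# Tiny demo with random placeholder matrices of compatible sizes.
rng = np.random.default_rng(1)
BB = rng.standard_normal((10, 6))     # collocation matrix (placeholder)
SS = rng.standard_normal((6, 6))      # differentiation matrix (placeholder)
MM = np.eye(6)                        # penalty Gram matrix (placeholder)
DD, KK = np.eye(6), rng.standard_normal((6, 5))
b_star = smoothing_coefs(BB, SS, MM, DD, KK, rng.standard_normal(10), 0.5)
```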

Proof of Theorem 7

The spline \(s_{kl}(x,y) \in \mathcal{S}_{kl}^{\Delta \lambda ,\Delta \mu }(\Omega )\) can be expressed as

$$\begin{aligned} s_{kl}\left( x,y\right) \, = \, \sum \limits _{i=-k}^{g} \sum \limits _{j=-l}^{h} b_{ij} \, B_{i}^{k+1} \left( x\right) \, B_j^{l+1} \left( y\right) \, = \, \sum \limits _{i=-k}^{g} s_{l}^i(y) \, B_{i}^{k+1} \left( x\right) , \end{aligned}$$

where \(s_{l}^i (y) \, := \, \sum \limits _{j=-l}^{h} b_{ij} \, B_j^{l+1} \left( y\right) \), \(i=-k,\cdots ,g\), are in fact one-dimensional splines of order \(l+1\) for the y-variable with coefficients \(b_{ij}\), \(j=-l,\cdots ,h\). Then

$$\begin{aligned} \int s_{kl}(x,y) \, \text{ d }y \, = \, \int \sum \limits _{i=-k}^{g} s_l^i(y) \, B_i^{k+1}(x) \, \text{ d }y \, = \, \sum \limits _{i=-k}^{g} B_i^{k+1}(x) \, \int s_l^i(y) \, \text{ d }y \end{aligned}$$

and

$$\begin{aligned} \int s_l^i(y) \, \text{ d }y \, = \, s_{l+1}^i(y), \quad \text{ with } \quad s_{l+1}^i(y) \, = \, \sum \limits _{j=-l-1}^h u_{ij} \, B_j^{l+2}(y). \end{aligned}$$

By considering the case of one-dimensional splines, specifically the proof of Theorem 10, it holds that

$$\begin{aligned} u_{ij} \, = \, u_{i,j-1}+\dfrac{b_{ij}}{t_j}, \quad \text{ where } \quad t_j= \dfrac{l+1}{\mu _{j+l+1}-\mu _j}, \quad j=-l,\cdots ,h, \end{aligned}$$
(11)

i.e.

$$\begin{aligned} u_{ih} \, = \, \dfrac{b_{ih}}{t_h} + \ldots + \dfrac{b_{i,-l}}{t_{-l}}+u_{i,-l-1}, \quad \forall i. \end{aligned}$$
(12)

Altogether

$$\begin{aligned} \begin{aligned} \int s_{kl}(x,y) \, \text{ d }y&= \, \sum \limits _{i=-k}^{g} B_i^{k+1}(x) \sum \limits _{j=-l-1}^{h} u_{ij} \, B_{j}^{l+2}(y) \, = \\&= \, \sum \limits _{i=-k}^{g}\sum \limits _{j=-l-1}^{h} u_{ij} \, B_{i}^{k+1}(x) \, B_{j}^{l+2}(y) \, =: \, s_{k,l+1}(x,y). \end{aligned} \end{aligned}$$

Subsequently, using the last expression, the integral can be expressed as

$$\begin{aligned} \begin{aligned} \int \limits _{c}^{d} s_{kl}(x,y) \, \text{ d }y&= \, \left[ s_{k,l+1}(x,y) \right] _{c}^{d} \, = \, s_{k,l+1}(x,d) - s_{k,l+1}(x,c) \, = \\&= \, \sum \limits _{i=-k}^{g} \sum \limits _{j=-l-1}^{h} u_{ij} \, B_{i}^{k+1}(x) \, \left( B_{j}^{l+2}(d) - B_j^{l+2}(c) \right) \, = \\&= \, \sum \limits _{i=-k}^{g} B_{i}^{k+1}(x) \left( u_{ih}- u_{i,-l-1} \right) \, = \, \sum \limits _{i=-k}^{g} v_i \, B_i^{k+1}(x) \, =: \, s_{k}(x), \end{aligned} \end{aligned}$$
(13)

for

$$\begin{aligned} v_{i} \, := \, u_{ih} - u_{i,-l-1} \qquad \forall i=-k,\cdots ,g, \end{aligned}$$
(14)

since, owing to the coincident additional knots (19), (20), it holds that

$$\begin{aligned} B_j^{l+2}(d)= \left\{ \begin{array}{cl} 1 &{} \quad \text{ if } \; j = h \\ 0 &{} \quad \text{ otherwise } \end{array} \right. \qquad B_j^{l+2}(c)= \left\{ \begin{array}{cl} 1 &{} \quad \text{ if } \; j = -l-1 \\ 0 &{} \quad \text{ otherwise }. \end{array} \right. \end{aligned}$$

Finally, according to (13) and (A1), one has

$$\begin{aligned} \begin{aligned} \int \limits _{a}^{b}\int \limits _{c}^{d} s_{kl}(x,y)\, \text{ d }y \, \text{ d }x \,&= \, \int \limits _{a}^{b} s_k(x) \, \text{ d }x \, = \, \left[ s_{k+1}(x) \right] _a^b \, = \\&\, = s_{k+1}(b)-s_{k+1}(a) \, = \, w_g - w_{-k-1}, \end{aligned} \end{aligned}$$
(15)

where \(s_{k+1}(x) \, = \, \sum \limits _{i=-k-1}^{g} w_i \, B_i^{k+2}(x)\) and

$$\begin{aligned} w_i=w_{i-1}+\dfrac{v_i}{d_i}, \quad \text{ with }\quad d_i=\dfrac{k+1}{\lambda _{i+k+1}-\lambda _i} \quad \forall i=-k,\cdots ,g, \end{aligned}$$
(16)

i.e.

$$\begin{aligned} w_g \, = \, \dfrac{v_g}{d_g}+\cdots +\dfrac{v_{-k}}{d_{-k}} + w_{-k-1}. \end{aligned}$$
(17)

As a direct consequence, the following equivalence can be formulated

$$\begin{aligned} \int \limits _{a}^{b}\int \limits _{c}^{d} s_{kl}(x,y)\, \text{ d }y \, \text{ d }x \, = \, 0 \quad \Leftrightarrow \quad w_g \, = \, w_{-k-1} \quad \Leftrightarrow \quad \dfrac{v_g}{d_g}+\cdots +\dfrac{v_{-k}}{d_{-k}} \, = \, 0. \end{aligned}$$

By using (14) and (12),

$$\begin{aligned} \begin{aligned} \dfrac{v_g}{d_g}+\cdots +\dfrac{v_{-k}}{d_{-k}} \,&= \, \sum \limits _{i=-k}^{g} \dfrac{u_{ih}-u_{i,-l-1}}{d_i} \, = \\&\, = \sum \limits _{i=-k}^{g} \dfrac{1}{d_i}\left( \dfrac{b_{ih}}{t_h}+\cdots +\dfrac{b_{i,-l}}{t_{-l}} \right) \, = \, \sum \limits _{i=-k}^{g} \, \sum \limits _{j=-l}^{h} \, \dfrac{b_{ij}}{d_i \, t_j}, \end{aligned} \end{aligned}$$

and altogether

$$\begin{aligned} \int \limits _{a}^{b}\int \limits _{c}^{d} s_{kl}(x,y)\, \text{ d }y \, \text{ d }x \, = \, 0 \quad \Leftrightarrow \quad \sum \limits _{i=-k}^{g}\sum \limits _{j=-l}^{h} b_{ij} \left( \lambda _{i+k+1}-\lambda _i\right) \left( \mu _{j+l+1}-\mu _j\right) \, = \, 0. \end{aligned}$$

\(\square \)
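The closed-form expression for the double integral can also be checked numerically. The sketch below is an illustration, not part of the paper's method: the degrees, knot vectors, and random coefficients are arbitrary assumptions. It compares the weighted coefficient sum of Theorem 7 against the integral of the tensor-product spline obtained by integrating each univariate B-spline with scipy.

```python
import numpy as np
from scipy.interpolate import BSpline

# Degrees and knot vectors are illustrative; boundary knots are repeated
# (k+1)-fold in x and (l+1)-fold in y, matching the coincident-knot setup.
k, l = 2, 3
a, b, c, d = 0.0, 1.0, 0.0, 2.0
lam = np.concatenate(([a] * (k + 1), [0.4, 0.7], [b] * (k + 1)))  # knots in x
mu = np.concatenate(([c] * (l + 1), [1.0], [d] * (l + 1)))        # knots in y
nx = len(lam) - k - 1  # number of basis functions B_i^{k+1}
ny = len(mu) - l - 1   # number of basis functions B_j^{l+1}

rng = np.random.default_rng(1)
B = rng.standard_normal((nx, ny))  # coefficients b_ij

# closed form: sum_ij b_ij (lam_{i+k+1}-lam_i)(mu_{j+l+1}-mu_j) / ((k+1)(l+1))
wx = lam[k + 1:] - lam[:nx]  # lambda_{i+k+1} - lambda_i
wy = mu[l + 1:] - mu[:ny]    # mu_{j+l+1} - mu_j
closed = (wx @ B @ wy) / ((k + 1) * (l + 1))

# numerical cross-check: integrate each univariate basis function with scipy
Ix = np.array([float(BSpline(lam, np.eye(nx)[i], k).integrate(a, b))
               for i in range(nx)])
Iy = np.array([float(BSpline(mu, np.eye(ny)[j], l).integrate(c, d))
               for j in range(ny)])
numeric = Ix @ B @ Iy

assert np.isclose(closed, numeric)
```

Because every basis function with full-multiplicity boundary knots is supported inside \([a,b]\), the computed `Ix` coincides with \((\lambda_{i+k+1}-\lambda_i)/(k+1)\), which is precisely the identity the equivalence rests on.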

Proof of Theorem 8

Let \(s_{kl}(x,y) \in \mathcal{S}_{kl}^{\Delta \lambda ,\Delta \mu }(\Omega )\), with the given representation \(s_{kl}\left( x,y\right) =\sum \limits _{i=-k}^{g} \sum \limits _{j=-l}^{h} b_{ij} \, B_{i}^{k+1} \left( x\right) \, B_j^{l+1} \left( y\right) \), and let \(\iint \limits _{\Omega } s_{kl}(x,y) \, \text{ d }x \, \text{ d }y \, = \, 0\). Then from Theorem 7 it follows that

$$\begin{aligned} \sum \limits _{i=-k}^{g}\sum \limits _{j=-l}^{h} b_{ij} \left( \lambda _{i+k+1}-\lambda _i\right) \left( \mu _{j+k+1}-\mu _j\right) \, = \, 0. \end{aligned}$$
(18)

By using (13) and (14) from the proof of Theorem 7, we obtain \(s_k(x)\, = \, \sum \limits _{i=-k}^g v_i \, B_{i}^{k+1}(x)\), where \(v_i \, = \, u_{ih}-u_{i,-l-1}\). According to (12), it holds that

$$\begin{aligned} v_i \, = \, \dfrac{b_{ih}}{t_h}+\cdots +\dfrac{b_{i,-l}}{t_{-l}}. \end{aligned}$$
(19)

Next, by considering (15),

$$\begin{aligned} \int \limits _a^b s_k(x) \, \text{ d }x \, = \, \left[ s_{k+1}(x)\right] _a^b \, = \, w_g - w_{-k-1}, \end{aligned}$$

where \(s_{k+1}(x)= \sum \limits _{i=-k-1}^g w_i \, B_i^{k+2}(x)\). However, by (16), (17), (19) and (18), this difference equals

$$\begin{aligned} \begin{aligned} w_g - w_{-k-1}&= \, \dfrac{v_g}{d_g} + \cdots + \dfrac{v_{-k}}{d_{-k}} \, = \, \sum \limits _{i=-k}^{g} \dfrac{v_i}{d_i} \, = \, \sum \limits _{i=-k}^{g} \dfrac{1}{d_i} \sum \limits _{j=-l}^{h} \dfrac{b_{ij}}{t_j} \, = \\&= \, \sum \limits _{i=-k}^{g} \sum \limits _{j=-l}^{h} \dfrac{b_{ij}\left( \lambda _{i+k+1} -\lambda _i \right) \left( \mu _{j+l+1}- \mu _j\right) }{(k+1)(l+1)} \, = 0, \end{aligned} \end{aligned}$$

and consequently also \(\int \limits _{a}^{b} s_{k}(x) \text{ d }x \, = \, 0\). The second statement can be proven analogously. \(\square \)
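The chain of identities above can be illustrated numerically. In the sketch below (degrees, knots, and the random seed are arbitrary assumptions), condition (18) is enforced by adjusting a single coefficient, the coefficients \(v_i\) of the partially integrated spline are formed as in (19), and both the pointwise identity and the zero integral of \(s_k\) are verified with scipy.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative degrees and knots; boundary knots have full multiplicity.
k, l = 2, 2
a, b, c, d = 0.0, 1.0, 0.0, 1.0
lam = np.concatenate(([a] * (k + 1), [0.5], [b] * (k + 1)))
mu = np.concatenate(([c] * (l + 1), [0.5], [d] * (l + 1)))
nx, ny = len(lam) - k - 1, len(mu) - l - 1

rng = np.random.default_rng(2)
B = rng.standard_normal((nx, ny))
Ik = (lam[k + 1:] - lam[:nx]) / (k + 1)  # integrals of B_i^{k+1} over [a,b]
Il = (mu[l + 1:] - mu[:ny]) / (l + 1)    # integrals of B_j^{l+1} over [c,d]

# enforce the zero-integral condition (18) by adjusting one coefficient
B[0, 0] -= (Ik @ B @ Il) / (Ik[0] * Il[0])

# coefficients of s_k(x) = int_c^d s_kl(x,y) dy: v_i = sum_j b_ij / t_j,
# with t_j = (l+1)/(mu_{j+l+1} - mu_j), i.e. v = B @ Il  -- cf. (19)
v = B @ Il

# pointwise check that sum_i v_i B_i^{k+1}(x) is the y-integral of s_kl
xs = np.linspace(a, b, 7)
sk = BSpline(lam, v, k)(xs)
direct = np.array([sum(B[i, j] * float(BSpline(lam, np.eye(nx)[i], k)(x)) * Il[j]
                       for i in range(nx) for j in range(ny)) for x in xs])
assert np.allclose(sk, direct)

# Theorem 8: the marginal spline itself integrates to zero
assert abs(v @ Ik) < 1e-12
```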

Proof of Theorem 9

Every bivariate spline \(s_{kl}(x,y) \in \mathcal{S}_{kl}^{\Delta \lambda ,\Delta \mu }(\Omega )\) can be expressed as

$$\begin{aligned} s_{kl}(x,y) \, = \, \sum \limits _{i=-k}^g \sum \limits _{j=-l}^h b_{ij} B_{i}^{k+1}(x) B_{j}^{l+1}(y) \, = \, \sum \limits _{i=-k}^g c_{i} B_{i}^{k+1}(x), \end{aligned}$$

where \(c_i \, = \, \sum \limits _{j=-l}^h b_{ij} B_{j}^{l+1}(y)\). For a given univariate spline \(s_k(x)=\sum \limits _{i=-k}^g v_{i} B_{i}^{k+1}(x)\) we can define coefficients

$$\begin{aligned} v_{ij} \, := \, v_i, \; \forall j=-l,\ldots ,h. \end{aligned}$$

Then \(s_k(x)\) can be expressed as a bivariate spline which is constant in the variable y and which uses the B-spline basis functions \(B_{j}^{l+1}(y)\), in the form

$$\begin{aligned} s_k(x) \, = \, \sum \limits _{i=-k}^g \sum \limits _{j=-l}^h v_{ij} B_{i}^{k+1}(x) B_{j}^{l+1}(y), \end{aligned}$$

since, by the partition-of-unity property of B-splines (de Boor 1978; Dierckx 1993; Schumaker 2007), we have

$$\begin{aligned} \sum \limits _{j=-l}^h v_{ij} B_{j}^{l+1}(y) \, = \, v_i \sum \limits _{j=-l}^h B_{j}^{l+1}(y) \, = \, v_i \cdot 1 \, = \, v_i. \end{aligned}$$

The rest of the proof follows directly by considering the addition or subtraction of two such splines.\(\square \)
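The partition-of-unity property invoked above admits a minimal numerical illustration; the degree and knot values below are arbitrary assumptions, and the basis functions are evaluated with scipy.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative degree and knots; boundary knots have multiplicity l+1.
l = 3
c, d = 0.0, 2.0
mu = np.concatenate(([c] * (l + 1), [0.5, 1.2], [d] * (l + 1)))
ny = len(mu) - l - 1

# evaluate every basis function B_j^{l+1} on a grid in [c, d]
ys = np.linspace(c, d, 9)
D = np.array([[float(BSpline(mu, np.eye(ny)[j], l)(y)) for j in range(ny)]
              for y in ys])

# partition of unity: the basis sums to one everywhere on [c, d], so constant
# coefficients v_{ij} = v_i reproduce the univariate spline exactly
assert np.allclose(D.sum(axis=1), 1.0)
```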

Appendix C: Algorithm

Theorem 7 enables us to formulate an algorithm for finding a bivariate tensor spline \(s_{kl}(x,y) \in {{{\mathcal {S}}}}_{kl}^{\Delta \lambda ,\Delta \mu }(\Omega )\) with zero integral over \(\Omega \). This task is equivalent to finding the matrix \({\mathbf {B}} = \left( b_{ij}\right) \), \(i=-k,\ldots ,g\), \(j=-l,\ldots ,h\), of B-spline coefficients:

1. choose indices \(\beta \in \{-k,\ldots ,g\}\), \(\gamma \in \{-l,\ldots ,h\}\) and \((g+k+1)(h+l+1)-1\) arbitrary B-spline coefficients \(b_{ij}\in {\mathbb {R}}\) for all pairs \((i,j) \ne (\beta ,\gamma )\),

2. compute

$$\begin{aligned} b_{\beta \gamma } \; = \; \dfrac{-1}{\left( \lambda _{\beta +k+1}-\lambda _\beta \right) \left( \mu _{\gamma +l+1}-\mu _\gamma \right) } \; \sum \limits _{\begin{array}{c} (i,j)\ne (\beta ,\gamma ) \end{array}} \; b_{ij} \left( \lambda _{i+k+1}-\lambda _i\right) \left( \mu _{j+l+1}-\mu _j\right) , \end{aligned}$$

so that the zero-integral condition of Theorem 7 is satisfied.
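A direct sketch of this algorithm follows; the degrees, knot vectors, and free coefficients are arbitrary illustrative choices, and the final assertion uses scipy only as an independent cross-check that the constructed tensor spline integrates to zero over \(\Omega \).

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative degrees and knots; boundary knots have full multiplicity.
k, l = 2, 3
a, b, c, d = 0.0, 1.0, 0.0, 2.0
lam = np.concatenate(([a] * (k + 1), [0.3, 0.6], [b] * (k + 1)))
mu = np.concatenate(([c] * (l + 1), [1.0], [d] * (l + 1)))
nx, ny = len(lam) - k - 1, len(mu) - l - 1

# step 1: pick all coefficients freely except the one indexed (beta, gamma)
rng = np.random.default_rng(3)
B = rng.standard_normal((nx, ny))
beta, gamma = nx // 2, ny // 2

# step 2: solve the zero-integral condition of Theorem 7 for b_{beta,gamma}
wx = lam[k + 1:] - lam[:nx]  # lambda_{i+k+1} - lambda_i
wy = mu[l + 1:] - mu[:ny]    # mu_{j+l+1} - mu_j
rest = wx @ B @ wy - B[beta, gamma] * wx[beta] * wy[gamma]
B[beta, gamma] = -rest / (wx[beta] * wy[gamma])

# cross-check with scipy: the tensor spline now integrates to zero over Omega
Ix = np.array([float(BSpline(lam, np.eye(nx)[i], k).integrate(a, b))
               for i in range(nx)])
Iy = np.array([float(BSpline(mu, np.eye(ny)[j], l).integrate(c, d))
               for j in range(ny)])
assert abs(Ix @ B @ Iy) < 1e-10
```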

Appendix D: Complete set of anthropometric data

See Figs. 7, 8, 9, and 10.

Fig. 7
figure 7

Anthropometric data: smoothed clr transformed densities for all age intervals, together with data points resulting from the discrete clr transformation at mid-points of the histogram classes. The choice of the scale of the reference measure (uniform measure) does not play any role here

Fig. 8
figure 8

Anthropometric data: smoothed original bivariate densities for all age intervals

Fig. 9
figure 9

Anthropometric data: smoothed independent densities for all age intervals

Fig. 10
figure 10

Anthropometric data: smoothed interaction densities for all age intervals


Cite this article

Hron, K., Machalová, J. & Menafoglio, A. Bivariate densities in Bayes spaces: orthogonal decomposition and spline representation. Stat Papers 64, 1629–1667 (2023). https://doi.org/10.1007/s00362-022-01359-z
