
Overcoming convergence problems in PLS path modelling

  • Original paper
  • Published in Computational Statistics

Abstract

The present paper deals with convergence issues of Lohmöller’s procedure for the computation of the components in the PLS-PM algorithm. New datasets and proofs are given that highlight the convergence failure of this procedure. Consequently, a new procedure based on the signless Laplacian matrix of the undirected graph between constructs is introduced. In several cases, specified in this paper, both monotone convergence and convergence of the error are established for this new procedure. Several comparisons are presented between the new procedure and the two conventionally used procedures (Lohmöller’s and Hanafi–Wold’s).


References

  • Esposito Vinzi V, Chin W, Henseler J, Wang W (eds) (2010) Handbook of partial least squares: concepts, methods and applications. Springer, Heidelberg

  • Hair J, Hult G, Ringle C, Sarstedt M (2017) A primer on partial least squares structural equation modeling (PLS-SEM), 2nd edn. Sage, Thousand Oaks


  • Hanafi M (2007) PLS path modelling: computation of latent variables with the estimation mode B. Comput Stat 22(2):275–292. https://doi.org/10.1007/s00180-007-0042-3


  • Hanafi M, Dolce P, El Hadri Z (2021) Generalized properties for Hanafi-Wold’s procedure in partial least squares path modeling. Comput Stat 36:603–614. https://doi.org/10.1007/s00180-020-01015-w


  • Henseler J (2010) On the convergence of the partial least squares path modeling algorithm. Comput Stat 25(1):107–120


  • Jöreskog K (1970) A general method for the analysis of covariance structures. Biometrika 57:239–251


  • Li JS, Zhang XD (1998) On the Laplacian eigenvalues of a graph. Linear Algebra Appl 285(1–3):305–307


  • Li Y (2005) PLS-GUI: graphic user interface for partial least squares (PLS-PC 1.8), version 2.0.1 beta. University of South Carolina, Columbia

  • Monecke A, Leisch F (2012) semPLS: structural equation modeling using partial least squares. J Stat Softw 48(3):1–32


  • Sanchez G (2013) PLS path modeling with R. Trowchez Editions, Berkeley. http://www.gastonsanchez.com/PLS Path Modeling with R.pdf

  • Addinsoft SARL (2007–2008) XLSTAT-PLSPM. Paris, France. http://www.xlstat.com/en/products/xlstat-plspm

  • Schur J (1911) Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen. Journal für die reine und angewandte Mathematik 140:1–28


  • Tenenhaus M, Vinzi VE, Chatelin YM, Lauro C (2005) PLS path modeling. Comput Stat Data Anal 48:159–205


  • Tenenhaus M, Tenenhaus A, Groenen PJ (2017) Regularized generalized canonical correlation analysis: a framework for sequential multiblock component methods. Psychometrika 82:737–777


  • Wold H (1982) Soft modelling: the basic design and some extensions. In: Joreskog KG, Wold H (eds) System under indirect observation, vol 2. North Holland, Amsterdam, pp 1–54


  • Wold H (1985) Partial least squares. In: Kotz S, Johnson NL (eds) Encyclopaedia of statistical sciences, vol 6. Wiley, New York, pp 581–591



Author information


Corresponding author

Correspondence to Zouhair El Hadri.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

1.1 A1

Proof of Lemma 1.

The proof proceeds by induction on the iteration s. \({\mathbf{z }_k}^{(s)}\) denotes the sequence of components generated by Lohmöller’s procedure applied to \((\mathbf{X }_1,\ldots ,\mathbf{X }_K)\) and initialized by the weights \(\left( \tilde{\mathbf{w }}_1^{(0)},\ldots , \tilde{\mathbf{w }}_K^{(0)}\right) \). \({\mathbf{y }_k}^{(s)}\) denotes the sequence of components generated by Lohmöller’s procedure applied to \((\mathbf{Q }_1,\ldots ,\mathbf{Q }_K)\) and initialized by the weights \(\left( \tilde{\mathbf{u }}_1^{(0)},\ldots , \tilde{\mathbf{ u }}_K^{(0)}\right) \), chosen such that \({\mathbf{z }_k}^{(0)}={\mathbf{y }_k}^{(0)}\) for each \(k=1,2,\ldots ,K\); this gives the base case. Suppose now that \({\mathbf{z }_k}^{(s)}={\mathbf{y }_k}^{(s)}\) for all \(s=0,1,2, \ldots , s_0\). Let us show that \({\mathbf{z }_k}^{(s_0+1)}={\mathbf{y }_k}^{(s_0 +1)}\).

For each block k at iteration \(s_0+1\) we have:

$$\begin{aligned} \mathbf{z }_k^{(s_0+1)}&=\mathbf{X }_k \mathbf{w }_k^{(s_0+1)} \\&= \sqrt{n}\frac{ \mathbf{X }_k \tilde{\mathbf{w }}^{(s_0+1)}_{k}}{\left\| {\mathbf{X }_{k}\tilde{\mathbf{w }}^{(s_0+1)}_{k}}\right\| } \\&= \sqrt{n} \frac{ \mathbf{X }_k ({\mathbf{X }_k}^{'}\mathbf{X }_k)^{-1} {\mathbf{X }_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s_0)} }{\left\| {\mathbf{X }_{k}({\mathbf{X }_k}^{'}\mathbf{X }_k)^{-1} {\mathbf{X }_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s_0)} }\right\| } \\&=\sqrt{n}\frac{\mathbf{X }_k\left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1} {\mathbf{X }_k}^{'} \left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) }{\left\| \mathbf{X }_k \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1} {\mathbf{X }_k}^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) \right\| } \\&= \sqrt{n}\frac{\mathbf{X }_k \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2}\left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2} {\mathbf{X }_k}^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) }{\left\| \mathbf{X }_k \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2} \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2} {\mathbf{X }_k}^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) \right\| } \\ \end{aligned}$$

Substituting \( \mathbf{Q }_k= \mathbf{X }_k \left( \dfrac{{\mathbf{X }_k}^{'}\mathbf{X }_k}{n} \right) ^{-1/2}\), it follows

$$\begin{aligned} \mathbf{z }_k^{(s_0+1)}&=\sqrt{n}\frac{\mathbf{Q }_k\mathbf{Q }_k^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) }{\left\| \mathbf{Q }_k\mathbf{Q }_k^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) \right\| } \\&=\sqrt{n}\frac{\mathbf{Q }_k\left( \mathbf{Q }_k^{'} \mathbf{Q }_k \right) ^{-1}\mathbf{Q }_k^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) }{\left\| \mathbf{Q }_k\left( \mathbf{Q }_k^{'} \mathbf{Q }_k \right) ^{-1}\mathbf{Q }_k^{'}\left( \displaystyle \sum ^{K}_{l \ne k , l=1} c_{kl} \theta ^{(s_0)}_{kl}\mathbf{z }_l^{(s_0)}\right) \right\| }\\&=\mathbf{Q }_k {\mathbf{u}}_k^{(s_0+1)}\\&= \mathbf{y }_k^{(s_0+1)}. \end{aligned}$$

\(\square \)
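The whitening step at the heart of Lemma 1 can be checked numerically. The following sketch (not the authors' code; NumPy and random data are assumed) verifies that \(\mathbf{Q }_k=\mathbf{X }_k\left( \mathbf{X }_k^{'}\mathbf{X }_k/n \right) ^{-1/2}\) satisfies \(\mathbf{Q }_k^{'}\mathbf{Q }_k=n\mathbf{I }\) and leaves the orthogonal projector onto the block's column space, hence the generated components, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 3

# Hypothetical centred data block X_k.
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)

# Transformation used in Lemma 1: Q_k = X_k (X_k' X_k / n)^(-1/2).
S = X.T @ X / n
vals, vecs = np.linalg.eigh(S)
Q = X @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

# Property exploited in the proof of Lemma 2: Q_k' Q_k = n I.
assert np.allclose(Q.T @ Q, n * np.eye(p))

# The projector onto the column space is unchanged,
# X_k (X_k' X_k)^(-1) X_k' = Q_k Q_k' / n,
# so both procedures generate the same sequence of components.
P_X = X @ np.linalg.solve(X.T @ X, X.T)
assert np.allclose(P_X, Q @ Q.T / n)
```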

1.2 A2

Proof of Lemma 2.

Substituting step 4 into step 5 of the equivalent form of Lohmöller’s procedure (see column 2 in Table 4), and noting that \({\mathbf{Q}}_{k}^{'}{ {\mathbf{Q}}_k}=n\mathbf{I }_{p_k}\), it follows:

$$\begin{aligned} {{\mathbf{u}}}^{(s+1)}_{k}=\sqrt{n}\frac{{ {\mathbf{Q}}_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s)}}{\left\| { {\mathbf{Q}}_{k}{ {\mathbf{Q}}_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s)}}\right\| }=\frac{{ {\mathbf{Q}}_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s)}}{\left\| { { {\mathbf{Q}}_k}^{'} { \tilde{ \mathbf{z }}_k}^{(s)}}\right\| } \end{aligned}$$
(26)

Substituting step 3 in (26), it follows:

$$\begin{aligned} {\mathbf{u}}_k^{(s+1)}= \frac{ {\sum _{l=1}^K} c_{kl} \theta _{kl}^{(s)}\mathbf{Q }'_k\mathbf{Q }_l {\mathbf{u}}_l^{(s)}}{ \bigg \Vert {\sum _{l=1}^K} c_{kl} \theta _{kl} ^{(s)}\mathbf{Q }'_k\mathbf{Q }_l {\mathbf{u}}_l^{(s)} \bigg \Vert }= \frac{{\sum _{l=1}^K} \mathbf{R }_{kl}( {\mathbf{u}}^{(s)}) {\mathbf{u}}_l^{(s)}}{ \bigg \Vert {\sum _{l=1}^K} \mathbf{R }_{kl}( {\mathbf{u}}^{(s)}) {\mathbf{u}}_l^{(s)} \bigg \Vert } =\frac{{\sum _{l=1}^K} \mathbf{R }_{kl}( {\mathbf{u}}^{(s)}) {\mathbf{u}}_l^{(s)} }{{\lambda _k}^{(s)} }. \end{aligned}$$

with \(\lambda _k^{(s)}= \bigg \Vert {\sum _{l=1}^K} \mathbf{R }_{kl}( {\mathbf{u}}^{(s)}) {\mathbf{u}}_l^{(s)} \bigg \Vert \).

Equivalently,

$$\begin{aligned} \left\{ \begin{array}{c c l} \mathbf{Q }^{'}_k \left[ {\sum _{l=1}^K} c_{kl} \theta _{kl}^{(s)}\mathbf{Q }_l{\mathbf{u}}^{(s)}_l\right] &{} =&{} {\lambda _k}^{(s)} {\mathbf{u}}_k^{(s+1)} \ ; \ k=1, ..., K. \\ {\mathbf{u}}^{(s)'}_k {\mathbf{u}}^{(s)}_k &{} =&{}1 \ ; k=1,2,\cdots ,K. \end{array} \right. \end{aligned}$$
(27)

\(\square \)
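Update (26)–(27) can be sketched as a blockwise normalised (power-iteration-like) step. The code below is an illustrative sketch, not the authors' implementation: NumPy, random whitened blocks, a hypothetical three-block design matrix `c`, and the centroid scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, p = 30, 3, [2, 3, 2]

def whiten(X):
    # Q = X (X'X/n)^(-1/2), so that Q'Q = n I (setting of Lemma 2).
    vals, vecs = np.linalg.eigh(X.T @ X / n)
    return X @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

Q = [whiten(rng.standard_normal((n, pk))) for pk in p]
c = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # design matrix

# Current unit weights u_k^(s) and components z_k^(s) = Q_k u_k^(s).
u = [w / np.linalg.norm(w) for w in (rng.standard_normal(pk) for pk in p)]
z = [Q[k] @ u[k] for k in range(K)]

# Centroid scheme: theta_kl = sign(cor(z_k, z_l)).
theta = np.sign(np.corrcoef(np.column_stack(z).T))

# Update (26)-(27): u_k^(s+1) is the normalised matched signal,
# lambda_k^(s) the corresponding norm.
u_new, lam = [], []
for k in range(K):
    s_k = sum(c[k, l] * theta[k, l] * z[l] for l in range(K) if l != k)
    w = Q[k].T @ s_k
    lam.append(np.linalg.norm(w))
    u_new.append(w / lam[k])

# Each new weight vector is unit norm, and (27) holds by construction.
assert all(np.isclose(np.linalg.norm(uk), 1.0) for uk in u_new)
```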

1.3 A3

Proof of Lemma 3.

Recall that \( {\mathbf{H}} \mathbf{a } = {\varvec{\Gamma }} \mathbf{b }\) and \(\Vert \mathbf{a }_k\Vert =\Vert \mathbf{b }_k\Vert =1\).

By expanding:

$$\begin{aligned} (\mathbf{b }-\mathbf{a })^{'} {\mathbf{H}} (\mathbf{b }-\mathbf{a }) ={\mathbf{b }}^{'} {\mathbf{H}} \mathbf{b }+{\mathbf{a }}^{'} {\mathbf{H}} \mathbf{a } - 2 \sum _{ k=1} ^K \gamma _k \end{aligned}$$
(28)

and

$$\begin{aligned} (\mathbf{b }-\mathbf{a })^{'} {\varvec{\Gamma }} (\mathbf{b }-\mathbf{a }) = 2 \sum _{ k=1} ^K \gamma _k - 2 {\mathbf{a }}^{'} {\varvec{\Gamma }} \mathbf{b } \end{aligned}$$
(29)

Noting that \( {\mathbf{a }}^{'} {\varvec{\Gamma }} \mathbf{b } = {\mathbf{a }}^{'} {\mathbf{H}} \mathbf{a } \) and summing (28) and (29), it follows that \((\mathbf{b }-\mathbf{a })' [ {\mathbf{H}} + {\varvec{\Gamma }} ] (\mathbf{b }-\mathbf{a }) = \mathbf{b }' {\mathbf{H}} \mathbf{b } - \mathbf{a }' {\mathbf{H}} \mathbf{a }\). \(\square \)
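The identity of Lemma 3 is easy to verify numerically. In the sketch below (assumptions: NumPy, random data, and one-dimensional blocks, so that the unit-norm condition forces \(a_k=\pm 1\)), \(\gamma _k\) and \(\mathbf{b }\) are constructed from \( {\mathbf{H}} \mathbf{a } = {\varvec{\Gamma }} \mathbf{b }\):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 5

# Symmetric H; blocks of a and b taken one-dimensional for simplicity,
# so the unit-norm condition forces a_k = +/-1.
H = rng.standard_normal((K, K))
H = H + H.T
a = np.sign(rng.standard_normal(K))

# Define gamma_k and b from H a = Gamma b with ||b_k|| = 1.
Ha = H @ a
gamma = np.abs(Ha)
b = Ha / gamma
Gamma = np.diag(gamma)

# Identity of Lemma 3: (b-a)'[H + Gamma](b-a) = b'Hb - a'Ha.
lhs = (b - a) @ (H + Gamma) @ (b - a)
rhs = b @ H @ b - a @ H @ a
assert np.isclose(lhs, rhs)
```

No positive semidefiniteness is needed here; the identity is purely algebraic.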

1.4 A4

Proof of Lemma 4. Using Lemma 3 with \(\mathbf{a } = {\mathbf{u}}^{(s)}, \mathbf{b } = {\mathbf{u}}^{(s+1)}, {\mathbf{H}} =\mathbf{R }^{(s)}\) and \( {\varvec{\Gamma }} = {\varvec{\Lambda }}^{(s)},\) it follows,

$$\begin{aligned}&\mathbf{{e}}^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{{e}}^{(s)}={{\mathbf{u}}^{(s+1)}}^{'} \mathbf{R }^{(s)} {\mathbf{u}}^{(s+1)}-{{\mathbf{u}}^{(s)}}^{'} \mathbf{R }^{(s)} {\mathbf{u}}^{(s)} \\ \end{aligned}$$

By expanding,

$$\begin{aligned}&{{\mathbf{u}}^{(s+1)}}^{'} \mathbf{R }^{(s)} {\mathbf{u}}^{(s+1)}-{{\mathbf{u}}^{(s)}}^{'} \mathbf{R }^{(s)} {\mathbf{u}}^{(s)} \\&=\sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \theta _{kl}^{(s)} \bigg (\mathbf{Q }_k {\mathbf{u}}_k^{(s+1)}\bigg )' \mathbf{Q }_l{\mathbf{u}}_l^{(s+1)}- \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \theta _{kl}^{(s)}\bigg (\mathbf{Q }_k {\mathbf{u}}_k^{(s)}\bigg )' \mathbf{Q }_l{\mathbf{u}}_l^{(s)} \\&= \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \theta _{kl}^{(s)} n r_{kl}^{(s+1)} - \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \theta _{kl}^{(s)} n r_{kl}^{(s)}. \end{aligned}$$
(1):

If the centroid scheme is considered \( \left( \theta _{kl}^{(s)}= sign \left( r_{kl}^{(s)}\right) \right) \), then

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} = n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)}. \end{aligned}$$

On one hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)} = \rho ^{(s)}. \end{aligned}$$

On the other hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\le & {} \left| \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\right| \le \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left| r_{kl}^{(s+1)} \right| \\= & {} \rho ^{(s+1)}. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
(2):

If the factorial scheme is considered, then

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)}= n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2. \end{aligned}$$

On one hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2= \rho ^{(s)}. \end{aligned}$$

On the other hand, since \( c_{kl}= c^2_{kl}\), it follows:

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)}&\le \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K { c_{kl}}^2 \left| r_{kl}^{(s)} \right| \left| r_{kl}^{(s+1)} \right| \\&\le \sqrt{ \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K { \left( c_{kl} r_{kl}^{(s)} \right) }^2 }\times \sqrt{ \sum _{\begin{array}{c} k,l = 1 \\ \end{array}}^K \left( c_{kl} r_{kl}^{(s+1)} \right) ^2} \\&\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)} } \end{aligned}$$
$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)} \right) . \end{aligned}$$

Since the left-hand side of the previous inequality is nonnegative, it follows:

$$\begin{aligned} 0\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)}= \sqrt{\rho ^{(s)}} \left( \sqrt{\rho ^{(s+1)}}- \sqrt{ \rho ^{(s)}} \right) . \end{aligned}$$

As a consequence,

$$\begin{aligned} \sqrt{\rho ^{(s+1)}} \ge \sqrt{ \rho ^{(s)}} \end{aligned}$$

Thus,

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \mathbf{R }^{(s)} + {\varvec{\Lambda }}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$

\(\square \)
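The Cauchy–Schwarz step used for the factorial scheme can be isolated and checked on arbitrary data. The sketch below (assumptions: NumPy, a hypothetical binary design matrix, and random symmetric "correlation" matrices) illustrates only this inequality, not the full iteration:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 6

# Binary symmetric design matrix c (so c_kl = c_kl^2), zero diagonal.
c = (rng.random((K, K)) < 0.5).astype(float)
c = np.triu(c, 1)
c = c + c.T

# Two arbitrary symmetric matrices playing the roles of r^(s) and r^(s+1).
def sym_corr():
    r = rng.uniform(-1, 1, (K, K))
    return (r + r.T) / 2

r0, r1 = sym_corr(), sym_corr()

# Cauchy-Schwarz step of the factorial-scheme proof:
# sum c r0 r1 <= sqrt(sum c r0^2) * sqrt(sum c r1^2)
#             =  sqrt(rho^(s)) * sqrt(rho^(s+1)).
lhs = np.sum(c * r0 * r1)
rho0, rho1 = np.sum(c * r0 ** 2), np.sum(c * r1 ** 2)
assert lhs <= np.sqrt(rho0) * np.sqrt(rho1) + 1e-12
```

The key fact is \(c_{kl}=c_{kl}^2\): the binary weights can be split between the two factors before applying Cauchy–Schwarz.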

1.5 A5

Proof of Lemma 5.

Since \({{\varvec{\Omega }}}\) is symmetric positive semidefinite, it can be decomposed as \( {\varvec{\Omega }}= {\sum }^K_{i=1} \gamma _i \mathbf{a }_i \mathbf{a }'_i \), where the \(\gamma _i \ge 0\) are the eigenvalues of \( {\varvec{\Omega }}\) and the \(\mathbf{a }_i\) the associated eigenvectors.

Thus, each element \(\omega _{kl} \) can be written as

$$\begin{aligned} \omega _{kl}= \sum _{i=1}^K \gamma _i a_{ki} a_{li}, \end{aligned}$$
(30)

where \(a_{ki}\) is the kth component of the vector \(\mathbf{a }_i\).

Recall that \(\mathbf{A }\left( {{\varvec{\Omega }}} \right) \) is given by:

$$\begin{aligned} \mathbf{A }\left( {{\varvec{\Omega }}} \right) =\left( \begin{array}{cccc} n \omega _{11}\mathbf{I }_{p_1}&{}\omega _{12} {\mathbf{Q}}'_1 {\mathbf{Q}}_2&{}\cdots &{}\omega _{1K} \mathbf{Q }'_1\mathbf{Q }_K\\ \omega _{21} \mathbf{Q }'_2\mathbf{Q }_1&{}n \omega _{22}\mathbf{I }_{p_2}&{}\cdots &{}\omega _{2K} \mathbf{Q }'_2\mathbf{Q }_K\\ \vdots &{}\vdots &{} \ddots &{} \vdots \\ \omega _{K1} \mathbf{Q }'_K\mathbf{Q }_1&{}\omega _{K2} \mathbf{Q }'_K\mathbf{Q }_2&{}\cdots &{}n \omega _{KK}\mathbf{I }_{p_K} \end{array} \right) \end{aligned}$$

For any vector \(\mathbf{v } = (\mathbf{v }'_1,\cdots ,\mathbf{v }'_K)'\), clearly

$$\begin{aligned} \mathbf{v }' \mathbf{A } ({\varvec{\Omega }})\mathbf{v }={\sum }^K_{k,l=1}\omega _{kl}\mathbf{v }'_k\mathbf{Q }'_k\mathbf{Q }_l\mathbf{v }_l \end{aligned}$$

Substituting (30) in the previous equation, it follows:

$$\begin{aligned} \mathbf{v }' \mathbf{A } ({\varvec{\Omega }})\mathbf{v }&={\sum }^K_{k,l=1}\left( {\sum }^K_{i=1} \gamma _i a_{ki} a_{li}\right) \mathbf{v }'_k\mathbf{Q }'_k\mathbf{Q }_l\mathbf{v }_l \\&={\sum }^K_{k,l=1}{\sum }^K_{i=1} (\sqrt{\gamma _i} a_{ki}\mathbf{v }_k)' \mathbf{Q }'_k\mathbf{Q }_l (\sqrt{\gamma _i} a_{li}\mathbf{v }_l) \\&={\sum }^K_{i=1} {\sum }^K_{k,l=1}(\sqrt{\gamma _i} a_{ki}\mathbf{v }_k)' \mathbf{Q }'_k\mathbf{Q }_l (\sqrt{\gamma _i} a_{li}\mathbf{v }_l) \\&= {\sum }^K_{i=1} {{\varvec{\alpha }}_i}^{'} \mathbf{Q }'\mathbf{Q } {\varvec{\alpha }}_i \ge 0 \end{aligned}$$

where \({\varvec{\alpha }}_i=(\sqrt{\gamma _i}a_{1i} \mathbf{{v}}'_1,\ldots ,\sqrt{\gamma _i}a_{Ki} \mathbf{{v}}'_K)'\) and \(\mathbf{Q }=[\mathbf{Q }_1|\mathbf{Q }_2 | \ldots | \mathbf{Q }_K ]\). \(\square \)
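Lemma 5 can be illustrated by assembling \(\mathbf{A }({\varvec{\Omega }})\) for random whitened blocks and a random positive semidefinite \({\varvec{\Omega }}\), then checking its smallest eigenvalue. This is a numerical sketch under assumed data (NumPy), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(4)
n, K, p = 25, 3, [2, 2, 3]

def whiten(X):
    # Q_k = X_k (X_k'X_k/n)^(-1/2), so that Q_k'Q_k = n I.
    vals, vecs = np.linalg.eigh(X.T @ X / n)
    return X @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

Q = [whiten(rng.standard_normal((n, pk))) for pk in p]

# Random symmetric positive semidefinite Omega.
B = rng.standard_normal((K, K))
Omega = B @ B.T

# Assemble A(Omega): diagonal blocks n*omega_kk*I_{p_k},
# off-diagonal blocks omega_kl * Q_k'Q_l.
blocks = [[n * Omega[k, k] * np.eye(p[k]) if k == l
           else Omega[k, l] * (Q[k].T @ Q[l]) for l in range(K)]
          for k in range(K)]
A = np.block(blocks)

# Lemma 5: A(Omega) is positive semidefinite.
assert np.linalg.eigvalsh(A).min() > -1e-8
```

Note that, because \(\mathbf{Q }_k'\mathbf{Q }_k=n\mathbf{I }\), the diagonal blocks \(n\omega _{kk}\mathbf{I }_{p_k}\) coincide with \(\omega _{kk}\mathbf{Q }_k'\mathbf{Q }_k\), consistent with the quadratic form in the proof.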

1.6 A6

Proof of Lemma 6.

The proof begins by showing that the signless Laplacian matrix \(\mathbf {D} + \mathbf {C}\) is positive semidefinite.

Let \({\varvec{\alpha }}=({\varvec{\alpha }}_1, \ldots , {\varvec{\alpha }}_K)\) and \(\vert {\varvec{\alpha }}\vert =(\vert {\varvec{\alpha }}_1\vert , \ldots , \vert {\varvec{\alpha }}_K\vert )\). The Laplacian matrix (\(\mathbf{D }-\mathbf{C }\)) is symmetric positive semidefinite (Li and Zhang 1998). It follows that \(\vert {\varvec{\alpha }}\vert ^{'} (\mathbf{D }-\mathbf{C })\vert {\varvec{\alpha }}\vert \ge 0\), or equivalently

$$\begin{aligned} \sum _{k=1}^K d_k {\varvec{\alpha }}^2_k \ge \underset{ k,l=1, l \ne k }{\sum ^K} c_{kl}\vert {\varvec{\alpha }}_k\vert \vert {\varvec{\alpha }}_l\vert \end{aligned}$$
(31)

Suppose that \(\mathbf{D }+\mathbf{C }\) is not positive semidefinite. Then there is a vector \( \beta =(\beta _1,\ldots ,\beta _K)\ne \mathbf{0 }\) such that:

$$\begin{aligned} \sum _{k=1}^K d_k\beta ^2_k < - \underset{ l \ne k }{\sum _{ k,l=1} ^K} c_{kl}\beta _k\beta _l \end{aligned}$$
(32)

Since the left-hand side of (32) is nonnegative, its right-hand side is strictly positive, and it follows:

$$\begin{aligned} \sum _{k=1}^K d_k\beta ^2_k< \left| - \underset{ l \ne k }{\sum _{ k,l=1} ^K} c_{kl}\beta _k\beta _l \right| \le \underset{ l \ne k }{\sum _{ k,l=1} ^K} c_{kl}|\beta _k||\beta _l| \end{aligned}$$
(33)

Thus,

$$\begin{aligned} \sum _{k=1}^K d_k\beta ^2_k < \underset{ }{\sum _{ k,l=1} ^K} c_{kl}|\beta _k||\beta _l| \end{aligned}$$
(34)

But (34) contradicts (31) applied to the vector \((|\beta _1|, \ldots , |\beta _K|)\); hence \(\mathbf{D }+\mathbf{C }\) is positive semidefinite.

Let us now prove that the matrix \( (\mathbf{D }+\mathbf{C }) \circ {\varvec{\theta }}^{(s)} \) is symmetric positive semidefinite.

(a):

Factorial scheme : \({\varvec{\theta }}^{(s)}= \left[ \theta ^{(s)}_{kl}\right] = \left[ cor\left( \mathbf{z }^{(s)}_k,\mathbf{z }^{(s)}_l\right) \right] \).

Recall that the Hadamard product of two symmetric positive semidefinite matrices is a symmetric positive semidefinite matrix (Schur 1911). Since the correlation matrix \({\varvec{\theta }}^{(s)}\) is positive semidefinite and \(\mathbf{D }+\mathbf{C }\) was shown above to be positive semidefinite, \((\mathbf{D }+\mathbf{C }) \circ {\varvec{\theta }}^{(s)}\) is positive semidefinite.

(b):

Centroid scheme : \({\varvec{\theta }}^{(s)}= \left[ \theta ^{(s)}_{kl}\right] = \left[ sign\left( cor(\mathbf{z }^{(s)}_k,\mathbf{z }^{(s)}_l)\right) \right] \).

\({\varvec{\theta }}^{(s)}\) can be written as \({\varvec{\theta }}^{(s)}= {\varvec{\theta }}_{+}^{(s)} - {\varvec{\theta }}_{-}^{(s)}\), where \({\varvec{\theta }}_{+}^{(s)} \) is the positive part of \({\varvec{\theta }}^{(s)}\) and \({\varvec{\theta }}_{-}^{(s)}\) its negative part. Therefore \( \mathbf{C } \circ {\varvec{\theta }}^{(s)} = \mathbf{C }\circ {\varvec{\theta }}_{+}^{(s)} - \mathbf{C } \circ {\varvec{\theta }}_{-}^{(s)}. \)

\( \mathbf{C }\circ {\varvec{\theta }}_{+}^{(s)}\) and \( \mathbf{C }\circ {\varvec{\theta }}_{-}^{(s)}\) can be seen as the adjacency matrices of two undirected graphs, denoted \( {\mathbf{G }_{+}}^{(s)}\) and \({\mathbf{G }_{-}}^{(s)}\) respectively.

Let \({\mathbf{D }_+}^{(s)}\) and \({\mathbf{D }_- }^{(s)}\) be the diagonal degree matrices associated with \( {\mathbf{G }_{+}}^{(s)}\) and \({\mathbf{G }_{-}}^{(s)}\) respectively. The signless Laplacian matrix \( \left( {\mathbf{D }_+}^{(s)} + \mathbf{C }\circ {{\varvec{\theta }}_{+}}^{(s)} \right) \) of \({\mathbf{G }_{+}}^{(s)}\) is positive semidefinite, the Laplacian matrix \( \left( {\mathbf{D }_-}^{(s)} - \mathbf{C }\circ {{\varvec{\theta }}_{-}}^{(s)} \right) \) of \({\mathbf{G }_{-}}^{(s)}\) is also positive semidefinite, and clearly \(\mathbf{D } = {\mathbf{D }_+}^{(s)} + {\mathbf{D }_-}^{(s)}\). Moreover, the matrix \(\left( \mathbf{D } + \mathbf{C } \right) \circ {\varvec{\theta }}^{(s)}\) can be written as:

$$\begin{aligned} \left( \mathbf{D } + \mathbf{C } \right) \circ {\varvec{\theta }}^{(s)} = \mathbf{D } + \mathbf{C } \circ {\varvec{\theta }}^{(s)} = ({\mathbf{D }_+}^{(s)} + \mathbf{C }\circ {{\varvec{\theta }}_{+}}^{(s)} ) + ( {\mathbf{D }_-}^{(s)} - \mathbf{C }\circ {{\varvec{\theta }}_{-}}^{(s)}). \end{aligned}$$

It follows that the matrix \(\left( \mathbf{D } + \mathbf{C } \right) \circ {\varvec{\theta }}^{(s)} \) is symmetric positive semidefinite, as the sum of two positive semidefinite matrices. \(\square \)
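The two spectral facts used in this proof (positive semidefiniteness of the Laplacian and the signless Laplacian, and its stability under the Hadamard product with the centroid sign matrix) can be checked numerically. The sketch below assumes NumPy and a random graph:

```python
import numpy as np

rng = np.random.default_rng(5)
K = 6

# Adjacency matrix C of a random undirected graph, and its degree matrix D.
C = (rng.random((K, K)) < 0.4).astype(float)
C = np.triu(C, 1)
C = C + C.T
D = np.diag(C.sum(axis=1))

# Both the signless Laplacian D + C and the Laplacian D - C are
# positive semidefinite (Li and Zhang 1998).
assert np.linalg.eigvalsh(D + C).min() > -1e-10
assert np.linalg.eigvalsh(D - C).min() > -1e-10

# Centroid scheme: theta is a symmetric sign matrix with unit diagonal.
# By the decomposition in Lemma 6, (D + C) o theta = D + C o theta
# remains positive semidefinite.
z = rng.standard_normal((10, K))
theta = np.sign(np.corrcoef(z.T))
assert np.linalg.eigvalsh(D + C * theta).min() > -1e-10
```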

1.7 A7

Proof of Lemma 7.

Using Lemma 3 with \(\mathbf{a } = {\mathbf{u}}^{(s)}, \mathbf{b } = {\mathbf{u}}^{(s+1)}, {\mathbf{H}} = \tilde{\mathbf{R }}^{(s)}\) and \( {\varvec{\Gamma }} = \tilde{ {\varvec{\Lambda }}}^{(s)},\) it follows,

$$\begin{aligned}&\mathbf{e }^{(s)'} \left[ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \right] \mathbf{e }^{(s)}={{\mathbf{u}}^{(s+1)}}^{'} \tilde{\mathbf{R }}^{(s)} {\mathbf{u}}^{(s+1)}-{{\mathbf{u}}^{(s)}}^{'} \tilde{\mathbf{R }}^{(s)} {\mathbf{u}}^{(s)} \\ \end{aligned}$$

By expanding,

$$\begin{aligned}&{{\mathbf{u}}^{(s+1)}}^{'} \tilde{\mathbf{R }}^{(s)}{\mathbf{u}}^{(s+1)}-{{\mathbf{u}}^{(s)}}^{'}\tilde{\mathbf{R }}^{(s)} {\mathbf{u}}^{(s)}\\&=\sum _{\begin{array}{c} k,l = 1 \end{array}}^K (c_{kl}+d_{kl}) \theta _{kl}^{(s)} \bigg (\mathbf{Q }_k {\mathbf{u}}_k^{(s+1)}\bigg )' \mathbf{Q }_l{\mathbf{u}}_l^{(s+1)}- \sum _{\begin{array}{c} k,l = 1, \end{array}}^K (c_{kl}+d_{kl}) \theta _{kl}^{(s)}\bigg (\mathbf{Q }_k {\mathbf{u}}_k^{(s)}\bigg )' \mathbf{Q }_l{\mathbf{u}}_l^{(s)} \\&= \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \theta _{kl}^{(s)} n r_{kl}^{(s+1)} - \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \theta _{kl}^{(s)} n r_{kl}^{(s)}. \end{aligned}$$
(1):

If the centroid scheme is considered \( \left( \theta _{kl}^{(s)}= sign \left( r_{kl}^{(s)}\right) \right) \), it follows,

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \tilde{\mathbf{R }}^{(s)}+ \tilde{ {\varvec{\Lambda }}}^{(s)} \right] \mathbf{e }^{(s)} = n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)}. \end{aligned}$$

On one hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s)} = \rho \left( {\mathbf{{z}}_1^{(s)}, \mathbf{{z}}_2^{(s)}, \cdots , \mathbf{{z}}_K^{(s)}} \right) . \end{aligned}$$

On the other hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\le & {} \left| \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl}sign \left( r_{kl}^{(s)}\right) r_{kl}^{(s+1)}\right| \le \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left| r_{kl}^{(s+1)} \right| \\= & {} \rho ^{(s+1)}. \end{aligned}$$

Thus,

$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$
(2):

If the factorial scheme is considered, then

$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)}= n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)} - n \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2. \end{aligned}$$

On one hand,

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} \left( r_{kl}^{(s)} \right) ^2= \rho \left( {\mathbf{{z}}_1^{(s)}, \mathbf{{z}}_2^{(s)}, \cdots , \mathbf{{z}}_K^{(s)}} \right) . \end{aligned}$$

On the other hand, since \( c_{kl}= c^2_{kl}\),

$$\begin{aligned} \sum _{\begin{array}{c} k,l = 1 \end{array}}^K c_{kl} r_{kl}^{(s)} r_{kl}^{(s+1)}&\le \sum _{\begin{array}{c} k,l = 1 \end{array}}^K { c_{kl}}^2 \left| r_{kl}^{(s)} \right| \left| r_{kl}^{(s+1)} \right| \\&\le \sqrt{ \sum _{\begin{array}{c} k,l = 1 \end{array}}^K { \left( c_{kl} r_{kl}^{(s)} \right) }^2 }\times \sqrt{ \sum _{\begin{array}{c} k,l = 1 \end{array}}^K \left( c_{kl} r_{kl}^{(s+1)} \right) ^2} \\&\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)} } \end{aligned}$$

Hence,

$$\begin{aligned} \mathbf{e }^{(s)'} \bigg [ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \bigg ]\mathbf{e }^{(s)} \le n \left( \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)} \right) . \end{aligned}$$

Since the left-hand side of the previous inequality is nonnegative, it follows:

$$\begin{aligned} 0\le \sqrt{\rho ^{(s)}}\sqrt{\rho ^{(s+1)}}- \rho ^{(s)}= \sqrt{\rho ^{(s)}} \left( \sqrt{\rho ^{(s+1)}}- \sqrt{ \rho ^{(s)}} \right) . \end{aligned}$$

As a consequence,

$$\begin{aligned} \sqrt{\rho ^{(s+1)}} \ge \sqrt{ \rho ^{(s)}} \end{aligned}$$

Thus,

$$\begin{aligned} \mathbf{e }^{(s)'} \left[ \tilde{\mathbf{R }}^{(s)} + \tilde{ {\varvec{\Lambda }}}^{(s)} \right] \mathbf{e }^{(s)} \le n \left( \rho ^{(s+1)}-\rho ^{(s)} \right) . \end{aligned}$$

\(\square \)

Appendix B

Table 10 Dataset 1 (Hanafi 2007), available in semPLS R package (Monecke and Leisch 2012)
Table 11 Dataset 2
Table 12 Dataset 3
Table 13 Dataset 4
Table 14 Dataset 5
Table 15 Dataset 6


About this article


Cite this article

Hanafi, M., El Hadri, Z., Sahli, A. et al. Overcoming convergence problems in PLS path modelling. Comput Stat 37, 2437–2470 (2022). https://doi.org/10.1007/s00180-022-01204-9
