Matrix Normal Cluster-Weighted Models

Finite mixtures of regressions with fixed covariates are a commonly used model-based clustering methodology to deal with regression data. However, they assume assignment independence, i.e. the allocation of data points to the clusters is made independently of the distribution of the covariates. In order to take the latter aspect into account, finite mixtures of regressions with random covariates, also known as cluster-weighted models (CWMs), have been proposed in the univariate and multivariate literature. In this paper, the CWM is extended to matrix data, i.e. data where a set of variables is simultaneously observed at different time points or locations. Specifically, the cluster-specific marginal distribution of the covariates, and the cluster-specific conditional distribution of the responses given the covariates, are assumed to be matrix normal. Maximum likelihood parameter estimates are derived using an ECM algorithm. Parameter recovery, classification assessment and the capability of the BIC to detect the underlying groups are analyzed on simulated data. Finally, two real data applications concerning educational indicators and the Italian non-life insurance market are presented.


Introduction
Finite mixture models have seen increasing use over the last decades (for a recent survey, see McNicholas, 2016b). Because of their flexibility, they are a suitable statistical tool for modeling a wide range of phenomena characterized by unobserved heterogeneity, and constitute a powerful device for clustering and classification. Specifically, in their "direct application", each mixture component represents a group (or cluster) within the data, and the aim is to identify these groups and estimate the parameters of the group-conditional distributions (McLachlan and Peel, 2000; McNicholas, 2016a). If no exogenous variables explain the means and the variances of each component, such models are also called unconditional mixture models. However, when there is a linear relationship between some of the variables, important insight can be gained by accounting for functional dependencies between them.
For this reason, finite mixtures of regression models with fixed covariates (FMR) have been proposed in the literature (see DeSarbo and Cron, 1988; Frühwirth-Schnatter, 2006, for examples). Finite mixtures of regression models with concomitant variables (FMRC; Dayton and Macready, 1988) are an extension of FMR in which the mixing weights depend on some concomitant variables (often the covariates themselves) and are usually modeled by a multinomial logistic model (see Punzo, 2016, 2019, 2020, for details). Unfortunately, these methodologies do not explicitly use the distribution of the covariates for clustering, i.e. the assignment of data points to clusters does not directly exploit any information from the distribution of the covariates. Differently from these approaches, finite mixtures of regressions with random covariates (Gershenfeld, 1997; Gershenfeld et al., 1999), also known as cluster-weighted models (CWMs), allow for such a functional dependency. This occurs because, for each mixture component, CWMs decompose the joint distribution of responses and covariates into the product of the marginal distribution of the covariates and the conditional distribution of the responses given the covariates.
Several CWMs have been introduced in the univariate and multivariate literature. Most of them consider a univariate response variable, along with a set of covariates, modeled by a univariate and a multivariate distribution, respectively (see Ingrassia et al., 2012, 2014; Punzo, 2014, for examples). Instead, fewer CWMs exist in the case of a multivariate response (see, e.g., Dang et al., 2017). However, over the years, there has been an increasing interest in applications involving matrix-variate (three-way) data.
This data structure occurs in several different application domains, such as longitudinal data on multiple response variables, spatial multivariate data, multivariate repeated measures or spatio-temporal data. Nevertheless, there exists a limited number of contributions involving matrix-variate regression models. First introduced by Viroli (2012), these models are a natural generalization of the multivariate-multiple regression (see also Anderlucci et al., 2014). In the mixture framework, finite mixtures of matrix-variate regressions (MN-FMR) have recently been considered by Melnykov and Zhu (2019). There are no matrix-variate CWMs in the literature, and this paper aims to fill this gap by introducing and discussing a matrix-variate CWM in which the cluster-specific marginal distribution of the covariates, and the cluster-specific conditional distribution of the responses given the covariates, are assumed to be matrix normal.
The remainder of the paper is organized as follows. In Section 2, some preliminary aspects are described. The matrix normal cluster-weighted model (MN-CWM) and the expectation-conditional maximization (ECM) algorithm for parameter estimation are discussed in Section 3. Computational and operational aspects are laid out in Section 4. In the simulation study outlined in Section 5, the parameter recovery and the classification performance of the MN-CWM are investigated, as well as the capability of the Bayesian information criterion (BIC; Schwarz, 1978) to detect the underlying group structure.
Therein, the MN-CWM is also compared to the MN-FMR and to the multivariate-multiple normal CWM (MMN-CWM). The application of the MN-CWM to two real datasets, concerning educational indicators and the Italian non-life insurance market, is then presented in Section 6, whereas some conclusions and ideas for future developments are drawn in Section 7.

Matrix normal distribution
A random matrix $X \in \mathbb{R}^{p \times r}$ is said to follow a matrix normal (MN) distribution, denoted $X \sim N_{p \times r}(M, \Phi, \Psi)$, if its probability density function is

\[
\phi_{p\times r}(X; M, \Phi, \Psi) = (2\pi)^{-\frac{pr}{2}}\,|\Phi|^{-\frac{r}{2}}\,|\Psi|^{-\frac{p}{2}} \exp\left\{-\frac{1}{2}\,\mathrm{tr}\!\left[\Phi^{-1}(X-M)\,\Psi^{-1}(X-M)^{\top}\right]\right\}, \qquad (1)
\]

where M is the p × r mean matrix, and Φ and Ψ are the p × p and r × r covariance matrices associated with the p rows and r columns, respectively. An equivalent definition specifies the (p × r)-matrix normal distribution as a special case of the pr-multivariate normal distribution. Specifically,

\[
X \sim N_{p\times r}(M, \Phi, \Psi) \iff \mathrm{vec}(X) \sim N_{pr}\!\left(\mathrm{vec}(M),\, \Psi \otimes \Phi\right), \qquad (2)
\]

where $N_{pr}(\cdot)$ denotes the pr-variate normal distribution, vec(·) is the vectorization operator and ⊗ denotes the Kronecker product. However, the MN distribution has the desirable features of simultaneously modeling and identifying the between- and within-variable variability, as well as of reducing the number of free covariance parameters from pr(pr + 1)/2 to [r(r + 1)/2] + [p(p + 1)/2] − 1.
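To make the equivalence in (2) concrete, the following minimal R sketch evaluates the MN log-density both directly via (1) and through its vectorized form. The helper dmatnorm() is illustrative, written for this example only (not from an existing package), while dmvnorm() comes from the mvtnorm package.

```r
# Sketch: matrix normal density via (1) and via its vectorized form (2).
# dmatnorm() is an illustrative helper, not taken from an existing package.
library(mvtnorm)

dmatnorm <- function(X, M, Phi, Psi, log = FALSE) {
  p <- nrow(X); r <- ncol(X)
  E <- X - M
  # log-density: -pr/2 log(2*pi) - r/2 log|Phi| - p/2 log|Psi|
  #              - tr[Phi^-1 E Psi^-1 E']/2
  ld <- -0.5 * p * r * log(2 * pi) -
        0.5 * r * as.numeric(determinant(Phi)$modulus) -
        0.5 * p * as.numeric(determinant(Psi)$modulus) -
        0.5 * sum(diag(solve(Phi, E) %*% solve(Psi, t(E))))
  if (log) ld else exp(ld)
}

set.seed(1)
p <- 3; r <- 2
M   <- matrix(0, p, r)
Phi <- diag(p) + 0.3   # p x p row covariance (compound symmetry, positive definite)
Psi <- diag(r) + 0.1   # r x r column covariance
X   <- matrix(rnorm(p * r), p, r)

dmatnorm(X, M, Phi, Psi, log = TRUE)
# matches (up to rounding) the pr-variate normal of (2):
dmvnorm(c(X), mean = c(M), sigma = kronecker(Psi, Phi), log = TRUE)
```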

The matrix-variate regression model
Let Y ∈ R^{p×r} be a continuous random matrix of dimension p × r, containing p responses measured on r occasions. Suppose we observe a set of q covariates for each occasion, collected in a matrix X of dimension q × r. A generic matrix-variate regression model for Y has the form

\[
Y = \beta w^{\top} + B X + U, \qquad (3)
\]

where β is the p × 1 vector containing the intercept parameters, w is an r × 1 vector of ones, B is the p × q matrix containing the parameters related to the q covariates, and U is the p × r error term matrix. Model (3) can be expressed in compact notation as

\[
Y = B^{*} X^{*} + U, \qquad (4)
\]

where $B^{*} = (\beta, B)$ is the p × (1 + q) matrix involving all the parameters to be estimated and $X^{*} = (w, X^{\top})^{\top}$ is the (1 + q) × r matrix containing the information about the intercept and the q covariates (Viroli, 2012). If we assume U ∼ N_{p×r}(0, Φ, Ψ), then Y|X* ∼ N_{p×r}(B*X*, Φ, Ψ). Therefore, matrix-variate regression can be viewed as an encompassing framework containing, as special cases, the multivariate-multiple regression when r = 1, and the univariate-multiple regression when r = 1 and p = 1.
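As an illustration of the compact form (4), the following R sketch (with toy parameter values assumed only for this example) builds X* from a row of ones and the covariates, draws the error term U from a MN distribution via its Cholesky factors, and generates Y.

```r
# Sketch: generating from the matrix-variate regression (4), Y = B* X* + U.
set.seed(2)
p <- 2; q <- 3; r <- 4
beta  <- c(1, -1)                          # p x 1 intercept vector
B     <- matrix(runif(p * q, -1, 1), p, q) # p x q slope matrix
Bstar <- cbind(beta, B)                    # p x (1 + q)

X     <- matrix(rnorm(q * r), q, r)        # covariates
Xstar <- rbind(rep(1, r), X)               # (1 + q) x r: row of ones, then X

# U ~ N_{p x r}(0, Phi, Psi): L Z R has covariance Psi (x) Phi when
# L L' = Phi and R' R = Psi
Phi <- diag(p)
Psi <- 0.5^abs(outer(1:r, 1:r, "-"))       # AR(1)-type column covariance
U   <- t(chol(Phi)) %*% matrix(rnorm(p * r), p, r) %*% chol(Psi)

Y <- Bstar %*% Xstar + U
```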

The matrix normal CWM
Let (X, Y) be a pair of random matrices, defined as in Section 2.2, with joint distribution p(X, Y). Then, a general matrix CWM has the following joint distribution

\[
p(X, Y; \Theta) = \sum_{g=1}^{G} \pi_g \, p_g(Y|X^{*}) \, p_g(X), \qquad (5)
\]

where p_g(Y|X*) is the cluster-specific conditional distribution of the responses, p_g(X) is the cluster-specific marginal distribution of the covariates, and π_g is the mixing weight (with $\pi_g > 0$ and $\sum_{g=1}^{G} \pi_g = 1$). Furthermore, we assume that in each group the conditional expectation E(Y|X*) is a linear function of X* depending on some parameters.
In this paper, we focus on model (5) by assuming that both p_g(Y|X*) and p_g(X) are matrix normal densities, and that, in group g, E(Y|X*) = B*_g X*, as described in Section 2.2. Thus, model (5) can be rewritten as

\[
p(X, Y; \Theta) = \sum_{g=1}^{G} \pi_g \, \phi_{p\times r}\!\left(Y; B^{*}_{g} X^{*}, \Phi_{Yg}, \Psi_{Yg}\right) \phi_{q\times r}\!\left(X; M_g, \Phi_{Xg}, \Psi_{Xg}\right), \qquad (6)
\]

where Θ denotes the set of all parameters. For ease of understanding, a subscript with the variable name is added to the respective covariance matrices. Notice that there is an identifiability issue concerning the covariance matrices. Indeed, for a MN distribution,

\[
\Psi \otimes \Phi = (a\Psi) \otimes \left(a^{-1}\Phi\right) \quad \text{for any } a > 0.
\]

As a result, the matrices Φ and Ψ are identifiable only up to a multiplicative constant a. According to Gallaugher and McNicholas (2017, 2018), a way to obtain a unique solution is to fix the first diagonal element of the row covariance matrix at 1. Therefore, in model (6) we adopt this approach by setting the first diagonal elements of Φ_Yg and Φ_Xg to 1.
If the MN-CWM were not available, a possible approach would be to vectorize the matrices and fit the MMN-CWM, of which the MN-CWM is a special case; see Equation (2). However, such a procedure leads to two main concerns. The first is the overparameterization of the vectorized model. Secondly, this increased number of parameters may have direct effects on model selection, as will be analyzed in more detail in Section 5.3.

Parameter estimation
Parameter estimation is carried out via the expectation-conditional maximization (ECM) algorithm (Meng and Van Dyk, 1997). The only difference with respect to the expectation-maximization (EM) algorithm (Dempster et al., 1977) is that the M-step is replaced by a sequence of simpler and computationally convenient CM-steps. The EM algorithm cannot be directly implemented because there is no closed-form solution for the covariance matrices of the MN distribution: the update of each of the two covariance matrices depends on the value of the other at the previous iteration (Dutilleul, 1999).
Let S = {(X_i, Y_i); i = 1, …, N} be a sample of N independent observations from model (6). Then, the incomplete-data likelihood function is

\[
L(\Theta; S) = \prod_{i=1}^{N} \sum_{g=1}^{G} \pi_g \, \phi_{p\times r}\!\left(Y_i; B^{*}_{g} X^{*}_{i}, \Phi_{Yg}, \Psi_{Yg}\right) \phi_{q\times r}\!\left(X_i; M_g, \Phi_{Xg}, \Psi_{Xg}\right). \qquad (7)
\]

Within the formulation of mixture models, S is viewed as incomplete because, for each observation, we do not know its component membership. Let $z_i = (z_{i1}, \ldots, z_{iG})^{\top}$ be the component membership vector such that $z_{ig} = 1$ if (X_i, Y_i) comes from group g and $z_{ig} = 0$ otherwise. Now, the complete data are S_c = {(X_i, Y_i, z_i); i = 1, …, N}, and the complete-data likelihood is

\[
L_c(\Theta; S_c) = \prod_{i=1}^{N} \prod_{g=1}^{G} \left[\pi_g \, \phi_{p\times r}\!\left(Y_i; B^{*}_{g} X^{*}_{i}, \Phi_{Yg}, \Psi_{Yg}\right) \phi_{q\times r}\!\left(X_i; M_g, \Phi_{Xg}, \Psi_{Xg}\right)\right]^{z_{ig}}. \qquad (8)
\]

Therefore, the corresponding complete-data log-likelihood can be written as

\[
\ell_c(\Theta; S_c) = \sum_{i=1}^{N} \sum_{g=1}^{G} z_{ig} \left[\ln \pi_g + \ln \phi_{p\times r}\!\left(Y_i; B^{*}_{g} X^{*}_{i}, \Phi_{Yg}, \Psi_{Yg}\right) + \ln \phi_{q\times r}\!\left(X_i; M_g, \Phi_{Xg}, \Psi_{Xg}\right)\right]. \qquad (9)
\]

In the following, by adopting the notation used in Tomarchio et al. (2020), the parameters marked with one dot correspond to the updates of the previous iteration, and those marked with two dots represent the updates at the current iteration.

E-Step

The E-step requires the calculation of the conditional expectation of (9), given the observed data and the current estimate $\dot\Theta$ of the parameters. To do this, we need to compute

\[
\ddot{z}_{ig} = \frac{\dot\pi_g \, \phi_{p\times r}\!\left(Y_i; \dot{B}^{*}_{g} X^{*}_{i}, \dot\Phi_{Yg}, \dot\Psi_{Yg}\right) \phi_{q\times r}\!\left(X_i; \dot{M}_g, \dot\Phi_{Xg}, \dot\Psi_{Xg}\right)}{\sum_{j=1}^{G} \dot\pi_j \, \phi_{p\times r}\!\left(Y_i; \dot{B}^{*}_{j} X^{*}_{i}, \dot\Phi_{Yj}, \dot\Psi_{Yj}\right) \phi_{q\times r}\!\left(X_i; \dot{M}_j, \dot\Phi_{Xj}, \dot\Psi_{Xj}\right)}, \qquad (10)
\]

which corresponds to the posterior probability that the unlabeled observation (X_i, Y_i) belongs to the g-th component of the mixture.

CM-Step 1

At the first CM-step, we maximize the expectation of the complete-data log-likelihood with respect to $\Theta_1 = \{\pi_g, M_g, \Phi_{Xg}, B^{*}_{g}, \Phi_{Yg};\, g = 1, \ldots, G\}$, keeping $\Theta_2 = \{\Psi_{Xg}, \Psi_{Yg};\, g = 1, \ldots, G\}$ fixed at $\dot\Theta_2$. In particular, setting $N_g = \sum_{i=1}^{N} \ddot{z}_{ig}$, we obtain

\[
\ddot\pi_g = \frac{N_g}{N}, \qquad (11)
\]
\[
\ddot{M}_g = \frac{1}{N_g} \sum_{i=1}^{N} \ddot{z}_{ig}\, X_i, \qquad (12)
\]
\[
\ddot{B}^{*}_{g} = \left[\sum_{i=1}^{N} \ddot{z}_{ig}\, Y_i\, \dot\Psi_{Yg}^{-1}\, X^{*\top}_{i}\right] \left[\sum_{i=1}^{N} \ddot{z}_{ig}\, X^{*}_{i}\, \dot\Psi_{Yg}^{-1}\, X^{*\top}_{i}\right]^{-1}, \qquad (13)
\]
\[
\ddot\Phi_{Xg} = \frac{1}{r N_g} \sum_{i=1}^{N} \ddot{z}_{ig} \left(X_i - \ddot{M}_g\right) \dot\Psi_{Xg}^{-1} \left(X_i - \ddot{M}_g\right)^{\top}, \qquad (14)
\]
\[
\ddot\Phi_{Yg} = \frac{1}{r N_g} \sum_{i=1}^{N} \ddot{z}_{ig} \left(Y_i - \ddot{B}^{*}_{g} X^{*}_{i}\right) \dot\Psi_{Yg}^{-1} \left(Y_i - \ddot{B}^{*}_{g} X^{*}_{i}\right)^{\top}. \qquad (15)
\]

CM-Step 2

At the second CM-step, we maximize the expectation of the complete-data log-likelihood with respect to Θ_2, keeping Θ_1 fixed at $\ddot\Theta_1$. Therefore, we have

\[
\ddot\Psi_{Xg} = \frac{1}{q N_g} \sum_{i=1}^{N} \ddot{z}_{ig} \left(X_i - \ddot{M}_g\right)^{\top} \ddot\Phi_{Xg}^{-1} \left(X_i - \ddot{M}_g\right), \qquad (16)
\]
\[
\ddot\Psi_{Yg} = \frac{1}{p N_g} \sum_{i=1}^{N} \ddot{z}_{ig} \left(Y_i - \ddot{B}^{*}_{g} X^{*}_{i}\right)^{\top} \ddot\Phi_{Yg}^{-1} \left(Y_i - \ddot{B}^{*}_{g} X^{*}_{i}\right). \qquad (17)
\]
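To fix ideas, a minimal R sketch of these updates is given below; it is an illustration with hypothetical names (update_X_part(), Xs, z, PsiX_dot) rather than the implementation used for the analyses of Section 5. It performs the updates (12), (14) and (16) for the covariate part of a single group g; the response part is analogous, with $X_i - \ddot{M}_g$ replaced by $Y_i - \ddot{B}^{*}_{g} X^{*}_{i}$ and q replaced by p.

```r
# Sketch of the CM-steps (12), (14) and (16) for one group g.
# Xs: list of N q x r covariate matrices; z: vector of posteriors (10) for
# group g; PsiX_dot: column covariance from the previous iteration.
update_X_part <- function(Xs, z, PsiX_dot) {
  q <- nrow(Xs[[1]]); r <- ncol(Xs[[1]]); Ng <- sum(z)
  # (12): weighted mean matrix
  M <- Reduce(`+`, Map(`*`, Xs, z)) / Ng
  # (14): row covariance, given the previous column covariance
  iPsi <- solve(PsiX_dot)
  PhiX <- Reduce(`+`, Map(function(X, w) {
    E <- X - M
    w * E %*% iPsi %*% t(E)
  }, Xs, z)) / (r * Ng)
  # (16): column covariance, given the freshly updated row covariance
  iPhi <- solve(PhiX)
  PsiX <- Reduce(`+`, Map(function(X, w) {
    E <- X - M
    w * t(E) %*% iPhi %*% E
  }, Xs, z)) / (q * Ng)
  list(M = M, PhiX = PhiX, PsiX = PsiX)
}
```

In practice, the E-step quantities (10) are best computed on the log scale to avoid numerical underflow, and the loop over the E-step and the two CM-steps is repeated until the observed-data log-likelihood stabilizes.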

Computational and operational aspects
The code for the ECM algorithm previously described, and for the analyses in the following sections, is written within the R computing environment (R Core Team, 2018).

ECM initialization
The choice of the starting values is an important aspect of EM-based algorithms (see, e.g., Maitra and Melnykov, 2010; Michael and Melnykov, 2016). A common way to start an EM-based algorithm consists in providing initial values for the z_ig in (10) during the first E-step of the algorithm (McLachlan and Peel, 2000). In our case, we also need to provide initial values for Ψ_Xg and Ψ_Yg in (14) and (15), respectively, during the first CM-step 1 of the algorithm. Therefore, the following initialization strategy is implemented:
1. Generate G random positive definite matrices for both Ψ_Xg and Ψ_Yg. This is done via the genPositiveDefMat() function of the clusterGeneration package, by using the "eigen" method. For further details see Qiu and Joe (2015).
2. Provide initial values for the z_ig, i = 1, …, N. This is done by using the following three approaches:
2.1. in a "soft" way, by generating G positive random values from a uniform distribution on [0, 1] for each observation, which are subsequently normalized to have a unitary sum. Being purely random, this procedure is repeated 15 times, and the solution maximizing the observed-data log-likelihood among these runs is considered;
2.2. in a "hard" way, by using the classification produced by the k-means algorithm on the vectorized and merged data. Specifically, after computing vec(X_i) and vec(Y_i), the data are merged so that for each observation we have a vector of dimension (pr + qr) × 1;
2.3. in a "hard" way, by using the classification produced by mixtures of matrix normal distributions, computed on the merged data. In detail, for each observation we have a (p + q) × r matrix.
The approach producing the highest observed-data log-likelihood is finally selected.
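As a sketch of steps 1 and 2.1 above (with assumed values of G, r and N; genPositiveDefMat() is the clusterGeneration function mentioned in step 1):

```r
# Sketch of initialization steps 1 and 2.1, under assumed values of G, r and N.
library(clusterGeneration)

set.seed(3)
G <- 4; r <- 3; N <- 200

# Step 1: G random positive definite r x r matrices via the "eigen" method
PsiX0 <- replicate(G, genPositiveDefMat(dim = r, covMethod = "eigen")$Sigma,
                   simplify = FALSE)
PsiY0 <- PsiX0  # in Section 5.1 the same initial matrices are used for both

# Step 2.1: "soft" random start for the z_ig, normalized to unit row sums
z0 <- matrix(runif(N * G), N, G)
z0 <- z0 / rowSums(z0)
```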

Spurious clusters
A well-known issue in mixture models is the possibility for EM-based algorithms to converge to spurious local maximizers. This is not a failing of these algorithms; rather, such solutions reflect a random pattern in the data instead of an underlying group structure. They can usually be detected by the presence of groups with very few observations, or with small eigenvalues of the covariance matrices compared to those of the other groups. Consequently, these solutions often have a high likelihood but are of little practical use or real-world interpretation (McLachlan and Peel, 2000). Various approaches for mitigating this problem have been proposed in the literature. For example, Leisch (2004) and Dang and McNicholas (2015) impose a minimum cluster size of π_g = 0.05. Melnykov and Zhu (2019) dealt with this issue by excluding all the solutions involving clusters consisting of fewer than 5 points. We faced this problem by removing the solutions that included clusters with estimated π_g ≈ 0.05 or less, and with eigenvalues close to zero in one or more of their estimated covariance matrices.
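A minimal R sketch of such a screen is reported below; the structure of the fitted-model object fit is hypothetical, the thresholds are indicative only, and here a solution is flagged when either symptom appears.

```r
# Sketch of a spurious-solution screen. `fit` is a hypothetical fitted-model
# object with a vector of mixing weights `pi` and a list `cov_matrices`
# collecting all the estimated covariance matrices.
is_spurious <- function(fit, min_pi = 0.05, min_eig = 1e-4) {
  small_cluster <- any(fit$pi <= min_pi)
  degenerate <- any(sapply(fit$cov_matrices, function(S) {
    min(eigen(S, only.values = TRUE)$values) < min_eig
  }))
  small_cluster || degenerate
}
```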

Model selection and performance evaluation
It is often the case that the number of groups G is not known in advance, and model selection criteria are commonly used to estimate it. Among them, the BIC is one of the most popular and will be used in the following. Furthermore, when the true classification of the data is known, the adjusted Rand index (ARI; Hubert and Arabie, 1985) can be adopted to evaluate the clustering performance of a model. Specifically, the ARI measures the agreement between the true classification and the one predicted by the model: an ARI of 1 indicates perfect agreement between the two partitions, whereas the expected value of the ARI under a random classification is 0. It will be used throughout the manuscript along with the misclassification percentage rate η, i.e. the percentage of units assigned to the wrong class.
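As a small R sketch of these quantities (the BIC is written here in the 2ℓ − m ln N form, to be maximized; sign conventions differ across references, and the toy partitions are purely illustrative; adjustedRandIndex() is from the mclust package):

```r
# Sketch: BIC, ARI and misclassification rate eta.
# m is the number of free parameters of the fitted model.
library(mclust)

bic <- function(loglik, m, N) 2 * loglik - m * log(N)

# toy partitions, purely for illustration
true_labels <- rep(1:2, each = 10)
est_labels  <- c(rep(1, 9), 2, rep(2, 9), 1)

ari <- adjustedRandIndex(true_labels, est_labels)
eta <- 100 * mean(true_labels != est_labels)  # labels assumed already matched
```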
Simulation studies

Simulation 1: A focus on the matrix-normal CWM

In this study, several aspects related to our model are analyzed. First of all, since the ECM algorithm is used to fit the model, it is desirable to evaluate its parameter recovery, i.e. whether it can recover the generating parameters accurately. For this reason, data are generated from a four-component MN-CWM with p = q = r = 3. Two scenarios are then evaluated, according to the level of overlap of the mixture components. In the first scenario (labeled "Scenario A_1"), the mixture components are well separated both in X, by assuming relatively distant mean matrices, and in Y|X*, by using different intercepts and slopes. On the contrary, in the second scenario (labeled "Scenario B_1"), there is a certain amount of overlap, since the intercepts are all equal among the mixture components, while the slopes and the mean matrices assume approximately the same values among the components. The parameters used for Scenario A_1 are displayed in Appendix A.
Specifically, in Scenario B_1, M_1 and the slopes in B*_1 and B*_3 are the same as in Scenario A_1. The other mean matrices are obtained by adding a constant c to each element of the corresponding mean matrices used for Scenario A_1; in detail, c is equal to -5, 5 and -10 for M_2, M_3 and M_4, respectively. The intercept column of all the mixture components is equal to (7, 2, 5)^⊤, whereas the slopes in B*_2 and B*_4 are all multiplied by -1 with respect to those used in Scenario A_1. Lastly, within each scenario two sample sizes are considered, i.e. N = 200 and N = 500.
On all the generated datasets, the MN-CWM is fitted directly with G = 4, and the bias and mean squared error (MSE) of the parameter estimates are computed. For brevity's sake, and as also supported by the existing CWM literature (see, e.g., Punzo, 2014; Ingrassia et al., 2015; Punzo and Ingrassia, 2016; Punzo and McNicholas, 2017), attention is focused only on the regression coefficients {B*_g; g = 1, …, G}. Before showing the obtained results, it is important to underline the well-known label switching issue, caused by the invariance of a mixture distribution to relabeling of the components (Frühwirth-Schnatter, 2006). Since there is no generally accepted labeling method, the correct labels are assigned by analyzing the overall estimated parameters on each generated dataset, in order to properly identify each mixture component. Other aspects that are investigated concern the evaluation of the classification produced by our model, as well as the capability of the BIC to identify the correct number of groups in the data. For this reason, under each of the considered scenarios, the MN-CWM is fitted to the generated datasets for G ∈ {1, 2, 3, 4, 5}, and the results are reported in Table 3. It is easy to see that, under Scenario A_1, a perfect classification is always obtained, regardless of the sample size. Additionally, the BIC regularly detects the correct number of groups in the data. Under Scenario B_1, because of the larger overlap, the ARI assumes lower, but still good, values. Relatedly, the percentage of misclassified units stands at around 3% for both sample sizes. As for the BIC, also in this case it properly identifies the underlying group structure, with only one exception when N = 200.
A final aspect evaluated in this study concerns the initialization strategy. Specifically, Table 4 displays the number of times each strategy for the z_i produces the highest log-likelihood at convergence, within each scenario and for both sample sizes. The initial G random matrices for Ψ_Xg and Ψ_Yg are assumed to be the same across strategies.
The first result suggests the importance of considering multiple initialization strategies, since none of them is preferred across all the generated datasets. However, the random strategy comes quite close to this target, since it fails in only 3 datasets under Scenario B_1.

Simulation 2: A comparison between the matrix-normal CWM and the matrix-normal FMR
In this study, the matrix-normal CWM is compared to the matrix-normal FMR. Specifically, three scenarios with N = 200, p = 2, q = 3 and r = 4 are considered, and in each of them thirty datasets are generated from a matrix-normal CWM with G = 2. The first scenario (hereafter simply referred to as "Scenario A_2") is characterized by the fact that the two groups differ only in the intercepts and the covariance matrices. This implies that they have completely overlapping mean matrices, which should make the distribution of the covariates p_g(X) not very important for clustering. The parameters used to generate the datasets are displayed in Appendix B. In the second scenario ("Scenario B_2"), the two groups have the same B*_g and π_g. The parameters used to generate the datasets are the same as for Scenario A_2, with only two differences: a value c = 5 is added to each element of M_2, and B*_2 = B*_1. Lastly, in the third scenario ("Scenario C_2"), the two groups have only the same slopes and π_g. Here, with respect to the parameters used under Scenario B_2, the only difference is in the intercept vectors, which are (−3, −4)^⊤ and (−7, −8)^⊤ for the first and the second group, respectively.
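To make the data-generating mechanism concrete, the following R sketch draws one dataset from a two-component MN-CWM with toy parameters (identity covariance matrices; the intercepts echo Scenario C_2, but these are not the values of Appendix B).

```r
# Sketch: generating a dataset from a two-component MN-CWM with toy parameters.
# rmatnorm() draws from the MN distribution (1) via Cholesky factors.
rmatnorm <- function(M, Phi, Psi) {
  Z <- matrix(rnorm(nrow(M) * ncol(M)), nrow(M), ncol(M))
  t(chol(Phi)) %*% Z %*% chol(Psi) + M
}

set.seed(4)
N <- 200; p <- 2; q <- 3; r <- 4; G <- 2
pi_g  <- c(0.5, 0.5)
M     <- list(matrix(0, q, r), matrix(5, q, r))   # covariate mean matrices
Bstar <- list(cbind(c(-3, -4), matrix(1, p, q)),  # intercepts and slopes
              cbind(c(-7, -8), matrix(1, p, q)))
labels <- sample(1:G, N, replace = TRUE, prob = pi_g)

data <- lapply(labels, function(g) {
  X <- rmatnorm(M[[g]], diag(q), diag(r))                       # covariates
  Xstar <- rbind(rep(1, r), X)
  Y <- Bstar[[g]] %*% Xstar + rmatnorm(matrix(0, p, r), diag(p), diag(r))
  list(X = X, Y = Y, group = g)
})
```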
The MN-CWM and the MN-FMR are then fitted to the datasets of each scenario for G ∈ {1, 2, 3}, and the results in terms of model selection and clustering are reported in Table 5. In Scenario A_2, the BIC correctly selects two groups for both models, and the classifications produced are perfect. Therefore, even if the two groups have the same means and are strongly overlapping, the MN-CWM seems able to properly identify the true underlying group structure. However, under such a scenario the MN-FMR should be preferred, since the distribution of the covariates p_g(X) is not useful for clustering, and it is more parsimonious than the MN-CWM. On the contrary, Scenarios B_2 and C_2 represent typical examples of the usefulness of p_g(X). Specifically, for the MN-FMR the BIC always identifies just one group under both scenarios, with obvious consequences in terms of the classification produced. Notice that, even if the MN-FMR had been fitted directly with G = 2, the resulting classifications would have been almost identical in terms of ARI and η for Scenario B_2, and only slightly better for Scenario C_2, where ARI = 0.15 and η = 32.48%.
This underlines how, regardless of the BIC, the MN-FMR is not able to properly model such data structures.

Simulation 3: A comparison between the matrix-normal CWM and the multivariate-multiple normal CWM

In this study, the MN-CWM is compared to the MMN-CWM fitted to the vectorized data. Two experimental factors are considered: the data dimensionality, with p = q = r ∈ {2, 3, 4}, and the number of groups G ∈ {2, 3, 4}. By combining both experimental factors, 9 scenarios are obtained, and for each of them thirty datasets are generated from a MN-CWM. The parameters used to generate the data come from Section 5.1 and are shown in Appendix A. In detail, when p = q = r = 2 they are obtained by taking the upper-left submatrix of each parameter matrix, when p = q = r = 3 they are exactly as displayed, whereas when p = q = r = 4 a row and a column are added to each parameter matrix, which for brevity's sake are not reported here.
Regarding the number of groups, and by considering Appendix A, when G = 2 and G = 3 the first two and the first three groups are selected, respectively, while when G = 4 all of them are considered.
The MN-CWM is then fitted to each dataset for G ∈ {1, 2, 3, 4, 5}. The same is done for the MMN-CWM after data vectorization, and the results of both models in terms of model selection by the BIC are shown in Table 6. As we can see, when the MN-CWM is considered, the BIC always selects the correct number of groups, regardless of the data dimensionality and the number of groups. The same also holds for the MMN-CWM when p = q = r = 2 or, regardless of the data dimensionality, when G = 2. However, when p = q = r = 3 the BIC starts to face issues for G = 3, since the true number of groups is detected only 11 times (in the other 19 cases G = 2 is selected), and it systematically fails when G = 4. This problem gets even worse when p = q = r = 4 (with the exclusion of G = 2). The reason for such failures is the increased number of parameters with respect to the MN-CWM. Therefore, we have on the one hand a model that can easily become overparameterized, with negative effects also on model selection (the MMN-CWM), and on the other hand a model (the MN-CWM) which is able to fit the same data in a far more parsimonious way, without causing problems in model selection.

Real data applications

Education data

The first dataset comes from the Italian national agency for the evaluation of universities and research institutes, which makes available to Italian universities quantitative indicators related to the academic careers of the students and to the results of the training activities.
For this application, the following p = 2 responses, which measure the level of completion of studies by students, are considered:
1. percentage of students that graduate within T + 1 years (Complete),
2. percentage of students that drop out after T + 1 years (Drop),
where T is the normal duration of the study program. Moreover, the following q = 2 covariates, which may be helpful in explaining this progress, are taken into account:
1. percentage of course credits earned in the first year over the total to be achieved (Credits),
2. percentage of students that have earned at least 40 course credits during the solar year (Students).
For the sake of simplicity, hereafter these variables will be referred to by the names given in parentheses. All the measurements refer to N = 75 study programs in the non-telematic Italian universities, over r = 3 years. Each study program is measured at the national level, i.e. it is the average value of all the study programs of the same type across the country, for the reference period.
There are two groups in the data, namely N_1 = 33 bachelor's degrees and N_2 = 42 master's degrees. The MN-CWM and the MN-FMR are fitted to the data for G ∈ {1, 2, 3}, and their results are reported in Table 7. The BIC selects a two-component MN-CWM, whose estimated regression coefficients are displayed in Table 8 for the two groups. As it is plausible to expect, both covariates have a positive effect on the successful completion of the study programs, even if their magnitude differs between the two groups. The credits obtained during the first year of study might be more important for bachelor's students, considering the difficulties that arise in the transition from high school to university (Krause, 2006). At the same time, obtaining at least 40 course credits per year should be easier for master's students, resulting in a greater importance for the completion of the studies. Conversely, both covariates have a negative impact on the drop rates, with the exception of the Credits variable, which surprisingly turns out to have a positive sign for the master's courses.

Insurance data
For this second real data application, the "Insurance" dataset included in the splm package (Millo and Piras, 2012) is used. This dataset was introduced by Millo and Carmeci (2011) to study the consumption of non-life insurance across the N = 103 Italian provinces in the years 1998-2002 (r = 5).
In this application we select, as responses, the following p = 2 variables related to the consumption and the presence of insurance products in the market: the real per-capita non-life premiums (PPCD) and the density of insurance agencies (AGEN). As covariates, the following q = 3 variables are used: the real per-capita gross domestic product (RGDP), the real per-capita bank deposits (BANK) and the real interest rate on lending to families (RIRS). The reasons why we focus on this subset of covariates are: (1) they are almost regularly used in the literature, and their relevant effects on the consumption or development of insurance products have been widely discussed (see the references in Millo and Carmeci, 2011, for further details); indeed, they are commonly used as proxies for income and general level of economic activity (RGDP), stock of wealth (BANK) and opportunity cost of allocating funds in insurance policies (RIRS); (2) they avoid an excessive parameterization of the models. Notice that, for a better interpretation of the regression coefficients, the variables RGDP and BANK are divided by 1,000; thus, thousands of euros are considered.
Differently from the previous application, a true classification of the data is not available.
However, since we are using p(X) in (6), the sampling distribution of each covariate could provide useful insights. These distributions are reported in Figure 1, for the full 5-year data, as done by Melnykov and Zhu (2019). The bimodality in all these histograms seems to suggest the presence of two groups in the data. Accordingly, the BIC selects a two-component MN-CWM, whose partition broadly separates the provinces of Central-Northern Italy from those of Southern Italy. The only exceptions concern the province of Rome (in the Lazio region), which due to its socio-economic development is reasonably assigned to the Central-Northern Italy group, the province of Ascoli-Piceno (in the Marche region) and the province of Massa-Carrara (in the Toscana region). On the contrary, the three groups detected by the MN-FMR are not supported by the literature and are difficult to interpret, also because they put together provinces spanning the whole country without a straightforward and reasonable justification.
As in Section 6.1, the regression coefficients estimated by the MN-CWM are briefly presented in Table 9 for the two groups. Overall, the coefficients are quite different between the two groups. RIRS has a negative impact on the PPCD because the higher the interest rate on lending, the higher the opportunity cost of investing in insurance products. Conversely, an increase in the RIRS has a positive effect on the AGEN because, from the insurance companies' point of view, it raises the gains from investing the premiums on financial markets in the time between premium collection and claim settlement. Therefore, these increased revenues might lead to an expansion of the insurance companies over the territory. Similarly, and in accordance with the existing literature, income (RGDP) and wealth (BANK) have a positive influence on the PPCD (Millo and Carmeci, 2011). It would be reasonable to expect that they also have a positive effect on the AGEN. However, this is not the case, at least for Central-Northern Italy, which shows lower coefficients than Southern Italy. In the southern part of the country, where insurance density and penetration are much lower, increased income and wealth can give more room for growth and investment opportunities.

Conclusion
The matrix-variate CWM has been introduced in this work. Several future developments are possible. For instance, the model could be extended to allow for responses of a different nature, in the fashion of Ingrassia et al. (2015) and Mazza et al. (2018). Furthermore, to accommodate skewness or mild outliers in the data, skew or heavy-tailed matrix-variate distributions could also be considered for the mixture components of the model (Melnykov and Zhu, 2018; Gallaugher and McNicholas, 2018; Tomarchio et al., 2020).