A reduced-rank approach to predicting multiple binary responses through machine learning

This paper investigates the problem of simultaneously predicting multiple binary responses using a shared set of covariates. Our approach incorporates machine learning techniques for binary classification without making assumptions about the underlying observations. Instead, our focus lies on a set of predictors, and we aim to identify the one that minimizes prediction error. Unlike previous studies that primarily address estimation error, we directly analyze the prediction error of our method using PAC-Bayesian bound techniques. We introduce a pseudo-Bayesian approach capable of handling incomplete response data. Our strategy is efficiently implemented using the Langevin Monte Carlo method. Through simulation studies and an application to real data, we demonstrate the effectiveness of our proposed method, which produces results comparable to, and sometimes better than, the current state-of-the-art method.


Introduction
The relationship between multiple response variables and a set of predictors has been a topic of ongoing research and interest in the literature, with numerous studies dedicated to understanding and exploring this connection. One area of particular interest in this field is the use of reduced rank regression, which involves using a low-rank constraint to linearly connect the response variables and the predictors, see e.g. [1,2,3,4]. This approach has been widely studied and applied, with numerous works published on the topic, including those by [5,6,7,8], and many others. From frequentist to Bayesian approaches, a wide range of methods and techniques has been employed to analyze and model these relationships [9,10,11,12,13,14,15,16].
However, despite the extensive research in this area, most studies have focused on real-valued responses. In many applications, the entries of the response matrix are binary, that is, they are in the set {−1, 1}. For example, the treatment responses from multiple drugs can be recoded as binary or categorical when measured from each cell line, as seen in studies by [17,18,19]. This highlights the need for further research on the use of binary or categorical response variables in reduced rank regression and other multivariate modeling techniques.
The problem of modeling multiple binary response variables has received only limited attention in recent years, with a few studies proposing reduced-rank regression models as a solution. One notable example is the paper by Luo et al. [20], which proposed a mixed-outcome reduced-rank regression model to handle response matrices that include both binary and count data, and also addressed the issue of missing data in the responses. Another recent study, carried out independently of [20], is the paper by Park et al. [21], which considered row-wise sparse constraints in addition to the low-rank assumption, but only for fully observed binary response matrices. The main idea behind these studies is to assume a marginal logistic regression model relating the binary responses to the covariates, and then to employ a penalized maximum likelihood method.
However, the studies in [20] and [21] primarily focus on recovering the parameter matrix of interest, and provide estimation error rates for their estimators. While their results demonstrate that their estimators are consistent when estimating a low-rank matrix, they do not provide any guarantee on the prediction error or misclassification error. This highlights the need for further research in this area, particularly in terms of developing methods that not only accurately estimate the parameter matrix, but also provide guarantees on the performance of predictions and classification.
One of the key challenges in addressing the problem of multiple binary responses is the lack of robust and generalizable models that can accurately predict and classify the outcomes. In this paper, we aim to address this gap by taking a machine learning approach and adopting a classification-based method to deal with binary output. Unlike traditional methods that rely on parametric models, we consider a set of prediction matrices and seek the one that yields the best prediction error. This approach is built on the principles of statistical learning theory [22], where the zero-one loss is used as a measure of prediction error and the risk of the classifier is controlled by a PAC (probably approximately correct) bound. However, the non-convex nature of the zero-one loss function makes it computationally intractable. An alternative is the use of a convex surrogate such as the hinge loss, introduced in [23]; it has been shown to be effective in a variety of machine learning tasks and has the added benefit of being computationally efficient. By leveraging this method, we aim to provide a robust and generalizable model for predicting and classifying multiple binary responses.
In this work, we propose a novel approach to the problem of multiple binary responses that combines elements of both Bayesian and machine learning methodologies. Specifically, we propose a pseudo-Bayesian approach that utilizes a notion of risk based on the hinge loss, rather than relying on a likelihood function. Our approach is based on the principles of PAC-Bayesian theory [24,25,26,27,28,29,30,31,32], which provides theoretical guarantees on the prediction and misclassification error of our method. It is worth mentioning that using loss functions in place of the likelihood has become popular in recent years within so-called generalized Bayesian inference, as documented for example in [33,34,35,36,37,38,39,40].
The use of a hinge loss-based risk function allows us to overcome some of the limitations of traditional likelihood-based Bayesian models, particularly when dealing with binary response variables. Unlike traditional likelihood functions, which can be difficult to model and computationally intensive to compute, the hinge loss function is convex and can be easily optimized. To further improve the efficiency and practicality of our proposed approach, we also develop an efficient gradient-based sampling method based on Langevin Monte Carlo. This method allows us to approximate the computation of our proposed method, making it more computationally tractable and suitable for large-scale applications. Overall, our proposal offers a novel and promising approach to modeling and predicting multiple binary responses, providing both theoretical guarantees and practical computational methods for its implementation.
Similar to the approach proposed in [20], our method can handle incomplete response matrices. This is achieved by extending our approach to account for missing data in the response matrix, which allows for greater flexibility and applicability. The extension is relatively simple and does not introduce any additional complexity to the overall approach, so our method can be applied to a wider range of datasets with incomplete or missing data. This capability is especially useful in real-world applications, where missing data is often present and traditional methods may struggle to handle such cases effectively. For example, in studies involving medical treatments, it is not uncommon for certain patients to drop out of the study, resulting in missing data in the response matrix. Our proposed method allows for the inclusion of such missing data, providing a more comprehensive and realistic analysis of the treatment outcomes. Additionally, in observational studies, missing data can be a common problem due to factors such as non-response, measurement error, or data collection issues. Our method's ability to handle missing data is beneficial in these scenarios as well.
The remainder of the paper is structured as follows. Section 2 provides a detailed description of the problem statement and presents our proposed method, along with its theoretical results. An extension to handle incomplete response data is also discussed in this section. In Section 3, we describe the Langevin Monte Carlo method used to compute our proposed method and present numerical studies on both synthetic and real datasets to demonstrate its performance. Conclusions and discussions are given in Section 4. All technical proofs are gathered in Appendix A.
2 Problem and method

Problem statement
We formally consider the following problem of multiple binary responses with a set of common covariates: for units i = 1, . . ., n with covariate vectors x_i ∈ R^p, there exist q binary responses y_i^k ∈ {−1, 1} for k = 1, . . ., q. From a classification perspective, it is natural to use a linear predictor, viewed as a function from R^p to {−1, 1}, in the following way: when x_i is revealed, M^(k) ∈ R^p predicts y_i^k by sign(x_i M^(k)). In matrix notation, define the binary response matrix Y = [y^1, . . ., y^q] ∈ {−1, 1}^{n×q} and the covariate matrix X ∈ R^{n×p} whose i-th row is x_i. With multiple responses, the predictors can be written in matrix form as M = [M^(1), . . ., M^(q)] ∈ R^{p×q}.
The ability of the predictor to predict a new entry of the matrix is assessed by the risk

R(M) = (1/(nq)) Σ_{k=1}^q Σ_{i=1}^n P[y_i^k ≠ sign(x_i M^(k))],

and its empirical counterpart is

r(M) = (1/(nq)) Σ_{k=1}^q Σ_{i=1}^n 1{y_i^k ≠ sign(x_i M^(k))}.

From the standard approach in classification theory [22,41], the best possible classifier is the Bayes classifier M_B, such that R(M_B) = inf_M R(M). The anticipated property of the Bayes matrix M_B is that it exhibits a low-rank structure or can be effectively approximated by a low-rank matrix.
For the sake of simplicity, we put R = R(M_B) and r = r(M_B). The goal is to find an estimator M̂ that yields the minimal excess risk R(M̂) − R.
While the risk R(M) has a clear interpretation, working with its empirical counterpart r(M) is challenging, as it is non-smooth and non-convex. A common approach to overcome this issue is to replace the empirical risk with a convex surrogate [23]. In this paper, we primarily focus on the hinge loss, which results in the following hinge empirical risk:

r_h(M) = (1/(nq)) Σ_{k=1}^q Σ_{i=1}^n (1 − y_i^k x_i M^(k))_+,

where (a)_+ := max(a, 0) for all a ∈ R.
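As an illustration, both the zero-one empirical risk and its hinge surrogate can be computed in a few lines of NumPy. This is a minimal sketch (not the authors' implementation), assuming Y ∈ {−1, 1}^{n×q}, X ∈ R^{n×p} and M ∈ R^{p×q} are dense arrays:

```python
import numpy as np

def zero_one_risk(Y, X, M):
    """Empirical misclassification rate: (1/(nq)) sum 1{y_ik != sign(x_i M^(k))}."""
    return (np.sign(X @ M) != Y).mean()

def hinge_empirical_risk(Y, X, M):
    """Hinge empirical risk: (1/(nq)) sum (1 - y_ik * x_i M^(k))_+ ."""
    margins = Y * (X @ M)              # element-wise margins y_ik * x_i M^(k)
    return np.maximum(1.0 - margins, 0.0).mean()
```

Since (1 − m)_+ ≥ 1{m ≤ 0} for every margin m, the hinge risk always upper-bounds the zero-one risk, which is what makes it a usable convex surrogate.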

Estimation procedure
Building upon previous work in PAC-Bayesian theory, we define the pseudo-posterior distribution as

ρ_λ(M) ∝ exp[−λ r_h(M)] π(M),

where λ > 0 is a tuning parameter that will be discussed later and π(M) is a prior distribution, given in (1), that promotes (approximately) low-rankness of the parameter matrix M. The distribution ρ_λ, known as the Gibbs posterior, can be interpreted as a posterior distribution in a Bayesian framework where π represents the prior for M; however, this interpretation relies on exp[−λ r_h(M)] being proportional to a likelihood function and is not essential for understanding the approach. The motivation behind defining ρ_λ stems from the minimization problem in Lemma 1 rather than from Bayesian principles: it is not necessary to have a likelihood function or a complete model; only the empirical risk based on the hinge loss function is required.
Nonetheless, we still refer to π as the prior and to ρ_λ as the pseudo-posterior. The measure ρ_λ can be seen as an adjusted version of π. Comparing two parameters m_1 and m_2, if r_h(m_1) < r_h(m_2), then exp[−λ r_h(m_1)] > exp[−λ r_h(m_2)] for any λ > 0. This implies that, relative to π, ρ_λ assigns more weight to m_1 than to m_2. The adjustment thus favors parameter values that yield a smaller in-sample hinge empirical risk, and the tuning parameter λ controls the degree of adjustment. The choice of λ will be further explored in subsequent sections. This pseudo-Bayesian approach has been previously studied in various low-rank matrix estimation problems, such as [11,42,43,44,45].
In this work, we use a spectral scaled Student prior distribution with a parameter τ > 0:

π(M) ∝ det(τ² I_p + M M^⊤)^{−(p+q+2)/2}.    (1)

This prior can induce low-rankness of M: it can be verified that π(M) ∝ ∏_{j=1}^p (τ² + s_j(M)²)^{−(p+q+2)/2}, where s_j(M) denotes the j-th largest singular value of M. In other words, the prior acts as a scaled Student distribution evaluated at each s_j(M), which induces approximate sparsity on the s_j(M) [46]. Thus, under this prior, most of the s_j(M) are close to 0 and M is approximately low-rank. This prior has been used before in image denoising [47], bilinear regression [48], and matrix completion [49]. Even though this prior is not conjugate for our problem, it is amenable to Langevin Monte Carlo, a gradient-based sampling method, for implementation purposes.
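For implementation, the unnormalized log-density of this prior only requires the singular values of M. A minimal sketch, assuming the product form over singular values stated above (constants independent of M are dropped):

```python
import numpy as np

def log_prior(M, tau):
    """Unnormalized log-density of the spectral scaled Student prior:
    log pi(M) = -((p + q + 2)/2) * sum_j log(tau^2 + s_j(M)^2) + const.

    np.linalg.svd returns min(p, q) singular values; any remaining s_j
    are zero and only contribute an M-independent constant.
    """
    p, q = M.shape
    s = np.linalg.svd(M, compute_uv=False)   # singular values, descending
    return -0.5 * (p + q + 2) * np.log(tau**2 + s**2).sum()
```

The heavy Student-like tails in each s_j are what shrink most singular values toward zero without forcing an exact rank constraint.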

Theoretical results
Assumption 1. We assume that r_h(M_B) ≤ 2 r(M_B).
The above assumption can be relaxed to requiring only that there exists a positive constant c such that r_h(M_B) ≤ c r(M_B). All the theoretical results in this work are subject to this assumption.
In this work, we make use of Mammen and Tsybakov's margin assumption [50].
Assumption 2 (Margin assumption). We assume that there is a constant C ≥ 1 such that, for all M ∈ R^{p×q},

(1/(nq)) Σ_{k=1}^q Σ_{i=1}^n E[(1{y_i^k ≠ sign(x_i M^(k))} − 1{y_i^k ≠ sign(x_i M_B^(k))})²] ≤ C [R(M) − R(M_B)].

As an example, in the noiseless case where Y = sign(X M_B) almost surely, the left-hand side equals R(M) − R(M_B), so the margin assumption is satisfied with C = 1. We now present a theoretical bound on the expected risk for a random estimator of M generated from the pseudo-posterior ρ_λ(M).
The proof of the above theorem is given in Appendix A. The technical argument used in the proof is known as a "PAC-Bayesian bound", introduced in [24,25] as a way to provide empirical bounds on the prediction risk of Bayesian-type estimators. However, it is well known that the PAC-Bayesian approach also comes with a set of powerful technical tools to establish non-asymptotic bounds, as documented in [51,52,27], which we exploit in this paper. For an in-depth exploration of PAC-Bayes bounds, including recent surveys and advancements, readers are referred to [53] and [54].
Remark 1. It is important to mention that the result of the above theorem is adaptive, in the sense that the estimator does not depend on the rank r* = rank(M_B). When the rank r* is small, the prediction error will be close to the Bayes error R, even with a small sample size. This type of result is commonly referred to as an 'oracle inequality', as it suggests that our estimator performs as well as if we had access to the rank of M_B through an oracle. Additionally, it is noteworthy that the above formula remains valid when r* = 0; in that case, we interpret 0 log(1 + 0/0) as 0.
Corollary 2. In the case that Y = sign(X M_B) a.s., for any ǫ ∈ (0, 1) and for λ = 2nq/5, with probability at least 1 − 2ǫ:

Theorem 1 and Corollary 2 offer novel perspectives on the prediction error, which complement the previously established theoretical results on estimation errors in [20] and [21]. These theoretical inequalities allow the out-of-sample error of our predictor to be compared to the optimal one, and demonstrate that the prediction error rate is r* max(q, p)/(nq), up to logarithmic terms. This is a noteworthy contribution to the field, as it provides a comprehensive understanding of the relationship between the rank of M_B and the prediction error.
It is worth mentioning that Assumption 2 plays a crucial role in achieving a 'fast' prediction rate, as demonstrated in Theorem 1. This assumption was initially introduced in [50] for classification, and has since been adopted for ranking tasks in subsequent works such as [55,56,57]. In the forthcoming proposition, we present a slower-rate result that does not rely on Assumption 2. The proof is also given in Appendix A.

Proposition 3. Put r* = rank(M_B) and take τ² = (p + q)/(2q²pn ‖X‖_F²). Then, for any ǫ ∈ (0, 1), for λ = 2nq/(p + q + 2) and ς ∈ (0, 1), with probability at least 1 − 2ǫ, where Ψ_ς is a known constant depending only on ς.

Dealing with incomplete responses
As previously mentioned in the introduction, the method proposed in [20] has the capability to handle incomplete response data. Similarly, our proposed approach can be easily and naturally extended to address the scenario where the response matrix Y contains missing data. The ability to handle missing data is an important consideration, as it is a common issue in many real-world datasets. This can be especially useful in cases where data collection is difficult or expensive, as it allows for the use of all available data, rather than discarding observations with missing data.
In the incomplete-response setting, let Ω ⊂ {1, . . ., n} × {1, . . ., q} denote the index set of the observed entries and m = |Ω|. The risk in this case is given as

R(M) = (1/m) Σ_{(i,k)∈Ω} P[y_i^k ≠ sign(x_i M^(k))],

its empirical counterpart is

r(M) = (1/m) Σ_{(i,k)∈Ω} 1{y_i^k ≠ sign(x_i M^(k))},

and the hinge empirical risk is now

r_h(M) = (1/m) Σ_{(i,k)∈Ω} (1 − y_i^k x_i M^(k))_+.

The subsequent theorem establishes a theoretical bound that links the integrated risk of the estimator to the minimum attainable risk achieved by the Bayes classifier M_B.

Theorem 4. Assume that Assumption 2 is satisfied and put r* = rank(M_B). Then, for any ǫ ∈ (0, 1), for λ = 2m/(3C + 2), τ² = (p + q)/(2qpm ‖X‖_F²) and υ ∈ (0, 1), with probability at least 1 − 2ǫ, where Ψ_{C,υ} is a known constant that depends only on υ and C.

Remark 3. The message of Theorem 4 lies in its ability to give a finite-sample bound and to demonstrate that our estimate remains effective in a non-asymptotic scenario when the intrinsic dimension (i.e., the rank) is relatively small in relation to m. To clarify this point, consider the scenario where p is a function of m that increases as m increases. In this scenario, a traditional asymptotic approach would not provide useful information, but our bounds still provide valuable insights, as long as the rank of the parameter matrix is sufficiently small relative to m.
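Computationally, the extension only changes the averaging set: the hinge losses are averaged over the observed entries Ω rather than over all nq entries. A hedged sketch, with a boolean mask encoding Ω:

```python
import numpy as np

def masked_hinge_risk(Y, X, M, mask):
    """Hinge empirical risk over observed entries only:
    (1/m) * sum_{(i,k) in Omega} (1 - y_ik * x_i M^(k))_+ , with m = |Omega|.

    `mask` is a boolean n-by-q array: True where y_ik is observed.
    Unobserved entries of Y may hold any placeholder value.
    """
    margins = Y * (X @ M)
    losses = np.maximum(1.0 - margins, 0.0)
    return losses[mask].mean()
```

With `mask` all True this reduces to the complete-response hinge empirical risk.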
The selection of λ in our results is based on optimizing an upper bound on the risk R (as presented in the proofs of the theorems, given in Appendix A).However, it is important to keep in mind that this choice may not always be the most suitable option in practice, even though it provides a reliable estimate of the magnitude of λ.To ensure optimal performance, it is recommended to use cross-validation to adjust the temperature parameter correctly.
3 Numerical studies

Implementation
In this section, we introduce the use of the Langevin Monte Carlo (LMC) algorithm as a method for sampling from the (pseudo) posterior. The LMC algorithm is a gradient-based method for sampling from a distribution.
First, a constant step-size unadjusted LMC algorithm, as described in [58], is proposed. The algorithm starts with an initial matrix M_0 and uses the recursion

M_{k+1} = M_k + h ∇ log ρ_λ(M_k) + √(2h) N_k,    k = 0, 1, . . .,    (3)

where h > 0 is the step-size and N_0, N_1, . . . are independent random matrices with i.i.d. standard Gaussian entries. When the step-size h is not small enough, the chain can explode [59], so a Metropolis-Hastings (MH) correction is included in the algorithm. This guarantees convergence to the desired distribution, but slows down the algorithm due to the additional acceptance/rejection step at each iteration.
The update rule in (3) is now treated as a proposal for a new candidate M̃, which is accepted or rejected according to the MH algorithm with probability

min{1, [ρ_λ(M̃) q(M̃ → M_k)] / [ρ_λ(M_k) q(M_k → M̃)]},

where q(x → x′) is the transition probability density from x to x′. This is known as the Metropolis-adjusted Langevin algorithm (MALA). It guarantees convergence to the (pseudo) posterior and provides a way to choose the step-size h. Unlike random-walk MH, MALA usually proposes moves into regions of higher probability, which are more likely to be accepted. The step-size h for MALA is chosen such that the acceptance rate is approximately 0.5, following [60], while the step-size for LMC in the same setting is chosen smaller than for MALA.
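A generic MALA sketch follows; it is an illustration, not the authors' code. Here `log_post` and `grad_log_post` are user-supplied callables standing for the log pseudo-posterior and its (sub)gradient, and the Gaussian proposal density q is evaluated in log form:

```python
import numpy as np

def mala_sample(log_post, grad_log_post, M0, h, n_iter, rng):
    """Metropolis-adjusted Langevin algorithm (sketch).

    Proposal: M' = M + h * grad_log_post(M) + sqrt(2h) * N, N ~ iid Gaussian.
    The MH correction uses log q(x -> x') = -||x' - x - h*grad(x)||^2 / (4h).
    Returns the list of states and the empirical acceptance rate.
    """
    def log_q(x_to, x_from):
        mean = x_from + h * grad_log_post(x_from)
        return -np.sum((x_to - mean) ** 2) / (4.0 * h)

    M = M0.copy()
    samples, n_acc = [], 0
    for _ in range(n_iter):
        prop = M + h * grad_log_post(M) + np.sqrt(2.0 * h) * rng.standard_normal(M.shape)
        log_alpha = (log_post(prop) + log_q(M, prop)
                     - log_post(M) - log_q(prop, M))
        if np.log(rng.uniform()) < log_alpha:   # accept with prob min(1, alpha)
            M, n_acc = prop, n_acc + 1
        samples.append(M.copy())
    return samples, n_acc / n_iter
```

In practice h would be tuned so the acceptance rate is near 0.5, as the paper does; dropping the accept/reject step recovers the unadjusted LMC recursion (3).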

Compared Methods
We assess the effectiveness of our proposed methods by comparing them to Bayesian approaches that rely on logistic regression. More specifically, it is now assumed that y_i^k = 1 with probability f(x_i M^(k)) and y_i^k = −1 with probability 1 − f(x_i M^(k)), where f(x) = e^x/(1 + e^x) is the link function. In this case the pseudo-likelihood exp(−λ r_ℓ(M)), with λ = nq, is exactly equal to the likelihood of the logistic model. Here,

r_ℓ(M) = (1/(nq)) Σ_{k=1}^q Σ_{i=1}^n log(1 + exp(−y_i^k x_i M^(k)))

is the logistic empirical risk. The prior distribution is exactly the same as in previous sections.
As studied in [23], the logistic loss can serve as a convex alternative to the hinge loss for approximating the 0-1 loss.However, it is worth noting that employing the logistic loss may lead to a slower convergence rate compared to the hinge loss [23].
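For completeness, the logistic empirical risk r_ℓ can be computed with the numerically stable `np.logaddexp`. A minimal sketch, assuming Y ∈ {−1, 1}^{n×q}:

```python
import numpy as np

def logistic_empirical_risk(Y, X, M):
    """Logistic empirical risk:
    (1/(nq)) sum log(1 + exp(-y_ik * x_i M^(k))).

    np.logaddexp(0, -m) computes log(1 + e^{-m}) without overflow
    for large negative margins.
    """
    margins = Y * (X @ M)
    return np.logaddexp(0.0, -margins).mean()
```

At zero margin each entry contributes log 2, the classical calibration point of the logistic loss.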
In this study, we evaluate the performance of our proposed methods, LMC-H and MALA-H, in comparison to three other alternatives: (1) LMC-logit, (2) MALA-logit (methods based on Bayesian logistic regression) and (3) the current state-of-the-art method mRRR. The mRRR method, which was proposed in [20], is a frequentist approach and its implementation can be found in the R package rrpack [61].

Simulation studies
We consider different scenarios of data generation to assess the performance of our method. The sample size is fixed at n = 100, while we vary the dimensions of the covariates and responses as q = 8, p = 12 and q = 20, p = 50. The entries of the covariate matrix X are simulated from N(0, 1). More specifically, we consider two scenarios for the true parameter matrix M*:
• first, it is an exact rank-2 matrix, given as the product of two rank-2 factors, i.e. M* = A_{p×2} B_{q×2}^⊤, where the entries of A and B are drawn i.i.d. from N(0, 1);
• second, it is an approximately rank-2 matrix: we first simulate a rank-2 matrix M′ as before and then add noise, M* = 2M′ + N, where the entries of N are simulated from N(0, 0.1).
Then we consider the following settings to obtain the responses:
• Setting II: with u = X M* + E, put p = exp(u)/(1 + exp(u)).
Here, the noise term (E, B) is varied in different scenarios, leading to a different setup in each setting; this is summarized in Table 1.
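A hedged sketch of this data-generating mechanism (the logistic branch, in the spirit of Setting II) is given below; the exact noise configurations vary by scenario as in Table 1, and the default dimensions and noise level here are illustrative assumptions:

```python
import numpy as np

def simulate_data(n=100, p=12, q=8, noise_sd=0.0, rng=None):
    """Simulate a rank-2 parameter matrix M* = A B^T and binary responses
    drawn from a logistic model: p = exp(u) / (1 + exp(u)), u = X M* + E."""
    if rng is None:
        rng = np.random.default_rng()
    A = rng.standard_normal((p, 2))
    B = rng.standard_normal((q, 2))
    M_star = A @ B.T                                   # exact rank-2 parameter
    X = rng.standard_normal((n, p))                    # covariates, iid N(0, 1)
    U = X @ M_star + noise_sd * rng.standard_normal((n, q))
    prob = 1.0 / (1.0 + np.exp(-U))                    # p = exp(u)/(1 + exp(u))
    Y = np.where(rng.uniform(size=(n, q)) < prob, 1, -1)
    return X, Y, M_star
```

Setting the noise level to a positive value perturbs the linear predictor before the logistic link, mirroring how the noise term is varied across scenarios.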
The LMC and MALA algorithms are run for 30000 iterations, with the first 1000 steps taken as burn-in. We set the tuning parameters λ and τ to 1 in all scenarios. It is important to acknowledge that a better approach would be to tune these parameters using cross-validation, which could lead to improved results. The mRRR method is run with its default options, and 5-fold cross-validation is used to select the rank.
Each simulation setting is repeated 100 times and we report the averaged results. The results of the simulation study are detailed in Tables 2, 3, 4 and 5, where the values within parentheses indicate the standard deviations associated with each misclassification error percentage.
Among the methods considered, the MALA algorithm with the hinge loss consistently demonstrates the lowest misclassification error rate. In some instances, it even outperforms the frequentist mRRR method by a factor of two. This highlights the effectiveness of the proposed method, which combines the MALA algorithm and the hinge loss, for achieving accurate classification results. The other methods, namely the logistic loss-based MALA-logit and the LMC algorithm-based approaches (LMC-H and LMC-logit), exhibit performance similar to the mRRR method and are generally comparable in terms of misclassification error rates. However, it is worth noting that the MALA-logit approach, which also utilizes the MALA algorithm, is superior to the mRRR method in cases involving low-rank matrices and higher dimensions.
While the LMC-based methods (LMC-H and LMC-logit) perform well, they are not significantly better than the mRRR method. However, these approaches are more efficient in handling larger data sets, which can be advantageous in certain scenarios.
It is important to acknowledge that as the proportion of missing data increases to 10% and 30%, the misclassification error percentages also increase. This suggests a decline in the performance of these methods in the presence of missing data. Therefore, addressing and mitigating the effects of missing data is crucial for improving the performance of these methods in real-world applications.
In summary, the MALA algorithm with the hinge loss stands out as the method with consistently lower misclassification error rates. The logistic loss-based methods and the LMC-based methods demonstrate performance comparable to the mRRR method. However, it is necessary to carefully consider the impact of missing data on these methods' performance and to employ appropriate strategies to handle missing data effectively.

A real data study
In this section, we evaluate the performance of our proposed method on a real data set with multiple binary responses. To do this, we use the spider data set, which can be found in the R package mvabund [62]. The matrix of covariates, X ∈ R^{28×6}, includes information on 6 environmental features from 28 samples. The response matrix, Y ∈ R^{28×12}, contains count data representing the abundance of 12 hunting spider species across the 28 observations. We convert the response data into binary format by setting y_ij = −1 if no individual of species j survives in environment i, and y_ij = 1 otherwise. This results in a total of 154 negative ones (45.8%) and 182 positive ones (54.2%).
The data is divided randomly into two sets: a training set of 23 samples and a test set of 5 samples. We use the training data to run the methods and then evaluate their prediction accuracy on the test data. This process is repeated 100 times, each time with a different random partition into training and test data. The results of this procedure are illustrated in Figure 1. By repeating the procedure multiple times and averaging the results, we obtain a more robust and accurate assessment of the performance of the methods, accounting for potential variability in the data.
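This repeated random-split protocol can be sketched generically as follows; the `fit_predict` callable is a hypothetical stand-in for any of the compared methods (it takes training covariates/responses and test covariates, and returns {−1, 1} predictions):

```python
import numpy as np

def repeated_holdout_error(X, Y, fit_predict, n_test=5, n_reps=100, rng=None):
    """Repeated random train/test splits; returns the mean and standard
    deviation of the misclassification error across repetitions."""
    if rng is None:
        rng = np.random.default_rng()
    n = X.shape[0]
    errs = []
    for _ in range(n_reps):
        perm = rng.permutation(n)
        test, train = perm[:n_test], perm[n_test:]
        Y_hat = fit_predict(X[train], Y[train], X[test])  # entries in {-1, 1}
        errs.append((Y_hat != Y[test]).mean())
    errs = np.array(errs)
    return errs.mean(), errs.std()
```

Averaging over many random partitions is what yields the mean errors and the standard deviations reported in parentheses.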
The results in Figure 1 show that our proposed method, computed using the Metropolis-adjusted Langevin algorithm with the hinge loss ('MALA-H'), outperforms the frequentist mRRR method. The prediction errors for MALA-H and mRRR are 15.05% (±5.15%) and 24.96% (±7.85%), respectively. Other approaches, such as those using LMC, also perform well and are slightly better than the mRRR method. The MALA-logit approach, which utilizes the logistic loss, is slightly behind MALA-H with a prediction error of 16.47% (±5.63%). This suggests that MALA-H is the most effective method among those tested.
To further evaluate the considered methods, we also investigate their performance when missing data is present in the response matrix of the spider data set. To do this, we randomly remove 10%, 20%, and 30% of the entries in the response matrix. We denote by Ω the index set of the observed entries. We then run all the considered methods on the data set with incomplete responses and evaluate the prediction/misclassification error on the set of unobserved entries; this process is repeated 100 times. This allows us to assess how well the methods can handle missing data and make predictions even when certain information is missing. The results of this evaluation are presented in Figure 2. In general, the outcomes for incomplete responses are comparable to those observed for complete responses, implying that all the methods under consideration are capable of handling missing data. Furthermore, it is worth noting that MALA-H continues to outperform the other methods in incomplete-response situations.

Discussion and conclusion
In this paper, we investigated the problem of making predictions for multivariate binary responses using a set of covariates by exploring low-rank predictors through the lens of machine learning. We focused on providing the prediction error rate, which has not been addressed in previously published works. Our approach leverages methods from statistical learning theory for binary classification and does not require any assumptions about the underlying observations. Instead, we focus on a set of predictors and aim to find the one that results in the lowest prediction error. We propose a pseudo-Bayesian method, which is also able to handle incomplete response data.
Furthermore, we developed an efficient computational approximation method based on a gradient-based sampling technique known as Langevin Monte Carlo. By implementing this method, we were able to overcome the computational challenges often associated with this type of probabilistic approach, making our proposed approach more practical and applicable in real-world settings.
The numerical studies we conducted on simulated and real-world data sets have shown promising results when compared to the state-of-the-art method, further validating the effectiveness of our proposed approach.
Although our work offers a promising solution to the problem of relating a set of covariates to multiple binary responses, there is still room for further exploration and improvement. One avenue for future research is extending our method to incorporate variable selection, in order to identify the covariates most important in determining the binary responses. Additionally, future research could focus on addressing missing data in the covariate matrix X, which is a common issue in many real-world datasets. This would further improve the robustness and applicability of our proposed method.
An additional aspect that requires further attention in practical applications is the tuning of the learning rate λ and the parameter τ in the prior distribution. While we have presented values that yield favorable theoretical outcomes, these choices may not be optimal in practice, although they offer guidance regarding the magnitude of the tuning parameters. We have also mentioned that cross-validation can be employed in practical scenarios, albeit at the cost of increased computational time. It is worth highlighting that the optimal tuning of these parameters remains a challenging problem in practical settings. In particular, tuning the learning rate λ remains an open research question that has garnered significant attention within the framework of generalized Bayesian inference; see for example [63,64] and references therein.
Furthermore, when dealing with practical problems involving huge datasets, it has been observed that LMC algorithms may encounter scalability issues. To address this challenge, variational inference (VI) has emerged as an optimization-based alternative to Markov chain Monte Carlo techniques. VI has gained popularity for approximating intractable posterior distributions in large-scale Bayesian models due to its comparable effectiveness and superior computational efficiency. The connection between the PAC-Bayesian approach and variational inference has been elucidated in [30], while the development of variational inference for matrix completion has been explored in [44]. These references provide a roadmap for future research aiming to develop a more scalable computational approximation for our proposed approach.
We will make use of the following version of Bernstein's lemma, taken from [65, page 24].

Lemma 2. Let U_1, . . ., U_n be independent real-valued random variables. Let us assume that there are two constants v and w such that:

Firstly, we establish a general PAC-Bayesian bound for our problem as a preliminary step. Subsequently, the specific case necessary for obtaining the result stated in Theorem 1 will be examined.

Lemma 3. Assume that Assumption 2 is satisfied and that λ < 2nq/(C + 2). Then, for ǫ ∈ (0, 1), with probability at least 1 − ǫ:

Proof. Fix any M and put:

Thus, we can apply Lemma 2 with v := nqC[R(M) − R], w := 1 and ζ := λ/(nq). We obtain, for any λ ∈ (0, nq):

Then, using Fubini's theorem, we get:

Consequently, using Lemma 1:

Using Markov's inequality:

Taking the complement, we obtain with probability at least 1 − ǫ that:

As this holds for all ρ, the right-hand side can be minimized and, by Lemma 1, the minimizer over P(R^{p×q}) is ρ_λ. Thus we get, when λ < 2nq/(C + 2):

A.1 Proof of Theorem 1

Finally, we consider distributions ρ ∈ P(R^{p×q}) defined as translations of the prior π.

A.3 Proof for Theorem 4
We first start with preliminary lemmas.
Proof of Theorem 4. Assume that Assumption 2 is satisfied. We proceed as in the proof of Theorem 1; more specifically, arguing as in the proof of Lemma 3, we obtain, for ǫ ∈ (0, 1), with probability at least 1 − ǫ and for λ < 2m/(C + 2):

Then, we have:

Then, focusing on ρ = ρ_{M_B}, we use Lemma 5 to get, with probability at least 1 − 2ǫ:

Taking λ = 2m/(3C + 2) and the choice τ² = (p + q)/(2qpm ‖X‖_F²) leads to the result.

Figure 1: Results on real data.

Figure 2: Results on real data with missing data in the response matrix. Left: 10% of the entries are removed; middle: 20%; right: 30%.

Table 1: Summary of simulation settings.