1 Introduction

Multi-target regression (MTR), also known as multivariate or multi-output regression, refers to the task of predicting multiple continuous variables using a common set of input variables. Such problems arise in various fields, including ecological modeling (Kocev et al. 2009; Dzeroski et al. 2000) (e.g. predicting the abundance of plant species using water quality measurements), economics (Ghosn and Bengio 1996) (e.g. predicting stock prices from econometric variables) and energy (e.g. predicting energy production in solar/wind farms using historical measurements and weather forecast information). Given the importance and diversity of its applications, it is not surprising that research on this topic started as early as 40 years ago in Statistics (Izenman 1975).

Recently, a closely related task called multi-label classification (MLC) (Tsoumakas et al. 2010; Zhang and Zhou 2014) has received increased attention from Machine Learning researchers. Similarly to MTR, MLC deals with the prediction of multiple variables using a common set of input variables. However, the prediction targets in MLC are binary. In fact, the two tasks can be thought of as instances of the more general learning task of multi-target prediction, where targets can be continuous, binary, ordinal, categorical or even of mixed type. The baseline approach of learning a separate model for each target applies to both MTR and MLC. Moreover, they share the same core challenge of exploiting dependencies between targets (in addition to dependencies between targets and inputs) in order to improve prediction accuracy, as acknowledged by researchers working on both tasks (e.g. Izenman 2008; Dembczynski et al. 2012). Despite their commonalities, MTR and MLC have typically been treated in isolation and only a few works (Blockeel et al. 1998; Weston et al. 2002; Teh et al. 2005; Balasubramanian and Lebanon 2012) have given a general formulation of their key ideas, recognizing the dual applicability of their approaches.

Motivated by the tight connection between the two tasks, this paper looks at a family of MLC methods that, despite being almost directly applicable to MTR problems, have not been applied so far in this domain. In particular, we consider methods that decompose the MLC task into a series of binary classification tasks, one for each label. This category includes the typical one-versus-all or Binary Relevance approach, which assumes label independence, as well as approaches that model label dependencies by building models that treat other labels as additional input variables (meta-inputs). In this work we adapt two popular methods of this kind (Godbole and Sarawagi 2004; Read et al. 2011) for MTR, contributing two new MTR methods: Stacked single-target (SST) and Ensemble of Regressor Chains (ERC). Both methods have been very successful in the MLC domain and have inspired many subsequent works (Cheng and Hüllermeier 2009; Dembczynski et al. 2010; Kumar et al. 2012; Read et al. 2014).

Although the adaptation is trivial (as it basically consists of employing a regression instead of a binary classification algorithm to solve each single-target prediction task), it widens the applicability of existing approaches and increases our understanding of challenges shared by both learning tasks, such as the modeling of target dependencies. This kind of abstraction of key ideas from solutions tailored to related problems can sometimes offer additional advantages, such as improving the modularity and conceptual simplicity of learning techniques and avoiding reinvention of the same solutions.Footnote 1

In addition to evaluating the direct adaptations of the corresponding MLC methods in the MTR domain, we also take a careful look at the treatment of targets as additional input variables and spot a shortcoming that was overlooked in the original MLC formulations of both methods. Specifically, we notice that in both methods the values of the meta-inputs are generated differently at training and at prediction time, causing a discrepancy that is shown to drastically degrade their performance. To tackle this problem, we develop extended versions of the two methods that decrease the discrepancy by using out-of-sample estimates of the targets during training. These estimates are obtained via an internal cross-validation methodology.

The performance of the proposed methods is comprehensively analyzed based on a large experimental study that includes 18 diverse real-world datasets, 14 of which are used for the first time in this paper and are made publicly available for future benchmarks. The experimental results reveal that, affected by the discrepancy problem, the direct adaptations of the corresponding MLC methods fail to obtain better accuracy than the baseline approach that performs independent regressions. The extended versions, on the other hand, obtain consistent improvements over the baseline, confirming the effectiveness of the proposed solution. Furthermore, the extended versions of ERC obtain significantly better accuracy than state-of-the-art methods, including a method based on ensembles of multi-objective decision trees (Kocev et al. 2007) and a recent regularization-based multi-task learning method (Jalali et al. 2010, 2013). Moreover, compared to the rest of the methods, the extended versions of ERC carry the smallest risk of decreasing the accuracy of the baseline, an appealing property.

The rest of the paper is organized as follows: Sect. 2 presents the SST and ERC methods and describes the discrepancy problem and the proposed solution. Section 3 discusses related work from the MTR field, including well-known statistical procedures and multi-task learning methods, and points out differences with previous work on the discrepancy problem. The details of the experimental setup (method configuration, evaluation methodology, datasets) are given in Sect. 4, while Sect. 5 presents and discusses the experimental results. Finally, Sect. 6 offers our conclusions and outlines directions for future work.

2 Methods

We first formally describe the MTR task and provide the notation that will be used subsequently for the description of the methods. Let \(\mathbf {X}\) and \(\mathbf {Y}\) be two random vectors where \(\mathbf {X}\) consists of d input variables \(X_1,\ldots ,X_d\) and \(\mathbf {Y}\) consists of m target variables \(Y_1,\ldots ,Y_m\). We assume that samples of the form \((\mathbf {x,y})\) are generated i.i.d. by some source according to a joint probability distribution \(\mathbf {P}(\mathbf {X,Y})\) on \(\mathscr {X} \times \mathscr {Y}\) where \(\mathscr {X}=R^d\) Footnote 2 and \(\mathscr {Y}=R^m\) are the domains of \(\mathbf {X}\) and \(\mathbf {Y}\) and are often referred to as the input and the output space. In a sample \((\mathbf {x,y})\), \(\mathbf {x}=[x_1,\ldots ,x_d]\) is the input vector and \(\mathbf {y}=[y_1,\ldots ,y_m]\) is the output vector which are realizations of \(\mathbf {X}\) and \(\mathbf {Y}\) respectively. Given a set \(D=\{(\mathbf {x}^1,\mathbf {y}^1),\ldots ,(\mathbf {x}^n,\mathbf {y}^n)\}\) of n training examples, the goal in MTR is to learn a model \(\mathbf {h}:\mathscr {X}\rightarrow \mathscr {Y}\) that given an input vector \(\mathbf {x}\), is able to predict an output vector \(\hat{\mathbf {y}} = \mathbf {h(x)}\) that best approximates the true output vector \(\mathbf {y}\).

In the baseline Single-Target (ST) method, a multi-target model \(\mathbf {h}\) is comprised of m single-target models \(h_j:\mathscr {X} \rightarrow R\), where each model \(h_j\) is trained on a transformed training set \(D_j = \{(\mathbf {x}^{1},y_{j}^{1}),\ldots ,(\mathbf {x}^{n},y_{j}^{n})\}\) to predict the value of a single target variable \(Y_j\). This way, target variables are modeled independently and no attempt is made to exploit potential dependencies between them. Despite the simplicity of the ST approach, several empirical studies (e.g. Luaces et al. 2012) have shown that Binary Relevance, its MLC counterpart, often obtains performance comparable to more sophisticated MLC methods that model label dependencies, especially in cases where the underlying single-target prediction model is well fitted to the data (Dembczynski et al. 2012; Read and Hollmén 2014, 2015). A theoretical explanation of these results was offered by Dembczynski et al. (2012), who showed that modeling the marginal conditional distributions \(P(Y_i|\mathbf {x})\) of the labels (as done by Binary Relevance) can be sufficient for getting good results on multi-label losses whose risk minimizers can be expressed in terms of marginal distributions (e.g. Hamming loss).
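To make the ST baseline concrete, the following Python sketch trains one independent regressor per target. This is an illustration under the assumption of a scikit-learn environment, not the authors' Java/Mulan implementation; the helper name make_base and the use of bagged regression trees are our own choices, anticipating the base regressor used later in the experiments.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def make_base():
    # Bagged regression trees, anticipating the base regressor used in Sect. 5.
    return BaggingRegressor(DecisionTreeRegressor(), n_estimators=100)

class SingleTarget:
    """ST baseline: one independent single-target model per target variable."""
    def fit(self, X, Y):
        # Y has shape (n, m); train m single-target models, one per column of Y.
        self.models_ = [make_base().fit(X, Y[:, j]) for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        # Stack the m per-target predictions into an (n, m) output matrix.
        return np.column_stack([h.predict(X) for h in self.models_])
```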

2.1 Stacked single-target

Stacked single-target (SST) is inspired by the Stacked Binary Relevance method (Godbole and Sarawagi 2004), where the idea of stacked generalization (Wolpert 1992) was applied in an MLC context. The training of SST consists of two stages. In the first stage, m independent single-target models \(h_j:\mathscr {X} \rightarrow R\) are learned as in ST. However, instead of directly using these models for prediction, SST involves an additional training stage where a second set of m meta models \(h'_j: \mathscr {X} \times R^{m} \rightarrow R\) is learned, one for each target \(Y_j\). Each meta model \(h'_j\) is learned on a transformed training set \(D'_j=\{(\mathbf {x}'^{1},y^1_j),\dots ,(\mathbf {x}'^{n},y^n_j)\}\), where the original input vectors of the training examples (\(\mathbf {x}^{i}\)) have been augmented by estimates of the values of their target variables (\(\hat{y}^i_1,\ldots ,\hat{y}^i_m\)) to form expanded input vectors \(\mathbf {x}'^{i}=[\mathbf {x}^{i},\hat{y}^i_1,\ldots ,\hat{y}^i_m]\). These estimates are obtained by applying the first stage models to the examples of the training set.

To obtain predictions for an unknown instance \(\mathbf {x}^q\), the first stage models are applied first and an output vector \(\hat{\mathbf {y}}^q=[h_1(\mathbf {x}^q),\ldots ,h_m(\mathbf {x}^q)]\) is obtained. Then, the second stage models are applied on the transformed input vector \(\mathbf {x}'^{q} =[\mathbf {x}^q,\hat{\mathbf {y}}^q]\) to produce the final output vector \(\tilde{\mathbf {y}}^q = [h'_1(\mathbf {x}'^{q}),\ldots ,h'_m(\mathbf {x}'^{q})]\). The training and prediction procedures of SST are graphically illustrated in Fig. 1.
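A minimal sketch of this two-stage procedure (corresponding to the SST\(_{train}\) variant discussed later, where the meta-inputs at training time are in-sample estimates) might look as follows. For brevity it omits details such as excluding the predicted target from the meta-inputs, and the base regressor choice is again our own rather than the authors'.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def make_base():
    return BaggingRegressor(DecisionTreeRegressor(), n_estimators=100)

class StackedSingleTarget:
    def fit(self, X, Y):
        m = Y.shape[1]
        # Stage 1: m independent single-target models, exactly as in ST.
        self.stage1_ = [make_base().fit(X, Y[:, j]) for j in range(m)]
        # Meta-inputs for training: in-sample estimates from the stage-1 models.
        Y_hat = np.column_stack([h.predict(X) for h in self.stage1_])
        X_aug = np.hstack([X, Y_hat])
        # Stage 2: m meta models trained on the augmented input space.
        self.stage2_ = [make_base().fit(X_aug, Y[:, j]) for j in range(m)]
        return self

    def predict(self, X):
        # Apply stage 1 to obtain the meta-inputs, then stage 2 for the final outputs.
        Y_hat = np.column_stack([h.predict(X) for h in self.stage1_])
        X_aug = np.hstack([X, Y_hat])
        return np.column_stack([h.predict(X_aug) for h in self.stage2_])
```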

Fig. 1 Graphical illustration of SST’s training and prediction procedures

2.2 Ensemble of regressor chains

Regressor Chains (RC) is derived from Classifier Chains (Read et al. 2011), a recently proposed MLC method based on the idea of chaining binary models. The training of RC consists of selecting a random chain (permutation) of the set of target variables and then building a separate regression model for each target. Assuming that the chain \(C = \{Y_1 , Y_2 ,\ldots , Y_m\}\) (C represents an ordered set) is selected, the first model concerns the prediction of \(Y_1\), has the form \(h_1: \mathscr {X} \rightarrow R\) and is the same as the model built by the ST method for this target. The difference in RC is that subsequent models \(h_{j}, j>1\) are learned on transformed training sets \(D'_j = \{(\mathbf {x}'^{1}_j,y^{1}_{j}),\ldots ,(\mathbf {x}'^{n}_j,y^{n}_{j})\}\), where the original input vectors of the training examples have been augmented by the actual values of all previous targets of the chain to form expanded input vectors \(\mathbf {x}'^{i}_j = [x^i_1,\ldots ,x^i_d,y^{i}_{1},\ldots ,y^{i}_{j-1}]\). Thus, the model built for target \(Y_{j}\) has the form \(h_j: \mathscr {X} \times R^{j-1} \rightarrow R\).

Given such a chain of models, the output vector \(\hat{\mathbf {y}}^q\) of an unknown instance \(\mathbf {x}^q\) is obtained by sequentially applying the models \(h_j\), thus \(\hat{\mathbf {y}}^q = [h_1(\mathbf {x}^q),h_2(\mathbf {x}'^{q}_2),\ldots ,h_m(\mathbf {x}'^{q}_m)]\) where \(\mathbf {x}'^{q}_{j}= [x^q_1,\ldots ,x^q_d,\hat{y}^q_1,\ldots ,\hat{y}^q_{j-1}]\). Note that since the true values \(y^q_1,\ldots ,y^q_{j-1}\) of the target variables are not available at prediction time, the method relies on estimates of these values obtained by applying the models \(h_1,\ldots ,h_{j-1}\). The training and prediction procedures of RC are graphically illustrated in Fig. 2.
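The chaining idea can be sketched as follows (again an illustrative Python sketch rather than the authors' implementation). Note how the actual target values are appended during training, whereas only estimates are available during prediction, which is exactly the discrepancy discussed in Sect. 2.4.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

class RegressorChain:
    """One chain: the model for target j uses [x, y_1, ..., y_{j-1}] as inputs."""
    def __init__(self, order):
        self.order = list(order)  # a permutation of the target indices

    def fit(self, X, Y):
        self.models_ = []
        prev = np.empty((X.shape[0], 0))
        for j in self.order:
            h = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100)
            h.fit(np.hstack([X, prev]), Y[:, j])
            self.models_.append(h)
            # Training: the *actual* values of earlier targets in the chain are appended.
            prev = np.hstack([prev, Y[:, [j]]])
        return self

    def predict(self, X):
        preds = {}
        prev = np.empty((X.shape[0], 0))
        for h, j in zip(self.models_, self.order):
            y_hat = h.predict(np.hstack([X, prev]))
            preds[j] = y_hat
            # Prediction: only *estimates* of earlier targets are available.
            prev = np.hstack([prev, y_hat[:, None]])
        return np.column_stack([preds[j] for j in range(len(self.order))])
```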

Fig. 2 Graphical illustration of RC’s training and prediction procedures

One notable property of RC is that it is sensitive to the selected chain ordering. To alleviate this issue, Read et al. (2011) proposed an ensemble scheme called Ensemble of Classifier Chains, where a set of k Classifier Chains models with different random chains are built on bootstrap samples of the training set and the final predictions come from majority voting. This scheme has been shown to consistently improve the accuracy of a single Classifier Chain in the classification domain. We apply the same idea to RC and compute the final predictions by taking the mean of the k estimates for each target. The resulting method is called Ensemble of Regressor Chains (ERC).
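Building on the RegressorChain sketch above, ERC can be sketched as an average over k randomly ordered chains. No bootstrap sampling is shown here, matching the configuration described in Sect. 4.1 where the base regressor itself performs bootstrapping; the class and parameter names are ours.

```python
import numpy as np

class EnsembleOfRegressorChains:
    def __init__(self, m, k=10, seed=0):
        rng = np.random.default_rng(seed)
        # k chains, each trained with a different random permutation of the m targets.
        self.chains = [RegressorChain(rng.permutation(m)) for _ in range(k)]

    def fit(self, X, Y):
        for chain in self.chains:
            chain.fit(X, Y)
        return self

    def predict(self, X):
        # Final prediction: the mean of the k per-chain estimates for each target.
        return np.mean([chain.predict(X) for chain in self.chains], axis=0)
```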

2.3 Theoretical insights into stacking and chaining

Both stacking and chaining have enjoyed significant attention from the MLC community, mainly due to their high performance and conceptual simplicity. A number of recent works have attempted a theoretical analysis of the methods (Dembczynski et al. 2010, 2012; Read and Hollmén 2015). Adopting a statistical perspective, Dembczynski et al. (2010, 2012) distinguish between two types of label dependence:

  • unconditional, where \(P(\mathbf {Y}) \ne \prod _{i=1}^{m} P(Y_i)\); and

  • conditional, where \(P(\mathbf {Y}|\mathbf {x}) \ne \prod _{i=1}^{m} P(Y_i|\mathbf {x})\),

and show that modeling them is important for improving generalization performance. According to this analysis, stacking is interpreted as a method that models unconditional label dependence and is more suitable for minimizing label-wise decomposable multi-label loss functions,Footnote 3 while chaining is interpreted as a method that models conditional dependence and is more suitable for minimizing multi-label loss functions that cannot be decomposed label-wise.

Another interesting interpretation is offered by Read and Hollmén (2015), who show that Binary Relevance can (under certain conditions) achieve optimal performance on any dataset, and that improvements over the independent approach are often the result of using an inadequate base learner. Under this view, stacking and chaining can be considered ‘deep’ independent learners that owe their improved performance over Binary Relevance (when the same base learner is used) to the use of labels as nodes in the inner layers of a deep neural network. These nodes represent readily availableFootnote 4 (in the training phase), high-level transformations of the original inputs. This interpretation of stacking and chaining applies directly to the MTR versions of these methods that we present here.

From a bias-variance perspective, we observe that by introducing additional features to single-target models, SST and ERC have the effect of decreasing their bias at the expense of an increased variance. This suggests that whenever the increase in variance is outweighed by the decrease in bias, one should expect gains in generalization performance over ST. This also hints that both methods will probably benefit from being combined with a base regressor that includes a variance reduction mechanism like bagged (Breiman 1996) regression trees.Footnote 5 As shown in Munson and Caruana (2009), bagged trees not only ignore irrelevant features but can also exploit features that contain useful but noisy information. Both properties are very important in the context of SST and ERC because some of the extra features that they introduce might be irrelevant (e.g. whenever two target variables are statistically independent) and/or noisy (as discussed in the following subsection).

2.4 Generation of meta-inputs

Both SST and ERC are based on the same core idea of treating other prediction targets as additional input variables. These meta-inputs differ from ordinary inputs in the sense that, while their actual values are available at training time, they are missing during prediction. Thus, during prediction both methods have to rely on estimates of these values, which come either from ST (in the case of SST) or from RC (in the case of ERC) models built on the training set. An important question that is answered differently by each method is the following: what type of values should be used at training time for the meta-inputs? SST uses estimates of the variables obtained by applying the first stage models on the training examples, while ERC uses their actual values. We observe that in both cases a core assumption of supervised learning is violated: that the training and testing data should be independently and identically distributed. In the SST case, the in-sample estimates that are used to form the training examples of the second stage models will typically be more accurate than the out-of-sample estimates used at prediction time. The situation is even more problematic in the case of ERC, since the actual target values are used during training. In both cases, some of the input variables that are used by the underlying regression algorithm during model induction become noisy (or noisier, in the case of SST) at prediction time and, as a result, the induced model might overestimate their usefulness.

To mitigate this problem, we propose the use of out-of-sample estimates of the targets during training in order to increase the compatibility between the training values of the target variables and the values used during prediction. One way to obtain such estimates is to use a subset of the training set for building the first stage ST models (in the case of SST) or the RC models (in the case of ERC) and apply them to the held-out part. However, this approach would lead to reduced second stage training sets for SST as only the examples of the held-out set would be available for training the second stage models. The same holds for ERC where the chained RC models would be trained on training sets of decreasing size. The solution that we propose to this problem is the use of an internal f-fold cross-validation approach that allows obtaining out-of-sample estimates of the target variables for all the training examples. Compared to the actual target values or the in-sample estimates of the targets, the cross-validation estimates are expected to better resemble the values that are used during prediction. As a result, we expect that the contribution of the meta-inputs to the prediction of each target will be better estimated by the underlying regression algorithm.
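A sketch of how such out-of-sample meta-inputs could be generated for the SST\(_{cv}\) variant is shown below, using scikit-learn's cross_val_predict as a stand-in for the internal f-fold cross-validation described above (the authors' implementation is in Java/Mulan; for RC\(_{cv}\) the estimates would be generated analogously, but chained target by target).

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

def out_of_sample_meta_inputs(X, Y, f=10):
    """Out-of-sample estimates of every target for every training example,
    obtained with internal f-fold cross-validation. These estimates replace
    the in-sample estimates (SST) or actual values (ERC) as meta-inputs."""
    estimates = []
    for j in range(Y.shape[1]):
        h = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100)
        # Each training example is predicted by a model that did not see it.
        estimates.append(cross_val_predict(h, X, Y[:, j], cv=f))
    return np.column_stack(estimates)
```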

The training procedures of the extended SST (denoted as SST\(_{cv}\)) and RC (denoted as RC\(_{cv}\)) methods are outlined in Algorithms 1 and 3. ERC\(_{cv}\) consists of simply repeating the RC\(_{cv}\) procedure k times with different random chains. The corresponding prediction procedures are presented in Algorithms 2 and 4. Note that the prediction procedures of the original and the extended versions of each method coincide. In Sect. 5 we compare the performance of the extended versions of SST and ERC with the performance of the directly adapted variants, henceforth denoted as SST\(_{train}\) and ERC\(_{true}\). To better study the effects of the discrepancy problem, the comparison also includes SST using the actual target values (SST\(_{true}\)) and ERC using in-sample estimates of the target variables (ERC\(_{train}\)).

Algorithms 1–4 (pseudocode)

2.5 Discussion

Besides the type of values that each method uses for the meta-inputs at training time, SST and ERC have additional conceptual differences. A notable one is that the model built for each target \(Y_j\) by SST uses all other targets as inputs, while in RC each model involves only targets that precede \(Y_j\) in a random chain. As a result, the model built for \(Y_j\) by RC cannot benefit from statistical relationships with targets that appear later than \(Y_j\) in the chain. This potential disadvantage of RC is partially overcome by ERC, since each target is included in multiple random chains and, therefore, the probability that other targets will precede it is increased. At first glance, SST seems to represent a more straightforward way of including all the available information about other targets. However, we should take into account that, since both methods rely on estimates of the meta-inputs at prediction time (as discussed in the previous subsection), the more meta-inputs are included in the input space, the higher the risk of error accumulation at prediction time. From this perspective, ERC adopts a more cautious approach than SST. On the other hand, the estimates of the meta-inputs that are used by the second stage models in SST come from independent models, while the estimates of the meta-inputs used by each model in RC (and ERC) come from models that include information about other targets and thus involve a higher risk of becoming noisy. Overall, there seems to be a trade-off between using the additional information available in the targets and the noise that this information comes with. Which of the two methods (and which variant) achieves a better balance in this trade-off is revealed by the experimental analysis in Sect. 5.

2.6 Complexity analysis

In this section we discuss the time complexity of all variants of the proposed methods at training and test time, given a single-target regression algorithm with training complexity \(O(g_{tr}(n,d))\) and test complexity \(O(g_{te}(n,d))\) for a dataset with n examples and d input variables. The training and test complexities of the ST method are \(O(m {\cdot } g_{tr}(n,d))\) and \(O(m {\cdot } g_{te}(n,d))\) respectively, as it involves training and querying m independent single-target models.

With respect to SST, the method builds \(2 {\cdot } m\) models at training time, all of which are queried at prediction time. In all variants of the method, half of the models are built on the original input space and half are built on an input space augmented by m meta-inputs. Thus, in the case of SST\(_{true}\), where the meta-inputs are readily available, the training and test complexities are \(O(m {\cdot } (g_{tr}(n,d) {+} g_{tr}(n,d{+}m)))\) and \(O(m {\cdot } (g_{te}(n,d) {+} g_{te}(n,d{+}m)))\) respectively. Given that in most cases (see Table 3) the number of targets is much smaller than the number of inputs, i.e. \(m \ll d\), the effective training and test complexities of SST\(_{true}\) become \(O(m {\cdot } g_{tr}(n,d))\) and \(O(m {\cdot } g_{te}(n,d))\) respectively, i.e. the same as ST's complexities. SST\(_{train}\) and SST\(_{cv}\) have the same test complexity as SST\(_{true}\) but a larger training complexity because of the process of generating estimates for the meta-inputs. In the SST\(_{train}\) case, the training complexity is \(O(m {\cdot } g_{tr}(n,d) {+} m {\cdot } g_{te}(n,d))\) because the m first-stage models are applied to obtain estimates for all the training examples. For most regression algorithms (e.g. regression trees), the computational cost of making predictions for n instances is much smaller than the cost of training on n examples. For instance, the training complexity of a typical binary regression tree learner is \(O(n {\cdot } d^2)\) (Su and Zhang 2006) while the test complexity is \(O(n {\cdot } \log _2 d)\). Thus, in practice, the training complexity of SST\(_{train}\) is similar to that of SST\(_{true}\). When it comes to SST\(_{cv}\), in addition to the m first-stage models, f additional models per target are built on \(\frac{f-1}{f} {\cdot } n\) examples each. Therefore, the training complexity of SST\(_{cv}\) is \(O(m {\cdot } g_{tr}(n,d) + m {\cdot } f {\cdot } g_{tr}(\frac{f-1}{f} {\cdot } n,d) + m {\cdot } g_{te}(n,d)) \approx O(f {\cdot } m {\cdot } g_{tr}(n,d) {+} m {\cdot } g_{te}(n,d))\). Given that \(g_{te}(n,d) \ll g_{tr}(n,d)\), we conclude that the training complexity of SST\(_{cv}\) is roughly f times ST's training complexity. Also, note that SST\(_{train}\) and SST\(_{cv}\) can be parallelized stage-wise both at training and at prediction time, i.e. all single-target models within the same stage can be trained and queried independently, while SST\(_{true}\) is fully parallelizable at training time (all single-target models can be trained independently) and stage-wise parallelizable at test time.

In ERC, each RC model consists of a chain of m models built on input spaces augmented by \(0,1,\ldots ,m-1\) meta-inputs, i.e. \(\frac{m-1}{2}\) meta-inputs on average. In the case of ERC\(_{true}\) and for an ensemble size of k RC models, the training and test complexities are \(O(k {\cdot } m {\cdot } g_{tr}(n,d{+}\frac{m-1}{2}))\) and \(O(k {\cdot } m {\cdot } g_{te}(n,d{+}\frac{m-1}{2}))\) respectively. Given, as before, that \(m \ll d\), the training complexity of ERC\(_{true}\) becomes \(O(k {\cdot } m {\cdot } g_{tr}(n,d))\) and its test complexity becomes \(O(k {\cdot } m {\cdot } g_{te}(n,d))\), i.e. k times ST's complexity in both cases. Following a reasoning similar to the one used above for SST, we can show that the training complexity of ERC\(_{train}\) is similar to that of ERC\(_{true}\) and that the training complexity of ERC\(_{cv}\) is \(O(k {\cdot } f {\cdot } m {\cdot } g_{tr}(n,d))\), i.e. \(k {\cdot } f \) times ST's training complexity. Obviously, the test complexities of both ERC\(_{train}\) and ERC\(_{cv}\) are the same as ERC\(_{true}\)'s test complexity. With respect to parallelization, we observe that each member of an ERC\(_{train}\) or ERC\(_{cv}\) ensemble can be trained independently, while ERC\(_{true}\) is fully parallelizable at training time, i.e. all \(k {\cdot } m\) single-target models can be trained independently. For all ERC variants, test time parallelization is also possible since each ensemble member can be queried independently.
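Collecting the effective single-core complexities derived above (training first, then test), under the assumptions \(m \ll d\) and \(g_{te} \ll g_{tr}\):

$$\begin{aligned} &\text {ST:} &&O(m \, g_{tr}(n,d)), &&O(m \, g_{te}(n,d))\\ &\text {SST}_{true},\, \text {SST}_{train}\text {:} &&O(m \, g_{tr}(n,d)), &&O(m \, g_{te}(n,d))\\ &\text {SST}_{cv}\text {:} &&O(f \, m \, g_{tr}(n,d)), &&O(m \, g_{te}(n,d))\\ &\text {ERC}_{true},\, \text {ERC}_{train}\text {:} &&O(k \, m \, g_{tr}(n,d)), &&O(k \, m \, g_{te}(n,d))\\ &\text {ERC}_{cv}\text {:} &&O(k \, f \, m \, g_{tr}(n,d)), &&O(k \, m \, g_{te}(n,d)) \end{aligned}$$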

Table 1 Training and test complexities of the proposed methods with single- and multi-core implementations

Table 1 summarizes the training and test complexities of each method assuming a single-core implementation, as well as the minimum possible complexity when a multi-core implementation is used. Note that, as shown in the table, SST\(_{cv}\) and ERC\(_{cv}\) have the same multi-core complexity as SST\(_{train}\) and ERC\(_{train}\), respectively, because their internal cross-validation procedure can also be parallelized.

3 Related work

3.1 Multi-target regression

MTR was first studied in Statistics under the term multivariate regression with Reduced Rank Regression (RRR) (Izenman 1975), FICYREG (Merwe and Zidek 1980) and two-block PLS (Wold 1985) (the multiple response version of PLS) being three of the earliest methods. Among these methods, two-block PLS has been used more widely, especially in Chemometrics. More recently, the Curds and Whey (C&W) method was proposed (Breiman and Friedman 1997) and was found to outperform RRR, FICYREG and two-block PLS. As noted by Breiman and Friedman (1997), C&W, RRR and FICYREG can all be expressed using the same generic form \({\tilde{\mathbf{y}}= \mathbf{B} \hat{\mathbf{y}}}\), where \( {\hat{\mathbf{y}}}\) are estimates obtained by applying ordinary least squares regression on the target variables and \(\mathbf {B}\) is a matrix that modifies these estimates in order to obtain a more accurate prediction \({\tilde{\mathbf{y}}}\), under the assumption that the targets are correlated.

In all methods, \(\mathbf {B}\) can be expressed as \(\mathbf {B} = {\hat{\mathbf{T}}}^{-\mathbf{1}} \mathbf {D} {\hat{\mathbf{T}}}\), where \({\hat{\mathbf{T}}}\) is the matrix of sample canonical co-ordinates and \(\mathbf {D}\) is a diagonal “shrinking” matrix that is obtained differently in each method. SST is highly similar to these methods but allows a more general formulation of the MTR problem. Firstly, SST does not impose any restriction on the family of models that generate the uncorrected (first stage) estimates, in contrast to these approaches, which use estimates obtained from least squares regression. Secondly, the correction of the estimates applied by SST comes from a learning procedure that jointly considers target and input variables rather than target variables alone.

As shown by Breiman and Friedman (1997), the above methods can be described by an alternative but equivalent scheme. According to this, \(\mathbf {y}\) is first transformed to the canonical co-ordinate system \(\mathbf {y}' = {\hat{\mathbf{T}}} \mathbf {y}\), then separate least squares regression is performed on each \(\mathbf {y}'\) to obtain \({\hat{\mathbf{y}}'}\), these estimates are scaled by \(\mathbf {D}\) to obtain \({\tilde{\mathbf{y}}'} = \mathbf {D} {\hat{\mathbf{y}}'}\) and finally transformed back to the original output space \({\tilde{\mathbf{y}}} = {\hat{\mathbf{T}}^{-\mathbf{1}}} {\tilde{\mathbf{y}}'}\). As discussed by Dembczynski et al. (2012), from this perspective, these methods fall under a more general scheme where the output space is first transformed, single-target regressors are then trained on the transformed output space and an inverse transformation is performed (possibly along with shrinkage/regularization) to obtain predictions for the original targets. Due to its generality, this scheme has been adopted by a number of recent methods in both MLC (Hsu 2009; Zhang and Schneider 2011, 2012; Tai and Lin 2012) and MTR (Balasubramanian and Lebanon 2012; Tsoumakas et al. 2014).

A large number of MTR methods are derived from the predictive clustering tree (PCT) framework (Blockeel et al. 1998). The main difference between the PCT algorithm and a standard decision tree is that the variance and the prototype functions are treated as parameters that can be instantiated to fit the given learning task. One such instantiation for MTR tasks is multi-objective decision trees (MODTs), where the variance function is computed as the sum of the variances of the targets, and the prototype function is the vector mean of the target vectors of the training examples falling in each leaf (Blockeel et al. 1998, 1999). Bagging and random forest ensembles of MODTs were developed by Kocev et al. (2007) and were found to be significantly more accurate than MODTs and equally good or better than ensembles of single-objective decision trees for both regression and classification tasks. In particular, multi-objective random forests yielded better performance than multi-objective bagging.

Methods that deal with the prediction of multiple target variables can be found in the literature of the related learning task of multi-task learning. According to Caruana (1997), multi-task learning is a form of inductive transfer (Pratt 1992) where the aim is to improve generalization accuracy on a set of related tasks by using a shared representation that exploits commonalities between them. This definition implies that a multi-task method should be able to deal with problems where different prediction tasks do not necessarily share the same set of training examples or descriptive features and, moreover, each task can have a different data type. Thus, multi-task learning is actually a generalization of MTR.

Artificial neural networks (ANNs) are very well suited to multi-task problems because they can be naturally extended to support multiple outputs and offer flexibility in defining how inputs are shared between tasks. Thus, it is not surprising that most of the earliest multi-task methods were based on ANNs. Caruana (1994), for example, proposed a method where backpropagation is used to train a single ANN with multiple outputs (connected to the same hidden layers), and showed that it has better generalization performance compared to multiple single-task ANNs. A different architecture was used by Baxter (1995), where only the first hidden layers are shared and subsequent layers are specific to each task. The question of how much sharing is better when multi-task ANNs are applied to stock return prediction was explored by Ghosn and Bengio (1996), who concluded that partial sharing of network parameters is preferable to full or no sharing. More recently, Collobert and Weston (2008) applied a deep multi-task neural network architecture to natural language processing.

A large number of multi-task learning methods stem from a regularization perspective.Footnote 6 Regularization-based multi-task methods minimize a penalized empirical loss of the form \(\displaystyle \min _W \mathscr {L}(W)+\varOmega (W)\), where W is a parameter matrix that has to be estimated, \(\mathscr {L}(W)\) is an empirical loss calculated on the training data and \(\varOmega (W)\) is a regularization term that takes a different form in each method depending on the underlying task relatedness assumption. Most methods assume that all tasks are related to each other (Evgeniou and Pontil 2004; Ando and Zhang 2005; Argyriou et al. 2006, 2008; Chen et al. 2009, 2010a; Obozinski et al. 2010), while other methods assume that tasks are organized in structures such as clusters (Jacob et al. 2008; Zhou et al. 2011a), trees (Kim and Xing 2010) and graphs (Chen et al. 2010b). A well-studied category of methods, which are particularly useful when dealing with high-dimensional input spaces, assume that the models for different tasks share a common low-rank subspace and impose a trace-norm constraint on the parameter matrix (Argyriou et al. 2006, 2008; Ji and Ye 2009). A similar category of methods constrains all models to share a common set of features (thus performing joint feature selection), typically by applying \(L_1 / L_q\)-norm (\(q>1\)) regularization (Obozinski et al. 2010). An approach that relaxes this restrictive constraint, allowing models to leverage different extents of feature sharing, is proposed in Jalali et al. (2010, 2013).

Finally, we would like to mention that a number of MTR methods are based on the Gaussian Processes framework (e.g., Bonilla et al. 2007; Álvarez and Lawrence 2011). These methods capture correlations between tasks by appropriate choices of covariance functions. A nice review of such methods as well as their relations to regularization-based multi-task approaches can be found in Alvarez et al. (2011).

3.2 Discrepancy in meta-inputs

In the MLC domain, Senge et al. (2013a) studied how the discrepancy issue affects the performance of Classifier Chains and showed that longer chains (i.e. multi-label problems with more labels to be predicted) lead to a higher performance deterioration. In an extension of that work (Senge et al. 2013b), a “rectified” version of Classifier Chains (called Nested Stacking) was presented that uses in-sample estimates of the label variables for training, as in Stacked Binary Relevance. It was shown that this method performs better than the original Classifier Chains, especially when the label dependencies are strong. Following the opposite direction, Montañés et al. (2011) proposed AID, a method similar to Stacked Binary Relevance, and found that using the actual label values instead of (in-sample) estimates leads to better results for most multi-label evaluation measures in both AID and Stacked Binary Relevance.

Our work is the first to study this issue in the MTR domain.Footnote 7 The issue is studied jointly for SST and ERC, thus allowing general conclusions to be drawn for this type of methods. Furthermore, Montañés et al. (2011) and Senge et al. (2013b) compared only the use of actual target values with the use of in-sample estimates, while our comparison also includes the use of out-of-sample estimates obtained by a cross-validation procedure. Finally, Senge et al. (2013b) evaluate the use of estimates in Classifier Chains, whereas we focus on the ensemble version of the corresponding MTR method (ERC), which is expected to offer more resilience to error propagation, as discussed in Sect. 2.5.

4 Experimental setup

This section describes our experimental setup. We first present the participating methods and their parameters and provide details about their implementation in order to facilitate reproducibility of the experiments. Next, we describe the evaluation measure and explain the process that was followed for the statistical comparison of the methods. Finally, we present the datasets that we used and their main statistics.

4.1 Methods, parameters and implementation

The experimental evaluation includes all variants of the proposed SST and ERC methods, the ST baseline and the following state-of-the-art multiple prediction methods: (a) multi-objective random forest (MORF) (Kocev et al. 2007), (b) trace norm regularization for multi-task learning (TNR) (Argyriou et al. 2008), (c) the Dirty approach for multi-task learning (Jalali et al. 2010, 2013) and (d) a very recent multi-target method based on random linear combinations of the output space (RLC) (Tsoumakas et al. 2014). For easy reference, Table 2 lists all methods included in the evaluation along with their abbreviations and citations where appropriate.

Table 2 Methods used in experiments with abbreviations and citations

The proposed methods as well as ST and RLC transform the multi-target regression task into a series of single-target regression tasks, which can be dealt with using any standard regression algorithm. For most of the experiments, we use bagged regression trees as the base regressor. This choice was motivated in Sect. 2.3 and is further discussed in Sect. 5.1, where we present results using a variety of well-known linear and non-linear regression algorithms. The ensemble size of all ERC variants is set to \(k=10\) RC models, each one trained using a different random chain. In datasets with fewer than 10 distinct chains, we create exactly as many RC models as the number of distinct chains. Furthermore, since the base regressor involves bootstrap sampling, we do not perform sampling in ERC, i.e. each RC model is trained using all training examples. In SST, we exclude the target being predicted by each second stage model from the input space of that model, as we found that this choice slightly improves the performance of all variants of this method. We use \(f=10\) internal cross-validation folds in both SST\(_{cv}\) and ERC\(_{cv}\).

Concerning the parameter settings of the competing methods, in MORF we use an ensemble size of 100 trees and the values suggested by Kocev et al. (2007) for the rest of its parameters. In RLC, we generate \(r=100\) new target variables by combining \(k=2\) of the original target variables (after bringing them to the [0, 1] interval). As shown in Tsoumakas et al. (2014), these values lead to near optimal results. In TNR, we minimize the squared loss function using the accelerated gradient method for trace norm minimization (Ji and Ye 2009). The regularization parameter is tuned by selecting among the values \(\{10^r : r \in \{-3,\ldots ,3\}\}\) with internal 5-fold cross-validation. Before applying TNR, we apply z-score normalization and add a bias column, as suggested in Zhou et al. (2011b). Finally, Dirty is set up as suggested in Jalali et al. (2013): input variables are scaled to the \([-1,1]\) range by dividing them by their maximum values. The regularization parameters \(\lambda _b\) and \(\lambda _s\) are tuned via internal 5-fold cross-validation (as in TNR). As suggested in Jalali et al. (2013), we set \(\lambda _b = c \sqrt{\frac{m \log d}{n}}\), where \(c \in \{10^r : r \in \{-2,\ldots ,2\}\} \) is a constant. Each distinct value of \(\lambda _b\) is paired with five values of \(\lambda _s = \frac{\lambda _b}{1+ \frac{m-1}{4} i}, i \in \{0,1,2,3,4\}\), thus respecting the \(\frac{\lambda _s}{\lambda _b} \in [\frac{1}{m},1]\) relationship dictated by the optimality conditions. In total, 25 different combinations of \(\lambda _b\) and \(\lambda _s\) are evaluated.
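For clarity, the 25 \((\lambda _b, \lambda _s)\) combinations described above can be enumerated as in the following sketch (a hypothetical helper written by us, not part of MALSAR; the natural logarithm is assumed for \(\log d\)).

```python
import numpy as np

def dirty_lambda_grid(n, d, m):
    """Enumerate the 25 (lambda_b, lambda_s) pairs used to tune Dirty."""
    pairs = []
    for c in [10.0 ** r for r in range(-2, 3)]:          # c in {10^-2, ..., 10^2}
        lam_b = c * np.sqrt(m * np.log(d) / n)           # natural log assumed
        for i in range(5):                               # i in {0, 1, 2, 3, 4}
            lam_s = lam_b / (1 + (m - 1) / 4.0 * i)      # lam_s/lam_b in [1/m, 1]
            pairs.append((lam_b, lam_s))
    return pairs
```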

All the proposed methods and the evaluation framework were implemented in Java and integrated in MulanFootnote 8 (Tsoumakas et al. 2011) by expanding its functionality to multi-target regression. The implementations of all single-target regression algorithms that were used to instantiate the problem transformation methods are taken from Weka.Footnote 9 With respect to the competing methods, RLC was already integrated in Mulan, while for the purposes of this study we also integrated MORF (via a wrapper of the implementation offered in CLUSFootnote 10) as well as TNR and Dirty (via wrappers of the implementations offered in MALSAR (Zhou et al. 2011b)). Thus, all methods were evaluated under a common framework. In support of open science, we created a GitHub projectFootnote 11 that contains all our implementations, including code that facilitates easy replication of our experimental results.

4.2 Evaluation

The proposed methods aim at reducing the prediction error on every single target of an MTR problem. To measure the performance of an MTR model on each target variable we use the Relative Root Mean Squared Error (RRMSE). The RRMSE of a model \(\mathbf {h}\) that has been induced from a training set \(D_{train}\) is estimated on a test set \(D_{test}\) according to the following equation:

$$\begin{aligned} {\textit{RRMSE}}(\mathbf {h},D_{test})=\sqrt{\frac{\sum _{(\mathbf {x},\mathbf {y}) \in D_{test}} (\hat{y}_{j}-y_{j})^{2}}{\sum _{(\mathbf {x},\mathbf {y}) \in D_{test}} (\bar{Y}_{j}-y_{j})^{2}}} \end{aligned}$$
(1)

where \(\bar{Y}_{j}\) is the mean value of target variable \(Y_{j}\) over \(D_{train}\) and \(\hat{y}_{j}\) is the estimate produced by the MTR model \(\mathbf {h}\) for \(Y_{j}\). More intuitively, the RRMSE for a target is equal to the Root Mean Squared Error (RMSE) for that target divided by the RMSE of always predicting the average value of that target in the training set. RRMSE is estimated using k-fold cross-validation on all datasets, i.e. one RRMSE measurement is obtained on each fold and the final RRMSE is calculated as the average of those measurements. We use \(k=10\) on all datasets, except those with more than 9000 examples, where for computational reasons we use either \(k=5\) (rf1 and rf2) or \(k=2\) (scm1d and scm20d).Footnote 12
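In code, the per-target RRMSE defined above amounts to the following (an illustrative Python helper of ours, not part of the evaluation framework):

```python
import numpy as np

def rrmse(y_true_test, y_pred_test, y_train):
    """RRMSE for one target: the RMSE of the model's predictions on the test set,
    divided by the RMSE of always predicting the training-set mean of the target."""
    baseline = np.mean(y_train)                               # mean of Y_j over D_train
    num = np.sqrt(np.mean((y_pred_test - y_true_test) ** 2))  # model RMSE
    den = np.sqrt(np.mean((baseline - y_true_test) ** 2))     # mean-predictor RMSE
    return num / den
```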

To test the statistical significance of the observed differences between the methods, we follow the methodology suggested by Demsar (2006). To compare multiple methods on multiple datasets we use the Friedman test, the non-parametric alternative of the repeated-measures ANOVA. The Friedman test operates on the average ranks of the methods and checks the validity of the hypothesis (null-hypothesis) that all methods are equivalent. Here, we use an improved (less conservative) version of the test that uses the \(F_f\) instead of the \(\chi ^2_F\) statistic (Iman and Davenport 1980). When the null-hypothesis of the Friedman test is rejected (\(p < 0.01\)), we proceed with the Nemenyi post-hoc test, which compares all methods to each other in order to find which ones differ significantly. Instead of reporting the outcomes of all pairwise comparisons, we employ the simple graphical presentation of the test's results introduced by Demsar (2006), i.e. all methods being compared are placed on a horizontal axis according to their average ranks and groups of methods that are not significantly different (at a certain significance level) are connected (see Fig. 4 for an example). To generate such a diagram, a critical difference (CD) is calculated that corresponds to the minimum difference in average ranks required for two methods to be considered significantly different. The CD for a given number of methods and datasets depends on the desired significance level. Due to the known conservativeness of the Nemenyi test (Demsar 2006), we use a 0.05 significance level for computing the CD throughout the paper.
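The following sketch illustrates this statistical machinery (average ranks, the Iman–Davenport corrected Friedman statistic and the Nemenyi critical difference) using scipy; it is an illustration under our own assumptions about tie handling, not the code used for the reported results.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata, f as f_dist

def friedman_iman_davenport(scores):
    """scores: (N datasets x k methods) matrix of errors (lower is better).
    Returns the average ranks, the Iman-Davenport F_F statistic and its p-value."""
    N, k = scores.shape
    avg_ranks = np.apply_along_axis(rankdata, 1, scores).mean(axis=0)
    chi2_f, _ = friedmanchisquare(*[scores[:, j] for j in range(k)])
    f_f = (N - 1) * chi2_f / (N * (k - 1) - chi2_f)      # Iman & Davenport (1980)
    p_value = f_dist.sf(f_f, k - 1, (k - 1) * (N - 1))
    return avg_ranks, f_f, p_value

def nemenyi_cd(k, N, q_alpha):
    """Critical difference CD = q_alpha * sqrt(k(k+1)/(6N)) (Demsar 2006);
    q_alpha is the critical value for the chosen significance level and
    number of methods k, looked up from a table."""
    return q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
```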

As the above methodology requires a single performance measurement for each method on each dataset, it is not directly applicable to multi-target evaluation, where we have multiple performance measurements (one for each target) for each method on each dataset. One option is to take the average RRMSE (aRRMSE) across all target variables within a dataset as a single performance measurement. This choice, however, has the disadvantage that a very small or large error on a single target might dominate the average, thus obscuring performance differences at the target level. Another option is to treat the RRMSE of each method on each target as a different performance measurement. In this case, the Friedman test's assumption of independence between performance measurements might be violated. In the absence of a better solution, we perform a two-dimensional analysis (as done e.g. by Aho et al. 2012), where statistical tests are conducted both on aRRMSE (per dataset analysis) and on the RRMSE of each target treated as an independent performance measurement (per target analysis).

4.3 Datasets

Despite the numerous interesting applications of MTR, there are only a few publicly available datasets of this kind—perhaps because most applications are industrial—and most experimental evaluations of MTR methods are based on a limited number of datasets. For this study, much effort was devoted to composing a large and diverse collection of benchmark MTR datasets. In addition to 5 datasets that have been used in previous studies and are publicly available (edm, sf1, sf2, jura, wq), we also used 5 publicly available datasets (enb, slump, andro, osales, scpf) that have not been used for MTR benchmarking in the past. We also collected raw MTR data from a variety of interesting application domains and composed 8 new benchmark datasets (atp1d, atp7d, oes97, oes10, rf1, rf2, scm1d, scm20d). In total we collected 18 datasets, which we make publicly available for future studies.Footnote 13 To the best of our knowledge, this is the largest collection of benchmark MTR datasets to date.

Table 3 Name, source, number of examples, number of input variables (d) and number of target variables (m) of the datasets used in the evaluation

Table 3 reports the name (1st column), source (2nd column), number of examples (3rd column), number of input variables (4th column) and number of target variables (5th column) of each dataset. Detailed descriptions of all datasets are provided in “Appendix”.

5 Experimental analysis

In this section we present an extensive experimental analysis of the performance of the proposed methods. Section 5.1 is devoted to an exploration of the performance of ST using various well-known regression algorithms. The purpose of this investigation is to help us select an algorithm that works well on the studied datasets and use it as the base regressor in all problem transformation methods (ST, SST, ERC and RLC) in subsequent experiments. At the same time, a challenging baseline performance level is set for all multi-target methods. In Sect. 5.2 we evaluate SST\(_{train}\) and ERC\(_{true}\), the direct adaptations of the corresponding MLC methods, in order to see whether these variants obtain a competitive performance compared to ST and state-of-the-art multi-target methods. Next, in Sect. 5.3 all three meta-input generation variants (true, train, cv) of SST and ERC are evaluated and compared to ST, shedding light on the impact of the discrepancy problem on each method. After the best performing variants of each method have been identified, Sect. 5.4 compares them with the state-of-the-art. The running times of all methods are reported and compared in Sect. 5.5, and finally, the section ends with a discussion of the main outcomes of the experimental results (Sect. 5.6).

5.1 Base regressor exploration

In this subsection we explore the performance of ST on the studied domains using a variety of regression algorithms. The goal of this exploration is to help us identify a regression algorithm that performs well across many domains, thus setting a challenging baseline performance level for the multi-target methods that we study next. The algorithm that will emerge as the best performer will be used to instantiate all problem transformation methods (ST, SST, ERC and RLC) in the rest of the experiments, facilitating a fair comparison between these methods.

We selected five well-known linear and non-linear regression algorithms to couple ST with. In particular, we use: ridge regression (Hoerl and Kennard 1970) (ridge), regression trees (Breiman et al. 1984) (tree), L2-regularized support vector regression (Drucker et al. 1996) (svr), bagged (Breiman 1996) regression trees (bag) and stochastic gradient boosting (Friedman 2002) (sgb). In ridge and svr, the regularization parameter was tuned (separately for each target) by applying internal 5-fold cross-validation and choosing the value that leads to the lowest root mean squared error among \(\{10^r : r \in \{-4,\ldots ,2\}\}\). In bag we combine the predictions of 100 trees, while in sgb we boost trees with four terminal nodes using a small shrinkage rate (0.1) and a large number of iterations (100), as suggested by Friedman et al. (2001).

The detailed results obtained by each instantiation on each dataset and target are given in Appendix “Base regressor exploration results”. We observe that no algorithm is better in all domains (as dictated by the no free lunch theorems for supervised learning (Wolpert 1996, 2002)). However, ST-bag stands out obtaining the lowest aRRMSE in nine datasets. ST-sgb follows with five wins while ST-ridge and ST-svr each obtain the lowest error in two datasets. Figure 3 shows the average ranks of the different instantiations along with the results of the Friedman and the Nemenyi tests for the analysis per dataset (left) and per target (right). In both analyses, the lowest average rank is obtained by ST-bag, followed by ST-sgb and ST-ridge. In the per dataset analysis, the Nemenyi test finds that ST-bag is significantly better than ST-tree and ST-svr while in the per target analysis, ST-bag is found significantly better than all the other instantiations. Therefore, we use bag as the base regressor for all problem transformation methods in the rest of the experiments.

Fig. 3 Comparison of different ST instantiations using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a Per dataset analysis. b Per target analysis

5.2 Evaluation of direct adaptations

In this subsection we focus on SST\(_{train}\) and ERC\(_{true}\), the versions of SST and ERC that use the same type of values for the meta-inputs as their MLC counterparts, and compare their performance to that of ST, MORF, RLC, TNR and Dirty to see where these methods stand with respect to the state-of-the-art.

Figure 4 shows the average ranks of the methods along with the results of the Friedman and the Nemenyi tests when the analysis is performed per dataset (left) and per target (right).Footnote 14 Several interesting remarks can be made based on these results. First, we see that both SST\(_{train}\) and ERC\(_{true}\) are competitive with state-of-the-art methods. SST\(_{train}\) obtains the lowest average rank in both the per dataset and the per target analysis. In the per dataset analysis, it is found significantly better than TNR and Dirty and similar to MORF and RLC, and in the per target analysis it is found better than TNR, Dirty and MORF and similar to RLC. ERC\(_{true}\) performs worse than SST\(_{train}\) but is still ranked above TNR, Dirty and MORF in the per dataset analysis and above TNR and Dirty in the per target analysis. In the per dataset analysis, ERC\(_{true}\) is found significantly better than TNR and Dirty and similar to MORF and RLC, while in the per target analysis it is found better than TNR and Dirty, similar to MORF, and only RLC outperforms it significantly.

Fig. 4 Comparison of direct adaptations (with bag as base regressor) using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a Per dataset analysis. b Per target analysis

Interestingly, however, we see that according to both the per dataset and the per target analysis, SST\(_{train}\) and ERC\(_{true}\) are not significantly better than ST. This is an indication that the use of targets as meta-inputs, as implemented by these variants of SST and ERC, does not bring significant improvements. Actually, as can be seen from the detailed results, both SST\(_{train}\) and ERC\(_{true}\) perform worse than ST in several cases. This issue is studied in more detail in the following subsection.

Perhaps even more interestingly, none of the state-of-the-art multi-target methods participating in this comparison manages to significantly improve the performance of ST. In fact, ST is ranked second after SST\(_{train}\) in the per dataset analysis and third after SST\(_{train}\) and RLC in the per target analysis, and is found significantly better than TNR and Dirty in both types of analyses. This exceptionally good performance of ST might seem a bit surprising given the results of previous studies (e.g. Kocev et al. 2007; Tsoumakas et al. 2014) but is in accordance with empirical and theoretical results for Binary Relevance (as discussed in Sect. 2) and is attributed to the use of a very strong base regressor.

To validate this, we instantiate all problem transformation methods with ridge, a base regressor that was found to perform worse than bag in Sect. 5.1, and repeat the comparison. As shown in Fig. 5, the situation is quite different compared to when bag was used as the base regressor. We observe that ST is now ranked below MORF, RLC and ERC\(_{true}\) in both the per dataset and the per target analysis and is found significantly worse than MORF according to the per target analysis. Clearly, as the strength of the base regressor increases, i.e. when the information provided by the features is well exploited, improving the performance of ST becomes more difficult. However, it is in this challenging setting that performance improvements matter the most, and it is thus interesting to see whether the proposed extensions of SST and ERC manage to obtain more consistent improvements over ST (compared to SST\(_{train}\) and ERC\(_{true}\)) under this setting.

Fig. 5 Comparison of direct adaptations (with ridge as base regressor) using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a Per dataset analysis. b Per target analysis

5.3 Evaluation of meta-input generation variants

In this subsection we evaluate the performance of SST and ERC when different types of values are used for the meta-inputs at training time. In particular, each method is evaluated using the actual target values (true variants), in-sample estimates (train variants) and out-of-sample estimates (cv variants) generated using the proposed internal cross-validation strategy. We want to see whether the cv variants (which, according to the discussion of Sect. 2.4, are expected to be less affected by the discrepancy problem) can indeed perform better than the train and true variants and whether they manage to obtain more consistent improvements over ST. We also want to see how the SST variants compare to the ERC variants.

Fig. 6 Comparison of SST variants using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a SST, per dataset analysis. b SST, per target analysis

Figures 6 and 7 show the average ranks and the results of the Friedman and Nemenyi tests for the three variants of SST and ERC, respectively, according to the per dataset (left) and the per target (right) analysis. First, we see that in both SST and ERC and in both types of analyses, the variants that use the actual values of the targets (true) obtain the worst average ranks and are found significantly worse than both variants that use estimates (train and cv). Since the variants of each method differ only with respect to the type of values that they use for the meta-inputs, it is clear that the discrepancy problem has a significant impact on the performance of both SST and ERC and that the use of estimates can ameliorate this problem.

With respect to the kind of estimates that should be used (in-sample or out-of-sample), the situation is slightly different for each method. In the case of SST, the cv variant obtains the best average rank in both the per dataset and the per target analysis, and its difference with the train variant is found significant in the per target analysis. In the case of ERC, while the cv variant is ranked higher than the train variant in the per target analysis, the train variant is ranked slightly higher in the per dataset analysis, and in both cases the differences are not found significant. This suggests that using out-of-sample estimates is important for SST, while ERC seems to be less affected by the discrepancy problem and, as a result, the use of in-sample estimates can be considered a viable alternative.

Fig. 7

Comparison of ERC variants using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a ERC, per dataset analysis. b ERC, per target analysis

A question that has not yet been answered is how the new variants of SST and ERC compare to ST and to each other. Figure 8 shows the results of the Friedman and the Nemenyi tests when all variants of SST and ERC are compared together with ST. We see that in both the per dataset (left) and the per target (right) analysis, the four variants that use estimates for the meta-inputs obtain better (lower) average ranks than ST, while the true variants obtain worse average ranks. The differences with ST are not found significant according to the per dataset analysis, but according to the per target analysis ERC\(_{train}\) and ERC\(_{cv}\) are found significantly better. Comparing the SST variants with the ERC variants, we see that each ERC variant is always ranked above the corresponding SST variant. This suggests that ERC’s strategy for leveraging information from target variables is beneficial. Moreover, we see that ERC\(_{train}\) and ERC\(_{cv}\) are found significantly better than the rest of the methods according to the per target analysis.

Fig. 8

Comparison of SST\(_{true/train/cv}\), ERC\(_{true/train/cv}\) and ST using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a Per dataset analysis. b Per target analysis

5.3.1 Cautiousness analysis

So far, our analysis has focused on the average performance of the proposed methods (as quantified by their average ranks over datasets and targets) and we found that ERC\(_{train}\) and ERC\(_{cv}\) outperform the independent regressions baseline significantly. However, it is also important to examine the consistency of these improvements across different datasets and targets. In particular, we would like to study the degree of cautiousness that each method exhibits, i.e. how frequently and to what extent the predictions produced by each method are less accurate than those of ST.

To facilitate a comparison of the methods in this regard, the following measures are defined:

$$\begin{aligned} R_d(M) &= \frac{aRRMSE(ST)}{aRRMSE(M)},\\ R_t(M) &= \frac{RRMSE(ST)}{RRMSE(M)}. \end{aligned}$$

For each method M and dataset d, \(R_d\) quantifies the amount of improvement or degradation induced by M compared to ST in terms of aRRMSE. Similarly, for each method M and target t, \(R_t\) quantifies the amount of improvement or degradation compared to ST in terms of RRMSE. Values of \(R_d(M)\) and \(R_t(M)\) \(<1\) indicate that the method produces worse estimates than ST (\(R_d(ST)=R_t(ST)=1\)). The upper part of Fig. 9 displays box plots of the values of \(R_d\) over the 18 datasets included in the experimental study, i.e. each box plot summarizes the distribution of 18 values, while the lower part displays box plots of the values of \(R_t\) over 143 targets, i.e. each box plot summarizes the distribution of 143 values.
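As a minimal illustration, the following sketch computes these ratios, assuming the standard definition of RRMSE (the model's root squared error relative to that of predicting the target's mean) and with aRRMSE averaging RRMSE over the targets of a dataset; the function names are ours.

```python
import numpy as np

def rrmse(y_true, y_pred, y_ref_mean):
    """Relative RMSE: model error relative to predicting the target's mean."""
    return np.sqrt(np.sum((y_true - y_pred) ** 2) /
                   np.sum((y_true - y_ref_mean) ** 2))

def arrmse(Y_true, Y_pred, ref_means):
    """Average RRMSE over the target columns of a dataset."""
    return np.mean([rrmse(Y_true[:, j], Y_pred[:, j], ref_means[j])
                    for j in range(Y_true.shape[1])])

def R_d(arrmse_st, arrmse_m):
    """Per-dataset ratio; values above 1 mean method M improves on ST."""
    return arrmse_st / arrmse_m

def R_t(rrmse_st, rrmse_m):
    """Per-target ratio; values above 1 mean method M improves on ST."""
    return rrmse_st / rrmse_m
```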

Fig. 9

Cautiousness analysis of SST and ERC variants. On each box, the whiskers extend to the most extreme data points still within 1.5 IQR of the lower quartile and outliers are plotted individually. a Distribution of \(R_d\) values for each method over 18 datasets. b Distribution of \(R_t\) values for each method over 143 targets

We see that in both the per dataset and the per target analysis, the true variants are the ones exhibiting the most dispersed distributions, with several cases of significant degradation of ST’s performance. The train and cv variants are clearly more cautious, with far fewer cases of degradation and even fewer cases of significant degradation. Looking at the distributions of \(R_t\), we could say that the cv variants appear a bit more cautious than the train variants, especially in the case of SST. We also see that the ERC variants are always more cautious than the corresponding SST variants. Clearly, ERC\(_{train}\) and ERC\(_{cv}\) are the two most cautious methods, since they obtain very similar or better performance than ST on all datasets and on about 75 % of the targets. Even on targets where the two methods obtain lower performance than ST, the reduction is less than about 5 %. This characteristic, along with the fact that they obtain the largest average improvements over ST, makes ERC\(_{train}\) and ERC\(_{cv}\) highly appealing.

5.4 Comparison with the state-of-the-art

In this section we compare the three best performing variants of the proposed methods, i.e. ERC\(_{cv}\), ERC\(_{train}\) and SST\(_{cv}\), with the state-of-the-art methods MORF, RLC, TNR and Dirty. Figure 10 shows the results of the Friedman and Nemenyi tests for the analysis per dataset (left) and per target (right). The per dataset analysis shows that all our methods perform significantly better than TNR and Dirty, while ERC\(_{cv}\) and ERC\(_{train}\) also perform significantly better than MORF. Moreover, all our methods obtain a lower average rank than RLC, but according to this analysis the differences are not significant. According to the per target analysis, all our methods are found significantly better than TNR, Dirty and MORF, and additionally, ERC\(_{cv}\) and ERC\(_{train}\) are found significantly better than RLC. In Fig. 11 we compare the performance of the methods from a cautiousness perspective, as we did in Sect. 5.3. TNR, Dirty and MORF are far less cautious than SST\(_{cv}\), ERC\(_{train}\) and ERC\(_{cv}\), with many instances of extreme degradation of ST’s performance. RLC is more cautious, but not as much as SST\(_{cv}\), ERC\(_{train}\) and ERC\(_{cv}\), especially according to the per target analysis.

Fig. 10

Comparison of the best SST and ERC variants with the state-of-the-art using the Nemenyi test. Groups of methods that are not significantly different (at \(p=0.05\)) are connected. a Per dataset analysis. b Per target analysis

Fig. 11

Comparing the cautiousness of the best SST and ERC variants to that of state-of-the-art methods. On each box, the whiskers extend to the most extreme data points still within 1.5 IQR of the lower quartile and outliers are plotted individually. a Distribution of \(R_d\) values for each method over 18 datasets. b Distribution of \(R_t\) values for each method over 143 targets

5.5 Running times

In this subsection we compare the running times of the studied methods. Experiments were run on a 64-bit CentOS Linux machine with 80 Intel Xeon E7-4860 processors running at 2.27 GHz and 1 TB of main memory. The detailed results per method and dataset are shown in Table 4. For ST, RLC, SST and ERC we report times with bag as base regressor. The number shown in parenthesis next to the name of each dataset corresponds to the maximum number of processor threads that were available during the experiment. ST, SST, ERC and RLC made use of multiple threads through Weka’s multi-threaded implementation of Bagging, so their running times are directly comparable. Multi-threading was also partly used in TNR for the computation of the gradients. Dirty and MORF, on the other hand, always used a single processor thread.

Looking at the aggregated running times, we see that MORF is by far the most efficient method, followed by ST, SST\(_{true}\) and SST\(_{train}\), which have similar running times. On the other hand, Dirty is the least efficient method, followed by ERC\(_{cv}\). The running times of the rest of the methods lie in between. With respect to the SST and ERC variants, we see that their running times agree with the complexity analysis of Sect. 2.6. The total running time of SST\(_{true}\) is roughly twice that of ST and similar to that of SST\(_{train}\). SST\(_{cv}\) is the least efficient among the SST variants, with a total running time that is about 5 times larger than that of SST\(_{true}\) and SST\(_{train}\). With respect to the ERC variants, we see that ERC\(_{true}\) and ERC\(_{train}\) have similar total running times (which are also roughly similar to the total running time of SST\(_{cv}\)), while ERC\(_{cv}\) is about 7.5 times slower than these two.

Overall, we see that the improvements achieved by ERC\(_{cv}\) and ERC\(_{train}\) over ST come with an increased computational cost. However, this cost is manageable especially in the case of ERC\(_{train}\). Furthermore, when better efficiency is needed, besides the use of parallelization one might consider reducing the ensemble size (k) or using a smaller number of folds (f) when applying internal cross-validation (in ERC\(_{cv}\)).

Table 4 Running times (in s) using bag as base regressor

5.6 Discussion

Several interesting conclusions can be drawn from our experimental results. The experiments of Sects. 5.2 and 5.3 showed that while the directly adapted versions of SST and ERC perform comparably to or better than state-of-the-art methods, careful handling of the discrepancy problem is crucial for obtaining consistent improvements over the independent regressions baseline and the state-of-the-art. In particular, as the experiments of Sect. 5.3 revealed, the use of estimates for the meta-inputs during training should clearly be preferred over using the actual target values. With regard to using in-sample versus out-of-sample estimates, the results indicate that while out-of-sample estimates are preferable in SST, ERC performs almost equally well using either type of estimates for the meta-inputs. As discussed in Sect. 2.5, ERC’s models are built on input spaces that are expanded with fewer meta-inputs compared to SST’s models and, as a result, less error accumulation is risked at prediction time.
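As a back-of-the-envelope illustration of this last point, assuming that SST's second-stage models are expanded with estimates of all m targets while the model at position i of a single regressor chain is expanded only with the estimates of the i-1 preceding targets (the hypothetical value of m below is arbitrary):

```python
m = 8  # hypothetical number of targets

# Number of meta-inputs seen by each model under the two schemes.
sst_meta_per_model = [m] * m             # every SST model: m meta-inputs
chain_meta_per_model = list(range(m))    # chain positions: 0, 1, ..., m - 1

print(sum(sst_meta_per_model) / m)       # m meta-inputs on average
print(sum(chain_meta_per_model) / m)     # (m - 1) / 2 meta-inputs on average
```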

Another interesting conclusion is that when a strong base regressor is employed, improving on the performance of ST becomes very difficult. As a result, multi-target methods which are considered state-of-the-art fail to improve ST’s performance and even perform significantly worse. This was particularly the case for the two multi-task methods, TNR and Dirty, which were consistently found to be the worst performers. One explanation for their poor performance is that both methods are based on a linear formulation of the problem which, as revealed by the base regressor exploration experiments, is not the most suitable hypothesis representation for the studied datasets (ridge and svr performed worse than sgb and bag, which are based on a non-linear hypothesis representation). Moreover, multi-task methods are expected to work better than single-task methods when there is a lack of training data for some of the tasks (Alvarez et al. 2011). This is not the case for most of the datasets used in this study, nor for many recent multi-target prediction problems. In fact, the two datasets where TNR and Dirty perform better than ST (sf1 and slump) are among those with the fewest training examples.

With respect to MORF, although it was found significantly more competitive than TNR and Dirty, it also performed worse than ST on average. Nevertheless, we should point out that MORF achieved the best accuracy on three datasets (edm, wq, andro) and is the most computationally efficient of the compared methods. Similarly to TNR and Dirty, MORF has the disadvantage of a fixed hypothesis representation (trees), as opposed to the proposed methods, which can adapt better to a specific domain by being instantiated with a more suitable base regressor. This advantage of the proposed methods is shared with RLC which, however, was not found to be as accurate.

Overall, our experimental results demonstrate that, of the methods proposed in this paper, ERC\(_{train}\) and ERC\(_{cv}\) and, to a lesser extent, SST\(_{train}\) and SST\(_{cv}\) provide increased accuracy over performing a separate regression per target. In addition, ERC\(_{train}\) and ERC\(_{cv}\) are significantly more accurate than TNR, Dirty, MORF and RLC (in the per target analysis). If caution is a further concern, then again ERC\(_{train}\) and ERC\(_{cv}\) compare favorably to the rest of the methods. With respect to the true variants of SST and ERC, we should stress that, despite having a worse average performance, they are worth considering by a practitioner, as they obtain the highest performance on datasets (e.g., sf1 and scfp) where the discrepancy problem is not predominant.

6 Conclusion

Motivated by the similarity between the tasks of multi-label classification and multi-target regression, this paper introduced SST and ERC, two new multi-target regression techniques derived through a simple adaptation of two well-known multi-label classification methods. Both methods are based on the idea of treating other prediction targets as additional input variables, and represent a conceptually simple way of exploiting target dependencies in order to improve prediction accuracy.

A comprehensive experimental analysis that includes a multitude of real-world datasets and four existing state-of-the-art methods reveals that, despite being competitive with the state-of-the-art, the directly adapted versions of SST and ERC do not manage to obtain significant improvements over the independent regressions baseline and can even degrade its performance. This degradation is attributed to an underestimation (in the original formulations of the methods) of the impact of the discrepancy between the values used for the additional input variables at training and at prediction time. Confirming our hypothesis, extended versions of the methods that mitigate the discrepancy by using out-of-sample estimates of the targets during training manage to obtain consistent and significant improvements over the baseline approach and are found significantly better than four state-of-the-art methods. The fact that these impressive results were obtained by applying relatively simple adaptations of existing multi-label classification methods highlights the importance of exploiting relationships between similar machine learning tasks.

To conclude, let us point to some directions for future work. Although mitigating the discrepancy problem leads to significant performance improvements, a different amount of mitigation is ideal for each target. As a result, the use of in-sample estimates (or even the actual target values) gives better results for some targets. Thus, a promising direction for future work would be a deeper theoretical analysis of the different variants and the identification of problem characteristics that favor the use of one variant over the others. Finally, we should point out that SST and ERC can be viewed as strategies for leveraging variables that are available at training time but not at prediction time. This type of scenario is very common, for instance, in time series prediction. We believe that adapting SST and ERC for this type of problem is another valuable opportunity for future work.