Abstract
In this paper, we address the problem of building a decision-theoretic classifier tailored for minimizing the Tversky loss under the framework of multi-label classification. The proposed approach is a generalization of the Dembczyński \(F_{\beta }\) measure optimization algorithm. The introduced technique is based on a series of discrete linear approximations of the Tversky measure. The approximated criterion is then optimized using the original optimization algorithm. To assess the quality of classification results produced by the designed strategy and compare its outcome with the results obtained by state-of-the-art approaches, we conducted an experimental study on 24 benchmark datasets. The investigated methods were compared with respect to eleven different quality criteria belonging to three main groups, i.e., example-based, micro-averaged and macro-averaged. During the experimental study, we considered four testing scenarios. Two of them deal with a symmetric variant of the Tversky loss; the remaining scenarios examine the asymmetric Tversky loss. The study shows that, in general, the proposed method is comparable to the Dembczyński approach. However, for both symmetric scenarios and one asymmetric scenario, the average ranks suggest that the proposed approach achieves better classification quality in terms of the example-based Tversky measure. This is an important result because the proposed method was designed to optimize the above-mentioned quality indicator. Additionally, the introduced procedure can outperform the reference methods with respect to the zero-one loss under all testing scenarios.
Introduction
Traditional single-label classification is concerned with learning from a set of examples, each of which is assigned a label (class) from a disjoint set of labels. In other words, only one label is relevant to a given object. Nevertheless, the assumption that labels are disjoint does not always hold. This issue emerges in many real-life recognition tasks: for example, a photograph may be tagged with such labels as lake, forest, sky and mountains. This label set constitutes a complete description of the object, and omitting one of the labels in the classifier outcome must be considered a classification error. Consequently, traditional single-label classification methods cannot be directly employed to solve a problem which violates the assumption that classes are disjoint. Strictly speaking, they are capable of predicting only a single category per object, which in the case of multi-label data is insufficient. A solution to this problem is to employ the multi-label (ML) classification framework, which can be seen as a generalization of the classical recognition task [20, 49]. In multi-label recognition, it is assumed that an object may be simultaneously assigned to more than one class. What is more, multi-label learning also considers two special cases: an object belongs to all possible labels or to none of them (the interpretation of these cases is specific to the domain of the considered task).
Multi-label learning is employed in a variety of practical applications, the most widespread being text classification [29, 30] and multimedia classification, including classification of video objects [12], images [4, 57] and music [43]. Another important field of application is bioinformatics, where multi-label classification is a powerful tool for the prediction of gene functions [44], protein functions [55, 56] or drug resistance [24], to name only a few. Nowadays, multi-label classification is becoming more and more common in the machine learning community. This growth is mainly caused by an increasing amount of data that can be effectively modelled using the multi-label framework [19]. A great example of this phenomenon is the growth in the number of protein and nucleotide sequences stored in EMBL databases [58].
This paper is aimed at providing a flexible, effective and efficient classification procedure tailored to optimize the Tversky measure. We focus on the Tversky measure because it is a far more general quality indicator than commonly used loss functions such as the \(F_{\beta }\) measure, the Jaccard measure, the zero-one loss, the false discovery rate or the false negative rate. Namely, all the aforesaid quality indicators can be expressed in terms of the Tversky loss by setting proper values of its parameters. Consequently, building an effective classifier aimed at the optimization of this measure provides a general tool that can cover many user-specific loss criteria. To achieve our goal, we propose a generalization of the method described by Dembczyński in [8]. The introduced technique approximates the Tversky measure using a set of discrete linear functions. The task described by the linear approximation is solved using the inner–outer minimization approach in a way analogous to the solution proposed in the original approach.
The proposed method was also experimentally compared to the reference methods. The experimental procedure employs 24 benchmark datasets and 11 quality criteria belonging to three main groups, i.e., example-based, micro-averaged and macro-averaged. During the experimental study, we considered four testing scenarios. Two of them deal with a symmetric variant of the Tversky loss; the remaining scenarios examine the asymmetric Tversky loss. The conducted experimental study provides empirical evidence that the proposed method can outperform the reference algorithms in terms of the example-based Tversky measure and the zero-one loss.
The paper is organized as follows. Section 2 provides a description of the work related to the topic of this paper. Section 3 introduces a formal description of the proposed method. Section 4 describes the experimental setup. The obtained results are presented and discussed in Sect. 5. The paper is concluded in Sect. 6.
Related work
Multi-label classifiers predict a vector response. Due to the complexity of the output structure of multi-label models, it is possible to evaluate the quality of multi-label classification using many different criteria such as the Hamming distance, the zero-one subset loss [10] or the \(F_{\beta }\) measure [41]. The criteria also differ in the way they combine well-known single-label quality measures to produce a multi-label quality criterion. Namely, we can distinguish three possible ways: example-based, macro-averaged and micro-averaged [33]. Algorithms are usually designed to optimize a chosen quality measure, and a classifier designed to optimize one quality criterion is usually suboptimal under another [10]. In this study, our focus is on algorithms tailored to maximize classification quality expressed in terms of an asymmetric information retrieval measure known as the Tversky measure [53]. The Tversky measure is more general than the \(F_{\beta }\) measure, so building an effective algorithm dedicated to this function allows us to express and optimize a wider range of quality criteria such as the \(F_{\beta }\) measure, the Jaccard measure [26] or the zero-one subset loss [10] (relations between the aforesaid measures are discussed in Sect. 3.1). Multi-label approaches aimed at the optimization of the aforesaid quality criteria (including the Tversky measure) can basically be divided into empirical utility maximization methods and decision-theoretic methods [35].
The empirical utility maximization approaches build classifiers designed to obtain the optimal value of a quality measure defined on the learning set. Learning these models is usually done by determining values of the model parameters that optimize the quality criterion. After that, the model with the determined parameters is used to calculate the classifier output for a test instance. Algorithms from this group are commonly based on structured SVMs [14, 37], thresholding strategies [38,39,40] or regression [27]. The structured output SVM is a generalization of the classical SVM algorithm [48]. The procedure is tailored to deal with classification problems whose output is more complex than a single class. The approach can be adopted to the task of multi-label classification in a straightforward way [14]. As in the classical SVM approach, it is also possible to utilize different kernel functions under the considered approach [60]. The thresholding procedures mainly employ a state-of-the-art multi-label classifier that returns a set of label supports. The outcome of the classifier is then converted to a binary prediction using a set of dynamically determined thresholds [39, 40]. The aforementioned approaches were originally harnessed to optimize the \(F_{\beta }\) measure, but they can also be employed to optimize the Tversky measure.
On the other hand, the decision-theoretic methods use the learning set to estimate parameters of the underlying probability model. In the inference phase, the values of the probability distributions are calculated according to the estimated model. A few methods based on this framework have been proposed [6, 8, 28, 35]. Chai [6] tackled the posed problem by expressing the expected loss as a recursive function and then solved the resulting optimization task using dynamic programming. Another approach to tackle the above-mentioned issue was provided by Jansche [28], who proved that the posed problem can be effectively solved via inner and outer optimization. To perform the optimization task, the space of all possible solutions is divided into non-overlapping equivalence classes. Then, the optimal solution is found for each equivalence class separately. Finally, the outer optimization is performed in order to determine the globally optimal solution. The author designed a method based on the Lebesgue integral and a two-tape automaton. Inspired by this methodology, Dembczyński et al. [8] proposed an alternative inner–outer optimization scheme which, in contrast to the formerly mentioned methods, does not make any assumptions about the underlying probability distribution. Unfortunately, the Dembczyński method, contrary to the remaining algorithms, cannot be directly employed to minimize the loss function based on the Tversky measure. An alternative set of equivalence classes was described by Nan et al. [35]. Additionally, the authors presented a heuristic procedure that allows them to reduce the computational burden.
Cheng et al. analysed the Classifier Chain approach [40], which extends the basic binary relevance approach [1], under a probabilistic formalism. Their work showed that the original method is a simplified strategy within a more general framework [7] of conditional joint mode estimation. During the inference phase, the simplified approach performs a greedy search that follows only a single path in the tree of all possible solutions. The authors proposed to employ an inference algorithm that performs an exhaustive search in order to determine the optimal solution. Although this routine enables us to find the optimal solution in terms of any loss function (including \(F_{\beta }\) and Tversky), its computational complexity grows exponentially with the number of labels (\(2^{L}\), where L is the number of labels). As a consequence, the computational burden of the approach is extremely high, and it can be directly employed only when the number of labels is relatively low. However, the above-mentioned drawback was addressed by applying heuristic methods of finding the optimal path in the tree of possible solutions [10, 32].
Proposed method
Preliminaries
In the introductory section, we outlined the basic description of the multi-label classification task. Now, let us give a more formal description of the investigated issue. An object \(x\) is interpreted as a vector \(x=\left[ {x}_{1},{x}_{2},\ldots ,{x}_{d} \right]\) that comes from the d-dimensional input space \({\mathcal {X}}\). The set of labels related to the object is indicated by a binary vector of length L: \(y=\left[ {y}_{1},{y}_{2},\ldots ,{y}_{L} \right]\), where \(y_{i}=1\) (\(y_{i}=0\)) denotes that the \(i\mathrm {th}\) label is relevant (irrelevant) to the object \(x\). As a consequence, the output space is defined as \({\mathcal {Y}}=\{0,1\}^{L}\), the set of all possible binary vectors of length L. Additionally, it is assumed that the object \(x\) and its set of labels \(y\) are realizations of corresponding random vectors \(\mathbf{X}=\left[ \mathbf{X}_{1},\mathbf{X}_{2},\ldots ,\mathbf{X}_{d}\right]\), \(\mathbf{Y}=\left[ \mathbf{Y}_{1},\mathbf{Y}_{2},\ldots ,\mathbf{Y}_{L}\right]\), and that the joint probability distribution \(P(\mathbf{X},\mathbf{Y})\) on \({\mathcal {X}}\times {\mathcal {Y}}\) is known.
Relevant labels are assigned to instances by an unknown mapping \(f:{\mathcal {X}} \mapsto {\mathcal {Y}}\). A classifier function \({\psi : {\mathcal {X}}\mapsto {\mathcal {Y}}}\) is an approximation of this unknown mapping. Finding the classifier function is usually stated as a problem of optimal decision making given a loss function. The loss function \({\mathcal {L}}: {\mathcal {Y}}\times {\mathcal {Y}}\mapsto {\mathcal {R}}_{+}\) assesses the similarity between vectors from the output space. Without loss of generality, it is assumed that only normalized loss functions \({\mathcal {L}}:{\mathcal {Y}}\times {\mathcal {Y}}\mapsto \left[ 0,1 \right]\) are considered. In general, optimal decision making aims to find a classifier \(\psi ^{*}\) that minimizes the expected loss over the joint probability distribution \(P(\mathbf{X},\mathbf{Y})\):
where \({\mathbb {E}}\) is the expected value operator. The above-mentioned classifier can be found in a pointwise way by the Bayes optimal decisions
where \(h^{*}(x)=\left[ {h}_{1}^{*}(x),{h}_{2}^{*}(x),\ldots ,{h}_{L}^{*}(x) \right] \in {\mathcal {Y}}\) denotes an optimal prediction for instance \(x\) and \(P(y|x)=P(\mathbf{Y}=y|\mathbf{X}=x)\) is the conditional probability distribution of the vector \(y\) given an object \(x\). It is clear that the optimal solution cannot be efficiently found via exhaustive search because the size of the output space is \(|{\mathcal {Y}}| = 2^{L}\).
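The intractability of the exhaustive search is easy to see in code. The sketch below (a hypothetical helper, not part of the paper's method; the loss function and the toy conditional distribution are illustrative) enumerates all \(2^{L}\) candidate predictions and scores each one against all \(2^{L}\) possible label vectors:

```python
import itertools

def bayes_optimal_prediction(loss, cond_prob, L):
    """Exhaustive Bayes-optimal decision: enumerate every candidate
    prediction h in {0,1}^L and keep the one minimizing the expected
    loss  sum_y P(y|x) * loss(h, y).  The cost is O(4^L) loss
    evaluations, so this is feasible only for a handful of labels."""
    space = list(itertools.product((0, 1), repeat=L))
    best_h, best_risk = None, float("inf")
    for h in space:
        risk = sum(cond_prob[y] * loss(h, y) for y in space)
        if risk < best_risk:
            best_h, best_risk = h, risk
    return best_h

# Toy usage: L = 2, distribution concentrated on (1, 0),
# normalized Hamming loss.
dist = {(0, 0): 0.0, (0, 1): 0.1, (1, 0): 0.9, (1, 1): 0.0}
hamming = lambda h, y: sum(a != b for a, b in zip(h, y)) / len(h)
print(bayes_optimal_prediction(hamming, dist, 2))
```

Already at \(L=20\) this would require scoring about a million candidates against a million outcomes, which motivates the equivalence-class decompositions discussed later in the paper.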
Although there is a wide range of loss functions that can be adopted under the multi-label classification methodology, in this paper we focus on learning algorithms that optimize the Tversky loss \(T_{\gamma ,\delta }\), which is defined as follows:
where \(h(x)\in {\mathcal {Y}}\) is the prediction of a classifier, \(y(x)\in {\mathcal {Y}}\) indicates the ground truth labels for the instance \(x\), \(\eta =1-\gamma -\delta\), and \(\gamma >0\) and \(\delta >0\) can be interpreted as weights related to the false positive rate and the false negative rate, respectively. A growth in one of those weights increases the penalty related to the type of error associated with that weight. Additionally, \(\left\Vert \cdot \right\Vert _1\) is the \(L_1\) norm, and \({h(x)}\cdot {y(x)}\) is the dot product of a given pair of vectors. For the special case when \(\left\Vert h(x)\right\Vert _1=\left\Vert y(x)\right\Vert _1=0\), it is assumed that \(T_{\gamma ,\delta }(h(x),y(x))=0\). The above-mentioned loss function is worth considering because it is more general than the loss functions usually applied to build a multi-label classifier. Namely, using this loss function, it is possible to express such loss functions as \(F_{\beta }(h(x),y(x))\), the Jaccard loss \(J(h(x),y(x))\) or the zero-one loss \(Z\,(h(x),y(x))\):
It is also possible to express such measures as the false discovery rate \(\mathrm {FDR}(h(x),y(x))\) and the false negative rate \(\mathrm {FNR}(h(x),y(x))\):
The differences between the aforementioned loss functions become clearer when we express them using the binary confusion matrix resulting from the comparison of two binary strings \(h(x)\) and \(y(x)\) (Table 1). The entries of the matrix are defined as follows:
Then, the measures can be rewritten:
where \(\left[\kern0.15em\left[ \cdot \right]\kern0.15em\right]\) is the Iverson bracket.
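To make the relations between these measures concrete, here is a small sketch of the Tversky loss written in the TP/FP/FN form of Table 1 (the function name and coding are illustrative, not taken from the paper):

```python
def tversky_loss(h, y, gamma, delta):
    """Tversky loss between a binary prediction h and ground truth y,
    written in confusion-matrix form; gamma weights the false
    positives and delta the false negatives."""
    tp = sum(1 for hi, yi in zip(h, y) if hi == 1 and yi == 1)
    fp = sum(1 for hi, yi in zip(h, y) if hi == 1 and yi == 0)
    fn = sum(1 for hi, yi in zip(h, y) if hi == 0 and yi == 1)
    if tp + fp + fn == 0:      # convention: both vectors all-zero
        return 0.0
    return 1.0 - tp / (tp + gamma * fp + delta * fn)

# gamma = delta = 1 recovers the Jaccard loss,
# gamma = delta = 0.5 recovers the F_1 loss.
print(tversky_loss([1, 1, 0], [1, 0, 1], 1.0, 1.0))    # Jaccard
print(tversky_loss([1, 1, 0], [1, 0, 1], 0.5, 0.5))    # F_1
```

The two printed cases illustrate the parameter settings used throughout the paper: \(\gamma =\delta =1\) corresponds to the Jaccard loss, and \(\gamma =\delta =0.5\) to the \(F_{1}\) loss.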
Bayes classifier for the \(F_{\beta }\) loss
In this section, we describe the original method proposed by Dembczyński [8]. The method is based upon the inner–outer framework, which was proven to be an efficient way to find the optimal classifier tailored for the \(F_{\beta }\) loss [28]:
In the case of the \(F_{\beta }\) loss, the Bayes classifier (2) is given by
Next, the posed problem (13) is solved via inner and outer maximizations. In order to perform the inner maximization, the space of all possible solutions \({\mathcal {Y}}\) is partitioned into \(L+1\) non-overlapping equivalence classes. Each equivalence class contains binary vectors \(h\in {\mathcal {Y}}\) with the number of ones equal to \(\mathrm {K}=\left\Vert h\right\Vert _1\). As a consequence, we denote the equivalence class as a set \({\mathcal {H}}_{\mathrm {K}} = \left\{ h:h\in {\mathcal {Y}},\left\Vert h\right\Vert _1=\mathrm {K}\right\}\), where \(\mathrm {K}\in \left\{ 0, 1,\dots , L \right\}\). Analogously, the partitioning into equivalence classes can also be applied to the vector \(y\) from Eq. (13). The number of ones in this vector is denoted by \(\mathrm {S}=\left\Vert y\right\Vert _1\). Then, the optimization problem can be solved for each equivalence class separately. Before we define the inner optimization problem, let us introduce the subset of \({\mathcal {Y}}\) containing the vectors whose number of ones equals \(\mathrm {S}\):
Then, for each equivalence class \({\mathcal {H}}_{\mathrm {K}}\) an optimal prediction is described by:
After that, by swapping the sums in (16) and skipping the term \((1+\beta ^{2})\), the authors obtain
where
is the probability that the \(i\mathrm {th}\) bit is set given \(x\) and
is the probability that the number of ones in the vector \(y\) is \(\mathrm {S}\) given \(y_{i}=1\) and \(x\). Since we must set \(h_{i}=1\) for exactly \(\mathrm {K}\) positions, the optimization problem can be solved optimally by setting \(h_{i}=1\) for the top \(\mathrm {K}\) values of \(\Delta (i,\mathrm {K})\).
To perform the outer optimization, it is also necessary to calculate the expected loss related to the previously determined solution \({\mathbb {E}}_{YX=x}\left[ F_{\beta }(h^{*}(x)_{\mathrm {K}},Y)\right]\):
Additionally, the expected value associated with the inner optimization task for \(\mathrm {K}=0\) is:
Finally, the outer minimization finds the best prediction among the predictions produced for each equivalence class \({\mathcal {H}}_{\mathrm {K}}\):
and its solution is found by checking all \(L+1\) possible expected values (\(\mathrm {K}\in \{0,1,2,\ldots ,L \}\)).
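The structure of the inner–outer scheme can be sketched as follows. Here `gain(i, K)` stands in for \(\Delta (i,\mathrm {K})\) and `risk_of(h)` for the expected loss of a candidate prediction; both are placeholders for the probabilistic quantities defined above, so this is a structural sketch rather than an implementation of the full estimator:

```python
def inner_outer_predict(L, gain, risk_of):
    """Inner-outer optimization skeleton: for every equivalence class
    H_K (predictions with exactly K ones), the inner step sets h_i = 1
    at the K largest values of gain(i, K); the outer step keeps the
    candidate with the smallest expected loss."""
    best_h = [0] * L                       # K = 0 equivalence class
    best_risk = risk_of(best_h)
    for K in range(1, L + 1):
        # Inner step: pick the top-K indices by gain.
        top = sorted(range(L), key=lambda i: gain(i, K), reverse=True)[:K]
        h = [1 if i in top else 0 for i in range(L)]
        # Outer step: keep the best candidate across all K.
        r = risk_of(h)
        if r < best_risk:
            best_h, best_risk = h, r
    return best_h
```

This replaces the exhaustive search over \(2^{L}\) vectors with \(L+1\) inner problems, each solvable by a top-K selection.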
Proposed method
In this subsection, an approximate method of building a decision-theoretic classifier for the Tversky measure is introduced. We begin by considering the Bayes classifier for the \(T_{\gamma ,\delta }\) loss:
After a transformation analogous to (15), we obtain:
Since a transformation analogous to (17) cannot eliminate the term \(\eta {h}\cdot {y}\) from the denominator of (24), it is impossible to directly perform the optimization procedure described in the previous subsection. In order to apply the aforementioned procedure, one could simply remove the term, but this simplification can lead to a coarse approximation. In this paper, we propose a more accurate approximation which does not induce a significant increase in computational complexity.
We start with a simple observation: for given \(\mathrm {K}\), \(\mathrm {S}\) and \(x\), each term of the expected loss function (15) is a discrete function of \({h}\cdot {y}\). This remark leads to the conclusion that the Tversky expected loss can be efficiently approximated using discrete linear functions:
For the sake of simplicity, let us introduce the following notation:
Now, note that for fixed L, \(\mathrm {K}\) and \(\mathrm {S}\), the term \({h}\cdot {y}\) is bounded as follows:
In the interval defined by \(b_{l}(\mathrm {K},\mathrm {S})\) and \(b_{u}(\mathrm {K},\mathrm {S})\), the original loss function can be approximated using a linear function (25) whose parameters \(c(\mathrm {K},\mathrm {S})\) and \(d(\mathrm {K},\mathrm {S})\) are calculated from the following system of linear equations
which gives (details are shown in "Appendix 2.1")
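Under the assumption that, for fixed \(\mathrm {K}\) and \(\mathrm {S}\), the approximated term has the form \(g(t)=t/(\gamma \mathrm {K}+\delta \mathrm {S}+\eta t)\) with \(t={h}\cdot {y}\) (our reading of the denominator of (24); the exact closed forms are derived in "Appendix 2.1"), the interpolation amounts to fitting a chord through the endpoints of the feasible interval:

```python
def linear_approx_coeffs(L, K, S, gamma, delta):
    """Fit the chord c*t + d to g(t) = t / (gamma*K + delta*S + eta*t)
    over the feasible range of t = <h, y>, i.e. solve the system
        c*b_l + d = g(b_l),   c*b_u + d = g(b_u)."""
    eta = 1.0 - gamma - delta
    g = lambda t: t / (gamma * K + delta * S + eta * t)
    b_l = max(0, K + S - L)     # fewest possible shared ones
    b_u = min(K, S)             # most possible shared ones
    if b_u == b_l:              # degenerate interval (e.g. K = 0 or S = 0)
        return 0.0, (g(b_u) if b_u > 0 else 0.0)
    c = (g(b_u) - g(b_l)) / (b_u - b_l)
    d = g(b_l) - c * b_l
    return c, d
```

For the setting of Example 1 below (\(L=20\), \(\mathrm {K}=\mathrm {S}=15\)), these bounds give \(b_{l}=\max (0,15+15-20)=10\) and \(b_{u}=\min (15,15)=15\), matching the interval \(\left[ 10;15\right]\) used there.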
Example 1
Let us show an example of approximating the \(T_{\gamma =10,\delta =1}\) loss using linear functions. Throughout this example, we assume that \(L=20\), \(\mathrm {K}=15\) and \(\mathrm {S}=15\). For this case, the bounds of \({h}\cdot {y}\) are \(b_{u}(\mathrm {15},\mathrm {15})=15\) and \(b_{l}(\mathrm {15},\mathrm {15})=10\). Considering all the above, we construct a linear approximation of the \(T_{\gamma =10,\delta =1}\) loss in the interval \(\left[ 10;15\right]\). This approximation is presented in Fig. 1. As we can see, under such circumstances, the difference between our approximation and the approximation produced by the Dembczyński approach is substantial (red and green lines).
Finally, we define an approximate inner classifier:
Since the term \(d(\mathrm {K},\mathrm {S})\) does not depend on \(h\), and \(\mathrm {K}\) is constant within each inner optimization problem, we obtain:
that can be efficiently found via the inner optimization approach proposed by Dembczyński:
The expected loss related to the determined classifier is:
where \(\tilde{\Delta }(i,\mathrm {K})\) is defined in a way analogous to the transformation applied in (17). As a consequence, the outer optimization algorithm does not differ from the procedure described in the previous subsection. That is, the probability of obtaining the zero vector is:
and outer optimization is performed according to:
Plug-in rule classifier for the \(T_{\gamma ,\delta }\) loss
The previous section provides a description of the Bayes classifier tailored for the \(T_{\gamma ,\delta }\) loss function. The description assumes that all considered probability distributions are known, but in real-life classification tasks this assumption does not hold. In such situations, an approach referred to as the plug-in rule classifier [31] can be employed. The plug-in rule approach consists in estimating the unknown probabilities on the basis of a training set
and then plugged into the formula of the Bayes classifier.
The above-defined Bayes classifier requires \(L^{2}+2L\) probabilities to be estimated, namely:

\(L^{2}\) for \(P(\left\Vert y\right\Vert _1=\mathrm {S}\mid y_{i}=1,x)\);

\(L\) for \(P(y_{i}=1\mid x)\);

and \(L\) for \(P(\left\Vert y\right\Vert _1=\mathrm {S}\mid x)\).
The calculation of these values can be efficiently done by employing a set of \(2L + 1\) multinomial regression models or classifiers (only classifiers that return an estimation of the posterior probability are considered).
Probabilities \(P(\left\Vert y\right\Vert _1=\mathrm {S}\mid y_{i}=1,x)\) are estimated using L models. The first step in building these models is to create sets \({\mathcal {D}}_{i}\) that contain only the objects for which \(y^{(j)}_{i}=1\). After that, a group of estimators is learned on the sets transformed in the following way
Probabilities \(P(y_{i}=1\mid x)\) are modelled by applying the binary relevance transformation to the training set:
The transformation produces L one-vs-rest binary sets, one for each label.
Finally, we obtain an estimation of \(P(\left\Vert y\right\Vert _1=\mathrm {S}\mid x)\) by performing the transformation
followed by the training of a related model.
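The three dataset transformations above can be sketched as plain list manipulations (the helper below is illustrative; the actual fitting of the \(2L+1\) models with the base learner is omitted):

```python
def plugin_training_sets(X, Y):
    """Build the transformed training sets used by the plug-in
    classifier: binary-relevance sets for P(y_i = 1 | x), a
    label-cardinality set for P(||y||_1 = S | x), and per-label
    filtered cardinality sets D_i (only examples with y_i = 1)
    for P(||y||_1 = S | y_i = 1, x)."""
    L = len(Y[0])
    # One binary one-vs-rest target per label.
    br = [[(x, y[i]) for x, y in zip(X, Y)] for i in range(L)]
    # Multinomial target: number of relevant labels.
    card = [(x, sum(y)) for x, y in zip(X, Y)]
    # D_i restricted to objects with y_i = 1, cardinality target.
    cond = [[(x, sum(y)) for x, y in zip(X, Y) if y[i] == 1]
            for i in range(L)]
    return br, card, cond
```

Each returned collection is a list of (input, target) pairs on which an off-the-shelf probabilistic classifier can then be trained.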
System architecture
Descriptions of the learning and inference phases are provided in Figs. 2 and 3.
Experimental setup
During the experimental study, the proposed method was compared to five state-of-the-art approaches. First of all, the most natural choice for a reference method is the one proposed by Dembczyński [8], upon which the developed approximation scheme is based. Additionally, we harnessed another decision-theoretic approach introduced by Jansche [28]. We also employed an algorithm that follows the empirical utility maximization framework, namely the Pillai thresholding procedure [39]. We considered four versions of the aforesaid algorithms, differing in the values of the parameters \(\gamma\) and \(\delta\) of the Tversky loss. The following parameter values were examined:

\(\left\{ \gamma =1; \,\delta =1 \right\}\) – corresponding to the well-known Jaccard measure [26] (known also as multi-label accuracy [49]), which is a common measure of multi-label classification quality.

\(\left\{ \gamma =10;\, \delta =10 \right\}\) – corresponding to the optimization of the zero-one loss. As stated previously, the Tversky measure ideally approximates the zero-one loss when \(\gamma \rightarrow \infty\) and \(\delta \rightarrow \infty\). However, preliminary experiments showed that increasing \(\gamma\) and \(\delta\) above 10 does not induce a significant increase in classification quality measured by the zero-one loss.

\(\left\{ \gamma =1; \,\delta =10 \right\}\) and \(\left\{ \gamma =10;\, \delta =1 \right\}\) – asymmetric variations of the above-mentioned quality criteria.
We did not consider the instantiation of the algorithms with parameters set to \(\left\{ \gamma =0.5;\, \delta =0.5 \right\}\) because, as shown in Sect. 3.3, under such a setup the proposed algorithm is equivalent to the Dembczyński approach, whose performance has already been examined extensively [8]. Nevertheless, the Dembczyński algorithm can only be applied to minimize the \(F_{\beta }\) loss, so its parameters must be set in such a way as to obtain the best approximation of the Tversky measure with the given parameters. Strictly speaking, the \(\beta\) parameter was set to \(\beta = \sqrt{\frac{\delta }{\gamma }}\). All of the algorithms were implemented using the Naïve Bayes classifier [23] as the base single-label model. We utilized the Naïve Bayes implementation from the WEKA framework [22]. The classifier parameters were set to their defaults.
In the next section, the aforementioned approaches are numbered as follows:
1. the proposed method,
2. the Dembczyński algorithm [8],
3. the Jansche approach [28],
4. the Pillai thresholding procedure [39],
5. the structured SVM approach [39],
6. the binary relevance method [49].
The performances of the above-mentioned classifiers were assessed using such quality criteria as the false discovery rate (FDR, \(1-\mathrm{precision}\)), the false negative rate (FNR, \(1-\text {recall}\)) and the Tversky measure with parameters relevant to the investigated algorithms [33]. We employed example-based (denoted by Ex), macro-averaged (denoted by Ma) and micro-averaged (denoted by Mi) measures. The algorithms were also compared with respect to the Hamming loss [10] and the zero-one subset loss.
The experimental evaluation was conducted on 24 benchmark multi-label datasets related to seven different domains: text categorization – 7 datasets, image annotation – 6, bioinformatics – 4, audio samples recognition – 3, video annotation – 1, astronomy – 2 and environmental science – 1. The datasets are summarized in Table 2. The first column (named ‘No.’) shows the number of the dataset; this number is further used to denote the set in the experimental section. The second column holds the names of the datasets and references to the related papers. The next three columns contain the number of instances in the dataset, the input space dimensionality and the number of labels, respectively. Another three columns provide more detailed information about dataset properties, that is, set cardinality, density and the number of unique label combinations [50]. More formally, the aforementioned properties are defined as follows:
During the dataset-preprocessing stage, we applied a few transformations to the datasets. First and foremost, all nominal attributes, except binary ones, were converted into sets of binary variables. For example, a nominal feature with three possible values is transformed into a set of three binary variables, each indicating the occurrence of the associated nominal value. This approach is one of the simplest methods of replacing nominal variables with binary ones [46].
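The binarization can be sketched as follows (an illustrative helper, not the actual WEKA filter used in the experiments):

```python
def one_hot(values):
    """Replace a nominal attribute by one binary indicator per
    observed category: each row gets a 1 in the position of its
    category and 0 elsewhere."""
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]
```

For a three-valued nominal feature, each value becomes a vector of three indicators, exactly as described above.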
In our study, we employed datasets that follow the multi-instance multi-label (MIML) framework [55, 61]. In those datasets, each object consists of a bag of instances tagged with a set of labels. In order to handle such data, we followed the suggestion made in [62] and transformed the sets into single-instance multi-label data. Namely, we built a between-bag distance matrix using the Hausdorff distance [42], and then constructed a set of new points in Euclidean space using multidimensional scaling [3].
We also harnessed multi-target regression sets (Solar_flare1, Solar_flare2 and Waterquality) which were converted into multi-label data using a simple thresholding procedure. To be more precise, when the value of an output variable for a given object is greater than zero, the corresponding label is set to be relevant to this object. In the case of the Solar_flare sets, this transformation produces multi-label sets characterized by low label cardinality and density, and such sets should be considered a kind of mischievous case.
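The conversion can be sketched as follows (an illustrative helper; the paper's setup uses a zero threshold):

```python
def regression_to_labels(targets, threshold=0.0):
    """Convert multi-target regression outputs into binary label
    vectors: a label is marked relevant (1) when its output variable
    exceeds the threshold, and irrelevant (0) otherwise."""
    return [[1 if t > threshold else 0 for t in row] for row in targets]
```

Rows whose output variables are mostly zero therefore yield sparse label vectors, which is what produces the low cardinality and density observed for the Solar_flare sets.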
The dimensionality of the input space for the Rcv1subsetX datasets had to be reduced since it caused the experimental software to run out of memory. To remove unnecessary attributes, we followed a filtering procedure suggested previously in [51]. Namely, we applied a simple frequency filter with the minimal number of word occurrences set to 50. As a consequence, the number of attributes was reduced from about \(47\mathrm {k}\) to about \(1.8\mathrm {k}\).
Finally, training and testing sets were extracted using tenfold cross-validation. However, due to the large number of instances (nearly \(44\mathrm {k}\)), the number of cross-validation folds was reduced to three for the Mediamill dataset. Despite the reduced number of folds, the number of instances is large enough to provide a stable estimation of the classification quality criteria.
The statistical significance of the obtained results was assessed using the Friedman test [18] and the post hoc Nemenyi test [11]. The corrected critical difference for the Nemenyi post hoc test is \(\mathrm {CD}_{\alpha =0.05}=1.59\). Additionally, we performed the Wilcoxon signed-rank test [11, 54]. For all tests, the significance level was set to \(\alpha =0.05\). To control the family-wise error rate of the Wilcoxon testing procedure, the Holm approach to p value correction was employed [25].
Results and discussion
The experimental section is partitioned into two main subsections, which contain the results related to the symmetric and asymmetric forms of the Tversky loss. In the first subsection, we consider the symmetric case of the Tversky loss function, i.e., the loss function instantiated with parameters set to \(\gamma =\delta =1\) (Jaccard loss) and \(\gamma =\delta =10\) (zero-one loss). The summarized results related to this scenario are presented in Tables 3, 4. The other subsection, on the other hand, is aimed at evaluating the introduced approach under two asymmetric variants of the Tversky loss. One is tuned to put a greater penalty on the false positives \(\left\{ \gamma =10, \delta =1\right\}\), whereas the other increases the cost of false negatives \(\left\{ \gamma =1, \delta =10\right\}\). The outcome of the asymmetric experiment is shown in Tables 5, 6. Both subsections share a common result table format. Namely, the table header holds the ordinal numbers of the assessed algorithms, which are compatible with the numbering introduced in the description of the experimental setup. The second part of the table consists of 11 subsections, each related to the quality measure presented in the subsection header. Each subsection contains three rows. The first one, denoted by Rnk, shows the average ranks achieved by the investigated multi-label classifiers over the benchmark sets. The second row presents the p value related to the Friedman test (Frd) applied to the criterion-specific results. The last row contains the corrected (Holm’s correction) p value (Wp) produced by the Wilcoxon test, which was applied to compare the introduced learning procedure against the reference methods.
For easier interpretation, we provide a visualization of the data from the above-mentioned tables using radar plots. The plots are shown in Figs. 4, 5, 6 and 7. Each plot presents the average ranks achieved by the evaluated algorithms under different quality criteria. Additionally, the plots also contain a graphical representation of the performed Nemenyi post hoc procedure. Namely, the critical differences are denoted by black bars parallel to the criterion-specific axes of the radar plot.
In order to improve the readability of the paper, tables containing set-specific results for the Tversky and zero-one measures are presented in "Appendix". Results related to the symmetric scenarios are shown in Tables 7, 8, 9, 10, 11 and 12, whereas the results for the asymmetric scenarios are presented in Tables 13, 14, 15, 16, 17 and 18. A detailed description of those tables is provided in the "Appendix".
The symmetric loss
Let us begin with the analysis of results related to the first symmetric scenario which is the optimization of the Jaccard loss (\(\gamma =\delta =1\)). The outcome of the experiments is presented in Tables 3, 7, 8, 9 and Fig. 4.
From the point of view of this paper, we are interested in the lack of a significant difference, in terms of Tversky-based loss functions, between the proposed method and the Dembczyński approach. The lack of a statistically significant difference is confirmed by both statistical tests; however, the average ranks suggest that the proposed method performs slightly better. This observation leads to the conclusion that the divergence between the \(F_{1}\) measure and the Jaccard loss is not substantial enough to produce a significant difference between the investigated approaches. Additionally, the proposed method introduces an approximation error, whose presence has also influenced the results.
On the other hand, the proposed method proves to achieve the best classification quality under the zero-one loss. Only the structured SVM approach is statistically comparable to the proposed one; however, the average rank of the proposed method is the lowest. This is an important result because the aforementioned criterion is the most restrictive loss function [33]. What is more, it is also the coarsest quality indicator, unable to distinguish a nearly correct outcome from a totally misclassified one. Thus, the significant superiority suggests that our approach achieved the greatest number of 'perfect match' solutions.
Additionally, the experimental evaluation reveals that the proposed approximation procedure tends to be more conservative than the reference methods. In other words, the method reduces the false positive rate at the cost of increasing the false negative rate, and this imbalance is greater than for the other algorithms. This fact is manifested by the low rank earned in terms of FDR and the relatively high rank under the FNR criterion. This slight shift towards the majority class (the label is irrelevant to the given object) must be interpreted as a beneficial property when the label density is low, which is a common situation under the multilabel framework. Moreover, the noticed shift is not a symptom of a harmful bias towards the majority class, because the method performs well with respect to the Hamming loss, the Jaccard loss and the zero-one loss.
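The conservative/liberal trade-off described above is easy to see on toy confusion counts. A sketch (the label vectors below are invented for illustration):

```python
def confusion_counts(y_true, y_pred):
    """TP, FP, FN for 0/1 indicator vectors of the same length."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, fn

def fdr(tp, fp):   # false discovery rate: FP / (TP + FP)
    return fp / (tp + fp) if tp + fp else 0.0

def fnr(tp, fn):   # false negative rate: FN / (TP + FN)
    return fn / (tp + fn) if tp + fn else 0.0

# A conservative classifier predicts few relevant labels: its FDR drops
# while its FNR grows; a liberal one shows the opposite pattern.
truth        = [1, 1, 1, 0, 0, 0]
conservative = [1, 0, 0, 0, 0, 0]   # TP=1, FP=0, FN=2
liberal      = [1, 1, 1, 1, 1, 0]   # TP=3, FP=2, FN=0

tp, fp, fn = confusion_counts(truth, conservative)
print(fdr(tp, fp), round(fnr(tp, fn), 3))   # 0.0  0.667
tp, fp, fn = confusion_counts(truth, liberal)
print(fdr(tp, fp), fnr(tp, fn))             # 0.4  0.0
```

When most labels are irrelevant (low label density), the conservative profile tends to hurt fewer entries of the label vector, which is consistent with the good Hamming-loss ranks reported above.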
The results related to the second symmetric scenario (\(\gamma =10, \delta =10\)) are presented in Tables 4, 10, 11, 12 and Fig. 5. The change of the parameter values does not induce any major change in the general behaviour of the proposed method. Namely, the algorithm is still the most conservative approach among the investigated procedures.
What is more, it maintains its leading position under the zero-one criterion. An explanation of this phenomenon may lie in the interpretation of \(T_{\gamma ,\delta }\) when the parameters \(\gamma\) and \(\delta\) are relatively high. Namely, under such conditions, the loss function becomes similar to the zero-one loss. The previously analysed example shows that even the algorithm focused on optimizing the Jaccard loss outperforms the remaining methods in terms of the zero-one loss.
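The limiting behaviour claimed above can be checked numerically. A sketch, again under the assumed parametrization \(T_{\gamma ,\delta } = \mathrm{TP}/(\mathrm{TP} + \gamma\, \mathrm{FN} + \delta\, \mathrm{FP})\):

```python
def tversky_loss(tp, fn, fp, gamma, delta):
    """Tversky loss from confusion counts of a single object."""
    if tp + fn + fp == 0:
        return 0.0                  # exact match of empty label sets
    return 1.0 - tp / (tp + gamma * fn + delta * fp)

# For any imperfect prediction (FN + FP > 0) the loss tends to 1 as the
# parameters grow, while a perfect match keeps a loss of 0 -- exactly the
# all-or-nothing behaviour of the zero-one loss.
for g in (1, 10, 100, 1000):
    print(g, round(tversky_loss(3, 1, 1, g, g), 4))   # 0.4, 0.8696, 0.9852, 0.9985
print(tversky_loss(4, 0, 0, 1000, 1000))              # perfect prediction: 0.0
```

So with \(\gamma =\delta =10\), near-misses are already penalized almost as harshly as total misclassifications, which explains why optimizing this instance also pays off under the zero-one criterion.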
The quality assessment in terms of the example-based Tversky loss shows that the proposed method significantly outperforms only the BR system and the Jansche classifier. This result is quite natural if we consider the assumptions about the relationships between labels that lie at the root of the investigated algorithms. Strictly speaking, the outperformed methods, in contrast to the remaining procedures, expect the labels to be conditionally independent, which results in the creation of simplified models. As a consequence, the overall predictive performance of those approaches is generally lower. On the other hand, there are no significant differences between the proposed method and the other methods under the label-based Tversky loss functions.
Equally important, the classification quality measured by the example-based Tversky loss has increased. This observation is a positive sign, since the proposed method is tailored to optimize the aforesaid quality measure. Furthermore, this fact confirms the earlier conclusion that the predictive ability of the proposed approximation scheme rises as the disagreement between the Tversky loss and the \(F_{1}\) measure increases. A graphical explanation of this phenomenon is provided by Fig. 1. That is, as the parameters of the Tversky loss approach \(\left\{ \gamma =0.5, \delta =0.5\right\}\), the curve related to \(T_{\gamma ,\delta }\) becomes flatter. As a consequence, the linear approximation becomes similar to the curve related to the \(F_1\) loss, and the predicted outcome is close to the class assignment provided by the Dembczyński algorithm.
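The special role of \(\left\{ \gamma =0.5, \delta =0.5\right\}\) invoked above can be verified directly: under the assumed parametrization of the Tversky index, these parameters recover the \(F_1\) (Dice) measure exactly, while \(\gamma =\delta =1\) (Jaccard) agrees with \(F_1\) only in ordering, not in magnitude.

```python
def tversky(tp, fn, fp, gamma, delta):
    """Tversky similarity from confusion counts."""
    return tp / (tp + gamma * fn + delta * fp)

def f1(tp, fn, fp):
    """F1 (Dice) measure from confusion counts."""
    return 2 * tp / (2 * tp + fn + fp)

# For any confusion counts the two coincide when gamma = delta = 0.5:
for tp, fn, fp in [(3, 1, 2), (5, 0, 4), (1, 7, 7)]:
    assert abs(tversky(tp, fn, fp, 0.5, 0.5) - f1(tp, fn, fp)) < 1e-12

# The Jaccard index (gamma = delta = 1) never exceeds F1 for the same
# prediction, so the two losses diverge in magnitude:
print(tversky(3, 1, 2, 1, 1), f1(3, 1, 2))   # 0.5 vs 0.666...
```

This is why the Dembczyński \(F_\beta\) maximizer and the proposed approximation behave near-identically close to this parameter setting, and why differences only emerge as \(\gamma\), \(\delta\) move away from 0.5.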
On the other hand, there is still no significant difference between the proposed method and its counterparts in terms of the macro-averaged and micro-averaged Tversky measures. Nonetheless, the average ranks show that the classification quality obtained by the Dembczyński algorithm may be slightly better.
The asymmetric loss
Under the first asymmetric scenario, we set the parameters of the Tversky loss to \(\gamma =10\) and \(\delta =1\), which should result in classifiers that prefer more conservative solutions than those produced by the procedures employing the symmetric loss measures. Indeed, the obtained results, summarized in Tables 5, 13, 14, 15 and Fig. 6, confirm that the assessed procedures exhibit the expected behaviour. To be more precise, the evaluation in terms of the FNR criterion shows a significant decrease in comparison with the binary relevance system, which is a reference method unaffected by the values of the parameters \(\gamma\) and \(\delta\). The observed reduction concerns all investigated classifiers, even those that were significantly better than BR under the symmetric scenario. Contrary to the results obtained using the symmetric quality measures, the significant difference between the proposed method and the Dembczyński approach no longer holds, although the corresponding p value is close to the assumed significance level. The disappearance of the significant differences can also be observed under the FDR criterion. As a consequence, the introduced approach is no longer the most conservative classifier. This phenomenon suggests that the considered asymmetry ratio has pushed the algorithms to the point where the precision cannot be increased further without a significant deterioration of the recall.
Notwithstanding, our approximation procedure still achieves the lowest average rank in terms of the example-based Tversky loss and the zero-one criterion. Generally speaking, the relative differences, as described by the average ranks, follow the pattern described in the analysis of the first symmetric scenario.
The analysis of the second asymmetric scenario (\(\gamma =1,\delta =10\)) provides us with interesting information (Tables 6, 16, 17, 18 and Fig. 7). Namely, the attempt to reduce the false negative rate at the cost of increasing the number of false positives leads to a significant decrease in classification quality under every considered loss measure. The decline is confirmed by the Nemenyi post hoc procedure, which shows that all classifiers perform similarly to the BR system, although under the symmetric experiment the algorithms were able to outperform it. What is more, the significant differences between the proposed method and the reference approaches no longer hold. The experiment reveals that increasing \(\delta\) causes an undesirable rise in the number of labels considered relevant. This strategy proves inappropriate for multilabel datasets characterized by low label density.
Conclusions
In this paper, we addressed the issue of building a multilabel classifier that minimizes the Tversky loss. Our approach is built on a discrete linear approximation of the analysed loss function. The optimization was performed using the inner–outer optimization technique proposed by Dembczyński [9] for finding the optimal solution of the considered classification problem in terms of the \(F_{\beta }\) loss. Although our algorithm solves the optimization task in a suboptimal manner, it does not place any assumption on the label distribution. Additionally, the Tversky loss allows us to express other loss functions, such as the \(F_{\beta }\), the Jaccard loss or the zero-one loss. As a consequence, the introduced procedure is more flexible than the other approaches proposed in the literature. What is more, the computational cost of employing the modified procedure instead of the original one is relatively low.
During the experimental study, we obtained promising results which show that the posed problem was solved with moderate success; however, there is still room for improvement. Namely, the results produced by the proposed approach are comparable (under the Tversky loss) to the outcome of the Dembczyński approach for the majority of experimental scenarios. What is more, under the example-based version of the Tversky measure, the average ranks suggest that the proposed method outperforms the reference method. It should also be noticed that the classification quality achieved by the proposed procedure was substantially higher when the parameters of the loss function were set to emulate the zero-one loss. More importantly, the introduced approximation scheme outperforms the other assessed methods with respect to the zero-one loss.
The presented results lead to cautious optimism, so we are willing to continue the development of the presented approach. Our further studies will focus on different approximation techniques that are expected to reduce the approximation error. Another branch of our research is aimed at dealing with the imbalanced distribution problem, which is a common issue under the multilabel classification framework. This is an important problem because our approach is based on a series of binary-relevance-based estimations of the posterior probability distributions, which are strongly affected by the uneven label distribution.
References
 1.
Alvares Cherman E, Metz J, Monard MC (2010) A simple approach to incorporate label dependency in multi-label classification. In: Advances in soft computing. Springer, Berlin, pp 33–43. doi:10.1007/978-3-642-16773-7_3
 2.
Barnard K, Duygulu P, Forsyth D, de Freitas N, Blei DM, Jordan MI (2003) Matching words and pictures. J Mach Learn Res 3:1107–1135
 3.
Borg I, Groenen P (1997) Modern multidimensional scaling. Springer, New York. doi:10.1007/978-1-4757-2711-1
 4.
Boutell MR, Luo J, Shen X, Brown CM (2004) Learning multilabel scene classification. Pattern Recognit 37(9):1757–1771. doi:10.1016/j.patcog.2004.03.009
 5.
Briggs F, Lakshminarayanan B, Neal L, Fern XZ, Raich R, Hadley SJK, Hadley AS, Betts MG (2012) Acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach. J Acoust Soc Am 131(6):4640–4650. doi:10.1121/1.4707424
 6.
Chai KMA (2005) Expectation of F-measures. In: Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval—SIGIR ’05. ACM Press. doi:10.1145/1076034.1076144
 7.
Cheng W, Hüllermeier E, Dembczynski KJ (2010) Bayes optimal multilabel classification via probabilistic classifier chains. In: Proceedings of the 27th international conference on machine learning (ICML-10), pp 279–286
 8.
Dembczynski K, Jachnik A, Kotlowski W, Waegeman W, Hüllermeier E (2013) Optimizing the F-measure in multi-label classification: plug-in rule approach versus structured loss minimization. In: Dasgupta S, McAllester D (eds) Proceedings of the 30th international conference on machine learning (ICML-13). JMLR workshop and conference proceedings, vol 28, pp 1130–1138
 9.
Dembczynski KJ, Waegeman W, Cheng W, Hüllermeier E (2011) An exact algorithm for F-measure maximization. In: Shawe-Taylor J, Zemel R, Bartlett P, Pereira F, Weinberger K (eds) Advances in neural information processing systems 24. Curran Associates Inc, New York, pp 1404–1412
 10.
Dembczyński K, Waegeman W, Cheng W, Hüllermeier E (2012) On label dependence and loss minimization in multi-label classification. Mach Learn 88(1–2):5–45. doi:10.1007/s10994-012-5285-8
 11.
Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
 12.
Dimou A, Tsoumakas G, Mezaris V, Kompatsiaris I, Vlahavas I (2009) An empirical study of multilabel learning methods for video annotation. In: 2009 seventh international workshop on content-based multimedia indexing. IEEE. doi:10.1109/cbmi.2009.37
 13.
Diplaris S, Tsoumakas G, Mitkas PA, Vlahavas I (2005) Protein classification with multiple algorithms. In: Bozanis, Panayiotis, Houstis, Elias N (eds) Advances in informatics. Springer, Berlin, pp 448–456. doi:10.1007/11573036_42
 14.
Díez J, Luaces O, del Coz JJ, Bahamonde A (2014) Optimizing different loss functions in multilabel classifications. Prog Artif Intell 3(2):107–118. doi:10.1007/s13748-014-0060-7
 15.
Džeroski S, Demšar D, Grbović J (2000) Predicting chemical parameters of river water quality from bioindicator data. Appl Intell 13(1):7–17. doi:10.1023/a:1008323212047
 16.
Elisseeff A, Weston J (2001) A kernel method for multi-labelled classification. In: Dietterich TG, Becker S, Ghahramani Z (eds) Advances in neural information processing systems 14. MIT Press, pp 681–687
 17.
Fiore A, Heer J (2016) http://bailando.sims.berkeley.edu/enron_email.html. Accessed 21 Mar 2016
 18.
Friedman M (1940) A comparison of alternative tests of significance for the problem of \(m\) rankings. Ann Math Stat 11(1):86–92. doi:10.1214/aoms/1177731944
 19.
Gantz J, Reinsel D (2012) The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east. Technical report, IDC
 20.
Gibaja E, Ventura S (2014) Multilabel learning: a review of the state of the art and ongoing research. Wiley Interdiscip Rev Data Min Knowl Discov 4(6):411–444. doi:10.1002/widm.1139
 21.
Goncalves EC, Plastino A, Freitas AA (2013) A genetic algorithm for optimizing the label ordering in multilabel classifier chains. In: 2013 IEEE 25th international conference on tools with artificial intelligence. IEEE. doi:10.1109/ictai.2013.76
 22.
Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The weka data mining software. ACM SIGKDD Explor Newsl 11(1):10. doi:10.1145/1656274.1656278
 23.
Hand DJ, Yu K (2001) Idiot's Bayes: not so stupid after all? Int Stat Rev 69(3):385. doi:10.2307/1403452
 24.
Heider D, Senge R, Cheng W, Hüllermeier E (2013) Multilabel classification for exploiting cross-resistance information in HIV-1 drug resistance prediction. Bioinformatics 29(16):1946–1952. doi:10.1093/bioinformatics/btt331
 25.
Holm S (1979) A simple sequentially rejective multiple test procedure. Scand J Stat 6(2):65–70. doi:10.2307/4615733
 26.
Jaccard P (1912) The distribution of the flora in the alpine zone. New Phytol 11(2):37–50. doi:10.1111/j.1469-8137.1912.tb05611.x
 27.
Jansche M (2005) Maximum expected F-measure training of logistic regression models. In: Proceedings of the conference on human language technology and empirical methods in natural language processing—HLT ’05. Association for Computational Linguistics. doi:10.3115/1220575.1220662
 28.
Jansche M (2007) A Maximum Expected Utility Framework for Binary Sequence Labeling. In: Proceedings of the 45th annual meeting of the association of computational linguistics. Association for Computational Linguistics, Prague, Czech Republic, pp 736–743
 29.
Jiang JY, Tsai SC, Lee SJ (2012) FSKNN: multi-label text categorization based on fuzzy similarity and k nearest neighbors. Expert Syst Appl 39(3):2813–2821. doi:10.1016/j.eswa.2011.08.141
 30.
Katakis I, Tsoumakas G, Vlahavas I (2008) Multilabel text classification for automated tag suggestion. In: Proceedings of the ECML/PKDD 2008 discovery challenge
 31.
Kohler M, Krzyzak A (2006) Rate of convergence of local averaging plugin classification rules under margin condition. In: 2006 IEEE international symposium on information theory. IEEE. doi:10.1109/isit.2006.261936
 32.
Kumar A, Vembu S, Menon AK, Elkan C (2012) Learning and inference in probabilistic classifier chains with beam search. In: Flach PA, De Bie T, Cristianini N (eds) Machine learning and knowledge discovery in databases. Springer, Berlin, pp 665–680. doi:10.1007/978-3-642-33460-3_48
 33.
Luaces O, Díez J, Barranquero J, del Coz JJ, Bahamonde A (2012) Binary relevance efficacy for multilabel classification. Prog Artif Intell 1(4):303–313. doi:10.1007/s13748-012-0030-x
 34.
Lichman M (2013) UCI machine learning databases. http://archive.ics.uci.edu/ml. Accessed 04 Sept 2017
 35.
Nan Y, Chai KM, Lee WS, Chieu HL (2012) Optimizing F-measure: a tale of two approaches. In: Langford J, Pineau J (eds) Proceedings of the 29th international conference on machine learning (ICML-12). ACM, New York, NY, USA, pp 289–296
 36.
Pestian JP, Brew C, Matykiewicz P, Hovermale DJ, Johnson N, Cohen KB, Duch W (2007) A shared task involving multilabel classification of clinical free text. In: Proceedings of the workshop on BioNLP 2007 biological, translational, and clinical language processing—BioNLP ’07. Association for Computational Linguistics. doi:10.3115/1572392.1572411
 37.
Petterson J, Caetano TS (2011) Submodular multilabel learning. In: Shawe-Taylor J, Zemel R, Bartlett P, Pereira F, Weinberger K (eds) Advances in neural information processing systems 24, pp 1512–1520
 38.
Pillai I, Fumera G, Roli F (2013) Multilabel classification with a reject option. Pattern Recognit 46(8):2256–2266. doi:10.1016/j.patcog.2013.01.035
 39.
Pillai I, Fumera G, Roli F (2013) Threshold optimisation for multilabel classifiers. Pattern Recognit 46(7):2055–2065. doi:10.1016/j.patcog.2013.01.012
 40.
Read J, Pfahringer B, Holmes G, Frank E (2011) Classifier chains for multi-label classification. Mach Learn 85(3):333–359. doi:10.1007/s10994-011-5256-5
 41.
Rijsbergen CJV (1979) Information retrieval, 2nd edn. Butterworth-Heinemann, Newton
 42.
Rockafellar RT, Wets RJB (1998) Variational analysis. Springer, Berlin. doi:10.1007/978-3-642-02431-3
 43.
Sanden C, Zhang JZ (2011) Enhancing multilabel music genre classification through ensemble techniques. In: Proceedings of the 34th international ACM SIGIR conference on research and development in information—SIGIR ’11. ACM Press. doi:10.1145/2009916.2010011
 44.
Schietgat L, Vens C, Struyf J, Blockeel H, Kocev D, Džeroski S (2010) Predicting gene function using hierarchical multilabel decision tree ensembles. BMC Bioinform 11(1):2. doi:10.1186/1471-2105-11-2
 45.
Snoek CGM, Worring M, van Gemert JC, Geusebroek JM, Smeulders AWM (2006) The challenge problem for automated detection of 101 semantic concepts in multimedia. In: Proceedings of the 14th annual ACM international conference on multimedia—MULTIMEDIA ’06. ACM Press. doi:10.1145/1180639.1180727
 46.
Tian Y, Deng N (2005) Support vector classification with nominal attributes. In: Hao Y, Liu J, Wang Y, Cheung YM, Yin H, Jiao L, Ma J, Jiao YC (eds) Computational intelligence and Security. Springer, Berlin, pp 586–591. doi:10.1007/11596448_86
 47.
Trohidis K, Tsoumakas G, Kalliris G, Vlahavas IP (2008) Multilabel classification of music into emotions. In: Bello JP, Chew E, Turnbull D (eds) ISMIR, pp 325–330
 48.
Tsochantaridis I, Joachims T, Hofmann T, Altun Y (2005) Large margin methods for structured and interdependent output variables. J Mach Learn Res 6:1453–1484
 49.
Tsoumakas G, Katakis I (2007) Multilabel classification. Int J Data Warehous Min 3(3):1–13. doi:10.4018/jdwm.2007070101
 50.
Tsoumakas G, Katakis I, Vlahavas I (2009) Mining multilabel data. In: Data mining and knowledge discovery handbook. Springer, New York, pp 667–685. doi:10.1007/978-0-387-09823-4_34
 51.
Tsoumakas G, Katakis I, Vlahavas I (2011) Random k-labelsets for multilabel classification. IEEE Trans Knowl Data Eng 23(7):1079–1089. doi:10.1109/tkde.2010.164
 52.
Turnbull D, Barrington L, Torres D, Lanckriet G (2008) Semantic annotation and retrieval of music and sound effects. IEEE Trans Audio Speech Lang Process 16(2):467–476. doi:10.1109/tasl.2007.913750
 53.
Tversky A (1977) Features of similarity. Psychol Rev 84(4):327–352. doi:10.1037/0033-295x.84.4.327
 54.
Wilcoxon F (1945) Individual comparisons by ranking methods. Biom Bull 1(6):80. doi:10.2307/3001968
 55.
Wu JS, Huang SJ, Zhou ZH (2014) Genome-wide protein function prediction through multi-instance multi-label learning. IEEE/ACM Trans Comput Biol Bioinform 11(5):891–902. doi:10.1109/tcbb.2014.2323058
 56.
Xiao X, Wang P, Lin WZ, Jia JH, Chou KC (2013) iAMP-2L: a two-level multi-label classifier for identifying antimicrobial peptides and their functional types. Anal Biochem 436(2):168–177. doi:10.1016/j.ab.2013.01.019
 57.
Xu XS, Jiang Y, Peng L, Xue X, Zhou ZH (2011) Ensemble approach based on conditional random field for multilabel image and video annotation. In: Proceedings of the 19th ACM international conference on multimedia—MM ’11. ACM Press. doi:10.1145/2072298.2072019
 58.
Yang Y, Joachims T (2008) Text categorization. Scholarpedia 3(5):4242. doi:10.4249/scholarpedia.4242
 59.
Zhang ML, Zhou ZH (2006) Multilabel neural networks with applications to functional genomics and text categorization. IEEE Trans Knowl Data Eng 18(10):1338–1351. doi:10.1109/tkde.2006.162
 60.
Zhang X, Song Q (2015) A multilabel learning based kernel automatic recommendation method for support vector machine. PLoS ONE 10(4):e0120455. doi:10.1371/journal.pone.0120455
 61.
Zhou ZH, Zhang ML (2007) Multi-instance multi-label learning with application to scene classification. In: Advances in neural information processing systems 19
 62.
Zhou ZH, Zhang ML, Huang SJ, Li YF (2012) Multi-instance multi-label learning. Artif Intell 176(1):2291–2320. doi:10.1016/j.artint.2011.10.002
Acknowledgements
The work was supported by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Science and Technology, under Agreements S50020/K0402. Computational resources were provided by the PLGrid Infrastructure.
Appendices
Appendix 1: Full results
All tables included in this section share a common format. Namely, the table header holds the names of the investigated loss functions and the ordinal numbers of the assessed algorithms, which are compatible with the numbering introduced in the description of the experimental setup. The second part of the table consists of 24 rows, each related to the dataset whose number is shown in the first column of the table. Within this part, the set-specific loss, averaged over the CV folds, is shown. Some entries in this part of the table are set to .000, which means that the average loss related to these entries was less than \(10^{-3}\). By analogy, entries set to 1.000 denote values greater than 0.999. The third section contains the average ranks achieved by the investigated multilabel classifiers over the benchmark sets.
Appendix 2: Derivations
Appendix 2.1: Linear approximation
which gives:
And consequently:
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Trajdos, P., Kurzynski, M. An approximated decisiontheoretic algorithm for minimization of the Tversky loss under the multilabel framework. Pattern Anal Applic 22, 389–416 (2019). https://doi.org/10.1007/s1004401706516
Keywords
 Multilabel classification
 Tversky index
 F-measure
 Plug-in rule classifier
 Structured output