A heuristic approach for multiple instance learning by linear separation

We present a fast heuristic approach for solving a binary multiple instance learning (MIL) problem, which consists in discriminating between two kinds of sets of items: the sets are called bags and the items inside them are called instances. Assuming that only two classes of instances are allowed, the common standard hypothesis states that a bag is positive if it contains at least one positive instance and negative when all its instances are negative. Our approach constructs a MIL separating hyperplane by preliminarily fixing the normal and reducing the learning phase to a univariate nonsmooth optimization problem, which can be quickly solved by simply exploring the kink points. Numerical results are presented on a set of test problems drawn from the literature.


Introduction
Multiple instance learning (MIL) (Herrera et al. 2016) deals with the classification of sets of items: in the MIL terminology, such sets are called bags and the corresponding items are called instances. In the binary case, when the instances too can belong to only two alternative classes, a MIL problem is stated on the basis of the so-called standard MIL assumption, which defines a positive bag as a bag containing at least one positive instance and a negative bag as any bag whose instances are all negative. For this reason, MIL problems are often interpreted as a kind of weakly supervised classification problem.
The first MIL problem encountered in the literature is a drug design problem (Dietterich et al. 1997). It consists in determining whether a drug molecule is active or not. Each molecule can assume a finite number of three-dimensional conformations, and it is active if at least one of its conformations is able to bind a particular "binding site," which generally coincides with a larger protein molecule. The key difficulty is that it is not known which conformation makes a molecule active. In this example, the drug molecule is represented by a bag, while the various conformations it can assume correspond to the instances inside the bag.
The MIL paradigm finds application in many fields: text categorization, image recognition (Astorino et al. 2017, 2018), video analysis, diagnostics by means of images (Astorino et al. 2019b; Quellec et al. 2017) and so on. An example fitting very well the standard MIL assumption stated above is discriminating between healthy and nonhealthy patients on the basis of their medical scans (bags): if at least one region (instance) of the medical scan (bag) is abnormal, then the patient is classified as nonhealthy; on the contrary, when all the regions (instances) of the medical scan (bag) are normal, the patient is classified as healthy.
In recent years, many papers have been devoted to MIL problems. The various approaches discussed in the literature fall into one of the three following classes: instance-space approaches, bag-space approaches and embedding-space approaches. In the instance-space approaches, the classifier is constructed in the instance space and the classification of the bags is inferred from the classification of the instances: as a consequence, this kind of approach is of the local type. Some instance-space approaches are found in Andrews et al. (2003); Astorino et al. (2019a, c); Avolio and Fuduli (2021); Bergeron et al. (2012); Gaudioso et al. (2020b); Mangasarian and Wild (2008); Vocaturo et al. (2020). In particular, in Andrews et al. (2003) the first SVM (Support Vector Machine) type model for MIL was proposed, giving rise to a nonlinear mixed-integer program solved by means of a BCD (Block Coordinate Descent) approach (Tseng 2001). The same SVM type model treated in Andrews et al. (2003) has been tackled in Astorino et al. (2019a) by means of a Lagrangian relaxation technique, while in Astorino et al. (2019c) and Bergeron et al. (2012) a MIL linear separation has been obtained by using ad hoc nonsmooth approaches. In Mangasarian and Wild (2008), the authors proposed an instance-space algorithm expressing each positive bag as a convex combination of its instances, whereas in Avolio and Fuduli (2021) a combination of the SVM and the PSVM (Proximal Support Vector Machine) approaches has been adopted. In Gaudioso et al. (2020b) and Vocaturo et al. (2020), a spherical separation model has been tackled by using DC (Difference of Convex) techniques. Other SVM type instance-space approaches for MIL are found in Li et al. (2009), Melki et al. (2018), Shan et al. (2018), and Zhang et al. (2013), while in Yuan et al. (2021) a spherical separation with margin is used.
Differently from the above instance-space approaches, the bag-space techniques (see, for example, Gärtner et al. (2002), Wang and Zucker (2000), and Zhou et al. (2009)) are of the global type, since classification is performed by considering each bag as an entire entity. Finally, the embedding-space approaches, such as Zhang et al. (2017), are a compromise between the two previous ones, since the classifier is obtained in the instance space on the basis of some instances per bag, namely those which are most representative of the bag. For more details on the MIL paradigm, we refer the reader to the exhaustive surveys Amores (2013) and Carbonneau et al. (2018).
In this work, stemming from a formulation similar to those adopted in Andrews et al. (2003) (MI-SVM formulation), Astorino et al. (2019c), and Bergeron et al. (2012), where both the normal and the bias of a separating hyperplane are computed, we present a fast instance-space algorithm, which generates a separating hyperplane by heuristically prefixing its normal and by successively computing the bias as an optimal solution to a univariate nonsmooth optimization problem. Efficiently solving this univariate nonsmooth problem (by simply exploring the kink points) constitutes the main novelty of our approach, which ensures quite low computational times while providing reasonable testing accuracy.
The paper is organized as follows. In Sect. 2, we introduce our approach, while some numerical results on a set of benchmark test problems are reported in Sect. 3. Finally, in Sect. 4 some conclusions are drawn.

The approach
Assume we are given the index sets I− and I+ of the k negative and the m positive bags, respectively. We indicate by {x_j ∈ R^n} the set of all the instances, each of them belonging to exactly one bag, either negative or positive. We denote by {J−_1, . . . , J−_k} and {J+_1, . . . , J+_m} the instance index sets of the negative and positive bags, respectively.
The objective is to find a hyperplane

H(w, γ) = {x ∈ R^n : wᵀx = γ},

with w ∈ R^n and γ ∈ R, which (strictly) separates the two classes of bags on the basis of the standard MIL assumption, i.e.,

- all the negative bags are entirely confined in the interior of one of the two halfspaces generated by H;
- each positive bag has at least one of its instances falling into the interior of the other halfspace.
More formally, H(w, γ) is a separating hyperplane if and only if

wᵀx_j ≤ γ − 1 for all j ∈ J−_i and i ∈ I−, (1)

and

max_{j∈J+_i} wᵀx_j ≥ γ + 1 for all i ∈ I+. (2)

To state an optimization model able to provide a possibly separating hyperplane, we define the error e−_i(w, γ) in classifying the negative bag i ∈ I− as

e−_i(w, γ) = Σ_{j∈J−_i} max{0, wᵀx_j − γ + 1},

and the error e+_i(w, γ) in classifying the positive bag i ∈ I+ as

e+_i(w, γ) = max{0, 1 + γ − max_{j∈J+_i} wᵀx_j}.

Summing up, we obtain the following overall error function:

e(w, γ) = Σ_{i∈I−} e−_i(w, γ) + Σ_{i∈I+} e+_i(w, γ), (3)

and the resulting optimization problem

min_{w,γ} e(w, γ). (4)

We will refer to the model above as Formulation 1. Note that e(w, γ) ≥ 0, and e(w, γ) = 0 if and only if H(w, γ) is a separating hyperplane, according to (1) and (2).
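For illustration, the overall error function can be coded directly from the definitions above. This is a minimal Python sketch under our naming (`mil_error` and the toy data are ours, not from the paper), assuming the margin-1 hinge terms customary in SVM-type MIL formulations:

```python
# Sketch of the overall error e(w, gamma) of Formulation 1 (our naming):
# each negative instance x_j contributes max{0, w'x_j - gamma + 1};
# each positive bag contributes max{0, 1 + gamma - max_j w'x_j}.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mil_error(w, gamma, neg_bags, pos_bags):
    """neg_bags, pos_bags: lists of bags, each bag a list of instance vectors."""
    err = 0.0
    for bag in neg_bags:      # every instance of a negative bag must lie on the negative side
        err += sum(max(0.0, dot(w, x) - gamma + 1.0) for x in bag)
    for bag in pos_bags:      # at least one instance per positive bag must lie on the positive side
        err += max(0.0, 1.0 + gamma - max(dot(w, x) for x in bag))
    return err

# Toy example in R^1: negative instances around 0, each positive bag
# containing one instance far on the positive side.
neg = [[[0.0], [0.5]], [[-1.0]]]
pos = [[[4.0], [0.2]], [[5.0]]]
# w = 1, gamma = 2 separates the bags with unit margin, so the error vanishes.
print(mil_error([1.0], 2.0, neg, pos))  # -> 0.0
```

With γ = 3.5 instead, the first positive bag violates its margin by 0.5 and the error becomes positive, matching the role of e(w, γ) as a separation certificate.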
Function e(w, γ) is nonsmooth and nonconvex, but it can be put in DC (Difference of Convex) form (Le Thi and Pham Dinh 2005). This formulation is similar to those adopted in Andrews et al. (2003) (MI-SVM formulation) and Bergeron et al. (2012), while the DC decomposition has been exploited in Astorino et al. (2019c). The reader will find a recent survey on nonsmooth optimization methods in Gaudioso et al. (2020a); some specialized algorithms can be found in Gaudioso and Monaco (1992) and Astorino et al. (2011).
Our heuristic approach consists first in a judicious selection of w, the normal to the separating hyperplane, and then in minimizing the error function with respect to the scalar variable γ .
As for the choice of w, we calculate the barycenter a of all the instances of the negative bags and the barycenter b of the barycenters of the instances in each positive bag, and then we fix the normal w̄ to the hyperplane by setting

w̄ = M(b − a) (5)

for some M > 0. Note that, whenever M = 1 and provided a and b do not coincide, by setting γ− = aᵀb − ‖a‖² and γ+ = ‖b‖² − aᵀb, the hyperplanes H(w̄, γ−) and H(w̄, γ+) pass through the points a and b, respectively.

Once the normal w̄ has been fixed, defining

α_ij = w̄ᵀx_j + 1, i ∈ I−, j ∈ J−_i, (6)

and

β_i = 1 − max_{j∈J+_i} w̄ᵀx_j, i ∈ I+, (7)

we rewrite function (3) as follows:

e(γ) = Σ_{i∈I−} Σ_{j∈J−_i} max{0, α_ij − γ} + Σ_{i∈I+} max{0, β_i + γ}. (8)

As a consequence, problem (4) becomes

min_γ e(γ), (9)

which consists of minimizing a convex and nonsmooth (piecewise affine) function of the scalar variable γ. We note in passing that, by introducing the additional variables ξ_ij, i ∈ I−, j ∈ J−_i, and ζ_i, i ∈ I+ (grouped into the vectors ξ and ζ), the problem can be equivalently rewritten as a linear program of the form

min_{γ, ξ, ζ} Σ_{i∈I−} Σ_{j∈J−_i} ξ_ij + Σ_{i∈I+} ζ_i
s.t. ξ_ij ≥ α_ij − γ, ξ_ij ≥ 0, i ∈ I−, j ∈ J−_i,
ζ_i ≥ β_i + γ, ζ_i ≥ 0, i ∈ I+.

To find an optimal solution to the problem, we prefer, however, to work directly on formulation (9). Note that the nonnegative function e(γ) is continuous and coercive; consequently, it has a minimum. In particular, e(γ) = 0 corresponds to a correct classification of all the bags.

A brief discussion of the differential properties of function (8) is in order. Letting

IJ−_0(γ) = {(i, j) : α_ij = γ}, IJ−_−(γ) = {(i, j) : α_ij > γ},
I+_0(γ) = {i ∈ I+ : β_i + γ = 0}, I+_+(γ) = {i ∈ I+ : β_i + γ > 0},

we have the following expressions of the corresponding subdifferentials:

∂( Σ_{i∈I−} Σ_{j∈J−_i} max{0, α_ij − γ} ) = {−|IJ−_−(γ)|} + Σ_{(i,j)∈IJ−_0(γ)} [−1, 0] (10)

and

∂( Σ_{i∈I+} max{0, β_i + γ} ) = {|I+_+(γ)|} + Σ_{i∈I+_0(γ)} [0, 1]. (11)

From (10) and (11), it is easy to see that the points α_ij, i ∈ I−, j ∈ J−_i, and −β_i, i ∈ I+, constitute the kinks, i.e., the points where function e(γ) is nonsmooth. Note also that e(γ) has constant negative slope −Σ_{i∈I−}|J−_i| for γ smaller than all the kinks, and it has constant positive slope m = |I+| for γ bigger than all the kinks. Taking into account (10) and (11), the subdifferential ∂e(γ) is the Minkowski sum of four sets, i.e.,

∂e(γ) = {−|IJ−_−(γ)|} + {|I+_+(γ)|} + Σ_{(i,j)∈IJ−_0(γ)} [−1, 0] + Σ_{i∈I+_0(γ)} [0, 1], (12)

and, at the non-kink points, where function e(γ) is differentiable, it is

e′(γ) = |I+_+(γ)| − |IJ−_−(γ)|. (13)

Moreover, at each kink point, say γ, the slope jumps up by s, the multiplicity of the kink, defined as s = |IJ−_0(γ)| + |I+_0(γ)|. Letting

γ_α = max_{i∈I−, j∈J−_i} α_ij and γ_β = min_{i∈I+} (−β_i),

the following property holds.

Proposition 1
The optimal objective function value e* of problem (9) is equal to zero if and only if γ_α ≤ γ_β. In such a case, every γ ∈ [γ_α, γ_β] is optimal.
We consider now the case γ_α > γ_β and state the following theorem.

Theorem 1 If γ_α > γ_β, then problem (9) admits an optimal solution at a kink point belonging to the interval [γ_β, γ_α].
Proof We prove first that any optimal solution belongs to the interval [γ_β, γ_α]. We observe, in fact, that for every γ̂ < γ_β and for every i ∈ I+ it is max{0, β_i + γ̂} = 0, while there exists at least one pair (i, j) such that max{0, α_ij − γ̂} > 0. This implies that the directional derivative of e(γ) at γ̂ along the positive semi-axis is negative. A similar argument can be used to show that at any γ̂ > γ_α the directional derivative along the positive semi-axis is positive. As a consequence, e(γ) necessarily has an optimal solution in the interval [γ_β, γ_α]. Now, observing that γ_α and γ_β are both kinks, consider any optimal non-kink solution γ* ∈ (γ_β, γ_α). Since the function is differentiable at γ*, it is |IJ−_0(γ*)| = |I+_0(γ*)| = 0 and the derivative of e(γ) at γ* vanishes, that is, from (13), |I+_+(γ*)| = |IJ−_−(γ*)|. Now consider the biggest kink smaller than γ*: the existence of such a kink is guaranteed by recalling that γ_β is a kink and γ* > γ_β. Assume for the time being that such a kink is α_sh for some s ∈ I−, h ∈ J−_s, and let γ̂ = α_sh. Since no kink falls in the open interval (γ̂, γ*), it is IJ−_−(γ̂) = IJ−_−(γ*) and I+_+(γ̂) ∪ I+_0(γ̂) = I+_+(γ*). Summing up, and taking into account (12), it follows that 0 ∈ ∂e(γ̂), i.e., γ̂ = α_sh is an optimal kink solution. The case when the biggest kink smaller than γ* is −β_s for some s ∈ I+ can be treated in a perfectly analogous way.
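Before turning to the algorithm, the barycenter-based choice of the normal (formula (5)) can be put in code. This is a minimal sketch: the function names and toy data are ours, and we take M = 1 for simplicity:

```python
# Heuristic choice of the normal: a is the barycenter of ALL negative
# instances pooled together, b is the barycenter of the per-bag
# barycenters of the positive bags, and the normal is M * (b - a).

def barycenter(points):
    n = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(n)]

def mil_normal(neg_bags, pos_bags, M=1.0):
    a = barycenter([x for bag in neg_bags for x in bag])     # pooled negative instances
    b = barycenter([barycenter(bag) for bag in pos_bags])    # barycenter of positive-bag barycenters
    return [M * (bd - ad) for ad, bd in zip(a, b)], a, b

neg = [[[0.0, 0.0], [2.0, 0.0]], [[0.0, 2.0], [2.0, 2.0]]]   # a = (1, 1)
pos = [[[5.0, 1.0], [7.0, 1.0]], [[5.0, 3.0]]]               # bag barycenters (6,1), (5,3): b = (5.5, 2)
w, a, b = mil_normal(neg, pos)
print(w)  # -> [4.5, 1.0]
```

Note the asymmetry, which mirrors the standard MIL assumption: negative instances are pooled directly, whereas each positive bag first contributes a single representative point (its own barycenter), so that large positive bags do not dominate the direction.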
The properties of function e(γ ) we have discussed allow us to state the following kink exploring algorithm to solve problem (9) in order to compute an optimal solution γ * .
Algorithm MIL-kink

Step 1 (Sorting the kinks). Sort in ascending order the kink points α_ij, i ∈ I−, j ∈ J−_i, and −β_i, i ∈ I+.

Step 2 (Exploring the kinks). Explore the kinks in ascending order, starting from γ_β, until a value γ* = α_sh, for some s ∈ I− and h ∈ J−_s, or γ* = −β_s, for some s ∈ I+, is found such that 0 ∈ ∂e(γ*).

We conclude this section by remarking that an alternative formulation of the error function is obtained by replacing function e−_i(w, γ) in (3) by

max_{j∈J−_i} max{0, wᵀx_j − γ + 1},

i.e., by charging each negative bag only with the error of its worst-classified instance.

Such formulation will be referred to as Formulation 2, and its theoretical treatment is perfectly analogous to that of Formulation 1. Despite that, the two formulations present a relevant difference from the computational point of view since, in Formulation 2, the kinks α_ij, i ∈ I−, j ∈ J−_i, characterizing Formulation 1 (see formula (6)) are replaced by the following ones:

α_i = 1 + max_{j∈J−_i} w̄ᵀx_j, i ∈ I−,

which are much fewer (one per negative bag instead of one per negative instance). As a consequence, the overall number of kinks drops to m + k, and the following complexity estimate holds.

Proposition 2 In such a case, Algorithm MIL-kink runs in time O(q), where q = max{nm, nk, (m + k) log(m + k)}.
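The kink-exploring scheme, for either formulation, can be sketched as follows. This is our reconstruction, not the authors' MATLAB code: it relies on the facts that e(γ) is convex piecewise affine, that its slope to the left of all kinks equals minus the number of negative kinks, and that crossing a kink raises the slope by the kink multiplicity; the first kink at which the running slope becomes nonnegative therefore satisfies 0 ∈ ∂e(γ) and is optimal.

```python
from itertools import groupby

def mil_kink(neg_scores, pos_scores, formulation=1):
    """Kink exploration for min_gamma e(gamma) (a reconstruction, names ours).

    neg_scores / pos_scores: lists of bags, each a list of w'x_j values.
    formulation=1 -> one kink per negative instance (alpha_ij = w'x_j + 1);
    formulation=2 -> one kink per negative bag (alpha_i = max_j w'x_j + 1).
    """
    if formulation == 1:
        neg_kinks = [s + 1.0 for bag in neg_scores for s in bag]
    else:
        neg_kinks = [max(bag) + 1.0 for bag in neg_scores]
    pos_kinks = [max(bag) - 1.0 for bag in pos_scores]   # the points -beta_i
    kinks = sorted(neg_kinks + pos_kinks)                # Step 1 (sorting)
    slope = -len(neg_kinks)         # slope of e to the left of every kink
    for gamma, group in groupby(kinks):                  # Step 2 (exploring)
        slope += sum(1 for _ in group)   # jump by the multiplicity of the kink
        if slope >= 0:                   # left slope <= 0 <= right slope: 0 in de(gamma)
            return gamma
    return kinks[-1]                     # unreachable: the final slope is |I+| > 0

# 1-D toy data with w = 1, so the scores coincide with the instances.
neg = [[0.0, 0.5], [-1.0]]
pos = [[4.0, 0.2], [5.0]]
print(mil_kink(neg, pos, formulation=1))  # -> 1.5
print(mil_kink(neg, pos, formulation=2))  # -> 1.5
```

On this toy data both formulations return γ* = 1.5 = γ_α, the left endpoint of the optimality interval of Proposition 1; the only difference is that Formulation 2 scans two negative kinks instead of three.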

Numerical results
Algorithm MIL-kink, described in the previous section, has been implemented in MATLAB (version R2017b) on a Windows 10 system equipped with a 2.21 GHz processor and 16 GB of RAM. Both formulations (code MIL-kink 1, corresponding to Formulation 1, and code MIL-kink 2, corresponding to Formulation 2) have been tested on twelve data sets drawn from the literature (Andrews et al. 2003) and listed in Table 1. The first three data sets are image recognition problems, the last two consist in predicting whether a compound is a musk or not, while the TST data sets are large-scale text classification problems.
In all the experiments, we have set w̄ according to (5), taking M = 10^6. Moreover, for each data set, we have adopted the classical tenfold cross-validation, obtaining the results reported in Table 2 in terms of average training and testing correctness.
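The tenfold protocol can be sketched generically as follows (a sketch of ours, not the authors' MATLAB code; the interleaved fold assignment is just one reasonable choice). The one MIL-specific point is that the folds must partition the bags, not the instances, so that no bag is split between training and testing.

```python
import random

# Bag-level ten-fold cross-validation: each fold is a disjoint list of
# bag indices, and together the folds cover all bags exactly once.

def tenfold_bag_indices(n_bags, seed=0):
    """Return 10 disjoint lists of bag indices covering range(n_bags)."""
    idx = list(range(n_bags))
    random.Random(seed).shuffle(idx)      # seeded for a reproducible fold structure
    return [idx[f::10] for f in range(10)]

folds = tenfold_bag_indices(23)
for test_bags in folds:
    train_bags = [i for i in range(23) if i not in test_bags]
    # ... fit the separating hyperplane on train_bags, evaluate on test_bags ...
```

Reusing the same seeded fold structure across all compared algorithms is what makes the CPU-time and correctness comparisons of the next tables meaningful.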
In Table 3, we compare our results, in terms of average testing correctness and average CPU time, with those reported in Avolio and Fuduli (2021) and provided by the MATLAB implementations (launched on the same machine, with the same cross-validation fold structure) of the following algorithms taken from the literature:

- mi-SPSVM (Avolio and Fuduli 2021): an instance-space approach, generating a separating hyperplane placed in the middle between a supporting hyperplane for the instances of the negative bags and a clustering hyperplane for the instances of the positive bags.
- mi-SVM (Andrews et al. 2003): an instance-space approach, where a separating hyperplane is constructed by solving an SVM type optimization model by means of a BCD technique (Tseng 2001).
- MIL-RL (Astorino et al. 2019a): an instance-space approach, which provides a separating hyperplane by solving, by means of a Lagrangian relaxation technique (Gaudioso 2020), the same SVM type optimization model adopted in mi-SVM.
All the above listed algorithms share with MIL-kink the characteristic of providing a linear separation classifier (i.e., a hyperplane); thus, the CPU time reported in Table 3 corresponds exactly to the execution time, averaged over the ten folds, needed to compute such a hyperplane.
In Table 3, for each data set, the best results have been underlined. Comparing all the algorithms, we observe that our approach is clearly very fast (with a CPU time always below one second), especially when Formulation 2 is adopted (there, the number of explored kinks is definitely smaller than in Formulation 1). In terms of average testing correctness, MIL-kink outperforms the other algorithms on four data sets (Elephant, Tiger, TST7, TST10), showing a comparable performance on the remaining test problems (especially on TST4 and TST9).
To have an idea of the performance of our method with respect to further approaches drawn from the literature, in Table 4 we report the comparison of our technique (only in terms of average testing correctness) against the following MIL algorithms, whose results have been taken from the corresponding papers:

- MI-NPSVM (Zhang et al. 2013): an instance-space approach, generating two nonparallel hyperplanes by solving two respective SVM type problems.
- MIRSVM (Melki et al. 2018): an embedding-space SVM type approach, based on identifying, at each iteration, the instances with the most impact on the classification process.
- SSLM-MIL (Yuan et al. 2021): an instance-space approach, based on spherical separation with margin.
For each data set, the best results have been underlined, and the character "-" means that the corresponding datum is not available. Moreover, as for MI-NPSVM, in order to have a fair comparison, we have considered only the linear kernel version, for which no result on the musk data sets is available. Looking at the results of Table 4, we observe that MIL-kink exhibits the best performance on three data sets (TST7, TST9 and TST10) and provides quite reasonable results also on Elephant, TST1 and TST4.

Conclusions
We have presented a fast heuristic algorithm for solving binary MIL problems characterized by two classes of instances. Our approach gives rise to a univariate nonsmooth optimization model that we solve exactly by simply exploring the kink points. The numerical results appear interesting mainly in terms of computational time, thus suggesting the use of the method either for dealing with very large data sets or as a first tool to check the viability of a MIL approach in a specific application.

Author Contributions
The authors contributed to each part of this paper equally.
Funding No funding was received.

Declarations
Conflict of interest All authors declare that they have no conflict of interest.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.