A comparative study on large scale kernelized support vector machines

  • Daniel Horn
  • Aydın Demircioğlu
  • Bernd Bischl
  • Tobias Glasmachers
  • Claus Weihs

Abstract

Kernelized support vector machines (SVMs) are among the most widely used classification methods. However, in contrast to linear SVMs, the computation time required to train such a machine becomes a bottleneck when facing large data sets. In order to mitigate this shortcoming of kernel SVMs, many approximate training algorithms were developed. While most of these methods claim to be much faster than the state-of-the-art solver LIBSVM, a thorough comparative study is missing. We aim to fill this gap. We choose several well-known approximate SVM solvers and compare their performance on a number of large benchmark data sets. Our focus is to analyze the trade-off between prediction error and runtime for different learning and accuracy parameter settings. This includes simple subsampling of the data, the poor-man’s approach to handling large scale problems. We employ model-based multi-objective optimization, which allows us to tune the parameters of learning machine and solver over the full range of accuracy/runtime trade-offs. We analyze (differences between) solvers by studying and comparing the Pareto fronts formed by the two objectives classification error and training time. Unsurprisingly, given more runtime, most solvers are able to find more accurate solutions, i.e., achieve a higher prediction accuracy. It turns out that LIBSVM with subsampling of the data is a strong baseline. Some solvers systematically outperform others, which allows us to give concrete recommendations of when to use which solver.

Keywords

Support vector machine · Multi-objective optimization · Supervised learning · Machine learning · Large scale · Nonlinear SVM · Parameter tuning

Mathematics Subject Classification

62-07 Data analysis 

1 Introduction

In light of the ever growing amount of data, classification of large datasets is becoming more and more important. Though linear methods can be very effective for some datasets, e.g., with excessively large numbers of features, in general non-linear methods perform much better. Among the different state-of-the-art methods for solving large scale problems are non-linear support vector machines (SVM). They combine excellent performance with a sound mathematical foundation (see Cortes and Vapnik 1995).

Standard decomposition SVM solvers are known to scale at least quadratically with the dataset size (Bottou and Lin 2007). With growing dataset sizes, training a non-linear SVM becomes increasingly difficult. Solving problems that are large by today's standards poses a huge computational burden on current hardware.

Many approximation methods have been developed to mitigate this problem. The hope is that such approximations will reduce the computational complexity and deliver solutions that are comparable in accuracy. Elaborate approximation methods can be expected to be more efficient than the simplest of all speed-up techniques, random subsampling of the training data. We include this option as a (surprisingly strong) baseline technique. The practical question of which of these solvers to use is largely under-explored. In the literature there exist a few theoretical studies that aim to give guidance; e.g., an influential theoretical framework was developed by Bottou and Bousquet (2008).

Unfortunately, detailed empirical comparative studies between these approximative solvers are missing. One reason is that such studies are computationally extremely expensive. This problem is exacerbated by the fact that in addition to the usual SVM hyperparameters, the approximation algorithms come with additional control settings that influence both running time and accuracy. Furthermore, it is nearly impossible to explicitly pre-specify how to balance these two objectives prior to experimental tests.

Trading runtime against accuracy is a necessity in the large scale setting. This is inherently a multi-objective problem. Approximate solvers have numerous parameters and a single training run can take several hours. Hence a standard, naïve grid search for parameter tuning is not feasible. Instead we apply model-based multi-objective optimization, which is state-of-the-art for this setting. In our experiments the Pareto front is approximated by the ParEGO algorithm (see Knowles 2006). This approach yields a set of classifiers representing a wide range of accuracy/runtime tradeoffs, and at the same time it tunes all relevant parameters of learning machine and solver.

The core questions of this paper are:
  • Which approximative solver should be used on large scale data sets?

  • Which approximations consistently outperform the standard solver LIBSVM?

  • Does subsampling influence the trade-off of accuracy and training time?

We do not expect to find a general answer to the first question since results usually depend too much on the data set under consideration. Nonetheless, general trends might show up for large scale data, which is what we examine here.

Our contributions are as follows: We present a thorough study comparing representative implementations of well-known approximate SVM training algorithms, including thorough parameter tuning. We analyze the trade-offs between training time and accuracy on a number of large scale data sets.

The paper is organized as follows. In Sects. 2 and 3 we define the kernelized SVM problem and discuss approximate solvers. In Sect. 4 we describe how we explore the trade-offs between training time and accuracy with a multi-objective optimization approach. The experimental setup and results follow in Sects. 5 and 6.

2 Kernelized support vector machines

Support vector machines (SVM, Cortes and Vapnik 1995) are binary large-margin classifiers. Given labeled data \(\mathcal {D}=\{(x_1, y_1), \ldots , (x_n, y_n)\} \in (X \times \{ \pm 1\})^n\) the predictive model \(h_{w,b}(x) = \text {sign}(\langle w, \varphi (x)\rangle _{\mathcal {H}} + b)\) of the non-linear soft-margin support vector machine (in the primal space) is given by the solution of
$$\begin{aligned} \min _{w \in \mathcal {H}, b \in \mathbb {R}} \quad \frac{1}{2} ||w||^2 + C \cdot \sum _{i=1}^n \max \Big ( 0, 1 - y_i \big ( \langle w, \varphi (x_i) \rangle _{\mathcal {H}} + b \big ) \Big ). \end{aligned}$$
(1)
Here, \(\varphi : X \rightarrow \mathcal {H}\) is a feature map into a reproducing kernel Hilbert space \(\mathcal {H}\), corresponding to a positive definite (Mercer) kernel function \(k : X \times X \rightarrow \mathbb {R}\), fulfilling \(\langle \varphi (x), \varphi (x') \rangle _{\mathcal {H}} = k(x, x')\) for all \(x, x' \in X\). Furthermore, \(C > 0\) is a regularization parameter. It controls the complexity of the SVM model. The equivalent dual optimization problem reads
$$\begin{aligned} \max _{\alpha \in \mathbb {R}^n} \quad&\sum _{i=1}^n \alpha _i - \frac{1}{2} \sum _{i,j=1}^n \alpha _i \alpha _j y_i y_j k(x_i, x_j) \nonumber \\ \text {s.t.} \quad&\sum _{i=1}^{n} y_i \alpha _i = 0\quad \text {and}\quad 0 \le \alpha _i \le C \,\,\, \forall \, i \in \{1, \ldots , n\}. \end{aligned}$$
(2)
The solution then takes the form \(w = \sum _{i=1}^n \alpha _i y_i \varphi (x_i)\) and the offset b can be computed from the Karush–Kuhn–Tucker (KKT) complementarity conditions. We use the RBF kernel \(k(x,x') = e^{-\gamma || x-x' ||^2}\). Its parameter \(\gamma > 0\) controls the kernel width. The RBF kernel yields excellent performance on a large set of problems, and is one of the off-the-shelf kernels with the universal approximation property.
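For concreteness, the following minimal Python sketch (ours, not part of the original study) evaluates the RBF kernel and the dual decision values \(\langle w, \varphi (x)\rangle _{\mathcal {H}} + b = \sum _i \alpha _i y_i k(x_i, x) + b\) for given dual variables; all names are illustrative only.

```python
# Minimal sketch (not from the paper): RBF kernel and dual decision values.
import numpy as np

def rbf_kernel(A, B, gamma):
    # k(x, x') = exp(-gamma * ||x - x'||^2) for all pairs of rows of A and B
    sq_dists = (np.sum(A ** 2, axis=1)[:, None]
                + np.sum(B ** 2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

def decision_values(X_train, y_train, alpha, b, gamma, X_test):
    # w = sum_i alpha_i y_i phi(x_i); evaluate <w, phi(x)> + b via the kernel
    K = rbf_kernel(X_test, X_train, gamma)
    return K @ (alpha * y_train) + b
```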

On large scale data a non-parametric kernel-based model introduces a huge computational disadvantage as compared to, e.g., linear models, because the solution \(w = \sum _{i=1}^n \alpha _i y_i \varphi (x_i)\) is a linear combination of the basis functions \(\varphi (x_i)\), the number of which is known to grow linearly with the size of the training set (see Steinwart 2003). Hence, single iterations even of simple optimization methods such as (primal) gradient descent and (dual) coordinate descent require \(\mathcal {O}(n)\) instead of \(\mathcal {O}(1)\) operations.

3 Support vector machine solvers

Sequential minimal optimization (SMO) Modern SVM solvers such as LIBSVM (Chang and Lin 2011) apply a decomposition technique to iteratively solve a sequence of small sub-problems of the dual problem (2). SMO (see Platt 1998) refers to the strategy of solving minimally sized sub-problems that allow for feasible update steps. These sub-problems can be solved analytically. Other important performance enhancing techniques are second order working set selection (Glasmachers and Igel 2006), online shrinking of variables, and a kernel cache (Joachims 1998).

We call SMO an exact SVM solver since it is usually applied to solve the problem with high precision. LIBSVM can be considered a modern reference implementation of the SMO algorithm.

Approximate SVMs Many algorithms have been proposed to improve on the SMO algorithm, usually with the promise to deliver approximate solutions of near-optimal quality in considerably reduced time.

There are several different ways to approximate the SVM solution. These include stochastic gradient descent methods (see Bordes et al. 2005; Shalev-Shwartz et al. 2011), low-rank matrix approximations (Williams and Seeger 2001; Fine and Scheinberg 2002), geometric methods (Tsang et al. 2007; Nandan et al. 2013), as well as ensemble methods (Graf et al. 2004). We restrict ourselves to well-known implementations that cover a representative set of techniques (see Table 1), reserving ensemble methods for future research:
Table 1

SVM solvers with loss type and parameters that were subject to tuning

| SVM solver | Parameters | Optimization space | Loss type | Method | Type | Sparse |
|---|---|---|---|---|---|---|
| BSGD | Budget size, \(\#\)epochs | \(2^{[4, 11]}, 2^{[0, 7]}\) | Hinge | Online | Primal | Yes |
| LLSVM | Matrix rank | \(2^{[4, 11]}\) | Hinge | Batch | Dual | Yes |
| LASVM | \(\epsilon \) (accuracy), \(\#\)epochs | \(2^{[-13, -1]}, 2^{[0, 7]}\) | Hinge | Online | Dual | Yes |
| BVM/CVM | \(\epsilon \) (accuracy) | \(2^{[-19, -1]}\) | Squared hinge | Batch | Dual | Yes |
| LIBSVM | \(\epsilon \) (accuracy) | \(2^{[-13, -1]}\) | Hinge | Batch | Dual | Yes |
| Pegasos | \(\#\)epochs | \(2^{[0, 7]}\) | Hinge | Online | Primal | No |
| SVMperf | \(\epsilon \) (accuracy), \(\#\)cutting planes | \(2^{[-13, -1]}, 2^{[4, 11]}\) | Hinge | Batch | Dual | Yes |

‘Sparse’ denotes whether the solver can utilize sparse data in some way, e.g., in kernel computations. Note that all solvers are implemented in C/C++.

  • LASVM an online learning variant of SMO (see Bordes et al. 2005).

  • SVMperf an adaptation of the cutting-plane method to SVM training, which approximates the empirical risk term (see Joachims and Yu 2009).

  • Pegasos a stochastic subgradient descent algorithm with a special choice of the learning rate (see Shalev-Shwartz et al. 2011). Pegasos is missing a rigorous stopping condition. It is run for a predefined number of epochs.

  • Budgeted stochastic gradient descent (BSGD) similar to Pegasos, BSGD is an SGD method (see Wang et al. 2012 and Djuric et al. 2013). The number of support vectors is limited to a predefined budget size that is maintained through projection or merging of support vectors.

  • LLSVM the idea of low-rank linearization (LLSVM) is to decompose the kernel matrix K with entries \(K_{ij} = k(x_i, x_j)\) into \(K = FF^T\), where F has lower rank and can be interpreted as a data matrix. LLSVM forms F with the Nyström method (see Zhang et al. 2012); a small sketch of this construction follows this list. LIBLINEAR (Fan et al. 2008) is then used to solve the linearized problem.

  • CVM/BVM the core vector machine (CVM) (Tsang et al. 2005) and the ball vector machine (BVM) (Tsang et al. 2007) reformulate the SVM problem as an enclosing ball problem. Both rely on the squared hinge loss.
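To make the low-rank linearization step concrete, here is a hedged sketch (ours, not the LLSVM implementation) of a Nyström feature construction. The uniform random choice of landmarks is an assumption made for illustration, and rbf_kernel refers to the sketch in Sect. 2.

```python
# Hedged sketch of Nyström low-rank linearization: build F with K ~= F F^T,
# then run any linear solver (the paper uses LIBLINEAR) on the rows of F.
import numpy as np

def nystroem_features(X, gamma, m, rng):
    idx = rng.choice(X.shape[0], size=m, replace=False)  # landmark points (assumed uniform)
    L = X[idx]
    C = rbf_kernel(X, L, gamma)     # n x m block of the kernel matrix (see Sect. 2 sketch)
    W = rbf_kernel(L, L, gamma)     # m x m block among the landmarks
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, 1e-12)  # guard against numerically negative eigenvalues
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt           # F of rank at most m with F F^T ~= K
```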

All these solvers have further parameters in addition to the common \(\gamma \) (RBF kernel width) and C (regularization control), e.g., for controlling the approximation quality. Some of these additional parameters can strongly influence accuracy and runtime and were also tuned in our subsequent experiments. Table 1 gives an overview of the above mentioned algorithms and their tuned parameters.

A general recommendation for training on large-scale data is to subsample the data, at least for the purpose of model selection. The hope is that a random subsample will still capture most of the structure of the data, so that a model with nearly the same generalization capability can be found in less time. Since this technique is applicable in combination with any solver, we have also included the subsampling ratio as an additional parameter for all solvers (see Table 1 for the list of the other approximation parameters). It is optimized on a logarithmic scale over \(2^{[-10, 0]}\). By this we do not fix the subsampling rate beforehand, but leave it to our multi-criteria optimization (see Sect. 4) to find the subsampling rates that give rise to the best trade-offs between accuracy and speed.
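For illustration, a minimal sketch (ours) of how a tuned rate on the \(2^{[-10, 0]}\) scale translates into a training subset:

```python
# Minimal sketch (ours): draw a training subset for a tuned subsampling rate.
import numpy as np

def subsample(X, y, log2_rate, rng):
    rate = 2.0 ** log2_rate            # log2_rate is tuned over [-10, 0]; 0 keeps all data
    n_keep = max(1, int(round(rate * X.shape[0])))
    idx = rng.choice(X.shape[0], size=n_keep, replace=False)
    return X[idx], y[idx]
```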

Computational complexity The trade-off between accuracy and training time depends on (at least) three factors: the scaling of the computational complexity w.r.t. (a) the number of training points, (b) the desired accuracy (convergence rate of the iterative optimization method), and (c) the iteration cost. In general, exact training of non-linear SVMs scales between \(\mathcal {O}(n^2)\) and \(\mathcal {O}(n^3)\) (Bottou and Lin 2007). Approximations can scale better, e.g., \(\mathcal {O}(n)\) for CVM, at the price of a stopping criterion based on relative accuracy. The progress of SGD-based methods (Pegasos, BSGD) per iteration is independent of n (Bottou and Bousquet 2008), but they suffer from slow convergence of order \(\mathcal {O}(1/\sqrt{t})\) (where t is the number of iterations), while SMO-based solvers (LIBSVM, LASVM, second phase of LLSVM) achieve locally linear convergence (Lin 2001). The iteration cost of solvers with budget B is \(\mathcal {O}(B)\) (BSGD, LLSVM), whereas the other methods pay \(\mathcal {O}(n)\). Results combining data size and optimization error are found in Bottou and Bousquet (2008) for linear models, which, applied to kernel SVMs, suffer from the high dimension of the feature space. In practice, however, the resulting effects are often dominated by the dependency of the training time on the parameters C and \(\gamma \), which can make a difference of several orders of magnitude. This makes it difficult to relate our results to existing theory.

4 Multi-criteria parameter optimization

Multi-objective optimization (MOO) refers to an optimization setting with multiple objective functions \(f_1, f_2, \ldots , f_m : \mathcal X \rightarrow \mathbb {R}\), or equivalently, a vector-valued objective function \(f = (f_1, \ldots , f_m) : \mathcal X \rightarrow \mathbb {R}^m\), with the usual convention that each individual objective should be minimized. In general the objectives will be contradicting, and we are interested in which trade-offs are achievable.1 A solution x is said to dominate solution \(x'\) if x is at least as good in all objectives as \(x'\) and strictly better in at least one objective. This relation defines only a partial order, allowing the case of incomparable solutions. The dominance relation is sufficiently strong for a definition of optimality: a solution x is called Pareto optimal if and only if it is not dominated by any other solution \(x'\) (Ehrgott 2013). The set \(\big \{ f(x) \,\big |\, x \in \mathcal X \text { is Pareto optimal} \big \}\) of all non-dominated solutions is called the Pareto front. This front represents the inherent possible trade-off between all objectives, and the task of MOO is to approximate this set in an efficient manner. Nowadays, it is common practice to apply evolutionary algorithms for this, but our application contains a further complicating factor, as evaluating our objectives will be so time-consuming2 that only a severely restricted budget of evaluations is feasible.
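As a minimal illustration (ours, not part of the paper), the following sketch extracts the non-dominated points from a list of (test error, training time) pairs, both to be minimized:

```python
# Hedged sketch (ours): Pareto dominance and the non-dominated front in 2-D.
def dominates(a, b):
    # a dominates b: at least as good in all objectives, strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: the second point is dominated by the first and drops out.
print(pareto_front([(0.05, 120.0), (0.05, 300.0), (0.20, 4.0)]))
# -> [(0.05, 120.0), (0.20, 4.0)]
```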

Hence, instead of an evolutionary technique, which usually assumes that many objective evaluations are possible, we employ the ParEGO algorithm—a model-based multi-objective optimization algorithm—to approximate the Pareto front. Model-based optimizers are based on the general idea of the single-objective efficient global optimization (EGO) algorithm (Jones et al. 1998), and constitute a class of techniques specifically targeted for expensive optimization problems.

The EGO algorithm The EGO algorithm works by iteratively fitting and optimizing a surrogate model. The idea is to approximate the expensive objective with a regression model, based on the data of all previous evaluations, and to optimize this cheap model instead in order to time-efficiently generate a new, promising point for a real objective evaluation. A Kriging model (Gaussian process) is a standard choice for the surrogate. Pseudo-code for EGO is given in Algorithm 1.
EGO is a global optimizer. It avoids getting stuck in local minima by optimizing the expected improvement (EI) instead of the surrogate model’s mean response. The EI is defined by
$$\begin{aligned} EI(\mathbf x) = \big ( y_{min} - \hat{f} (\mathbf x ) \big ) \cdot \Phi \left( \frac{y_{min} - \hat{f} (\mathbf x )}{\hat{s} (\mathbf x )} \right) + \hat{s}(\mathbf x ) \cdot \phi \left( \frac{y_{min} - \hat{f} (\mathbf x )}{\hat{s} (\mathbf x )} \right) \end{aligned}$$
where \(y_{min} = \min (\big \{y_1, y_2, y_3, \ldots \big \})\), \(\phi \) and \(\Phi \) are the density and cumulative distribution function of the standard normal distribution, respectively, and \(\hat{f}(\mathbf x )\) and \(\hat{s}(\mathbf x )\) are the estimated mean response and standard deviation of the model at \(\mathbf x\). The stopping criterion in Algorithm 1 is usually a prefixed budget of (expensive) function evaluations or a minimal threshold for the expected improvement, or a combination of both. Such an algorithm can be directly applied for efficient single-objective tuning of machine learning models, for an application to SVM tuning see, e.g., Koch et al. (2012).
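The following short Python sketch (ours, not the mlrMBO implementation) evaluates this criterion from the surrogate model's predicted mean \(\hat{f}(\mathbf x )\) and standard deviation \(\hat{s}(\mathbf x )\); it is the acquisition function optimized in each iteration of Algorithm 1.

```python
# Hedged sketch (ours): expected improvement for a minimization problem.
import numpy as np
from scipy.stats import norm

def expected_improvement(y_min, f_hat, s_hat):
    s_hat = np.maximum(s_hat, 1e-12)      # avoid division by zero at known points
    z = (y_min - f_hat) / s_hat
    return (y_min - f_hat) * norm.cdf(z) + s_hat * norm.pdf(z)
```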
The ParEGO algorithm ParEGO works very similarly to EGO, but adapts it to the MOO setting. To approximate the Pareto front, it considers the augmented Chebyshev function
$$\begin{aligned} \tilde{f} (\mathbf x ) = \max _j \big ( \lambda _j \cdot f_j (\mathbf x ) \big ) + \rho \sum _j \lambda _j \cdot f_j (\mathbf x ) \end{aligned}$$
in each iteration, where \(\rho \) is a small positive number (set to 0.05, as in Knowles 2006) and \(\mathbf \lambda \) is a weight vector (with \(0 \le \lambda _j \le 1\) and \(\sum _j \lambda _j = 1\)), sampled anew in each iteration. This scalarization essentially reduces the multi-objective criterion to a single one and its non-linear part ensures that non-convex parts of the Pareto front can be reached. Hence, in each iteration of Algorithm 1, ParEGO constructs a random \(\mathbf \lambda \), the scalar costs \(\tilde{f}\) of all design points are computed, a single model is fitted to \(\mathcal D\) and these costs, and a new point is proposed by regular EI optimization. Since ParEGO is missing a proper stopping criterion, it is usually run for a fixed number of function evaluations. We parallelize the original ParEGO algorithm by stratified sampling of multiple weight vectors in each iteration (see Horn et al. 2015).
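A hedged sketch (ours, not the mlrMBO code) of this scalarization step follows; the objectives are assumed to be normalized to [0, 1], and the simple simplex sampling of \(\mathbf \lambda \) is a simplification of the discrete weight set used by Knowles (2006).

```python
# Hedged sketch (ours): augmented Chebyshev scalarization as used by ParEGO.
import numpy as np

def augmented_chebyshev(F, lam, rho=0.05):
    # F: (n_points, n_objectives) array of normalized objective values,
    # lam: weight vector with lam >= 0 and sum(lam) == 1
    weighted = F * lam
    return weighted.max(axis=1) + rho * weighted.sum(axis=1)

def sample_weights(n_objectives, rng):
    # draw a random weight vector on the simplex, anew in each iteration
    w = rng.exponential(size=n_objectives)
    return w / w.sum()
```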
Table 2

Overview of the data sets

| Data set | \(\#\) points | \(\#\) features | Coerced classes | Class ratio (%) | Sparsity (%) |
|---|---|---|---|---|---|
| arthrosis | 262,142 | 178 |  | 46 | 99.99 |
| aXa | 36,974 | 123 |  | 24 | 8.21 |
| cod-rna | 343,564 | 8 |  | 39 | 99.99 |
| covtype | 581,012 | 54 |  | 49 | 22 |
| ijcnn1 | 141,691 | 22 |  | 10 | 59.09 |
| mnist | 70,000 | 780 | \(\{0, 3, 6, 8, 9\}\) vs. \(\{1, 2, 4, 5, 7\}\) | 50 | 19.24 |
| poker | 1,025,010 | 10 | \(\{0\}\) vs. \(\{1, \ldots , 9\}\) | 50 | 100 |
| protein | 24,387 | 357 | \(\{0\}\) vs. \(\{ 1, 2\}\) | 46 | 28.2 |
| shuttle | 58,000 | 9 | \(\{1\}\) vs. \(\{2, \ldots , 7\}\) | 79 | 99.76 |
| spektren | 175,090 | 22 |  | 56 | 100 |
| vehicle | 98,528 | 100 | \(\{1, 2\}\) vs. \(\{3\}\) | 50 | 100 |
| wXa | 34,780 | 300 |  | 3 | 4.81 |

The columns describe the number of data points, the dimension of each point, the classes we merged to obtain a binary problem, the percentage size of the positive class and fraction of non-zero features in the data set.

5 Experimental setup

Data sets We selected several well-known and publicly available benchmark data sets as well as two non-public data sets for our study, see Table 2. For reproducibility, we made sure that all public data sets are available on the OpenML platform3 (van Rijn et al. 2013). We scaled each feature to unit variance, except for binary features.4 When a data set consists of multiple files (e.g., train, test, and validation), we merged these before scaling. As the ‘adult’ and ‘web’ data sets were provided in multiple different splits,5 we merged these and named them ‘aXa’ and ‘wXa’, respectively. We made sure that merging the different splits for each data set did not introduce duplicate data points, except for cod-rna, where already the training set had duplicate points. For mnist, poker, protein, shuttle, and (SensIT) vehicle, we coerced the data to obtain two roughly equally sized classes.
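As an illustration of this preprocessing (ours, not the authors' scripts), the sketch below scales each non-binary feature to unit variance without centering, which preserves sparsity (cf. footnote 4); the {0, 1} check for binary features is an assumption.

```python
# Hedged sketch (ours): per-feature scaling to unit variance without centering.
import numpy as np

def scale_to_unit_variance(X):
    X = X.astype(float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        if set(np.unique(col)) <= {0.0, 1.0}:   # leave binary features untouched (assumed coding)
            continue
        sd = col.std()
        if sd > 0.0:
            X[:, j] = col / sd
    return X
```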

Software packages In general, we used the original software packages found online, see Table 3. As we did not find a kernelized Pegasos solver, we reimplemented it in the Shark library (Igel et al. 2008). We tuned the solvers' approximation parameters as described in Table 1; the SVM parameters C and \(\gamma \) were each optimized over \(2^{[-15, 15]}\). All further settings were left at their default values, except for SVMperf, where the recommended settings from the website were used: '--i 2 -w 9 --b 0'. The option to use a bias term was turned off for all solvers except for LIBSVM, where such an option does not exist. The kernel cache size was set to 1024 MB for all SVM solvers, except for BSGD and SVMperf, where no such option exists.

Evaluation All data sets were randomly split into training, validation and test sets with a ratio of 2:1:1. During parameter optimization, every solver was trained on the training set and evaluated on the validation set; the resulting performance values were passed to ParEGO. After the ParEGO optimization was done, the resulting Pareto front was evaluated on the test set. These test errors were used as estimates of the prediction error. Since the models learned during the ParEGO optimization were not saved, they had to be retrained for the test evaluations. Hence the execution times also differ between the validation and the test evaluations. The test evaluations may even have introduced new non-dominated points, since both training time and test error are stochastic quantities. All experiments were run only once; multiple runs were not feasible due to time restrictions on our cluster.
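A minimal sketch (ours) of the 2:1:1 split used here:

```python
# Minimal sketch (ours): random 2:1:1 split into train/validation/test indices.
import numpy as np

def split_2_1_1(n, rng):
    idx = rng.permutation(n)
    n_train, n_val = n // 2, n // 4
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```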

ParEGO settings For multi-objective parameter optimization, we used the ParEGO implementation found in mlrMBO.6 The initial design size was 20 points, and we performed 10 sequential iterations with 20 points proposed in parallel. Hence, 220 SVMs were trained per solver and data set. Two independent ParEGO runs were conducted for each combination, one on the full data set and one with subsampling enabled, tuning the subsampling rate as an additional parameter.

Benchmarking and parallel environment All our experiments were conducted on a distributed cluster, the LIDO cluster located at TU Dortmund (Germany). It offers roughly 400 nodes and 3500 cores. Each SVM training run was given a walltime of 8 hours. We used the software package “BatchExperiments” for distributed computing in R (Bischl et al. 2015) to structure our benchmark experiments.

6 Results and discussion

In this section we present selected results of our experiments with full and subsampled data. All reported error rates correspond to errors on the test sets. We discuss effects of the approximation parameters and examine the question of whether and when any of the solvers manages to outperform the standard solver LIBSVM. Finally, we give recommendations on the choice of solver.
Fig. 1

Pareto fronts for the targets test error and training time on the mnist data set without subsampling. Note that the plot is cut off at the right border

Due to space constraints we present only four plots for individual ParEGO runs.7 Results for the full mnist and poker data sets are found in Figs. 1 and 2, respectively. Corresponding results with subsampling are presented in Figs. 3 and 4. The mnist results represent typical behavior, which is comparable to the results on several other data sets, while results on the poker data, the largest data set in our study, are rather atypical. Naturally, the best reachable test error differs between data sets. The mnist data is a rather easy data set where test errors as low as \(2.5~\%\) can be reached with Gaussian kernel SVMs. However, on the poker data set a test error of around \(39~\%\) already indicates good performance. In Fig. 5 we show the normalized Pareto fronts of all data sets over all solvers with subsampling both disabled and enabled. Table 4 shows the ratio of the hypervolume dominated by the two best solvers, LIBSVM and SVMperf, to the total dominated hypervolume.

Most of our results show a general trade-off: higher accuracy correlates with longer training times (cf. Fig. 1). This observation is in agreement with the general expectation that given more time, solvers are able to find better solutions. Nonetheless, the effect of the SVM-parameters \(\gamma \) and C on both accuracy and execution time should not be underestimated, and it is necessary to tune both of them to reach the Pareto front.

The upper left parts of the Pareto fronts often contain solutions generated by LIBSVM. Since LIBSVM aims at a high-precision solution, it is plausible that it is the most accurate, but also one of the slower solvers. It reaches near-top accuracies on all data sets. On some data sets a few of the approximative solvers reached a slightly better accuracy than LIBSVM. This is the case for LASVM, which is often found on the top left part of the Pareto front. In contrast to the other solvers it can usually match or outpace the accuracy of LIBSVM, at the price of increased training times, which usually exceed those of LIBSVM.

Likewise, kernelized Pegasos is another very slow solver. While it also manages to produce models with high accuracy, these most often do not match the accuracy of the best models of LASVM and LIBSVM while requiring much higher training times. Therefore, solutions generated by Pegasos are nearly never found on the Pareto front. The points are indeed so far from the front that it can be considered the worst of all tested solvers. BSGD, the sibling of kernel Pegasos on a budget, is found on the Pareto front only in very rare cases. It is not found to be competitive.

On several data sets the lower right part of the Pareto front contains solutions from the solvers CVM and BVM. While both often show a rather clear and desirable trade-off, as on mnist, both can fail completely on other data sets, e.g., on poker. Their performance is unstable; we did not find a rule explaining this behavior.

As one can see on the bottom right end of the front, LLSVM is the fastest of all tested solvers. It nearly always generates a solution within seconds, but these classifiers are only slightly better than constant prediction of the majority class. Even the best solutions of LLSVM were not able to keep up with the accuracy of other solvers.
Fig. 2

Pareto fronts for the targets test error and training time on the poker data set without subsampling. Note that the plot is cut off at the right border

Due to computational limits of our cluster we had to set a maximum wall time of eight hours for each error estimation. Within this time limit LIBSVM, LASVM and Pegasos (without subsampling) did not finish at all on large data sets like poker. This is reflected in the missing points for these solvers in Fig. 2.

Approximation parameters All of the studied solvers introduce one or two approximation parameters, which seemingly give the user some direct control over the approximation quality and allow for a trade-off between accuracy and speed.
Fig. 3

Pareto fronts for the targets test error and training time on the mnist data set with subsampling. Note that the plot is cut off at the right border

While our results clearly show these trade-offs most often (except for LIBSVM), it is not clear how to choose these parameters a priori. In the literature, usually either a heuristic for finding a good setting or a specific recommendation is given. Even if results of experiments about the influence of the parameters on specific data sets are available, these might not carry over to other data sets. For example, consider the sparsity budget parameter k of SVMperf. In Joachims and Yu (2009) the (in some sense) optimal k for five data sets are stated. As k never exceeds 500 there, the general suggestion is to use \(k=500\); the settings recommended on the website, however, work with \(k = 1000\), while the SVMperf software comes with a default of \(k = 500\). The author also states on his website that “the larger k, the better the approximation quality but the longer training and testing times”.8 While this is certainly true for a fixed pair of \((C, \gamma )\), this statement cannot be made in general for the Pareto optimal points, where a lower k in combination with tuned \((C, \gamma )\) can yield better accuracy and lower training times (this can be seen e.g. on mnist, where a point with \(k=42\) dominates a point with \(k=1360\)). As the optimal \((C, \gamma )\) is not known a priori and may vary with k, one is forced to perform parameter tuning for k as well.

Contrary to our expectation, the accuracy parameter \(\epsilon \) of LIBSVM did not yield a clean trade-off. Instead, the influence of \(\epsilon \) on the accuracy seems rather unsystematic, most probably because of its low overall impact. This is in contrast to the effect of the subsampling rate, which has a huge impact on the trade-offs of LIBSVM. We discuss the subsampling rate next.

Subsampling We implemented subsampling by randomly permuting the stored training set in advance and loading the number of required data points starting from the beginning of the file. The subsampling rate can be expected to generate a rather clear trade-off between prediction accuracy and computation time. This can indeed be observed for nearly all solvers. In general solutions based on subsampling have lower training times at the cost of reduced accuracy.
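A hedged sketch (ours, not the authors' actual scripts) of this mechanism: the training file is permuted once, and each run then only reads the required prefix.

```python
# Hedged sketch (ours): permute the stored training set once, then read prefixes.
import random

def permute_file(in_path, out_path, seed=0):
    with open(in_path) as f:
        lines = f.readlines()
    random.Random(seed).shuffle(lines)      # one fixed random permutation
    with open(out_path, "w") as f:
        f.writelines(lines)

def load_prefix(path, n_required):
    # a run with subsampling rate r only needs the first round(r * n) lines
    with open(path) as f:
        return [line for _, line in zip(range(n_required), f)]
```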

The effect is very distinct and rather consistent across many data sets, especially in case of LIBSVM (see Figs. 3, 4). However, there are exceptions. For example, subsampling allows for better models and thus more accurate solutions for SVMperf, LLSVM, CVM and BVM on the poker problem. In this case, the best solution for BVM with subsampling achieves an error rate of 0.39, while the best solution without subsampling only achieves 0.46 and needs twice the training time. We suspect that the poker data set is quite redundant, and subsampling therefore has a positive impact for these solvers. This effect can also be seen on the covtype data set, but here it can be explained by the HPC cluster enforcing a wall time of 8 h, which effectively cuts off some otherwise well-tuned solutions, in particular without subsampling (a subsampling rate close to one).
Fig. 4

Pareto fronts for the targets test error and training time on the poker data set with subsampling. Note that the plot is cut off at the right border

Which solver to choose Naturally, one is inclined to put a ranking on the performance of the examined solvers. For many reasons this is not possible. First and foremost, no single solver dominates the Pareto front completely, not even on a single data set. Also, unavoidable design decisions like the selection of benchmark problems, the number of repetitions, and the low budget of the ParEGO optimizer do not allow for a final ranking of the performance of the solvers. Nonetheless, the set of all Pareto fronts of the experiment with subsampling allows us to make slightly weaker qualitative statements. Nearly all of the Pareto fronts are dominated by LIBSVM and SVMperf, see Fig. 5.

To substantiate this observation we compute the ratio of the dominated hypervolume of the two solvers LIBSVM and SVMperf to the total dominated hypervolume of all solvers, see Table 4. Note that we calculated the hypervolume on the same scale as Fig. 5 except for the square root transformation.

The maximal value of 1.0 in this table indicates that the Pareto front contains only points generated by the considered solvers. The table shows that LIBSVM and SVMperf nearly always cover the Pareto front, as most values exceed 0.98. The combined contribution of all other solvers to the front is very small. The result on the spektren data set is an exception to this general picture, where a larger part of the Pareto front is not covered by the two solvers.
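A hedged sketch (ours) of the 2-D dominated hypervolume underlying these ratios; it assumes the points have already been transformed as described in the table footer (log-scaled runtime, error normalized to [0, 1]) and that the reference point is given accordingly. The front variable names in the usage comment are illustrative.

```python
# Hedged sketch (ours): dominated hypervolume of a 2-D front (both objectives
# minimized) with respect to a reference point, as used for the ratios in Table 4.
def hypervolume_2d(points, ref):
    # keep only points that improve on the reference in both objectives
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, best_y = 0.0, ref[1]
    for x, y in pts:                    # sweep in order of increasing first objective
        if y < best_y:
            hv += (ref[0] - x) * (best_y - y)
            best_y = y
    return hv

# ratio = hypervolume_2d(front_libsvm_svmperf, ref) / hypervolume_2d(front_all, ref)
```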
Fig. 5

Joint Pareto fronts over all solvers for each data set. For better readability we focus on the most relevant part of each Pareto front. For each data set we select an interval of relevant (not too high) test errors. The lower limit was chosen as the best reached test error; the upper limit was chosen so that the interesting decline of the front is shown and the rather unimportant lower tail is cut off. Afterwards each interval is scaled to [0, 1]; the original intervals are given in the figure behind the data set names. Solutions on the Pareto front that were slower than the slowest Pareto-optimal LIBSVM run were cut off, as they can be regarded as being too slow. We removed BSGD and Pegasos from this plot for better readability, since their contributions to the Pareto fronts are not substantial. The scale of the error axis is transformed with a square root for better readability. The runtime is on a log-scale, since it is natural to only look at its magnitude. Notice that the filled circles and squares corresponding to LIBSVM and SVMperf make up the lion’s share of all fronts

Table 4

Dominated hypervolume ratio of all data sets

|  | arthrosis | aXa | cod-rna | covtype | ijcnn1 | mnist |
|---|---|---|---|---|---|---|
| LIBSVM+SVMperf | 0.9390 | 0.9755 | 0.9837 | 1.0000 | 0.9278 | 0.9911 |

|  | poker | protein | shuttle | spektren | vehicle | wXa |
|---|---|---|---|---|---|---|
| LIBSVM+SVMperf | 0.9970 | 0.9973 | 0.9896 | 0.7952 | 0.9957 | 0.9739 |

The hypervolume is calculated only for the relevant part of the Pareto fronts, while a log-scale is applied to the running time and the error is scaled to [0, 1]. Except for the root transformation of the error, this matches Fig. 5. As reference point we used the walltime of our cluster for the running time and 1.1 for the normalized error.

Though we cannot declare a clear winner, our large set of results gives good insight into the general performance of the solvers. In particular, our study shows that on big data sets like poker, where time is usually more important than accuracy, both SVMperf and LIBSVM with subsampling perform very well, as can be seen very clearly in Fig. 4. This is also reflected by the fact that most Pareto-optimal solutions from solvers other than SVMperf and LIBSVM are found in the lower half of Fig. 5, corresponding to medium size problems.

On the other hand, if accuracy is most important, which is usually the case for small to medium data sets, then no single solver significantly outperforms LIBSVM. In nearly all cases LIBSVM is the most accurate solver, and in the few cases where LASVM obtains more accurate results, LIBSVM is still faster.

We conclude that a combination of LIBSVM and SVMperf is sufficient to achieve very good results in most cases. On large data sets subsampling should be considered as a way to speed up training.

7 Conclusions and future work

In this paper we have presented an approach for the comparison of approximate SVM solvers. We have applied multi-objective parameter tuning with respect to classification error and training time. Solvers were compared by analyzing their Pareto fronts and their contributions to the combined Pareto front of all solvers.

Although it is nearly impossible to give a data set-independent ranking of the solvers, we were able to characterize most of them as follows: LIBSVM, the de-facto standard solver, reliably achieves high accuracy but is quite slow on large scale data. When combined with subsampling it turns out to be one of the best available solvers. This result comes as a surprise, since one could expect that dedicated approximation methods do systematically better than subsampling. SVMperf is promising on large data sets, especially if training time is more important than solution accuracy. LLSVM is a rather fast solver. Sometimes it lacks accuracy; however, for large data sets it is often competitive. BVM and CVM sometimes reach good results, but do not do so consistently. Pegasos and its budgeted counterpart BSGD, as well as LASVM, are not competitive.

If the training data set grows too large then training on the full data set becomes inefficient, even with solvers that are specifically designed for this situation, e.g., by means of low rank or budget techniques. As long as the data set is sufficiently redundant, simple subsampling turns out to be a powerful acceleration technique. Of course, the subsampling rate (just like the other algorithm parameters) must be subject to tuning.

We can answer our core questions given in the introduction as follows:
  • For smaller data sets where runtime is of minor concern, LIBSVM is the method of choice for its high accuracy. Several options exist when LIBSVM training times become excessive. Depending on the type of data it can pay off to switch to SVMperf, to restrict training to a subsample, or both. LIBSVM with the option to subsample the data also turns out to be a surprisingly good baseline strategy. When data sets grow extremely large, SVMperf also profits from subsampling.

  • No solver can consistently outperform LIBSVM with subsampling.

  • Subsampling allows for a clear trade-off between accuracy and training time for nearly all solvers, and can help especially on large scale data sets.

We plan to extend our study in multiple ways. Firstly, we will extend the comparison to multi-class and regression problems. Secondly, we want to integrate even more solvers and data sets. In particular we aim to analyze divide-and-conquer strategies in depth, e.g., based on ensemble methods. Finally, we will use the obtained insights to improve existing solvers and devise new ones.

Footnotes

  1. In our actual experiments we will focus on the case of two objectives, namely SVM prediction error and training time.

  2. As an SVM has to be fitted on a large data set.

  3.

  4. Note that we do not normalize to zero mean, as this might destroy sparsity.

  5.

  6.

  7. Refer to http://largescalesvm.de/htmlplots/ for the extensive results and plots of all solvers on all data sets.

  8.

Acknowledgments

We acknowledge support by the Mercator Research Center Ruhr, under Grant Pr-2013-0015 Support-Vektor-Maschinen für extrem große Datenmengen and partial support by the German Research Foundation (DFG) within the Collaborative Research Centers SFB 823 Statistical modelling of nonlinear dynamic processes, Project C2.

References

  1. Bischl B, Lang M, Mersmann O, Rahnenführer J, Weihs C (2015) BatchJobs and BatchExperiments: abstraction mechanisms for using R in batch environments. J Stat Softw 64(11):1–25. http://www.jstatsoft.org/v64/i11/
  2. Bordes A, Ertekin S, Weston J, Bottou L (2005) Fast kernel classifiers with online and active learning. J Mach Learn Res 6:1579–1619
  3. Bottou L, Lin C-J (2007) Support vector machine solvers. In: Bottou L, Chapelle O, DeCoste D, Weston J (eds) Large scale kernel machines. MIT Press, Cambridge, MA, pp 301–320. http://leon.bottou.org/papers/bottou-lin-2006
  4. Bousquet O, Bottou L (2008) The tradeoffs of large scale learning. In: Platt JC, Koller D, Singer Y, Roweis ST (eds) Advances in neural information processing systems, vol 20. Curran Associates Inc, Red Hook, NY, pp 161–168. http://papers.nips.cc/paper/3323-the-tradeoffs-of-large-scale-learning.pdf
  5. Chang C-C, Lin C-J (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2(3):27:1–27:27. doi:10.1145/1961189.1961199
  6. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
  7. Djuric N, Lan L, Vucetic S, Wang Z (2013) BudgetedSVM: a toolbox for scalable SVM approximations. J Mach Learn Res 14:3813–3817
  8. Ehrgott M (2013) Multicriteria optimization, vol 491. Springer Science & Business Media, Berlin
  9. Fan R-E, Chang K-W, Hsieh C-J, Wang X-R, Lin C-J (2008) LIBLINEAR: a library for large linear classification. J Mach Learn Res 9:1871–1874
  10. Fine S, Scheinberg K (2002) Efficient SVM training using low-rank kernel representations. J Mach Learn Res 2:243–264
  11. Glasmachers T, Igel C (2006) Maximum-gain working set selection for support vector machines. J Mach Learn Res 7:1437–1466
  12. Graf HP, Cosatto E, Bottou L, Durdanovic I, Vapnik V (2004) Parallel support vector machines: the cascade SVM. In: NIPS, pp 521–528
  13. Horn D, Wagner T, Biermann D, Weihs C, Bischl B (2015) Model-based multi-objective optimization: taxonomy, multi-point proposal, toolbox and benchmark. In: Evolutionary multi-criterion optimization, Lecture notes in computer science, vol 9018. Springer International Publishing, Cham, pp 64–78
  14. Igel C, Heidrich-Meisner V, Glasmachers T (2008) Shark. J Mach Learn Res 9:993–996
  15. Joachims T (1998) Making large-scale SVM learning practical. In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods—support vector learning, chapter 11. MIT Press, Cambridge, pp 169–184
  16. Joachims T, Yu C-NJ (2009) Sparse kernel SVMs via cutting-plane training. Mach Learn 76(2–3):179–193
  17. Jones DR, Schonlau M, Welch WJ (1998) Efficient global optimization of expensive black-box functions. J Glob Optim 13(4):455–492
  18. Knowles J (2006) ParEGO: a hybrid algorithm with online landscape approximation for expensive multiobjective optimization problems. IEEE Trans Evol Comput 10(1):50–66
  19. Koch P, Bischl B, Flasch O, Bartz-Beielstein T, Weihs C, Konen W (2012) Tuning and evolution of support vector kernels. Evol Intell 5(3):153–170
  20. Lin C-J (2001) Linear convergence of a decomposition method for support vector machines. Technical report
  21. Nandan M, Khargonekar PP, Talathi SS (2013) Fast SVM training using approximate extreme points. arXiv:1304.1391
  22. Platt J (1998) Fast training of support vector machines using sequential minimal optimization. In: Schölkopf B, Burges C, Smola A (eds) Advances in kernel methods—support vector learning, chapter 12. MIT Press, Cambridge, pp 185–208
  23. Shalev-Shwartz S, Singer Y, Srebro N, Cotter A (2011) Pegasos: primal estimated sub-gradient solver for SVM. Math Program 127(1):3–30
  24. Steinwart I (2003) Sparseness of support vector machines. J Mach Learn Res 4:1071–1105
  25. Tsang IW, Kwok JT, Cheung P-M, Cristianini N (2005) Core vector machines: fast SVM training on very large data sets. J Mach Learn Res 6:363–392
  26. Tsang IW, Kocsor A, Kwok JT (2007) Simpler core vector machines with enclosing balls. In: Proceedings of the 24th international conference on machine learning. ACM, New York, NY, USA, pp 911–918
  27. van Rijn JN, Bischl B, Torgo L, Gao B, Umaashankar V, Fischer S, Winter P, Wiswedel B, Berthold MR, Vanschoren J (2013) OpenML: a collaborative science platform. In: Machine learning and knowledge discovery in databases. Springer, Berlin, Heidelberg, pp 645–649
  28. Wang Z, Crammer K, Vucetic S (2012) Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training. J Mach Learn Res 13:3103–3131
  29. Williams C, Seeger M (2001) Using the Nyström method to speed up kernel machines. In: Advances in neural information processing systems, vol 13. MIT Press, Cambridge, pp 682–688
  30. Zhang K, Lan L, Wang Z, Moerchen F (2012) Scaling up kernel SVM on limited resources: a low-rank linearization approach. In: International conference on artificial intelligence and statistics, pp 1425–1434

Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Fakultät Statistik, Technische Universität Dortmund, Dortmund, Germany
  2. Ruhr-Universität Bochum, Bochum, Germany
  3. Department of Statistics, LMU München, Munich, Germany
