Abstract
Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task. When working with massive data, it is desirable to perform stochastic optimization in parallel. Unfortunately, many existing stochastic optimization algorithms cannot be parallelized efficiently. In this paper we show that one can rewrite the regularized risk minimization problem as an equivalent saddle-point problem, and propose an efficient distributed stochastic optimization (DSO) algorithm. We prove the algorithm’s rate of convergence; remarkably, our analysis shows that the algorithm scales almost linearly with the number of processors. We also verify with empirical evaluations that the proposed algorithm is competitive with other parallel, general purpose stochastic and batch optimization algorithms for regularized risk minimization.
1 Introduction
Regularized risk minimization is a well-known paradigm in machine learning:
Here, we are given m training data points \(\mathbf {x}_{i} \in \mathbb {R}^{d}\) and their corresponding labels \(y_{i}\), while \(\mathbf {w}\in \mathbb {R}^{d}\) is the parameter of the model. Furthermore, \(w_{j}\) denotes the j-th component of \(\mathbf {w}\), while \(\phi _{j}\left( \cdot \right) \) is a convex function which penalizes complex models. \(\ell \left( \cdot , \cdot \right) \) is a loss function, which is convex in \(\mathbf {w}\). Moreover, \(\left\langle \cdot ,\cdot \right\rangle \) denotes the Euclidean inner product, and \(\lambda > 0\) is a scalar which trades off the average loss against the regularizer. For brevity, we will use \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) to denote \(\ell \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle , y_{i}\right) \).
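For reference, a plausible explicit form of (1), consistent with the definitions above (a sketch; the paper's own display is not reproduced here):

\(\min _{\mathbf {w}} P\left( \mathbf {w}\right) = \lambda \sum _{j=1}^{d} \phi _{j}\left( w_{j}\right) + \frac{1}{m} \sum _{i=1}^{m} \ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) .\)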
Many well-known models can be derived by specializing (1). For instance, if \(y_{i} \in \left\{ \pm 1\right\} \), then setting \(\phi _{j}(w_{j}) = \frac{1}{2}w_{j}^{2}\) and \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) = \max \left( 0, 1 - y_{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) recovers binary linear support vector machines (SVMs) [23]. On the other hand, using the same regularizer but changing the loss function to \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) = \log \left( 1 + \exp \left( -y_{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \right) \) yields regularized logistic regression [11]. Similarly, setting \(\phi _{j}\left( w_{j}\right) =\left| w_{j}\right| \) leads to sparse learning such as LASSO [11] with \(\ell _{i}\left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) =\frac{1}{2} \left( y_i -\left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) ^{2}\).
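As a concrete illustration of these specializations (a minimal Python sketch; the function names are ours, not from the paper's code), the loss/regularizer pairs above can be written as plain functions of the margin \(u = \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \):

```python
import numpy as np

def hinge_loss(u, y):          # binary linear SVM
    return max(0.0, 1.0 - y * u)

def logistic_loss(u, y):       # regularized logistic regression
    return np.log1p(np.exp(-y * u))

def squared_loss(u, y):        # LASSO combines this loss with the L1 regularizer
    return 0.5 * (y - u) ** 2

def l2_reg(w_j):               # phi_j(w_j) = w_j^2 / 2
    return 0.5 * w_j ** 2

def l1_reg(w_j):               # phi_j(w_j) = |w_j|
    return abs(w_j)
```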
A number of specialized as well as general purpose algorithms have been proposed for minimizing the regularized risk. For instance, if both the loss and the regularizer are smooth, as is the case with logistic regression, then quasi-Newton algorithms such as L-BFGS [17] have been found to be very successful. On the other hand, for smooth regularizers but non-smooth loss functions, Teo et al. [27] proposed a bundle method for regularized risk minimization (BMRM). Another popular first-order solver is the alternating direction method of multipliers (ADMM) [4]. These optimizers belong to the broad class of batch minimization algorithms; that is, in order to perform a parameter update, at every iteration they compute the regularized risk \(P(\mathbf {w})\) as well as its gradient
where \(\mathbf {e}_{j}\) denotes the j-th standard basis vector. Both \(P\left( \mathbf {w}\right) \) as well as the gradient \(\nabla P\left( \mathbf {w}\right) \) take O(md) time to compute, which is computationally expensive when m, the number of data points, is large. Batch algorithms can be efficiently parallelized, however, by exploiting the fact that the empirical risk \(\frac{1}{m} \sum _{i=1}^{m} \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \) as well as its gradient \(\frac{1}{m} \sum _{i=1}^{m} \nabla \ell _{i} \left( \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle \right) \mathbf {x}_{i}\) decompose over the data points, and therefore one can compute \(P\left( \mathbf {w}\right) \) and \(\nabla P\left( \mathbf {w}\right) \) in a distributed fashion [7].
Batch algorithms, unfortunately, are known to be unfavorable for large scale machine learning both empirically and theoretically [3]. It is now widely accepted that stochastic algorithms which process one data point at a time are more effective for regularized risk minimization. In a nutshell, the idea here is that (2) can be stochastically approximated by
when i is chosen uniformly at random from \(\left\{ 1,{\ldots },m\right\} \). Note that \(\mathbf {g}_{i}\) is an unbiased estimator of the true gradient \(\nabla P\left( \mathbf {w}\right) \); that is, \(\mathbb {E}_{i\in \left\{ 1,{\ldots }, m\right\} } \left[ \mathbf {g}_{i}\right] = \nabla P\left( \mathbf {w}\right) \). Now we can replace the true gradient by this stochastic gradient to approximate a gradient descent update as
where \(\eta \) is a step size parameter. Computing \(\mathbf {g}_i\) only takes O(d) effort, which is independent of m, the number of data points. Bottou and Bousquet [3] show that stochastic optimization is asymptotically faster than gradient descent and other second-order batch methods such as L-BFGS for regularized risk minimization.
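A minimal serial sketch of the stochastic updates (3)-(4), specialized to the linear SVM instance above (illustrative Python; the function name and the fixed step size are our choices):

```python
import numpy as np

def sgd_epoch(X, y, w, lam, eta, rng):
    # One pass of update (4) over the data in random order; each step costs O(d).
    for i in rng.permutation(len(y)):
        margin = y[i] * X[i].dot(w)
        # subgradient of the hinge loss at <w, x_i>
        grad_loss = -y[i] * X[i] if margin < 1.0 else np.zeros_like(w)
        g_i = grad_loss + lam * w          # stochastic gradient (3) with phi_j(s) = s^2/2
        w -= eta * g_i                     # update (4)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.sign(rng.standard_normal(100))
w = sgd_epoch(X, y, np.zeros(5), lam=1e-3, eta=0.1, rng=rng)
```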
However, a drawback of update (4) is that it is not easy to parallelize anymore. Usually, the computation of \(\mathbf {g}_{i}\) in (3) is a very lightweight operation for which parallel speed-up can rarely be expected. On the other hand, one cannot execute multiple updates of (4) simultaneously, since computing \(\mathbf {g}_{i}\) requires reading the latest value of \(\mathbf {w}\), while updating (4) requires writing to the components of \(\mathbf {w}\). The problem is even more severe in distributed memory systems, where the cost of communication between processors is significant.
Existing parallel stochastic optimization algorithms try to work around these difficulties in a somewhat ad-hoc manner (see Sect. 4). In this paper, we take a fundamentally different approach and propose a reformulation of the regularized risk (1), for which one can naturally derive a parallel stochastic optimization algorithm. Our technical contributions are:
-
We reformulate regularized risk minimization as an equivalent saddle-point problem, and show that it can be solved via a new distributed stochastic optimization (DSO) algorithm.
-
We prove an \(O\left( 1/\sqrt{T}\right) \) rate of convergence for DSO that is independent of the number of processors, and show theoretically that the total time required decreases almost linearly with the number of processors.
-
We verify with empirical evaluations that when used for training linear support vector machines (SVMs) or binary logistic regression models, DSO is comparable to general-purpose stochastic (e.g., Zinkevich et al. [33]) or batch (e.g., Teo et al. [27]) optimizers.
2 Reformulating Regularized Risk Minimization
We begin by reformulating the regularized risk minimization problem as an equivalent saddle-point problem. Towards this end, we first rewrite (1) by introducing an auxiliary variable \(u_i\) for each \(\mathbf {x}_{i}\):
By introducing Lagrange multipliers \(\alpha _i\) to eliminate the constraints, we obtain
Here \(\mathbf {u}\) denotes a vector whose components are \(u_{i}\). Likewise, \(\varvec{\alpha }\) is a vector whose components are \(\alpha _{i}\). Since the objective function (5) is convex and the constraints are linear, strong duality applies [5]. Therefore, we can switch the maximization over \(\varvec{\alpha }\) and the minimization over \(\mathbf {w}, \mathbf {u}\). Note that \(\min _{u_i} \alpha _i u_i + \ell _{i}(u_i)\) can be written as \(-\ell _{i}^{\star } (-\alpha _{i})\), where \(\ell _i^{\star }(\cdot )\) is the Fenchel-Legendre conjugate of \(\ell _{i}(\cdot )\) [5]. These transformations yield our formulation:
If we analytically minimize \(f(\mathbf {w}, \varvec{\alpha })\) in terms of \(\mathbf {w}\) to eliminate it, then we obtain the so-called dual objective, which is a function of \(\varvec{\alpha }\) alone. Moreover, any combination of \(\mathbf {w}^{*}\), a solution of the primal problem (1), and \(\varvec{\alpha }^{*}\), a solution of the dual problem, forms a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \) [5]. In other words, minimizing the primal, maximizing the dual, and finding a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \) are all equivalent problems.
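For concreteness, under the construction just described, and assuming that the multipliers \(\alpha _i\) are scaled by 1/m in the same way as the loss terms (the exact normalization in (6) may differ), the saddle-point objective takes the form

\(f\left( \mathbf {w}, \varvec{\alpha }\right) = \lambda \sum _{j=1}^{d} \phi _{j}\left( w_{j}\right) + \frac{1}{m} \sum _{i=1}^{m} \left( \alpha _{i} \left\langle \mathbf {w},\mathbf {x}_{i} \right\rangle - \ell _{i}^{\star }\left( -\alpha _{i}\right) \right) ,\)

and the problem is \(\min _{\mathbf {w}} \max _{\varvec{\alpha }} f\left( \mathbf {w}, \varvec{\alpha }\right) \).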
2.1 Stochastic Optimization
Let \(x_{ij}\) denote the j-th coordinate of \(\mathbf {x}_{i}\), and \(\varOmega _{i} := \left\{ j: x_{ij} \ne 0\right\} \) denote the non-zero coordinates of \(\mathbf {x}_{i}\). Similarly, let \(\bar{\varOmega }_{j} := \left\{ i: x_{ij} \ne 0\right\} \) denote the set of data points where the j-th coordinate is non-zero and \(\varOmega := \left\{ \left( i,j\right) : x_{ij} \ne 0\right\} \) denotes the set of all non-zero coordinates in the training dataset \(\mathbf {x}_{1},{\ldots }, \mathbf {x}_{m}\). Then, \(f(\mathbf {w}, \varvec{\alpha })\) can be rewritten as
where \(|\cdot |\) denotes the cardinality of a set. Remarkably, each component \(f_{i,j}\) in the above summation depends only on one component \(w_{j}\) of \(\mathbf {w}\) and one component \(\alpha _{i}\) of \(\varvec{\alpha }\). This allows us to derive an optimization algorithm which is stochastic in terms of both i and j. Let us define
Under the uniform distribution over \(\left( i,\,j\right) \in \varOmega \), one can easily see that \(\mathbf {g}_{i,\,j}\) is an unbiased estimate of the gradient of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \), that is, \(\mathbb {E}_{\left\{ \left( i,\,j\right) \in \varOmega \right\} } \left[ \mathbf {g}_{i,\,j}\right] = \left( \nabla _{\mathbf {w}} f\left( \mathbf {w}, \varvec{\alpha }\right) , -\nabla _{\varvec{\alpha }} \left( -f\left( \mathbf {w}, \varvec{\alpha }\right) \right) \right) \). Since we are interested in finding a saddle point of \(f\left( \mathbf {w}, \varvec{\alpha }\right) \), our stochastic optimization algorithm uses the stochastic gradient \(\mathbf {g}_{i,\,j}\) to take a descent step in \(\mathbf {w}\) and an ascent step in \(\varvec{\alpha }\) [19]:
Surprisingly, the time complexity of update (8) is independent of the size of the data; it is O(1). Compare this with the O(md) complexity of a batch update and the O(d) complexity of regular stochastic gradient descent.
Note that in the above discussion, we implicitly assumed that \(\phi _{j}\left( \cdot \right) \) and \(\ell _{i}^{\star }\left( \cdot \right) \) are differentiable. If that is not the case, then their derivatives can be replaced by sub-gradients [5]. Therefore, this approach can handle a wide range of regularized risk minimization problems.
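To make the per-coordinate update concrete, the following serial Python sketch specializes update (8) to linear SVM (\(\phi _j(s)=\frac{1}{2}s^2\) and hinge loss, for which \(\ell _i^{\star }(-\alpha _i)=-\alpha _i\) on \(\alpha _i \in [0,1]\) after folding the labels into the data). The normalizations by \(|\bar{\varOmega }_{j}|\) and \(|\varOmega _{i}|\) are our assumption for splitting the regularizer and the conjugate term across non-zeros so that summing \(f_{i,j}\) over \(\varOmega \) recovers \(f(\mathbf {w},\varvec{\alpha })\); the exact constants in (7)-(8) may differ.

```python
import numpy as np

def saddle_point_epoch(X, y, w, alpha, lam, eta, rng):
    # One pass of update (8) over all non-zeros, in random order.
    Xs = X * y[:, None]                                      # fold labels into the data
    m, d = Xs.shape
    nnz_col = np.maximum(np.count_nonzero(Xs, axis=0), 1)    # |Omega_bar_j|
    nnz_row = np.maximum(np.count_nonzero(Xs, axis=1), 1)    # |Omega_i|
    rows, cols = np.nonzero(Xs)                              # Omega
    for k in rng.permutation(len(rows)):
        i, j = rows[k], cols[k]
        x_ij = Xs[i, j]
        # partial derivatives of the assumed f_{i,j}(w_j, alpha_i)
        grad_w = lam * w[j] / nnz_col[j] + alpha[i] * x_ij / m
        grad_a = (x_ij * w[j] + 1.0 / nnz_row[i]) / m
        w[j] -= eta * grad_w                                    # descent in w_j
        alpha[i] = np.clip(alpha[i] + eta * grad_a, 0.0, 1.0)   # ascent in alpha_i
    return w, alpha
```

In the distributed algorithm of Sect. 3, each processor runs exactly this kind of inner loop, restricted to its active block of \(\varOmega \).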
3 Parallelization
The minimax formulation (6) not only admits an efficient stochastic optimization algorithm, but also allows us to derive a distributed stochastic optimization (DSO) algorithm. The key observation underlying DSO is the following: Given \(\left( i,\,j\right) \) and \(\left( i', j'\right) \) both in \(\varOmega \), if \(i \ne i'\) and \(j \ne j'\) then one can simultaneously perform update (8) on \((w_{j}, \alpha _{i})\) and \((w_{j'}, \alpha _{i'})\). In other words, the updates to \(w_{j}\) and \(\alpha _{i}\) are independent of the updates to \(w_{j'}\) and \(\alpha _{i'}\), as long as \(i \ne i'\) and \(j \ne j'\).
Before we formally describe DSO we would like to present some intuition using Fig. 1. Here we assume that we have 4 processors. The data matrix X is an \(m \times d\) matrix formed by stacking \(\mathbf {x}_{i}^{\top }\) for \(i=1,{\ldots },m\), while \(\mathbf {w}\) and \(\varvec{\alpha }\) denote the parameters to be optimized. The non-zero entries of X are marked by an \(\mathrm {x}\) in the figure. Initially, both parameters as well as rows of the data matrix are partitioned across processors as depicted in Fig. 1 (left); colors in the figure denote ownership e.g., the first processor owns a fraction of the data matrix and a fraction of the parameters \(\varvec{\alpha }\) and \(\mathbf {w}\) (denoted as \(\mathbf {w}^{\left( 1\right) }\) and \(\varvec{\alpha }^{\left( 1\right) }\)) shaded with red. Each processor samples a non-zero entry \(x_{ij}\) of X within the dark shaded rectangular region (active area) depicted in the figure, and updates the corresponding \(w_{j}\) and \(\alpha _{i}\). After performing updates, the processors stop and exchange coordinates of \(\mathbf {w}\). This defines an inner iteration. After each inner iteration, ownership of the \(\mathbf {w}\) variables and hence the active area change, as shown in Fig. 1 (right). If there are p processors, then p inner iterations define an epoch. Each coordinate of \(\mathbf {w}\) is updated by each processor at least once in an epoch. The algorithm iterates over epochs until convergence.
Four points are worth noting. First, since the active area of each processor does not share either row or column coordinates with the active area of other processors, as per our key observation above, the updates can be carried out by each processor in parallel without any need for intermediate communication with other processors. Second, we partition and distribute the data only once. The coordinates of \(\varvec{\alpha }\) are partitioned at the beginning and are not exchanged by the processors; only coordinates of \(\mathbf {w}\) are exchanged. This means that the cost of communication is independent of m, the number of data points. Third, our algorithm works in shared memory, distributed memory, and hybrid (multiple threads on multiple machines) architectures. Fourth, the \(\mathbf {w}\) parameter is distributed across multiple machines and there is no redundant storage, which makes the algorithm scale linearly in terms of space complexity. Compare this with the fact that most parallel optimization algorithms require each local machine to hold a copy of \(\mathbf {w}\).
To formally describe DSO, suppose p processors are available, and let \(I_{1},{\ldots }, I_{p}\) denote a fixed partition of the set \(\left\{ 1,{\ldots }, m\right\} \) and \(J_{1},{\ldots }, J_{p}\) denote a fixed partition of the set \(\left\{ 1,{\ldots }, d\right\} \) such that \(\left| I_{q}\right| \approx \left| I_{q'}\right| \) and \(\left| J_{r}\right| \approx \left| J_{r'}\right| \) for any \(1 \le q,q',r,r' \le p\). We partition the data \(\left\{ \mathbf {x}_{1},{\ldots }, \mathbf {x}_{m}\right\} \) and the labels \(\left\{ y_{1},{\ldots }, y_{m}\right\} \) into p disjoint subsets according to \(I_{1},{\ldots }, I_{p}\) and distribute them to p processors. The parameters \(\left\{ \alpha _{1},{\ldots }, \alpha _{m}\right\} \) are partitioned into p disjoint subsets \(\varvec{\alpha }^{(1)},{\ldots }, \varvec{\alpha }^{(p)}\) according to \(I_{1},{\ldots }, I_{p}\) while \(\left\{ w_{1},{\ldots }, w_{d}\right\} \) are partitioned into p disjoint subsets \(\mathbf {w}^{(1)},{\ldots }, \mathbf {w}^{(p)}\) according to \(J_{1},{\ldots }, J_{p}\) and distributed to p processors, respectively. The partitioning of \(\left\{ 1,{\ldots }, m\right\} \) and \(\left\{ 1,{\ldots }, d\right\} \) induces a \(p \times p\) partition of \(\varOmega \):
The execution of DSO proceeds in epochs, and each epoch consists of p inner iterations; at the beginning of the r-th inner iteration (\(r \ge 1\)), processor q owns \(\mathbf {w}^{\left( \sigma _{r}\left( q\right) \right) }\) where \(\sigma _{r}\left( q\right) := \left\{ \left( q + r - 2\right) \text {mod} p\right\} + 1\), and executes stochastic updates (8) on coordinates in \(\varOmega ^{\left( q,\sigma _{r}\left( q\right) \right) }\). Since these updates only involve variables in \(\varvec{\alpha }^{\left( q\right) }\) and \(\mathbf {w}^{\left( \sigma \left( q\right) \right) }\), no communication between processors is required to execute them. After every processor has finished its updates, \(\mathbf {w}^{\left( \sigma _{r}\left( q\right) \right) }\) is sent to machine \(\sigma _{r+1}^{-1}\left( \sigma _{r}\left( q\right) \right) \) and the algorithm moves on to the \((r+1)\)-st inner iteration. Detailed pseudo-code for the DSO algorithm can be found in Algorithm 1.
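The block-cyclic schedule is easy to simulate; the following serial Python sketch (illustrative only; the paper's implementation is C++/MPI, and Algorithm 1 is not reproduced here) shows how the active blocks rotate so that no two processors ever touch the same \(w_j\) or \(\alpha _i\) at the same time:

```python
def sigma(q, r, p):
    # Block of w owned by processor q in inner iteration r (1-based, as in the text).
    return ((q + r - 2) % p) + 1

def dso_epoch(p, update_block):
    # update_block(q, b) stands for running updates (8) on Omega^(q, b), i.e. on
    # the non-zeros whose rows lie in I_q and whose columns lie in J_b.
    for r in range(1, p + 1):          # p inner iterations per epoch
        for q in range(1, p + 1):      # executed in parallel in the real algorithm
            update_block(q, sigma(q, r, p))
        # here the real algorithm sends w^(sigma_r(q)) to the processor that owns
        # it in inner iteration r + 1; nothing to do in this serial simulation

# With p = 4, processor 1 visits the w blocks 1, 2, 3, 4 over one epoch:
print([sigma(1, r, 4) for r in range(1, 5)])   # prints [1, 2, 3, 4]
```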
3.1 Convergence Analysis
It is known that the stochastic procedure in Sect. 2.1 is guaranteed to converge to a saddle point of \(f(\mathbf {w},\varvec{\alpha })\) if \((i,\,j)\) is randomly accessed [19]. The main technical difficulty in proving convergence in our case is that, due to its distributed nature, DSO does not sample the \(\left( i,\,j\right) \) coordinates uniformly at random. Therefore, we first prove that DSO is serializable in a certain sense, that is, there exists an ordering of the updates such that replaying them on a single machine would recover the same solution produced by DSO. We then analyze this serial algorithm to establish convergence. We believe that this proof technique is of independent interest, and it differs significantly from the convergence analyses of other parallel stochastic algorithms, which typically make assumptions about the correlation between data points, e.g., [6, 15]. We first state the main theorem and then prove three lemmas; the proof of the main theorem is given in the last part of this section.
Theorem 1
Let \(\left( \mathbf {w}^{t}, \varvec{\alpha }^{t}\right) \) and \(\left( \tilde{\mathbf {w}}^{t}, \tilde{\varvec{\alpha }}^{t}\right) := \left( \frac{1}{t}\sum _{s=1}^{t} \mathbf {w}^{s}, \frac{1}{t} \sum _{s=1}^{t} \varvec{\alpha }^{s}\right) \) denote the parameter values, and the averaged parameter values respectively after the t-th epoch of Algorithm 1. Moreover, assume that \(\left\| \mathbf {w}\right\| ,\left\| \varvec{\alpha }\right\| ,\left| \nabla \phi _j(w_j)\right| ,\left| \nabla \ell _i^\star \left( -\alpha _{i}\right) \right| ,\) and \(\lambda \) are upper bounded by a constant \(c>1\). Then, there exists a constant C, which is dependent only on c, such that after T epochs the duality gap is
On the other hand, if \(\phi _j(s) = \frac{1}{2}s^2\), \(\sqrt{{\max _i\left| \varOmega _i\right| }} < m\) and \(\eta _t < \frac{1}{\lambda }\) hold, then there exists a different constant \(C'\), dependent only on c, such that
The first lemma states that there exists an ordering of the coordinate pairs \((i,\,j)\) that recovers the solution produced by DSO.
Lemma 1
During the inner iterations of the t-th epoch of Algorithm 1, index all \((i,\,j)\in \varOmega \) as \(\left( i_{k}, j_{k}\right) \) for \(k=1,{\ldots },|\varOmega |\), as follows: \(a<b\) if the update to \(\left( w_{j_{a}}, \alpha _{i_{a}}\right) \) was performed before the update to \(\left( w_{j_{b}}, \alpha _{i_{b}}\right) \); if \(\left( w_{j_{a}}, \alpha _{i_{a}}\right) \) and \(\left( w_{j_{b}}, \alpha _{i_{b}}\right) \) were updated at the same time, because p processors update the parameters simultaneously, then the updates are ordered according to the rank of the processor performing the update (see footnote 1). Denote the parameter values after k updates by \(\left( \mathbf {w}_{k}^{t}, \varvec{\alpha }_{k}^{t}\right) \). Then for all k and t we have
where
Proof
Let q be the processor which performed the k-th update in the \(\left( t+1\right) \)-st epoch. Moreover, let \(\left( k-\delta \right) \) be the most recent previous update done by processor q. There exist \(1 \le \delta ',\delta '' \le \delta \) such that \(\left( \mathbf {w}^t_{k-\delta '}, \varvec{\alpha }^t_{k-\delta ''}\right) \) are the parameter values read by the q-th processor to perform the k-th update. Because of our data partitioning scheme, only q can change the value of the \(i_k\)-th component of \(\varvec{\alpha }\) and the \(j_k\)-th component of \(\mathbf {w}\). Therefore, we have
Since \(f_k\) is invariant to changes in any coordinate other than \(\left( i_k,j_k\right) \), we have
The claim holds because we can write the k-th update formula as
\(\square \)
Next, we prove a technical lemma that gives a sufficient condition for global convergence of general iterative algorithms on convex-concave functions. Note that it is closely related to well-known results on convex functions (e.g., Theorem 3.2.2 in [20] and Lemma 14.1 in [24]).
Lemma 2
Suppose there exist \(D>0\) and \(C>0\) such that for all \(\left( \mathbf {w}, \varvec{\alpha }\right) \) and \(\left( \mathbf {w}', \varvec{\alpha }'\right) \) we have \(\left\| \mathbf {w}- \mathbf {w}'\right\| ^{2} + \left\| \varvec{\alpha }- \varvec{\alpha }'\right\| ^{2} \le D\), and for all \(t = 1,{\ldots }, T\) and all \(\left( \mathbf {w},\varvec{\alpha }\right) \) we have
then setting \(\eta _{t} = \sqrt{\frac{D}{2Ct}}\) ensures that
Proof
Rearrange (15) and divide by \(\eta _{t}\) to obtain
Summing the above for \(t=1,{\ldots }, T\) yields
On the other hand, thanks to convexity in \(\mathbf {w}\) and concavity in \(\varvec{\alpha }\), we see
Using these bounds in (17) and setting \(\eta _{t} = \sqrt{\frac{D}{2Ct}}\) leads to the following inequalities
The claim in (16) follows by using \(\sum _{t=1}^{T} \frac{1}{\sqrt{2t}} \le \sqrt{2T}\). \(\square \)
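The bound on the step-size sum used in the last step follows from a standard integral comparison:

\(\sum _{t=1}^{T} \frac{1}{\sqrt{2t}} = \frac{1}{\sqrt{2}} \sum _{t=1}^{T} \frac{1}{\sqrt{t}} \le \frac{1}{\sqrt{2}} \left( 1 + \int _{1}^{T} \frac{dt}{\sqrt{t}} \right) = \frac{2\sqrt{T} - 1}{\sqrt{2}} \le \sqrt{2T}.\)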
In order to derive (9), the constant C in (15) must be of the order of d; to obtain (10) in the case of the \(L_2\) regularizer, it must depend only on c. The last lemma establishes both. The proof is technical and related to techniques outlined in Nedić and Bertsekas [18]; see the Appendix for details.
Lemma 3
Under the assumptions outlined in Theorem 1 together with (13) and (14), inequality (15) is satisfied with C of the form \(C=C_1 d\); in the case of the \(L_2\) regularizer, it is satisfied with \(C=C_2\). Here \(C_1\) and \(C_2\) depend only on c.
Given these three lemmas, the proof of Theorem 1 takes a very simple form.
Proof
Because the parameters produced by Algorithm 1 are the same as those defined by (13) and (14), it suffices to show that (13) and (14) lead to the statements in the theorem. From Lemma 3 and the fact that \(\left\| \mathbf {w}- \mathbf {w}'\right\| ^{2} + \left\| \varvec{\alpha }- \varvec{\alpha }'\right\| ^{2} \le 8c^2\), (16) of Lemma 2 holds with \(\sqrt{CD} = 2c\sqrt{2C_1 d}\) in the general case and \(\sqrt{CD} = 2c\sqrt{2C_2}\) in the case of the \(L_2\) regularizer, where \(C_1\) and \(C_2\) depend only on c. This immediately implies (9) and (10). \(\square \)
To understand the implications of the above theorem, let us assume that Algorithm 1 is run with \(p \le \min \left( m, d\right) \) processors with a partitioning of \(\varOmega \) such that \(\left| \varOmega ^{\left( q, \sigma _{r}\left( q\right) \right) }\right| \approx \frac{\left| \varOmega \right| }{p^{2}}\) and \(\left| J_q\right| \approx \frac{d}{p}\) for all q. Let \(T_\mathrm{u}\) denote the time spent performing updates in one epoch, which is \(\mathcal {O}(|\varOmega |)\). Moreover, assume that communicating \(\mathbf {w}\) across the network takes time \(T_\mathrm{c}\), which is \(\mathcal {O}(d)\), and that communicating a subset of \(\mathbf {w}\) takes time proportional to its cardinality. Under these assumptions, the time for each inner iteration of Algorithm 1 can be written as
Since there are p inner iterations per epoch, the time required to finish an epoch is \(T_\mathrm{u} / {p} + T_\mathrm{c}\). As per Theorem 1, the number of epochs needed to obtain an \(\epsilon \)-accurate solution is independent of p. Therefore, one can conclude that DSO scales linearly in p as long as \( T_\mathrm{u}/{ T_\mathrm{c}} \gg p \) holds. As is to be expected, for large enough p the cost of communication \(T_\mathrm{c}\) will eventually dominate.
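Equivalently, relative to the single-processor epoch time predicted by the same model, the speedup is roughly

\(\mathrm {speedup}(p) \approx \frac{T_\mathrm{u} + T_\mathrm{c}}{T_\mathrm{u}/p + T_\mathrm{c}},\)

which approaches p in the regime \(T_\mathrm{u}/T_\mathrm{c} \gg p\) (a back-of-the-envelope consequence of the epoch-time model above).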
4 Related Work
Effective parallelization of stochastic optimization for regularized risk minimization has received significant research attention in recent years. Because of space limitations, our review of related work will unfortunately only be partial.
The key difficulty in parallelizing update (4) is that gradient calculation requires us to read, while updating the parameter requires us to write to the coordinates of \(\mathbf {w}\). Consequently, updates have to be executed in serial. Existing work has focused on working around the limitation of stochastic optimization by either (a) introducing strategies for computing the stochastic gradient in parallel (e.g., Langford et al. [15]), (b) updating the parameter in parallel (e.g., Bradley et al. [6], Recht et al. [21]), (c) performing independent updates and combining the resulting parameter vectors (e.g., Zinkevich et al. [33]), or (d) periodically exchanging information between processors (e.g., Bertsekas and Tsitsiklis [2]). While the former two strategies are popular in the shared memory setting, the latter two are popular in the distributed memory setting. In many cases the convergence bounds depend on the amount of correlation between data points and are limited to the case of a strongly convex regularizer (Hsieh et al. [12], Yang [30], Zhang and Xiao [32]). In contrast, our bounds in Theorem 1 do not depend on such properties of the data and are therefore more general.
Algorithms that use a so-called parameter server to synchronize variable updates across processors have recently become popular (e.g., Li et al. [16]). The main drawback of these methods is that it is not easy to “serialize” the updates, that is, to replay the updates on a single machine. This makes proving convergence guarantees, and debugging such frameworks, rather difficult, although some recent progress has been made [16].
The observation that updates on individual coordinates of the parameters can be carried out in parallel has been used for other models. In the context of Latent Dirichlet Allocation, Yan et al. [29] used a similar observation to derive an efficient GPU based collapsed Gibbs sampler. On the other hand, for matrix factorization Gemulla et al. [10] and Recht and Ré [22] independently proposed parallel algorithms based on a similar idea. However, to the best of our knowledge, rewriting (1) as a saddle point problem in order to discover parallelism is our novel contribution.
5 Experimental Results
5.1 Dataset and Implementation Details
We implemented DSO, SGD, and PSGD ourselves, while for BMRM we used the optimized implementation available from the toolkit for advanced optimization (TAO, https://bitbucket.org/sarich/tao-2.2). All algorithms are implemented in C++ and use MPI for communication. In our multi-machine experiments, each algorithm was run on four machines with eight cores per machine. DSO, SGD, and PSGD used AdaGrad [8] for step size adaptation. We also used the stochastic variance reduced gradient (SVRG) method of Johnson and Zhang [14] to accelerate the updates of DSO. In the multi-machine setting, each MPI process of DSO initializes its \(w_j\) and \(\alpha _i\) parameters by executing twenty iterations of dual coordinate descent [9] on its local data; the \(w_j\) values are then averaged across machines. We chose binary logistic regression and linear SVM as test problems, i.e., \(\phi _j(s)=\frac{1}{2}s^2\) with \(\ell _i(u)=\log (1+\exp (-u))\) or \(\ell _i(u)=[1-u]_+\), respectively. To prevent degeneracy in logistic regression, the values of the \(\alpha _{i}\) are restricted to \((10^{-14}, 1-10^{-14})\), while in the case of linear SVM they are restricted to [0, 1]. Similarly, the \(w_{j}\) are restricted to lie in the interval \([-1/\sqrt{\lambda }, 1/\sqrt{\lambda }]\) for linear SVM and \([-\sqrt{\log (2)/\lambda }, \sqrt{\log (2)/\lambda }]\) for logistic regression, following the idea of Shalev-Shwartz et al. [25].
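These box constraints amount to simple projections after each update; a minimal sketch (the function names are ours, not from the paper's code):

```python
import numpy as np

def project_alpha(alpha, problem, eps=1e-14):
    # Keep the dual variables in the intervals described above.
    if problem == "svm":
        return np.clip(alpha, 0.0, 1.0)
    return np.clip(alpha, eps, 1.0 - eps)      # logistic regression

def project_w(w, lam, problem):
    # Keep the primal variables in the intervals of Shalev-Shwartz et al. [25].
    bound = 1.0 / np.sqrt(lam) if problem == "svm" else np.sqrt(np.log(2.0) / lam)
    return np.clip(w, -bound, bound)
```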
5.2 Scalability of DSO
We first verify that the per-epoch complexity of DSO scales as \(T_\mathrm{u} / p + T_\mathrm{c}\), as predicted by our analysis in Sect. 3.1. Towards this end, we took the webspam-t dataset of Webb et al. [28], which is one of the largest datasets we could comfortably fit on a single machine. We let \(p \in \left\{ 1, 2, 4, 8, 16\right\} \) while fixing the number of cores on each machine to four.
Using the average time per epoch on one and two machines, one can estimate \(T_\mathrm{u}\) and \(T_\mathrm{c}\). Given these values, one can then predict the time per epoch for other values of p. Figure 2 shows the predicted time and the measured time averaged over 40 epochs. As can be seen, the time per epoch indeed goes down as \(\approx 1/p\), as predicted by the theory. The test error and objective function values on multiple machines were very close to those observed on a single machine, thus confirming Theorem 1.
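This estimation is just a two-point fit of the model \(t(p) = T_\mathrm{u}/p + T_\mathrm{c}\); a back-of-the-envelope sketch (the numbers below are placeholders, not the measured values):

```python
def fit_epoch_model(t1, t2):
    # Solve t1 = T_u + T_c and t2 = T_u / 2 + T_c for T_u and T_c.
    T_u = 2.0 * (t1 - t2)
    T_c = 2.0 * t2 - t1
    return T_u, T_c

T_u, T_c = fit_epoch_model(t1=100.0, t2=60.0)   # hypothetical epoch times (seconds)
for p in (1, 2, 4, 8, 16):
    print(p, T_u / p + T_c)                     # predicted time per epoch
```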
5.3 Comparison with Other Solvers
In our single machine experiments we compare DSO with stochastic gradient descent (SGD) and bundle methods for regularized risk minimization (BMRM) of Teo et al. [27]. In our multi-machine experiments we compare with parallel stochastic gradient descent (PSGD) of Zinkevich et al. [33] and BMRM. We chose these competitors because, just like DSO, they are general purpose solvers for regularized risk minimization (1), and hence can solve non-smooth problems such as SVMs as well as smooth problems such as logistic regression. Moreover, BMRM is a specialized solver for regularized risk minimization, which has similar performance to other first-order solvers such as ADMM.
We selected two representative datasets and two values of the regularization parameter, \(\lambda \in \left\{ 10^{-5}, 10^{-6}\right\} \), to present our results. For the single machine experiments we used the real-sim dataset from Hsieh et al. [13], while for the multi-machine experiments we used webspam-t. Details of the datasets can be found in Table 1 in the appendix. We use the test error rate as the comparison metric, since stochastic optimization algorithms are efficient at minimizing generalization error, not training error [3]. The results of the single machine experiments on linear SVM training can be found in Fig. 3. As can be seen, DSO shows efficiency comparable to that of SGD, and outperforms BMRM. This demonstrates that saddle-point optimization is a viable strategy even in the serial setting.
Our multi-machine experimental results for linear SVM training can be found in Fig. 5. As can be seen, PSGD converges very quickly, but the quality of the final solution is poor; this is probably because PSGD only solves processor-local problems and is not guaranteed to converge to the global optimum. On the other hand, both BMRM and DSO converge to solutions of similar quality, and at fairly comparable rates. We observed similar trends for logistic regression; therefore, we only show the results for \(\lambda = 10^{-5}\) in Fig. 4.
5.4 Terascale Learning with DSO
Next, we demonstrate the scalability of DSO on one of the largest publicly available datasets. Following the same experimental setup as Agarwal et al. [1], we work with the splice site recognition dataset [26], which contains 50 million training data points in a feature space of around 11.7 million dimensions. Each data point has approximately 2000 non-zero coordinates, and the entire dataset requires around 3 TB of storage. It has previously been shown that sub-sampling reduces performance on this dataset [26], and therefore we use the entire dataset for training.
Similar to Agarwal et al. [1], our goal is not to show the best classification accuracy on this data (this is best left to domain experts and feature designers). Instead, we wish to demonstrate the scalability of DSO and establish that (a) it can scale to such massive datasets, and (b) the empirical performance as measured by AUPRC (Area Under Precision-Recall Curve) improves as a function of time.
We used 14 machines with 8 cores per machine to train a linear SVM, and plot AUPRC as a function of time in Fig. 6. Since PSGD did not perform well in earlier experiments, here we restrict our comparison to BMRM. This experiment demonstrates one of the advantages of stochastic optimization, namely that the test performance increases steadily as a function of the number of iterations. On the other hand, for a batch solver like BMRM the AUPRC fluctuates as a function of the iteration number. The practical consequence of this observation is that one usually needs to wait for a batch optimizer to converge before using the resulting solution, whereas even the partial solutions produced by a stochastic optimizer such as DSO usually exhibit good generalization properties.
6 Discussion and Conclusion
We presented a new reformulation of regularized risk minimization as a saddle point problem and showed that one can derive an efficient distributed stochastic optimizer (DSO). We also proved rates of convergence of DSO. Unlike other solvers, our algorithm does not require strong convexity and thus has wider applicability. Our experimental results show that DSO is competitive with state-of-the-art optimizers such as BMRM and SGD, and outperforms simple parallel stochastic optimization algorithms such as PSGD.
A natural next step is to derive an asynchronous version of the DSO algorithm along the lines of the NOMAD algorithm proposed by Yun et al. [31]. Our convergence proof, which relies only on the existence of an equivalent serial sequence of updates, would still apply. Of course, there is also room to further improve the performance of DSO by deriving better step size adaptation schedules and by exploiting memory caching to speed up random access.
Notes
- 1.
Any other tie-breaking rule would also suffice.
References
Agarwal, A., Chapelle, O., Dudík, M., Langford, J.: A reliable effective terascale linear learning system. JMLR 15, 1111–1133 (2014)
Bertsekas, D., Tsitsiklis, J.: Parallel and Distributed Computation: Numerical Methods (1997)
Bottou, L., Bousquet, O.: The tradeoffs of large-scale learning. In: Optimization for Machine Learning (2011)
Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends ML 3(1), 1–123 (2010)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)
Bradley, J., Kyrola, A., Bickson, D., Guestrin, C.: Parallel coordinate descent for L1-regularized loss minimization. In: ICML, pp. 321–328 (2011)
Chu, C.T., Kim, S.K., Lin, Y.A., Yu, Y., Bradski, G., Ng, A.Y., Olukotun, K.: Map-reduce for machine learning on multicore. In: NIPS, pp. 281–288 (2006)
Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. JMLR 12, 2121–2159 (2010)
Fan, R.E., Chang, J.W., Hsieh, C.J., Wang, X.R., Lin, C.J.: LIBLINEAR: a library for large linear classification. JMLR 9, 1871–1874 (2008)
Gemulla, R., Nijkamp, E., Haas, P.J., Sismanis, Y.: Large-scale matrix factorization with distributed stochastic gradient descent. In: KDD, pp. 69–77 (2011)
Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. (2009)
Hsieh, C.J., Yu, H.F., Dhillon, I.S.: PASSCoDe: parallel asynchronous stochastic dual coordinate descent. In: ICML (2015)
Hsieh, C.J., Chang, K.W., Lin, C.J., Keerthi, S.S., Sundararajan, S.: A dual coordinate descent method for large-scale linear SVM. In: ICML, pp. 408–415 (2008)
Johnson, R., Zhang, T.: Accelerating stochastic gradient descent using predictive variance reduction. In: NIPS, pp. 315–323 (2013)
Langford, J., Smola, A.J., Zinkevich, M.: Slow learners are fast. In: NIPS (2009)
Li, M., Andersen, D.G., Smola, A.J., Yu, K.: Communication efficient distributed machine learning with the parameter server. In: Neural Information Processing Systems (2014)
Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45(3), 503–528 (1989)
Nedić, A., Bertsekas, D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12(1), 109–138 (2001)
Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19(4), 1574–1609 (2009)
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, Heidelberg (2004). https://doi.org/10.1007/978-1-4419-8853-9
Recht, B., Re, C., Wright, S., Niu, F.: Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In: NIPS, pp. 693–701 (2011)
Recht, B., Ré, C.: Parallel stochastic gradient algorithms for large-scale matrix completion. Math. Program. Comput. 5(2), 201–226 (2013)
Schölkopf, B., Smola, A.J.: Learning with Kernels (2002)
Shalev-Shwartz, S., Ben-David, S.: Understanding Machine Learning. Cambridge University Press, Cambridge (2014)
Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: Primal estimated sub-gradient solver for SVM. In: ICML (2007)
Sonnenburg, S., Franc, V.: COFFIN: a computational framework for linear SVMs. In: ICML (2010)
Teo, C.H., Vishwanthan, S.V.N., Smola, A.J., Le, Q.V.: Bundle methods for regularized risk minimization. JMLR 11, 311–365 (2010)
Webb, S., Caverlee, J., Pu, C.: Introducing the webb spam corpus: using email spam to identify web spam automatically. In: CEAS (2006)
Yan, F., Xu, N., Qi, Y.: Parallel inference for latent Dirichlet allocation on graphics processing units. In: NIPS, pp. 2134–2142 (2009)
Yang, T.: Trading computation for communication: distributed stochastic dual coordinate ascent. In: NIPS (2013)
Yun, H., Yu, H.F., Hsieh, C.J., Vishwanathan, S.V.N., Dhillon, I.S.: NOMAD: non-locking, stOchastic multi-machine algorithm for asynchronous and decentralized matrix completion. VLDB 7, 975–986 (2014)
Zhang, Y., Xiao, L.: DiSCO: distributed optimization for self-concordant empirical loss. In: ICML (2015)
Zinkevich, M., Smola, A.J., Weimer, M., Li, L.: Parallelized stochastic gradient descent. In: NIPS, pp. 2595–2603 (2010)
Acknowledgments
This work is partially supported by MEXT KAKENHI Grant Number 26730114 and JST-CREST JPMJCR1304.