1 Introduction

Many machine learning problems involve minimising high dimensional composite objectives (Dhurandhar et al., 2018; Lu et al., 2014; Ribeiro et al., 2016; Xie et al., 2018). For example, in the task of explaining predictions of an image classifier (Dhurandhar et al., 2018; Ribeiro et al., 2016), we need to find a sufficiently small set of features explaining the prediction by solving the following constrained optimisation problem

$$\begin{aligned} \begin{aligned} \min _{x\in {\mathbb{R}}^{d}}\quad&l(x)+\lambda _{1}\Vert x\Vert _{1}+\frac{\lambda _{2}}{2}\Vert x\Vert _{2}^{2}\\ \ {\text{s.t.}} \quad&\vert x_{i} |\le c_{i} \quad {\text{for all }} i=1,\ldots , d, \\ \end{aligned} \end{aligned}$$

where l is a function relating to the classifier, \(\lambda _{1}\) controls the sparsity of the feature set, \(\lambda _{2}\) controls the complexity of the feature set, and \(c_{1},\ldots ,c_{d}\) are the ranges of the features. For l with a complicated structure and large d, it is practical to solve the problem by optimising the first-order approximation of the objective function (Lan, 2020). However, first-order methods cannot attain optimal performance due to the non-smooth component \(\lambda _{1}\Vert \cdot \Vert _{1}\). Furthermore, the purpose of introducing the \(\ell _{1}\) regularisation is to ensure the sparsity of the decision variable, and applying first-order algorithms directly to the subgradient of \(\lambda _{1}\Vert \cdot \Vert _{1}\) does not lead to sparse updates (Duchi et al., 2010). We refer to an objective function consisting of a loss with a complicated structure and a simple (possibly non-smooth) convex regularisation term as a composite objective.

This paper focuses on the more general setting of online convex optimisation (OCO), which can be considered an iterative game between a player and an adversary. In each round t of the game, the player makes a decision \(x_{t}\in {\mathcal{K}}\). Next, the adversary selects and reveals a convex loss \(l_{t}:{\mathcal{K}}\rightarrow {\mathbb{R}}\), and the player suffers the composite loss \(f_{t}(x_{t})=l_{t}(x_{t})+r_{t}(x_{t})\), where \(r_{t}:{\mathbb{X}}\rightarrow {\mathbb{R}}_{\ge 0}\) is a known closed convex function. The goal is to develop algorithms minimising the regret of not choosing the best fixed decision \(x\in {\mathcal{K}}\) in hindsight

$$\begin{aligned} {\mathcal{R}}_{1:T}=\sum _{t=1}^{T}f_{t}(x_{t})-\min _{x\in {\mathcal{K}}}\sum _{t=1}^{T}f_{t}(x). \end{aligned}$$

An online optimisation algorithm can be converted into a stochastic optimisation algorithm using the online-to-batch conversion technique (Cesa-Bianchi et al., 2004), which is our primary motivation. In addition, online optimisation has many direct applications, such as recommender systems (Song et al., 2014) and time series prediction (Anava et al., 2013).

Given a sequence of subgradients \(\{g_{t}\}\) of \(\{l_{t}\}\), we are interested in the so-called adaptive algorithms ensuring regret bounds of the form \({\mathcal{O}}(\sqrt{\sum _{t=1}^{T}\Vert g_{t}\Vert _{*} ^{2}})\). The adaptive algorithms are worst-case optimal in the online setting (McMahan & Streeter, 2010) and can be converted into stochastic optimisation algorithms with optimal convergence rates (Cutkosky, 2019; Joulani et al., 2020; Kavis et al., 2019; Levy et al., 2018). The adaptive subgradient methods (AdaGrad) (Duchi et al., 2011) and their variants (Alacaoglu et al., 2020; Duchi et al., 2011; Orabona & Pál, 2018; Orabona et al., 2015) have become the most popular adaptive algorithms in recent years. They are often applied to estimating deep learning models and outperform standard optimisation algorithms when the gradient vectors are sparse. However, such a property cannot be expected in every problem. If the decision variables are in an \(\ell _{1}\) ball and the gradient vectors are dense, the AdaGrad-style algorithms do not have an optimal theoretical guarantee due to the sub-linear regret dependence on the dimensionality.

The exponentiated gradient (EG) methods (Arora et al., 2012; Kivinen & Warmuth, 1997), which are designed for estimating weights in the positive orthant, enjoy the regret bound growing logarithmically with the dimensionality. The \({{\textit{EG}}^{\pm }}\) algorithm generalises this idea to negative weights (Kivinen & Warmuth, 1997; Warmuth, 2007). Given d dimensional problems with the maximum norm of the gradient bounded by G, the regret of \({{\textit{EG}}^{\pm }}\) is upper bounded by \({\mathcal{O}}(G\sqrt{T\ln d})\). As the performance of the \({{\textit{EG}}^{\pm }}\) algorithm depends strongly on the choice of hyperparameters, the p-norm algorithm (Gentile, 2003), which is less sensitive to the tuning of hyperparameters, is introduced to approach the logarithmic behaviour of \({{\textit{EG}}^{\pm }}\). Kakade et al. (2012) further extends the p-norm algorithm to learning with matrices. An adaptive version of the p-norm algorithm is analysed in Orabona et al. (2015), which has a regret upper bound proportional to \(\Vert x\Vert _{p,*}^{2}\sqrt{\sum _{t=1}^{T}\Vert g_{t}\Vert _{p}^{2}}\) for a given sequence of gradients \(\{g_{t}\}\). By choosing \(p=2\ln d\), a regret upper bound \({\mathcal{O}}(\Vert x\Vert _{1}^{2}\sqrt{\ln d \sum _{t=1}^{T}\Vert g_{t}\Vert _{\infty }^{2}})\) can be achieved. However, tuning hyperparameters is still required to attain the optimal regret \({\mathcal{O}}(\Vert x\Vert _{1}\sqrt{\ln d \sum _{t=1}^{T}\Vert g_{t}\Vert _{\infty }^{2}})\).

Recently, Ghai et al. (2020) has introduced a hyperbolic regulariser for the online mirror descent update (HU), which can be viewed as an interpolation between gradient descent and EG. It has a logarithmic behaviour as in EG and a stepsize that can be flexibly scheduled as in gradient descent. However, many optimisation problems with sparse targets have an \(\ell _{1}\) or nuclear regulariser in the objective function; otherwise, the optimisation algorithm has to pick a decision variable from a compact decision set. Due to the hyperbolic regulariser, it is difficult to derive a closed-form solution for either case. Ghai et al. (2020) has proposed a workaround that tunes a temperature-like hyperparameter to normalise the decision variable at each iteration, which is equivalent to the \({{\textit{EG}}^{\pm }}\) algorithm and makes the performance depend on the tuning.

This paper proposes a family of algorithms for the online optimisation of composite objectives. The algorithms employ an entropy-like regulariser combined with algorithmic ideas of adaptivity and optimism. Equipped with the regulariser, the online mirror descent (OMD) and the follow-the-regularised-leader (FTRL) algorithms update the absolute values of the scalar components of the decision variable in the same way as EG in the positive orthant. The direction of the decision variable is set in the same way as in the p-norm algorithm. To derive the regret upper bound, we first show that the regulariser is strongly convex with respect to the \(\ell _{1}\)-norm over the \(\ell _{1}\) ball. Then we analyse the algorithms in the comprehensive framework for optimistic algorithms with adaptive regularisers (Joulani et al., 2017). Given the radius D of the decision set and sequences of gradients \(\{g_{t}\}\) and hints \(\{h_{t}\}\), the proposed algorithms achieve a regret upper bound of the form \({\mathcal{O}}(D\sqrt{\ln d\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert ^{2}_{\infty }})\). With the techniques introduced in Ghai et al. (2020), a spectral analogue of the entropy-like regulariser can be found and proved to be strongly convex with respect to the nuclear norm over the nuclear ball, from which the best-known regret upper bound depending on \(\sqrt{\ln (\min \{m,n\})}\) for problems in \({\mathbb{R}}^{m,n}\) follows.

Furthermore, the algorithms have closed-form solutions for \(\ell _{1}\) and nuclear regularised objective functions. For \(\ell _{2}\) and Frobenius regularised objectives, the update rules involve values of the principal branch of the Lambert W function, which can be well approximated. We propose a sorting-based procedure projecting the solution onto the decision set for \(\ell _{1}\) or nuclear ball constrained problems. Finally, the proposed online algorithms can be converted into algorithms for stochastic optimisation with the technique introduced in Joulani et al. (2020). We show that the converted algorithms guarantee an optimal accelerated convergence rate for smooth objective functions. The convergence rate depends logarithmically on the dimensionality of the problem, which suggests its advantage over the accelerated AdaGrad-style algorithms (Cutkosky, 2019; Joulani et al., 2020; Levy et al., 2018).

The rest of the paper is organised as follows. Section 2 reviews the existing work. Section 3 introduces the notation and preliminary concepts. Next, we present and analyse our algorithms in Sect. 4. In Sect. 5, we derive efficient implementations for some popular choices of composite objectives, constraints and stochastic optimisation. Section 6 demonstrates the empirical evaluations using both synthetic and real-world data. Finally, we conclude our work in Sect. 7.

2 Related work

Our primary motivation is to solve optimisation problems with an elastic net regulariser in the objective function, which arise frequently in attacking (Cancela et al., 2021; Carlini & Wagner, 2017; Chen et al., 2018) and explaining (Dhurandhar et al., 2018; Ribeiro et al., 2016) deep neural networks. The proximal gradient method (Nesterov, 2003) and its accelerated variants (Beck & Teboulle, 2009) are usually applied to solving such problems. However, these algorithms are not practical in this setting, since they require prior knowledge about the smoothness of the objective function to ensure their convergence.

The AdaGrad-style algorithms (Alacaoglu et al., 2020; Duchi et al., 2011; Orabona & Pál, 2018; Orabona et al., 2015) have become popular in the machine learning community in recent years. Given the gradient vectors \(g_{1},\ldots , g_{t}\) received at iteration t, the core idea of these algorithms is to set the stepsizes proportional to \(\frac{1}{\sqrt{\sum _{s=1}^{t-1}\Vert g_{s}\Vert _{*} ^{2}}}\) to ensure a regret upper bounded by \({\mathcal{O}}(\sqrt{\sum _{t=1}^{T}\Vert g_{t}\Vert _{*} ^{2}})\) after T iterations. Online learning algorithms with this adaptive regret can be directly applied to stochastic optimisation problems (Alacaoglu et al., 2020; Li & Orabona, 2019) or can be converted into a stochastic algorithm (Cesa-Bianchi & Gentile, 2008) with a convergence rate \({\mathcal{O}}(\frac{1}{\sqrt{T}})\). This rate can be further improved to \({\mathcal{O}}(\frac{1}{T^{2}})\) for unconstrained problems with smooth loss functions by applying acceleration techniques (Cutkosky, 2019; Kavis et al., 2019; Levy et al., 2018). These acceleration techniques do not require prior knowledge about the smoothness of the loss function and guarantee a convergence rate of \({\mathcal{O}}(\frac{1}{\sqrt{T}})\) for non-smooth functions. Joulani et al. (2020) has proposed a simple approach to accelerate optimistic online optimisation algorithms with adaptive regret bounds.

Given a d-dimensional problem, the algorithms mentioned above have a regret upper bound depending (sub-) linearly on d. We are interested in a logarithmic regret dependence on the dimensionality, which can be attained by the \({\textit{EG}}\) family algorithms (Arora et al., 2012; Kivinen & Warmuth, 1997; Warmuth, 2007) and their adaptive optimistic extension (Steinhardt & Liang, 2014). However, these algorithms work only for decision sets in the form of cross-polytopes and require prior knowledge about the radius of the decision set for general convex optimisation problems. The p-norm algorithm (Gentile, 2003; Kakade et al., 2012) does not have the limitation mentioned above; however, it still requires prior knowledge about the problem to attain optimal performance (Orabona et al., 2015). The HU algorithm (Ghai et al., 2020), which interpolates gradient descent and EG, can theoretically be applied to loss functions with elastic net regularisers and decision sets other than cross-polytopes. However, it is not practical due to the complex projection step.

Following the idea of HU, we propose more practical algorithms interpolating EG and the p-norm algorithm. The core of our algorithms is a symmetric logarithmic function. Orabona (2013) first introduced the idea of composing the single-dimensional symmetric logarithmic function and a norm to generalise EG to the infinite-dimensional space. It has become popular for parameter-free optimisation (Cutkosky & Boahen, 2016, 2017a, b; Kempka et al., 2019), since one can easily construct an adaptive regulariser with this composition (Cutkosky & Boahen, 2017a). In this paper, instead of using the composition, we apply the symmetric logarithmic function directly to each entry of a vector to construct a symmetric entropy-like function that is strongly convex with respect to the \(\ell _{1}\) norm. We analyse OMD and FTRL with the entropy-like function in the framework developed in Joulani et al. (2017). The analysis of the spectral analogue of the entropy-like function follows the idea proposed in Ghai et al. (2020).

3 Preliminaries

The focus of this paper is OCO with the decision variable taken from a compact convex subset \({\mathcal{K}}\subseteq {\mathbb{X}}\) of a finite-dimensional vector space equipped with a norm \(\Vert \cdot \Vert\). Given a sequence of vectors \(\{v_{t}\}\), we use the compressed-sum notation \(v_{1:t}= \sum _{s=1}^{t}v_{s}\) for simplicity. We denote by \({\mathbb{X}}_{*}\) the dual space with the dual norm \(\Vert \cdot \Vert _{*}\). The bi-linear map combining vectors in \({\mathbb{X}}_{*}\) and \({\mathbb{X}}\) is denoted by

$$\begin{aligned} \langle \cdot ,\cdot \rangle :{\mathbb{X}}_{*}\times {\mathbb{X}}\rightarrow {\mathbb{R}}, (\theta , x)\mapsto \theta x. \end{aligned}$$

For \({\mathbb{X}}={\mathbb{R}}^{d}\), we denote by \(\Vert \cdot \Vert _{1}\) the \(\ell _{1}\) norm, the dual norm of which is the maximum norm denoted by \(\Vert \cdot \Vert _{\infty }\). It is well known that the \(\ell _{2}\) norm denoted by \(\Vert \cdot \Vert _{2}\) is self-dual. In case \({\mathbb{X}}\) is the space of the matrices, for simplicity, we also use \(\Vert \cdot \Vert _{1}\), \(\Vert \cdot \Vert _{2}\) and \(\Vert \cdot \Vert _{\infty }\) for the nuclear, Frobenius and spectral norm, respectively.

Let \(\sigma :{\mathbb{R}}^{m,n}\rightarrow {\mathbb{R}}^{\min \{m,n\}}\) be the function mapping a matrix to its singular values. Define

$$\begin{aligned} {\text{diag}}:{\mathbb{R}}^{\min \{m,n\}}\rightarrow {\mathbb{R}}^{m,n}, x\mapsto X \end{aligned}$$

with

$$\begin{aligned} X_{ij}= {\left\{ \begin{array}{ll} x_{i}, &{}\quad {\text{if }} i=j \\ 0, &{} \quad {\text{otherwise}}. \\ \end{array}\right. } \end{aligned}$$

Clearly, the singular value decomposition (SVD) of a matrix X can be expressed as

$$\begin{aligned} X=U{\text{diag}}(\sigma (X))V^{\top }. \end{aligned}$$

Similarly, we write the eigendecomposition of a symmetric matrix X as

$$\begin{aligned} X=U{\text{diag}}(\lambda (X))U^{\top }, \end{aligned}$$

where we denote by \(\lambda :{\mathbb{S}}^{d}\mapsto {\mathbb{R}}^{d}\) the function mapping a symmetric matrix to its spectrum.

Given a convex set \({\mathcal{K}}\subseteq {\mathbb{X}}\) and a convex function \(f:{\mathcal{K}}\rightarrow {\mathbb{R}}\) defined on \({\mathcal{K}}\), we denote by \(\partial f(y)=\{g\in {\mathbb{X}}_{*}\,|\,\forall x\in {\mathcal{K}}.\,f(x)-f(y)\ge \langle g,x-y\rangle \}\) the subdifferential of f at y and write \(\triangledown f(y)\) for an arbitrary element of \(\partial f(y)\). A function is \(\eta\)-strongly convex with respect to \(\Vert \cdot \Vert\) over \({\mathcal{K}}\) if

$$\begin{aligned} f(x)-f(y)\ge \langle \triangledown f(y),x-y\rangle +\frac{\eta }{2}\Vert x-y\Vert ^{2} \end{aligned}$$

holds for all \(x,y\in {\mathcal{K}}\) and \(\triangledown f(y)\in \partial f(y)\).

4 Algorithms and analysis

In this section, we present and analyse our algorithms, beginning with a short review of EG and the p-norm algorithm for the case \(f_{t}= l_{t}\). The EG algorithm can be considered an instance of OMD, the update rule of which is given by

$$\begin{aligned} x_{t+1,i}\propto \exp \left( \ln (x_{t,i})-\frac{1}{\eta }g_{t,i}\right) , \end{aligned}$$

where \(g_{t}\in \partial f_{t}(x_{t})\) is the subgradient, and \(\eta > 0\) is the stepsize. Although the algorithm has the expected logarithmic dependence on the dimensionality, its update rule is applicable only to decision variables on the standard simplex. For problems with decision variables taken from an \(\ell _{1}\) ball \(\{x|\Vert x\Vert _{1}\le D\}\), one can apply the \({\textit{EG}}^{\pm }\) trick, i.e. use the vector \([\frac{D}{2}g_{t}^{\top },-\frac{D}{2}g_{t}^{\top }]^{\top }\) to update \([x_{t+1,+}^{\top },x_{t+1,-}^{\top }]^{\top }\) at iteration t and choose the decision variable \(x_{t+1,+}-x_{t+1,-}\), as sketched below. However, if the decision set is implicitly given by a regularisation term, the parameter D has to be tuned. Since an overestimated D increases the regret, while an underestimated D reduces the freedom of the model, the algorithm is sensitive to this tuning. For composite objectives, EG is not practical due to its update rule.
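For concreteness, the following is a minimal sketch of the \({\textit{EG}}^{\pm }\) reduction just described; the helper `eg_pm_step` and its renormalisation onto the scaled simplex are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def eg_pm_step(x_plus, x_minus, g, D, eta):
    """One EG+- step on the l1 ball of radius D (illustrative sketch)."""
    # Lift: x = x_plus - x_minus, so the two halves see +g and -g,
    # scaled by D/2 as in the text.
    w = np.concatenate([x_plus, x_minus])
    lifted_g = np.concatenate([g, -g]) * D / 2.0
    w = w * np.exp(-lifted_g / eta)   # multiplicative (EG) update
    w = D * w / w.sum()               # renormalise onto the scaled simplex
    d = g.shape[0]
    return w[:d], w[d:]

# usage: start from the uniform point and play x = x_plus - x_minus
d, D, eta = 5, 1.0, 10.0
x_plus = np.full(d, D / (2 * d))
x_minus = x_plus.copy()
g = np.random.randn(d)
x_plus, x_minus = eg_pm_step(x_plus, x_minus, g, D, eta)
x = x_plus - x_minus                  # decision variable with ||x||_1 <= D
```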

Compared to EG, the p-norm algorithm is better suited to the case of unknown D. Writing \(q=\frac{p}{p-1}\) for the conjugate exponent of p, its update rule is given by

$$\begin{aligned} \begin{aligned} y_{t+1,i}=\,&{\text{sgn}}(x_{t,i})\vert x_{t,i} |^{p-1}\Vert x_{t}\Vert _{p}^{2-p}-\frac{1}{\eta }g_{t,i}\\ x_{t+1,i}=\,&{\text{sgn}}(y_{t+1,i})\vert y_{t+1,i} |^{q-1}\Vert y_{t+1}\Vert _{q}^{2-q}.\\ \end{aligned} \end{aligned}$$

To combine the ideas of EG and the p-norm algorithm, we consider the following generalised entropy function

$$\begin{aligned} \phi : {\mathbb{R}}\rightarrow {\mathbb{R}}, x\mapsto \alpha (\vert x |+\beta )\ln \left( \frac{\vert x |}{\beta }+1\right) -\alpha \vert x |. \end{aligned}$$
(1)

In the next lemma, we show the twice differentiability and strict convexity of \(\phi\), based on which a strongly convex potential function for OMD in a compact decision set can be constructed.

Lemma 1

\(\phi\) is twice continuously differentiable and strictly convex with

1. \(\phi '(x)=\alpha \ln \left( \frac{\vert x |}{\beta }+1\right) {\text{sgn}}(x)\)

2. \(\phi ''(x)=\frac{\alpha }{\vert x |+\beta }\).

Furthermore, the convex conjugate given by \(\phi ^{*}:{\mathbb{R}}\rightarrow {\mathbb{R}}, \theta \mapsto \alpha \beta \exp \frac{\vert \theta |}{\alpha }-\beta \vert \theta |-\alpha \beta\) is also twice continuously differentiable with

1. \(\phi ^{*\prime }(\theta )=\left( \beta \exp \frac{\vert \theta |}{\alpha }-\beta \right) {\text{sgn}}(\theta )\)

2. \(\phi ^{*\prime \prime }(\theta )=\frac{\beta }{\alpha } \exp \frac{\vert \theta |}{\alpha }.\)

Since the natural logarithm can be expanded as \(\ln (\frac{\vert x |}{\beta }+1)=\frac{\vert x |}{\beta }-\frac{\vert x |^{2}}{2\beta ^{2}}+\frac{\vert x |^{3}}{3\beta ^{3}}-\cdots\) for \(\vert x |<\beta\), \(\phi (x)\) can be intuitively considered as an interpolation between the absolute value and the square function. As observed in Fig. 1a, it is closer to the absolute value than the hyperbolic entropy introduced in Ghai et al. (2020). Moreover, running OMD with the regulariser \(x\mapsto \sum _{i=1}^{d}\phi (x_{i})\) yields the update rule

$$\begin{aligned} \begin{aligned} y_{t+1,i}=\,&{\text{sgn}}(x_{t,i})\ln \left( \frac{\vert x_{t,i} |}{\beta }+1\right) -\frac{1}{\alpha }g_{t,i}\\ x_{t+1,i}=\,&{\text{sgn}}(y_{t+1,i})(\beta \exp (\vert y_{t+1,i} |)-\beta ),\\ \end{aligned} \end{aligned}$$

which sets the signs of the coordinates like the p-norm algorithm and updates the scale similarly to EG. As illustrated in Fig. 1b, the mirror map \(\triangledown \phi ^{*}\) is close to the mirror map of EG, while the behaviour of HU is more similar to the gradient descent update.

Fig. 1: Comparison of convex regularisers
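As a minimal sketch, the unconstrained coordinate-wise update above can be implemented as follows; the function name and the toy inputs are ours.

```python
import numpy as np

def exp_md_step(x, g, alpha, beta):
    """One unconstrained step of the exponentiated update.

    The sign is handled as in the p-norm algorithm, while the magnitude
    is updated multiplicatively as in EG; alpha and beta follow Eq. (1).
    """
    y = np.sign(x) * np.log(np.abs(x) / beta + 1.0) - g / alpha
    return np.sign(y) * (beta * np.exp(np.abs(y)) - beta)

x = np.zeros(4)
g = np.array([0.3, -1.2, 0.0, 2.0])
# small |g| behaves almost additively, large |g| multiplicatively
x = exp_md_step(x, g, alpha=1.0, beta=0.25)
```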

4.1 Algorithms in the Euclidean space

To obtain an adaptive and optimistic algorithm, we define the following time-varying function

$$\begin{aligned} \phi _{t}:{\mathbb{R}}^{d}\rightarrow {\mathbb{R}}, x\mapsto \alpha _{t} \sum _{i=1}^{d}\left( (\vert x_{i} |+\beta )\ln \left( \frac{\vert x_{i} |}{\beta }+1\right) -\vert x_{i} |\right) , \end{aligned}$$
(2)

and apply it to the adaptive optimistic OMD (AO-OMD) given by

$$\begin{aligned} \begin{aligned} x_{t+1}&=\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}\langle g_{t}-h_{t}+h_{t+1},x\rangle +r_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,x_{t}) \end{aligned} \end{aligned}$$
(3)

for the sequence of subgradients \(\{g_{t}\}\) and hints \(\{h_{t}\}\). In a bounded domain, \(\phi _{t}\) is strongly convex with respect to \(\Vert \cdot \Vert _{1}\), which is shown in the next lemma.

Lemma 2

Let \({\mathcal{K}}\subseteq {\mathbb{R}}^{d}\) be convex and bounded such that \(\Vert x\Vert _{1}\le D\) for all \(x\in {\mathcal{K}}\). Then we have for all \(x,y\in {\mathcal{K}}\)

$$\begin{aligned} \phi _{t}(x)-\phi _{t}(y)\ge \triangledown \phi _{t}(y)^{\top }(x-y)+\frac{\alpha _{t}}{D+d\beta }\Vert x-y\Vert _{1}^{2}. \end{aligned}$$

Given this strong convexity, the regret of AO-OMD with regulariser (2) can be analysed in the framework for optimistic algorithms with adaptive regularisers (Joulani et al., 2017) and is upper bounded as in the following theorem.

Theorem 1

Let \({\mathcal{K}}\subseteq {\mathbb{R}}^{d}\) be a compact convex set. Assume that there is some \(D>0\) such that \(\Vert x\Vert _{1}\le D\) holds for all \(x\in {\mathcal{K}}\). Let \(\{x_{t}\}\) be the sequence generated by update rule (3) with regulariser (2). Setting \(\beta =\frac{1}{d}\), \(\eta =\sqrt{\frac{1}{\ln (D+1)+\ln d}}\), and \(\alpha _{t}=\eta \sqrt{\sum _{s=1}^{t-1}\Vert g_{s}-h_{s}\Vert ^{2}_{\infty }}\), we obtain

$$\begin{aligned} \begin{aligned} {\mathcal{R}}_{1:T}\le&r_{1}(x_{1})+c(d,D)\sqrt{\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert _{\infty }^{2}} \end{aligned} \end{aligned}$$

for some \(c(d,D)\in {\mathcal{O}}(D\sqrt{\ln (D+1)+\ln d})\).
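For implementations, the following is a sketch of the parameter schedule prescribed by Theorem 1; the class and method names are ours, and in practice a small positive floor on \(\alpha _{t}\) keeps the first update well defined.

```python
import numpy as np

class AdaptiveScale:
    """Schedule beta = 1/d, fixed eta, and alpha_t per Theorem 1 (sketch)."""

    def __init__(self, d, D):
        self.beta = 1.0 / d
        self.eta = (np.log(D + 1.0) + np.log(d)) ** -0.5
        self.sq_sum = 0.0

    def alpha(self):
        # alpha_t before observing g_t; add a tiny floor in practice,
        # since the sum is empty at t = 1
        return self.eta * np.sqrt(self.sq_sum)

    def update(self, g, h):
        # accumulate ||g_t - h_t||_inf^2 after the round
        self.sq_sum += np.max(np.abs(g - h)) ** 2
```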

EG can also be considered as an instance of FTRL with a constant stepsize. The update rule of the adaptive optimistic FTRL (AO-FTRL) is given by

$$\begin{aligned} \begin{aligned} x_{t+1}&=\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}\langle g_{1:t}+h_{t+1},x\rangle +r_{1:t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,x_{1}). \end{aligned} \end{aligned}$$
(4)

The regret of AO-FTRL is upper bounded as in the following theorem.

Theorem 2

Let \({\mathcal{K}}\subseteq {\mathbb{R}}^{d}\) be a compact convex set with \(d>e\). Assume that there is some \(D\ge 1\) such that \(\Vert x\Vert _{1}\le D\) holds for all \(x\in {\mathcal{K}}\). Let \(\{x_{t}\}\) be the sequence generated by update rule (4) with regulariser (2). Setting \(\beta =\frac{1}{d}\), \(\eta =\sqrt{\frac{1}{\ln (D+1)+\ln d}}\) and \(\alpha _{t}=\eta \sqrt{\sum _{s=1}^{t-1}\Vert g_{s}-h_{s}\Vert _{\infty }^{2}}\), we obtain

$$\begin{aligned} \begin{aligned} {\mathcal{R}}_{1:T}\le \,&c(d,D)\sqrt{\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert _{\infty }^{2}}\\ \end{aligned} \end{aligned}$$

for some \(c(d,D)\in {\mathcal{O}}(D\sqrt{\ln (D+1)+\ln d})\).

4.2 Spectral algorithms

We now consider the setting in which the decision variables are matrices taken from a compact convex set \({\mathcal{K}}\subseteq {\mathbb{R}}^{m,n}\). A direct attempt to solve this problem is to apply update rule (3) or (4) to the vectorised matrices. A regret bound of \({\mathcal{O}}(D\sqrt{T\ln (mn)})\) can be guaranteed if the \(\ell _{1}\) norm of the vectorised matrices from \({\mathcal{K}}\) is bounded by D, which is not optimal. In many applications, elements in \({\mathcal{K}}\) are assumed to have bounded nuclear norm, for which the regulariser

$$\begin{aligned} \Phi _{t}=\phi _{t}\circ \sigma \end{aligned}$$
(5)

can be applied. The next theorem gives the strong convexity of \(\Phi _{t}\) with respect to \(\Vert \cdot \Vert _{1}\) over \({\mathcal{K}}\), which allows us to use \(\{\Phi _{t}\}\) as the potential functions in OMD and FTRL.

Theorem 3

Let \(\sigma :{\mathbb{R}}^{m,n}\rightarrow {\mathbb{R}}^{d}\) be the function mapping a matrix to its singular values. Then the function \(\Phi _{t}=\phi _{t}\circ \sigma\) is \(\frac{\alpha _{t}}{2(D+\min \{m,n\}\beta )}\)-strongly convex with respect to the nuclear norm over the nuclear ball with radius D.

The proof of Theorem 3 follows the idea introduced in Ghai et al. (2020). Define the operator

$$\begin{aligned} S:{\mathbb{R}}^{m,n}\rightarrow {\mathbb{S}}^{m+n}, X\mapsto \begin{bmatrix} 0 &{}\quad X\\ X^{\top } &{}\quad 0 \end{bmatrix} \end{aligned}$$

The set \({\mathcal{X}}=\{S(X)\,|\,X\in {\mathbb{R}}^{m,n}\}\) is a finite-dimensional linear subspace of the space of symmetric matrices \({\mathbb{S}}^{m+n}\). Its dual space \({\mathcal{X}}_{*}\) determined by the Frobenius inner product can be represented by \({\mathcal{X}}\) itself. For any \(S(X)\in {\mathcal{X}}\), the set of eigenvalues of S(X) consists of the singular values and the negative singular values of X. Since \(\phi\) is even, we have \(\sum _{i=1}^{d}\phi (\sigma _{i}(X))=\sum _{i=1}^{d}\phi (\lambda _{i}(X))\) for symmetric X. The next lemma shows that both \(\Phi _{t}|_{\mathcal{X}}\) and \(\Phi ^{*}_{t}|_{\mathcal{X}}\) are twice differentiable.

Lemma 3

Let \(f:{\mathbb{R}}\rightarrow {\mathbb{R}}\) be twice continuously differentiable. Then the function given by

$$\begin{aligned} F:{\mathbb{S}}^{d}\rightarrow {\mathbb{R}}, X\mapsto \sum _{i=1}^{d} f(\lambda _{i}(X)) \end{aligned}$$

is twice differentiable. Furthermore, let \(X\in {\mathbb{S}}^{d}\) be a symmetric matrix with eigenvalue decomposition

$$\begin{aligned}X=U{\text{diag}}(\lambda _{1}(X),\ldots ,\lambda _{d}(X))U^{\top }.\end{aligned}$$

Define the matrix of the divided difference \(\Gamma (f,X)=[\gamma (f,X)_{ij}]\) with

$$\begin{aligned} \gamma (f,X)_{ij}={\left\{ \begin{array}{ll} \frac{f(\lambda _{i}(X))-f(\lambda _{j}(X))}{\lambda _{i}(X)-\lambda _{j}(X)},&{}\quad {\text{if }}\lambda _{i}(X)\ne \lambda _{j}(X)\\ f'(\lambda _{i}(X)), &{} \quad {\text{otherwise}} \end{array}\right. } \end{aligned}$$

Then for any \(G,H\in {\mathbb{S}}^{d}\), we have

$$\begin{aligned} D^{2}F(X)(G,H)=\sum _{i,j}\gamma (f',X)_{ij}\tilde{g}_{ij}\tilde{h}_{ij}, \end{aligned}$$

where \(\tilde{g}_{ij}\) and \(\tilde{h}_{ij}\) are the elements of the i-th row and j-th column of the matrix \(U^{\top } G U\) and \(U^{\top } H U\), respectively.

Lemma 3 implies the unsurprising positive semidefiniteness of \(D^{2}F(X)\) for convex f. Furthermore, the exact expression of the second differential allows us to show the local smoothness of \(\Phi ^{*}_{t}\) using the local smoothness of \(\phi ^{*}\). Together with Lemma 4, the local strong convexity of \(\Phi _{t}|_{\mathcal{X}}\) can be proved.

Lemma 4

Let \(\Phi :{\mathbb{X}}\rightarrow {\mathbb{R}}\) be a closed convex function such that \(\Phi ^{*}\) is twice differentiable at some \(\theta \in {\mathbb{X}}_{*}\) with positive definite \(D^{2}\Phi ^{*}(\theta )\in {\mathcal{L}}({\mathbb{X}}_{*},{\mathcal{L}}({\mathbb{X}}_{*},{\mathbb{R}}))\). Suppose that \(D^{2}\Phi ^{*}(\theta )(v,v)\le \Vert v\Vert _{*} ^{2}\) holds for all \(v\in {\mathbb{X}}_{*}\). Then we have \(D^{2}\Phi (D\Phi ^{*}(\theta ))(x,x)\ge \Vert x\Vert ^{2}\) for all \(x\in {\mathbb{X}}\).

Lemma 4 can be considered a generalised version of the local duality of smoothness and convexity proved in Ghai et al. (2020). The required positive definiteness of \(D^{2}\Phi ^{*}_{t}(\theta )\) is guaranteed by the exact expression of the second differential described in Lemma 3 and the fact that \(\phi ^{*\prime \prime }(\theta )> 0\) for all \(\theta \in {\mathbb{R}}\). Finally, using the construction of \({\mathcal{X}}\), the local strong convexity of \(\Phi _{t}|_{\mathcal{X}}\) can be extended to \(\Phi _{t}\). The complete proofs of Theorem 3 and the technical lemmata can be found in Appendix 2.1.
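To make the symmetrisation concrete, the following sketch numerically checks the spectral relation used in the proof, for a generic X with distinct singular values.

```python
import numpy as np

# The spectrum of S(X) consists of the singular values of X, their
# negatives and |m - n| zeros; an even phi therefore takes the same value
# on sigma(X) and on the spectrum (up to the factor 2).
m, n = 3, 5
X = np.random.randn(m, n)
S = np.block([[np.zeros((m, m)), X], [X.T, np.zeros((n, n))]])

abs_eig = np.sort(np.abs(np.linalg.eigvalsh(S)))[::-1]  # each sigma twice
sv = np.linalg.svd(X, compute_uv=False)
assert np.allclose(abs_eig[::2][:min(m, n)], sv)
```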

Given this strong convexity, the regret of applying (5) to AO-OMD and AO-FTRL can be upper bounded as in the following theorems.

Theorem 4

Let \({\mathcal{K}}\subseteq {\mathbb{R}}^{m,n}\) be a compact convex set. Assume that there is some \(D>0\) such that \(\Vert x\Vert _{1}\le D\) holds for all \(x\in {\mathcal{K}}\). Let \(\{x_{t}\}\) be the sequence generated by update rule (3) with regulariser (5) at iteration t. Setting \(\beta =\frac{1}{\min \{m,n\}}\), \(\eta =\sqrt{\frac{1}{\ln (D+1)+\ln \min \{m,n\}}}\), and \(\alpha _{t}=\eta \sqrt{\sum _{s=1}^{t-1}\Vert g_{s}-h_{s}\Vert ^{2}_{\infty }}\), we obtain

$$\begin{aligned} \begin{aligned} {\mathcal{R}}_{1:T}\le&r_{1}(x_{1})+c(m,n,D)\sqrt{\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert _{\infty }^{2}} \end{aligned} \end{aligned}$$

with \(c(m,n,D)\in {\mathcal{O}}(D\sqrt{\ln (D+1)+\ln \min \{m,n\}})\).

Theorem 5

Let \({\mathcal{K}}\subseteq {\mathbb{R}}^{m,n}\) be a compact convex set with \(\min \{m,n\}>e\). Assume that there is some \(D\ge 1\) such that \(\Vert x\Vert _{1}\le D\) holds for all \(x\in {\mathcal{K}}\). Let \(\{x_{t}\}\) be the sequence generated by update rule (4) with the time-varying regulariser (5). Setting \(\beta =\frac{1}{\min \{m,n\}}\), \(\eta =\sqrt{\frac{1}{\ln (D+1)+\ln \min \{m,n\}}}\) and \(\alpha _{t}=\eta \sqrt{\sum _{s=1}^{t-1}\Vert g_{s}-h_{s}\Vert _{\infty }^{2}}\), we obtain

$$\begin{aligned} \begin{aligned} {\mathcal{R}}_{1:T}\le \,&c(m,n,D)\sqrt{\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert _{\infty }^{2}},\\ \end{aligned} \end{aligned}$$

with \(c(m,n,D)\in {\mathcal{O}}(D\sqrt{\ln (D+1)+\ln \min \{m,n\}})\).

With regulariser (5), both AO-OMD and AO-FTRL guarantee a regret upper bound proportional to \(\sqrt{\ln \min \{m,n\}}\), which is the best known dependence on the size of the matrices.

5 Derived algorithms

Given \(z_{t+1}\in {\mathbb{X}}_{*}\) and a time-varying closed convex function \(R_{t+1}:{\mathcal{K}}\rightarrow {\mathbb{R}}\), we consider the following update rule

$$\begin{aligned} \begin{aligned} y_{t+1}&=\triangledown \phi ^{*}_{t+1}(z_{t+1})\\ x_{t+1}&=\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}R_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,y_{t+1}). \end{aligned} \end{aligned}$$
(6)

It is easy to verify that (6) is equivalent to

$$\begin{aligned} \begin{aligned} x_{t+1}=\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}R_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,y_{t+1})\\ =\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}R_{t+1}(x)+\phi _{t+1}(x)-\langle \triangledown \phi _{t+1}(y_{t+1}),x\rangle \\ =\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}R_{t+1}(x)+\phi _{t+1}(x)-\langle z_{t+1},x\rangle \\ \end{aligned} \end{aligned}$$

Setting \(z_{t+1}=\triangledown \phi _{t+1}(x_{t})-g_{t}+h_{t}-h_{t+1}\) and \(R_{t+1}=r_{t+1}\), we obtain the AO-OMD update

$$\begin{aligned} \begin{aligned} x_{t+1}=\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}\langle g_{t}-h_{t}+h_{t+1},x\rangle -\langle \triangledown \phi _{t+1}(x_{t}),x\rangle +\phi _{t+1}(x)+r_{t+1}(x)\\ =\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}\langle g_{t}-h_{t}+h_{t+1},x\rangle +r_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,x_{t}).\\ \end{aligned} \end{aligned}$$

Setting \(z_{t+1}=\triangledown \phi _{t+1}(x_{1})-g_{1:t}-h_{t+1}\) and \(R_{t+1}=r_{1:t+1}\), we obtain the AO-FTRL update

$$\begin{aligned} \begin{aligned} x_{t+1}=\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}\langle g_{1:t}-\triangledown \phi _{t+1}(x_{1})+h_{t+1},x\rangle +\phi _{t+1}(x)+r_{1:t+1}(x).\\ \end{aligned} \end{aligned}$$

The rest of this section focuses on solving the second line of (6) for some popular choices of \(r\) and \({\mathcal{K}}\).

5.1 Elastic net regularisation

We first consider the setting of \({\mathcal{K}}={\mathbb{R}}^{d}\) and \(R_{t+1}(x)=\gamma _{1} \Vert x\Vert _{1}+\frac{\gamma _{2}}{2}\Vert x\Vert ^{2}_{2}\), which has countless applications in machine learning. It is easy to verify that the Bregman divergence associated with \(\phi _{t+1}\) is given by

$$\begin{aligned} \begin{aligned} {\mathcal{B}}_{\phi _{t+1}}(x,y)=\,&\alpha _{t+1}\sum _{i=1}^{d}\left( (\vert x_{i} |+\beta )\ln \left( \frac{\vert x_{i} |}{\beta }+1\right) -\vert x_{i} |\right. \\&\left. -\,({\text{sgn}}(y_{i})x_{i}+\beta )\ln \left( \frac{\vert y_{i} |}{\beta }+1\right) +\vert y_{i} |\right) . \end{aligned} \end{aligned}$$

The minimiser of

$$\begin{aligned} R_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,y_{t+1}) \end{aligned}$$

in \({\mathbb{R}}^{d}\) can be simply obtained by setting the subgradient to 0. For \(\ln (\frac{\vert y_{i,t+1} |}{\beta }+1)\le \frac{\gamma _{1}}{\alpha _{t+1}}\), we set \(x_{i,t+1}=0\). Otherwise, the 0 subgradient implies \({\text{sgn}}(x_{i,t+1})={\text{sgn}}(y_{i,t+1})\), with \(\vert x_{i,t+1} |\) given by the root of

$$\begin{aligned} \begin{aligned} \ln \left( \frac{\vert y_{i,t+1} |}{\beta }+1\right) =\ln \left( \frac{\vert x_{i,t+1} |}{\beta }+1\right) +\frac{\gamma _{1}}{\alpha _{t+1}}+\frac{\gamma _{2}}{\alpha _{t+1}}\vert x_{i,t+1} | \end{aligned} \end{aligned}$$

for \(i=1,\ldots , d\). For simplicity, we set \(a=\beta\), \(b=\frac{\gamma _{2}}{\alpha _{t+1}}\) and \(c=\frac{\gamma _{1}}{\alpha _{t+1}}-\ln (\frac{\vert y_{i,t+1} |}{\beta }+1)\). It can be verified that \(\vert x_{i,t+1} |\) is given by

$$\begin{aligned} \vert x_{i,t+1} |=\frac{1}{b}W_{0}(ab\exp (ab-c))-a, \end{aligned}$$
(7)

where \(W_{0}\) is the principal branch of the Lambert W function and can be well approximated. For \(\gamma _{2}=0\), i.e. the \(\ell _{1}\) regularised problem, \(\vert x_{i,t+1} |\) has the closed-form solution

$$\begin{aligned} \vert x_{i,t+1} |=\beta \exp \left( \ln \left( \frac{\vert y_{i,t+1} |}{\beta }+1\right) -\frac{\gamma _{1}}{\alpha _{t+1}}\right) -\beta . \end{aligned}$$
(8)

The implementation is described in Algorithm 1.

Algorithm 1 (pseudocode figure)
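As a hedged sketch, the coordinate-wise computation of Algorithm 1 can be reconstructed from Eqs. (7) and (8) as follows; the function name and the vectorised bookkeeping are ours and may differ from the paper's pseudocode.

```python
import numpy as np
from scipy.special import lambertw

def elastic_net_update(y, alpha, beta, gamma1, gamma2):
    """Minimise R_{t+1}(x) + B_phi(x, y) coordinate-wise (sketch).

    R_{t+1}(x) = gamma1*||x||_1 + (gamma2/2)*||x||_2^2; alpha, beta as in (2).
    """
    z = np.log(np.abs(y) / beta + 1.0)        # ln(|y_i|/beta + 1)
    x = np.zeros_like(y)
    active = z > gamma1 / alpha               # otherwise the minimiser is 0
    if gamma2 == 0.0:
        # closed form (8)
        mag = beta * np.exp(z[active] - gamma1 / alpha) - beta
    else:
        # Lambert-function form (7); the principal branch is real here
        # since its argument is positive
        a, b = beta, gamma2 / alpha
        c = gamma1 / alpha - z[active]
        mag = np.real(lambertw(a * b * np.exp(a * b - c))) / b - a
    x[active] = np.sign(y[active]) * mag
    return x
```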

5.2 Nuclear and Frobenius regularisation

Similarly, we consider \({\mathcal{K}}={\mathbb{R}}^{m,n}\) with the regulariser \(R_{t+1}(x)=\gamma _{1} \Vert x\Vert _{1}+\frac{\gamma _{2}}{2}\Vert x\Vert ^{2}_{2}\), where \(\Vert \cdot \Vert _{1}\) and \(\Vert \cdot \Vert _{2}\) now denote the nuclear and Frobenius norms. The second line of update rule (6) can be implemented as follows

$$\begin{aligned} \begin{aligned} {\text{Compute SVD:}}\; y_{t+1}=\,&U_{t+1}{\text{diag}}(\tilde{y}_{t+1})V_{t+1}^{\top }\\ {\text{Apply Algorithm 1:}}\; \tilde{x}_{t+1}=\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathbb{R}}^{d}} R_{t+1}(x)+{\mathcal{B}}_{\phi _{t+1}}(x,\tilde{y}_{t+1})\\ {\text{Construct:}}\; x_{t+1}=\,&U_{t+1}{\text{diag}}(\tilde{x}_{t+1})V_{t+1}^{\top }. \end{aligned} \end{aligned}$$
(9)

Let \(y_{t+1}\) and \(\tilde{y}_{t+1}\) be as defined in (9). It is easy to verify

$$\begin{aligned} \begin{aligned}&\mathop {{\text{arg min}}}\limits _{x\in {\mathbb{R}}^{m,n}} R_{t+1}(x)+{\mathcal{B}}_{\Phi _{t+1}}(x,y_{t+1})\\ =\,&\mathop {{\text{arg min}}}\limits _{x\in {\mathbb{R}}^{m,n}} R_{t+1}(x)+\Phi _{t+1}(x)-\langle U_{t+1}{\text{diag}}(\triangledown \phi _{t+1}(\tilde{y}_{t+1}))V_{t+1}^{\top },x\rangle _{F}. \end{aligned} \end{aligned}$$
(10)

From the characterisation of the subgradient, it follows that

$$\begin{aligned} \begin{aligned} \triangledown R_{t+1}(x)=U {\text{diag}}(\gamma _{1} {\text{sgn}}(\sigma (x))+\gamma _{2} \sigma (x)) V^{\top }, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \triangledown \Phi _{t}(x)=U {\text{diag}}(\triangledown \phi _{t}(\sigma (x))) V^{\top }, \end{aligned} \end{aligned}$$

where \(x=U{\text{diag}}(\sigma (x))V^{\top }\) is the SVD of x. Similar to the case in \({\mathbb{R}}^{d}\), \(\tilde{x}_{t+1}\) is the root of

$$\begin{aligned} \gamma _{1} {\text{sgn}}(\sigma (x))+\gamma _{2} \sigma (x)+\triangledown \phi _{t+1}(\sigma (x))=\triangledown \phi _{t+1}(\tilde{y}_{t+1}). \end{aligned}$$

The subgradient of the objective (10) at \(x_{t+1}=U_{t+1}{\text{diag}}(\tilde{x}_{t+1})V_{t+1}^{\top }\) is clearly 0.
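Correspondingly, a sketch of update (9) simply wraps the vector solver with an SVD; `elastic_net_update` refers to the sketch above.

```python
import numpy as np

def spectral_elastic_net_update(Y, alpha, beta, gamma1, gamma2):
    """Update (9): solve the matrix problem on the singular values (sketch)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = elastic_net_update(s, alpha, beta, gamma1, gamma2)
    return U @ np.diag(s_new) @ Vt
```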

5.3 Projection onto the cross-polytope

Next, we consider the setting where \(r_{t}\) is the zero function and \({\mathcal{K}}\) is the \(\ell _{1}\) ball with radius D. Clearly, we simply set \(x_{t+1}=y_{t+1}\) for \(\Vert y_{t+1}\Vert _{1}\le D\). Otherwise, Algorithm 2 describes a sorting-based procedure projecting \(y_{t+1}\) onto the \(\ell _{1}\) ball with time complexity \({\mathcal{O}}(d\log d)\). The correctness of the algorithm is shown in the lemma below.

Algorithm 2 (pseudocode figure)
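Algorithm 2 is not reproduced here; the following is one plausible sorting-based realisation, derived from the KKT conditions of the Bregman projection. The thresholding constants are our reconstruction and may differ in detail from the paper's Algorithm 2.

```python
import numpy as np

def project(y, D, beta):
    """Bregman projection of y onto the l1 ball of radius D (reconstruction).

    The KKT conditions of min B_phi(x, y) s.t. ||x||_1 <= D give
    |x_i| = max((|y_i| + beta) * exp(-lam) - beta, 0) for a multiplier
    lam > 0, and exp(-lam) is fixed by the active set found after sorting.
    """
    if np.abs(y).sum() <= D:
        return y.copy()
    u = np.sort(np.abs(y))[::-1]          # |y_i| in descending order
    csum = np.cumsum(u + beta)            # sum_{i<=k} (|y_i| + beta)
    k = np.arange(1, len(u) + 1)
    scale = (D + k * beta) / csum         # candidate exp(-lam) per k
    valid = (u + beta) * scale > beta     # k-th coordinate stays active
    k_star = np.max(np.nonzero(valid)[0])
    s = scale[k_star]
    x_abs = np.maximum((np.abs(y) + beta) * s - beta, 0.0)
    return np.sign(y) * x_abs
```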

Lemma 5

Let \(y\in {\mathbb{R}}^{d}\) with \(\Vert y\Vert _{1}>D\) and \(x^{*}\) as returned by Algorithm 2, then we have

$$\begin{aligned} x^{*}\in \mathop {{\text{arg min}}}\limits _{x\in {\mathcal{K}}}{\mathcal{B}}_{\phi _{t+1}}(x,y). \end{aligned}$$

For the case that \({\mathcal{K}}\subseteq {\mathbb{R}}^{m,n}\) is the nuclear ball with radius D and \(\Vert y_{t+1}\Vert _{1}> D\), we need to solve the problem

$$\begin{aligned} \min _{x\in {\mathcal{K}}} \Phi _{t+1}(x)-\langle U_{t+1}{\text{diag}}(\triangledown \phi _{t+1}(\tilde{y}_{t+1}))V_{t+1}^{\top },x\rangle _{F}, \end{aligned}$$

where the constant part of the Bregman divergence is removed. By von Neumann's trace inequality, the Frobenius inner product is upper bounded by

$$\begin{aligned} \langle U_{t+1}{\text{diag}}(\triangledown \phi _{t+1}(\tilde{y}_{t+1}))V_{t+1}^{\top },x\rangle _{F}\le \sigma (x)^{\top }\triangledown \phi _{t+1}(\tilde{y}_{t+1}). \end{aligned}$$

The equality holds when x and \(U_{t+1}{\text{diag}}(\triangledown \phi _{t+1}(\tilde{y}_{t+1}))V_{t+1}^{\top }\) share a simultaneous SVD, i.e. the minimiser has an SVD of the form

$$\begin{aligned} x=U_{t+1}{\text{diag}}(\sigma (x))V_{t+1}^{\top }. \end{aligned}$$

Thus the problem is reduced to

$$\begin{aligned} \begin{aligned} \min _{x\in {\mathbb{R}}^{\min \{m,n\}}}&\phi _{t+1}(x)-\triangledown \phi _{t+1}(\tilde{y}_{t+1})^{\top } x\\ {\text{s.t.}} \quad&\sum _{i=1}^{\min \{m,n\}} x_{i}\le D\\&x_{i}\ge 0 {\text{ for all }} i=1,\ldots ,\min \{m,n\}, \\ \end{aligned} \end{aligned}$$

which can be solved by Algorithm 2. The projection step of update rule (6) can thus be implemented as follows

$$\begin{aligned} \begin{aligned} {\text{Compute SVD:}}\; y_{t+1}=\,&U_{t+1}{\text{diag}}(\tilde{y}_{t+1})V_{t+1}^{\top }\\ {\text{Apply Algorithm 2:}}\; \tilde{x}_{t+1}=\,&{\text{project}}(\tilde{y}_{t+1},D,\beta )\\ {\text{Construct:}}\; x_{t+1}=\,&U_{t+1}{\text{diag}}(\tilde{x}_{t+1})V_{t+1}^{\top }. \end{aligned} \end{aligned}$$
(11)

5.4 Stochastic acceleration

Finally, we consider the stochastic optimisation problem of the form

$$\begin{aligned} \min _{x\in {\mathcal{K}}} l(x)+r(x), \end{aligned}$$

where \(l:{\mathbb{X}}\rightarrow {\mathbb{R}}\) and \(r:{\mathcal{K}}\rightarrow {\mathbb{R}}_{\ge 0}\) are closed convex functions. In the stochastic setting, instead of having direct access to \(\triangledown l\), we query a stochastic gradient \(g_{t}\) of l at \(z_{t}\) in each iteration t with \({\mathbb{E}}[g_{t}|z_{t}]\in \partial l(z_{t})\). Algorithms with a regret bound of the form \({\mathcal{O}}(\sqrt{\sum _{t=1}^{T}\Vert g_{t}-h_{t}\Vert _{*} ^{2}})\) can easily be converted into stochastic optimisation algorithms by applying the update rule to the scaled stochastic gradient \(a_{t}g_{t}\) with hint \(a_{t+1}g_{t}\), as described in Algorithm 3. Joulani et al. (2020) has shown the convergence of accelerated AdaGrad for problems in \({\mathbb{R}}^{d}\). We extend the result to any finite-dimensional normed vector space in the following corollary.

Algorithm 3 (pseudocode figure)
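The following is a sketch of the conversion in Algorithm 3, following the description above and the anytime online-to-batch scheme of Joulani et al. (2020) and Cutkosky (2019); the exact bookkeeping of the paper's Algorithm 3 may differ, and `learner.step(loss_grad, hint)` is an assumed interface returning \(x_{t+1}\).

```python
import numpy as np

def accelerated_run(learner, stochastic_grad, x1, T, a=lambda t: t):
    """Online-to-batch conversion with weighted averaging (sketch)."""
    x, z_num, a_sum = x1, np.zeros_like(x1), 0.0
    for t in range(1, T + 1):
        a_t = a(t)
        z_num, a_sum = z_num + a_t * x, a_sum + a_t
        z = z_num / a_sum                        # query point z_t
        g = stochastic_grad(z)                   # E[g | z] in the subdifferential of l at z
        x = learner.step(a_t * g, a(t + 1) * g)  # scaled gradient and hint
    return z_num / a_sum                         # z_T, the returned solution
```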

Corollary 1

Let \(({\mathbb{X}},\Vert \cdot \Vert )\) be a finite-dimensional normed vector space and \({\mathcal{K}}\subseteq {\mathbb{X}}\) a compact convex set. Let \(\mathcal{A}\) be some optimistic algorithm generating \(x_{t}\in {\mathcal{K}}\) at iteration t. Denote by

$$\begin{aligned} \nu _{t}^{2}={\mathbb{E}}[\Vert g_{t}-\triangledown l(z_{t})\Vert _{*} ^{2}|z_{t}] \end{aligned}$$

the variance. If \(\mathcal{A}\) has a regret upper bound in the form of

$$\begin{aligned} c_{1}+c_{2}\sqrt{\sum _{t=1}^{T}\Vert a_{t}(g_{t}-g_{t-1})\Vert _{*} ^{2}} \end{aligned}$$

then there is some \(L>0\) such that the error incurred by Algorithm 3 is upper bounded by

$$\begin{aligned} \begin{aligned} {\mathbb{E}}[f(z_{T})-f(x)]\le \,&\frac{c_{1}+c_{2}\sqrt{8\sum _{t=1}^{T}a_{t}^{2}(\nu _{t}^{2}+L^{2})}}{a_{1:T}}.\\ \end{aligned} \end{aligned}$$

Furthermore, if l is M-smooth, then we have

$$\begin{aligned} \begin{aligned} {\mathbb{E}}[f(z_{T})-f(x)]\le \,&\frac{ c_{1}+c_{2}\sqrt{8\sum _{t=1}^{T}a_{t}^{2}\nu _{t}^{2}}+\sqrt{2}c_{2}L+2Mc_{2}^{2}}{a_{1:T}}.\\ \end{aligned} \end{aligned}$$

Setting \(a_{t}=t\), we obtain a convergence rate of \({\mathcal{O}}(\frac{c_{2}}{\sqrt{T}})\) in the general case, and \({\mathcal{O}}(\frac{c_{2}}{T^{2}}+\frac{c_{2}\max _{t}\nu _{t}}{\sqrt{T}})\) for smooth loss functions. Applying update rule (3) or (4) with regulariser (2) or (5) in Algorithm 3, the constant \(c_{2}\) is proportional to \(\sqrt{\ln d}\) and \(\sqrt{\ln (\min \{m,n\})}\) for \({\mathbb{X}}={\mathbb{R}}^{d}\) and \({\mathbb{X}}={\mathbb{R}}^{m,n}\) respectively, while accelerated AdaGrad has a linear dependence on the dimensionality (Joulani et al., 2020).

6 Experiments

This section presents the empirical evaluation of the developed algorithms. We carry out experiments on both synthetic and real-world data and demonstrate the performance of the OMD-based (Exp-MD) and FTRL-based (Exp-FTRL) algorithms built on the exponentiated update.

6.1 Online logistic regression

For a sanity check, we simulate a d-dimensional online logistic regression problem, in which the model parameter \(w^{*}\) has \(99\%\) sparsity and the non-zero values are randomly drawn from the uniform distribution over \([-1,1]\). At each iteration t, we sample a random feature vector \(x_{t}\) from the uniform distribution over \([-1,1]^{d}\) and generate a label \(y_{t}\in \{-1,1\}\) using a logit model, i.e. \({\text{Pr}}[y_{t}=1]=(1+\exp (-w^{*\top } x_{t}))^{-1}\). The goal is to minimise the cumulative regret

$$\begin{aligned} {\mathcal{R}}_{1:T}=\sum _{t=1}^{T}l_{t}(w_{t})-\sum _{t=1}^{T}l_{t}(w^{*}) \end{aligned}$$

with \(l_{t}(w)=\ln (1+\exp (-y_{t}w^{\top } x_{t}))\). We choose \(d=10{,}000\) and compare our algorithms with AdaGrad, AdaFTRL (Duchi et al., 2011) and HU (Ghai et al., 2020). For both AdaGrad and AdaFTRL, we set the i-th diagonal entry of the proximal matrix \(H_{t}\) to \(h_{ii}=10^{-6}+\sum _{s=1}^{t-1}g_{s,i}^{2}\) as their theory suggests (Duchi et al., 2011). The stepsize of HU is set to \(\sqrt{\frac{1}{\sum _{s=1}^{t-1}\Vert g_{s}\Vert _{\infty }^{2}}}\), leading to an adaptive regret upper bound. All algorithms take decision variables from an \(\ell _{1}\) ball \(\{w\in {\mathbb{R}}^{d}|\Vert w\Vert _{1}\le D\}\), which is the ideal case for HU. We examine the performance of the algorithms with known, underestimated and overestimated \(\Vert w^{*}\Vert _{1}\) by setting \(D=\Vert w^{*}\Vert _{1}\), \(D=\frac{1}{2}\Vert w^{*}\Vert _{1}\) and \(D=2\Vert w^{*}\Vert _{1}\), respectively. For each choice of D, we simulate the online process of each algorithm for 10,000 iterations and repeat the experiments for 20 trials. A sketch of the data-generating process is given below.
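The sketch reconstructs the synthetic stream described above; variable names and the random seed are ours.

```python
import numpy as np

# A 99%-sparse ground truth w*, uniform features, and logit labels.
rng = np.random.default_rng(0)
d = 10_000
w_star = np.zeros(d)
support = rng.choice(d, size=d // 100, replace=False)   # 1% non-zero entries
w_star[support] = rng.uniform(-1.0, 1.0, size=support.size)

def sample_round():
    x = rng.uniform(-1.0, 1.0, size=d)
    p = 1.0 / (1.0 + np.exp(-w_star @ x))                # Pr[y = 1]
    y = 1 if rng.random() < p else -1
    return x, y
```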

Figure 2 plots the curves of the average cumulative regret with the ranges of standard deviation as shaded regions. As can be observed, our algorithms have a clear and stable advantage over the AdaGrad-style algorithms and slightly outperform HU in the experiments with known \(\Vert w^{*}\Vert _{1}\). As the combination of the entropy-like regulariser and FTRL can also be used for parameter-free optimisation (Cutkosky & Boahen, 2017a), overestimating \(\Vert w^{*}\Vert _{1}\) does not have a tangible impact on the performance of Exp-FTRL, which leads to its clear advantage over the rest.

Fig. 2: Online logistic regression

6.2 Online multitask learning

Next, we examine the performance of the developed spectral algorithms on a simulated online multi-task learning problem (Kakade et al., 2012), in which we need to solve k highly correlated d-dimensional online prediction problems simultaneously. The data are generated as follows. We first randomly draw two orthogonal matrices \(U\in {\text{GL}}(d,{\mathbb{R}})\) and \(V\in {\text{GL}}(k,{\mathbb{R}})\). Then we generate a k-dimensional vector \(\sigma\) with r non-zero values randomly drawn from the uniform distribution over [0, 10] and construct a low-rank parameter matrix \(W^{*}=U{\text{diag}}(\sigma )V\). At each iteration t, k feature and label pairs \((x_{t,1},y_{t,1}),\ldots ,(x_{t,k},y_{t,k})\) are generated using k logit models, with the i-th parameter vector taken from the i-th row of \(W^{*}\). The loss function is given by \(l_{t}(W)=\sum _{i=1}^{k}\ln (1+\exp (-y_{t,i}w_{i}^{\top } x_{t,i}))\). We set \(d=100\), \(k=25\) and \(r=5\), take the nuclear ball \(\{W\in {\mathbb{R}}^{d,k}|\Vert W\Vert _{1}\le D\}\) as the decision set and run the experiment as in Sect. 6.1. The average and standard deviation of the results over 20 trials are shown in Fig. 3.

Fig. 3: Online multitask learning

Similar to the online logistic regression, our algorithms have a clear advantage over AdaGrad and AdaFTRL and slightly outperform HU in all settings. While the regret of the AdaGrad-style algorithms spreads over a wider range, our algorithms yield comparatively stable results. The superiority of Exp-FTRL for the overestimated \(\Vert W^{*}\Vert _{1}\) can also be observed in Fig. 3c.

6.3 Optimisation for contrastive explanations

Generating contrastive explanations of a machine learning model (Dhurandhar et al., 2018) is the primary motivating application of this paper. Given a sample \(x_{0}\in {\mathcal{X}}\) and a machine learning model \(f:{\mathcal{X}}\rightarrow {\mathbb{R}}^{K}\), the contrastive explanation consists of a set of pertinent positive (PP) features and a set of pertinent negative (PN) features, which can be found by solving the following optimisation problem (Dhurandhar et al., 2018)

$$\begin{aligned} \begin{aligned} \min _{x\in {\mathcal{W}}}\quad&l_{x_{0}}(x)+\lambda _{1}\Vert x\Vert _{1}+\frac{\lambda _{2}}{2}\Vert x\Vert _{2}^{2}.\\ \end{aligned} \end{aligned}$$

Let \(\kappa \ge 0\) be a constant and define \(k_{0}=\arg \max _{i}f(x_{0})_{i}\). The loss function for finding PP is given by

$$\begin{aligned} l_{x_{0}}(x)=\max \left\{ \max _{i\ne k_{0}}f(x)_{i}-f(x)_{k_{0}},-\kappa \right\} , \end{aligned}$$

which imposes a penalty on the features that do not justify the prediction. PN is the set of features altering the final classification and is modelled by the following loss function

$$\begin{aligned} l_{x_{0}}(x)=\max \left\{ f(x_{0}+x)_{k_{0}}-\max _{i\ne k_{0}}f(x_{0}+x)_{i},-\kappa \right\} . \end{aligned}$$

In the experiment, we first train a ResNet20 model (He et al., 2016) on the CIFAR-10 dataset (Krizhevsky, 2009), which attains a test accuracy of \(91.49\%\). For each class of the images, we randomly pick 100 correctly classified images from the test dataset and generate PP and PN for them. For PP, we take the set of all feasible images as the decision set, while for PN, we take the set of tensors x, such that \(x_{0}+x\) is a feasible image.

We first consider the white-box setting, in which we have access to \(\triangledown l_{x_{0}}\). Our goal is to demonstrate the performance of the accelerated AO-OMD and AO-FTRL based on the exponentiated update (AccAOExpMD and AccAOExpFTRL). In Dhurandhar et al. (2018), the fast iterative shrinkage-thresholding algorithm (FISTA) (Beck & Teboulle, 2009) is applied to finding the PP and PN; we therefore take FISTA as our baseline. In addition, our algorithms are compared with the accelerated AO-OMD and AO-FTRL with AdaGrad-style stepsizes (AccAOMD and AccAOFTRL) (Joulani et al., 2020).

We pick \(\lambda _{1}=\lambda _{2}=\frac{1}{2}\), which is the largest value from the set \(\{2^{-i}|i\in {\mathbb{N}}\}\) allowing FISTA to attain a negative loss \(l_{x_{0}}\) for 10 randomly selected images. All algorithms start from \(x_{1}=0\). Figure 4 plots the convergence behaviour of the five algorithms, averaged over the 1000 images. In the experiment for PP, our algorithms clearly outperform the AdaGrad-style algorithms. Although FISTA converges faster in the first 100 iterations, it does not make further progress afterwards due to the tiny stepsize found by the backtracking rule. In the experiment for PN, all algorithms behave similarly. It is worth pointing out that the backtracking rule of FISTA requires multiple function evaluations, which are expensive for explaining deep neural networks.

Fig. 4: White box contrastive explanations on CIFAR-10

Next, we consider the black-box setting, in which the gradient is estimated with the two-point estimator

$$\begin{aligned} \frac{1}{b}\sum _{i=1}^{b}\frac{\delta }{\mu }(f(x+\mu v_{i})-f(x))v_{i}, \end{aligned}$$

where \(\delta\), \(\mu\) are constants and \(v_{i}\) is a random vector. Following Chen et al. (2019), we set \(\delta =d\) and sample \(v_{i}\) independently from the uniform distribution over the unit sphere for the AdaGrad-style algorithms. Since the convergence of our algorithms depends on the variance of the gradient estimation in \(({\mathbb{R}}^{d}, \Vert \cdot \Vert _{\infty })\), we set \(\delta =1\) and sample the coordinates \(v_{i,1},\ldots ,v_{i,d}\) independently from the Rademacher distribution, according to Corollary 3 in Duchi et al. (2015). To ensure a small bias of the gradient estimation, we set \(\mu =\frac{1}{\sqrt{dT}}\), which is the recommended value for non-convex and constrained optimisation in Chen et al. (2019). The performance of the algorithms is examined in the high and low variance settings with \(b=1\) and \(b=\sqrt{T}\), respectively. Since the problem is stochastic, FISTA, which searches for a stepsize at each iteration, is not practical; we therefore remove it from the comparison.
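For reference, a sketch of the two-point estimator with the Rademacher directions used for our algorithms (\(\delta =1\)) is given below; `loss` stands for the scalar objective \(l_{x_{0}}\) treated as a black box, and the function name is ours.

```python
import numpy as np

def two_point_grad(loss, x, mu, b, rng):
    """Two-point gradient estimate with Rademacher directions (sketch)."""
    g = np.zeros_like(x)
    fx = loss(x)
    for _ in range(b):
        v = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher coordinates
        g += (loss(x + mu * v) - fx) / mu * v      # delta = 1
    return g / b                                    # average over b samples
```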

Figure 5 plots the convergence behaviour of the algorithms in the high variance setting. Our algorithms outperform the AdaGrad-style algorithms for generating both PP and PN. Furthermore, the FTRL-based algorithms converge faster than the MD-based ones in the first few iterations, leading to overall better performance. The experimental results of the low variance setting are plotted in Fig. 6. Though AccAOExpFTRL yields the smallest objective value at the beginning of the experiments, it gets stuck in a local minimum around 0 and is outperformed by AccAOExpMD and AccAOFTRL in later iterations. Overall, the algorithms based on the exponentiated update have an advantage over the AdaGrad-style algorithms in both the high and low variance settings.

Fig. 5: Black box contrastive explanations: high variance setting

Fig. 6: Black box contrastive explanations: low variance setting

7 Conclusion

This paper proposes and analyses a family of online optimisation algorithms based on an entropy-like regulariser combined with the ideas of optimism and adaptivity. The proposed algorithms have adaptive regret bounds depending logarithmically on the dimensionality of the problem, can handle popular composite objectives, and can easily be converted into stochastic optimisation algorithms with optimal accelerated convergence rates for smooth functions. As a future research direction, we plan to analyse the convergence of the proposed algorithms together with variance reduction techniques for non-convex stochastic optimisation and to examine their empirical performance for training deep neural networks.