1 Introduction

In the “big data” era, the sizes of datasets and the numbers of decision variables arising in many areas such as health care, the Internet, economics, and engineering are becoming tremendously large [1]. This motivates the development of new computational approaches that efficiently utilize modern multi-core computers or computing clusters.

In this paper, we consider the block-structured optimization problem

$$\begin{aligned} \mathop {{{\mathrm{minimize}}}}\limits _{{\mathbf {x}}\in \mathbb {R}^n} F({\mathbf {x}})\equiv f({\mathbf {x}}_1,\cdots ,{\mathbf {x}}_m)+\sum _{i=1}^m r_i({\mathbf {x}}_i), \end{aligned}$$
(1.1)

where \({\mathbf {x}}=({\mathbf {x}}_1,\ldots ,{\mathbf {x}}_m)\) is partitioned into m disjoint blocks, f has a Lipschitz continuous gradient (possibly nonconvex), and \(r_i\)’s are (possibly nondifferentiable) proper closed convex functions. Note that \(r_i\)’s can be extended-valued, and thus, (1.1) can have block constraints \({\mathbf {x}}_i\in X_i\) by incorporating the indicator function of \(X_i\) in \(r_i\) for all i.

Many applications can be formulated in the form of (1.1), including classic machine learning problems such as the support vector machine (squared hinge loss and its dual formulation) [2], the least absolute shrinkage and selection operator (LASSO) [3], and logistic regression (linear or multilinear) [4], as well as subspace learning problems such as sparse principal component analysis [5] and nonnegative matrix or tensor factorization [6], just to name a few.

To solve these problems with extremely large-scale datasets and many variables, first-order methods and stochastic methods have become particularly popular because of their scalability to the problem size; examples include the fast iterative shrinkage-thresholding algorithm (FISTA) [7], stochastic approximation [8], randomized coordinate descent [9], and their combinations [10, 11]. Recently, much effort has been devoted to parallelizing these methods, and in particular, asynchronous parallel (async-parallel) methods have attracted more attention than their synchronous counterparts (e.g., [12, 13]), partly due to their better speedup performance.

This paper focuses on the async-parallel block coordinate update (async-BCU) method (see Algorithm 1) for solving (1.1). To the best of our knowledge, all works on async-BCU before 2013 consider a deterministic selection of blocks, with the exception of [14], and thus they require strong conditions (like a contraction) for convergence. Recent works, e.g., [12, 13, 15, 16], employ randomized block selection and significantly weaken the convergence requirements. However, all of them require bounded delays and/or are restricted to convex problems: the work [15] allows unbounded delays but requires convexity, and [17, 18] do not assume convexity but require bounded delays. We consider unbounded delays and deal with nonconvex problems.

1.1 Algorithm

We describe the async-BCU method as follows. Assume there are \(p+1\) processors, and the data and variable \({\mathbf {x}}\) are accessible to all processors. We let all processors continuously and asynchronously update the variable \({\mathbf {x}}\) in parallel. At each time k, one processor reads the variable \({\mathbf {x}}\) as \(\hat{{\mathbf {x}}}^k\) from the global memory, randomly picks a block \(i_k\in \{1,2,\ldots ,m\}\), and updates \({\mathbf {x}}_{i_k}\) by a prox-linear update while keeping all other blocks unchanged. The pseudocode is summarized in Algorithm 1, where the \({\mathbf {prox}}\) operator is defined in (1.3).

The algorithm first appeared in [12], where the age of \({\hat{{\mathbf {x}}}}^k\) relative to \({\mathbf {x}}^k\), which we call the delay of iteration k, was assumed to be bounded by a certain integer \(\tau \). For general convex problems, sublinear convergence was established, and for the strongly convex case, linear convergence was shown. However, its convergence for nonconvex problems and/or with unbounded delays was unknown. In addition, numerically, the stepsize is difficult to tune because it depends on \(\tau \), which is unknown before the algorithm completes.

[Algorithm 1: Async-parallel block coordinate update (async-BCU) — pseudocode]
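For concreteness, the following is a minimal serial Python sketch of the update performed in Algorithm 1, applied to a toy LASSO instance; the asynchrony is only simulated by sampling a delay \(j_k\), and all function names and parameter values are illustrative rather than part of the paper.

```python
import numpy as np

# A minimal serial simulation of the async-BCU update in Algorithm 1 on a toy
# LASSO instance f(x) = 0.5*||A x - b||^2, r_i(x_i) = lam*||x_i||_1.  The
# asynchrony is mimicked by reading a delayed copy x^{k-j_k}; all parameter
# values below are illustrative.
np.random.seed(0)
n, m_blocks, lam = 40, 8, 0.1
A, b = np.random.randn(60, n), np.random.randn(60)
blocks = np.array_split(np.arange(n), m_blocks)        # disjoint blocks x_1, ..., x_m

grad_f = lambda x: A.T @ (A @ x - b)                   # gradient of the smooth part
prox_l1 = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
obj = lambda x: 0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum()

L_c = max(np.linalg.norm(A[:, idx].T @ A[:, idx], 2) for idx in blocks)
eta = 0.2 / L_c                                        # conservative stepsize (illustrative)

x, history = np.zeros(n), [np.zeros(n)]
for k in range(3000):
    j_k = min(np.random.poisson(3), k)                 # sampled delay j_k
    x_hat = history[-(j_k + 1)]                        # delayed read: \hat{x}^k = x^{k-j_k}
    i_k = np.random.randint(m_blocks)                  # uniformly sampled block
    idx = blocks[i_k]
    x = x.copy()
    x[idx] = prox_l1(x[idx] - eta * grad_f(x_hat)[idx], eta * lam)  # prox-linear step on block i_k
    history.append(x)

print("objective: initial", obj(history[0]), "-> final", obj(x))
```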

1.2 Contributions

We summarize our contributions as follows:

  • We analyze the convergence of Algorithm 1 and allow for large unbounded delays following a certain distribution. We require the delays to have certain bounded expected quantities (e.g., expected delay, variance of delay). Our results are more general than those requiring bounded delays such as [12, 16].

  • Both nonconvex and convex problems are analyzed, and those problems include both smooth and nonsmooth functions. For nonconvex problems, we establish the global convergence in terms of first-order optimality conditions and show that any limit point of the iterates is a critical point almost surely. It appears to be the first result of an async-BCU method for general nonconvex problems and allowing unbounded delays. For weakly convex problems, we establish a sublinear convergence result, and for strongly convex problems, we show the linear convergence.

  • We show that if all \(p+1\) processors run at the same speed, the delay follows the Poisson distribution with parameter p. In this case, all the relevant expected quantities can be explicitly computed and are bounded. By setting appropriate stepsizes, we can reach a near-linear speedup if \(p=o(\sqrt{m})\) for smooth cases and \(p=o(\root 4 \of {m})\) for nonsmooth cases.

  • When the delay follows the Poisson distribution, we can explicitly set the stepsize based on the delay expectation (which equals p). We simulate the async-BCU method on one convex problem, LASSO, and one nonconvex problem, nonnegative matrix factorization. The results demonstrate that async-BCU performs consistently better with a stepsize set based on the expected delay than with one based on the maximum delay. The number of processors is known, while the maximum delay is not; hence, the setting based on the expected delay is practically more useful.

Our algorithm updates one (block) coordinate of \({\mathbf {x}}\) in each step and is sharply different from stochastic gradient methods, which sample one function in each step to update all coordinates of \({\mathbf {x}}\). While there are async-parallel algorithms in both classes, and how to handle delays is important to the convergence of each, their basic lines of analysis differ in how the delay-induced errors are absorbed. The results of the two classes are in general not comparable. That said, for problems with certain proper structures, it is possible to apply both coordinate-wise updates and stochastic sampling (e.g., [11, 18,19,20]), and our results apply to the coordinate part.

1.3 Notation and Assumptions

Throughout the paper, bold lowercase letters \({\mathbf {x}}, {\mathbf {y}},\ldots \) are used for vectors. We denote \({\mathbf {x}}_i\) as the ith block of \({\mathbf {x}}\) and \(U_i\) as the ith sampling matrix, i.e., \(U_i {\mathbf {x}}\) is a vector with \({\mathbf {x}}_i\) as its ith block and \(\mathbf {0}\) for the remaining ones. \(\textit{E}_{i_k}\) denotes the expectation with respect to \(i_k\) conditioned on all previous history, and \([m]=\{1,\ldots ,m\}\).

We consider the Euclidean norm denoted by \(\Vert \cdot \Vert \), but all our results can be directly extended to problems with general primal and dual norms in a Hilbert space.

The projection to a convex set X is defined as

$$\begin{aligned} {\mathcal {P}}_X({\mathbf {y}})=\mathop {{{\mathrm{arg\,min}}}}\limits _{{\mathbf {x}}\in X} \Vert {\mathbf {x}}-{\mathbf {y}}\Vert ^2, \end{aligned}$$

and the proximal mapping of a convex function h is defined as

$$\begin{aligned} {\mathbf {prox}}_h({\mathbf {y}})=\mathop {{{\mathrm{arg\,min}}}}\limits _{\mathbf {x}}h({\mathbf {x}})+\tfrac{1}{2}\Vert {\mathbf {x}}-{\mathbf {y}}\Vert ^2. \end{aligned}$$
(1.3)
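For example, taking \(h=t\Vert \cdot \Vert _1\) in (1.3) gives coordinate-wise soft thresholding, and taking h as the indicator function of a box recovers the projection above. A minimal illustrative sketch (not part of the paper's analysis):

```python
import numpy as np

def prox_l1(y, t):
    # prox_{t*||.||_1}(y) = argmin_x t*||x||_1 + 0.5*||x - y||^2  (soft thresholding)
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def proj_box(y, lo, hi):
    # P_X(y) for the box X = [lo, hi]^n, i.e., the prox of the indicator of X
    return np.clip(y, lo, hi)

y = np.array([1.5, -0.3, 0.0, 2.0])
print(prox_l1(y, 0.5))        # -> [ 1.  -0.   0.   1.5]
print(proj_box(y, 0.0, 1.0))  # -> [ 1.   0.   0.   1. ]
```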

Definition 1.1

(Critical point) A point \({\mathbf {x}}^*\) is a critical point of (1.1) if \(\mathbf {0}\in \nabla f({\mathbf {x}}^*)+\partial R({\mathbf {x}}^*),\) where \(\partial R({\mathbf {x}})\) denotes the subdifferential of R at \({\mathbf {x}}\) and

$$\begin{aligned} R({\mathbf {x}})=\sum _{i=1}^m r_i({\mathbf {x}}_i). \end{aligned}$$
(1.4)

Throughout our analysis, we make the following three assumptions on problem (1.1) and Algorithm 1. Other assumed conditions will be specified when needed.

Assumption 1.1

The function F is lower bounded. The problem (1.1) has at least one solution, and the solution set is denoted as \(X^*\).

Assumption 1.2

\(\nabla f({\mathbf {x}})\) is Lipschitz continuous with constant \(L_f\), namely,

$$\begin{aligned} \Vert \nabla f({\mathbf {x}})-\nabla f({\mathbf {y}})\Vert \leqslant L_f\Vert {\mathbf {x}}-{\mathbf {y}}\Vert ,\,\forall {\mathbf {x}},~{\mathbf {y}}. \end{aligned}$$
(1.5)

In addition, for each \(i\in [m]\), fixing all block coordinates but the ith one, \(\nabla f({\mathbf {x}})\) and \(\nabla _i f({\mathbf {x}})\) are Lipschitz continuous with respect to \({\mathbf {x}}_i\) with constants \(L_r\) and \(L_c\), respectively, i.e., for any \({\mathbf {x}},~{\mathbf {y}}\), and i,

$$\begin{aligned}&\Vert \nabla f({\mathbf {x}})-\nabla f({\mathbf {x}}+ U_i {\mathbf {y}})\Vert \leqslant L_r\Vert {\mathbf {y}}_i\Vert ,\nonumber \\&\Vert \nabla _i f({\mathbf {x}})-\nabla _i f({\mathbf {x}}+ U_i {\mathbf {y}})\Vert \leqslant L_c\Vert {\mathbf {y}}_i\Vert . \end{aligned}$$
(1.6)

From (1.6), we have that for any \({\mathbf {x}},~{\mathbf {y}}\), and i,

$$\begin{aligned} f({\mathbf {x}}+U_i{\mathbf {y}})\leqslant f({\mathbf {x}})+\langle \nabla _i f({\mathbf {x}}), {\mathbf {y}}_i\rangle +\tfrac{L_c}{2}\Vert {\mathbf {y}}_i\Vert ^2. \end{aligned}$$
(1.7)

We denote \(\kappa =\frac{L_r}{L_c}\) as the condition number.
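To make these constants concrete, for a quadratic \(f({\mathbf {x}})=\tfrac{1}{2}{\mathbf {x}}^{\mathrm T}A{\mathbf {x}}-{\mathbf {b}}^{\mathrm T}{\mathbf {x}}\) with A symmetric positive semidefinite, one may take \(L_f=\Vert A\Vert _2\), \(L_r=\max _i\Vert A_{:,i}\Vert _2\) (largest column-block norm), and \(L_c=\max _i\Vert A_{ii}\Vert _2\) (largest diagonal-block norm). The sketch below is an illustrative numerical check of these choices and of (1.7); it is not part of the analysis.

```python
import numpy as np

# For f(x) = 0.5*x^T A x - b^T x (A symmetric PSD), valid constants in (1.5)-(1.6) are
# L_f = ||A||_2, L_r = max_i ||A[:, idx_i]||_2, L_c = max_i ||A[idx_i, idx_i]||_2.
np.random.seed(1)
n, m_blocks = 30, 5
M = np.random.randn(n, n)
A = M.T @ M / n                                   # symmetric PSD Hessian (illustrative)
b = np.random.randn(n)
blocks = np.array_split(np.arange(n), m_blocks)

L_f = np.linalg.norm(A, 2)
L_r = max(np.linalg.norm(A[:, idx], 2) for idx in blocks)
L_c = max(np.linalg.norm(A[np.ix_(idx, idx)], 2) for idx in blocks)
print(f"L_f={L_f:.3f}, L_r={L_r:.3f}, L_c={L_c:.3f}, kappa={L_r / L_c:.3f}")

# Spot-check the block descent inequality (1.7) at random points and blocks.
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
for _ in range(5):
    i = np.random.randint(m_blocks)
    x = np.random.randn(n)
    y_i = np.random.randn(len(blocks[i]))
    xp = x.copy(); xp[blocks[i]] += y_i           # x + U_i y
    assert f(xp) <= f(x) + grad(x)[blocks[i]] @ y_i + 0.5 * L_c * y_i @ y_i + 1e-9
print("block descent inequality (1.7) holds on random samples")
```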

Assumption 1.3

For each \(k\geqslant 1\), the reading \(\hat{{\mathbf {x}}}^k\) is consistent and delayed by \(j_k\), namely, \(\hat{{\mathbf {x}}}^k={\mathbf {x}}^{k-j_k}\). The delay \(j_k\) follows the same distribution as a random variable \({\mathbf {j}}\) with

$$\begin{aligned} {\mathrm {Prob}}({\mathbf {j}}=t)=q_t,\quad t=0,1,2,\cdots , \end{aligned}$$
(1.8)

and is independent of \(i_k\). We let

$$\begin{aligned} c_k := \sum _{t=k}^\infty q_t,\quad T:=\textit{E}[{\mathbf {j}}],\quad S:=\textit{E}[{\mathbf {j}}^2]. \end{aligned}$$

Remark 1.1

Although the delay always satisfies \(0\leqslant j_k\leqslant k\), the assumption in (1.8) is without loss of generality if we introduce negative-index iterates and regard \({\mathbf {x}}^k={\mathbf {x}}^0,\,\forall k<0\). For simplicity, we make the identical-distribution assumption, which is the same as that in [14]. Our results still hold for nonidentical distributions; see the analysis for the smooth nonconvex case in the arXiv version of the paper.
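As a concrete instance (the Poisson case is discussed in Sect. 5), the sketch below computes \(q_t\), \(c_k\), T, and S numerically for a Poisson-distributed delay with parameter p, for which \(T=p\) and \(S=p+p^2\) in closed form; the truncation level is illustrative.

```python
import math

# Delay quantities c_k, T = E[j], S = E[j^2] when j ~ Poisson(p); closed forms
# are T = p and S = p + p^2.  The truncation level K is an illustrative choice.
p, K = 4.0, 200
q = [math.exp(-p)]
for t in range(1, K):
    q.append(q[-1] * p / t)            # Poisson pmf built recursively: q_t = q_{t-1} * p / t
c = [sum(q[t:]) for t in range(K)]     # tail probabilities c_k = sum_{t >= k} q_t
T = sum(t * q[t] for t in range(K))
S = sum(t * t * q[t] for t in range(K))
print(f"T ~ {T:.4f} (exact {p}),  S ~ {S:.4f} (exact {p + p**2})")
print("c_0, c_1, c_2 =", [round(v, 4) for v in c[:3]])
```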

2 Related Works

We briefly review block coordinate update (BCU) and async-parallel computing methods.

The BCU method is closely related to the Gauss–Seidel method for solving linear equations, which dates back to 1823. In the optimization literature, the BCU method first appeared in [21] as the block coordinate descent method, or more precisely, block minimization (BM), for quadratic programming. The convergence of BM was established early for both convex and nonconvex problems, for example, in [22,23,24]. However, in general, its convergence rate was only shown for strongly convex problems (e.g., [23]) until the recent work [25], which shows sublinear convergence for weakly convex cases. Tseng and Yun [26] proposed a new version of BCU methods, called the coordinate gradient descent method, which mimics proximal gradient descent but only updates one block coordinate at a time. The block coordinate gradient or block prox-linear update (BPU) has become popular since [9] proposed randomly selecting a block to update. The convergence rate of the randomized BPU is easier to show than that of the deterministic BPU. It was first established for convex smooth problems (both unconstrained and constrained) in [9] and then generalized to nonsmooth cases in [27, 28]. Recently, Refs. [10, 11] incorporated stochastic approximation into the BPU framework to deal with stochastic programming, and both established sublinear convergence for convex problems as well as global convergence for nonconvex problems.

The async-parallel computing method (also called chaotic relaxation) first appeared in [29] to solve linear equations arising in electrical network problems. Chazan and Miranker [30] first systematically analyzed (more general) asynchronous iterative methods for solving linear systems; assuming bounded delays, they gave a necessary and sufficient condition for convergence. Bertsekas [31] proposed an asynchronous distributed iterative method for solving more general fixed-point problems and showed its convergence under a contraction assumption. Tseng et al. [32] weakened the contraction assumption to pseudo-nonexpansiveness but made other additional assumptions. Frommer and Szyld [33] gave a thorough review of asynchronous methods before 2000 and summarized convergence results under nested-set and synchronous convergence conditions, which are satisfied by P-contraction mappings and isotone mappings.

Since it was proposed in 1969, the async-parallel method had not attracted much attention until recent years, when data sizes began increasing exponentially in many areas. Motivated by “big data” problems, Refs. [12, 16] proposed the async-parallel stochastic coordinate descent method (i.e., Algorithm 1) for solving problems in the form of (1.1). Their analysis focuses on convex problems and assumes bounded delays. Specifically, they established sublinear convergence for weakly convex problems and linear convergence for strongly convex problems. In addition, near-linear speedup was achieved if \(\tau =o(\sqrt{m})\) for unconstrained smooth convex problems and \(\tau =o(\root 4 \of {m})\) for constrained smooth or nonsmooth cases. For nonconvex problems, Davis [18] introduced an async-parallel coordinate descent method, whose convergence was established under an iterate boundedness assumption and appropriate stepsizes.

3 Convergence Results for the Smooth Case

Throughout this section, let \(r_i=0,\,\forall i\), i.e., we consider the smooth optimization problem

$$\begin{aligned} \mathop {{{\mathrm{minimize}}}}\limits _{{\mathbf {x}}\in \mathbb {R}^n} f({\mathbf {x}}). \end{aligned}$$
(3.1)

The general (possibly nonsmooth) case will be analyzed in the next section. The results for nonsmooth problems of course also hold for smooth ones. However, the smooth case requires weaker conditions for convergence than those required by the nonsmooth case, and their analysis techniques are different. Hence, we consider the two cases separately.

3.1 Convergence for the Nonconvex Case

In this subsection, we establish a subsequence convergence result for the general (possibly nonconvex) case. We begin with some technical lemmas. The first lemma deals with certain infinite sums that will appear later in our analysis.

Lemma 3.1

For any k and \(t\leqslant k\), let

$$\begin{aligned} \gamma _k&=\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d+\frac{\eta }{2m}c_k+\frac{\eta ^2 L_c}{2m}c_k, \end{aligned}$$
(3.2a)
$$\begin{aligned} \beta _k&=\left( \frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}\right) q_0-\frac{\eta }{2m}c_k\quad \text {for}\quad k\geqslant 1, \quad \text {and}\quad \beta _0=0, \end{aligned}$$
(3.2b)
$$\begin{aligned} C_{t,k}&=\left( \frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}\right) q_t-\frac{\eta ^2 L_r}{2m\sqrt{m}}\left( tq_t+\sum _{d=1}^{t}(c_d-c_{k})q_{t-d}\right) . \end{aligned}$$
(3.2c)

Then

$$\begin{aligned}&\sum _{k=0}^\infty \gamma _k\leqslant {\tfrac{\eta ^2L_r}{2m\sqrt{m}}}T^2+\left( \frac{\eta }{2m}+\frac{\eta ^2 L_c}{2m}\right) (1+T), \end{aligned}$$
(3.3)
$$\begin{aligned}&\beta _k+\sum _{t=k+1}^\infty C_{t-k,t}\geqslant \frac{\eta }{2m}-\frac{\eta ^2 L_c}{2m}-\frac{\eta ^2 L_rT}{m\sqrt{m}},\quad \forall k. \end{aligned}$$
(3.4)

Proof

To bound \(\sum _{k=0}^\infty \gamma _k\), we bound the first term \(\sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d\) in (3.2a). Specifically,

$$\begin{aligned} \sum _{k=0}^\infty \sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d\leqslant \sum _{k=0}^\infty \sum _{d=1}^{k-1} c_{k-d} c_d= \sum _{d=1}^\infty \sum _{k=d+1}^{\infty } c_{k-d} c_d=T^2, \end{aligned}$$

where the last equality holds since \(T:=\textit{E}[{\mathbf {j}}]=\sum _{t=1}^\infty t q_t=\sum _{t=1}^\infty \sum _{d=1}^t q_t=\sum _{d=1}^\infty \sum _{t=d}^\infty q_t=\sum _{d=1}^\infty c_d. \) Combining the above with \(\sum _{k=0}^\infty c_k=\sum _{k=0}^\infty \sum _{t=k}^\infty q_t=\sum _{t=0}^\infty (t+1)q_t=1+T\) and the definition of \(\gamma _k\) in (3.2a), we obtain (3.3).

To prove (3.4), we will use

$$\begin{aligned} \sum \limits _{t=1}^\infty \sum \limits _{d=1}^{t}(c_d-c_{k+t})q_{t-d}\leqslant \sum \limits _{t=1}^\infty \sum \limits _{d=1}^{t}c_d q_{t-d}=\sum \limits _{d=1}^\infty \sum \limits _{t=d}^\infty c_d q_{t-d}=\sum \limits _{d=1}^\infty c_d=T. \end{aligned}$$
(3.5)

The above inequality yields

$$\begin{aligned}&\beta _k+\sum _{t=k+1}^\infty C_{t-k,t}=\beta _k+\sum _{t=1}^\infty C_{t,t+k}\\= & {} \left( \frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}\right) q_0-\frac{\eta }{2m}c_k\\&+\sum _{t=1}^\infty \left( \left( \frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}\right) q_t-\frac{\eta ^2 L_r}{2m\sqrt{m}}tq_t-\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum _{d=1}^{t}(c_d-c_{k+t})q_{t-d}\right) \\&{\mathop {\geqslant }\limits ^{(3.5)}}&\frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}-\frac{\eta }{2m}c_k-\frac{\eta ^2 L_rT}{m\sqrt{m}}\geqslant \frac{\eta }{2m}-\frac{\eta ^2 L_c}{2m}-\frac{\eta ^2 L_rT}{m\sqrt{m}}, \end{aligned}$$

where the last inequality follows from \(c_k\leqslant 1\).
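As a numerical sanity check of the tail-sum identity \(T=\sum _{d\geqslant 1}c_d\) used above and of the bound (3.3), the following sketch truncates a Poisson delay distribution; all parameter values are illustrative.

```python
import math

# Sanity check of T = sum_{d>=1} c_d and of the bound (3.3) on sum_k gamma_k,
# assuming a (truncated) Poisson(p) delay; eta, L_c, L_r, m are illustrative.
p, K = 3.0, 150
m, eta, L_c, L_r = 100, 0.05, 1.0, 2.0
q = [math.exp(-p)]
for t in range(1, K):
    q.append(q[-1] * p / t)
c = [sum(q[t:]) for t in range(K)]
T = sum(t * q[t] for t in range(K))
assert abs(T - sum(c[1:])) < 1e-8                      # the tail-sum identity

def gamma(k):                                          # gamma_k from (3.2a)
    s = sum((c[k - d] - c[k]) * c[d] for d in range(1, k))
    return eta**2 * L_r / (2 * m * math.sqrt(m)) * s \
        + (eta / (2 * m) + eta**2 * L_c / (2 * m)) * c[k]

lhs = sum(gamma(k) for k in range(K))
rhs = eta**2 * L_r / (2 * m * math.sqrt(m)) * T**2 \
    + (eta / (2 * m) + eta**2 * L_c / (2 * m)) * (1 + T)
print(f"sum_k gamma_k ~ {lhs:.3e} <= r.h.s. of (3.3) ~ {rhs:.3e}: {lhs <= rhs}")
```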

The second lemma below bounds the cross term that appears in our analysis.

Lemma 3.2

(Cross term bound) For any \(k > 1\), it holds that

$$\begin{aligned}&\sum _{t=1}^{k-1}q_t\textit{E}\left[ -\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-t}), \nabla f({\mathbf {x}}^{k-t})\rangle \right] \nonumber \\ \leqslant&\frac{\eta L_r}{2\sqrt{m}}\sum _{t=1}^{k-1}\left( tq_t+\sum \limits _{d=1}^t (c_{d}-c_k)q_{t-d}\right) \textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2\nonumber \\&+ \frac{\eta L_r}{2\sqrt{m}}\sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2. \end{aligned}$$
(3.6)

Proof

Define \(\Delta ^d:=\nabla f({\mathbf {x}}^d) - \nabla f({\mathbf {x}}^{d+1})\). Applying the Cauchy–Schwarz inequality with \(\nabla f({\mathbf {x}}^{k-t}) - \nabla f({\mathbf {x}}^{k}) = \sum _{d=k-t}^{k-1}\Delta ^d\) yields

$$\begin{aligned} -\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-t}), \nabla f({\mathbf {x}}^{k-t})\rangle \leqslant \sum _{d=k-t}^{k-1} \Vert \Delta ^d\Vert \cdot \Vert \nabla f({\mathbf {x}}^{k-t})\Vert . \end{aligned}$$

Since \(\Vert \Delta ^d\Vert \leqslant L_r\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert = \eta L_r\Vert \nabla _{i_d}f({\hat{{\mathbf {x}}}}^{d})\Vert ,\) by applying Young’s inequality, we get

$$\begin{aligned}&-\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-t}), \nabla f({\mathbf {x}}^{k-t})\rangle \nonumber \\ \leqslant&\frac{\eta L_r}{2}\sum _{d=k-t}^{k-1}\left( \sqrt{m}\Vert \nabla _{i_d}f({\hat{{\mathbf {x}}}}^{d})\Vert ^2+\frac{1}{\sqrt{m}}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2\right) . \end{aligned}$$
(3.7)

By taking expectation, we have

$$\begin{aligned} \textit{E}_{i_d, j_d} \Vert \nabla _{i_d}f({\hat{{\mathbf {x}}}}^{d})\Vert ^2&= \frac{1}{m}\textit{E}_{j_d} \Vert \nabla f({\mathbf {x}}^{d-j_d})\Vert ^2\\&= \frac{1}{m}\left( \sum _{r=0}^{d-1} q_r\Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2+c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right) . \end{aligned}$$

Now taking expectation on both sides of (3.7) and using the above equation, we get

$$\begin{aligned}&\textit{E}[-\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-t}), \nabla f({\mathbf {x}}^{k-t})\rangle ]\nonumber \\ \leqslant&\frac{\eta L_r}{2\sqrt{m}} \sum _{d=k-t}^{k-1}\left( \sum _{r=0}^{d-1} q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2+c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right) \nonumber \\&+ \frac{\eta L_r}{2\sqrt{m}}\, t\,\textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2. \end{aligned}$$
(3.8)

Finally, (3.6) follows from

$$\begin{aligned} \sum _{t=1}^{k-1} q_t\sum _{d=k-t}^{k-1}c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2&\overset{(A.1)}{=}&\sum _{d=1}^{k-1}\left( \sum _{t=k-d}^{k-1} q_t\right) c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\\= & {} \sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2, \end{aligned}$$

and

$$\begin{aligned} \sum _{t=1}^{k-1} q_t\sum _{d=k-t}^{k-1}\sum _{r=0}^{d-1} q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2 = \sum _{t=1}^{k-1}\sum _{d=1}^{t}(c_d-c_{k})q_{t-d}\,\textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2. \end{aligned}$$
(3.9)

Using the above lemma, we next bound the expected objective change over one iteration of the algorithm.

Theorem 3.1

(Fundamental bound) Set \(\gamma _k, \beta _k\) and \(C_{t,k}\) as in (3.2). For any \(k>1\), we have

$$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})]&\leqslant \textit{E}[ f({\mathbf {x}}^k)] + \gamma _k\Vert \nabla f({\mathbf {x}}^0)\Vert ^2-\beta _k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\nonumber \\&\quad -\sum _{t=1}^{k-1} C_{t,k} \textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2. \end{aligned}$$
(3.10)

Proof

Since \({\mathbf {x}}^{k+1}={\mathbf {x}}^k-\eta U_{i_k}\nabla f({\mathbf {x}}^{k-j_k})\), we have from (1.7) that

$$\begin{aligned} f({\mathbf {x}}^{k+1})\leqslant f({\mathbf {x}}^k)-\eta \left\langle \nabla f({\mathbf {x}}^k), U_{i_k}\nabla f({\mathbf {x}}^{k-j_k})\right\rangle +\tfrac{L_c}{2}\left\| \eta U_{i_k}\nabla f({\mathbf {x}}^{k-j_k})\right\| ^2. \end{aligned}$$

Taking conditional expectation on \((i_k, j_k)\) gives

$$\begin{aligned} \textit{E}_{i_k,j_k}f({\mathbf {x}}^{k+1})&\leqslant f({\mathbf {x}}^k)-\tfrac{\eta }{m}\textit{E}_{j_k}\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-j_k})\rangle +\tfrac{\eta ^2 L_c}{2m}\textit{E}_{j_k}\Vert \nabla f({\mathbf {x}}^{k-j_k})\Vert ^2\nonumber \\&= f({\mathbf {x}}^k)-\tfrac{\eta }{m}\sum _{t=0}^{k-1} q_t\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})\rangle -\tfrac{\eta }{m}c_k\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^0)\rangle \nonumber \\&\quad +\tfrac{\eta ^2 L_c}{2m}\sum _{t=0}^{k-1} q_t\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2+\tfrac{\eta ^2 L_c}{2m}c_k\Vert \nabla f({\mathbf {x}}^0)\Vert ^2. \end{aligned}$$
(3.11)

For the first cross term in (3.11), we write each summand as

$$\begin{aligned} \langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})\rangle =\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-t}), \nabla f({\mathbf {x}}^{k-t})\rangle +\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2, \end{aligned}$$
(3.12)

and we use Young’s inequality to bound the second cross term by

$$\begin{aligned} -\tfrac{\eta }{m}c_k\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^0)\rangle \leqslant \tfrac{\eta c_k}{2m}\left[ \Vert \nabla f({\mathbf {x}}^k)\Vert ^2+\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right] . \end{aligned}$$
(3.13)

Now taking expectation over both sides of (3.11), plugging in (3.12) and (3.13), and using Lemma 3.2, we have the desired result.

We are now ready to show the main result in the following theorem.

Theorem 3.2

(Convergence for the nonconvex smooth case) Under Assumptions  1.1 through  1.3, let \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\) be generated from Algorithm  1. Assume \(T<\infty \). Take the stepsize as \(0<\eta <\frac{1/L_c}{1+2\kappa T/\sqrt{m}}.\) If \(q_0>0\) or \(\nabla f({\mathbf {x}})\) is bounded for all \({\mathbf {x}}\), then

$$\begin{aligned} \lim _{k\rightarrow \infty } \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert = 0, \end{aligned}$$
(3.14)

and any limit point of \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\) is almost surely a critical point of (3.1).

Remark 3.1

If \(T=\textit{E}[{\mathbf {j}}]=o(\sqrt{m})\), then \(\eta \) only weakly depends on the delay. The condition that \(q_0>0\) or \(\nabla f({\mathbf {x}})\) is bounded can be dropped if \(S=\textit{E}[{\mathbf {j}}^2]\) is bounded; see Theorem 4.1.
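As a concrete illustration of the stepsize rule in Theorem 3.2, the sketch below evaluates the bound \(\frac{1/L_c}{1+2\kappa T/\sqrt{m}}\) with T set to the number of processors p (the expected delay in the Poisson model of Sect. 5); the constants are assumptions for illustration only.

```python
import math

def max_stepsize_smooth_nonconvex(L_c, L_r, m, T):
    """Upper bound on eta from Theorem 3.2: eta < (1/L_c) / (1 + 2*kappa*T/sqrt(m))."""
    kappa = L_r / L_c
    return (1.0 / L_c) / (1.0 + 2.0 * kappa * T / math.sqrt(m))

# Illustrative numbers: m blocks, Lipschitz constants, and T = p (expected delay).
for p in (1, 10, 100):
    print(p, max_stepsize_smooth_nonconvex(L_c=1.0, L_r=2.0, m=10_000, T=p))
# As long as T = o(sqrt(m)), the bound stays close to 1/L_c, i.e., the delay
# only weakly affects the allowed stepsize.
```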

Proof

Summing up (3.10) from \(k=0\) through K and using (A.3), we have

$$\begin{aligned} \textit{E}[f({\mathbf {x}}^{K+1})]&\leqslant f({\mathbf {x}}^0)+\sum _{k=0}^K\gamma _k\Vert \nabla f({\mathbf {x}}^0)\Vert ^2-\beta _K\textit{E}\Vert \nabla f({\mathbf {x}}^K)\Vert ^2 \nonumber \\&\quad -\sum _{k=1}^{K-1}\left( \beta _k+\sum _{t=k+1}^K C_{t-k,t}\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$
(3.15)

Note that \(\beta _K\rightarrow \left( \frac{\eta }{m}-\frac{\eta ^2 L_c}{2m}\right) q_0\) as \(K\rightarrow \infty \). If \(q_0>0\) or \(\Vert \nabla f({\mathbf {x}})\Vert \) is bounded, by letting \(K\rightarrow \infty \) in (3.15) and using the lower boundedness of f, we have from Lemma 3.1 that

$$\begin{aligned} \sum _{k=1}^\infty \left( \frac{\eta }{2m}-\frac{\eta ^2 L_c}{2m}-\frac{\eta ^2 L_r T}{m\sqrt{m}}\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2<\infty . \end{aligned}$$

Since \(\eta <\frac{1/L_c}{1+2\kappa T/\sqrt{m}}\), we have (3.14) from the above inequality.

From the Markov inequality, it follows that \(\Vert \nabla f({\mathbf {x}}^k)\Vert \) converges to zero in probability. Let \(\bar{{\mathbf {x}}}\) be a limit point of \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\), i.e., there is a subsequence \(\{{\mathbf {x}}^k\}_{k\in {\mathcal {K}}}\) convergent to \(\bar{{\mathbf {x}}}\). By [34, Theorem 3.4, p. 212], there is a sub-subsequence \(\{{\mathbf {x}}^k\}_{k\in {\mathcal {K}}'}\) such that \(\Vert \nabla f({\mathbf {x}}^k)\Vert \rightarrow 0\) almost surely as \({\mathcal {K}}'\ni k\rightarrow \infty \). By the continuity of \(\nabla f\), this implies \(\nabla f(\bar{{\mathbf {x}}})=\mathbf {0}\) almost surely, i.e., \(\bar{{\mathbf {x}}}\) is a critical point of (3.1) almost surely. This completes the proof.

3.2 Convergence Rate for the Convex Case

In this subsection, we assume the convexity of f and establish convergence rate results of Algorithm 1 for solving (3.1). Besides Assumptions 1.1 through 1.3, we make an additional assumption on the delay as follows; it means that the delay follows a sub-exponential distribution.

Assumption 3.1

There is a constant \(\sigma >1\) such that

$$\begin{aligned} M_\sigma := \textit{E}[\sigma ^{{\mathbf {j}}}]<\infty . \end{aligned}$$
(3.16)

The condition in (3.16) is stronger than \(T<\infty \), and both of them hold if the delay \(j_k\) is uniformly bounded by some number \(\tau \) or follows the Poisson distribution; see the discussions in Sect. 5. Using this additional assumption and choosing an appropriate stepsize, we are able to control the gradient of f so that it does not change too fast.

Lemma 3.3

Under Assumptions  1.2 through  3.1, for any \(1<\rho \leqslant \sigma \), if the stepsize satisfies

$$\begin{aligned} 0<\eta \leqslant \tfrac{(\rho -1)\sqrt{m}}{\rho L_r(1+M_\rho )}, \end{aligned}$$
(3.17)

with \(M_\rho \) defined in (3.16), then for all k, it holds that

$$\begin{aligned} \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2 \quad \text {and}\quad \textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$
(3.18)

The proof of Lemma 3.3 follows an argument similar to that in [12]. Since it is rather long, it is included in the Appendix. Similar to Lemma 3.2, we can show the following result.

Lemma 3.4

For any k, it holds that

$$\begin{aligned}&\sum _{t=0}^{k-1} q_t\textit{E}[-\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})-\nabla f({\mathbf {x}}^k)\rangle ] -c_k\textit{E}\langle \nabla f({\mathbf {x}}^k),\nabla f({\mathbf {x}}^0)-\nabla f({\mathbf {x}}^k)\rangle \nonumber \\ \leqslant&\frac{\eta L_r}{2\sqrt{m}}\sum \limits _{d=1}^{k}c_{k-d}c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2+\frac{\eta L_r}{2\sqrt{m}}\sum \limits _{t=1}^{k-1}\sum \limits _{d=1}^t c_dq_{t-d}\textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2\nonumber \\&+\frac{\eta L_r}{2\sqrt{m}}\left( \sum _{t=0}^{k-1} tq_t + kc_k\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$
(3.19)

Proof

Following an argument similar to how (3.8) is obtained, we can show

$$\begin{aligned}&\sum \limits _{t=0}^{k-1} q_t\textit{E}[-\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})-\nabla f({\mathbf {x}}^k)\rangle ]\\ \leqslant&\frac{\eta L_r}{2\sqrt{m}}\sum \limits _{t=0}^{k-1} q_t\left( \sum \limits _{d=k-t}^{k-1}(\sum \limits _{r=0}^{d-1} q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2+c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2)+t \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\right) \\&-c_k\textit{E}\langle \nabla f({\mathbf {x}}^k),\nabla f({\mathbf {x}}^0)-\nabla f({\mathbf {x}}^k)\rangle \\ \leqslant&\frac{\eta L_r}{2\sqrt{m}}c_k\left( \sum \limits _{d=0}^{k-1}\left( \sum \limits _{r=0}^{d-1} q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2+c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right) +k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\right) . \end{aligned}$$

Using the above inequalities, we complete the proof by noting (3.9),

$$\begin{aligned} \sum _{t=0}^{k-1} q_t\sum _{d=k-t}^{k-1}c_d+c_k\sum _{d=0}^{k-1}c_d&= \sum _{d=1}^{k-1} (c_{k-d}-c_k) c_d+c_k\sum _{d=0}^{k-1}c_d\nonumber \\&=\sum _{d=1}^{k-1}c_{k-d}c_d+c_k = \sum _{d=1}^{k}c_{k-d}c_d, \end{aligned}$$
(3.20)

and \(\textstyle c_k\sum _{d=0}^{k-1}\sum _{r=0}^{d-1} q_r \Vert \nabla f({\mathbf {x}}^{d-r})\Vert ^2=\sum _{t=1}^{k-1}\sum _{d=1}^t c_k q_{t-d}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2.\)

Using the above two lemmas, we establish sufficient objective decrease.

Theorem 3.3

(Sufficient progress) Under Assumptions 1.1 through 3.1, we let \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\) be the sequence generated from Algorithm  1. For a certain \(1<\rho < \sigma \), define

$$\begin{aligned} N_\rho :=\textit{E}[{\mathbf {j}}\rho ^{{\mathbf {j}}}]. \end{aligned}$$
(3.21)

Take the stepsize such that (3.17) is satisfied and also

$$\begin{aligned} 0<\eta < 2\left( L_c\left( M_\rho +\tfrac{\kappa (2N_\rho M_\rho +T)}{\sqrt{m}}\right) \right) ^{-1}. \end{aligned}$$
(3.22)

Let

$$\begin{aligned} D=\frac{\eta }{2m}\left( 2-\tfrac{\eta L_r}{\sqrt{m}}(2N_\rho M_\rho +T)-\eta L_c M_\rho \right) . \end{aligned}$$
(3.23)

Then,

$$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})]\leqslant \textit{E}[f({\mathbf {x}}^k)] - D\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$
(3.24)
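For a Poisson(p) delay (the model of Sect. 5), \(M_\rho =\textit{E}[\rho ^{{\mathbf {j}}}]=\mathrm{e}^{p(\rho -1)}\) and \(N_\rho =\textit{E}[{\mathbf {j}}\rho ^{{\mathbf {j}}}]=p\rho \,\mathrm{e}^{p(\rho -1)}\) are available in closed form, so the stepsize bound (3.22) and the constant D in (3.23) are directly computable; a sketch with illustrative constants:

```python
import math

def theorem_3_3_constants(L_c, L_r, m, p, rho, eta):
    """Evaluate M_rho, N_rho, the stepsize bound (3.22), and D in (3.23)
    for a Poisson(p) delay (M_rho = e^{p(rho-1)}, N_rho = p*rho*e^{p(rho-1)})."""
    kappa = L_r / L_c
    M = math.exp(p * (rho - 1.0))           # M_rho = E[rho^j]
    N = p * rho * M                         # N_rho = E[j * rho^j]
    T = p                                   # E[j] for Poisson(p)
    eta_max = 2.0 / (L_c * (M + kappa * (2 * N * M + T) / math.sqrt(m)))              # (3.22)
    D = eta / (2 * m) * (2 - eta * L_r / math.sqrt(m) * (2 * N * M + T) - eta * L_c * M)  # (3.23)
    return M, N, eta_max, D

# Illustrative numbers only; eta must additionally satisfy (3.17).
M, N, eta_max, D = theorem_3_3_constants(L_c=1.0, L_r=2.0, m=10_000, p=4, rho=1.05, eta=0.5)
print(f"M_rho={M:.3f}, N_rho={N:.3f}, eta_max={eta_max:.3f}, D={D:.3e}")
```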

Proof

First note that for any \(\rho <\sigma \), \(t\rho ^t\) is dominated by \(\sigma ^t\) when t is sufficiently large. Hence, \(N_\rho <\infty \) from (3.16), and it is easy to see \(T<\infty \). Also note that

$$\begin{aligned} \textit{E}[{\mathbf {j}}\rho ^{{\mathbf {j}}}]= & {} \sum \limits _{t=1}^\infty t q_t \rho ^t=\sum \limits _{t=1}^\infty \sum \limits _{d=1}^t q_t \rho ^t=\sum \limits _{d=1}^\infty \sum \limits _{t=d}^\infty q_t \rho ^t\nonumber \\\geqslant & {} \sum \limits _{d=1}^\infty \sum \limits _{t=d}^\infty q_t \rho ^d=\sum \limits _{d=1}^\infty c_d \rho ^d. \end{aligned}$$
(3.25)

We rewrite the cross terms in (3.11) as

$$\begin{aligned} \langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})\rangle =\langle \nabla f({\mathbf {x}}^k), \nabla f({\mathbf {x}}^{k-t})-\nabla f({\mathbf {x}}^k)\rangle + \Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$

Taking expectation on both sides of (3.11) and using (3.19), we have

$$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})]&\leqslant \textit{E}[f({\mathbf {x}}^k)]+\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum \limits _{d=1}^{k}c_{k-d}c_d\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum \limits _{t=1}^{k-1}\sum \limits _{d=1}^t c_dq_{t-d}\textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_r}{2m\sqrt{m}}\left( \sum \limits _{t=0}^{k-1} tq_t + kc_k\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2 -\frac{\eta }{m}\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_c}{2m}\sum \limits _{t=0}^{k-1} q_t\textit{E}\Vert \nabla f({\mathbf {x}}^{k-t})\Vert ^2+\frac{\eta ^2 L_c}{2m} c_k\Vert \nabla f({\mathbf {x}}^0)\Vert ^2. \end{aligned}$$
(3.26)

The above inequality together with (3.18) implies

$$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})]&\leqslant \textit{E}[ f({\mathbf {x}}^k)]+\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum _{d=1}^{k}c_{k-d}c_d\rho ^k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_r}{2m\sqrt{m}}\sum _{t=1}^{k-1}\sum _{d=1}^t c_dq_{t-d}\rho ^t\textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_r}{2m\sqrt{m}}\left( \sum _{t=0}^{k-1} tq_t + kc_k\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2 -\frac{\eta }{m}\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\nonumber \\&\quad +\frac{\eta ^2 L_c}{2m}\sum _{t=0}^{k-1} q_t\rho ^t\textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2+\frac{\eta ^2 L_c}{2m} c_k\rho ^k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$
(3.27)

Note that \(\sum _{t=1}^{k-1}\sum _{d=1}^t c_dq_{t-d}\rho ^t\leqslant \sum _{t=1}^\infty \sum _{d=1}^t c_dq_{t-d}\rho ^t\), which by exchanging summations equals \(\sum _{d=1}^\infty c_d\rho ^d\sum _{t=d}^\infty q_{t-d}\rho ^{t-d}\overset{(3.25)}{\leqslant }N_\rho M_\rho \). Also note that

$$\begin{aligned}&\sum _{d=1}^{k}c_{k-d}c_d\rho ^k=\sum _{d=1}^{k}c_{d}\rho ^d c_{k-d}\rho ^{k-d}\\ \leqslant&\sum _{d=1}^{k}c_{d}\rho ^d\left( \sum _{r=0}^\infty q_r \rho ^r\right) \\ \leqslant&N_\rho M_\rho . \end{aligned}$$

From these relations and (3.27), we obtain

$$\begin{aligned} \textit{E}[ f({\mathbf {x}}^{k+1})]&\leqslant \textit{E}[f({\mathbf {x}}^k)]+\frac{\eta ^2 L_r}{m\sqrt{m}}N_\rho M_\rho \Vert \nabla f({\mathbf {x}}^k)\Vert ^2 \\&\quad +\frac{\eta ^2 L_r}{2m\sqrt{m}}\left( \sum _{t=0}^{k-1} tq_t + kc_k\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2 -\frac{\eta }{m}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\\&\quad +\frac{\eta ^2 L_c}{2m}\sum _{t=0}^{k-1} q_t\rho ^t\textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2+\frac{\eta ^2 L_c}{2m} c_k\rho ^k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\\&\leqslant \textit{E}[f({\mathbf {x}}^k)]+\left( \frac{\eta ^2 L_r}{2m\sqrt{m}}(2N_\rho M_\rho +T)+\frac{\eta ^2 L_c}{2m}M_\rho -{\eta \over m}\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2, \end{aligned}$$

which completes the proof.

Using (3.24) and the convexity of f, we establish the following convergence rate.

Theorem 3.4

(Convergence rate for the convex smooth case) Under the assumptions of Theorem  3.3, we have

  1.

    If f is convex and \(\Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^k)\Vert \leqslant B,\,\forall k\) for a certain constant B, then

    $$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})-f^*] \leqslant \frac{1}{(f({\mathbf {x}}^0)-f^*)^{-1}+(k+1)DB^{-2}}, \end{aligned}$$
    (3.28)

    where \(f^*\) denotes the minimum value of (3.1) and D is given in (3.23).

  2.

    If f is strongly convex with constant \(\mu \), then

    $$\begin{aligned} \textit{E}[f({\mathbf {x}}^{k+1})-f^*]\leqslant (1-2\mu D)\textit{E}[f({\mathbf {x}}^k)-f^*], \end{aligned}$$
    (3.29)

    where D is given in (3.23).

Remark 3.2

For the sublinear rate in (3.28), we assume the boundedness of the iterates. This assumption can be relaxed if we use a potentially smaller stepsize; see Theorem 4.2.

For the linear convergence, the strong convexity assumption can be weakened to either essential or restricted strong convexity; see [12, 35].

Proof

If \(\Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^k)\Vert \leqslant B\), then from \(f({\mathbf {x}}^k)-f({\mathcal {P}}_{X^*}({\mathbf {x}}^k))\leqslant \langle \nabla f({\mathbf {x}}^k),{\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^k)\rangle \), we have

$$\begin{aligned} |f({\mathbf {x}}^k)-f^*|\leqslant \Vert \nabla f({\mathbf {x}}^k)\Vert \cdot \Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^k)\Vert \leqslant B\Vert \nabla f({\mathbf {x}}^k)\Vert , \end{aligned}$$

and thus

$$\begin{aligned} \Vert \nabla f({\mathbf {x}}^k)\Vert ^2\geqslant \frac{1}{B^2}(f({\mathbf {x}}^k)-f^*)^2. \end{aligned}$$
(3.30)

Substituting (3.30) into (3.24) yields

$$\begin{aligned} \textit{E}[ f({\mathbf {x}}^{k+1})]\leqslant \textit{E}[ f({\mathbf {x}}^k)]-\frac{D}{B^2}\textit{E}(f({\mathbf {x}}^k)-f^*)^2. \end{aligned}$$

Hence,

$$\begin{aligned}&\textit{E}[f({\mathbf {x}}^{k+1})-f^*] \leqslant \textit{E}[f({\mathbf {x}}^k)-f^*]-\frac{D}{B^2}\textit{E}(f({\mathbf {x}}^k)-f^*)^2\\ \Rightarrow&\frac{1}{\textit{E}[f({\mathbf {x}}^{k+1})-f^*]}\geqslant \frac{1}{\textit{E}[f({\mathbf {x}}^k)-f^*]}+\frac{D}{B^2}\frac{\textit{E}[f({\mathbf {x}}^k)-f^*]}{\textit{E}[f({\mathbf {x}}^{k+1})-f^*]}\\ \geqslant&\frac{1}{\textit{E}[f({\mathbf {x}}^k)-f^*]}+\frac{D}{B^2}\\ \Rightarrow&\frac{1}{\textit{E}[f({\mathbf {x}}^{k+1})-f^*]}\geqslant \frac{1}{[f({\mathbf {x}}^0)-f^*]}+\frac{D(k+1)}{B^2}, \end{aligned}$$

and thus (3.28) holds.

If f is strongly convex with constant \(\mu \), then

$$\begin{aligned} -\tfrac{1}{2\mu }\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\leqslant f^*-f({\mathbf {x}}^k). \end{aligned}$$

We immediately have (3.29) from (3.24) and the above inequality. This completes the proof.
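The rate (3.28) is driven by the scalar recursion \(a_{k+1}\leqslant a_k-\tfrac{D}{B^2}a_k^2\) used above; a short numerical illustration with arbitrary values of \(a_0\), D, and B:

```python
# The recursion a_{k+1} <= a_k - (D/B^2)*a_k^2 behind (3.28), iterated with
# equality and compared against the bound 1/(a_0^{-1} + (k+1)*D/B^2).
# The values of a0, D, B are arbitrary illustrations.
a0, D, B = 1.0, 1e-3, 2.0
a = a0
for k in range(5000):
    a = a - (D / B**2) * a * a
    bound = 1.0 / (1.0 / a0 + (k + 1) * D / B**2)
    assert a <= bound + 1e-12
print(f"a_5000 = {a:.4e}, bound = {bound:.4e}")
```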

4 Convergence Results for the Nonsmooth Case

In this section, we analyze the convergence of Algorithm 1 for possibly nonsmooth cases. Throughout this section, we let

$$\begin{aligned} \bar{{\mathbf {x}}}^{k+1}={\mathbf {prox}}_{\eta R}\left( {\mathbf {x}}^k-\eta \nabla f({\mathbf {x}}^{k-j_k})\right) \end{aligned}$$

be a virtual full-update iterate, where R is defined in (1.4), and denote

$$\begin{aligned} {\mathbf {d}}^{k} = \bar{\mathbf {x}}^{k+1}-{\mathbf {x}}^k. \end{aligned}$$
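In code, the virtual full-update iterate \(\bar{{\mathbf {x}}}^{k+1}\), the difference \({\mathbf {d}}^k\), and the actual single-block update of Algorithm 1 can be sketched as follows for separable \(\ell _1\) regularizers \(r_i\); the helper names are illustrative and not from the paper.

```python
import numpy as np

def prox_l1(v, t):
    # prox of t*||.||_1 applied coordinate-wise (valid since R is block separable)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def virtual_and_actual_update(x, grad_delayed, eta, blocks, i_k, lam):
    """Sketch of the quantities used in Sect. 4, assuming r_i = lam*||x_i||_1.

    x_bar  : virtual iterate prox_{eta R}(x^k - eta * grad f(x^{k-j_k}))
    d      : d^k = x_bar - x^k
    x_next : actual iterate x^{k+1} = x^k + U_{i_k} d^k (only block i_k moves)
    """
    x_bar = prox_l1(x - eta * grad_delayed, eta * lam)
    d = x_bar - x
    x_next = x.copy()
    x_next[blocks[i_k]] += d[blocks[i_k]]
    return x_bar, d, x_next

# Usage (illustrative): x_bar, d, x_next = virtual_and_actual_update(
#     x, grad_f(x_delayed), eta, blocks, i_k, lam)
```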

Due to the greater generality, we make stronger assumptions on the delay than those in the previous section. However, all of these assumptions are satisfied if the delay is uniformly bounded or follows the Poisson distribution, as shown in Sect. 5.

4.1 Convergence for the Nonconvex Case

We first establish almost sure global convergence for possibly nonconvex cases, starting with the following square-summability result.

Lemma 4.1

(Square summability) Under Assumptions 1.1 through 1.3, we let \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\) be the sequence generated in Algorithm  1. Assume \(S<\infty \), and the stepsize is taken as \(0<\eta < \frac{1/L_c}{1+\kappa ^2S/(2m)}.\) Then

$$\begin{aligned} \sum _{k=0}^\infty \textit{E}\Vert {\mathbf {d}}^k\Vert ^2<\infty . \end{aligned}$$
(4.1)

Proof

By the definition of \(\bar{{\mathbf {x}}}^{k+1}\), we have \(-\nabla f({\mathbf {x}}^{k-j_k})-\tfrac{1}{\eta }{\mathbf {d}}^k\in \partial R(\bar{{\mathbf {x}}}^{k+1})\), which together with the convexity of R implies that, for any \({\mathbf {x}}\),

$$\begin{aligned} R(\bar{{\mathbf {x}}}^{k+1})-R({\mathbf {x}})\leqslant -\langle \nabla f({\mathbf {x}}^{k-j_k})+\tfrac{1}{\eta }{\mathbf {d}}^k,\bar{{\mathbf {x}}}^{k+1}-{\mathbf {x}}\rangle . \end{aligned}$$
(4.2)

By \({\mathbf {x}}^{k+1}={\mathbf {x}}^k+U_{i_k}{\mathbf {d}}^k\) and (1.7), we get \(F({\mathbf {x}}^{k+1})\leqslant f({\mathbf {x}}^k)+\langle \nabla _{i_k}f({\mathbf {x}}^k),{\mathbf {d}}^k_{i_k}\rangle +\tfrac{L_c}{2}\Vert {{\mathbf {d}}}^{k}_{i_k}\Vert ^2+R({\mathbf {x}}^{k+1}).\) To this inequality, take conditional expectation on \(i_k\):

$$\begin{aligned} \textit{E}_{i_k} F({\mathbf {x}}^{k+1})\leqslant F({\mathbf {x}}^k)+\frac{1}{m}\left( \langle \nabla f({\mathbf {x}}^k), {\mathbf {d}}^k\rangle +\frac{L_c}{2}\Vert {\mathbf {d}}^k\Vert ^2+R(\bar{{\mathbf {x}}}^{k+1})-R({\mathbf {x}}^k)\right) . \end{aligned}$$

To bound the right-hand side, we split the cross term as

$$\begin{aligned} \langle \nabla f({\mathbf {x}}^k), {\mathbf {d}}^k\rangle =\langle \nabla f({\mathbf {x}}^{k-j_k}), {\mathbf {d}}^k\rangle +\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-j_k}),{\mathbf {d}}^k\rangle \end{aligned}$$

and apply (4.2) with \({\mathbf {x}}={\mathbf {x}}^k\), arriving at

$$\begin{aligned} \textit{E}_{i_k} F({\mathbf {x}}^{k+1})&\leqslant F({\mathbf {x}}^k)+\frac{1}{m}\left( \frac{L_c}{2}-\frac{1}{\eta }\right) \Vert {\mathbf {d}}^k\Vert ^2\nonumber \\&\quad +\frac{1}{m}\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-j_k}),{\mathbf {d}}^k\rangle . \end{aligned}$$
(4.3)

Following an argument similar to that in the proof of Lemma 3.2 and using Young's inequality, we get

$$\begin{aligned} \langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-j_k}),{\mathbf {d}}^k\rangle&\leqslant L_r\sum \limits _{d=k-j_k}^{k-1}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert \cdot \Vert {\mathbf {d}}^k\Vert \nonumber \\&\leqslant \frac{L_r}{2\kappa }\Vert {\mathbf {d}}^k\Vert ^2+\frac{\kappa L_r}{2} \left( j_k\sum \limits _{d=k-j_k}^{k-1}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2\right) . \end{aligned}$$
(4.4)

Note that

$$\begin{aligned}&\textit{E}\left[ j_k\sum \limits _{d=k-j_k}^{k-1}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2\right] \nonumber \\ =&\sum \limits _{t=1}^{k-1} q_t t \sum \limits _{d=k-t}^{k-1}\textit{E}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2+ \sum \limits _{t=k}^\infty q_tt\sum \limits _{d=0}^{k-1}\textit{E}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2 \nonumber \\ =&\frac{1}{m}\sum \limits _{t=1}^{k-1} q_t t \sum \limits _{d=k-t}^{k-1}\textit{E}\Vert {{\mathbf {d}}}^{d}\Vert ^2+\frac{1}{m} \sum \limits _{t=k}^\infty q_tt\sum \limits _{d=0}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2. \end{aligned}$$
(4.5)

Hence, taking expectation yields

$$\begin{aligned}&\textit{E}\langle \nabla f({\mathbf {x}}^k)-\nabla f({\mathbf {x}}^{k-j_k}),{\mathbf {d}}^k\rangle \nonumber \\ \leqslant&\frac{L_r}{2}\left[ \frac{1}{\kappa }\textit{E}\Vert {\mathbf {d}}^k\Vert ^2+\frac{\kappa }{m}\left( \sum \limits _{t=1}^{k-1} q_t t \sum \limits _{d=k-t}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2+\sum \limits _{t=k}^{\infty } q_t t \sum \limits _{d=0}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right) \right] . \end{aligned}$$
(4.6)

Taking expectation on both sides of (4.3) and substituting (4.6) yield

$$\begin{aligned}&\textit{E}[F({\mathbf {x}}^{k+1})-F({\mathbf {x}}^k)]+\frac{1}{m}\left( \frac{1}{\eta }-L_c\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2\nonumber \\ \leqslant&\frac{\kappa L_r}{2m^2}\left( \sum \limits _{t=1}^{k-1} q_t t \sum \limits _{d=k-t}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2+\sum \limits _{t=k}^{\infty } q_t t \sum \limits _{d=0}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right) . \end{aligned}$$
(4.7)

From Lemma A.1, we have that for any \(K\geqslant 0\),

$$\begin{aligned} \sum \limits _{k=0}^K\sum \limits _{t=1}^{k-1} q_t t \sum \limits _{d=k-t}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2&\overset{(A.1)}{=} \sum \limits _{k=0}^K\sum \limits _{d=1}^{k-1}\left( \sum \limits _{t=k-d}^{k-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\nonumber \\&\overset{(A.2)}{=} \sum \limits _{d=1}^{K-1}\sum \limits _{k=d+1}^K\left( \sum \limits _{t=k-d}^{k-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\nonumber \\ [k\leftrightarrow d]&\quad = \sum \limits _{k=1}^{K-1}\left( \sum \limits _{d=k+1}^K\sum \limits _{t=d-k}^{d-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2,\end{aligned}$$
(4.8)
$$\begin{aligned} \text {and}\quad \sum \limits _{k=0}^K\sum \limits _{t=k}^{\infty } q_t t \sum \limits _{d=0}^{k-1}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2&=\sum \limits _{k=1}^K\sum \limits _{d=0}^{k-1}\left( \sum \limits _{t=k}^{\infty } q_t t\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2\nonumber \\&\overset{(A.2)}{=}\sum \limits _{d=0}^{K-1}\sum \limits _{k=d+1}^K\left( \sum \limits _{t=k}^{\infty } q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\nonumber \\ [k\leftrightarrow d]\quad&=\sum \limits _{k=0}^{K-1}\left( \sum \limits _{d=k+1}^K\sum \limits _{t=d}^{\infty } q_t t\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2. \end{aligned}$$
(4.9)

Summing up (4.7) from \(k=0\) through K and substituting (4.8) and (4.9), we have

$$\begin{aligned}&\textit{E}[F({\mathbf {x}}^{K+1})-F({\mathbf {x}}^0)]+\frac{1}{m}\left( \frac{1}{\eta }-{L_c}\right) \sum \limits _{k=0}^K\textit{E}\Vert {\mathbf {d}}^k\Vert ^2\nonumber \\ \leqslant&\frac{\kappa L_r}{2m^2}\sum \limits _{k=0}^{K-1}\left( \sum \limits _{d=k+1}^K\sum \limits _{t=d-k}^{\infty } q_t t\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2. \end{aligned}$$
(4.10)

Note that

$$\begin{aligned} \sum \limits _{d=k+1}^K\sum \limits _{t=d-k}^{\infty } q_t t=\sum \limits _{d=1}^{K-k}\sum \limits _{t=d}^{\infty } q_t t\leqslant \sum \limits _{d=1}^\infty \sum \limits _{t=d}^\infty q_t t=\sum \limits _{t=1}^\infty t^2q_t=S. \end{aligned}$$

Note that the stepsize condition \(0<\eta < \frac{1/L_c}{1+\kappa ^2S/(2m)}\) ensures \(\frac{1}{m}\left( \frac{1}{\eta }-L_c\right) >\frac{\kappa L_r S}{2m^2}\). Since F is lower bounded, we have (4.1) from (4.10) by letting \(K\rightarrow \infty \).

Since \(\left( \textit{E}[{\mathbf {j}}]\right) ^2\leqslant \textit{E}[{\mathbf {j}}^2]\), the condition \(S<\infty \) implies \(T<\infty \). Equation (4.1) indicates that \(\textit{E}\Vert {\mathbf {d}}^k\Vert \rightarrow 0\) as \(k\rightarrow \infty \). Together with \(S<\infty \), this enables us to show that \(\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert \) also approaches zero, as summarized in the following lemma.

Lemma 4.2

Under the assumptions of Lemma 4.1, we have

$$\begin{aligned} \lim _{k\rightarrow \infty }\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert =0. \end{aligned}$$

Proof

Pick any \(\varepsilon >0\). From (4.1), there must exist an integer \(J>0\) such that

$$\begin{aligned} \sum \limits _{d=J}^\infty \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\leqslant m\varepsilon \left( 3\sum \limits _{t=1}^\infty q_t t\right) ^{-1}. \end{aligned}$$
(4.11)

For the above J, there must exist an integer \(K>J\) such that, for any \(k\geqslant K\),

$$\begin{aligned} \sum \limits _{t=k-J}^\infty q_t t\leqslant {m\varepsilon }{\left( 3\sum \limits _{d=0}^\infty \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right) ^{-1}}. \end{aligned}$$
(4.12)

From Young’s inequality, it follows that \(\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2\leqslant j_k\sum _{d=k-j_k}^{k-1}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2.\) Hence, for any \(k\geqslant K\), using (4.5) and (A.1), we have

$$\begin{aligned} \textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2&\leqslant \frac{1}{m}\left[ \sum \limits _{d=1}^{k-1}\left( \sum \limits _{t=k-d}^{k-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2+\sum \limits _{d=0}^{k-1}\left( \sum \limits _{t=k}^\infty q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right] \\&=\frac{1}{m}\sum \limits _{d=1}^{J}\left( \sum \limits _{t=k-d}^{k-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\\&\quad +\frac{1}{m}\left[ \sum \limits _{d=J+1}^{k-1}\left( \sum \limits _{t=k-d}^{k-1}q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2+\sum \limits _{d=0}^{k-1}\left( \sum \limits _{t=k}^\infty q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right] \\&\leqslant \frac{1}{m}\sum \limits _{d=1}^{J}\left( \sum \limits _{t=k-J}^\infty q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\\&\quad +\frac{1}{m}\left[ \sum \limits _{d=J+1}^{k-1}\left( \sum \limits _{t=1}^\infty q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2+\sum \limits _{d=0}^{k-1}\left( \sum \limits _{t=k-J}^\infty q_t t\right) \textit{E}\Vert {\mathbf {d}}^d\Vert ^2\right] , \end{aligned}$$

which implies \(\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2\leqslant \varepsilon \) under (4.11) and (4.12). We have \(\lim _{k\rightarrow \infty }\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2=0\) as \(\varepsilon \) is arbitrary. Now note \(\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert \leqslant \sqrt{\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2}\) to complete the proof.

Using Lemmas 4.1 and 4.2, we establish the almost sure global convergence of Algorithm 1.

Theorem 4.1

Under the assumptions of Lemma  4.1, any limit point \({\mathbf {x}}^*\) of \(\{{\mathbf {x}}^k\}\) is a critical point of (1.1) almost surely.

Before proving this theorem, we make two remarks as follows.

Remark 4.1

From the theorem, we see that if \(S=\textit{E}[{\mathbf {j}}^2]=o(m)\), then the stepsize required for convergence only weakly depends on the delay.

Remark 4.2

(Comparison of stepsizes) The work [18] considers asynchronous coordinate descent for nonconvex problems. To have convergence to critical points, it assumes delays bounded by a number \(\tau \). It also requires the boundedness of the iterates and a stepsize less than \(\frac{1/L_c}{1+2\kappa \tau /\sqrt{m}}\). Note that our stepsize in Theorem 4.1 is larger if \(\kappa ^2 S\leqslant 16m\), where \(S=\textit{E}[{\mathbf {j}}^2]\leqslant \tau ^2\), and that can lead to faster convergence.
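The comparison in Remark 4.2 is easy to check numerically. The sketch below evaluates both stepsize bounds, taking \(S=p+p^2\) for a Poisson(p) delay and an assumed hard bound \(\tau \); all constants are illustrative.

```python
import math

def eta_bound_thm41(L_c, L_r, m, S):
    """Stepsize bound of Theorem 4.1: (1/L_c) / (1 + kappa^2 * S / (2m))."""
    kappa = L_r / L_c
    return (1.0 / L_c) / (1.0 + kappa**2 * S / (2.0 * m))

def eta_bound_bounded_delay(L_c, L_r, m, tau):
    """Bound used under delays bounded by tau: (1/L_c) / (1 + 2*kappa*tau/sqrt(m))."""
    kappa = L_r / L_c
    return (1.0 / L_c) / (1.0 + 2.0 * kappa * tau / math.sqrt(m))

# Illustrative numbers: Poisson(p) delay gives S = p + p^2; tau is an assumed bound.
L_c, L_r, m, p, tau = 1.0, 2.0, 10_000, 4, 20
S = p + p**2
print("Theorem 4.1 bound  :", eta_bound_thm41(L_c, L_r, m, S))
print("bounded-delay bound:", eta_bound_bounded_delay(L_c, L_r, m, tau))
```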

Proof

Let \(\{{\mathbf {x}}^k\}_{k\in {\mathcal {K}}}\) be a subsequence that converges to \({\mathbf {x}}^*\). Since \(\textit{E}\Vert {\mathbf {d}}^k\Vert \rightarrow 0\) as \({\mathcal {K}}\ni k\rightarrow \infty \), from the Markov inequality, \(\Vert {\mathbf {d}}^k\Vert \) converges to zero in probability as \({\mathcal {K}}\ni k\rightarrow \infty \). By [34, Theorem 3.4, p. 212], there is a sub-subsequence \(\{{\mathbf {x}}^k\}_{k\in {\mathcal {K}}'}\) such that \(\Vert {\mathbf {d}}^k\Vert \) almost surely converges to zero as \({\mathcal {K}}'\ni k\rightarrow \infty \). Hence, \(\bar{{\mathbf {x}}}^{k+1}\) almost surely converges to \({\mathbf {x}}^*\) as \({\mathcal {K}}'\ni k\rightarrow \infty \).

Since \(-\nabla f({\mathbf {x}}^{k-j_k})-\frac{1}{\eta }{\mathbf {d}}^k\in \partial R(\bar{{\mathbf {x}}}^{k+1})\), we have

$$\begin{aligned} \text {dist}\left( \mathbf {0},\partial F(\bar{{\mathbf {x}}}^{k+1})\right) \leqslant \left\| \nabla f(\bar{{\mathbf {x}}}^{k+1})-\nabla f({\mathbf {x}}^{k-j_k})-\tfrac{1}{\eta }{\mathbf {d}}^k\right\| . \end{aligned}$$

Using the triangle inequality and the Lipschitz continuity of \(\nabla f\), and taking expectation, we get

$$\begin{aligned} \textit{E}\left[ \text {dist}\left( \mathbf {0},\partial F(\bar{{\mathbf {x}}}^{k+1})\right) \right] \leqslant L_f \textit{E}\Vert {\mathbf {d}}^k\Vert +L_f\textit{E}\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert +\tfrac{1}{\eta }\textit{E}\Vert {\mathbf {d}}^k\Vert . \end{aligned}$$

From Lemmas 4.1 and 4.2, it follows that the right-hand side approaches zero as \(k\rightarrow \infty \). Hence, \(\textit{E}\,\text {dist}\left( \mathbf {0},\partial F(\bar{{\mathbf {x}}}^{k+1})\right) \rightarrow 0\) as \(k\rightarrow \infty \). Passing to another subsequence if necessary, we use the Markov inequality and [34, Theorem 3.4, p. 212] again to conclude that \(\text {dist}\left( \mathbf {0},\partial F(\bar{{\mathbf {x}}}^{k+1})\right) \) converges to zero almost surely as \({\mathcal {K}}'\ni k\rightarrow \infty \). Now using the outer semicontinuity [36] of \(\text {dist}\left( \mathbf {0},\partial F({\mathbf {x}})\right) \), we obtain the desired result.

4.2 Convergence Rate for the Convex Case

In this subsection, we establish convergence rates of Algorithm 1 for nonsmooth convex cases. Similar to (3.18), we first show that, with an appropriately chosen stepsize, the iterate difference does not change too fast.

Lemma 4.3

(Fundamental bounds) Let Assumptions 1.2 through 3.1 hold. Then for any \(1<\rho <\sigma \), it holds that

$$\begin{aligned} \gamma _{\rho , 1}:=\sum _{t=1}^\infty q_t \tfrac{\rho ^{t/2}-1}{\rho ^{1/2}-1}<\infty \quad \text {and}\quad \gamma _{\rho ,2}:=\left( \sum _{t=1}^\infty q_t t\tfrac{\rho ^{t}-1}{1-\rho ^{-1}}\right) ^{1/2}<\infty . \end{aligned}$$
(4.13)

In addition, if the stepsize is taken such that

$$\begin{aligned} 0<\eta \leqslant \frac{(1-\rho ^{-1})\sqrt{m}-4}{2L_r(1+\gamma _{\rho ,1}+\gamma _{\rho ,2})}, \end{aligned}$$
(4.14)

then, for all \(k\geqslant 1\),

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{k-1}\Vert ^2\leqslant \rho \textit{E}\Vert {\mathbf {d}}^k\Vert ^2. \end{aligned}$$
(4.15)
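For a Poisson-distributed delay, the constants \(\gamma _{\rho ,1}\) and \(\gamma _{\rho ,2}\) in (4.13) and the stepsize bound (4.14) can be evaluated by truncating the series; the sketch below uses illustrative parameters and is meaningful only when \((1-\rho ^{-1})\sqrt{m}>4\).

```python
import math

def lemma_4_3_constants(p, rho, L_r, m, K=200):
    """Numerically evaluate gamma_{rho,1}, gamma_{rho,2} from (4.13) and the
    stepsize bound (4.14) for a Poisson(p) delay, truncating the series at K."""
    q = [math.exp(-p)]
    for t in range(1, K):
        q.append(q[-1] * p / t)                      # Poisson pmf, built recursively
    g1 = sum(q[t] * (rho**(t / 2) - 1) / (rho**0.5 - 1) for t in range(1, K))
    g2 = math.sqrt(sum(q[t] * t * (rho**t - 1) / (1 - 1 / rho) for t in range(1, K)))
    eta_max = ((1 - 1 / rho) * math.sqrt(m) - 4) / (2 * L_r * (1 + g1 + g2))
    return g1, g2, eta_max

# Illustrative values; (4.14) requires (1 - 1/rho)*sqrt(m) > 4, so m must be large.
g1, g2, eta_max = lemma_4_3_constants(p=4.0, rho=1.05, L_r=2.0, m=10_000)
print(f"gamma_rho_1 = {g1:.3f}, gamma_rho_2 = {g2:.3f}, eta bound (4.14) = {eta_max:.4f}")
```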

Proof

It is easy to show (4.13) by noting that \(t\rho ^t\) is dominated by \(\sigma ^t\) when t is sufficiently large. Next, we show (4.15) by induction.

Using the inequality \(\Vert {\mathbf {u}}\Vert ^2-\Vert {\mathbf {v}}\Vert ^2\leqslant 2\Vert {\mathbf {u}}\Vert \cdot \Vert {\mathbf {v}}-{\mathbf {u}}\Vert \), we have

$$\begin{aligned} \Vert {\mathbf {d}}^{k-1}\Vert ^2-\Vert {\mathbf {d}}^{k}\Vert ^2\leqslant 2\Vert {\mathbf {d}}^{k-1}\Vert \cdot \Vert {\mathbf {d}}^{k}-{\mathbf {d}}^{k-1}\Vert ,\,\forall k. \end{aligned}$$
(4.16)

In addition, for all k,

$$\begin{aligned} \textit{E}\Vert {\mathbf {x}}^{k-1}-{\mathbf {x}}^k\Vert \Vert {\mathbf {d}}^{k-1}\Vert&\leqslant \tfrac{1}{2}\textit{E}\left[ \sqrt{m}\Vert {\mathbf {x}}^{k-1}-{\mathbf {x}}^k\Vert ^2+\tfrac{1}{\sqrt{m}}\Vert {\mathbf {d}}^{k-1}\Vert ^2\right] \nonumber \\&=\tfrac{1}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{k-1}\Vert ^2. \end{aligned}$$
(4.17)

Furthermore, from \({\mathbf {d}}^{k}-{\mathbf {d}}^{k-1}={\mathbf {x}}^k-{\mathbf {prox}}_{\eta R}\left( {\mathbf {x}}^k-\eta \nabla f({\mathbf {x}}^{k-j_k})\right) -{\mathbf {x}}^{k-1}+{\mathbf {prox}}_{\eta R}\left( {\mathbf {x}}^{k-1}-\eta \nabla f({\mathbf {x}}^{k-1-j_{k-1}})\right) \), the nonexpansiveness of \({\mathbf {prox}}_{\eta R}\), and the triangle inequality, we have

$$\begin{aligned}&\Vert {\mathbf {d}}^{k}-{\mathbf {d}}^{k-1}\Vert \, \nonumber \\ \leqslant&\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-1}\Vert +\Vert {\mathbf {x}}^k-\eta \nabla f({\mathbf {x}}^{k-j_k})-{\mathbf {x}}^{k-1}+\eta \nabla f({\mathbf {x}}^{k-1-j_{k-1}})\Vert \nonumber \\ \leqslant&2\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-1}\Vert +\eta \Vert \nabla f({\mathbf {x}}^{k-j_k})-\nabla f({\mathbf {x}}^{k-1-j_{k-1}})\Vert \end{aligned}$$
(4.18)
$$\begin{aligned} \leqslant&2\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-1}\Vert +\eta \Vert \nabla f({\mathbf {x}}^{k-j_k})-\nabla f({\mathbf {x}}^{k})\Vert \nonumber \\&+\eta \Vert \nabla f({\mathbf {x}}^{k})-\nabla f({\mathbf {x}}^{k-1-j_{k-1}})\Vert . \end{aligned}$$
(4.19)

When \(k=1\), we have \(j_0=0\) and \(j_1\in \{0, 1\}\) because \(j_k\leqslant k,\,\forall k\). Hence, from (4.18),

$$\begin{aligned} \Vert {\mathbf {d}}^1-{\mathbf {d}}^0\Vert \leqslant 2\Vert {\mathbf {x}}^1-{\mathbf {x}}^0\Vert +\eta \Vert \nabla f({\mathbf {x}}^1)-\nabla f({\mathbf {x}}^0)\Vert \leqslant (2+\eta L_r)\Vert {\mathbf {x}}^1-{\mathbf {x}}^0\Vert , \end{aligned}$$

which together with (4.16) and (4.17) implies

$$\begin{aligned} \textit{E}\left[ \Vert {\mathbf {d}}^0\Vert ^2-\Vert {\mathbf {d}}^1\Vert ^2\right] \leqslant (4+2\eta L_r)\textit{E}\left[ \Vert {\mathbf {d}}^0\Vert \cdot \Vert {\mathbf {x}}^0-{\mathbf {x}}^1\Vert \right] \leqslant \frac{4+2\eta L_r}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^0\Vert ^2. \end{aligned}$$

Hence,

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^0\Vert ^2\leqslant \left( 1-\frac{4+2\eta L_r}{\sqrt{m}}\right) ^{-1}\textit{E}\Vert {\mathbf {d}}^1\Vert ^2\overset{(4.14)}{\leqslant }\rho \textit{E}\Vert {\mathbf {d}}^1\Vert ^2. \end{aligned}$$

Assume (4.15) holds for all \(k\leqslant K-1\). We show it holds for \(k=K\). First, for any \(d\leqslant K-1\),

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \cdot \Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert&\leqslant \frac{1}{2}\textit{E}\left[ \frac{\rho ^{\frac{K-1-d}{2}}}{\sqrt{m}}\Vert {\mathbf {d}}^{K-1}\Vert ^2+\frac{\sqrt{m}}{\rho ^{\frac{K-1-d}{2}}}\Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert ^2\right] \nonumber \\&= \frac{1}{2}\textit{E}\left[ \frac{\rho ^{\frac{K-1-d}{2}}}{\sqrt{m}}\Vert {\mathbf {d}}^{K-1}\Vert ^2+\frac{1}{\sqrt{m}\rho ^{\frac{K-1-d}{2}}}\Vert {\mathbf {d}}^d\Vert ^2\right] \nonumber \\&\leqslant \frac{1}{2}\textit{E}\left[ \frac{\rho ^{\frac{K-1-d}{2}}}{\sqrt{m}}\Vert {\mathbf {d}}^{K-1}\Vert ^2+\frac{\rho ^{K-1-d}}{\sqrt{m}\rho ^{\frac{K-1-d}{2}}}\Vert {\mathbf {d}}^{K-1}\Vert ^2\right] \nonumber \\&= \frac{\rho ^{\frac{K-1-d}{2}}}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2. \end{aligned}$$
(4.20)

Secondly, we have

$$\begin{aligned}&\textit{E}\left[ \Vert {\mathbf {d}}^{K-1}\Vert ^2-\Vert {\mathbf {d}}^{K}\Vert ^2\right] \overset{(4.16)}{\leqslant }2\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert {\mathbf {d}}^{K}-{\mathbf {d}}^{K-1}\Vert \nonumber \\ \overset{(4.19)}{\leqslant }&4\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert {\mathbf {x}}^K-{\mathbf {x}}^{K-1}\Vert +2\eta \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K})-\nabla f({\mathbf {x}}^{K-1})\Vert \nonumber \\&+2\eta \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-j_K})-\nabla f({\mathbf {x}}^{K})\Vert \nonumber \\&+2\eta \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-1})-\nabla f({\mathbf {x}}^{K-1-j_{K-1}})\Vert \nonumber \\ \overset{(4.17)}{\leqslant }&\tfrac{4+2\eta L_r}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2+2\eta \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-j_K})-\nabla f({\mathbf {x}}^{K})\Vert \nonumber \\&+2\eta \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-1})-\nabla f({\mathbf {x}}^{K-1-j_{K-1}})\Vert . \end{aligned}$$
(4.21)

Note that

$$\begin{aligned} \textit{E}_{j_K}\Vert \nabla f({\mathbf {x}}^{K-j_K})-\nabla f({\mathbf {x}}^{K})\Vert&= \sum \limits _{t=1}^{K-1}q_t\Vert \nabla f({\mathbf {x}}^{K-t})-\nabla f({\mathbf {x}}^{K})\Vert \\&\quad +c_K\Vert \nabla f({\mathbf {x}}^0)-\nabla f({\mathbf {x}}^{K})\Vert . \end{aligned}$$

By the triangle inequality and the Lipschitz continuity of \(\nabla f\), it follows that, for any \(1\leqslant t\leqslant K\),

$$\begin{aligned} \Vert \nabla f({\mathbf {x}}^{K-t})-\nabla f({\mathbf {x}}^{K})\Vert&\leqslant \sum _{d=K-t}^{K-1}\Vert \nabla f({\mathbf {x}}^d)-\nabla f({\mathbf {x}}^{d+1})\Vert \nonumber \\&\leqslant L_r \sum _{d=K-t}^{K-1}\Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert . \end{aligned}$$
(4.22)

Since \(\Vert {\mathbf {d}}^{K-1}\Vert \) is independent of \(j_K\), we have from the above two relations that

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \cdot \Vert \nabla f({\mathbf {x}}^{K-j_K})-\nabla f({\mathbf {x}}^{K})\Vert&\leqslant L_r\sum \limits _{t=1}^{K-1}q_t\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \sum \limits _{d=K-t}^{K-1}\Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert \\&\quad +L_r c_K\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \sum \limits _{d=0}^{K-1}\Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert . \end{aligned}$$

Using (4.20), the definition of \(\gamma _{\rho ,1}\) in (4.13) and \(\sum _{d=K-t}^{K-1} \rho ^{\frac{K-1-d}{2}}=\frac{\rho ^{t/2}-1}{\rho ^{1/2}-1},\,\forall 1\leqslant t\leqslant K,\) we have

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-j_K})-\nabla f({\mathbf {x}}^{K})\Vert \leqslant \tfrac{L_r}{\sqrt{m}}\gamma _{\rho ,1}\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2. \end{aligned}$$
(4.23)

Also, using Young’s inequality and (4.22) with K replaced by \(K-1\) and \(t=j_{K-1}\), we have, for any \(\beta >0\),

$$\begin{aligned}&\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-1})-\nabla f({\mathbf {x}}^{K-1-j_{K-1}})\Vert \nonumber \\ \leqslant&\frac{L_r}{2\beta }\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2+\frac{L_r\beta }{2}\textit{E}\left[ \sum \limits _{d=K-1-j_{K-1}}^{K-2}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert \right] ^2. \end{aligned}$$
(4.24)

Note that

$$\begin{aligned}&\textit{E}\left[ \sum \limits _{d=K-1-j_{K-1}}^{K-2}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert \right] ^2\\ =&\sum \limits _{t=1}^{K-2}q_t\textit{E}\left[ \sum \limits _{d=K-1-t}^{K-2}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert \right] ^2+c_{K-1}\textit{E}\left[ \sum \limits _{d=0}^{K-2}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert \right] ^2\\ \leqslant&\sum \limits _{t=1}^{K-2}q_t t\sum \limits _{d=K-1-t}^{K-2}\textit{E}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert ^2+c_{K-1}(K-1)\sum \limits _{d=0}^{K-2}\textit{E}\Vert {\mathbf {x}}^{d} - {\mathbf {x}}^{d+1}\Vert ^2. \end{aligned}$$

Substituting this inequality into (4.24), noting \(\textit{E}\Vert {\mathbf {x}}^d-{\mathbf {x}}^{d+1}\Vert ^2=\frac{1}{m}\textit{E}\Vert {\mathbf {d}}^d\Vert ^2\), and applying (4.15) for all \(k\leqslant K-1\), we have

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-1})-\nabla f({\mathbf {x}}^{K-1-j_{K-1}})\Vert \leqslant C\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2, \end{aligned}$$

where \(C= \frac{L_r}{2\beta }+\frac{L_r\beta }{2m}\sum \nolimits _{t=1}^{K-2}q_t t\sum \nolimits _{d=K-1-t}^{K-2}\rho ^{K-1-d}+\frac{L_r\beta }{2m}c_{K-1}(K-1)\sum \nolimits _{d=0}^{K-2}\rho ^{K-1-d}\). Now let \(\beta ={\sqrt{m}}{\left( \sum _{t=1}^{K-2}q_tt\frac{\rho ^{t}-1}{1-\rho ^{-1}}+c_{K-1}(K-1)\frac{\rho ^{K-1}-1}{1-\rho ^{-1}}\right) ^{-1/2}}\) and recall the definition of \(\gamma _{\rho ,2}\) in (4.13). From the above inequality, we have

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert \Vert \nabla f({\mathbf {x}}^{K-1})-\nabla f({\mathbf {x}}^{K-1-j_{K-1}})\Vert \leqslant \frac{L_r\gamma _{\rho ,2}}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2. \end{aligned}$$
(4.25)

Substituting (4.23) and (4.25) into (4.21) gives

$$\begin{aligned} \textit{E}\left[ \Vert {\mathbf {d}}^{K-1}\Vert ^2-\Vert {\mathbf {d}}^{K}\Vert ^2\right] \leqslant \frac{4+2\eta L_r(1+\gamma _{\rho ,1}+\gamma _{\rho ,2})}{\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2, \end{aligned}$$

and thus

$$\begin{aligned} \textit{E}\Vert {\mathbf {d}}^{K-1}\Vert ^2\leqslant \left( 1-\frac{4+2\eta L_r(1+\gamma _{\rho ,1}+\gamma _{\rho ,2})}{\sqrt{m}}\right) ^{-1}\textit{E}\Vert {\mathbf {d}}^{K}\Vert ^2\overset{(4.14)}{\leqslant }\rho \textit{E}\Vert {\mathbf {d}}^{K}\Vert ^2. \end{aligned}$$

Therefore, by induction, it follows that (4.15) holds for all k, and we complete the proof.

With this lemma, we are able to establish the convergence rate of Algorithm 1 for solving (1.1) when the problem is convex.

Theorem 4.2

(Convergence rate for the nonsmooth convex case) Under Assumptions 1.1 through  3.1, let \(\{{\mathbf {x}}^k\}_{k\geqslant 1}\) be the sequence generated from Algorithm 1 with stepsize satisfying (4.14) and also

$$\begin{aligned} \eta \leqslant \left( L_c+\frac{2 L_f \gamma ^2_{\rho ,2}}{m}+\frac{2 L_r\gamma _{\rho ,2}}{\sqrt{m}}\right) ^{-1}, \end{aligned}$$
(4.26)

where \(\gamma _{\rho ,1}\) and \(\gamma _{\rho ,2}\) are defined in (4.13). We have

  1.

    If the function F is convex, then

    $$\begin{aligned} \textit{E}[F({\mathbf {x}}^{k})-F^*] \leqslant \frac{m\Phi ({\mathbf {x}}^0)}{2\eta (m+k)}, \end{aligned}$$
    (4.27)

    where

    $$\begin{aligned} \Phi ({\mathbf {x}}^k)=\textit{E}\left\| {\mathbf {x}}^{k}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2+2\eta \textit{E}[F({\mathbf {x}}^{k})-F^*]. \end{aligned}$$
  2.

    If F is strongly convex with constant \(\mu \), then

    $$\begin{aligned} \Phi ({\mathbf {x}}^k)\leqslant \left( 1-\frac{\eta \mu }{m(1+\eta \mu )}\right) ^{k} \Phi ({\mathbf {x}}^0). \end{aligned}$$
    (4.28)

Before proving this theorem, we make two remarks and present a few lemmas below.

Remark 4.3

Similar to (3.29), the strong convexity assumption for the linear convergence result (4.28) can be weakened to optimal strong convexity. The latter is strictly weaker than the former; see [16] for more discussion.

Remark 4.4

(Comparison of stepsize) For the special case where the delay is bounded by \(\tau =o(\root 4 \of {m})\), choosing \(\rho =O(1+\frac{1}{\tau })\) makes both \(\gamma _{\rho ,1}\) and \(\gamma _{\rho ,2}\) of order \(O(\tau )\). Thus we can take a stepsize of almost \(\frac{1}{L_c}\), which is larger than the stepsize \(\frac{1}{2L_c}\) given in [16].

Lemma 4.4

Let \(\gamma _{\rho ,2}\) be defined in (4.13). We have

$$\begin{aligned} \textit{E}\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k})-\nabla _{i_k}f({\mathbf {x}}^{k}),{\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\rangle \leqslant \frac{L_r\gamma _{\rho ,2}}{m\sqrt{m}}\textit{E}\Vert {\mathbf {d}}^{k}\Vert ^2. \end{aligned}$$
(4.29)

Proof

The result follows from the Cauchy–Schwarz inequality, the bound (4.25), and the identity \(\textit{E}\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k})-\nabla _{i_k}f({\mathbf {x}}^{k}),{\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\rangle = \tfrac{1}{m}\textit{E}\langle \nabla f({\mathbf {x}}^{k-j_k})-\nabla f({\mathbf {x}}^{k}),{\mathbf {d}}^{k}\rangle \).

Lemma 4.5

It holds that

$$\begin{aligned}&\textit{E}\left[ f({\mathbf {x}}^k)-f({\mathbf {x}}^{k+1})+r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}({\mathbf {x}}_{i_k}^{k+1})\right] \nonumber \\ =&\textit{E}[F({\mathbf {x}}^k)-F({\mathbf {x}}^{k+1})]+\tfrac{1}{m}\textit{E}[R({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-R({\mathbf {x}}^k)]. \end{aligned}$$
(4.30)

Proof

Equation (4.30) is a direct consequence of \(r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}({\mathbf {x}}_{i_k}^{k+1})=r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}({\mathbf {x}}_{i_k}^k)+R({\mathbf {x}}^{k})-R({\mathbf {x}}^{k+1})\), which holds since only the \(i_k\)th block changes from \({\mathbf {x}}^k\) to \({\mathbf {x}}^{k+1}\), together with \(\textit{E}_{i_k}\left[ r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}({\mathbf {x}}_{i_k}^k)\right] =\tfrac{1}{m}\left[ R({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-R({\mathbf {x}}^k)\right] \), which holds since \(i_k\) is uniformly distributed.

Lemma 4.6

Let \(\gamma _{\rho ,2}\) be defined in (4.13). It holds that

$$\begin{aligned} \textit{E}\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k}\rangle&\leqslant \tfrac{1}{m}\textit{E}\left[ f({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-f({\mathbf {x}}^k)\right] \nonumber \\&\quad +\tfrac{L_f\gamma _{\rho ,2}^2}{m^2}\textit{E}\Vert {\mathbf {d}}^k\Vert ^2. \end{aligned}$$
(4.31)

Proof

Since \(i_k\) is uniformly distributed and independent of \(j_k\), we have

$$\begin{aligned} \textit{E}_{i_k} \langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k}\rangle = \tfrac{1}{m} \langle \nabla f({\mathbf {x}}^{k-j_k}), {\mathcal {P}}_{X^*}({\mathbf {x}}^{k})-{\mathbf {x}}^{k}\rangle . \end{aligned}$$
(4.32)

We split the term and apply the convexity of f and Lipschitz continuity of \(\nabla f\) to get

$$\begin{aligned}&\langle \nabla f({\mathbf {x}}^{k-j_k}), {\mathcal {P}}_{X^*}({\mathbf {x}}^{k})-{\mathbf {x}}^{k}\rangle \nonumber \\ =&\langle \nabla f({\mathbf {x}}^{k-j_k}), {\mathcal {P}}_{X^*}({\mathbf {x}}^{k})-{\mathbf {x}}^{k-j_k}\rangle \nonumber \\&+\langle \nabla f({\mathbf {x}}^k) + \nabla f({\mathbf {x}}^{k-j_k})-\nabla f({\mathbf {x}}^k), {\mathbf {x}}^{k-j_k}-{\mathbf {x}}^{k}\rangle \nonumber \\ \leqslant&\left[ f({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-f({\mathbf {x}}^{k-j_k})+f({\mathbf {x}}^{k-j_k})-f({\mathbf {x}}^k)\right] \nonumber \\&+\langle \nabla f({\mathbf {x}}^{k-j_k})-\nabla f({\mathbf {x}}^k), {\mathbf {x}}^{k-j_k}-{\mathbf {x}}^{k}\rangle \nonumber \\ \leqslant&\left[ f({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-f({\mathbf {x}}^k)\right] + L_f\Vert {\mathbf {x}}^{k-j_k}-{\mathbf {x}}^{k}\Vert ^2. \end{aligned}$$
(4.33)

Substituting (4.33) into (4.32) and taking expectation yield

$$\begin{aligned}&\textit{E}\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k}\rangle \\&\quad \leqslant \tfrac{1}{m}\textit{E}\left[ f({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))-f({\mathbf {x}}^k)\right] +\tfrac{L_f}{m}\textit{E}\Vert {\mathbf {x}}^{k-j_k}-{\mathbf {x}}^{k}\Vert ^2. \end{aligned}$$

Noting \(\Vert {\mathbf {x}}^k-{\mathbf {x}}^{k-j_k}\Vert ^2\leqslant j_k\sum _{d=k-j_k}^{k-1}\Vert {\mathbf {x}}^{d+1}-{\mathbf {x}}^d\Vert ^2\), applying (4.5) and (4.15) and using the definition of \(\gamma _{\rho ,2}\), we complete the proof of (4.31).

Lemma 4.7

Under the assumptions of Theorem  4.2, we have \(\textit{E}[F({\mathbf {x}}^{k+1})]\leqslant \textit{E}[F({\mathbf {x}}^k)],\,\forall k\).

Proof

Taking expectation on both sides of (4.3) and using (4.29) yield

$$\begin{aligned} \textit{E}[F({\mathbf {x}}^{k+1})]&\leqslant \textit{E}[F({\mathbf {x}}^k)]+\frac{1}{m}\left( \frac{L_c}{2}-\frac{1}{\eta }+{L_r\gamma _{\rho ,2}\over \sqrt{m}}\right) \textit{E}\Vert {\mathbf {d}}^k\Vert ^2, \end{aligned}$$

which implies \(\textit{E}[F({\mathbf {x}}^{k+1})]\leqslant \textit{E}[F({\mathbf {x}}^k)]\) from the condition on \(\eta \) in (4.26).

Now we are ready to prove Theorem 4.2.

Proof of Theorem 4.2

From the update of \({\mathbf {x}}^{k+1}\), we have

$$\begin{aligned} \mathbf {0}\in \nabla _{i_k}f({\mathbf {x}}^{k-j_k})+\tfrac{1}{\eta }\left( {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right) +\partial r_{i_k}\left( {\mathbf {x}}_{i_k}^{k+1}\right) ,\end{aligned}$$

and thus for any \({\mathbf {x}}_{i_k}\), it holds from the convexity of \(r_{i_k}\) that

$$\begin{aligned} r_{i_k}({\mathbf {x}}_{i_k})\geqslant r_{i_k}\left( {\mathbf {x}}_{i_k}^{k+1}\right) -\left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k})+\tfrac{1}{\eta }\left( {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right) , {\mathbf {x}}_{i_k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle . \end{aligned}$$
(4.34)

Since \({\mathbf {x}}^{k+1}={\mathbf {x}}^{k}+U_{i_k}({\mathbf {x}}^{k+1}-{\mathbf {x}}^k)\), we have

$$\begin{aligned} \left\| {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2&= \left\| {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2-\left\| {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right\| ^2\nonumber \\&\quad +2\left\langle {\mathbf {x}}_{i_k}^{k+1}-\left( {\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right) _{i_k}, {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right\rangle . \end{aligned}$$
(4.35)

From the definition of \({\mathcal {P}}_{X^*}\), it follows that \(\Vert {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\Vert ^2\leqslant \Vert {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\Vert ^2\). Then using (4.34) and (4.35), we have

$$\begin{aligned} \Vert {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\Vert ^2&\leqslant \Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\Vert ^2-\left\| {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right\| ^2\nonumber \\&\quad +2\eta \left( r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}\left( {\mathbf {x}}_{i_k}^{k+1}\right) \right) \nonumber \\&\quad +2\eta \left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle . \end{aligned}$$
(4.36)

We split the cross term to have

$$\begin{aligned}&\left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle = \left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k}\right\rangle \\&\quad + \left\langle \nabla _{i_k}f({\mathbf {x}}^{k}), {\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle +\left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k})-\nabla _{i_k}f({\mathbf {x}}^{k}),{\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle . \end{aligned}$$

From (1.7), it follows that

$$\begin{aligned} \left\langle \nabla _{i_k}f({\mathbf {x}}^{k}), {\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle \leqslant f({\mathbf {x}}^k)-f({\mathbf {x}}^{k+1})+\tfrac{L_c}{2}\left\| {\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\right\| ^2. \end{aligned}$$

Plugging the above two relations into (4.36) gives

$$\begin{aligned} \left\| {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\right\| ^2&\leqslant \left\| {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2-(1-\eta L_c)\left\| {\mathbf {x}}_{i_k}^{k+1}-{\mathbf {x}}_{i_k}^k\right\| ^2\nonumber \\&\quad +2\eta \left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k}), ({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k}-{\mathbf {x}}_{i_k}^{k}\right\rangle \nonumber \\&\quad +2\eta \left\langle \nabla _{i_k}f({\mathbf {x}}^{k-j_k})-\nabla _{i_k}f({\mathbf {x}}^{k}),{\mathbf {x}}_{i_k}^{k}-{\mathbf {x}}_{i_k}^{k+1}\right\rangle \nonumber \\&\quad +2\eta \left[ f({\mathbf {x}}^k)-f({\mathbf {x}}^{k+1})+r_{i_k}(({\mathcal {P}}_{X^*}({\mathbf {x}}^{k}))_{i_k})-r_{i_k}\left( {\mathbf {x}}_{i_k}^{k+1}\right) \right] . \end{aligned}$$
(4.37)

Substituting (4.29) through (4.31) into (4.37) and rearranging terms yield

$$\begin{aligned} \textit{E}\left\| {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\right\| ^2&\leqslant \textit{E}\left\| {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2\\&\quad -\frac{1}{m}\left[ 1-\eta L_c-\frac{2\eta L_f \gamma _{\rho ,2}^2}{m}-\frac{2\eta L_r\gamma _{\rho ,2}}{\sqrt{m}}\right] \textit{E}\Vert {\mathbf {d}}^k\Vert ^2\\&\quad +\tfrac{2\eta }{m}\textit{E}\left[ F^*-F({\mathbf {x}}^k)\right] +2\eta \textit{E}\left[ F({\mathbf {x}}^k)-F({\mathbf {x}}^{k+1})\right] . \end{aligned}$$

The above inequality together with (4.26) implies

$$\begin{aligned} \textit{E}\Vert {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\Vert ^2&\leqslant \textit{E}\Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\Vert ^2+\tfrac{2\eta }{m}\textit{E}\left[ F^*-F({\mathbf {x}}^k)\right] \\&\quad +2\eta \textit{E}\left[ F({\mathbf {x}}^k)-F({\mathbf {x}}^{k+1})\right] \end{aligned}$$

and thus, with the monotonicity of \(\textit{E}[F({\mathbf {x}}^k)]\) in Lemma 4.7,

$$\begin{aligned}&\textit{E}\left\| {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\right\| ^2+2\eta \textit{E}\left[ F({\mathbf {x}}^{k+1})-F^*\right] \nonumber \\ \leqslant&\textit{E}\left\| {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2+ 2\eta \textit{E}\left[ F({\mathbf {x}}^{k})-F^*\right] -\tfrac{2\eta }{m}\textit{E}\left[ F({\mathbf {x}}^{k})-F^*\right] \end{aligned}$$
(4.38)
$$\begin{aligned} \leqslant&\left\| {\mathbf {x}}^0-{\mathcal {P}}_{X^*}({\mathbf {x}}^{0})\right\| ^2+ 2\eta \textit{E}\left[ F({\mathbf {x}}^{0})-F^*\right] - \tfrac{2\eta }{m}\sum _{t=0}^k\textit{E}\left[ F({\mathbf {x}}^{t})-F^*\right] \nonumber \\ \leqslant&\left\| {\mathbf {x}}^0-{\mathcal {P}}_{X^*}({\mathbf {x}}^{0})\right\| ^2+ 2\eta \textit{E}\left[ F({\mathbf {x}}^{0})-F^*\right] - \tfrac{2\eta }{m}(k+1)\textit{E}\left[ F({\mathbf {x}}^{k+1})-F^*\right] . \end{aligned}$$
(4.39)

Hence, (4.27) follows.

When F is strongly convex with constant \(\mu \), we have

$$\begin{aligned}F({\mathbf {x}}^k)-F^*\geqslant \tfrac{\mu }{2}\Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\Vert ^2,\end{aligned}$$

and thus from (4.38), it follows that

$$\begin{aligned}&\textit{E}\left\| {\mathbf {x}}^{k+1}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k+1})\right\| ^2+2\eta \textit{E}[F({\mathbf {x}}^{k+1})-F^*]\\ \leqslant&\textit{E}\left\| {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2+ \left( 2\eta -\frac{2\eta ^2\mu }{m(1+\eta \mu )}\right) \textit{E}[F({\mathbf {x}}^{k})-F^*]\\&-\left( \frac{2\eta }{m}-\frac{2\eta ^2\mu }{m(1+\eta \mu )}\right) \frac{\mu }{2}\textit{E}\Vert {\mathbf {x}}^k-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\Vert ^2\\ =&\left( 1-\frac{\eta \mu }{m(1+\eta \mu )}\right) \left( \textit{E}\left\| {\mathbf {x}}^{k}-{\mathcal {P}}_{X^*}({\mathbf {x}}^{k})\right\| ^2+2\eta \textit{E}[F({\mathbf {x}}^{k})-F^*]\right) . \end{aligned}$$

Therefore, (4.28) follows, and we complete the proof.

5 Poisson Distribution

We can treat the asynchronous reading and writing as a queueing system. Assume the \(p+1\) processors have the same computing power (i.e., the same speed of reading and writing). At any time k, suppose the update to \({\mathbf {x}}_{i_k}\) is performed by the \(p_k\)th processor, which can be treated as a server with reading-and-writing speed (or service rate) one. All the other p processors can be treated as customers, each with speed (or arrival rate) one, where any update to \({\mathbf {x}}\) from these p processors is regarded as one customer's arrival. Under this setting, during the period from when the \(p_k\)th processor starts reading \({\mathbf {x}}\) until it finishes updating \({\mathbf {x}}_{i_k}\), there are p customer arrivals on average; hence, the delay \(j_k\) follows the Poisson distribution with parameter p. Summarizing the above discussion, we have the following result.

Claim

Suppose Algorithm 1 runs on a system with \(p+1\) processors that have the same speed of reading and writing during the iterations. Then the delay \(j_k\) follows the Poisson distribution with parameter p, i.e., for all k,

$$\begin{aligned} {\mathrm {Prob}}(j_k=t)=\frac{p^t \mathrm{e}^{-p}}{t!},\quad t=0,1,\cdots , \end{aligned}$$
(5.1)

which implies no delay if \(p=0\).

In general, if the processors have different computing power, \(j_k\) would follow a Poisson distribution whose parameter is the ratio of the total speed of the other p processors to that of the \(p_k\)th one. However, in a multi-core workstation with shared memory, the processors are usually identical and thus have the same computing ability. In the following, we assume the distribution in (1.8) to be the Poisson distribution with parameter p and discuss the convergence results obtained in the previous sections. First we give the values of the expected quantities used before.

Proposition 5.1

Suppose there are \(p+1\) processors and (5.1) holds. Then for any \(\rho >1\), we have that for all k,

$$\begin{aligned} T&=\textit{E}[{\mathbf {j}}]= p, \quad S=\textit{E}[{\mathbf {j}}^2]=p(p+1),\nonumber \\ M_\rho&=\textit{E}[\rho ^{{\mathbf {j}}}]=\mathrm{e}^{p(\rho -1)},\quad N_\rho =\textit{E}[{\mathbf {j}}\rho ^{{\mathbf {j}}}]=\rho p\mathrm{e}^{p(\rho -1)},\nonumber \\ \gamma _{\rho ,1}&=\tfrac{\mathrm{e}^{p(\sqrt{\rho }-1)}-1}{\sqrt{\rho }-1}, \quad \gamma _{\rho ,2}=\left( \tfrac{\rho p \mathrm{e}^{p(\rho -1)}-p}{1-\rho ^{-1}}\right) ^{1/2}, \end{aligned}$$
(5.2)

where \(\gamma _{\rho ,1}\) and \(\gamma _{\rho ,2}\) are defined in (4.13).
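
The closed forms in (5.2) follow from standard properties of the Poisson distribution (its mean, second moment, and moment-generating function). As a quick numerical check, the following Python sketch compares direct summation over the truncated pmf (5.1) with the closed forms; it is illustrative only (the paper's experiments were implemented in C++), and the values of p and ρ are arbitrary.

```python
import math

def poisson_pmf(p, t_max):
    # Prob(j = t) for t = 0, ..., t_max - 1, cf. (5.1), computed recursively
    # to avoid overflowing factorials.
    pmf = [math.exp(-p)]
    for t in range(1, t_max):
        pmf.append(pmf[-1] * p / t)
    return pmf

def expected_quantities(p, rho, t_max=200):
    # Direct summation over the (truncated) Poisson pmf.
    pmf = poisson_pmf(p, t_max)
    T = sum(t * q for t, q in enumerate(pmf))             # E[j]
    S = sum(t ** 2 * q for t, q in enumerate(pmf))        # E[j^2]
    M = sum(rho ** t * q for t, q in enumerate(pmf))      # E[rho^j]
    N = sum(t * rho ** t * q for t, q in enumerate(pmf))  # E[j rho^j]
    return T, S, M, N

p, rho = 10, 1.1
T, S, M, N = expected_quantities(p, rho)
print(T, p)                                  # T     = p
print(S, p * (p + 1))                        # S     = p(p+1)
print(M, math.exp(p * (rho - 1)))            # M_rho = e^{p(rho-1)}
print(N, rho * p * math.exp(p * (rho - 1)))  # N_rho = rho*p*e^{p(rho-1)}
# gamma_{rho,1} and gamma_{rho,2} as given in (5.2); the second is also
# recovered from the summed quantities N and T.
gamma1 = (math.exp(p * (math.sqrt(rho) - 1)) - 1) / (math.sqrt(rho) - 1)
gamma2 = math.sqrt((N - T) / (1 - 1 / rho))
print(gamma1, gamma2)
```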

The proof of this proposition is standard. From the quantities in (5.2) and the theorems, which we established in the previous sections, we make the following observations:

  1.

    If \(p=o(\sqrt{m})\), we can guarantee the convergence of Algorithm 1 for both smooth and nonsmooth problems by setting \(\eta \lessapprox \frac{1}{L_c}\) (see Theorems 3.2 and 4.1), where \(\lessapprox \) means “less than but close to”;

  2.

    If \(2\mathrm{e}^2(p+1)+p=o(\sqrt{m})\), then choosing \(\rho =1+\frac{1}{p}\), we obtain the convergence rate of Algorithm 1 in Theorem 3.4 by setting \(\eta \lessapprox \frac{2}{\mathrm{e}L_c}\). Then \(D\approx \frac{\eta }{m}\) in (3.23), and thus, near-linear speedup is achieved for solving convex smooth problems;

  3.

    If \(p=o(\root 4 \of {m})\), we can guarantee the convergence rate of Algorithm 1 in Theorem 4.2 by setting \(\eta \lessapprox \frac{1}{L_c}\) and thus achieve a near-linear speedup for convex nonsmooth problems; a numerical sketch of the corresponding stepsize bound (4.26) is given after this list.
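
To give a rough sense of how the stepsize condition (4.26) behaves under the Poisson delay model, the following Python sketch evaluates its right-hand side with \(\rho =1+\frac{1}{p}\) and \(\gamma _{\rho ,2}\) taken from (5.2). The values of m and of the Lipschitz constants \(L_c, L_f, L_r\) are placeholders chosen only for illustration, and the companion condition (4.14) is not reproduced here.

```python
import math

def gamma2(p, rho):
    # gamma_{rho,2} from (5.2) under the Poisson(p) delay model
    return math.sqrt((rho * p * math.exp(p * (rho - 1)) - p) / (1 - 1 / rho))

def stepsize_bound(m, p, L_c, L_f, L_r):
    # Right-hand side of (4.26) with rho = 1 + 1/p, as in observation 3
    rho = 1 + 1 / p
    g2 = gamma2(p, rho)
    return 1.0 / (L_c + 2 * L_f * g2 ** 2 / m + 2 * L_r * g2 / math.sqrt(m))

# Illustrative placeholder values (not taken from the paper's experiments):
m, L_c, L_f, L_r = 20000, 1.0, 1.0, 1.0
for p in (4, 9, 19, 39):   # p + 1 = 5, 10, 20, 40 threads
    print(p, stepsize_bound(m, p, L_c, L_f, L_r))
# The bound approaches 1/L_c when p is small relative to m^{1/4}.
```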

6 Numerical Experiments

In this section, we evaluate the numerical performance of Algorithm 1 on two problems: the LASSO problem and nonnegative matrix factorization (NMF). The tests were carried out on a machine with 64 GB of memory and two Intel Xeon E5-2690 v2 processors (20 cores, 40 threads). All of the experiments were coded in C\(++\), and its standard threading library was used for parallelization. We used the Eigen library for numerical linear algebra operations. To measure the delay, we used an atomic variable to track the number of iterations as defined in the paper. The atomic variable is incremented by one for each update. For each thread, the delay is calculated as the difference of the iteration counter values before and after the update. For LASSO, two different settings were used. The first sets the stepsize by the expected delay according to the analysis of this paper, and the other uses the maximum delay as in [12, 16] and is dubbed AsySCD. We compared async-BCU to the serial BCU, which can be regarded as a special case of Algorithm 1 with delay \(j_k\equiv 0,\,\forall k\). For NMF, we set the stepsize by the expected delay and tested the convergence behavior with different numbers of threads.
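
The delay bookkeeping just described can be illustrated with the following Python sketch: a shared iteration counter is incremented once per update, and each worker records the counter before reading \({\mathbf {x}}\) and when writing its block, the difference being the delay. The worker body, the thread count, and the use of a lock (in place of the C++ `std::atomic` counter of the actual implementation) are illustrative assumptions, not the experimental code.

```python
import threading

class DelayRecorder:
    """Minimal sketch of the delay measurement described above: a shared
    iteration counter is incremented once per update, and the delay of an
    update is the difference of the counter values at read and write time."""
    def __init__(self):
        self._counter = 0
        self._lock = threading.Lock()
        self.delays = []

    def snapshot(self):
        # counter value at the moment \hat{x}^k is read
        with self._lock:
            return self._counter

    def finish_update(self, read_count):
        # counter value at the moment x_{i_k} is written; record the delay
        with self._lock:
            delay = self._counter - read_count
            self._counter += 1
            self.delays.append(delay)
        return delay

def worker(rec, num_updates, do_block_update):
    for _ in range(num_updates):
        k_read = rec.snapshot()
        do_block_update()          # placeholder for the prox-linear block update
        rec.finish_update(k_read)

rec = DelayRecorder()
threads = [threading.Thread(target=worker, args=(rec, 1000, lambda: None))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(sum(rec.delays) / len(rec.delays))   # empirical mean delay
```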

6.1 Parameter Settings

According to Theorem 4.1, the following two stepsizes were used:

$$\begin{aligned} \text{ This } \text{ paper }:\ \eta&=\tfrac{1/L_c}{1+\kappa ^2p^2/(2m)}, \end{aligned}$$
(6.1a)
$$\begin{aligned} \text{ Max } \text{ delay }:\ \eta&=\tfrac{1/L_c}{1+\kappa ^2\tau ^2/(2m)}, \end{aligned}$$
(6.1b)

where \(\tau \) equals the maximum value in the generated sequence of delays.
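
For concreteness, the two stepsize rules in (6.1) can be computed as in the following sketch; the values of \(L_c\), \(\kappa \), m, p, and \(\tau \) shown are placeholders rather than the ones from the experiments. Since the maximum delay \(\tau \) is typically much larger than the expected delay p, rule (6.1b) is considerably more conservative.

```python
def stepsize_expected_delay(L_c, kappa, p, m):
    # (6.1a): stepsize based on the expected delay p
    return (1.0 / L_c) / (1.0 + kappa ** 2 * p ** 2 / (2 * m))

def stepsize_max_delay(L_c, kappa, tau, m):
    # (6.1b): stepsize based on the maximum recorded delay tau
    return (1.0 / L_c) / (1.0 + kappa ** 2 * tau ** 2 / (2 * m))

# Placeholder values; kappa and L_c are problem-dependent constants.
L_c, kappa, m = 1.0, 5.0, 2000
p, tau = 39, 120           # expected delay vs. a typical maximum delay
print(stepsize_expected_delay(L_c, kappa, p, m))
print(stepsize_max_delay(L_c, kappa, tau, m))
```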

6.2 LASSO

We measure the performance of Algorithm 1 on the LASSO problem [3]

$$\begin{aligned} \mathop {{{\mathrm{minimize}}}}\limits _{{\mathbf {x}}\in \mathbb {R}^n} \tfrac{1}{2}\Vert {\mathbf {A}}{\mathbf {x}}-{\mathbf {b}}\Vert _2^2+\lambda \Vert {\mathbf {x}}\Vert _1, \end{aligned}$$
(6.2)

where \({\mathbf {A}}\in \mathbb {R}^{N\times n}, {\mathbf {b}}\in \mathbb {R}^N\), and \(\lambda \) is a parameter balancing the fitting term and the regularization term. We randomly generated \({\mathbf {A}}\) and \({\mathbf {b}}\) following the standard normal distribution. The size was fixed to \(n = 2N\) with \(N=10\,000\), and \(\lambda =\frac{1}{N}\) was used. The Lipschitz constant is \(L_c = \max _i \Vert {\mathbf {A}}_i^\mathrm{T}{\mathbf {A}}_i\Vert _2\), where \({\mathbf {A}}_i\) represents the ith column block of \({\mathbf {A}}\).
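
The following Python sketch illustrates the data generation, the block Lipschitz constants, and one prox-linear (soft-thresholding) block update for (6.2). The sizes are scaled down, and the update shown is the serial special case of Algorithm 1; the actual experiments were implemented asynchronously in C++ with Eigen.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 2000          # scaled-down sizes (paper: N = 10000, n = 2N)
lam = 1.0 / N
A = rng.standard_normal((N, n))
b = rng.standard_normal(N)

# Partition x into m equal-sized blocks and compute the block Lipschitz constants
blk = 50
m = n // blk
L_blocks = [np.linalg.norm(A[:, i*blk:(i+1)*blk], 2) ** 2 for i in range(m)]
L_c = max(L_blocks)        # = max_i ||A_i^T A_i||_2

def prox_l1(v, t):
    # soft-thresholding: prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def block_update(x, i, eta):
    # one prox-linear step on block i (serial special case of Algorithm 1)
    sl = slice(i*blk, (i+1)*blk)
    grad_i = A[:, sl].T @ (A @ x - b)   # block partial gradient
    x[sl] = prox_l1(x[sl] - eta * grad_i, eta * lam)
    return x

x = np.zeros(n)
x = block_update(x, 0, 1.0 / L_c)
```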

Fig. 1

Delay distribution behaviors of Algorithm 1 for solving LASSO (6.2). The tested problem has 20 000 coordinates, and it was run with 5, 10, 20, and 40 threads. (Color figure online)

Figure 1 shows the delay distribution of Algorithm 1 with different numbers of threads. The blue bars form the normalized histogram, so the bar heights add up to 1. The orange curve is the probability mass function of the Poisson distribution. With 5 and 10 threads, we observe that the delays concentrate around 4 and 9, respectively. When the number of threads is relatively large, the actual delay distribution closely matches the theoretical distribution discussed in Sect. 5. For 20 threads, an interesting observation is that the empirical frequency is higher than the theoretical probability when the delay is around 9. We attribute this to the architecture of the testing environment, i.e., the average delay within one CPU is smaller than the average delay across the two CPUs. We observe a similar behavior when 40 threads are used.
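
The comparison in Fig. 1 amounts to binning the recorded delays into a normalized histogram and overlaying the Poisson pmf (5.1). A minimal sketch is given below; it uses synthetic Poisson-distributed delays in place of the delays recorded by the counter bookkeeping described earlier in this section.

```python
import math
import numpy as np
from collections import Counter

def empirical_vs_poisson(delays, p, t_max):
    # Normalized delay histogram (bar heights sum to 1 over all delays) and
    # the Poisson(p) pmf over t = 0, ..., t_max - 1, as compared in Fig. 1.
    counts = Counter(delays)
    hist = [counts.get(t, 0) / len(delays) for t in range(t_max)]
    pmf, q = [], math.exp(-p)
    for t in range(t_max):
        pmf.append(q)
        q *= p / (t + 1)
    return hist, pmf

# Synthetic delays for illustration only; in the experiments they come from
# the atomic-counter measurement.
delays = np.random.default_rng(0).poisson(lam=19, size=100000)  # 20 threads -> p = 19
hist, pmf = empirical_vs_poisson(delays, p=19, t_max=40)
print(max(abs(h - q) for h, q in zip(hist, pmf)))  # small if the two curves match
```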

Figure 2 plots the convergence behavior of Algorithm 1 running on 40 threads with different block sizes. We partition \({\mathbf {x}}\) into m equal-sized blocks with block sizes varying among \(\{10,~50,~100,~500\}\). The results of the serial randomized coordinate descent method are also plotted for comparison. Here, one epoch is equivalent to updating all coordinates once. Compared to the serial method, we observe that the delay does affect the convergence speed, and the effect becomes weaker as m increases. Hence, Algorithm 1 can have nearly linear speedup when the number of blocks is large. In addition, we note that the stepsize setting of AsySCD is too conservative, and Algorithm 1 with the stepsize set by the expected delay converges significantly faster. However, we observed that, in general, we could not take a larger stepsize than that in (6.1a); divergence was observed when larger stepsizes were used.

Fig. 2

Convergence behaviors of Algorithm 1 for solving the LASSO problem (6.2) with the stepsizes given in (6.1), and also of the serial randomized coordinate descent method. The tested problem has \(10\,000\) samples and \(20\,000\) coordinates that are evenly partitioned into m blocks. It was simulated as running with 40 threads. We ran 100 epochs for each experiment. (Color figure online)

Fig. 3

Delay distribution behaviors of Algorithm 1 for solving NMF (6.3). It was run with 5, 10, 20, and 40 threads. (Color figure online)

6.3 Nonnegative Matrix Factorization (NMF)

This section presents the numerical results of applying Algorithm 1 to the NMF problem [37]

$$\begin{aligned} \begin{array}{l} \mathop {{{\mathrm{minimize}}}}\limits _{{\mathbf {X}},{\mathbf {Y}}}\ \frac{1}{2}\Vert {\mathbf {X}}{\mathbf {Y}}^\top -{\mathbf {Z}}\Vert _F^2 \\ \text{ s.t. } {\mathbf {X}}\in \mathbb {R}^{M\times m}_+,\quad {\mathbf {Y}}\in \mathbb {R}^{N\times m}_+, \end{array} \end{aligned}$$
(6.3)

where \({\mathbf {Z}}\in \mathbb {R}^{M\times N}_+\) is a given nonnegative matrix. We generated \({\mathbf {Z}}={\mathbf {Z}}_L{\mathbf {Z}}_R^\top \) with the elements of \({\mathbf {Z}}_L\) and \({\mathbf {Z}}_R\) first drawn from the standard normal distribution and then projected into the nonnegative orthant. The size was fixed to \(M=N=10\,000\) and \(m=100\).

We treated one column of \({\mathbf {X}}\) or \({\mathbf {Y}}\) as one block coordinate, and during the iterations, every column of \({\mathbf {X}}\) was kept with unit norm. Therefore, the partial gradient Lipschitz constant equals one if a column of \({\mathbf {Y}}\) is selected for updating and \(\Vert {\mathbf {y}}_{i_k}^k\Vert _2^2\) if the \(i_k\)th column of \({\mathbf {X}}\) is selected. Since \(\Vert {\mathbf {y}}_{i_k}^k\Vert _2^2\) could approach zero, we set the Lipschitz constant to \(\max (0.001, \Vert {\mathbf {y}}_{i_k}^k\Vert _2^2)\). This modification guarantees convergence of the whole iterate sequence of the coordinate descent method [38]. Due to nonconvexity, global optimality cannot be guaranteed. Thus, we set the starting point close to \({\mathbf {Z}}_L\) and \({\mathbf {Z}}_R\). Specifically, we let \({\mathbf {X}}^0={\mathbf {Z}}_L+0.5\varvec{\Xi }_L\) and \({\mathbf {Y}}^0={\mathbf {Z}}_R+0.5\varvec{\Xi }_R\) with the elements of \(\varvec{\Xi }_L\) and \(\varvec{\Xi }_R\) following the standard normal distribution. All methods used the same starting point.
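
The block update used here is a projected (prox-linear) gradient step on one column with stepsize equal to the reciprocal of the clipped Lipschitz constant. The following Python sketch shows a scaled-down serial version; all sizes are placeholders, and rescaling the updated \({\mathbf {X}}\) column back to unit norm while absorbing the scale into the corresponding column of \({\mathbf {Y}}\) is one natural way to maintain the normalization, stated here as an assumption since the paper does not spell out that step.

```python
import numpy as np

rng = np.random.default_rng(0)
M_, N_, r = 200, 200, 10          # scaled-down sizes (paper: M = N = 10000, m = 100)
ZL = np.maximum(rng.standard_normal((M_, r)), 0)
ZR = np.maximum(rng.standard_normal((N_, r)), 0)
Z = ZL @ ZR.T

X = np.maximum(ZL + 0.5 * rng.standard_normal((M_, r)), 0)
Y = np.maximum(ZR + 0.5 * rng.standard_normal((N_, r)), 0)
X /= np.maximum(np.linalg.norm(X, axis=0), 1e-12)   # unit-norm columns of X

def update_block(X, Y, Z, j, update_X):
    # Projected prox-linear update of one column block of (6.3).  The
    # Lipschitz constant is max(0.001, ||y_j||^2) for a column of X and 1
    # for a column of Y (columns of X have unit norm).
    R = X @ Y.T - Z                                   # residual X Y^T - Z
    if update_X:
        L = max(1e-3, Y[:, j] @ Y[:, j])
        X[:, j] = np.maximum(X[:, j] - (R @ Y[:, j]) / L, 0.0)
        nrm = np.linalg.norm(X[:, j])
        if nrm > 0:                                   # renormalize (assumed scheme)
            X[:, j] /= nrm
            Y[:, j] *= nrm
    else:
        Y[:, j] = np.maximum(Y[:, j] - (R.T @ X[:, j]) / 1.0, 0.0)

for _ in range(50):                                   # serial sweeps for illustration
    for j in range(r):
        update_block(X, Y, Z, j, update_X=True)
        update_block(X, Y, Z, j, update_X=False)
print(np.linalg.norm(X @ Y.T - Z, 'fro') / np.linalg.norm(Z, 'fro'))
```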

Figure 3 shows the delay distribution of Algorithm 1 for solving NMF. The observation is similar to that in Fig. 1. Figure 4 plots the convergence results of Algorithm 1 running with 1, 5, 10, 20, and 40 threads. From the results, we observe that Algorithm 1 scales up to 10 threads for the tested problem. Degraded convergence is observed with 20 and 40 threads. This is mostly due to the following three reasons: (1) since the number of blocks is relatively small (\(m=200\)), as shown in (6.1a), using more threads leads to a smaller stepsize and hence slower convergence; (2) the gradient used for the current update is staler when a relatively large number of threads is used, which also leads to slow convergence; (3) high cache miss rates and false sharing also degrade the speedup performance.

Fig. 4

Convergence behaviors of Algorithm 1 for solving the NMF problem (6.3) with the stepsize set based on the expected delay. The size of the tested problem is \(M=N=10\,000\) and \(m=100\), i.e., 200 block coordinates, and the algorithm was tested with 1, 5, 10, 20, and 40 threads. (Color figure online)

7 Conclusions

We have analyzed the convergence of the async-BCU method for solving both convex and nonconvex problems in a probabilistic way. We showed that the algorithm is guaranteed to converge for smooth problems if the expected delay is finite and for nonsmooth problems if the variance of the delay is also finite. In addition, we established sublinear convergence of the method for weakly convex problems and linear convergence for strongly convex ones. The stepsize we obtained depends on certain expected quantities. Assuming the given \(p+1\) processors perform identically, we showed that the delay follows a Poisson distribution with parameter p and thus fully determined the stepsize. We have simulated the performance of the algorithm with the determined stepsize on the LASSO and nonnegative matrix factorization problems, and the numerical results validated our analysis.

8 Proofs of Lemmas

The following lemma is used in other proofs several times, and it is easy to verify.

Lemma A.1

For any scalar sequences \(\{a_{i,j}\}\) and \(\{b_i\}\), it holds that

$$\begin{aligned} \sum _{t=1}^{k-1}\sum _{d=k-t}^{k-1}a_{d,t}&=\sum _{d=1}^{k-1}\sum _{t=k-d}^{k-1}a_{d,t},\,\forall k \geqslant 0, \end{aligned}$$
(A.1)
$$\begin{aligned} \sum _{t=1}^k\sum _{d=0}^{t-1}a_{d,t}&=\sum _{d=0}^{k-1}\sum _{t=d+1}^ka_{d,t},\,\forall k\geqslant 0, \end{aligned}$$
(A.2)
$$\begin{aligned} \sum _{t=1}^k\sum _{d=1}^{t-1}a_{d,t}b_{t-d}&=\sum _{t=1}^{k-1}\left( \sum _{d=t+1}^ka_{d-t,d}\right) b_t,\,\forall k\geqslant 0. \end{aligned}$$
(A.3)
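
The three identities simply exchange the order of finite double summations; they can be checked numerically for a fixed k with arbitrary coefficients, as in the sketch below (the random integer tables are illustrative).

```python
import random

def check_identities(k, a, b):
    # Verify (A.1)-(A.3) for a given k and coefficient tables a[d][t], b[t].
    lhs1 = sum(a[d][t] for t in range(1, k) for d in range(k - t, k))
    rhs1 = sum(a[d][t] for d in range(1, k) for t in range(k - d, k))
    lhs2 = sum(a[d][t] for t in range(1, k + 1) for d in range(0, t))
    rhs2 = sum(a[d][t] for d in range(0, k) for t in range(d + 1, k + 1))
    lhs3 = sum(a[d][t] * b[t - d] for t in range(1, k + 1) for d in range(1, t))
    rhs3 = sum(sum(a[d - t][d] for d in range(t + 1, k + 1)) * b[t]
               for t in range(1, k))
    return lhs1 == rhs1 and lhs2 == rhs2 and lhs3 == rhs3

random.seed(0)
k = 7
a = [[random.randint(0, 9) for _ in range(k + 1)] for _ in range(k + 1)]
b = [random.randint(0, 9) for _ in range(k + 1)]
print(check_identities(k, a, b))   # True
```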

8.1 Proof of Lemma 3.3

Proof

Following the proof of Theorem 1 in [12], we have

$$\begin{aligned}&\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^t)\Vert ^2-\Vert \nabla f({\mathbf {x}}^{t+1})\Vert ^2\right] \nonumber \\ \leqslant&2\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^t)\Vert \cdot \Vert \nabla f({\mathbf {x}}^t)-\nabla f({\mathbf {x}}^{t+1})\Vert \right] \;(\text {from }\Vert {\mathbf {u}}\Vert ^2-\Vert {\mathbf {v}}\Vert ^2\leqslant 2\Vert {\mathbf {u}}\Vert \cdot \Vert {\mathbf {u}}-{\mathbf {v}}\Vert )\nonumber \\ \leqslant&2L_r\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^t)\Vert \cdot \Vert {\mathbf {x}}^t- {\mathbf {x}}^{t+1}\Vert \right] =2\eta L_r\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^t)\Vert \cdot \Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert \right] \nonumber \\ \leqslant&\eta L_r\left( \frac{1}{\sqrt{m}}\textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2+\sqrt{m}\textit{E}\Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert ^2\right) \nonumber \\ =&\frac{\eta L_r}{\sqrt{m}}\left( \textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2+\textit{E}\Vert \nabla f({\mathbf {x}}^{t-j_t})\Vert ^2\right) \nonumber \\ =&\frac{\eta L_r}{\sqrt{m}}\left( \textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2+\sum \limits _{r=0}^{t-1}q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{t-r})\Vert ^2+c_t\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right) \end{aligned}$$
(A.4)

and

$$\begin{aligned}&\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^{t+1})\Vert ^2-\Vert \nabla f({\mathbf {x}}^t)\Vert ^2\right] \nonumber \\ \leqslant&\textit{E}\left[ \Vert \nabla f({\mathbf {x}}^{t+1})+\nabla f({\mathbf {x}}^t)\Vert \cdot \Vert \nabla f({\mathbf {x}}^{t+1})-\nabla f({\mathbf {x}}^t)\Vert \right] \nonumber \\ \leqslant&L_r\textit{E}\left[ \left( 2\Vert \nabla f({\mathbf {x}}^t)\Vert +\Vert \nabla f({\mathbf {x}}^{t+1})-\nabla f({\mathbf {x}}^t)\Vert \right) \Vert {\mathbf {x}}^{t+1}- {\mathbf {x}}^t\Vert \right] \nonumber \\ \leqslant&L_r\textit{E}\left[ 2\Vert \nabla f({\mathbf {x}}^t)\Vert \cdot \Vert {\mathbf {x}}^{t+1}- {\mathbf {x}}^t\Vert +L_r\Vert {\mathbf {x}}^{t+1}- {\mathbf {x}}^t\Vert ^2\right] \nonumber \\ =&L_r\textit{E}\left[ 2\eta \Vert \nabla f({\mathbf {x}}^t)\Vert \cdot \Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert +\eta ^2 L_r\Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert ^2\right] \nonumber \\ \leqslant&L_r\textit{E}\left[ \frac{\eta }{\sqrt{m}}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2+\eta \sqrt{m}\Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert ^2+\eta ^2 L_r\Vert U_{i_t}\nabla f({\mathbf {x}}^{t-j_t})\Vert ^2\right] \nonumber \\ =&\frac{\eta L_r}{\sqrt{m}}\textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2+\left( \frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) \textit{E}\Vert \nabla f({\mathbf {x}}^{t-j_t})\Vert ^2\nonumber \\ =&\frac{\eta L_r}{\sqrt{m}}\textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2\nonumber \\&+\,\left( \frac{\eta L_r}{\sqrt{m}}{+}\frac{\eta ^2 L_r^2}{m}\right) \left( \sum \limits _{r=0}^{t-1}q_r\textit{E}\Vert \nabla f({\mathbf {x}}^{t-r})\Vert ^2+c_t\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\right) . \end{aligned}$$
(A.5)

We first show the first inequality in (3.18). Note that (3.17) gives us

$$\begin{aligned} \frac{1}{1-(1+M_\rho )\frac{\eta L_r}{\sqrt{m}}}\leqslant \rho . \end{aligned}$$
(A.6)

When \(t=0\), we have from (A.4) that \(\textstyle \Vert \nabla f({\mathbf {x}}^0)\Vert ^2-\textit{E}\Vert \nabla f({\mathbf {x}}^1)\Vert ^2\leqslant \frac{2 \eta L_r}{\sqrt{m}}\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\leqslant (1+M_\rho )\frac{\eta L_r}{\sqrt{m}}\Vert \nabla f({\mathbf {x}}^0)\Vert ^2.\) Hence, \(\Vert \nabla f({\mathbf {x}}^0)\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^1)\Vert ^2\) from (A.6). Now we assume that \(\textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^{t+1})\Vert ^2\) for all \(t\leqslant k-1\). For \(t=k\), it holds from (A.4) and the induction assumption that

$$\begin{aligned}&\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2-\textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2\\ \leqslant&\frac{\eta L_r}{\sqrt{m}}\left( \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2+\sum \limits _{t=0}^{k-1}q_t\rho ^t\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2+c_k\rho ^k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\right) \\ =&\frac{\eta L_r}{\sqrt{m}}\left( 1+\sum \limits _{t=0}^{k-1}q_t\rho ^t+c_k\rho ^k\right) \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2 \leqslant \frac{\eta L_r}{\sqrt{m}}(1+M_\rho )\cdot \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2. \end{aligned}$$

Hence, we have \(\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2\) from (A.6). Therefore, we finish the induction step, and thus, the first inequality of (3.18) holds.

Next we show the second inequality of (3.18). Since (3.17) implies \(\textstyle \eta \leqslant \frac{\rho -1}{\frac{L_r}{\sqrt{m}}\left( 1+M_\rho +\frac{(\rho -1)M_\rho }{\rho (1+M_\rho )}\right) }\),

$$\begin{aligned}&1+\frac{\eta L_r}{\sqrt{m}}+\left( \frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) M_\rho \nonumber \\ \overset{(3.17)}{\leqslant }&1+\frac{\eta L_r}{\sqrt{m}}(1+M_\rho )+M_\rho \frac{\eta L_r^2}{m}\frac{(\rho -1)\sqrt{m}}{\rho L_r(1+M_\rho )}\nonumber \\&=\,1+\frac{\eta L_r}{\sqrt{m}}\left( 1+M_\rho +\frac{(\rho -1)M_\rho }{\rho (1+M_\rho )}\right) \leqslant \rho . \end{aligned}$$
(A.7)

When \(t=0\), we have from (A.5) that

$$\begin{aligned} \textit{E}\Vert \nabla f({\mathbf {x}}^1)\Vert ^2-\Vert \nabla f({\mathbf {x}}^0)\Vert ^2&\leqslant \left( \frac{2\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) \Vert \nabla f({\mathbf {x}}^0)\Vert ^2\\&\leqslant \left( (1+M_\rho )\frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) \Vert \nabla f({\mathbf {x}}^0)\Vert ^2. \end{aligned}$$

Hence, \(\textit{E}\Vert \nabla f({\mathbf {x}}^1)\Vert ^2\leqslant \rho \Vert \nabla f({\mathbf {x}}^0)\Vert ^2\) holds from (A.7). Assume \(\textit{E}\Vert \nabla f({\mathbf {x}}^{t+1})\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^t)\Vert ^2\) for all \(t\leqslant k-1\). It follows from (A.5) and the induction assumption that

$$\begin{aligned}&\textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2-\textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2\\ \leqslant&\frac{\eta L_r}{\sqrt{m}} \textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2 \\&+\left( \frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) \left( \sum \limits _{t=0}^{k-1}q_t\rho ^t\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2+c_k\rho ^k\textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\right) \\ =&\left( \frac{\eta L_r}{\sqrt{m}}+\left( \frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) \left( \sum \limits _{t=0}^{k-1}q_t\rho ^t+c_k\rho ^k\right) \right) \textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2\\ \leqslant&\left( \frac{\eta L_r}{\sqrt{m}}+\left( \frac{\eta L_r}{\sqrt{m}}+\frac{\eta ^2 L_r^2}{m}\right) M_\rho \right) \textit{E}\Vert \nabla f({\mathbf {x}}^{k})\Vert ^2. \end{aligned}$$

Hence, from (A.7), \(\textit{E}\Vert \nabla f({\mathbf {x}}^{k+1})\Vert ^2\leqslant \rho \textit{E}\Vert \nabla f({\mathbf {x}}^k)\Vert ^2\) holds, and we complete the proof.