1 Introduction

In this work, we are concerned with the fundamental problem of approximating sparse solutions of large-scale linear systems of the form

$$ \mathbf{A} x = b $$
(1.1)

with matrix \(\mathbf {A} \in \mathbb {R}^{m \times n}\) and right-hand side \(b \in \mathbb {R}^{m}\). Linear systems like (1.1) arise in many fields of engineering and physics, such as sensor networks [47], signal processing [14], partial differential equations [36], filtering [20], computerized tomography [17], optimal control [37], inverse problems [18, 40], and machine learning. When A is too large to fit in memory, direct methods for solving (1.1) are not feasible and iterative methods are preferred. As long as one can afford full matrix-vector products or the system matrix fits in memory, Krylov methods including the conjugate gradient (CG) algorithm [45] are the industry standard. On the other hand, randomized methods such as the randomized (block) Kaczmarz method [19, 46] and the coordinate descent method [35] are effective if a single matrix-vector product is too expensive, and in some situations they are even more efficient than the CG method (see, e.g., [46] for an example). Linear convergence of the Kaczmarz method has been shown in the randomized case in [46], and [39] analyzes convergence rates in the deterministic case. Moreover, block Kaczmarz methods [28, 29, 31, 38, 42] have received much attention for their high efficiency in solving (1.1) and their suitability for distributed implementations. In this work, we propose and analyze a block variant of the randomized sparse Kaczmarz method [43] and show that parallel computations and averaging as in [28] lead to faster convergence.

1.1 Related work

Randomized Kaczmarz

In the large data regime, the randomized Kaczmarz (RK) method is a popular iterative method for solving linear systems. In each iteration, a row vector \({a_{i}^{T}}\) of A is chosen at random from the system (1.1) and the current iterate \(x_{k}\) is projected onto the solution space of that equation to obtain \(x_{k+1}\). Geometrically, at each iteration

$$ x_{k+1} = \operatorname*{arg min}_{x \in \mathbb{R}^{n}} \|x - x_{k}\|^{2}_{2} \quad \text{s.t.} \quad \langle a_{i}, x \rangle = b_{i}. $$

It has been observed that the convergence of RK method can be accelerated by introducing relaxation. In a relaxed variant of RK, a step is taken in the direction of this projection with the size of the step depending on a relaxation parameter. Explicitly, the relaxed RK update is given by

$$ \begin{aligned} x_{k+1} = x_{k} - w_{k,i} \frac{\langle a_{i}, x_{k} \rangle-b_{i}}{\|{a_{i}}\|^{2}_{2}} \cdot a_{i} , \end{aligned} $$
(1.2)

with initial value \(x_{0} = 0\), where the \(w_{k,i}, i \in \{1, \dots , m\}\), are relaxation parameters. Note that this update rule requires low cost per iteration and storage of order \({\mathscr{O}} (n)\). For consistent systems, the relaxation parameters must satisfy

$$ 0 < \liminf_{k \rightarrow \infty}w_{k,i} \leq \limsup_{k \rightarrow \infty}w_{k,i} < 2 $$
(1.3)

to ensure convergence [15]. Fixing the relaxation parameters \(w_{k,i} = 1\) for all iterations k and indices i leads to the standard RK method. In [29], a block Kaczmarz variant under the name randomized block Kaczmarz (RBK) has been analyzed. Linear convergence in expectation was shown for consistent systems of equations, with a rate depending on the geometric properties of the matrix, its submatrices, and on the size of the blocks. The convergence rate given in [29] depends on the block size and the stochastic conditioning parameter of the most ill-conditioned block of the partition when a partition is used and on the most ill-conditioned block of the entire matrix A when the indices are sampled i.i.d. with replacement. The paper [31] considers more general sampling strategies such as sampling from a partition of the rows of the matrix. In [10], the authors investigate an extension of the randomized averaged block Kaczmarz method that can solve least squares problems. A parallel version of RK where a weighted average of independent updates is used was studied in [28]. They showed that as the number of threads increases, the rate of convergence improves and the convergence horizon for inconsistent systems decreases. Another more general class of block methods are sketch-and-project methods [13, 42]. For a linear system Ax = b, sketch-and-project methods iteratively project the current iterate onto the solution space of a sketched subsystem \(\mathbf{S}^{T}\mathbf{A}x = \mathbf{S}^{T}b\). In particular, RK is a sketch-and-project method with S chosen as a single column of the identity matrix.
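For concreteness, the relaxed RK update (1.2) can be sketched in a few lines of NumPy. This is only an illustrative implementation of ours; the choice of row probabilities \(p_{i} = \|a_{i}\|_{2}^{2}/\|\mathbf{A}\|_{F}^{2}\) and a fixed relaxation parameter w are assumptions made here for simplicity.

```python
import numpy as np

def relaxed_rk(A, b, n_iters=1000, w=1.0, rng=None):
    """Sketch of the relaxed randomized Kaczmarz update (1.2), with x_0 = 0."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)   # ||a_i||_2^2
    probs = row_norms_sq / row_norms_sq.sum()    # one common choice of p_i
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)               # pick a row at random
        x -= w * (A[i] @ x - b[i]) / row_norms_sq[i] * A[i]
    return x
```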

Randomized sparse Kaczmarz

Recently, a new variant of the standard RK method, namely the randomized sparse Kaczmarz method (RSK) [24, 38, 43], with almost the same low cost and storage requirements, has shown good performance in approximating sparse solutions of large consistent linear systems. It uses two variables \(x_{k}^{*}\) and \(x_{k}\), and the relaxed RSK update is given by

$$ \begin{aligned} x_{k+1}^{*} &= x_{k}^{*} - w_{k,i} \frac{\langle a_{i}, x_{k} \rangle-b_{i}}{\|{a_{i}}\|^{2}_{2}} \cdot a_{i} ,\\ x_{k+1} &= S_{\lambda}(x_{k+1}^{*}) \end{aligned} $$
(1.4)

with initial values \(x_{0}=x_{0}^{*}=0\), λ > 0, and the soft shrinkage operator

$$ S_{\lambda}(x) = \max\{|x|-\lambda,0\} \cdot \text{sign}(x) . $$
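In NumPy, the soft shrinkage operator acts componentwise and can be written as a one-line helper; this small snippet is our own and is reused in the sketches below.

```python
import numpy as np

def soft_shrinkage(x, lam):
    """S_lambda(x) = max(|x| - lambda, 0) * sign(x), applied componentwise."""
    return np.maximum(np.abs(x) - lam, 0.0) * np.sign(x)
```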

Fixing the relaxation parameters \(w_{k,i} = 1\) for all iterations k and indices i leads to the standard RSK method. For consistent systems, the iterates of the standard RSK method converge in expectation to the solution of the regularized Basis Pursuit Problem

$$ \underset{x \in \mathbb{R}^{n}}{\min} \lambda \cdot \|{x}\|_{1} + \frac{1}{2} \cdot \|{x}\|^{2}_{2} \quad \text{s.t.}\quad \mathbf{A}x=b . $$
(1.5)

The advantage of (1.5) is the strong convexity property of the objective function. It is important to note that (see [11]) for a large but finite parameter λ > 0, the solution of (1.5) gives a solution of

$$ \underset{x \in \mathbb{R}^{n}}{\min} \|{x}\|_{1} \quad \text{s.t.}\quad \mathbf{A}x=b , $$
(1.6)

which is the famous Basis Pursuit Problem [6]. The 1-norm has been used in many applications with the goal of obtaining sparse or even sparsest solutions of underdetermined systems of linear equations and least-squares problems, which is the basis of the theory of compressed sensing [4, 7]. The sparsest solution is given by minimizing the so-called zero-norm \(\|x\|_{0}\), counting the number of nonzero components in x. However, this problem is computationally intractable even for the simplest instances due to its combinatorial nature (it is even strongly NP-complete, see problem [MP5] in the seminal reference [12]). In [5, 8], reasonable conditions are given under which a solution of (1.6) is a sparsest solution. In [44], an extension of the RSK with linear expected convergence has been proposed for solving sparse least squares and impulsive noise problems while requiring only one additional column of the system matrix in each iteration.

Block sparse Kaczmarz methods

In this setting, a subset of rows \(\mathbf {A}_{\tau _{k}}\) is used at each iteration, with \(\tau _{k} \subseteq \left \{ 1,\dots ,m \right \}\) and \(|\tau_{k}| > 1\), where \(|\tau_{k}|\) denotes the cardinality of the set of indices \(\tau_{k}\). Two approaches are common. The first variant is simply a block generalization of the basic sparse Kaczmarz method and the update is given by

$$ \begin{aligned} x_{k+1}^{*} &= x_{k}^{*} - w_{k,i} \frac{ \mathbf{A}_{\tau_{k}}^{T}(\mathbf{A}_{\tau_{k}}x_{k} - b_{\tau_{k}})}{\|\mathbf{A}_{\tau_{k}}\|^{2}_{2}} ,\\ x_{k+1} &= S_{\lambda}(x_{k+1}^{*}) \end{aligned} $$
(1.7)

Such block variants are considered, e.g., in [29, 31] (with λ = 0) and in [23, 38] for λ ≥ 0, and we refer to this iterative process as the block sparse Kaczmarz method. When \(|\tau_{k}| = m\), we refer to (1.7) as the linearized Bregman method [2, 23, 48], and when \(|\tau_{k}| = 1\), as the randomized sparse Kaczmarz method [24, 43]. The main drawback of (1.7) is that it is not well suited for distributed implementations. The second variant of block sparse Kaczmarz can take advantage of distributed computing: each iteration takes η steps of the relaxed randomized sparse Kaczmarz method, independently in parallel, averages the results, and applies the soft shrinkage to form the next iterate. This leads to the following iteration:

$$ \begin{array}{@{}rcl@{}} x_{k+1}^{*} &=& x_{k}^{*} - \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i}\frac{\langle a_{i}, x_{k} \rangle-b_{i}}{\|{a_{i}}\|^{2}_{2}} \cdot a_{i} ,\\ x_{k+1} &=& S_{\lambda}(x_{k+1}^{*}) \end{array} $$
(1.8)

with initial values \(x_{0}=x_{0}^{*}=0\), where \(\tau _{k} \subseteq \left \{ 1,\dots ,m \right \}\) denotes a random set of η row indices sampled with replacement (the i th row is chosen with probability \(p_{i}\)) and \(w_{i}\) represents the weight corresponding to the i th row. Such block variants are considered, e.g., in [13, 27,28,29, 42] (with λ = 0). Method (1.8) is the main method presented and analyzed in this paper and we refer to it as the randomized sparse Kaczmarz with averaging (RSKA) method, detailed in Algorithm 1. Note that the update (1.8) is easy to implement on distributed computing units, and it is comparable in terms of cost per iteration to the basic sparse Kaczmarz update, i.e., of order \({\mathscr{O}} (\eta n)\). If \(\tau_{k}\) is a set of one index, we recover the relaxed RSK method, and if in addition the weights are chosen as \(w_{i} = 1\) for \(i \in \left \{ 1,\dots ,m \right \}\), we recover the standard RSK method.

Algorithm 1

Randomized sparse Kaczmarz with averaging (RSKA).
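To fix ideas, a minimal Python sketch of the RSKA iteration (1.8) follows. It is our own illustration, not the reference implementation: the default probabilities \(p_{i} \propto \|a_{i}\|_{2}^{2}\) and uniform weights \(w_{i} = 1\) are choices made here for simplicity.

```python
import numpy as np

def rska(A, b, lam=1.0, eta=10, n_iters=1000, weights=None, probs=None, rng=None):
    """Sketch of randomized sparse Kaczmarz with averaging, cf. update (1.8).

    Per iteration: sample eta row indices with replacement, average the eta
    weighted sparse Kaczmarz directions, then apply soft shrinkage.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    if probs is None:                         # one common choice, p_i ~ ||a_i||^2
        probs = row_norms_sq / row_norms_sq.sum()
    if weights is None:                       # uniform weights w_i = 1
        weights = np.ones(m)
    x_dual = np.zeros(n)                      # x_k^*
    x = np.zeros(n)                           # x_k = S_lambda(x_k^*)
    for _ in range(n_iters):
        tau = rng.choice(m, size=eta, p=probs)          # sampled with replacement
        step = np.zeros(n)
        for i in tau:                                   # could run in parallel
            step += weights[i] * (A[i] @ x - b[i]) / row_norms_sq[i] * A[i]
        x_dual -= step / eta
        x = np.maximum(np.abs(x_dual) - lam, 0.0) * np.sign(x_dual)   # S_lambda
    return x
```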

1.2 Contribution and organization

To the best of our knowledge, the block variant (1.8) has not previously been proposed or analyzed for the randomized sparse Kaczmarz method. In this work, we make the following contributions:

  • We propose a mini-batch version of the randomized sparse Kaczmarz method, termed RSKA, a general algorithm that unifies a variety of other methods such as the randomized Kaczmarz and the randomized sparse Kaczmarz methods together with their relaxed variants. It is theoretically well-motivated, can exploit parallel computation, and converges linearly in expectation.

  • We prove that our proposal leads to faster convergence than its standard counterpart. We also validate this empirically and we provide implementations of our algorithm in Python.

The remainder of the paper is organized as follows. Section 2 provides a brief overview of convexity and Bregman distances. In Section 3, we give several interpretations of our method. Section 4 provides convergence guarantees for our proposed method. In Section 5, numerical experiments demonstrate the effectiveness of RSKA and provide several insights regarding its behavior and its hyper-parameters. Finally, Section 6 draws some conclusions.

1.3 Notation

In this section, we introduce notation that will be used throughout. The first m integers are denoted by \([m] \overset {\text {def}}{=} \{1, 2, \dots , m \}\). Given a symmetric positive definite matrix B, we equip the space \(\mathbb {R}^{n}\) with the weighted inner product defined by

$$ \langle x, y \rangle_{\mathbf{B}} \overset{\text{def}}{=} \langle x, \mathbf{B} y \rangle = \sum\limits_{i,j \in [n]} x_{i} \mathbf{B}_{ij} y_{j} , \quad x, y \in \mathbb{R}^{n} $$

We also define the induced norm \(\|x\|_{\mathbf {B}}^{2} \overset {\text {def}}{=} \langle x, x \rangle _{\mathbf {B}}\) and use the short-hand notation \(\|\cdot\|\) for \(\|\cdot\|_{\mathbf{I}}\), the standard 2-norm.

Let \(\mathbf {A} \in \mathbb {R}^{m \times n}\) be a real matrix. By \(\mathbf{Range}(\mathbf{A})\), \(\|\mathbf{A}\|_{F}\), and \({a_{i}^{T}}\), we denote its range space, Frobenius norm, and i th row, respectively, and by \(\mathbf{A}^{\dagger}\), we denote its (Moore-Penrose) pseudo-inverse. The indicator function of a set C is denoted by

$$ \delta_{C}(x) \overset{\text{def}}{=} \begin{cases} 0, & \text{if } x \in C\\ +\infty, & \text{if } x \not \in C \end{cases} $$

By \(e_{i}\), we denote the i th column of the identity matrix \(\mathbf {I}_{n} \in \mathbb {R}^{n \times n}\). For a random vector \(x_{i}\) that depends on a random index i ∈ [q] (where i is chosen with probability \(p_{i}\)), we denote \(\mathbb {E}_{i \sim p} [x_{i}] \overset {\text {def}}{=} {\sum }_{i\in [q]} p_{i}x_{i}\) and we will just write \(\mathbb E [x_{i}]\) when the probability distribution is clear from the context. Let \(\sigma_{i}(\mathbf{A})\) be the i th singular value of A (when ordered decreasingly) and \(\sigma _{\min \limits }(\mathbf {A})\) and \(\sigma _{\max \limits }(\mathbf {A})\) be the smallest and largest singular values of A, respectively. They are given by

$$ \sigma_{\min}(\mathbf{A}) \overset{\text{def}}{=} \underset{x \in \mathbb{R}^{n},\ x\neq0} {\min} \frac{\|\mathbf{A} x\|_{2}}{\|x\|_{2}} \quad \text{and} \quad \sigma_{\max}(\mathbf{A})\overset{\text{def}}{=} \sigma_{1}(\mathbf{A})\overset{\text{def}}{=} \underset{x \in \mathbb{R}^{n},\ x\neq0}{\max} \frac{\|\mathbf{A} x\|_{2}}{\|x\|_{2}} $$
(1.9)

Finally, we will need the following result: if A is a symmetric positive semi-definite matrix, the largest singular value of A (the \(\ell_{2}\)-induced matrix norm, or spectral norm) can equivalently be written as

$$ \|\mathbf{A}\|_{2} \overset{\text{def}}{=}\sigma_{\max}(\mathbf{A}) = \underset{x \in \mathbb{R}^{n},\ x\neq0}{\max} \frac{|\langle \mathbf{A} x, x \rangle|}{\|x\|_{2}^{2}} = \underset{x \in \mathbb{R}^{n},\ x\neq0}{\max} \frac{\|\mathbf{A} x\|_{2}}{\|x\|_{2}}. $$
(1.10)

Thus clearly

$$ \frac{|\langle x, x \rangle_{\mathbf{ A}}|}{\|x\|_{2}^{2}} \leq \sigma_{\max}(\mathbf{A}). $$

Let \(\tilde {\sigma }_{\min \limits }(\mathbf {A}) \overset {\text {def}}{=} \min \limits \{\sigma _{\min \limits }(\mathbf {A}_{J}) \mid J\subseteq [n], \mathbf {A}_{J}\neq 0 \}\) where \(\mathbf{A}_{J}\) denotes the submatrix of A that is built up by the columns indexed by J and \(| x|_{\min \limits } \overset {\text {def}}{=} \min \limits \{|x_{j}| \mid x_{j}\neq 0\}\).
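As a small illustration of these quantities (our own toy example, using a tall matrix so that \(\sigma_{\min}(\mathbf{A})\) in the sense of (1.9) is positive):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))            # tall matrix, full column rank a.s.

sigmas = np.linalg.svd(A, compute_uv=False)  # singular values, decreasingly ordered
sigma_max = sigmas[0]                        # sigma_max(A) = sigma_1(A)
sigma_min = sigmas[-1]                       # sigma_min(A) as in (1.9)
frob = np.linalg.norm(A, 'fro')              # ||A||_F

print(sigma_max, sigma_min, frob)
print(np.isclose(sigma_max, np.linalg.norm(A, 2)))   # spectral norm equals sigma_1
```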

2 Basic notions

First, we recall some well-known concepts and properties of convex functions and Bregman distances, and later give upper bounds on the singular values of sums of matrices. Let \(f:\mathbb {R}^{n} \to \mathbb {R}\) be convex (note that we assume that f is finite everywhere, hence also continuous). The subdifferential of f, defined by

$$ \partial f(x) \overset{\text{def}}{=} \left\{x^{*} \in \mathbb{R}^{n} \mid f(y) \ge f(x) + \langle x^{*}, y-x \rangle \text{ for all } y \in \mathbb{R}^{n}\right\} $$

is nonempty, compact, and convex at any \(x \in \mathbb {R}^{n}\).

The function \(f:\mathbb {R}^{n} \to \mathbb {R}\) is said to be α-strongly convex, if for all \(x,y \in \mathbb {R}^{n}\) and subgradients xf(x), we have

$$ f(y) \ge f(x) + \langle x^{*}, y-x \rangle + \frac{\alpha}{2} \cdot \|{y-x}\|^{2}_{2} . $$

If f is α-strongly convex, then f is coercive, i.e.,

$$ \lim_{\|{x}\|_{2} \to \infty} f(x)=\infty , $$

and its Fenchel conjugate \(f^{*}:\mathbb {R}^{n} \to \mathbb {R}\) given by

$$ f^{*}(x^{*})\overset{\text{def}}{=} \sup_{y \in \mathbb{R}^{n}} \langle{x^{*}},{y}\rangle - f(y) $$

is also convex, finite everywhere, and coercive.

Additionally, \(f^{*}\) is differentiable with a Lipschitz-continuous gradient with constant \(L_{f^{*}}=\frac {1}{\alpha }\), i.e., for all \(x^{*},y^{*} \in \mathbb {R}^{n}\), we have

$$ \|{\nabla f^{*}(x^{*})-\nabla f^{*}(y^{*})}\|_{2} \le L_{f^{*}} \cdot \|{x^{*}-y^{*}}\|_{2} , $$

which implies the estimate

$$ f^{*}(y^{*}) \le f^{*}(x^{*}) -\langle{\nabla f^{*}(x^{*})},{y^{*}-x^{*}}\rangle + \frac{L_{f^{*}}}{2} \cdot \|{x^{*}-y^{*}}\|^{2}_{2} . $$
(2.1)

Example 2.1

The objective function

$$ f(x) \overset{\text{def}}{=} \lambda \cdot \|{x}\|_{1} + \frac{1}{2} \cdot \|{x}\|^{2}_{2} $$
(2.2)

is strongly convex with constant α = 1 and its conjugate function can be computed with the soft shrinkage operator as

$$ f^{*}(x^{*}) = \frac{1}{2} \cdot \|{S_{\lambda}(x^{*})}\|^{2}_{2} \quad \text{with} \quad \nabla f^{*}(x^{*}) = S_{\lambda}(x^{*}) . $$
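A quick numerical sanity check of this example (a script of our own; the value of λ and the tolerance are arbitrary) confirms that the gradient of \(f^{*}(x^{*}) = \frac{1}{2}\|S_{\lambda}(x^{*})\|_{2}^{2}\) is indeed the soft shrinkage operator:

```python
import numpy as np

lam = 0.7
f_conj = lambda z: 0.5 * np.sum(np.maximum(np.abs(z) - lam, 0.0)**2)   # f*(x*)
soft = lambda z: np.maximum(np.abs(z) - lam, 0.0) * np.sign(z)          # S_lambda

rng = np.random.default_rng(1)
z = rng.standard_normal(5)

# central finite differences of f* should match S_lambda(z)
eps = 1e-6
grad_fd = np.array([(f_conj(z + eps * e) - f_conj(z - eps * e)) / (2 * eps)
                    for e in np.eye(5)])
print(np.allclose(grad_fd, soft(z), atol=1e-5))   # True
```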

Definition 2.2

The Bregman distance \(D_{f}^{x^{*}}(x,y)\) between \(x,y \in \mathbb {R}^{n}\) with respect to f and a subgradient xf(x) is defined as

$$ D_{f}^{x^{*}}(x,y) \overset{\text{def}}{=} f(y)-f(x) -\langle{x^{*}},{y - x}\rangle . $$

Fenchel’s equality states that \(f(x) + f^{*}(x^{*}) = \langle x, x^{*} \rangle\) if \(x^{*} \in \partial f(x)\) and implies that the Bregman distance can be written as

$$ D_{f}^{x^{*}}(x,y) = f^{*}(x^{*})-\langle x^{*},y \rangle + f(y). $$

Example 2.3 (cf. [43])

For \(f(x)=\frac {1}{2} \cdot \|{x}\|^{2}_{2}\), we just have \(\partial f(x) = \{x\}\) and \(D_{f}^{x^{*}}(x,y)=\frac {1}{2}\|{x-y}\|^{2}_{2}\). For \(f(x) = \lambda \cdot \|{x}\|_{1} + \frac {1}{2} \cdot \|{x}\|^{2}_{2}\) and any \(x^{*} = x + \lambda s \in \partial f(x)\), we have

$$ D_{f}^{x^{*}}(x,y)=\frac{1}{2} \cdot \|{x-y}\|^{2}_{2} + \lambda \cdot(\|{y}\|_{1}-\langle s, y \rangle) . $$
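The closed form of Example 2.3 is easy to verify numerically by evaluating Definition 2.2 directly; the concrete vectors and the value of λ in the following snippet are our own arbitrary choices.

```python
import numpy as np

lam = 0.5
f = lambda z: lam * np.sum(np.abs(z)) + 0.5 * np.sum(z**2)

rng = np.random.default_rng(2)
x = rng.standard_normal(6)
y = rng.standard_normal(6)

s = np.sign(x)                 # subgradient of ||.||_1 at x (all x_i nonzero a.s.)
x_star = x + lam * s           # x* = x + lam*s lies in the subdifferential of f at x

D_def = f(y) - f(x) - x_star @ (y - x)                                    # Definition 2.2
D_formula = 0.5 * np.sum((x - y)**2) + lam * (np.sum(np.abs(y)) - s @ y)  # Example 2.3
print(np.isclose(D_def, D_formula))   # True
```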

The following properties are crucial for the convergence analysis of the randomized algorithms. They immediately follow from the definition of the Bregman distance and the assumption of strong convexity of f, cf. [23]. For all \(x,y,z \in \mathbb {R}^{n}\) and \(x^{*} \in \partial f(x)\), \(y^{*} \in \partial f(y)\), \(z^{*} \in \partial f(z)\), we have

$$ \frac{\alpha}{2} \|{x-y}\|^{2}_{2} \leq D_{f}^{x^{*}}(x,y) \leq \langle{x^{*}-y^{*}},{x-y}\rangle \leq \|{x^{*}-y^{*}}\|_{2} \cdot \|{x-y}\|_{2} $$
(2.3)
$$ D_{f}^{x^{*}}(x,y) + D_{f}^{y^{*}}(y,z) - D_{f}^{x^{*}}(x,z) = \langle{x^{*} - y^{*}},{z-y}\rangle $$
(2.4)

Note that if f is differentiable with a Lipschitz-continuous gradient, then we also have the (better) upper estimate \(D_{f}^{x^{*}}(x,y) \le L_{f} \cdot \|{x-y}\|^{2}_{2}\), but in general, this need not be the case.

The following theorem will be used in the convergence analysis, more precisely in Lemma 4.10.

Theorem 2.4 ([16, Theorem 3.3.16(c)])

Let \(\mathbf {A}, \mathbf {B} \in \mathbb {R}^{m\times n}\) be given and let \(p = \min \limits \{m,n\}\). Then it holds for the decreasingly ordered singular values of A,B,A + B that

$$ |\sigma_{i}(\mathbf{A+B}) - \sigma_{i}(\mathbf{A}) | \leq \sigma_{1}(\mathbf{B}), \quad \text{for} \quad i \in [p]. $$

In particular, we have

$$ \sigma_{i}(\mathbf{A}+\mathbf{B}) \geq \sigma_{i}(\mathbf{A}) - \sigma_{1}(\mathbf{B}), \quad \text{for} \quad i \in [p]. $$
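A short random test of Theorem 2.4 (a Weyl-type perturbation bound for singular values); the matrices below are arbitrary Gaussian examples of our own.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5))
B = rng.standard_normal((8, 5))

sA = np.linalg.svd(A, compute_uv=False)         # decreasing singular values of A
sAB = np.linalg.svd(A + B, compute_uv=False)    # ... and of A + B
sigma1_B = np.linalg.norm(B, 2)                 # sigma_1(B)

print(np.all(np.abs(sAB - sA) <= sigma1_B + 1e-12))   # True
```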

3 Interpretations

We can view the randomized sparse Kaczmarz with averaging algorithm as an optimization method for solving a specific primal or dual optimization problem. More precisely, the RSKA algorithm is a particular case of the following.

3.1 Randomized block/parallel coordinate descent

Consider the regularized Basis Pursuit Problem as the primal problem

$$ \underset{x \in \mathbb{R}^{n}}{\min} \lambda \cdot \|{x}\|_{1} + \frac{1}{2} \cdot \|{x}\|^{2}_{2} \quad \text{s.t.}\quad \mathbf{A}x=b . $$
(3.1)

The dual of optimization problem (3.1) takes the form of a quadratic program:

$$ \underset{y \in \mathbb{R}^{m}}{\min} \frac{1}{2} \cdot \|{S_{\lambda}(\mathbf{A}^{T}y)}\|^{2}_{2} - \langle{b},{y}\rangle . $$
(3.2)

where the primal variable x and the dual variable y are related through the relation \(x = S_{\lambda }(\mathbf {A}^{T} y)\). Let us define the primal and dual objective functions

$$ f(x) = \lambda \cdot \|{x}\|_{1} + \frac{1}{2} \cdot \|{x}\|^{2}_{2} + \delta_{\{0\}}(b-\mathbf{A}x) $$

and

$$ g(y) = \frac{1}{2} \cdot \|{S_{\lambda}(\mathbf{A}^{T}y)}\|^{2}_{2} - \langle{b},{y}\rangle, $$

respectively. One iteration of the RSKA algorithm can be viewed as one step of randomized block coordinate descent (RBCD) applied to the dual problem (3.2) when the weights \(w_{i}\) are chosen in a particular form [38]. More formally, a negative gradient step in the randomly chosen \(i_{k}\)th component of y, with \(\nabla _{i_{k}} g(y) = a_{i_{k}}^{T}S_{\lambda }(\mathbf {A}^{T} y) - b_{i_{k}}\) and step size \(t_{k} = \frac {1}{\|a_{i_{k}}\|^{2}_{2}}\), yields

$$ \begin{array}{@{}rcl@{}} y_{k+1} &= y_{k} - \frac{1}{\|a_{i_{k}}\|^{2}_{2}} \nabla_{i_{k}} g(y_{k}) e_{i_{k}} \end{array} $$

We easily recover (1.4) by multiplying this update with \(\mathbf{A}^{T}\) and using the relation between the primal and dual variables given by \(x = S_{\lambda }(\mathbf {A}^{T} y)\). Consider again the dual problem (3.2). If we choose the particular weights \(w_{k,i} = \frac {\|a_{i}\|^{2}_{2}}{{\sum }_{i \in \tau _{k}}\|a_{i}\|^{2}_{2}}\), the block coordinate descent method applied to g from (3.2) reads

$$ \begin{aligned} x_{k+1}^{*} &= x_{k}^{*} - \frac{1}{\eta{\sum}_{i \in \tau_{k}}\|a_{i}\|^{2}_{2}} \sum\limits_{i \in \tau_{k}} (\langle{a_{i}},{x_{k}}\rangle-b_{i}) \cdot a_{i} ,\\ x_{k+1} &= S_{\lambda}(x_{k+1}^{*}) \end{aligned} $$

Parallel coordinate descent [41] applied to the dual problem (3.2) with learning rate \(t_{k} = \frac {1}{\eta \|a_{i}\|^{2}_{2}}\) yields

$$ y_{k+1} = y_{k} - \sum\limits_{i \in \tau_{k}} \frac{1}{\eta\|a_{i}\|^{2}_{2}} \nabla_{i} g(y_{k}) e_{i} $$
(3.3)

In update (3.3), only the coordinates \(i \in \tau_{k}\) are updated in \(y_{k+1}\) and the remaining coordinates are unchanged. Multiplying this update with \(\mathbf{A}^{T}\) and using the relation between the primal and dual variables, we recover (1.8) with weights \(w_{i} = 1\). However, for general weights \(w_{i}\), the RSKA algorithm cannot be interpreted in these ways, and thus our scheme is more general. It is important to note that in [38], for the randomized block sparse Kaczmarz method of type (1.7), sublinear convergence rates have been obtained by identifying the iteration as a randomized block coordinate gradient descent method applied to the objective function g of the unconstrained dual of f. However, the rates given in [38] are in terms of the dual objective function g only, and not of the primal iterates, although, as mentioned there in the conclusions, the experimental results indicate that such rates also hold for the primal iterates.
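The correspondence described above can be checked numerically: one coordinate descent step on the dual (3.2), mapped back through \(x = S_{\lambda}(\mathbf{A}^{T}y)\), coincides with the sparse Kaczmarz update (1.4) with \(w_{k,i} = 1\). The following sketch is our own construction with arbitrary random data.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, lam = 15, 40, 0.3
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)
soft = lambda z: np.maximum(np.abs(z) - lam, 0.0) * np.sign(z)

y = rng.standard_normal(m)               # current dual iterate y_k
x = soft(A.T @ y)                        # corresponding primal iterate x_k = S_lam(A^T y_k)
i = rng.integers(m)
ai_sq = A[i] @ A[i]

# one coordinate descent step on g(y) = 0.5*||S_lam(A^T y)||^2 - <b, y>
grad_i = A[i] @ x - b[i]                 # nabla_i g(y_k)
y_new = y.copy()
y_new[i] -= grad_i / ai_sq
x_from_dual = soft(A.T @ y_new)

# sparse Kaczmarz update (1.4) with w = 1, starting from x_k^* = A^T y_k
x_star_new = A.T @ y - (A[i] @ x - b[i]) / ai_sq * A[i]
x_primal = soft(x_star_new)

print(np.allclose(x_from_dual, x_primal))   # True
```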

3.2 Stochastic mirror descent with stochastic Polyak stepsize

The stochastic mirror descent (SMD) method and its variants [1, 21, 32] form one of the most widely used families of algorithms in stochastic optimization for non-smooth, Lipschitz-continuous convex and non-convex functions. Starting with the original work of [34], SMD has been studied in the context of convex programming [33], saddle-point problems [25], and monotone variational inequalities [26]. We now draw a connection between Algorithm 1 and the stochastic mirror descent method with stochastic Polyak stepsizes. We consider a set of sketching matrices \(\mathbf {S}_{i}\in \mathbb {R}^{m\times s}\) (i ∈ [m]), define \(\mathbf {Z}_{i} \overset {\text {def}}{=} \mathbf {S}_{i}(\mathbf {S}_{i}^{T} \mathbf {A} \mathbf {A}^{T} \mathbf {S}_{i})^{\dagger }\mathbf {S}_{i}^{T}\), and consider the stochastic convex quadratic \(h_{\mathbf {S}_{i}}\)

$$ h_{\mathbf{S}_{i}}(x) \overset{\text{def}}{=} \frac{1}{2}\|\mathbf{A}x-b\|_{\mathbf{Z}_{i}}^{2} = \frac{1}{2} (\mathbf{A}x-b)^{T} \mathbf{Z_{i}} (\mathbf{A}x-b), $$
(3.4)

(recall that \(\mathbf {A} \in \mathbb {R}^{m \times n}\), \(b \in \mathbb {R}^{m}\)).

The general sketched mirror descent update with learning rate \(t_{k}\) and a mirror map f works as follows: For a given iterate \(x_{k}\), draw a sketching matrix \(\mathbf {S}_{i_{k}}\) (actually, one draws an index \(i_{k}\)) at random and update

$$ x_{k+1} = \underset{x \in \mathbb{R}^{n}}{\arg \min} \Bigg \{\langle \nabla h_{\mathbf{S}_{i_{k}}}(x_{k}), x - x_{k} \rangle + \frac{1}{t_{k}} D^{x_{k}^{*}}_{f}(x_{k}, x) \Bigg\},\quad x_{k}^{*} \in \partial f(x_{k}), $$
(3.5)

which yields the following update:

$$ \begin{aligned} x_{k+1}^{*} &= x_{k}^{*} - t_{k} \nabla h_{\mathbf{S}_{i_{k}}}(x_{k}) ,\\ x_{k+1} &= \nabla f^{*}(x_{k+1}^{*}). \end{aligned} $$
(3.6)

where \(f^{*}\) denotes the Fenchel conjugate of f.

One can show that

$$ \begin{array}{@{}rcl@{}} D^{x_{k+1}^{*}}_{f}(x_{k+1}, \hat x) \leq D^{x_{k}^{*}}_{f}(x_{k}, \hat x) - t_{k}\langle \nabla h_{\mathbf{S}_{i_{k}}}(x_{k}), x_{k} - \hat x \rangle + \frac{{t_{k}^{2}}}{2}\|\nabla h_{\mathbf{S}_{i_{k}}}(x_{k})\|^{2} \end{array} $$
(3.7)

(actually, this also follows from Lemma 4.4 below with \({\varPhi }(x) = \langle \nabla h_{\mathbf {S}_{i_{k}}}(x_{k}), x - x_{k} \rangle \), and a strongly convex function f ). If we select \(t_{k}\) such that the RHS of inequality (3.7) is minimized, we obtain

$$ t_{k} = \frac{\langle \nabla h_{\mathbf{S}_{i_{k}}}(x_{k}), x_{k} - \hat x \rangle}{\|\nabla h_{\mathbf{S}_{i_{k}}}(x_{k})\|^{2}}. $$
(3.8)

Since \(\nabla h_{\mathbf {S}}(x) = \mathbf {A}^{T} \mathbf {Z}(\mathbf {A}x-b)\), we get (cf. [42])

$$ h_{\mathbf{S}_{i_{k}}}(x) - h_{\mathbf{S}_{i_{k}}}(\hat x) \overset{h_{\mathbf{S}_{i_{k}}}(\hat x) = 0}{=} h_{\mathbf{S}_{i_{k}}}(x) = \frac{1}{2} \|\nabla h_{\mathbf{S}_{i_{k}}}(x)\|^{2} = \tfrac12\langle \nabla h_{\mathbf{S}_{i_{k}}}(x_{k}), x - \hat x \rangle $$
(3.9)

so that the optimal step size is in fact simply

$$ t_{k} = \frac{2\big[h_{\mathbf{S}_{i_{k}}}(x_{k}) - h_{\mathbf{S}_{i_{k}}}(\hat x)\big]}{\|\nabla h_{\mathbf{S}_{i_{k}}}(x_{k})\|^{2}}. $$
(3.10)

This quotient is known as the stochastic mirror Polyak stepsize [9] (note that in this particular case, we always get \(t_{k} = 1\)). The randomized sparse Kaczmarz update (1.4) is equivalent to one step of the stochastic mirror descent (3.5) with the mirror Polyak stepsize \(t_{k}\) given in (3.8), whereas the mirror Polyak stepsize in its general form [9, 22] is given by \(t_{k} = \frac {h_{\mathbf {S}_{i_{k}}}(x_{k}) - h_{\mathbf {S}_{i_{k}}}(\hat x)}{c\|\nabla h_{\mathbf {S}}(x_{k})\|^{2}_{2}}\). The parameter \(0 < c \in \mathbb {R} \) in the step size is an important quantity which can be set theoretically based on the properties of the function under study. In [22], it is suggested that, for optimal convergence, one should select c = 1/2 for strongly convex functions and c = 0.2 for non-convex functions. Moreover, the general RSKA method (see Algorithm 1) falls into the general sketch-and-project framework in the context of mirror descent with \(\mathbf {Z} \overset {\text {def}}{=} \frac {1}{\eta } {\sum }_{j\in \tau } w_{j} \mathbf {S}_{j}(\mathbf {S}_{j}^{T} \mathbf {A} \mathbf {A}^{T} \mathbf {S}_{j})^{\dagger }\mathbf {S}_{j}^{T}\), \(\mathbf{S}_{j} = e_{j}\), \(t_{k} = 1\), and \(f(x) = \lambda \cdot \|x\|_{1} + \tfrac 12 \cdot \|x\|_{2}^{2}\). In fact, the sketch-and-project form of update (1.8) (resp. (3.6)) is given by:

$$ \begin{aligned} x_{k+1}^{*} &= x_{k}^{*} - \mathbf{A}^{T} \mathbf{Z} (\mathbf{A}x_{k} - b),\\ x_{k+1} &= \nabla f^{*}(x_{k+1}^{*}), \end{aligned} $$
(3.11)

with some random \(\tau \subseteq [m]\) of cardinality η.
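To make the sketched form (3.11) concrete, the snippet below (our own illustration with random data) builds Z for \(\mathbf{S}_{j} = e_{j}\) and checks that the resulting update agrees with the averaged update (1.8).

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, eta, lam = 12, 30, 4, 0.3
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)
w = rng.uniform(0.5, 1.5, size=m)                     # arbitrary positive weights

x_dual = rng.standard_normal(n)                        # x_k^*
x = np.maximum(np.abs(x_dual) - lam, 0.0) * np.sign(x_dual)   # x_k = S_lam(x_k^*)
tau = rng.choice(m, size=eta)                          # row indices, with replacement

# Z = (1/eta) * sum_j w_j S_j (S_j^T A A^T S_j)^+ S_j^T  with  S_j = e_j
Z = np.zeros((m, m))
for j in tau:
    Z[j, j] += w[j] / (eta * (A[j] @ A[j]))

lhs = x_dual - A.T @ (Z @ (A @ x - b))                 # sketch-and-project step (3.11)
rhs = x_dual.copy()
for j in tau:                                          # averaged update (1.8)
    rhs -= w[j] * (A[j] @ x - b[j]) / (eta * (A[j] @ A[j])) * A[j]

print(np.allclose(lhs, rhs))   # True
```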

4 Convergence analysis

In this section, we show expected linear convergence for the randomized sparse Kaczmarz with averaging method. Before that, we give the update satisfied by the iterates \(x_{k}^{*}\) in Algorithm 1. From line 5 of Algorithm 1, it holds that:

$$ \begin{array}{@{}rcl@{}} x_{k+1}^{*} &=& x_{k}^{*} - \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i}\frac{\langle{a_{i}},{x_{k}}\rangle-b_{i}}{\|{a_{i}}\|^{2}_{2}} \cdot a_{i} \\ &=& x^{*}_{k} - \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} ({e_{i}^{T}}\mathbf{A})^{T} \cdot \frac{{e_{i}^{T}}(\mathbf{A} x_{k} - b)}{\|a_{i}\|^{2}_{2}}\\ &=& x^{*}_{k} - \mathbf{A}^{T}\frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \cdot \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}(\mathbf{A} x_{k} - b). \end{array} $$

To simplify notation, we define the following matrices.

Definition 4.1

Let \(\mathbf {Diag}(d_{1}, d_{2}, \dots , d_{m})\) denote the diagonal matrix with \(d_{1}, d_{2}, \dots , d_{m}\) on the diagonal. We define the following matrices:

  • Weighted sampling matrix:

    $$ \begin{array}{@{}rcl@{}} \mathbf{M}_{k} = \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}} \end{array} $$
  • Normalization matrix:

    $$ \begin{array}{@{}rcl@{}} \mathbf{D} = \mathbf{Diag} (\|a_{1}\|, \|a_{2}\|, \dots, \|a_{m}\|) \end{array} $$

    so that the matrix D− 1A has rows with unit norm.

  • Probability matrix:

    $$ \begin{array}{@{}rcl@{}} \mathbf{P}=\mathbf{Diag}(p_{1},p_{2},\dots,p_{m}) \end{array} $$

    where \(p_{j} = \mathbb {P} (i=j)\).

  • Weight matrix:

    $$ \begin{array}{@{}rcl@{}} \mathbf{W} = \mathbf{Diag}(w_{1},w_{2},\dots,w_{m}) \end{array} $$

    where wi represents the weight corresponding to the i th row.

The following lemma and proof are taken from [28, Lemma 1] and we include the full proof for completeness. The lemma gives the first and second moments of the random matrices \(\mathbf{M}_{k}\) and \(\mathbf{A}^{T}\mathbf{M}_{k}\), respectively, and will be used in our convergence analysis.

Lemma 4.2

Let \(\mathbf{M}_{k}\), P, W, and D be defined as in Definition 4.1. Then

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ \mathbf{M}_{k}\right] &=& \mathbf{P}\mathbf{W}\mathbf{D}^{-2}\quad\text{and}\\ \mathbb{E}_{k} \left[ (\mathbf{A}^{T}\mathbf{M}_{k})^{T} \cdot (\mathbf{A}^{T}\mathbf{M}_{k})\right] & =& \frac{1}{\eta} \mathbf{PW^{2}D^{-2}} + (1-\frac{1}{\eta}) \mathbf{PWD^{-2}}\mathbf{AA}^{T}\mathbf{PWD^{-2}}. \end{array} $$

Proof

Let \(\mathbb {E}_{i}[\cdot ]\) denote \(\mathbb {E}_{i \sim p}[\cdot ]\). From the definition of the weighted sampling matrix \(\mathbf{M}_{k}\) as the weighted average of the i.i.d. sampling matrices \(\frac {e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}},\) we see that

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ \mathbf{M}_{k}\right] = \mathbb{E}_{k} \left[ \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}\right] = \mathbb{E}_{i} \left[ w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}\right] = \sum\limits_{i=1}^{m} p_{i} w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}} = \mathbf{P}\mathbf{W}\mathbf{D}^{-2}. \end{array} $$

In the same way, we have

$$ \begin{array}{@{}rcl@{}} && \mathbb{E}_{k} \left[ (\mathbf{A}^{T}\mathbf{M}_{k})^{T} \cdot (\mathbf{A}^{T}\mathbf{M}_{k})\right] \\ &=& \mathbb{E}_{k} \left[ (\frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}) \mathbf{A} \cdot \mathbf{A}^{T} \cdot (\frac{1}{\eta} \sum\limits_{j \in \tau_{k}} w_{j} \frac{e_{j} {e_{j}^{T}}}{\|a_{j}\|^{2}_{2}})\right] \\ &=& \mathbb{E}_{k} \left[ (\frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \frac{e_{i} {a_{i}^{T}}}{\|a_{i}\|^{2}_{2}}) \cdot (\frac{1}{\eta} \sum\limits_{j \in \tau_{k}} w_{j} \frac{a_{j} {e_{j}^{T}}}{\|a_{j}\|^{2}_{2}})\right] \\ &=& \frac{1}{\eta} \mathbb{E}_{i} \left[ (w_{i} \frac{e_{i} {a_{i}^{T}}}{\|a_{i}\|^{2}_{2}}) \cdot (w_{i} \frac{a_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}})\right] + (1 - \frac{1}{\eta})\mathbb{E}_{i} \left[ w_{i} \frac{e_{i} {a_{i}^{T}}}{\|a_{i}\|^{2}_{2}}\right] \mathbb{E}_{i} \left[ w_{i} \frac{a_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}} \right] \\ &=& \frac{1}{\eta} \mathbb{E}_{i} \left[ {w_{i}^{2}} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}} \right] + (1 - \frac{1}{\eta})\mathbb{E}_{i} \left[ w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}\right] \mathbf{A} \mathbf{A}^{T} \mathbb{E}_{i} \left[ w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}} \right] \\ &=& \frac{1}{\eta}\mathbf{P}\mathbf{W}^{2}\mathbf{D}^{-2} + (1 - \frac{1}{\eta})\mathbf{P}\mathbf{W}\mathbf{D}^{-2} \mathbf{A} \mathbf{A}^{T} \mathbf{P}\mathbf{W}\mathbf{D}^{-2}, \end{array} $$

by separating the cases where i = j from those where ij and utilizing the independence of the indices sampled in τk. □
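Lemma 4.2 can also be checked empirically. The toy setup below (dimensions, weights, and sample size are our own choices) estimates \(\mathbb{E}_{k}[\mathbf{M}_{k}]\) by Monte Carlo sampling and compares it with \(\mathbf{P}\mathbf{W}\mathbf{D}^{-2}\).

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, eta, trials = 8, 12, 3, 100_000
A = rng.standard_normal((m, n))
row_sq = np.einsum('ij,ij->i', A, A)
p = row_sq / row_sq.sum()                      # P = Diag(p_1, ..., p_m)
w = rng.uniform(0.5, 1.5, size=m)              # W = Diag(w_1, ..., w_m)

M_avg = np.zeros((m, m))
for _ in range(trials):
    tau = rng.choice(m, size=eta, p=p)         # eta indices, with replacement
    M_k = np.zeros((m, m))
    for i in tau:
        M_k[i, i] += w[i] / (eta * row_sq[i])
    M_avg += M_k / trials

exact = np.diag(p * w / row_sq)                # P W D^{-2}
print(np.max(np.abs(M_avg - exact)))           # small, decays like 1/sqrt(trials)
```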

We now present convergence results for the proposed method. We start our analysis with two auxiliary lemmas: the first controls the progress made between two consecutive iterates, and the second bounds the Bregman distance between the iterates and the solution in terms of the residual.

Lemma 4.4

Let \(f, {\varPhi }: \mathbb {R}^{n} \rightarrow \mathbb {R} \cup \{+\infty \}\) be convex, where \(\text {dom}(f)=\mathbb {R}^{n}\) and \(\text{dom}({\varPhi}) \neq \emptyset\). Let \( {\mathscr{X}} \subseteq \text {dom}({\varPhi })\) be nonempty and convex, \(x_{k} \in \mathbb {R}^{n}\), \(x_{k}^{*}\in \partial f(x_{k})\). Assume that

$$ x_{k+1} \in \underset{x \in \mathscr{X}}{\arg \min} \Bigg \{{\varPhi}(x) + D^{x_{k}^{*}}_{f}(x_{k}, x)\Bigg\}. $$

Then there exists a subgradient \(x_{k+1}^{*}\in \partial f(x_{k+1})\) such that

$$ \begin{array}{@{}rcl@{}} & {\varPhi}(y) + D^{x_{k}^{*}}_{f}(x_{k}, y) \geq {\varPhi}(x_{k+1}) + D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) + D^{x_{k+1}^{*}}_{f}(x_{k+1}, y) \end{array} $$

for any \(y \in {\mathscr{X}}\).

Proof

Denote \(J(x) = {\varPhi }(x) + D^{x_{k}^{*}}_{f}(x_{k}, x)\). Since J and \(\mathscr X\) are convex and \(x_{k+1}\) minimizes J over \({\mathscr{X}}\), there exists a subgradient \(d \in \partial J(x_{k+1})\) such that

$$\langle d, y - x_{k+1} \rangle \geq 0 \quad \forall y \in \mathscr{X}.$$

Since f is finite everywhere, we have \(\text {dom}(D_{f}(\cdot ,u))=\mathbb {R}^{n}\) for all \(u \in \mathbb {R}^{n}\). Since dom(Φ) is nonempty and convex, it has nonempty relative interior. So the subgradient sum rule applies and we obtain that

$$ \partial J(x_{k+1}) = \partial {\varPhi}(x_{k+1}) + (\partial f(x_{k+1}) - x_{k}^{*}). $$

Hence, there exist subgradients \(g \in \partial {\varPhi }(x_{k+1})\) and \(x_{k+1}^{*}\in \partial f(x_{k+1})\) such that

$$\langle g + (x_{k+1}^{*} - x_{k}^{*}), \ y - x_{k+1} \rangle \geq 0 \quad \forall y \in \mathscr{X}.$$

Therefore, using the property of the subgradient and (2.4), we have for all \(y \in {\mathscr{X}}\)

$$ \begin{array}{@{}rcl@{}} {\varPhi}(y) &&\geq \ {\varPhi}(x_{k+1}) + \langle g, y-x_{k+1} \rangle \\ &&\geq \ {\varPhi}(x_{k+1}) + \langle x_{k}^{*} - x_{k+1}^{*}, \ y-x_{k+1} \rangle \\ &&\overset{\tiny (2.4)}= {\varPhi}(x_{k+1}) + D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) - D^{x_{k}^{*}}_{f}(x_{k}, y) + D^{x_{k+1}^{*}}_{f}(x_{k+1}, y). \end{array} $$

Rearranging the last inequality yields the claim. □

The following lemma provides an error bound for the Bregman distance.

Lemma 4.6 (cf. [43])

Let \(\tilde {\sigma }_{\min \limits }(\mathbf {A})\) and \(|\hat x|_{\min \limits }\) be defined as in Section 1.3, and let \(\hat x\) denote the solution of (1.5). Then for any \(x_{k} \in \mathbb {R}^{n}\) with \(\partial f(x_{k}) \cap \mathbf {Range}(\mathbf {A}^{T})\neq \emptyset \) and for all \(x_{k}^{*} = \mathbf {A}^{T} y \in \partial f(x_{k}) \cap \mathbf {Range}(\mathbf {A}^{T})\), we have

$$ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \leq \gamma \cdot \|\mathbf{A}x_{k}-b\|_{2}^{2} $$
(4.1)

where

$$ \gamma = \frac{1}{\tilde{\sigma}^{2}_{\min}(\mathbf{A})}\frac{|\hat x|_{\min} + 2\lambda}{|\hat x|_{\min}} $$
(4.2)

To effectively use Lemma 4.6, we need the following assumption, which characterizes the coupling between the weight matrix and the probability matrix.

Assumption 1

The weight matrix W and the probability matrix P are linked by the following coupling

$$ \mathbf{P} \mathbf{W} \mathbf{D}^{-2} = \frac{\alpha}{\|\mathbf{A}\|_{F}^{2}} \mathbf{I} $$

for some scalar relaxation parameter α > 0.

Assumption (1) has been used in [28] for the inconsistent case (i.e., \(b - \mathbf {A} \hat x \neq 0\)) and for λ = 0. We include a motivation for completeness. As the batch size η goes to \(\infty \) (recall that we sample with replacement), we have

$$ \lim_{\eta \to \infty}\mathbf{M}_{k} = \mathbb{E}_{i \sim p} \bigg[w_{i} \frac{e_{i} {e_{i}^{T}}}{\|a_{i}\|^{2}_{2}}\bigg] = \mathbf{P} \mathbf{W} \mathbf{D}^{-2} $$

Therefore, the averaged RSK update of (1.8) approaches the deterministic update:

$$ \begin{array}{@{}rcl@{}} x_{k+1} &=& (\mathbf{I} - \mathbf{A}^{T}\mathbf{P} \mathbf{W} \mathbf{D}^{-2}\mathbf{A})x_{k} + \mathbf{A}^{T}\mathbf{P} \mathbf{W} \mathbf{D}^{-2}b \\ x_{k+1} - \hat x &=& (\mathbf{I} - \mathbf{A}^{T}\mathbf{P} \mathbf{W} \mathbf{D}^{-2}\mathbf{A})(x_{k} - \hat x) + \mathbf{A}^{T}\mathbf{P} \mathbf{W} \mathbf{D}^{-2}(b - \mathbf{A} \hat x) \end{array} $$

In order to have that \((x_{k+1} - \hat x)\) goes to zero in the limit, we should require that this limiting error update has the zero vector as a fixed point, i.e.,

$$ 0 = \mathbf{A}^{T}\mathbf{P} \mathbf{W} \mathbf{D}^{-2}(b - \mathbf{A} \hat x) $$

This is guaranteed if \(\mathbf {P}\mathbf {W}\mathbf {D}^{-2} = \beta \mathbf {I}\). Moreover, for λ≠ 0, we do not have a bound of the form \(D_{f}^{x_{k}^{*}}(x_{k}, \hat x) \leq \gamma \cdot \|\mathbf{A}x_{k} - b\|^{2}_{\mathbf {P} \mathbf {W} \mathbf {D}^{-2}}\), which is the reason why we need to assume \(\mathbf {P}\mathbf {W}\mathbf {D}^{-2} = \beta \mathbf {I}\) in that case.

Since in this case \(\mathbf {P}\mathbf {W}^{2}\mathbf {D}^{-2} = \mathbf {P}\mathbf {W}\mathbf {D}^{-2}\mathbf {W}\), Lemma 4.2 becomes:

Lemma 4.7

Let \(\mathbf{M}_{k}\), P, W, and D be defined as in Definition 4.1 and let Assumption 1 hold. Then

$$ \mathbb{E}_{k} \left[ \mathbf{M}_{k}\right] = \frac{\alpha \mathbf{I}}{\|\mathbf{A}\|^{2}_{F}} $$

and

$$ \mathbb{E}_{k} \left[ \mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T}\mathbf{M}_{k} \right] = \frac{1}{\eta} \frac{\alpha \mathbf{W}}{\|\mathbf{A}\|^{2}_{F}} + \alpha^{2} (1-\frac{1}{\eta}) \frac{\mathbf{A}\mathbf{A}^{T}}{\|\mathbf{A}\|^{4}_{F}} $$

Lemma 4.8

Under Assumption 1, for the iterates \(x_{k}\) of Algorithm 1, it holds that:

$$ \begin{array}{@{}rcl@{}} \mathbb E_{k} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] \leq D_{f}^{x_{k}^{*}}(x_{k},\hat x) - \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} (1 - \sigma_{\max}(\mathbf{T})) \|\mathbf{A}x_{k} - b\|^{2}_{2} . \end{array} $$

with

$$ \mathbf{T} = \frac{1}{2\eta} \mathbf{W} + \frac{\alpha}{2} (1-\frac{1}{\eta}) \frac{\mathbf{A} \mathbf{A}^{T}}{\|\mathbf{A}\|^{2}_{F}} $$

Proof

Using Lemma 4.4 with \(f(x) = \lambda \|x\|_{1} + \frac {1}{2}\|x\|_{2}^{2}\), \({\varPhi }(x) = \langle \mathbf {A}^{T} \mathbf {M}_{k}(\mathbf {A} x_{k} - b), x - x_{k} \rangle \), and \(y= \hat x\), it holds that:

$$ \begin{array}{@{}rcl@{}} &&D^{x_{k+1}^{*}}_{f}(x_{k+1}, \hat x) \\ && \quad\leq D^{x_{k}^{*}}_{f}(x_{k}, \hat x) + {\varPhi}(\hat x) - {\varPhi}(x_{k+1}) - D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) \\ &&\quad= D^{x_{k}^{*}}_{f}(x_{k}, \hat x) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - \hat x \rangle\\ &&\qquad+ \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - x_{k+1} \rangle - D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) \\ &&\quad \leq D^{x_{k}^{*}}_{f}(x_{k}, \hat x) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - \hat x \rangle\\ &&\qquad+ \| \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b)\| \cdot \|x_{k} - x_{k+1}\| - \frac{1}{2}\|x_{k} - x_{k+1}\|^{2} \\ &&\quad\leq D^{x_{k}^{*}}_{f}(x_{k}, \hat x) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - \hat x \rangle + \frac{1}{2}\| \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b)\|^{2} \end{array} $$

We have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - \hat x \rangle \right] &=& \mathbb{E}_{k} \left[ \langle \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b) , \mathbf{A} x_{k} - b \rangle \right] \\ &\overset{\text{\tiny Lemma 4.2}}{=}& \langle \mathbf{A} x_{k} - b , \mathbf{A} x_{k} - b \rangle_{\mathbf{P W D}^{-2}}\\ &\overset{\text{\tiny Lemma 4.7}}{=}& \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A} x_{k} - b\|^{2}_{2} \end{array} $$

and with \(\mathbf {T} = \frac {1}{2\eta } \mathbf {W} + \frac {\alpha }{2} (1-\frac {1}{\eta }) \frac {\mathbf {A} \mathbf {A}^{T}}{\|\mathbf {A}\|^{2}_{F}}\), we get

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ \| \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b) \|_{2}^{2} \right] &=& \langle \mathbf{A} x_{k} - b , \mathbb{E}_{k} \left[ \mathbf{M}_{k}^{T} \mathbf{A} \mathbf{A}^{T} \mathbf{M}_{k} \right]\cdot (\mathbf{A} x_{k} - b) \rangle \\ &\overset{\text{\tiny Lemma 4.7}}{=}& \langle \mathbf{A} x_{k} - b , \Bigg(\frac{1}{\eta} \frac{\alpha \mathbf{W}}{\|\mathbf{A}\|^{2}_{F}} + \alpha^{2} (1-\frac{1}{\eta}) \frac{\mathbf{A} \mathbf{A}^{T}}{\|\mathbf{A}\|^{4}_{F}}\Bigg)\\ &&\cdot (\mathbf{A} x_{k} - b) \rangle\\ &=& \frac{2\alpha}{\|\mathbf{A}\|^{2}_{F}}\langle \mathbf{A} x_{k} - b , \mathbf{A} x_{k} - b \rangle_{\mathbf{T}} \\ &\leq& \frac{2\alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|\mathbf{A} x_{k} - b\|^{2}_{2}. \end{array} $$

Thus, combining everything together gives us

$$ \begin{array}{@{}rcl@{}} && \mathbb{E}_{k} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1}, \hat x) \right] \\ &\leq& D_{f}^{x_{k}^{*}}(x_{k}, \hat x) - \mathbb{E}_{k} \left[ \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b), x_{k} - \hat x \rangle \right] + \frac{1}{2} \mathbb{E}_{k} \left[\| \mathbf{A}^{T} \mathbf{M}_{k} (\mathbf{A} x_{k} - b) \|_{2}^{2} \right] \\ &\leq& D_{f}^{x_{k}^{*}}(x_{k}, \hat x) - \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A} x_{k} - b\|^{2}_{2} + \frac{1}{2} \frac{2 \alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|\mathbf{A} x_{k} - b\|^{2}_{2} \\ &\leq& D_{f}^{x_{k}^{*}}(x_{k}, \hat x) - \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} (1 - \sigma_{\max}(\mathbf{T})) \|\mathbf{A} x_{k} - b\|^{2}_{2}. \end{array} $$

The following lemma gives upper and lower bounds for the largest singular value of T, which will be used in the convergence analysis of the RSKA iterates.

Lemma 4.10

Let

$$ \mathbf{T} = \frac{1}{2\eta} \mathbf{W} + \frac{\alpha}{2} (1-\frac{1}{\eta}) \frac{\mathbf{A} \mathbf{A}^{T}}{\|\mathbf{A}\|^{2}_{F}} $$

Then the largest singular value of T satisfies:

$$ \begin{array}{@{}rcl@{}} \frac{1}{2\eta} \bigg(\sigma_{\max}(\mathbf{W}) - \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (\eta - 1) \sigma_{\max}^{2}(\mathbf{A})\bigg) & \!\leq\!& \sigma_{\max}(\mathbf{T})\\ & \!\leq\!& \frac{1}{2\eta} \bigg(\sigma_{\max}(\mathbf{W}) + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (\eta-1) \sigma_{\max}^{2}(\mathbf{A})\bigg) \end{array} $$

In addition, if W = αI, then T is positive semi-definite and

$$ \sigma_{\max}(\mathbf{T}) = \frac{1}{2\eta} \bigg(\alpha + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (\eta-1) \sigma_{\max}^{2}(\mathbf{A})\bigg) $$

Proof

The first part of the proof follows easily from Theorem 2.4. If W = αI, then whenever μ is an eigenvalue of \(\mathbf{A}\mathbf{A}^{T}\), \(\tfrac {\alpha }{2\eta } + \tfrac {\alpha }{2}(1-\tfrac 1\eta )\tfrac {\mu }{\|\mathbf {A}\|_{F}^{2}}\) is an eigenvalue of T. From this, we deduce the last equality as well as the fact that T is positive semi-definite. □
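The closed form for \(\sigma_{\max}(\mathbf{T})\) in the uniform-weight case is easy to confirm numerically; the matrix, η, and α below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, eta, alpha = 10, 25, 4, 1.7
A = rng.standard_normal((m, n))
frob_sq = np.linalg.norm(A, 'fro')**2
sigma_max_A = np.linalg.norm(A, 2)

# T = (1/(2*eta)) W + (alpha/2)(1 - 1/eta) A A^T / ||A||_F^2  with  W = alpha*I
T = alpha / (2 * eta) * np.eye(m) + alpha / 2 * (1 - 1 / eta) * (A @ A.T) / frob_sq
sigma_max_T = np.linalg.norm(T, 2)

formula = (alpha + alpha * (eta - 1) * sigma_max_A**2 / frob_sq) / (2 * eta)
print(np.isclose(sigma_max_T, formula))   # True
```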

4.1 General convergence result

In this part, we present general convergence results for the iterates of the RSKA method.

Theorem 4.12 (Noiseless case)

Consider η > 1 and γ as defined in (4.2) (Lemma 4.6), let Assumption 1 hold, and assume that

$$ \begin{array}{@{}rcl@{}} 0<\alpha < 2\frac{(\eta - \tfrac12 \sigma_{\max}(\mathbf{W}))\|\mathbf{A}\|_{F}^{2}}{\sigma_{\max}(\mathbf{A})^{2}(\eta-1)}. \end{array} $$
(4.3)

Then the random iterates xk produced by Algorithm 1 converge in expectation with a linear rate to the unique solution \(\hat x\) of \(\operatorname *{\min \limits }_{\mathbf {A}x=b}\lambda \|x\|_{1} + \frac {1}{2}\|x\|_{2}^{2}\), more precisely, with

$$ \begin{array}{@{}rcl@{}} q = 1 - \frac{1}{\gamma } \cdot \frac{ L(\alpha)}{\|\mathbf{A}\|^{2}_{F}} \in (0,1), \end{array} $$
(4.4)

and

$$ \begin{array}{@{}rcl@{}} L(\alpha) = \alpha - \frac{\alpha}{2\eta} \bigg(\frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (\eta-1) \sigma^{2}_{\max}(\mathbf{A}) + \sigma_{\max}(\mathbf{W})\bigg), \end{array} $$

it holds that

$$ \begin{array}{@{}rcl@{}} \mathbb{E} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] &\leq& q \cdot \mathbb{E} \left[ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \right] \\ \mathbb{E} \bigg[\|x_{k}-\hat x\|_{2}^{2} \bigg] &\le& 2 \cdot q^{k} \cdot f(\hat{x}). \end{array} $$

Proof

Combining Lemma 4.8 with (4.1) gives

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] &\leq& D_{f}^{x_{k}^{*}}(x_{k},\hat x) - \frac{\alpha}{\|\mathbf{A}\|_{F}^{2}\gamma}(1-\sigma_{\max}(\mathbf{T}))D_{f}^{x_{k}^{*}}(x_{k},\hat x)\\ & \leq& \left( 1-\frac{\alpha}{\|\mathbf{A}\|_{F}^{2}\gamma}(1-\sigma_{\max}(\mathbf{T}))\right)D_{f}^{x_{k}^{*}}(x_{k},\hat x). \end{array} $$

Using the rule of total expectation, we get

$$ \begin{array}{@{}rcl@{}}\mathbb{E} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] &\leq \left( 1- \frac{1}{\gamma} \cdot \frac{\alpha \cdot (1-\sigma_{\max}(\mathbf{T}))}{\|\mathbf{A}\|_{F}^{2}}\right)\mathbb{E} \left[ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \right]. \end{array} $$

To get a rate q ∈ (0,1), we need that \((1-\sigma _{\max \limits }(\mathbf {T}))>0\), i.e., \(\sigma _{\max \limits }(\mathbf {T})<1\), which holds true from (4.3). From Lemma 4.10, it follows that \(L(\alpha ) \leq \alpha (1 - \sigma _{\max \limits }(\mathbf {T}))\) and thus we get \(\mathbb {E} \left [ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right ] \leq q \cdot \mathbb {E} \left [ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \right ]\), with \(q = 1 - \frac {1}{\gamma } \cdot \frac { L(\alpha )}{\|\mathbf {A}\|^{2}_{F}}\). The inequality in terms of \(\|{x_{k} - \hat x}\|^{2}_{2}\) is obtained by using the first inequality of (2.3) since f is 1-strongly convex. □

From (4.4), we see that we want to choose α such that L(α) is as large as possible:

Corollary 4.14

Let Assumption 1 hold true. Then the relaxation parameter \(\alpha^{*}\) and the constant \(L(\alpha^{*})\) which yield the fastest convergence rate guarantee in Theorem 4.12 are as follows:

  1. (a)

    General Weights: If \(\eta > \max \limits (1,\sigma _{\max \limits }(\mathbf {W})/2)\) then

    $$ \begin{array}{@{}rcl@{}} \alpha^{*} = \frac{\|\mathbf{A}\|^{2}_{F}}{\sigma^{2}_{\max}(\mathbf{A})(\eta-1)}(\eta - \frac{\sigma_{\max}(\mathbf{W})}{2}), \end{array} $$

    and

    $$ \begin{array}{@{}rcl@{}} L(\alpha^{*}) = \frac{\|\mathbf{A}\|^{2}_{F}}{8\sigma^{2}_{max}(\mathbf{A})} \cdot \frac{(2\eta - \sigma_{max}(\mathbf{W}))^{2}}{\eta(\eta-1)}. \end{array} $$
  2. (b)

    Uniform Weights, i.e., W = αI:

    $$ \alpha^{*} = \frac{\eta}{1 + (\eta-1)\frac{\sigma^{2}_{\max}(\mathbf{A})}{\|\mathbf{A}\|^{2}_{F}}} $$

    and

    $$ L(\alpha^{*}) = \frac{\eta}{2 + 2(\eta-1)\frac{\sigma^{2}_{\max}(\mathbf{A})}{\|\mathbf{A}\|^{2}_{F}}}. $$

Proof

In the case (a) of general weights, in order to get the tightest lower bound, we maximize the concave function L(α) and obtain

$$ \begin{array}{@{}rcl@{}} \alpha = \|\mathbf{A}\|_{F}^{2}(\eta - \sigma_{\max}(\mathbf{W})/2)/((\eta-1)\sigma_{\max}(\mathbf{A})^{2}) \end{array} $$

which fulfills (4.3) and thus gives the best q in Theorem 4.12. Since we need α ≥ 0, we have to assume \(\eta \geq \sigma _{\max \limits }(\mathbf {W})/2\). This gives \(\alpha^{*}\), and plugging this into L(α) gives us \(L(\alpha^{*})\). In the case (b) of uniform weights, we maximize L(α) with W = αI for all η. □
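For the uniform-weight case (b), the optimal relaxation parameter and the corresponding rate constant can be computed directly from the matrix; the helper below is our own and only evaluates the formulas of Corollary 4.14.

```python
import numpy as np

def optimal_relaxation_uniform(A, eta):
    """alpha* and L(alpha*) from Corollary 4.14(b), i.e., for W = alpha*I."""
    ratio = np.linalg.norm(A, 2)**2 / np.linalg.norm(A, 'fro')**2   # sigma_max^2/||A||_F^2
    alpha_star = eta / (1 + (eta - 1) * ratio)
    L_star = eta / (2 + 2 * (eta - 1) * ratio)
    return alpha_star, L_star

A = np.random.default_rng(8).standard_normal((100, 300))
for eta in (1, 5, 20, 100):
    print(eta, optimal_relaxation_uniform(A, eta))   # alpha* grows with eta (overrelaxation)
```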

A few remarks about the interpretation of the above theorem are in order:

Remark 4.16 (Overrelaxation)

  • When a single thread η = 1 is used in the case (b) of uniform weights, we see that our optimal relaxation parameter is \(\alpha^{*} = 1\). In contrast, when multiple threads η > 1 are used, we see

    $$ 1 < \alpha^{*} \leq \eta, $$

    i.e., the method allows for large overrelaxation when the number of threads is high and we will see in Section 5.2 that this does indeed lead to faster convergence.

  • Our relaxation parameter \(\alpha^{*}\) in the case of uniform weights is the same as the relaxation parameter \(\alpha_{RT}\) suggested in [42], although they do not treat the sparse case and only consider uniform weights.

  • Finally, for η = 1 in the case of general weights, we have, due to the coupling in Assumption 1, that the weights fulfill \(w_{i} = \frac {\alpha \|a_{i}\|^{2}}{p_{i}\|\mathbf {A}\|_{F}^{2}}\). We could estimate \(\sigma _{\max \limits }(\mathbf {W}) \leq \frac {\alpha }{\min \limits _{i}p_{i}}\) but this would be a quite crude estimate. By choosing the classical probabilities \(p_{i} = \|a_{i}\|^{2}/\|\mathbf {A}\|_{F}^{2}\), we would get \(\sigma _{\max \limits }(\mathbf {W}) = \alpha \), so that the relaxation parameter needs to fulfill α ∈ (0,2) and α = 1 is the optimal choice.

Remark 4.17 (Relation to standard randomized sparse Kaczmarz)

In the case W = I,η = 1,α = 1, we have \(\mathbf {T} = \frac {1}{2}\mathbf {I}\) and we recover the rate of the standard RSK. It holds

$$ \begin{array}{@{}rcl@{}} \mathbb{E}_{k} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] \leq D_{f}^{x_{k}^{*}}(x_{k},\hat x) - \frac{1 }{2\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A}x_{k} - b\|^{2}_{2} \end{array} $$

which is the estimate obtained in [43], where it is also shown that this leads to a linear convergence rate in expectation,

$$ \mathbb{E} \bigg[\|x_{k}-\hat x\|_{2}^{2} \bigg] \le 2 \cdot \bigg(1 - \frac{1 }{2\gamma\|\mathbf{A}\|^{2}_{F}}\bigg)^{k} \cdot f(\hat{x}). $$
(4.5)

which implies that we reach accuracy \(\mathbb {E} \big [\|x_{k}-\hat x\|_{2}^{2} \big ] \le 2 \cdot \varepsilon \cdot f(\hat {x})\) as soon as \(k \geq 2\gamma \|\mathbf{A}\|^{2}_{F} \log (\frac {1}{\varepsilon })\).

Remark 4.18 (Convergence rate for the linearized Bregman method)

Similarly to Lemma 4.8, we can show that the linearized Bregman algorithm [3, 23, 48]

$$ \begin{array}{@{}rcl@{}} x_{k+1}^{*} &=& x_{k}^{*} - \frac{\mathbf{A}^{T}(\mathbf{A}x_{k} - b)}{\|\mathbf{A}\|^{2}_{2}},\\ x_{k+1} &=& S_{\lambda}(x_{k+1}^{*}) \end{array} $$
(4.6)

has linear convergence rate given by

$$ \big\|x_{k}-\hat x\big\|_{2}^{2} \le 2 \cdot \bigg(1 - \frac{1 }{2\gamma\|\mathbf{A}\|^{2}_{2}}\bigg)^{k} \cdot f(\hat{x}). $$
(4.7)

Although we suspect that this result (4.7) is not new, we could not find it in the literature.

Inspired by [42], the following remarks apply to the uniform weight case.

Remark 4.19 (Mini-batch vs. full-batch)

Let \(H(\eta )= \frac {1}{L(\alpha ^{*})} = \frac {2}{\eta } + 2(1-\frac {1}{\eta })\frac {\sigma ^{2}_{\max \limits }(\mathbf {A})}{\|\mathbf {A}\|^{2}_{F}} \) be the inverse of \(L(\alpha^{*})\). Recall from Theorem 4.12 that L(α) influences the convergence rate: the larger L(α), the faster the convergence.

Since \(\frac {\|\mathbf {A}\|^{2}_{F}}{\sigma ^{2}_{\max \limits }(\mathbf {A})} \geq 1\), H is a nonincreasing function of η and we have H(1) = 2 and \(H(\infty ) \overset {\text {def}}{=} \lim _{\eta \rightarrow \infty } H(\eta ) = \frac {2\sigma ^{2}_{\max \limits }(\mathbf {A})}{\|\mathbf {A}\|^{2}_{F}}\). In the asymptotic regime \(\eta \rightarrow \infty ,\) Algorithm 1 becomes the linearized Bregman algorithm for minimizing (3.1), and \(H(\infty )\) corresponds to the rate of the linearized Bregman method, cf. (4.7). This shows that the averaging method interpolates between the basic method and the linearized Bregman method. The quantity \(\frac {H(1)}{H(\infty )} = \frac {\|\mathbf {A}\|^{2}_{F}}{\sigma ^{2}_{\max \limits }(\mathbf {A})}\) controls the maximum (guaranteed) speedup in iteration complexity achievable by increasing η. Comparing (4.4) with the convergence rate (4.5) of the basic sparse Kaczmarz method, we get an improvement by the factor \(2L(\alpha^{*}) > 1\), which shows that for the RSKA algorithm we can get a speed-up of order approximately \(\frac {\|\mathbf {A}\|^{2}_{F}}{\sigma ^{2}_{\max \limits }(\mathbf {A})}\) compared to the rate of the basic sparse Kaczmarz algorithm (see also the comparison in Table 1).

Table 1 Complexity of different methods

For \(\eta \geq \frac {\|\mathbf {A}\|^{2}_{F}}{\sigma ^{2}_{\max \limits }(\mathbf {A})},\) we get \(H(\eta ) \leq 2 H(\infty )\), which is the performance of the full batch (up to a factor of 2). This means that it does not make sense to use a minibatch size larger than \(\frac {\|\mathbf {A}\|^{2}_{F}}{\sigma ^{2}_{\max \limits }(\mathbf {A})}\). Moreover, notice that \(H(\eta ) \geq \frac {1}{\eta } H(1)\) for all η, which shows that the number of iterations does not decrease linearly in the minibatch size η. From a total complexity perspective (cf. Table 1), this also means that in a computational regime where processing η basic method updates costs η times as much as processing a single update, the decrease in iteration complexity cannot compensate for the increase in cost per iteration, so that the choice η = 1 is optimal. On the other hand, if a parallel processor is available, a larger η will be better.
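The interpolation between the basic method (η = 1) and the full-batch regime can be tabulated quickly; the matrix in this snippet is again only a toy example of ours.

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((200, 500))
ratio = np.linalg.norm(A, 2)**2 / np.linalg.norm(A, 'fro')**2   # sigma_max^2 / ||A||_F^2

H = lambda eta: 2.0 / eta + 2.0 * (1.0 - 1.0 / eta) * ratio     # H(eta) = 1/L(alpha*)
print('maximum guaranteed speedup H(1)/H(inf):', 1.0 / ratio)
for eta in (1, 2, 8, 32, int(1 / ratio), 4 * int(1 / ratio)):
    print(eta, H(eta))   # nonincreasing in eta, saturates near eta ~ ||A||_F^2/sigma_max^2
```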

4.2 Noisy right hand sides

In the noisy case, we consider a consistent linear system \(\mathbf{A}x = b\), but assume that instead of b we only have access to some vector \(b^{\delta }\) with \(\|b-b^{\delta }\|_{2}\leq \delta \). In the context of Kaczmarz methods, such errors in the right-hand side have been considered in [30, 49]. Since the system \(\mathbf{A}x = b^{\delta }\) is most likely inconsistent, RSKA will not solve the optimization problem (1.5), but hopefully the iterates still come close to the solution. Under the bound \(\|b-b^{\delta }\|_{2}\leq \delta \) with \(\mathbf {A}\hat x = b\), we get the following result on the convergence of the method:

Theorem 4.20 (Noisy case)

Assume that instead of exact data \(b \in \mathbf{Range}(\mathbf{A})\) only a noisy right-hand side \(b^{\delta } \in \mathbb {R}^{m}\) with \(\|b^{\delta }-b\|_{2}\leq \delta \) is given. Consider η > 1, ε > 0, and γ as defined in (4.2) (Lemma 4.6), let Assumption 1 hold, and assume that

$$ \begin{array}{@{}rcl@{}} 0<\alpha < 2\frac{((1-\varepsilon)\eta - \tfrac12 \sigma_{\max}(\mathbf{W}))\|A\|_{F}^{2}}{\sigma_{\max}(\mathbf{A})^{2}(\eta-1)}. \end{array} $$
(4.8)

If the iterates \(x_{k}\) of the RSKA method from Algorithm 1 are computed with b replaced by \(b^{\delta }\), then, with the contraction factor a, we have:

$$ \begin{array}{@{}rcl@{}} a = 1 - \frac{\alpha}{\gamma } \cdot \frac{ (1-\varepsilon-\sigma_{\max}(\mathbf{T}))}{\|\mathbf{A}\|^{2}_{F}} \in (0,1), \end{array} $$
(4.9)

and the expected rate of convergence is

$$ \begin{array}{@{}rcl@{}} \mathbb{E} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] &\leq& a \cdot \mathbb{E} \left[ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \right] + c \delta^{2}, \\ \mathbb{E} \bigg[\|x_{k}-\hat x\|_{2}^{2} \bigg] & \leq& 2 \cdot a^{k} \cdot f(\hat{x}) + \frac{c}{1-a} \delta^{2}, \end{array} $$

where \(c = \frac {\alpha }{\|\mathbf{A}\|^{2}_{F}}\bigg (\sigma _{\max \limits }(\mathbf {T}) + \frac {1}{\varepsilon }\sigma _{\max \limits }^{2}(\mathbf {T}^{\prime })\bigg )\) and \(\mathbf {T}^{\prime } = \mathbf {T} - \frac {1}{2}\mathbf {I}\).

Proof

Assume that noisy data \(b^{\delta } \in \mathbb {R}^{m}\) with \(\|b^{\delta }-b\|_{2}\leq \delta \) is given instead of b, where \(b = \mathbf{A} \hat x\). The update in this case is given by:

$$ \begin{array}{@{}rcl@{}} x^{*}_{k+1} &=& x^{*}_{k} - \frac{1}{\eta} \sum\limits_{i \in \tau_{k}} w_{i} \frac{\langle a_{i}, x_{k} \rangle - b_{i}^{\delta}}{\|a_{i}\|^{2}_{2}} \cdot a_{i},\\ x_{k+1} &=& S_{\lambda}(x^{*}_{k+1}) \end{array} $$
(4.10)

where \(\eta = |\tau_{k}|\), which in terms of matrix multiplication is equal to:

$$ \begin{array}{@{}rcl@{}} x^{*}_{k+1} &=& x^{*}_{k} - \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b^{\delta}),\\ x_{k+1} &=& S_{\lambda}(x^{*}_{k+1}) \end{array} $$
(4.11)

We introduce the abbreviation

$$ x_{k}^{\delta} \overset{\text{def}}{=} \hat x + \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (b - b^{\delta}) $$

and use Lemma 4.4 with \(f(x) = \lambda \|x\|_{1} + \frac {1}{2}\|x\|_{2}^{2}\), \({\varPhi }(x) = \langle \mathbf {A}^{T} \mathbf {M}_{k}(\mathbf {A} x_{k} - b^{\delta }), x - x_{k} \rangle \), and \(y=x_{k}^{\delta }\) to get

$$ \begin{array}{@{}rcl@{}} D^{x_{k+1}^{*}}_{f}(x_{k+1}, x_{k}^{\delta}) & \leq& D^{x_{k}^{*}}_{f}(x_{k}, x_{k}^{\delta}) + {\varPhi}(x_{k}^{\delta}) - {\varPhi}(x_{k+1}) - D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) \\ &=& D^{x_{k}^{*}}_{f}(x_{k}, x_{k}^{\delta}) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta}), x_{k} - x_{k}^{\delta} \rangle \\ &&+ \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta}), x_{k} - x_{k+1} \rangle - D^{x_{k}^{*}}_{f}(x_{k}, x_{k+1}) \\ & \leq& D^{x_{k}^{*}}_{f}(x_{k}, x_{k}^{\delta}) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta}), x_{k} - x_{k}^{\delta} \rangle \\ &&+ \|\mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta})\| \cdot \|x_{k} - x_{k+1}\| - \frac{1}{2}\|x_{k} - x_{k+1}\|^{2} \\ & \leq& D^{x_{k}^{*}}_{f}(x_{k}, x_{k}^{\delta}) - \langle \mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta}), x_{k} - x_{k}^{\delta} \rangle\\ &&+ \frac{1}{2}\|\mathbf{A}^{T} \mathbf{M}_{k}(\mathbf{A} x_{k} - b^{\delta})\|^{2} \end{array} $$

so that

$$ \begin{array}{@{}rcl@{}} D_{f}^{x_{k+1}^{*}}(x_{k+1},x_{k}^{\delta})\! &\leq\! D_{f}^{x_{k}^{*}}(x_{k},x_{k}^{\delta}) - \langle \mathbf{A}^{T} \mathbf{M}_{k} (\mathbf{A} x_{k} - b^{\delta}) , x_{k} - x_{k}^{\delta} \rangle + \frac{1}{2} \| \mathbf{A}^{T} \mathbf{M}_{k} (\mathbf{A} x_{k} - b^{\delta}) \|_{2}^{2} \end{array} $$
(4.12)

Unfolding the expression of \( D_{f}^{x_{k+1}^{*}}(x_{k+1},x_{k}^{\delta })\) and \(D_{f}^{x_{k}^{*}}(x_{k},x_{k}^{\delta }),\) we get:

$$ \begin{array}{@{}rcl@{}} D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) &\!\leq\! D_{f}^{x_{k}^{*}}(x_{k},\hat x) - \langle \mathbf{A}^{T} \mathbf{M}_{k} (\mathbf{A} x_{k} - b^{\delta}) , x_{k} - \hat x \rangle + \frac{1}{2} \| \mathbf{A}^{T} \mathbf{M}_{k} (\mathbf{A} x_{k} - b^{\delta}) \|_{2}^{2} \end{array} $$

On the other hand,

$$ \begin{array}{@{}rcl@{}} \mathbb E_{k} \left[ \langle \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b^{\delta}) , \mathbf{A}x_{k} - b \rangle \right] &= \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A}x_{k} - b\|^{2}_{2} + \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \langle b - b^{\delta} , \mathbf{A}x_{k} - b \rangle \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} \mathbb E_{k} \left[ \| \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b^{\delta}) \|_{2}^{2} \right] &=& \langle \mathbf{A} x_{k} - b , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right]\cdot (\mathbf{A} x_{k} - b) \rangle \\ &+& \langle b - b^{\delta} , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right]\cdot (b - b^{\delta}) \rangle \\ &+& 2 \langle \mathbf{A} x_{k} - b , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right]\cdot (b - b^{\delta}) \rangle \end{array} $$

In summary, we have:

$$ \begin{array}{@{}rcl@{}} & -&\mathbb E_{k} \left[ \langle \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b^{\delta}) , x_{k} - \hat x \rangle \right] + \frac{1}{2} \mathbb E_{k} \left[\| \mathbf{A}^{T} \cdot \mathbf{M}_{k} \cdot (\mathbf{A} x_{k} - b^{\delta}) \|_{2}^{2} \right] \\ &=& -\frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A}x_{k} - b\|^{2}_{2} - \frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \langle b - b^{\delta} , \mathbf{A}x_{k} - b \rangle\\ &&+ \frac{1}{2}\langle b - b^{\delta} , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right] (b - b^{\delta}) \rangle \\ &&+ \frac{1}{2}\langle \mathbf{A} x_{k} - b , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right] (\mathbf{A} x_{k} - b) \rangle\\ &&+ \langle \mathbf{A} x_{k} - b , \mathbb E_{k} \left[\mathbf{M}_{k}^{T}\mathbf{A}\mathbf{A}^{T} \mathbf{M}_{k} \right]\cdot (b - b^{\delta}) \rangle \\ &\leq& -\frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A}x_{k} - b\|^{2}_{2} + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|\mathbf{A}x_{k} - b\|^{2}_{2} + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|b - b^{\delta}\|^{2}_{2} \\ &&+ \frac{2\alpha}{\|\mathbf{A}\|^{2}_{F}}\langle \mathbf{A} x_{k} - b , \Bigg(\frac{1}{2\eta} \mathbf{W} -\frac{1}{2}I + \frac{\alpha}{2} (1-\frac{1}{\eta}) \frac{\mathbf{AA}^{T}}{\|\mathbf{A}\|^{2}_{F}}\Bigg)\cdot (b - b^{\delta}) \rangle\\ &\leq& -\frac{\alpha }{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A}x_{k} - b\|^{2}_{2} + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|\mathbf{A}x_{k} - b\|^{2}_{2} + \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} \sigma_{\max}(\mathbf{T}) \|b - b^{\delta}\|^{2}_{2} \\ &&+ \frac{\alpha \varepsilon}{\|\mathbf{A}\|^{2}_{F}} \|\mathbf{A} x_{k} - b\|^{2}_{2} + \frac{\alpha}{\varepsilon\|\mathbf{A}\|^{2}_{F}} \Bigg\| \Bigg(\frac{1}{2\eta} \mathbf{W} -\frac{1}{2}I + \frac{\alpha}{2} (1-\frac{1}{\eta}) \frac{\mathbf{A}\mathbf{A}^{T}}{\|\mathbf{A}\|^{2}_{F}}\Bigg)\cdot (b - b^{\delta}) \Bigg\|^{2}_{2} \\ &=& - \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (1 - \varepsilon - \sigma_{\max}(\mathbf{T}))\|\mathbf{A} x_{k} - b\|^{2}_{2}\\ &&+ \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}} (\sigma_{\max}(\mathbf{T}) + \frac{1}{\varepsilon}\sigma_{\max}^{2}(\mathbf{T}^{\prime}))\|b - b^{\delta}\|^{2}_{2}. \end{array} $$
(4.13)

Here, the second inequality in (4.13) uses Young's inequality \(\langle u, v \rangle \leq \frac{\varepsilon}{2}\|u\|^{2}_{2} + \frac{1}{2\varepsilon}\|v\|^{2}_{2}\) on the mixed term. To get an improvement after each iteration, we need \((1 - \varepsilon - \sigma _{\max \limits }(\mathbf {T})) >0\), i.e., \(\sigma _{\max \limits }(\mathbf {T}) < 1 - \varepsilon \), which holds true because of (4.8). Combining (4.12) and (4.13) and using the error bound from Lemma 4.4, we arrive at

$$ \begin{array}{@{}rcl@{}} \mathbb E_{k} \left[ D_{f}^{x_{k+1}^{*}}(x_{k+1},\hat x) \right] &\leq& \left( 1 - \frac{\alpha}{\gamma\|\mathbf{A}\|_{F}^{2}}(1-\varepsilon-\sigma_{\max}(\mathbf{T})) \right)D_{f}^{x_{k}^{*}}(x_{k},\hat x)\\ &&\quad+ \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}}(\sigma_{\max}(\mathbf{T}) + \tfrac1\varepsilon \sigma^{2}_{\max}(\mathbf{T}^{\prime}))\delta^{2}, \end{array} $$

which concludes the proof. The inequality in terms of the norm is obtained by using the first inequality of (2.3) since f is 1-strongly convex. □

Remark 4.22

Note that for W = I, η = 1, α = 1, we have \(\mathbf {T} = \frac {1}{2}\mathbf {I}\), meaning \(\mathbf {T}^{\prime } = 0\), and sending ε to zero gives \(\frac {c}{1-a} = \gamma \), which recovers the rate of the standard RSK in the noisy case shown in [43].
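Here a denotes the contraction factor and cδ2 the additive term in the bound above, i.e., \(a = 1 - \frac{\alpha}{\gamma\|\mathbf{A}\|_{F}^{2}}(1-\varepsilon-\sigma_{\max}(\mathbf{T}))\) and \(c = \frac{\alpha}{\|\mathbf{A}\|^{2}_{F}}(\sigma_{\max}(\mathbf{T}) + \tfrac1\varepsilon \sigma^{2}_{\max}(\mathbf{T}^{\prime}))\). For the reader's convenience, the error horizon \(\tfrac{c}{1-a}\delta^{2}\) referenced here follows from the recursion by the standard unrolling argument (taking full expectations and using only 0 < a < 1):

$$ \mathbb E \left[ D_{f}^{x_{k}^{*}}(x_{k},\hat x) \right] \leq a^{k} D_{f}^{x_{0}^{*}}(x_{0},\hat x) + c \delta^{2} \sum\limits_{j=0}^{k-1} a^{j} \leq a^{k} D_{f}^{x_{0}^{*}}(x_{0},\hat x) + \frac{c}{1-a} \delta^{2}, $$

so the iterates contract linearly up to a neighborhood whose size is proportional to the squared noise level.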

5 Numerical experiments

We present several experiments to demonstrate the effectiveness of Algorithm 1 under various conditions. In particular, we study the effects of the relaxation parameter α, the number of threads η, the sparsity parameter λ, the weight matrix W, and the probability matrix P. The simulations were performed in Python on an Intel Core i7 computer with 16 GB RAM. We start by comparing several variants of the RSKA algorithm with the randomized Kaczmarz (RK) and the randomized sparse Kaczmarz (RSK) methods. We consider the following RSKA variants:

  (a) v1: RSKA with W = I, i.e., α = 1, with a coupling such that \(\mathbf {P}\mathbf {W}\mathbf {D}^{-2}= \frac {\alpha \mathbf {I}}{\|\mathbf {A}\|^{2}_{F}}\), i.e., \(p_{i} = \frac {\|a_{i}\|^{2}_{2}}{\|\mathbf {A}\|^{2}_{F}}\).

  (b) v2: RSKA with a uniform weight matrix W = αI, i.e., \(p_{i} = \frac {\|a_{i}\|^{2}_{2}}{\|\mathbf {A}\|^{2}_{F}}\), where α is taken from Corollary 4.9.

  (c) v3: RSKA with a general diagonal weight matrix W with diagonal entries wi sampled i.i.d. from the uniform distribution on [0,1] and \(p_{i} = \frac {\|a_{i}\|^{2}_{2}}{\|\mathbf {A}\|^{2}_{F}}\).

  (d) v4: RSKA with a general diagonal weight matrix W with diagonal entries wi sampled i.i.d. from the uniform distribution on [0,1] and pi proportional to \(\frac {\|{a_{i}}\|^{2}}{w_{i}}\), i.e., we have PWD− 2 = αI with \(\alpha = \big ({\sum }_{j}\frac {\|{a_{j}}\|^{2}}{w_{j}}\big )^{-1}\).

The rationale behind these choices is as follows: Version v1 just uses no weights and standard probabilities. In v2, we still choose standard probabilities but use a coupling of weights and probabilities with the uniform weight α, which we have seen in Corollary 4.9 to have a certain optimality property. In v3, we still use standard probabilities but do not couple probabilities and weights; since we do not have any indication of how to choose the weights optimally, we simply use random weights here. In contrast to v3, we do couple weights and probabilities in v4, but again, since we do not have results on how to choose optimal weights, we choose random ones.
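To illustrate how these couplings translate into code, the following NumPy sketch sets up the row probabilities p and the diagonal weights w for v1–v4. The function name, the placeholder alpha_opt (standing in for the value from Corollary 4.9, whose formula is not repeated here), and the random number generator are our own assumptions, not part of the paper's implementation.

```python
import numpy as np

def rska_weights_and_probs(A, alpha_opt, rng):
    """Row probabilities p and diagonal weights w for the variants v1-v4."""
    row_norms_sq = np.sum(A**2, axis=1)        # ||a_i||_2^2, i.e. the entries of D^2
    frob_sq = row_norms_sq.sum()               # ||A||_F^2
    p_std = row_norms_sq / frob_sq             # standard probabilities
    m = A.shape[0]
    w_rand = rng.uniform(0.0, 1.0, size=m)     # random weights used in v3 and v4

    # v4: p_i proportional to ||a_i||^2 / w_i, so that P W D^{-2} = alpha * I
    #     with alpha = (sum_j ||a_j||^2 / w_j)^{-1}
    ratios = row_norms_sq / w_rand
    p_v4 = ratios / ratios.sum()

    return {
        "v1": (p_std, np.ones(m)),               # W = I, standard probabilities
        "v2": (p_std, alpha_opt * np.ones(m)),   # W = alpha * I (Corollary 4.9)
        "v3": (p_std, w_rand),                   # random weights, no coupling
        "v4": (p_v4, w_rand),                    # random weights, coupled probabilities
    }
```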

Note that RSK and RK are both special cases of RSKA, namely

  (a) RSK is RSKA with W = I, i.e., α = 1, with a coupling such that \(\mathbf {P}\mathbf {W}\mathbf {D}^{-2}= \frac {\alpha \mathbf {I}}{\|\mathbf {A}\|^{2}_{F}}\), i.e., \(p_{i} = \frac {\|a_{i}\|^{2}_{2}}{\|\mathbf {A}\|^{2}_{F}}\), and η = 1,

  (b) RK is RSKA with W = I, i.e., α = 1, with a coupling such that \(\mathbf {P}\mathbf {W}\mathbf {D}^{-2}= \frac {\alpha \mathbf {I}}{\|\mathbf {A}\|^{2}_{F}}\), i.e., \(p_{i} = \frac {\|a_{i}\|^{2}_{2}}{\|\mathbf {A}\|^{2}_{F}}\), η = 1, and λ = 0.
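For concreteness, a minimal NumPy sketch of one RSKA iteration, i.e., the averaged update (4.10) followed by the soft shrinkage \(S_{\lambda}\), might look as follows; the helper names and the explicit i.i.d. sampling of τk are our assumptions and not the authors' reference implementation.

```python
import numpy as np

def soft_shrinkage(x, lam):
    # S_lambda(x) = sign(x) * max(|x| - lambda, 0), applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def rska_step(A, b, x_dual, lam, eta, p, w, rng):
    """One averaged sparse Kaczmarz update, cf. (4.10)-(4.11).

    x_dual is the dual iterate x_k^*; the primal iterate is x_k = S_lambda(x_dual).
    p are the row probabilities, w the diagonal weights, and eta the number of
    rows (threads) averaged per iteration.
    """
    x = soft_shrinkage(x_dual, lam)
    tau = rng.choice(A.shape[0], size=eta, p=p)        # eta row indices, sampled i.i.d.
    step = np.zeros_like(x_dual)
    for i in tau:                                      # the eta terms are independent
        a_i = A[i]                                     # and could be computed in parallel
        step += w[i] * (a_i @ x - b[i]) / (a_i @ a_i) * a_i
    x_dual = x_dual - step / eta                       # average of the eta updates
    return x_dual, soft_shrinkage(x_dual, lam)
```

With η = 1 this reduces to RSK, and with additionally λ = 0 (so that the shrinkage is the identity) to RK, matching the special cases listed above.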

Synthetic data for the experiments is generated as follows: All elements of the data matrix \(\mathbf {A} \in \mathbb {R}^{m\times n}\) are chosen independent and identically distributed from the standard normal distribution \({\mathscr{N}} (0, 1)\). We constructed overdetermined, square, and underdetermined linear systems. To construct sparse solutions \(\hat x \in \mathbb {R}^{n}\), we choose s indices from \(\left \{ 1,\dots ,n \right \}\) at random and place non-zero entries at these positions (all other entries are zero); the corresponding right hand sides are \(b = \mathbf {A}\hat x \in \mathbb {R}^{m}\), while the respective noisy right hand sides bδ are obtained by adding noise (see Section 5.4 below). We generally chose \(\eta = 1 + \frac {1}{10}\min \limits (m,n)\) for RSKA, unless indicated otherwise. Note that in the overdetermined case with no noise, there is a unique solution \(\hat x\) since, with probability 1, the matrix A has full rank, and so all methods are expected to converge to the same solution \(\hat x\) in this case.
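The synthetic setup described above can be sketched as follows; the distribution of the non-zero entries of \(\hat x\) and the function name are our assumptions for illustration, and the noise construction is deferred to Section 5.4.

```python
import numpy as np

def make_problem(m, n, s, rng):
    """Gaussian test matrix, an s-sparse solution, and a consistent right hand side."""
    A = rng.standard_normal((m, n))              # entries i.i.d. N(0, 1)
    x_hat = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_hat[support] = rng.standard_normal(s)      # s non-zero entries (assumed Gaussian)
    b = A @ x_hat
    return A, x_hat, b
```

For example, make_problem(100, 200, 10, np.random.default_rng(0)) corresponds to the setting of Fig. 2, up to the random seed.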

For each experiment, we run independent trials, each starting with the initial iterate x0 = 0. We measure performance by plotting the relative residual \(\|\mathbf{A}x_{k} - b\|/\|b\|\) and the relative error \(\|x_{k} - \hat x \|/\|\hat x\|\) against the number of full iterations. The thick line shows the mean over all trials and, where appropriate, a shaded area represents the standard deviation over the trials.
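Plots of this kind can be produced, e.g., as in the following sketch, where errors is assumed to be an array of shape (number of trials, number of iterations) collected from the runs; the variable and function names are ours.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_mean_std(errors, label):
    """Thick mean curve with a +/- one standard deviation band on a log scale."""
    its = np.arange(errors.shape[1])
    mean = errors.mean(axis=0)
    std = errors.std(axis=0)
    plt.semilogy(its, mean, linewidth=2, label=label)                 # thick mean curve
    plt.fill_between(its, np.maximum(mean - std, 1e-16), mean + std,  # shaded std band
                     alpha=0.3)
    plt.xlabel("iteration")
    plt.ylabel(r"$\|x_k - \hat{x}\| / \|\hat{x}\|$")
    plt.legend()
```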

Figure 1 shows the results for a five times overdetermined and consistent system without noise, where the value λ = 1 was used for RSK and RSKA. Note that RK and the RSKA variants perform consistently well over all trials, while the performance of RSK differs drastically between different instances. Moreover, we observe experimentally that choosing the theoretically optimal relaxation parameter α from Corollary 4.9 for the RSKA v2 method gives faster convergence.

Fig. 1

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 100,n = 20, sparsity s = 10, η = 11, λ = 1, no noise, and 20 runs. Thick line shows mean over all trials

Figures 2 and 3 show the results for a two times resp. five times underdetermined and consistent system without noise, where the values λ = 1 resp. λ = 3 were used for RSK and RSKA. Methods like RSK and RSKA take advantage of the fact that the vectors \(\hat x\) are very sparse. Moreover, in Fig. 2, the RSK and RK methods do not reduce the residual as fast as the RSKA method. However, since the problem is underdetermined, the RK method does not converge to a sparse solution and hence its error does not converge to zero. Figures 4 and 5 show results for further values of m and n and exhibit behavior similar to Fig. 2.

Fig. 2

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 100,n = 200, sparsity s = 10, η = 11, λ = 1, no noise, and 10 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

Fig. 3

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 100,n = 500, sparsity s = 10, η = 11, λ = 3, no noise, and 10 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

Fig. 4

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 200,n = 600, sparsity s = 10, η = 21, λ = 3, no noise, and 5 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

Fig. 5

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 300,n = 300, sparsity s = 10, η = 31, λ = 3, no noise, and 5 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

5.1 The effect of the number of threads η

In Figs. 6, 7, 8, and 9, we see the effect of the number of threads η on the error of Algorithm 1 for the variant v2. We used underdetermined and overdetermined consistent systems with λ ∈ {0.01,3}. For the small λ = 0.01, RSKA behaves almost like the standard randomized Kaczmarz with averaging from [28], while for the larger value λ = 3, we see the typical behavior of the sparse Kaczmarz method, which stagnates from time to time and switches to faster improvement in between [23]. As the number of threads η increases, we see a corresponding decrease in the number of iterations needed to reach a certain accuracy; however, at some point, increasing η no longer improves the method, in accordance with Remark 4.10. For smaller values of η, we roughly see an η-times speedup in the number of iterations. Thus, the averaging pays off as soon as the updates in RSKA can be done in parallel.

Fig. 6

The effect of the number of threads η on the error and relative residual versus iteration for Algorithm 1 on the variant v2. m = 200,n = 600, sparsity s = 10, λ = 0.01, no noise

Fig. 7

The effect of the number of threads η on the error and relative residual versus iteration for Algorithm 1 on the variant v2. m = 200,n = 600, sparsity s = 10, λ = 3, no noise

Fig. 8

The effect of the number of threads η on the error and relative residual versus iteration for Algorithm 1 on the variant v2. m = 100,n = 10, sparsity s = 10, λ = 0.01, no noise

Fig. 9

The effect of the number of threads η on the error and relative residual versus iteration for Algorithm 1 on the variant v2. m = 100,n = 10, sparsity s = 10, λ = 3, no noise

5.2 The effect of the relaxation parameter α

In Fig. 10, we observe the effect on the convergence rate as we vary the relaxation parameter α. We used an underdetermined and consistent system with λ = 0.1 and η = 8 with variant v2. Increasing α allows us to get a smaller error; however, the method can ultimately diverge for larger values of the relaxation parameter α. This is observed in Fig. 11, which plots the relative residual and the error after 100 iterations on a smaller example for various relaxation parameters α and batch sizes η. The theoretically optimal parameter α from Corollary 4.9 is indicated as a dot. We used an overdetermined and consistent system with λ = 1 with variant v2. The plots confirm that α cannot be chosen too large, i.e., there exists an η-dependent upper bound on α that guarantees convergence (cf. Theorem 4.8). However, we also observe that a relaxation parameter larger than the α from Corollary 4.9 can lead to even faster convergence.

Fig. 10

The effect of the relaxation parameter α on the error and relative residual versus iteration for Algorithm 1 on the variant v2. m = 200,n = 600, sparsity s = 10, η = 8, λ = 0.1, no noise

Fig. 11

The effect of the relaxation parameter α for various values of η on the relative residual after 100 iterations of Algorithm 1 with the variant v2. m = 100,n = 10, sparsity s = 10, λ = 6, no noise. Circle markers are estimates of the optimal relaxation parameter using Corollary 4.9

5.3 The effect of the sparsity parameter λ

In Fig. 12, we see the effect of the sparsity parameter λ on the approximation error of Algorithm 1 and the randomized sparse Kaczmarz method (RSK). We used an underdetermined and consistent system with η = 21 and variant v2 of RSKA. We observe that as the sparsity parameter λ increases, the relative residual and the reconstruction error get worse; RSK is more affected by this behavior, whereas RSKA keeps its performance across different values of λ. The first row of Fig. 12 corresponds to experiments where we run 1000 iterations, while in the second row, the methods are run for (1 + λ) · 1000 iterations for all λ.

Fig. 12

A comparison of randomized sparse Kaczmarz (blue) and RSKA-v2 method (orange) in terms of the sparsity parameter λ. m = 200,n = 600, sparsity s = 10, η = 21, no noise. In the first row (a, b), methods are run for 1000 iterations for all λ whereas in the second row (c, d), methods are run for (1 + λ)1000 iterations for all λ

5.4 Noisy case

In this part, we are interested in the effectiveness of the RSKA method on inconsistent systems. We construct a sparse \(\hat x\) with normally distributed non-zero entries and set \(b = \mathbf {A}\hat {x}, \quad b^{\varepsilon } = b + \varepsilon \), where ε is a random vector uniformly distributed on a sphere whose radius corresponds to the prescribed relative noise level \(\|b - b^{\varepsilon}\|_{2}/\|b\|_{2}\). Figures 13, 14, and 15 show the results for noisy right hand sides, all with λ = 1 for RSK and RSKA. Figure 14 uses a five times overdetermined system with 10% relative noise, and Fig. 15 has the same noise level and a five times underdetermined system. In the underdetermined case, all methods consistently stagnate at a residual level which is comparable to the noise level; however, in all settings, the RSKA variants achieve faster convergence than RSK, which in turn is faster than RK. Regarding the reconstruction error, both RSK and RSKA achieve reconstructions with an error on the order of the noise level, with RSKA attaining the lower error of the two.
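One way to realize this noise model is to draw a Gaussian vector and rescale it to the prescribed radius, which yields a direction uniformly distributed on the sphere; the following sketch (with our own naming) adds noise at a given relative level.

```python
import numpy as np

def add_relative_noise(b, noise_level, rng):
    """Perturb b by a vector uniform on the sphere of radius noise_level * ||b||_2."""
    eps = rng.standard_normal(b.shape)
    eps *= noise_level * np.linalg.norm(b) / np.linalg.norm(eps)
    return b + eps   # so that ||b_eps - b||_2 / ||b||_2 equals noise_level
```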

Fig. 13

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 100,n = 100, sparsity s = 10, η = 11, λ = 1, noise level l = 0.1, and 5 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

Fig. 14

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 500,n = 100, sparsity s = 10, η = 11, λ = 1, noise level l = 0.1, and 5 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

Fig. 15

A comparison of randomized Kaczmarz (blue), randomized sparse Kaczmarz (green), and RSKA method (black), m = 100,n = 500, sparsity s = 10, η = 11, λ = 1, noise level l = 0.1, and 5 runs. Thick line shows mean over all trials, and shaded area represents the standard deviation

6 Conclusion

We proved that the iterates of the randomized sparse Kaczmarz with averaging method (Algorithm 1) converge linearly in expectation for consistent linear systems. Moreover, we showed that in the noisy case the iterates reach an error threshold on the order of the noise level. We gave a general error bound in terms of the sparsity parameter λ, the number of threads η, and the relaxation parameter α. Numerical experiments show that the method performs consistently well over a range of values of λ (in contrast to the version without averaging) and retains very good reconstruction quality as λ increases; they confirm the theoretical results and demonstrate the benefit of using Algorithm 1 to recover sparse solutions of linear systems, even in the noisy case. We demonstrated that the rate of convergence of Algorithm 1 improves, both in theory and in practice, as the number of threads η increases. Moreover, we derived a value of the relaxation parameter α which gives the fastest theoretical convergence rate, and our numerical experiments indicate that this value of α (used in variants v2 and v4 of the algorithms in Section 5) does indeed provide fast convergence.