
Quantile-based random sparse Kaczmarz for corrupted and noisy linear systems

  • Original Paper
  • Published in Numerical Algorithms

Abstract

The randomized Kaczmarz method, along with its recently developed variants, has become a popular tool for dealing with large-scale linear systems. However, these methods usually fail to converge when the linear systems are affected by heavy corruption, which is common in many practical applications. In this study, we develop a new variant of the randomized sparse Kaczmarz method with linear convergence guarantees by making use of the quantile technique to detect corruptions. Moreover, we incorporate the averaged block technique into the proposed method to achieve parallel computation and acceleration. Finally, extensive numerical experiments illustrate that the proposed algorithms are highly efficient.


Availability of supporting data

The data that support the findings of this study are of two kinds: simulated data generated by Matlab code and public data generated by the AIRtools toolbox (http://www.imm.dtu.dk/~pcha/Regutools/).
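For illustration only (this is not the authors' code), a minimal MATLAB sketch of how a sparse, corrupted, and noisy system of the form \(b=\tilde{b}+b^c+r\), as studied in the Appendix, might be simulated; the dimensions, sparsity level, corruption rate, and noise level below are arbitrary choices.

% Simulate a corrupted, noisy sparse linear system (illustrative sketch).
m = 500; n = 100; s = 10;              % rows, columns, sparsity of xhat
A = randn(m,n);
A = A ./ sqrt(sum(A.^2,2));            % normalize rows, as is common for Kaczmarz methods
xhat = zeros(n,1);
xhat(randperm(n,s)) = randn(s,1);      % sparse ground-truth solution
btilde = A*xhat;                       % exact right-hand side
r = 1e-3*randn(m,1);                   % small dense noise r
beta = 0.1;                            % fraction of corrupted equations
C = randperm(m, round(beta*m));        % indices of the corrupted rows
bc = zeros(m,1);
bc(C) = 5*randn(numel(C),1);           % large sparse corruptions b^c
b = btilde + bc + r;                   % observed right-hand side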

References

  1. Lorenz, D.A., Wenger, S., Schöpfer, F., Magnor, M.: A sparse Kaczmarz solver and a linearized Bregman method for online compressed sensing. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 1347–1351 (2014). https://doi.org/10.1109/ICIP.2014.7025269. IEEE

  2. Tan, Y.S., Vershynin, R.: Phase retrieval via randomized Kaczmarz: theoretical guarantees. Inf. Inference: J. IMA 8(1), 97–123 (2019). https://doi.org/10.1093/imaiai/iay005


  3. Xian, Y., Liu, H.G., Tai, X.C., Wang, Y.: Randomized Kaczmarz method for single-particle X-ray image phase retrieval. In: Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision, pp. 1–16. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-030-98661-2_112

  4. Römer, P., Filbir, F., Krahmer, F.: On the randomized Kaczmarz algorithm for phase retrieval. In: 2021 55th Asilomar Conference on Signals, Systems, and Computers, pp. 847–851 (2021). https://doi.org/10.1109/IEEECONF53345.2021.9723291. IEEE

  5. Chen, X., Qin, J.: Regularized Kaczmarz algorithms for tensor recovery. SIAM J. Imaging Sci. 14(4), 1439–1471 (2021). https://doi.org/10.1137/21M1398562


  6. Du, K., Sun, X.H.: Randomized regularized extended Kaczmarz algorithms for tensor recovery. Preprint at arXiv:2112.08566 (2021)

  7. Gordon, R., Bender, R., Herman, G.T.: Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 29(3), 471–481 (1970). https://doi.org/10.1016/0022-5193(70)90109-8


  8. Jarman, B., Needell, D.: QuantileRK: Solving large-scale linear systems with corrupted, noisy data. In: 2021 55th Asilomar Conference on Signals, Systems, and Computers, pp. 1312–1316 (2021). https://doi.org/10.1109/IEEECONF53345.2021.9723338. IEEE

  9. Needell, D.: Randomized Kaczmarz solver for noisy linear systems. BIT Numer. Math. 50, 395–403 (2010). https://doi.org/10.1007/s10543-010-0265-5


  10. Schöpfer, F., Lorenz, D.A.: Linear convergence of the randomized sparse Kaczmarz method. Math. Program. 173(1), 509–536 (2019). https://doi.org/10.1007/s10107-017-1229-1


  11. Yuan, Z.Y., Zhang, H., Wang, H.X.: Sparse sampling Kaczmarz-Motzkin method with linear convergence. Math. Methods Appl. Sci. 45(7), 3463–3478 (2022). https://doi.org/10.1002/mma.7990


  12. Yuan, Z.Y., Zhang, L., Wang, H.X., Zhang, H.: Adaptively sketched Bregman projection methods for linear systems. Inverse Probl. 38(6), 065005 (2022). https://doi.org/10.1088/1361-6420/ac5f76


  13. Zhang, L., Yuan, Z.Y., Wang, H.X., Zhang, H.: A weighted randomized sparse Kaczmarz method for solving linear systems. Comput. Appl. Math. 41(8), 1–18 (2022). https://doi.org/10.1007/s40314-022-02105-9


  14. Haddock, J., Needell, D., Rebrova, E., Swartworth, W.: Quantile-based iterative methods for corrupted systems of linear equations. SIAM J. Matrix Anal. Appl. 43(2), 605–637 (2022). https://doi.org/10.1137/21M1429187


  15. Steinerberger, S.: Quantile-based random Kaczmarz for corrupted linear systems of equations. Inf. Inference: J. IMA 12(1), 448–465 (2023). https://doi.org/10.1093/imaiai/iaab029


  16. Tondji, L., Lorenz, D.A.: Faster randomized block sparse Kaczmarz by averaging. Numer. Algorithms 93(4), 1417–1451 (2023). https://doi.org/10.1007/s11075-022-01473-x


  17. Merzlyakov, Y.I.: On a relaxation method of solving systems of linear inequalities. USSR Comput. Math. and Math. Phys. 2(3), 504–510 (1963). https://doi.org/10.1016/0041-5553(63)90463-4


  18. Necoara, I.: Faster randomized block Kaczmarz algorithms. SIAM J. Matrix Anal. Appl. 40(4), 1425–1452 (2019). https://doi.org/10.1137/19M1251643


  19. Karczmarz, S.: Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Pol. Sci. Lett., Cl. Sci. Math. Nat., 355–357 (1937)

  20. Hounsfield, G.N.: Computerized transverse axial scanning (tomography): Part 1. description of system. Brit. J. Radiol. 46(552), 1016–1022 (1973). https://doi.org/10.1259/0007-1285-46-552-1016


  21. Neumann, J.V.: Functional Operators, Vol. II: The Geometry of Orthogonal Spaces (reprint of mimeographed lecture notes first distributed in 1933). Annals of Mathematics Studies, No. 22. Princeton Univ. Press (1950). https://doi.org/10.1515/9781400882250

  22. Halperin, I.: The product of projection operators. Acta Sci. Math. (Szeged) 23(1), 96–99 (1962)


  23. Deutsch, F., Hundal, H.: The rate of convergence for the method of alternating projections, ii. J. Math. Anal. Appl. 205(2), 381–405 (1997). https://doi.org/10.1006/jmaa.1997.5202


  24. Galántai, A.: On the rate of convergence of the alternating projection method in finite dimensional spaces. J. Math. Anal. Appl. 310(1), 30–44 (2005). https://doi.org/10.1016/j.jmaa.2004.12.050


  25. Strohmer, T., Vershynin, R.: A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl. 15(2), 262–278 (2009). https://doi.org/10.1007/s00041-008-9030-4


  26. Lorenz, D.A., Schöpfer, F., Wenger, S.: The linearized Bregman method via split feasibility problems: analysis and generalizations. SIAM J. Imaging Sci. 7(2), 1237–1262 (2014). https://doi.org/10.1137/130936269


  27. Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001). https://doi.org/10.1137/S003614450037906X


  28. Cai, J.F., Osher, S., Shen, Z.: Convergence of the linearized Bregman iteration for \(l_1\)-norm minimization. Math. Comput. 78(268), 2127–2136 (2009). https://doi.org/10.1090/S0025-5718-09-02242-X


  29. Petra, S.: Randomized sparse block Kaczmarz as randomized dual block-coordinate descent. Anal. ştiin. ale Univ. Ovidius Constanţa. Seria Mat. 23(3), 129–149 (2015). https://doi.org/10.1515/auom-2015-0052


  30. Jiang, Y.T., Wu, G., Jiang, L.: A Kaczmarz method with simple random sampling for solving large linear systems. Preprint at arXiv:2011.14693 (2020)

  31. Needell, D., Tropp, J.A.: Paved with good intentions: analysis of a randomized block Kaczmarz method. Linear Algebra Appl. 441, 199–221 (2014). https://doi.org/10.1016/j.laa.2012.12.022


  32. Moorman, J.D., Tu, T.K., Molitor, D., Needell, D.: Randomized Kaczmarz with averaging. BIT Numer. Math. 61(1), 337–359 (2021). https://doi.org/10.1007/s10543-020-00824-1


  33. Miao, C.Q., Wu, W.T.: On greedy randomized average block Kaczmarz method for solving large linear systems. J. Comput. Appl. Math. 413, 114372 (2022). https://doi.org/10.1016/j.cam.2022.114372


  34. Yin, W.T.: Analysis and generalizations of the linearized Bregman method. SIAM J. Imaging Sci. 3(4), 856–877 (2010). https://doi.org/10.1137/090760350


  35. Haddock, J., Needell, D.: Randomized projection methods for linear systems with arbitrarily large sparse corruptions. SIAM J. Sci. Comput. 41(5), 19–36 (2019). https://doi.org/10.1137/18M1179213


  36. Cheng, L., Jarman, B., Needell, D., Rebrova, E.: On block accelerations of quantile randomized Kaczmarz for corrupted systems of linear equations. Inverse Probl. 39(2), 024002 (2022). https://doi.org/10.1088/1361-6420/aca78a


  37. Zhang, L., Wang, H.X., Zhang, H.: Quantile-based random sparse Kaczmarz for corrupted, noisy linear inverse systems. Preprint at arXiv:2206.07356 (2022)

  38. Hansen, P.C.: Regularization Tools version 4.0 for Matlab 7.3. Numer. Algorithms 46(2), 189–194 (2007). https://doi.org/10.1007/s11075-007-9136-9


  39. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. and Math. Phys. 4(5), 1–17 (1964). https://doi.org/10.1016/0041-5553(64)90137-5


  40. Schöpfer, F.: Exact regularization of polyhedral norms. SIAM J. Optim. 22(4), 1206–1223 (2012). https://doi.org/10.1137/11085236X

Download references

Acknowledgements

The authors thank the editor and reviewers of this manuscript for their thoughtful suggestions.

Funding

This work was supported by the National Natural Science Foundation of China (No.11971480, No.61977065), the Natural Science Fund of Hunan for Excellent Youth (No.2020JJ3038), and the Fund for NUDT Young Innovator Awards (No.20190105).

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study’s conception and design. The first draft of the manuscript was written by Lu Zhang, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript and are aware of the current submission.

Corresponding author

Correspondence to Hui Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

Proof of Lemma 5

Denote the set of all indices of corrupted equations by C, and let E denote the set of the remaining indices, that is, the indices of the rows that are not among the \(\beta m\) corrupted ones. For all \(i\in E\), we have \(b_i=\tilde{b}_i+r_i\) and \(\langle a_i,\hat{x}\rangle =\tilde{b}_i=b_i-r_i\). Hence,

$$ |\langle a_i,x_k-\hat{x}\rangle | =|\langle a_i,x_k\rangle -b_i+r_i| \ge |\langle a_i,x_k\rangle -b_i|-\Vert r\Vert _{\infty }. $$

Thus,

$$\begin{aligned} |\langle a_i,x_k\rangle -b_i|\le |\langle a_i,x_k-\hat{x}\rangle |+\Vert r\Vert _{\infty }. \end{aligned}$$
(1)

Recall that \(Q_k\) is the q-quantile \(Q_q\) of the residuals \(\{|\langle a_i,x_k\rangle -b_i|\}_{i=1}^m\) and that \(|E|=(1-\beta )m\). Hence at least \((1-\beta -q)m\) of the residuals \(|\langle a_i,x_k\rangle -b_i|\) are at least \(Q_k\) and belong to uncorrupted equations. Then,

$$\begin{aligned} (1-\beta -q)mQ_k&\le \sum _{i\in E}|\langle a_i,x_k\rangle -b_i|\nonumber \\&\le \sum _{i\in E}(|\langle a_i,x_k-\hat{x}\rangle |+\Vert r\Vert _{\infty })\nonumber \\&\le \left( \sum _{i\in E}|\langle a_i,x_k-\hat{x}\rangle |^2\right) ^{\frac{1}{2}}\sqrt{|E|}+|E|\cdot \Vert r\Vert _{\infty }\nonumber \\&= \Vert A_{ E}(x_k-\hat{x})\Vert \sqrt{(1-\beta )m} +(1-\beta )m\Vert r\Vert _{\infty }\nonumber \\&\le \sigma _{\max }\Vert x_k-\hat{x}\Vert \sqrt{(1-\beta )m}+ (1-\beta )m\Vert r\Vert _{\infty }. \end{aligned}$$
(2)

Therefore,

$$\begin{aligned} Q_k\le \frac{\sqrt{1-\beta }}{(1-\beta -q)\sqrt{m}}\sigma _{\max }\Vert x_k-\hat{x}\Vert + \frac{1-\beta }{1-\beta -q}\Vert r\Vert _{\infty }, \end{aligned}$$
(3)

which completes the proof. \(\square \)
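As a quick numerical illustration (not part of the proof), the bound (3) can be checked on the simulated system sketched in the data-availability section; here the iterate xk is arbitrary, and the quantile is taken as the \(\lceil qm\rceil \)-th smallest residual.

% Sanity check of the quantile bound (3); continues the simulation sketch
% above (A, b, xhat, r, beta, m are assumed to be in the workspace).
q  = 0.7;                                  % quantile level with beta + q < 1
xk = randn(size(A,2),1);                   % an arbitrary current iterate
res = abs(A*xk - b);                       % residuals |<a_i,xk> - b_i|
srt = sort(res);
Qk  = srt(ceil(q*m));                      % q-quantile Q_k of the residuals
sigma_max = norm(A);                       % largest singular value of A
bound = sqrt(1-beta)/((1-beta-q)*sqrt(m))*sigma_max*norm(xk-xhat) ...
        + (1-beta)/(1-beta-q)*norm(r,inf);
fprintf('Q_k = %.4f <= bound (3) = %.4f\n', Qk, bound)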

Proof of Theorem 1

We start by introducing some useful notations. Denote the set consisting of all acceptable indices in the k-th iterate as

$$ B=\{1\le i\le m\mid |\langle a_i,x_k\rangle -b_i| \le Q_k\}. $$

The subset of corrupted indices in B is denoted by S, and the remaining indices of B form the subset \(B\backslash S\). Clearly, \((q-\beta )m\le |B\backslash S|\le qm\) and \(0\le |S|\le \beta m\). Note that in each iterate the working index is sampled from the acceptable set B. This observation inspires us to consider the following splitting:

$$\begin{aligned} \mathbb {E}\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\right] =&P(i\in S)\mathbb {E}\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in S\right] \nonumber \\&+ P(i\in B\backslash S)\mathbb {E}\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in B\backslash S\right] . \end{aligned}$$
(4)
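For later use, we record the weights in this splitting. Since \(i_k\) is drawn uniformly from the acceptable set B (a short computation, under the convention that \(|B|=qm\)),

$$ P(i\in S)=\frac{|S|}{qm}\le \frac{\beta }{q},\qquad P(i\in B\backslash S)=\frac{|B\backslash S|}{qm}\ge \frac{q-\beta }{q}. $$

Together with \(|S|\le \beta m\), these weights account for the dependence on q and \(\beta \) of the constants \(C_1\) and \(C_2\) obtained below.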

It remains to estimate the two terms in the splitting (4). For clarity, we consider the inexact and exact steps separately; each case is divided into three steps.

(a) In the inexact-step case, we first consider the uncorrupted equations indexed by \(i\in B\backslash S\). Since \(b_{B\backslash S}^c=0\), these equations satisfy

$$ A_{B\backslash S}x=\tilde{b}_{B\backslash S}=b_{B\backslash S}-r_{B\backslash S}. $$

According to the convergence rate of RaSK in the noisy case, given in Lemma 4(a), we have

$$\begin{aligned}&~~~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in B\backslash S\right] \nonumber \\&\le \left( 1-\frac{1}{2}\cdot \frac{1}{\tilde{\kappa }(A_{B\backslash S})^2}\cdot \frac{|\hat{x}|_{\text {min}}}{|\hat{x}|_{\text {min}}+2\lambda }\right) D_f^{x_{k}^*}(x_{k},\hat{x})+\frac{1}{2}\frac{\Vert r_{B\backslash S}\Vert ^2}{\Vert A_{B\backslash S}\Vert _F^2}\nonumber \\&\le \left( 1-\frac{1}{2}\cdot \frac{\tilde{\sigma }_{q-\beta ,\text {min}}^2}{qm}\cdot \frac{|\hat{x}|_{\text {min}}}{|\hat{x}|_{\text {min}}+2\lambda }\right) D_f^{x_{k}^*}(x_{k},\hat{x}) +\frac{1}{2}\Vert r\Vert _{\infty }^2, \end{aligned}$$
(5)

where the last inequality follows from \(\Vert r_{B\backslash S}\Vert ^2\le |B\backslash S|\cdot \Vert r\Vert _{\infty }^2\) and

$$ \tilde{\kappa }(A_{B\backslash S})^2= \frac{\Vert A_{B\backslash S}\Vert _F^2}{\tilde{\sigma }^2_{\text {min}}(A_{B\backslash S})}\le \frac{qm}{\tilde{\sigma }_{q-\beta ,\text {min}}^2}. $$

Second, we consider the conditional expectation in S. In this case,

$$ A_Sx=\tilde{b}_S, b_S=\tilde{b}_S+b_S^c+r_S. $$

Denote by \(x_k^c\) the orthogonal projection of the true solution \(\hat{x}\in H(a_{i_k},\tilde{b}_{i_k})\) onto the corrupted hyperplane \(H(a_{i_k},b_{i_k})\) [9], which implies that

$$\begin{aligned} x_k^c:=\hat{x}+(b_{i_k}-\tilde{b}_{i_k})a_{i_k}\in H(a_{i_k},b_{i_k}). \end{aligned}$$
(6)

Note that \(x_{k+1}\) is the Bregman projection of \(x_k\) onto the hyperplane \(H(a_{i_k},b_{i_k})\). According to Lemma 2 and the 1-strong convexity of f, it follows that

$$\begin{aligned} D_f^{x_{k+1}^*}(x_{k+1},x_k^c) \le D_f^{x_{k}^*}(x_{k},x_k^c) - \frac{1}{2}(\langle a_{i_k},x_k\rangle -b_{i_k})^2. \end{aligned}$$
(7)

By reformulating it, we obtain that

$$\begin{aligned} D_f^{x_{k+1}^*}(x_{k+1},\hat{x}) \le D_f^{x_{k}^*}(x_{k},\hat{x}) - \frac{1}{2}(\langle a_{i_k},x_k\rangle -b_{i_k})^2 +\langle x_{k+1}^*-x_{k}^*,x_k^c-\hat{x}\rangle . \end{aligned}$$
(8)

For Quantile-RaSK with inexact step, we have \(x_{k+1}^*-x_{k}^*=-(\langle a_{i_k},x_k\rangle -b_{i_k})a_{i_k}\), obtained from Step 11 of Algorithm 1. Hence, (8) can be rewritten as

$$\begin{aligned}&~~~~D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\nonumber \\&\le D_f^{x_{k}^*}(x_{k},\hat{x}) + \frac{1}{2}(\langle a_{i_k},x_k\rangle -b_{i_k})^2 -(\langle a_{i_k},x_k\rangle -b_{i_k})(\langle a_{i_k},x_k\rangle -\tilde{b}_{i_k}). \end{aligned}$$
(9)
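A minimal MATLAB sketch of one such inexact-step Quantile-RaSK iteration (an illustration only, not the authors' implementation; it assumes the rows of A are normalized, \(\mathcal {S}_{\lambda }\) denotes soft thresholding, and the variable names are ours):

% One inexact-step Quantile-RaSK iteration (illustrative sketch).
% Assumes A with unit-norm rows, b, quantile level q, threshold lambda,
% and the current primal/dual iterates xk, xk_star.
m   = size(A,1);
res = A*xk - b;                            % residuals <a_i,xk> - b_i
srt = sort(abs(res));
Qk  = srt(ceil(q*m));                      % quantile Q_k
B   = find(abs(res) <= Qk);                % acceptable index set B
ik  = B(randi(numel(B)));                  % sample i_k uniformly from B
xk_star = xk_star - res(ik)*A(ik,:)';      % dual update x*_{k+1} = x*_k - (<a_ik,xk>-b_ik) a_ik
xk      = sign(xk_star).*max(abs(xk_star)-lambda,0);   % x_{k+1} = S_lambda(x*_{k+1})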

Now fix the values of the indices \(i_0,\ldots ,i_{k-1}\) and consider only \(i_k\) as a random variable with values in \(\{1,\ldots ,m\}\). According to \(|\langle a_{i_k},x_k\rangle -b_{i_k}|\le Q_k\), we have

$$\begin{aligned}&~~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in S\right] \!\le \! D_f^{x_{k}^*}(x_{k},\hat{x}) \!+\!\frac{1}{2}Q_k^2\!+\!Q_k\cdot \mathbb {E}_k\left( |\langle a_i,x_k\!-\!\hat{x}\rangle |\mid i\in \! S\right) , \end{aligned}$$
(10)

where

$$\begin{aligned} \mathbb {E}_k\left( |\langle a_i,x_k-\hat{x}\rangle |\mid i\in S\right) =\frac{1}{|S|}\sum _{i\in S} |\langle a_i,x_k-\hat{x}\rangle | \le \frac{1}{\sqrt{|S|}}\Vert A_S(x_k-\hat{x})\Vert \le \frac{\sigma _{\max }}{\sqrt{|S|}}\Vert x_k-\hat{x}\Vert . \end{aligned}$$
(11)

Combining (3), (10) and (11), we have that

$$\begin{aligned}&~~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in S\right] \nonumber \\&\le D_f^{x_{k}^*}(x_{k},\hat{x}) +\frac{1}{2}\left( \frac{1-\beta }{1-\beta -q}\right) ^2\Vert r\Vert _{\infty }^2\nonumber \\&~~~+\frac{1-\beta }{1-\beta -q}\left( \frac{1}{\sqrt{|S|}\sqrt{1-\beta }\sqrt{m}}+\frac{1}{2}\cdot \frac{1}{(1-\beta -q)m} \right) \sigma _{\max }^2\Vert x_k-\hat{x}\Vert ^2\nonumber \\&~~~+\frac{1-\beta }{1-\beta -q}\left( \frac{\sqrt{1-\beta }}{(1-\beta -q)\sqrt{m}}+\frac{1}{\sqrt{|S|}}\right) \sigma _{\max }\Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty } . \end{aligned}$$
(12)

To handle the \(\Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty }\) term, we split into two cases. If \(\Vert x_k-\hat{x}\Vert \ge \sqrt{n} \Vert r\Vert _{\infty }\), then \(\Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty }\le \Vert x_k-\hat{x}\Vert ^2/\sqrt{n}\); if \(\Vert x_k-\hat{x}\Vert \le \sqrt{n} \Vert r\Vert _{\infty }\), then \(\Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty }\le \sqrt{n}\Vert r\Vert _{\infty }^2\). In either case, using also \(\Vert x_k-\hat{x}\Vert ^2\le 2D_f^{x_k^*}(x_k,\hat{x})\) from the 1-strong convexity of f, it is easy to obtain that

$$\begin{aligned}&~~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\mid i\in S\right] \nonumber \\&\le \left( 1+\frac{c_{A,\beta ,q}'}{\sqrt{|S|}}+C_{A,\beta ,q}\right) D_f^{x_{k}^*}(x_{k},\hat{x}) +\left( \frac{d_{A,\beta ,q}}{\sqrt{|S|}}+D_{A,\beta ,q}\right) \Vert r\Vert _{\infty }^2, \end{aligned}$$
(13)

where

$$\begin{aligned} c_{A,\beta ,q}'= & {} \frac{2\sigma _{\max }^2\sqrt{1-\beta }}{\sqrt{m}(1-\beta -q)}+\frac{2\sigma _{\max }(1-\beta )}{\sqrt{n}(1-\beta -q)}, C_{A,\beta ,q}= \frac{\sigma _{\max }^2(1-\beta )}{m(1-\beta -q)^2} + \frac{2(1-\beta )^{\frac{3}{2}}\sigma _{\max }}{\sqrt{mn}(1-\beta -q)^2},\\ d_{A,\beta ,q}= & {} \frac{(1-\beta )\sqrt{n}}{1-\beta -q}\sigma _{\max },D_{A,\beta ,q}=\frac{(1-\beta )^{\frac{3}{2}}\sqrt{n}}{(1-\beta -q)^2\sqrt{m}}\sigma _{\max }+\frac{1}{2}\left( \frac{1-\beta }{1-\beta -q}\right) ^2. \end{aligned}$$

Finally, combining (4), (5) and (13) we have

$$\begin{aligned} \mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\right] \le \left( 1-C_1\right) D_f^{x_{k}^*}(x_{k},\hat{x}) +C_2\Vert r\Vert _{\infty }^2, \end{aligned}$$
(14)

where

$$\begin{aligned} C_1= & {} \frac{q-\beta }{2q^2} \frac{|\hat{x}|_{\text {min}}}{|\hat{x}|_{\text {min}}+2\lambda } \frac{\tilde{\sigma }_{q-\beta ,\text {min}}^2}{m} - \frac{2\sqrt{\beta (1-\beta )}}{q(1-\beta -q)}\left( \frac{\sigma _{\max }^2}{m}+\frac{\sqrt{1-\beta }\sigma _{\max }}{\sqrt{mn}}\right) \\{} & {} -\frac{\beta (1-\beta )}{q(1-\beta -q)^2}\left( \frac{\sigma _{\max }^2}{m}+\frac{2\sqrt{1-\beta }\sigma _{\max }}{\sqrt{mn}} \right) ,\\ C_2= & {} \frac{\sqrt{\beta }(1-\beta )}{q(1-q-\beta )}\left( 1+\frac{\sqrt{\beta (1-\beta )}}{1-q-\beta }\right) \sqrt{\frac{n}{m}}\sigma _{\max }+\frac{1}{2}\frac{\beta (1-\beta )^2}{q(1-\beta -q)^2}+\frac{1}{2}. \end{aligned}$$

We consider all indices \(i_0,i_1,\ldots ,i_k\) as random variables, and take full expectation on both sides. Thus,

$$\begin{aligned} \mathbb {E} \left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\right] \le \left( 1-C_1\right) \mathbb {E}\left[ D_f^{x_{k}^*}(x_{k},\hat{x})\right] +C_2\Vert r\Vert _{\infty }^2, \end{aligned}$$
(15)

(b) In the exact-step case, we first consider the uncorrupted equations indexed by \(i\in B\backslash S\). Since \(b_{B\backslash S}^c=0\), these equations satisfy

$$ A_{B\backslash S}x=\tilde{b}_{B\backslash S}=b_{B\backslash S}-r_{B\backslash S}. $$

It follows from the convergence rate of ERaSK in the noisy case, given in Lemma 4(b), that

$$\begin{aligned}&~~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})|i\in B\backslash S\right] \nonumber \\&\le \left( 1-\frac{1}{2}\frac{\tilde{\sigma }_{q-\beta ,\text {min}}^2}{qm}\frac{|\hat{x}|_{\text {min}}}{|\hat{x}|_{\text {min}}+2\lambda }\right) D_f^{x_{k}^*}(x_{k},\hat{x}) +\frac{1}{2}\Vert r\Vert _{\infty }^2 +\frac{2}{\sqrt{|B\backslash S|}}\Vert r\Vert _{\infty }\Vert A\Vert _{1,2}. \end{aligned}$$
(16)

Second, we consider the conditional expectation on the corrupted set S. In this case, we have

$$ A_Sx=\tilde{b}_S, b_S=\tilde{b}_S+b_S^c+r_S.\nonumber $$

For Quantile-RaSK with exact step, we have \(x_{k}^*=x_{k}+\lambda s_k\) with \(\Vert s_k\Vert _{\infty }\le 1\) and \(\Vert s_{k+1}\Vert _{\infty }\le 1\); then \(x_{k+1}^*-x_{k}^*=(x_{k+1}-x_{k})+\lambda (s_{k+1}-s_{k})\). Note that the exact linesearch guarantees \(\langle a_{i_k},x_{k+1}\rangle =b_{i_k}\). Thus, (8) can be rewritten as

$$\begin{aligned} D_f^{x_{k+1}^*}(x_{k+1},\hat{x})&\le D_f^{x_{k}^*}(x_{k},\hat{x}) + \lambda \langle s_{k+1}-s_{k},a_{i_k}\rangle (b_{i_k}-\tilde{b}_{i_k})\nonumber \\&~~~+ \frac{1}{2}(\langle a_{i_k},x_k\rangle -b_{i_k})^2 - (\langle a_{i_k},x_k\rangle -b_{i_k}) (\langle a_{i_k},x_k\rangle -\tilde{b}_{i_k}). \end{aligned}$$
(17)
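For completeness, here is the short computation behind (17) (a verification, using \(x_k^c-\hat{x}=(b_{i_k}-\tilde{b}_{i_k})a_{i_k}\) from (6), \(x_{k+1}^*-x_{k}^*=(x_{k+1}-x_{k})+\lambda (s_{k+1}-s_{k})\) and \(\langle a_{i_k},x_{k+1}\rangle =b_{i_k}\)). The last term of (8) becomes

$$\begin{aligned} \langle x_{k+1}^*-x_{k}^*,x_k^c-\hat{x}\rangle&=(b_{i_k}-\tilde{b}_{i_k})\langle x_{k+1}-x_{k},a_{i_k}\rangle +\lambda \langle s_{k+1}-s_{k},a_{i_k}\rangle (b_{i_k}-\tilde{b}_{i_k})\\&=-(\langle a_{i_k},x_k\rangle -b_{i_k})(b_{i_k}-\tilde{b}_{i_k}) +\lambda \langle s_{k+1}-s_{k},a_{i_k}\rangle (b_{i_k}-\tilde{b}_{i_k}), \end{aligned}$$

and the identity \(-\frac{1}{2}t^2-t(b_{i_k}-\tilde{b}_{i_k})=\frac{1}{2}t^2-t(\langle a_{i_k},x_k\rangle -\tilde{b}_{i_k})\) with \(t=\langle a_{i_k},x_k\rangle -b_{i_k}\) then yields (17).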

Viewing \(i_k\) as a random variable with \(i_0,\ldots ,i_{k-1}\) fixed yields

$$\begin{aligned} \mathbb {E}_k(\lambda \langle s_{k+1}-s_{k},a_i\rangle (b_i-\tilde{b}_i)|i\in S)&\le 2\lambda \mathbb {E}_k(\Vert a_i\Vert _1\cdot |b_i-\tilde{b}_i||i\in S)\nonumber \\&\le \frac{2\lambda }{|S|}\sum _{i\in S}(\Vert a_i\Vert _1\cdot |b_i-\tilde{b}_i|)\nonumber \\&\le \frac{2\lambda }{|S|}\Vert b-\tilde{b}\Vert \cdot \Vert A\Vert _{1,2}. \end{aligned}$$
(18)

Recalling the conclusion (13) from part (a), we obtain

$$\begin{aligned}&~~~\mathbb {E}_k\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\mid i\in S\right] \nonumber \\&\le \left( 1+\frac{c_{A,\beta ,q}'}{\sqrt{|S|}}+C_{A,\beta ,q}\right) D_f^{x_{k}^*}(x_{k},\hat{x}) +\left( \frac{d_{A,\beta ,q}}{\sqrt{|S|}}+D_{A,\beta ,q}\right) \Vert r\Vert _{\infty }^2\nonumber \\&~~~ +\frac{2\lambda }{|S|}\Vert b-\tilde{b}\Vert \cdot \Vert A\Vert _{1,2}. \end{aligned}$$
(19)

Finally, combining all ingredients (4), (16) and (19), and taking the full expectation, we have that

$$\begin{aligned}&~~~~\mathbb {E}\left[ D_f^{x_{k+1}^*}(x_{k+1},\hat{x})\right] \\&\le \left( 1-C_1\right) \mathbb {E}\left[ D_f^{x_{k}^*}(x_{k},\hat{x})\right] +C_2\Vert r\Vert _{\infty }^2 +\frac{2}{\sqrt{qm}}\Vert r\Vert _{\infty }\cdot \Vert A\Vert _{1,2} +\frac{2\lambda }{qm}\Vert b-\tilde{b}\Vert \cdot \Vert A\Vert _{1,2}. \end{aligned}$$

The constants \(C_1\) and \(C_2\) are those stated in Theorem 1.

(c) To ensure the decay in expectation, we require

$$\begin{aligned}&\left( \frac{2\sqrt{(1-\beta )\beta }}{1-\beta -q}+\frac{(1-\beta )\beta }{(1-\beta -q)^2} \right) \cdot \sigma _{\max } + \sqrt{\frac{m}{n}}\left( \frac{2\sqrt{\beta }(1-\beta )}{1-\beta -q} + \frac{2(1-\beta )^{\frac{3}{2}}\beta }{(1-\beta -q)^2} \right) \\&\quad < \frac{q-\beta }{2q}\cdot \frac{|\hat{x}|_{\text {min}}}{|\hat{x}|_{\text {min}}+2\lambda }\cdot \frac{\tilde{\sigma }_{q-\beta ,\text {min}}^2}{\sigma _{\max }}, \end{aligned}$$

which holds for a sufficiently small parameter \(\beta \), since its left-hand side tends to zero as \(\beta \) tends to zero. Therefore, we obtain the conclusion. \(\square \)

Proof of Theorem 3

Similar to the proof of Theorem 1, we first denote the set of row indices whose residuals lie below the quantile \(Q_k\) at iteration k as \(T=\{i\in [m]\mid |\langle a_i,x_k\rangle -b_i|<Q_k\}\), with \(|T|=\eta =qm\). The uncorrupted and corrupted rows in T are denoted by \(T_1\) and \(T_2\), respectively; then \((q-\beta )m\le |T_1|\le qm\) and \(|T_2|\le \beta m\). Using the constant stepsize w, the update in Algorithm 2 reads as follows:

$$\begin{aligned} x_{k+1}^*= & {} x_k^*-\frac{w}{\eta }\sum _{i\in T} (\langle a_i,x_k\rangle -b_i) a_{i},\nonumber \\ x_{k+1}= & {} \mathcal {S}_{\lambda }(x_{k+1}^*). \end{aligned}$$
(20)
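A minimal MATLAB sketch of this averaged-block update (an illustration only, not the authors' code; it assumes the rows of A are normalized, \(\mathcal {S}_{\lambda }\) is soft thresholding, and ties in the quantile are ignored):

% One averaged-block iteration (20) of Algorithm 2 (illustrative sketch).
% Assumes A, b, quantile level q, stepsize w, threshold lambda,
% and the current primal/dual iterates xk, xk_star.
m   = size(A,1);
res = A*xk - b;                            % residuals <a_i,xk> - b_i
srt = sort(abs(res));
Qk  = srt(ceil(q*m));                      % quantile Q_k
T   = find(abs(res) < Qk);                 % admissible block T
eta = numel(T);                            % block size (eta = qm up to ties)
xk_star = xk_star - (w/eta)*(A(T,:)'*res(T));          % averaged dual update
xk      = sign(xk_star).*max(abs(xk_star)-lambda,0);   % x_{k+1} = S_lambda(x*_{k+1})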

Denote

$$\begin{aligned} x_{k}^{\delta }:=\hat{x}+\frac{w}{\eta }\sum _{i\in T} (b_i-\tilde{b}_i) a_{i}. \end{aligned}$$
(21)

Now apply Lemma 8 with \(f(x)=\lambda \Vert x\Vert _{1}+\frac{1}{2}\Vert x\Vert ^2\), \(\Phi (x)=\langle x_k^*-x_{k+1}^*,x-x_k\rangle \) and \(y=x_k^{\delta }\); it holds that

$$\begin{aligned}{} & {} D_f^{x_{k+1}^*}(x_{k+1}, x_k^{\delta })\\\le & {} D_f^{x_k^*}(x_k, x_k^{\delta }) \!+\langle x_k^{*}\!-x_{k+1}^{*},x_k^{\delta }\!-x_k\rangle \!-\langle x_k^*\!-x_{k+1}^*,x_{k+1}\!-x_k\rangle \!-D_f^{x_{k+1}^*}(x_{k},x_{k+1})\\\le & {} D_f^{x_k^*}(x_k, x_k^{\delta }) \!+\!\langle x_k^*\!-\!x_{k+1}^*,x_k^{\delta }\!-\!x_k\rangle \!+\!\Vert x_k^*\!-\!x_{k+1}^*\Vert \cdot \Vert x_{k+1}\!-\!x_k\Vert \!-\!\frac{1}{2} \Vert x_k\!-\!x_{k+1}\Vert ^2\\\le & {} D_f^{x_k^*}(x_k, x_k^{\delta }) +\langle x_k^*-x_{k+1}^*,x_k^{\delta }-x_k\rangle +\frac{1}{2} \Vert x_k^*-x_{k+1}^*\Vert ^2, \end{aligned}$$

Unfolding the expression of \(x_k^{\delta }\) in (21), we obtain

$$\begin{aligned} \begin{aligned}&~~~~D_f^{x_{k+1}^*}\left( x_{k+1}, \hat{x}\right) \\&\le D_f^{x_k^*}\left( x_k, \hat{x}\right) +\langle x_{k}^*-x_{k+1}^*,\hat{x}-x_k\rangle +\frac{1}{2} \Vert x_k^*-x_{k+1}^*\Vert ^2, \end{aligned} \end{aligned}$$

We estimate the two terms on the right-hand side in two steps: the inner product term and the quadratic term.

Step 1: For the inner product term, we can derive that

$$\begin{aligned} \begin{aligned}&~~~~\langle x_{k}^*-x_{k+1}^*,\hat{x}-x_k\rangle \\&= -\langle \frac{w}{\eta }\sum _{i\in T} (\langle a_i,x_k\rangle -b_i) a_{i},x_k-\hat{x}\rangle \\&= -\langle \frac{w}{\eta }\sum _{i\in T_1} (\langle a_i,x_k\rangle -b_i) a_{i},x_k-\hat{x}\rangle -\langle \frac{w}{\eta }\sum _{i\in T_2} (\langle a_i,x_k\rangle -b_i) a_{i},x_k-\hat{x}\rangle \\ \end{aligned} \end{aligned}$$
(22)

From the first term in (22), we obtain

$$\begin{aligned}&~~~~-\langle \frac{w}{\eta }\sum _{i\in T_1}(\langle a_i,x_k\rangle -b_i) a_{i},x_k-\hat{x}\rangle \nonumber \\&= -\langle \frac{w}{\eta }\sum _{i\in T_1} (\langle a_i,x_k\rangle -\tilde{b}_i) a_{i},x_k-\hat{x}\rangle +\langle \frac{w}{\eta }\sum _{i\in T_1} r_i a_{i},x_k-\hat{x}\rangle \nonumber \\&\le -\frac{w}{\eta }\langle x_k-\hat{x},\sum _{i\in T_1}a_ia_i^T(x_k-\hat{x})\rangle +\frac{w}{\eta }\Vert r\Vert _{\infty }\langle \sum _{i\in T_1} a_{i},x_k-\hat{x}\rangle \nonumber \\&\le -\frac{w}{qm}\sigma _{q-\beta ,\text {min}}^2\cdot \Vert x_k-\hat{x}\Vert ^2 + \frac{w}{\sqrt{qm}}\sigma _{\max }\cdot \Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert . \end{aligned}$$
(23)

From the second term in (22), we get

$$\begin{aligned}&~~~~-\langle \frac{w}{\eta }\sum _{i\in T_2} (\langle a_i,x_k\rangle -b_i) a_{i},x_k-\hat{x}\rangle \nonumber \\&\le \frac{w}{\eta }Q_k\langle \sum _{i\in T_2} a_{i},x_k-\hat{x}\rangle \nonumber \\&\le \frac{w}{\eta }Q_k\sqrt{| T_2|}\sigma _{\max }\cdot \Vert x_k-\hat{x}\Vert \nonumber \\&\le \frac{w\sqrt{(1-\beta )\beta }}{(1-\beta -q)qm}\sigma _{\max }^2\cdot \Vert x_k-\hat{x}\Vert ^2 + \frac{1-\beta }{1-\beta -q} \frac{w\sqrt{\beta }}{q\sqrt{m}}\sigma _{\max }\cdot \Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert . \end{aligned}$$
(24)

The last inequality makes use of Lemma 5. Combining (22), (23) and (24), we obtain

$$\begin{aligned}&~~~~\langle x_{k}^*-x_{k+1}^*,\hat{x}-x_k\rangle \nonumber \\&\le \left( -\frac{w}{qm}\sigma _{q-\beta ,\text {min}}^2 +\frac{w\sqrt{(1-\beta )\beta }}{(1-\beta -q)qm}\sigma _{\max }^2 \right) \Vert x_k-\hat{x}\Vert ^2\nonumber \\&~~~~+ \left( \frac{w}{\sqrt{qm}}\sigma _{\max } + \frac{1-\beta }{1-\beta -q} \frac{w\sqrt{\beta }}{q\sqrt{m}}\sigma _{\max } \right) \Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert . \end{aligned}$$
(25)

Step 2: For the quadratic term, we have

$$\begin{aligned}&~~~~\Vert x_k^*-x_{k+1}^*\Vert ^2\nonumber \\&=\Vert \frac{w}{\eta }\sum _{i\in T} (\langle a_i,x_k\rangle -b_i) a_{i}\Vert ^2\nonumber \\&=\Vert \frac{w}{\eta }\sum _{i\in T_1} (\langle a_i,x_k\rangle -b_i) a_{i}+\frac{w}{\eta }\sum _{i\in T_2} (\langle a_i,x_k\rangle -b_i) a_{i}\Vert ^2,\nonumber \\&=\Vert u+v\Vert ^2,\nonumber \\&\le (\Vert u\Vert +\Vert v\Vert )^2, \end{aligned}$$
(26)

where \(u=\frac{w}{\eta }\sum _{i\in T_1} (\langle a_i,x_k\rangle -b_i) a_{i}\) and \(v=\frac{w}{\eta }\sum _{i\in T_2} (\langle a_i,x_k\rangle -b_i) a_{i}\).

We have

$$\begin{aligned} \Vert u\Vert ^2&=\Vert \frac{w}{\eta }\sum _{i\in T_1} (\langle a_i,x_k\rangle -b_i) a_{i}\Vert ^2\nonumber \\&= \frac{w^2}{\eta ^2}\Vert \sum _{i\in T_1} (\langle a_i,x_k\rangle -\tilde{b}_i-r_i) a_{i}\Vert ^2\nonumber \\&= \frac{w^2}{\eta ^2} \Vert A_{ T_1}^T A_{ T_1}(x_k-\hat{x})-\sum _{i\in T_1}a_ir_i \Vert ^2\nonumber \\&\le \frac{w^2}{\eta ^2} \left( \Vert A_{ T_1}^T A_{ T_1}(x_k-\hat{x})\Vert +\Vert \sum _{i\in T_1}a_ir_i\Vert \right) ^2\nonumber \\&\le \frac{w^2\sigma _{\max }^2}{q^2m^2}\left( \sigma _{\max }\Vert x_k-\hat{x}\Vert +\sqrt{qm}\Vert r\Vert _{\infty } \right) ^2. \end{aligned}$$
(27)

and

$$\begin{aligned} \Vert v\Vert ^2&=\Vert \frac{w}{\eta }\sum _{i\in T_2} (\langle a_i,x_k\rangle -b_i) a_{i}\Vert ^2\nonumber \\&\le \frac{w^2}{\eta ^2}\cdot Q_k^2\cdot \Vert \sum _{i\in T_2}a_i\Vert ^2\nonumber \\&\le \frac{w^2\sigma _{\max }^2\beta }{q^2m}\left( \frac{\sqrt{1-\beta }}{(1-\beta -q)\sqrt{m}}\sigma _{\max }\Vert x_k-\hat{x}\Vert + \frac{1-\beta }{1-\beta -q}\Vert r\Vert _{\infty } \right) ^2. \end{aligned}$$
(28)

Bringing (27) and (28) together, we obtain

$$\begin{aligned}&~~~~\Vert u+v\Vert ^2\nonumber \\&\le \frac{w^2\sigma _{\max }^2}{q^2m^2}\left( \sigma _{\max }\Vert x_k-\hat{x}\Vert +\sqrt{qm}\Vert r\Vert _{\infty } + \frac{\sqrt{(1-\beta )\beta }}{1-\beta -q}\sigma _{\max }\Vert x_k-\hat{x}\Vert + \frac{(1-\beta )\sqrt{\beta m}}{1-\beta -q}\Vert r\Vert _{\infty } \right) ^2\nonumber \\&= \frac{w^2\sigma _{\max }^2}{q^2m^2} \left[ \sigma _{\max } \left( 1+\frac{\sqrt{(1-\beta )\beta }}{1-\beta -q} \right) \Vert x_k-\hat{x}\Vert + \sqrt{qm} \left( 1+\frac{(1-\beta )\sqrt{\beta }}{(1-\beta -q)\sqrt{q}} \right) \Vert r\Vert _{\infty } \right] ^2\nonumber \\&\le \frac{w^2}{q^2m^2}\sigma _{\max }^4\left( 1+ \frac{\sqrt{(1-\beta )\beta }}{1-\beta -q} \right) ^2\Vert x_k-\hat{x}\Vert ^2 + \frac{w^2}{qm}\sigma _{\max }^2\left( 1+\frac{(1-\beta )\sqrt{\beta }}{(1-\beta -q)\sqrt{q}} \right) ^2\Vert r\Vert _{\infty }^2\nonumber \\&~~~~+ \frac{2w^2}{(qm)^{\frac{3}{2}}}\sigma _{\max }^3 \left( 1+ \frac{\sqrt{(1-\beta )\beta }}{1-\beta -q} \right) \left( 1+\frac{(1-\beta )\sqrt{\beta }}{(1-\beta -q)\sqrt{q}} \right) \Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty }. \end{aligned}$$
(29)

Combining (25) and (29) yields

$$\begin{aligned}{} & {} D_f^{x_{k+1}^*}\left( x_{k+1}, \hat{x}\right) \\\le & {} D_f^{x_k^*}\left( x_k, \hat{x}\right) +\langle x_{k}^*-x_{k+1}^*,\hat{x}-x_k\rangle +\frac{1}{2} \Vert x_k^*-x_{k+1}^*\Vert ^2,\nonumber \\\le & {} D_f^{x_k^*}\left( x_k, \hat{x}\right) + \left( -\frac{w}{qm}\sigma _{q-\beta ,\text {min}}^2 +\frac{w\sqrt{(1-\beta )\beta }}{(1-\beta -q)qm}\sigma _{\max }^2 \right) \Vert x_k-\hat{x}\Vert ^2\nonumber \\{} & {} + \left( \frac{w}{\sqrt{qm}}\sigma _{\max } + \frac{1-\beta }{1-\beta -q} \frac{w\sqrt{\beta }}{q\sqrt{m}}\sigma _{\max } \right) \Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert \nonumber \\{} & {} + \frac{w^2}{q^2m^2}\sigma _{\max }^4\left( 1+ \frac{\sqrt{(1-\beta )\beta }}{1-\beta -q} \right) ^2\Vert x_k-\hat{x}\Vert ^2 + \frac{w^2}{qm}\sigma _{\max }^2\left( 1+\frac{(1-\beta )\sqrt{\beta }}{(1-\beta -q)\sqrt{q}} \right) ^2\Vert r\Vert _{\infty }^2\nonumber \\{} & {} + \frac{2w^2}{(qm)^{\frac{3}{2}}}\sigma _{\max }^3 \left( 1+ \frac{\sqrt{(1-\beta )\beta }}{1-\beta -q} \right) \left( 1+\frac{(1-\beta )\sqrt{\beta }}{(1-\beta -q)\sqrt{q}} \right) \Vert x_k-\hat{x}\Vert \cdot \Vert r\Vert _{\infty }\\= & {} D_f^{x_k^*}\left( x_k, \hat{x}\right) + \left( -c_1\frac{w}{m}\sigma _{q-\beta ,\text {min}}^2 +c_2\frac{ w}{m}\sigma _{\max }^2 +c_3 \frac{w^2}{m^2}\sigma _{\max }^4 \right) \Vert x_k-\hat{x}\Vert ^2\nonumber \\{} & {} + \left( c_4\frac{w}{\sqrt{m}}\sigma _{\max } +c_5\frac{w^2}{m^{\frac{3}{2}}}\sigma _{\max }^3 \right) \Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert + c_6\frac{w^2}{m}\sigma _{\max }^2\Vert r\Vert _{\infty }^2, \end{aligned}$$

where \(c_i, i=1,\ldots ,6\), are positive constants depending only on q and \(\beta \); they can be written out directly and we omit them. For the term \(\Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert \), we use Young's inequality

$$\Vert r\Vert _{\infty }\cdot \Vert x_k-\hat{x}\Vert \le \frac{1}{2\sqrt{n}}\Vert x_k-\hat{x}\Vert ^2+\frac{\sqrt{n}}{2}\Vert r\Vert _{\infty }^2.$$

Denote \(\alpha =\frac{|\hat{x}|_{\text {min}}}{ |\hat{x}|_{\text {min}} +2 \lambda }\); then, using Lemma 9, we have

$$\begin{aligned}&~~~~D_f^{x_{k+1}^*}\left( x_{k+1}, \hat{x}\right) \nonumber \\&\le D_f^{x_k^*}\left( x_k, \hat{x}\right) + \left( -c_1\frac{w}{m}\sigma _{q-\beta ,\text {min}}^2 +c_2\frac{w}{m}\sigma _{\max }^2 +c_3\frac{w^2}{m^2}\sigma _{\max }^4 +\frac{c_4}{2}\frac{w}{\sqrt{mn}}\sigma _{\max } \right) \Vert x_k-\hat{x}\Vert ^2\nonumber \\&~~~~~~ +\frac{c_5}{2}\frac{w^2}{m^{3/2}n^{1/2}}\sigma _{\max }^3\Vert x_k-\hat{x}\Vert ^2 + \left( \frac{c_4}{2}\frac{\sqrt{n}w}{\sqrt{m}}\sigma _{\max }+\frac{c_5}{2}\frac{\sqrt{n}w^2}{m^{3/2}}\sigma _{\max }^3+c_6\frac{w^2}{m}\sigma _{\max }^2 \right) \Vert r\Vert _{\infty }^2 \nonumber \\&\le \left[ 1-\alpha \left( c_1\frac{w}{m}\frac{\tilde{\sigma }_{\text {min}}^2\sigma _{q-\beta ,\text {min}}^2}{\sigma _{\max }^2} -c_2\frac{w}{m}\tilde{\sigma }_{\text {min}}^2 -c_3\frac{w^2}{m^2}\tilde{\sigma }_{\text {min}}^2\sigma _{\max }^2 -\frac{c_4}{2}\frac{w\tilde{\sigma }_{\text {min}}^2}{\sqrt{mn}\sigma _{\max }} \right) \right] D_f^{x_k^*}\left( x_k, \hat{x}\right) \nonumber \\&~~~~~~ +\frac{c_5}{2}\frac{\alpha w^2\tilde{\sigma }_{\text {min}}^2}{m^{3/2}n^{1/2}}\sigma _{\max }D_f^{x_k^*}\left( x_k, \hat{x}\right) + \left( \frac{c_4}{2}\frac{\sqrt{n}w}{\sqrt{m}}\sigma _{\max }+\frac{c_5}{2}\frac{\sqrt{n}w^2}{m^{3/2}}\sigma _{\max }^3+c_6\frac{w^2}{m}\sigma _{\max }^2 \right) \Vert r\Vert _{\infty }^2\nonumber \\&=(1-c_1^*w+c_2^*w^2)D_f^{x_k^*}\left( x_k, \hat{x}\right) +(c_3^*w+c_4^*w^2)\Vert r\Vert _{\infty }^2, \end{aligned}$$
(30)

where

$$\begin{aligned} c_1^*= & {} c_1\alpha \frac{\tilde{\sigma }_{\text {min}}^2\sigma _{q-\beta ,\text {min}}^2}{ m\sigma _{\max }^2}-c_2\alpha \frac{\tilde{\sigma }_{\text {min}}^2}{ m}-c_4\alpha \frac{\tilde{\sigma }_{\text {min}}^2}{2\sqrt{mn}\sigma _{\max }},~ c_2^*=c_3\alpha \frac{\tilde{\sigma }_{\text {min}}^2\sigma _{\max }^2}{ m^2}+c_5\alpha \frac{\tilde{\sigma }_{\text {min}}^2\sigma _{\max }}{2 m^{3/2}n^{1/2}},\\ c_3^*= & {} c_4\frac{\sqrt{n}\sigma _{\max }}{2\sqrt{m}},~c_4^*=c_5\frac{\sqrt{n}\sigma _{\max }^3}{2m^{3/2}}+c_6\frac{\sigma _{\max }^2}{m}. \end{aligned}$$

\(\square \)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, L., Wang, H. & Zhang, H. Quantile-based random sparse Kaczmarz for corrupted and noisy linear systems. Numer Algor (2024). https://doi.org/10.1007/s11075-024-01844-6


  • DOI: https://doi.org/10.1007/s11075-024-01844-6
