Abstract
Recent advances in dual averaging schemes for primal-dual subgradient methods and stochastic learning reflect a growing interest in making stochastic and online approaches consistent with sparsity-inducing norms. In this paper we focus on a reweighting scheme for the \(l_2\)-Regularized Dual Averaging approach, which preserves the properties of a strongly convex optimization objective while approximating, in the limit, an \(l_0\)-type penalty. Our analysis addresses the regret and convergence criteria of this approximation. We derive our results in terms of a sequence of strongly convex optimization objectives obtained by smoothing a non-smooth loss function with a well-defined subdifferential, e.g. the hinge loss. We report an empirical evaluation of convergence in terms of the cumulative training error and the stability of the selected feature set. The experimental evaluation also shows improvements over the \(l_1\)-RDA method in generalization error.
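To make the abstract's ingredients concrete, the following is a minimal, hypothetical sketch (not the authors' exact algorithm) of dual averaging with a hinge-loss subgradient and a per-coordinate \(l_2\) reweighting \(r_j = 1/(|w_j| + \epsilon)\), which penalizes small coordinates heavily and thereby approximates an \(l_0\)-type penalty as \(\epsilon \to 0\). All function and parameter names are illustrative assumptions.

```python
import numpy as np

def reweighted_l2_rda(X, y, lam=0.05, eps=1e-2, epochs=10, seed=0):
    """Sketch of reweighted l2-Regularized Dual Averaging (illustrative only).

    Dual averaging keeps a running average of hinge-loss subgradients;
    the iterate is the closed-form minimizer of
        <g_bar, w> + (lam/2) * sum_j r_j * w_j**2,
    i.e. w_j = -g_bar_j / (lam * r_j). The reweighting r_j = 1/(|w_j|+eps)
    makes the quadratic penalty approximate an l0-type count for small eps.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    g_bar = np.zeros(d)   # running average of subgradients
    r = np.ones(d)        # per-coordinate reweighting factors
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            margin = y[i] * X[i].dot(w)
            # hinge-loss subgradient: -y*x on the margin-violating branch
            g = -y[i] * X[i] if margin < 1.0 else np.zeros(d)
            g_bar += (g - g_bar) / t
            # closed-form minimizer of the reweighted strongly convex objective
            w = -g_bar / (lam * r)
        # reweight once per epoch: small coordinates receive a large penalty
        r = 1.0 / (np.abs(w) + eps)
    return w
```

The closed-form update is what the strong convexity of the weighted \(l_2\) term buys: each dual-averaging step is a cheap elementwise division rather than an inner optimization.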
© 2014 Springer International Publishing Switzerland
Cite this paper
Jumutc, V., Suykens, J.A.K. (2014). Reweighted \(l_2\)-Regularized Dual Averaging Approach for Highly Sparse Stochastic Learning. In: Zeng, Z., Li, Y., King, I. (eds.) Advances in Neural Networks – ISNN 2014. Lecture Notes in Computer Science, vol 8866. Springer, Cham. https://doi.org/10.1007/978-3-319-12436-0_26
DOI: https://doi.org/10.1007/978-3-319-12436-0_26
Print ISBN: 978-3-319-12435-3
Online ISBN: 978-3-319-12436-0