Safe Feature Elimination for Non-negativity Constrained Convex Optimization
Inspired by recent work on safe feature elimination for 1-norm regularized least-squares, we develop strategies to eliminate features from convex optimization problems with non-negativity constraints. Our strategy is safe in the sense that it removes a feature (coordinate) from the problem only when that feature is guaranteed to be zero at a solution. To perform feature elimination we use an accurate, but not necessarily optimal, primal–dual feasible pair, which makes our methods robust and applicable to ill-conditioned problems. We supplement our feature elimination strategy with a method that constructs an accurate dual feasible point from an accurate primal feasible point; this allows us to use a first-order method to find an accurate primal feasible point, construct an accurate dual feasible point from it, and then perform feature elimination. Under reasonable conditions, our strategy eventually eliminates all zero features from the problem. As an application, we show how safe feature elimination can be used to robustly certify the uniqueness of solutions to nonnegative least-squares problems. We give numerical examples on a well-conditioned synthetic nonnegative least-squares problem and on a set of 40,000 extremely ill-conditioned problems arising in a microscopy application.
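To make the screening idea concrete, below is a minimal NumPy sketch of a gap-based safe test for the nonnegative least-squares case, min over x ≥ 0 of ½‖Ax − b‖². It follows the standard GAP-safe-sphere argument rather than reproducing the paper's exact construction: given any primal-feasible x and any dual-feasible u (Aᵀu ≤ 0), the dual optimum lies in a ball of radius √(2·gap) around u, and complementary slackness then certifies zero coordinates. The function name `safe_screen_nnls` and its interface are illustrative assumptions, not the paper's API.

```python
import numpy as np


def safe_screen_nnls(A, b, x, u, feas_tol=1e-12):
    """Sketch of a gap-based safe screening test for
    min_{x >= 0} 0.5 * ||A x - b||_2^2.

    `x` must be primal feasible (x >= 0) and `u` dual feasible
    (A.T @ u <= 0); neither needs to be optimal.  Returns a boolean
    mask of coordinates certified to be zero at every solution.
    """
    assert np.all(x >= 0), "x must be primal feasible"
    assert np.all(A.T @ u <= feas_tol), "u must be dual feasible"

    # Duality gap between the primal P(x) = 0.5*||Ax - b||^2 and the
    # Fenchel dual D(u) = b.T u - 0.5*||u||^2 over {u : A.T u <= 0}.
    gap = 0.5 * np.linalg.norm(A @ x - b) ** 2 - (b @ u - 0.5 * (u @ u))
    gap = max(gap, 0.0)  # guard against round-off making the gap negative

    # The dual objective is 1-strongly concave, so the dual optimum u*
    # satisfies ||u* - u|| <= sqrt(2 * gap).  By complementary slackness,
    # x_i = 0 at every solution whenever a_i.T u* < 0, which is certified
    # if the maximum of a_i.T v over the safe ball is still negative.
    radius = np.sqrt(2.0 * gap)
    col_norms = np.linalg.norm(A, axis=0)
    return A.T @ u + radius * col_norms < 0
```

An accurate primal–dual pair shrinks the safe ball and lets the mask grow, while rough points certify nothing (e.g., x = 0, u = 0 give a large gap and an empty mask). The uniqueness application mentioned above can be sketched the same way: if every eliminated coordinate is provably zero and the surviving columns of A are linearly independent, the restricted objective is strictly convex, so the solution is unique.

```python
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
mask = safe_screen_nnls(A, b, x=np.zeros(50), u=np.zeros(100))

# Illustrative uniqueness certificate: full column rank of the surviving
# columns makes the restricted problem strictly convex.
keep = ~mask
is_unique = np.linalg.matrix_rank(A[:, keep]) == keep.sum()
```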
Keywords: Feature elimination · Dimension reduction · Duality · NNLS
Mathematics Subject Classification: 49N15 · 90C25 · 90C46
Stephen Becker acknowledges the donation of a Tesla K40c GPU from NVIDIA.