In this paper we propose an L1/2 regularizer, which has a nonconvex penalty. The L1/2 regularizer is shown to have many promising properties, such as unbiasedness, sparsity, and the oracle property. A reweighted iterative algorithm is proposed so that the L1/2 regularization problem can be solved by transforming it into a series of L1 regularization problems. The solution of the L1/2 regularizer is sparser than that of the L1 regularizer, while solving the L1/2 regularization problem is much simpler than solving the L0 regularization problem. The experiments show that the L1/2 regularizer is very useful and efficient, and can be taken as a representative of the Lp (0 < p < 1) regularizers.
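The reweighted scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the common reweighting rule in which each weighted L1 subproblem uses weights derived from the gradient of the L1/2 penalty at the previous iterate, and solves each subproblem with plain ISTA (proximal gradient) steps. The function name, step size, and all parameters are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Element-wise soft-thresholding: the proximal operator of a weighted L1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l_half_reweighted(A, y, lam=0.05, outer=10, inner=200, eps=1e-6):
    """Sketch: approximately minimize ||Ax - y||^2 + lam * sum_i |x_i|^{1/2}
    by solving a sequence of weighted L1 (lasso-type) subproblems."""
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # ISTA step size (1 / Lipschitz const.)
    for _ in range(outer):
        # Reweighting: d/dx |x|^{1/2} = 1 / (2 sqrt(|x|)); eps avoids division by zero.
        # Small entries get large weights and are pushed harder toward zero.
        w = 1.0 / (2.0 * np.sqrt(np.abs(x) + eps))
        for _ in range(inner):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam * w)
    return x
```

For example, recovering a 2-sparse vector from noiseless Gaussian measurements: generate `A` of shape (30, 10), set two entries of the true vector, form `y = A @ x_true`, and call `l_half_reweighted(A, y)`; the large weights on near-zero coordinates typically suppress spurious entries more aggressively than a single unweighted lasso pass would.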
Keywords: machine learning, variable selection, regularizer, compressed sensing