MCP Based Noise Resistant Algorithm for Training RBF Networks and Selecting Centers

  • Hao Wang
  • Andrew Chi Sing Leung
  • John Sum
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11302)

Abstract

In the implementation of a neural network, imperfections such as precision error and thermal noise always exist, and they can be modeled as multiplicative noise. This paper studies the problem of training an RBF network and selecting its centers under multiplicative noise. We devise a noise-resistant training algorithm based on the alternating direction method of multipliers (ADMM) framework and the minimax concave penalty (MCP) function. Our algorithm first uses all training samples to create the RBF nodes. Afterwards, we derive a training objective function that tolerates the presence of noise. Finally, we add an MCP term to this objective function and apply the ADMM framework to minimize the modified objective. During training, the MCP term drives unimportant RBF weights to zero, so training and RBF node selection are carried out at the same time. The proposed algorithm is called the ADMM-MCP algorithm. We also present the convergence properties of the ADMM-MCP algorithm. Simulation results show that the ADMM-MCP algorithm outperforms many other RBF training algorithms under the weight/node noise situation.
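To make the idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of ADMM applied to an MCP-penalized RBF weight-training problem. It uses a plain least-squares data term as a stand-in for the paper's noise-resistant objective, and all names and parameter values (rbf_design_matrix, mcp_prox, lam, gamma, rho, n_iter) are illustrative assumptions; the MCP proximal step is what drives unimportant weights to zero, so the surviving nonzero weights indicate the selected RBF nodes.

import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def mcp_prox(v, lam, gamma, rho):
    """Element-wise proximal operator of (1/rho) * MCP(.; lam, gamma).
    Requires gamma > 1/rho so the middle branch is well defined."""
    t = lam / rho
    out = np.zeros_like(v)
    mid = (np.abs(v) > t) & (np.abs(v) <= gamma * lam)
    big = np.abs(v) > gamma * lam
    out[mid] = np.sign(v[mid]) * (np.abs(v[mid]) - t) / (1.0 - 1.0 / (gamma * rho))
    out[big] = v[big]
    return out

def admm_mcp(Phi, y, lam=0.1, gamma=3.0, rho=1.0, n_iter=200):
    """ADMM on  min_w 0.5*||y - Phi w||^2 + MCP(z)  subject to  w = z."""
    n = Phi.shape[1]
    w = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    A = Phi.T @ Phi + rho * np.eye(n)   # reused by every w-update
    b0 = Phi.T @ y
    for _ in range(n_iter):
        w = np.linalg.solve(A, b0 + rho * (z - u))   # quadratic w-update
        z = mcp_prox(w + u, lam, gamma, rho)         # MCP shrinkage zeros out weak weights
        u = u + w - z                                # scaled dual update
    return z  # sparse weight vector; nonzero entries mark the selected RBF nodes

In this sketch, creating one RBF node per training sample (centers = X) and then running admm_mcp(rbf_design_matrix(X, X, width), y) mirrors the abstract's strategy of starting from all candidate nodes and letting the MCP term prune them during training.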

Keywords

RBF · Center selection · ADMM · MCP · Multiplicative noise

Notes

Acknowledgments

The work was supported by a research grant from City University of Hong Kong (7004842).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong
  2. Institute of Technology Management, National Chung Hsing University, Taichung, Taiwan