Optimisation Using Levenberg-Marquardt Algorithm of Neural Networks for Iris

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 247)

Abstract

This paper explores the Damped Least Squares method, also known as the Levenberg-Marquardt (LM) algorithm, as an optimisation technique for iris recognition. The aim is to show that, although several alternatives to the LM algorithm exist, such as vanilla gradient descent, other conjugate gradient methods and the Gauss-Newton iteration, the LM algorithm outperforms these optimisation techniques because it treats the problem as a non-linear least squares minimisation. The results are promising and provide insight into iris recognition, the iris being a distinct pattern that is unique to every eye.
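The LM update can be viewed as an interpolation between Gauss-Newton and gradient descent, controlled by a damping factor. The following is a minimal sketch of such a damped least squares iteration; the function names, parameters and damping schedule are illustrative assumptions, not the authors' implementation, which applies the method to neural-network training for iris recognition.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, max_iter=100, lam=1e-3, tol=1e-8):
    """Sketch of a Levenberg-Marquardt loop for nonlinear least squares.

    residual(x) returns the residual vector r (shape m), jacobian(x) returns
    the Jacobian J (shape m x n). The damped normal equations
    (J^T J + lam*I) dx = -J^T r interpolate between Gauss-Newton (small lam)
    and gradient descent (large lam).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)   # damped Gauss-Newton approximation to the Hessian
        g = J.T @ r                          # gradient of 0.5 * ||r||^2
        dx = np.linalg.solve(A, -g)
        if np.linalg.norm(dx) < tol:
            break
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x = x + dx       # step accepted: reduce damping, behave more like Gauss-Newton
            lam *= 0.5
        else:
            lam *= 2.0       # step rejected: increase damping, behave more like gradient descent
    return x
```

The key design choice is the adaptive damping factor: when a step lowers the squared residual it is accepted and the damping is reduced, otherwise the damping is increased, which is what lets the method combine the speed of Gauss-Newton near a solution with the robustness of gradient descent far from it.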

Keywords

Levenberg-Marquardt; LM algorithm; Iris; Biometrics of the eye; Optimisation of iris images; Least squares method for iris; Damped least squares; Non-linear least squares method



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Electronics & Telecommunication, SAOE, Pune, India
  2. Department of Computer Engineering, AISSMS IOIT, Pune, India
