A Comparison between Artificial Neural Network and Cascade-Correlation Neural Network in Concept Classification

  • Yanming Guo
  • Liang Bai
  • Songyang Lao
  • Song Wu
  • Michael S. Lew
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8879)

Abstract

Deep learning has recently received significant attention due to promising results in representing and classifying concepts, most prominently in the form of convolutional neural networks (CNN). While CNN has been widely studied and evaluated in computer vision, there are other deep learning approaches which may also be promising. One interesting approach which has received relatively little attention in visual concept classification is the Cascade-Correlation Neural Network (CCNN). In this paper, we create a visual concept retrieval system based on CCNN. Experimental results on the Caltech 101 dataset indicate that CCNN outperforms ANN.
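
For readers unfamiliar with the cascade-correlation architecture, the sketch below illustrates the generic training loop from Fahlman and Lebiere's original formulation: hidden units are added one at a time, each candidate is trained to maximise the correlation between its activation and the current residual error, and its input weights are then frozen before the output weights are refit. This is a minimal illustration under stated assumptions, not the retrieval system described in the paper; the function names (train_outputs, train_candidate, cascade_correlation), the least-squares output fit, and all hyperparameters are illustrative choices.

```python
import numpy as np

def train_outputs(F, Y):
    # Refit output weights by least squares on the current feature matrix
    # (original inputs, bias, and all frozen hidden-unit activations).
    W, *_ = np.linalg.lstsq(F, Y, rcond=None)
    return W

def train_candidate(F, E, steps=300, lr=0.5, seed=0):
    # Train one candidate unit by gradient ascent on the cascade-correlation
    # score: the summed |covariance| between its activation and the residuals.
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=F.shape[1])
    Ec = E - E.mean(axis=0)                          # centred residual errors
    for _ in range(steps):
        v = np.tanh(F @ w)                           # candidate activations
        S = (v - v.mean()) @ Ec                      # covariance per output unit
        # gradient of sum_o |S_o|: sign(S_o) * centred error * f'(net) * input
        grad = F.T @ ((1.0 - v ** 2) * (Ec @ np.sign(S)))
        w += lr * grad / len(F)
    return w

def cascade_correlation(X, Y, n_hidden=3):
    # Start with direct input-to-output connections only.
    F = np.hstack([X, np.ones((len(X), 1))])         # inputs plus bias column
    hidden, W = [], train_outputs(F, Y)
    for _ in range(n_hidden):
        E = Y - F @ W                                # residual error per pattern
        w = train_candidate(F, E)                    # fit candidate on residuals
        hidden.append(w)                             # freeze its input weights
        F = np.hstack([F, np.tanh(F @ w)[:, None]])  # expose it as a new feature
        W = train_outputs(F, Y)                      # refit the output layer
    return hidden, W

if __name__ == "__main__":
    # Toy two-class (XOR-like) problem, just to exercise the loop.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    Y = np.eye(2)[(X[:, 0] * X[:, 1] > 0).astype(int)]   # one-hot targets
    hidden, W = cascade_correlation(X, Y, n_hidden=3)
    print("trained", len(hidden), "hidden units")
```

In contrast to backpropagation through a fixed topology, this greedy, unit-by-unit construction of the hidden layer is what distinguishes CCNN from a conventional ANN in the comparison above.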

Keywords

Artificial neural network · Cascade-correlation neural network · Deep learning · Concept classification


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Yanming Guo (1, 2)
  • Liang Bai (2)
  • Songyang Lao (2)
  • Song Wu (1)
  • Michael S. Lew (1)
  1. LIACS Media Lab, Leiden University, Leiden, The Netherlands
  2. College of Information Systems and Management, National University of Defense Technology, Changsha, China
