A Simple Convolutional Transfer Neural Networks in Vision Tasks

  • Wenlei Wu
  • Zhaohang Lin
  • Xinghao Ding
  • Yue Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10637)

Abstract

Convolutional neural networks (ConvNets) are multi-stage trainable architectures that can learn invariant features in many vision tasks. Real-world applications of ConvNets are often limited by the need for expensive, time-consuming label generation in each specific task, so the challenge can be summarized as: labeled data is scarce while unlabeled data is abundant. Traditional ConvNets do not exploit any information hidden in large-scale unlabeled data. In this work, a very simple convolutional transfer neural network (CTNN) is proposed to address this challenge by introducing the idea of unsupervised transfer learning into ConvNets. We build our model on LeNet5, one of the simplest ConvNet architectures, and introduce an efficient unsupervised, reconstruction-based pre-training strategy that learns kernels from both labeled and unlabeled data, i.e., from both training and testing data. The contribution of the proposed model is that it can fully use all available data, training and testing simultaneously, so performance improves when labeled training data is insufficient. The widely used handwritten digit dataset MNIST, together with two retinal vessel datasets, DRIVE and STARE, is employed to validate the proposed work. The classification results demonstrate that the proposed CTNN reduces the requirement for sufficient labeled training samples in real-world applications.
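The abstract describes an unsupervised, reconstruction-based pre-training of convolutional kernels from all available (labeled and unlabeled) images, with PCA listed among the keywords. The sketch below illustrates one common realization of that idea: learn kernels as the top principal components of randomly sampled image patches, so that the filters minimize patch reconstruction error without using any labels. The function name and all parameters here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_conv_kernels(images, patch_size=5, n_kernels=6, n_patches=10000, seed=0):
    """Derive conv kernels as top PCA components of image patches.

    A reconstruction-based, label-free pre-training step: kernels can be
    learned from any pool of images (training and testing alike).
    """
    rng = np.random.default_rng(seed)
    n, h, w = images.shape
    # Sample random patch locations from random images.
    idx = rng.integers(0, n, n_patches)
    ys = rng.integers(0, h - patch_size + 1, n_patches)
    xs = rng.integers(0, w - patch_size + 1, n_patches)
    patches = np.stack([
        images[i, y:y + patch_size, x:x + patch_size].ravel()
        for i, y, x in zip(idx, ys, xs)
    ]).astype(np.float64)
    # Remove each patch's mean (standard before PCA on patches).
    patches -= patches.mean(axis=1, keepdims=True)
    # Eigen-decompose the patch covariance; the leading eigenvectors
    # minimize reconstruction error and serve as pre-trained filters.
    cov = patches.T @ patches / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_kernels]      # largest-variance directions first
    return top.T.reshape(n_kernels, patch_size, patch_size)

# Example: learn 6 kernels of size 5x5 from synthetic "unlabeled" images.
images = np.random.default_rng(1).normal(size=(100, 28, 28))
kernels = pca_conv_kernels(images)
print(kernels.shape)  # (6, 5, 5)
```

In a LeNet5-style pipeline, kernels obtained this way would initialize the first convolutional layer before supervised fine-tuning on the scarce labeled subset.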

Keywords

Convolutional neural networks · Transfer learning · Unsupervised pre-training · PCA

Notes

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 81671766, 81301278, 61172179, 61103121, 61571382, and 61571005, in part by the Guangdong Natural Science Foundation under Grant 2015A030313007, in part by the Fundamental Research Funds for the Central Universities under Grants 20720160075, 20720150169, and 20720150093, in part by the National Natural Science Foundation of Fujian Province, China, under Grant 2017J01126, and in part by the CCF-Tencent research fund.

References

  1. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
  2. Hubel, D.H., Wiesel, T.N.: Receptive fields of single neurones in the cat's striate cortex. J. Physiol. 148(3), 574–591 (1959)
  3. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Handwritten digit recognition with a back-propagation network. In: Neural Networks: Current Applications. Chapman & Hall (1992)
  4. Jarrett, K., Kavukcuoglu, K., Ranzato, M.A., LeCun, Y.: What is the best multi-stage architecture for object recognition? In: IEEE International Conference on Computer Vision, pp. 2146–2153 (2009)
  5. Lawrence, S., Giles, C.L., Tsoi, A.C.: Face recognition: a convolutional neural-network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997)
  6. Frome, A., Cheung, G., Abdulkader, A., Zennaro, M., Wu, B.: Large-scale privacy protection in street level imagery. In: IEEE International Conference on Computer Vision, pp. 2373–2380 (2009)
  7. Farabet, C., Couprie, C., Najman, L., LeCun, Y.: Learning hierarchical features for scene labeling. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1915–1929 (2013)
  8. LeCun, Y., Kavukcuoglu, K., Farabet, C.: Convolutional networks and applications in vision. In: IEEE International Symposium on Circuits and Systems, pp. 253–256 (2010)
  9. Soltau, H., Saon, G., Sainath, T.N.: Joint training of convolutional and non-convolutional neural networks. In: International Conference on Acoustics, Speech and Signal Processing, pp. 5572–5576 (2014)
  10. Sainath, T.N., Mohamed, A.R., Kingsbury, B., Ramabhadran, B.: Deep convolutional neural networks for LVCSR. In: International Conference on Acoustics, Speech and Signal Processing, pp. 8614–8618 (2013)
  11. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22, 1345–1359 (2010)
  12. Sermanet, P., LeCun, Y.: Traffic sign recognition with multi-scale convolutional networks. In: International Joint Conference on Neural Networks, pp. 2809–2813 (2011)
  13. Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013)
  14. Staal, J., Abramoff, M.D., Niemeijer, M., Viergever, M.A., van Ginneken, B.: Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 23, 501–509 (2004)
  15. Hoover, A., Goldbaum, M.: Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Trans. Med. Imaging 22, 951–958 (2003)
  16. Hoover, A.D., Kouznetsova, V., Goldbaum, M.: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 19, 203–210 (2000)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Wenlei Wu (1, 2)
  • Zhaohang Lin (1)
  • Xinghao Ding (2)
  • Yue Huang (2)
  1. Tencent Computer Systems Company Limited, Shenzhen, China
  2. Department of Communication Engineering, Xiamen University, Xiamen, China
