
A Mobile Cloud Framework for Deep Learning and Its Application to Smart Car Camera

  • Chien-Hung Chen
  • Che-Rung Lee
  • Walter Chen-Hua Lu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10036)

Abstract

Deep learning has become a powerful technology in image recognition, gaming, information retrieval, and many other areas that need intelligent data processing. However, the huge amount of data and the complex computations involved prevent deep learning from being practical in mobile applications. In this paper, we propose a mobile cloud computing framework for deep learning. The architecture places the training process and the model repository on cloud platforms, and the recognition process and data gathering on mobile devices. Communication is carried out via the Git protocol to ensure that data transmission succeeds in unstable network environments. As an example application, we used a smart car camera that detects objects in videos recorded during driving, and implemented the system on the NVIDIA Jetson TK1. Experimental results show that the detection rate can reach four frames per second with Faster R-CNN and the ZF model, and that the system works well even when the network connection is unstable. We also compared the performance of the system with and without a GPU, and found that the GPU still plays a critical role on the recognition side of deep learning.
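The abstract outlines the division of labor (training and the model repository in the cloud, recognition and data gathering on the device) and the use of Git for fault-tolerant model transfer. The snippet below is a minimal sketch of how the mobile side might synchronize its recognition model over Git; the repository URL, local path, and model file name are hypothetical placeholders and not the authors' implementation.

```python
# Sketch of a mobile-side model update loop: pull the latest trained model from a
# cloud-hosted Git repository and fall back to the cached copy when the network fails.
# MODEL_REPO, LOCAL_DIR, and the model file name are assumed for illustration.
import subprocess
from pathlib import Path

MODEL_REPO = "https://cloud.example.com/models/zf-faster-rcnn.git"  # hypothetical URL
LOCAL_DIR = Path("/tmp/models/zf-faster-rcnn")                      # hypothetical path

def sync_model() -> Path:
    """Clone or fast-forward the model repository; keep the cached copy on failure."""
    try:
        if (LOCAL_DIR / ".git").exists():
            # The working copy is only updated after the pull completes, so an
            # interrupted transfer leaves the previously fetched model intact.
            subprocess.run(["git", "-C", str(LOCAL_DIR), "pull", "--ff-only"],
                           check=True, timeout=60)
        else:
            subprocess.run(["git", "clone", "--depth", "1", MODEL_REPO, str(LOCAL_DIR)],
                           check=True, timeout=300)
    except (subprocess.SubprocessError, OSError):
        # Unstable network: keep recognizing with the last synchronized model.
        pass
    return LOCAL_DIR / "zf_faster_rcnn.caffemodel"  # hypothetical model file name

if __name__ == "__main__":
    print("Using model:", sync_model())
```

Because the checkout is only advanced after a pull finishes, a failed transfer simply leaves the device running the previously fetched model, which matches the paper's goal of tolerating unstable network connections.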

Keywords

Mobile Device, Deep Learning, Batch Size, Deep Neural Network, Cloud Platform

Notes

Acknowledgment

This study is conducted under the Core Technologies of Smart Handheld Devices (3/4) project of the Institute for Information Industry, which is subsidized by the Ministry of Economic Affairs, Taiwan. The authors thank the Institute for Information Industry for its financial support under grant number 105-EC-17-A-24-0691.

References

  1. LeCun, Y., Jackel, L., Bottou, L., Brunot, A., Cortes, C., Denker, J., Drucker, H., Guyon, I., Muller, U., Sackinger, E., et al.: Comparison of learning algorithms for handwritten digit recognition. In: International Conference on Artificial Neural Networks, vol. 60, pp. 53–60 (1995)
  2. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
  3. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
  4. Socher, R., Perelygin, A., Wu, J.Y., Chuang, J., Manning, C.D., Ng, A.Y., Potts, C.: Recursive deep models for semantic compositionality over a sentiment treebank. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1631–1642 (2013)
  5. Collobert, R., Weston, J.: A unified architecture for natural language processing: deep neural networks with multitask learning. In: Proceedings of the 25th International Conference on Machine Learning, pp. 160–167. ACM (2008)
  6. Johnson, J., Karpathy, A., Fei-Fei, L.: DenseCap: fully convolutional localization networks for dense captioning. arXiv preprint arXiv:1511.07571 (2015)
  7. Fakoor, R., Ladhak, F., Nazi, A., Huber, M.: Using deep learning to enhance cancer diagnosis and classification. In: Proceedings of the International Conference on Machine Learning (2013)
  8. Dinh, H.T., Lee, C., Niyato, D., Wang, P.: A survey of mobile cloud computing: architecture, applications, and approaches. Wirel. Commun. Mob. Comput. 13, 1587–1611 (2013)
  9. Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H., et al.: Greedy layer-wise training of deep networks. Adv. Neural Inform. Process. Syst. 19, 153 (2007)
  10. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
  11. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. arXiv preprint arXiv:1506.02640 (2015)
  12. Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: OverFeat: integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 (2013)
  13. Hinton, G., Deng, L., Yu, D., Dahl, G.E., Mohamed, A.R., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T.N., et al.: Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Sig. Process. Mag. 29, 82–97 (2012)
  14. Dahl, G.E., Yu, D., Deng, L., Acero, A.: Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 20, 30–42 (2012)
  15. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034 (2015)
  16. Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014)
  17. Sutskever, I., Martens, J., Dahl, G.E., Hinton, G.E.: On the importance of initialization and momentum in deep learning. ICML 28(3), 1139–1147 (2013)
  18. Huang, J., Qian, F., Gerber, A., Mao, Z.M., Sen, S., Spatscheck, O.: A close examination of performance and power characteristics of 4G LTE networks. In: Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 225–238. ACM (2012)
  19. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
  20. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, pp. 91–99 (2015)
  21. Sanneck, H.A., Carle, G.: Framework model for packet loss metrics based on loss runlengths. In: Proceedings of SPIE, Multimedia Computing and Networking 2000, vol. 3969 (1999)
  22. Han, S., Mao, H., Dally, W.J.: Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149 (2015)

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Chien-Hung Chen (1)
  • Che-Rung Lee (1)
  • Walter Chen-Hua Lu (1)
  1. National Tsing Hua University, Hsinchu, Taiwan
