Distributed Neural Networks for Internet of Things: The Big-Little Approach

  • Elias De Coninck (corresponding author)
  • Tim Verbelen
  • Bert Vankeirsbilck
  • Steven Bohez
  • Pieter Simoens
  • Piet Demeester
  • Bart Dhoedt
Conference paper
Part of the Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering book series (LNICST, volume 170)


Abstract

Deep neural networks are widely used to accurately classify input data. An interesting application area is the Internet of Things (IoT), where massive amounts of sensor data have to be classified. The processing power of the cloud is attractive, but its variable latency is a major drawback in situations where near real-time classification is required. To exploit the trade-off between the stable but limited embedded computing power of IoT devices and the seemingly unlimited computing power of the cloud, which comes at the cost of higher and variable latency, we propose a Big-Little architecture for deep neural networks. A small neural network, trained on a subset of prioritized output classes, runs on the embedded device, while a more specific classification is computed on demand by a large neural network in the cloud. We show the applicability of this concept in the IoT domain by evaluating our approach on state-of-the-art neural network classification problems using popular embedded devices such as the Raspberry Pi and Intel Edison.


Keywords: Deep neural networks · Distributed intelligence · Internet of Things
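The dispatch logic sketched in the abstract can be illustrated as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the model callables (`little_model`, `big_model`) and the confidence threshold are hypothetical names chosen for illustration.

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(sample, little_model, big_model, threshold=0.9):
    """Run the small on-device network first; escalate to the cloud if unsure.

    little_model covers only the prioritized subset of output classes;
    big_model stands in for the large cloud-side network with the full
    class set. Returns (source, class_index).
    """
    probs = softmax(little_model(sample))
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        # Confident local answer: avoid the cloud round-trip entirely.
        return ("little", best)
    # Low confidence: pay the higher, variable network latency for the
    # more specific cloud classification.
    return ("big", big_model(sample))
```

In the paper's setting the little network would run on an embedded device such as a Raspberry Pi or Intel Edison and the big network in the cloud; here both are stand-in callables so the routing decision itself can be seen in isolation.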



Part of this work was supported by the iMinds IoT research program. Steven Bohez is funded by a Ph.D. grant from the Agency for Innovation by Science and Technology in Flanders (IWT).



Copyright information

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2016

Authors and Affiliations

  • Elias De Coninck (1) (corresponding author)
  • Tim Verbelen (1)
  • Bert Vankeirsbilck (1)
  • Steven Bohez (1)
  • Pieter Simoens (1)
  • Piet Demeester (1)
  • Bart Dhoedt (1)

  1. Department of Information Technology, Ghent University – iMinds, Gent, Belgium
