
Artificial Intelligence Platform for Mobile Service Computing

  • Haikuo Zhang
  • Zhonghua Lu
  • Ke Xu
  • Yuchen Pang
  • Fang Liu
  • Liandong Chen
  • Jue Wang
  • Yangang Wang
  • Rongqiang Cao

Abstract

Since the birth of artificial intelligence, its theory and technology have steadily matured and its range of applications has continued to expand. Mobile networks and applications have grown rapidly in recent years, and mobile computing has become the new computing paradigm for mobile networks. In this paper, we build an artificial intelligence platform for mobile services that supports deep learning frameworks such as TensorFlow and Caffe. We describe the overall architecture of the AI platform for a GPU cluster in mobile service computing. At the scheduling layer of the GPU cluster, we combine YARN with the Slurm scheduler: we not only improve the distributed TensorFlow plug-in for the Slurm scheduling layer but also extend YARN to manage and schedule GPUs. The front end of the high-performance AI platform is designed for availability, scalability, and efficiency. Finally, we verify the convenience, scalability, and effectiveness of the AI platform by comparing the performance of single-chip and distributed versions on the TensorFlow, Caffe, and YARN systems.
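
To make the scheduling-layer design more concrete, the sketch below (an illustration of ours, not the authors' actual plug-in) shows how a distributed TensorFlow worker launched under Slurm might derive its cluster description from the standard Slurm environment variables; the port number, the one-worker-per-node layout, and the use of TF_CONFIG are assumptions made for this example.

    # Illustrative sketch only: build a TF_CONFIG cluster description for a
    # multi-worker TensorFlow job from standard Slurm environment variables,
    # assuming one worker task per node (launched with srun).
    import json
    import os

    def tf_config_from_slurm(port=2222):
        """Derive a TF_CONFIG-style dict from Slurm's environment (assumed layout)."""
        # scontrol expands the compact node list (e.g. node[01-04]) into one hostname per line.
        nodes = os.popen("scontrol show hostnames $SLURM_JOB_NODELIST").read().split()
        task_index = int(os.environ["SLURM_PROCID"])      # rank of this task
        workers = ["%s:%d" % (n, port) for n in nodes]    # one worker endpoint per node
        return {
            "cluster": {"worker": workers},
            "task": {"type": "worker", "index": task_index},
        }

    if __name__ == "__main__":
        os.environ["TF_CONFIG"] = json.dumps(tf_config_from_slurm())
        # A tf.distribute.MultiWorkerMirroredStrategy (or a legacy tf.train.Server)
        # created after this point would read the cluster layout from TF_CONFIG.

Such a script would typically be launched once per node with srun inside an sbatch job, so that each task resolves its own rank and the shared worker list without extra configuration files.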

Keywords

Artificial intelligence, Mobile service computing, Hadoop, Slurm, Schedule, TensorFlow, Caffe


Acknowledgments

This work was partly supported by the National Key R&D Program of China (No. 2017YFB0202202), the Major Research Plan of the National Natural Science Foundation of China (No. 91530324), and the Supercomputing Resource Pool of the Chinese Academy of Sciences Information Project (No. XXH13503).


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Computer Network Information Center, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
  3. China Internet Network Information Center, Beijing, China
  4. University of Illinois at Urbana-Champaign, Champaign, USA
  5. State Grid Hebei Electric Power Company, Shijiazhuang, China
