Energy-Conserving Risk-Aware Data Collection Using Ensemble Navigation Network

  • Zhi Xing
  • Jae C. Oh
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10868)


The Data-Collection Problem (DCP) models robotic agents collecting digital data in a risky environment under energy constraints. A good DCP solution must balance safety against energy use. We develop an Ensemble Navigation Network (ENN), which combines a convolutional neural network with several heuristics to learn navigation priorities. Experiments show that ENN outperforms heuristic algorithms in all environmental settings, with the largest gains in high-risk environments and when robots have low energy capacity.
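The ensemble idea described above can be illustrated with a minimal sketch. The abstract does not give ENN's exact architecture, so everything here is an assumption: a stand-in `learned_score` plays the role of the CNN's output, and two hypothetical heuristic scorers (safety and energy) are combined with it by a weighted sum to produce one priority per candidate action.

```python
# Hypothetical sketch of combining a learned scorer with heuristics.
# All names, weights, and state fields are illustrative, not from the paper.

ACTIONS = ["north", "south", "east", "west"]

def heuristic_safety(state, action):
    # Stand-in heuristic: prefer actions with lower attrition risk.
    return -state["risk"][action]

def heuristic_energy(state, action):
    # Stand-in heuristic: prefer actions with lower energy cost.
    return -state["cost"][action]

def learned_score(state, action):
    # Placeholder for a CNN's output; a fixed lookup for illustration.
    return state["cnn"][action]

def ensemble_priority(state, weights):
    """Weighted sum of learned and heuristic scores for each action."""
    scorers = [learned_score, heuristic_safety, heuristic_energy]
    return {
        a: sum(w * f(state, a) for w, f in zip(weights, scorers))
        for a in ACTIONS
    }

state = {
    "risk": {"north": 0.9, "south": 0.1, "east": 0.4, "west": 0.4},
    "cost": {"north": 1.0, "south": 2.0, "east": 1.0, "west": 3.0},
    "cnn":  {"north": 0.2, "south": 0.6, "east": 0.7, "west": 0.1},
}
weights = (1.0, 0.5, 0.2)  # learned, safety, energy

priorities = ensemble_priority(state, weights)
best = max(priorities, key=priorities.get)  # action with highest priority
```

In a real system the weights (or the whole combination function) would themselves be learned, and the CNN would score actions from observations such as local risk maps; here they are fixed constants so the combination step is easy to follow.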


Keywords: Deep reinforcement learning · Ensemble methods



This research was supported in part through computational resources provided by Syracuse University and by NSF award ACI-1541396.



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. EECS, College of Engineering and Computer Science, Syracuse University, Syracuse, USA
