Optimal Patrol Planning for Green Security Games with Black-Box Attackers

  • Haifeng Xu
  • Benjamin Ford
  • Fei Fang
  • Bistra Dilkina
  • Andrew Plumptre
  • Milind Tambe
  • Margaret Driciru
  • Fred Wanyama
  • Aggrey Rwetsiba
  • Mustapha Nsubaga
  • Joshua Mabonga
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10575)

Abstract

Motivated by the problem of protecting endangered animals, there has been a surge of interest in optimizing patrol planning for conservation area protection. Previous efforts in these domains have mostly focused on optimizing patrol routes against a specific boundedly rational poacher behavior model that describes poachers' choices of areas to attack. However, these planning algorithms do not apply to other poaching prediction models, particularly the complex machine learning models that have recently been shown to provide better predictions than traditional bounded-rationality-based models. Moreover, previous patrol planning algorithms do not address the important concern that poachers may infer patrol routes by partially monitoring the rangers' movements. In this paper, we propose OPERA, a general patrol planning framework that: (1) generates optimal implementable patrol routes against a black-box attacker that can represent a wide range of poaching prediction models; and (2) incorporates entropy maximization to ensure that the generated routes are more unpredictable and robust to poachers' partial monitoring. Our experiments on a real-world dataset from Uganda's Queen Elizabeth Protected Area (QEPA) show that OPERA yields better defender utility, more efficient coverage of the area, and greater unpredictability than benchmark algorithms and the past routes used by rangers at QEPA.

Acknowledgement

Part of this research is supported by NSF grant CCF-1522054. Fei Fang is partially supported by the Harvard Center for Research on Computation and Society fellowship.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Haifeng Xu (1)
  • Benjamin Ford (1)
  • Fei Fang (2)
  • Bistra Dilkina (3)
  • Andrew Plumptre (4)
  • Milind Tambe (1)
  • Margaret Driciru (5)
  • Fred Wanyama (5)
  • Aggrey Rwetsiba (5)
  • Mustapha Nsubaga (6)
  • Joshua Mabonga (6)
  1. University of Southern California, Los Angeles, USA
  2. Carnegie Mellon University, Pittsburgh, USA
  3. Georgia Institute of Technology, Atlanta, USA
  4. Wildlife Conservation Society, New York City, USA
  5. Uganda Wildlife Authority, Kampala, Uganda
  6. Wildlife Conservation Society, Kampala, Uganda