Learning in Physical Domains: Mating Safety Requirements and Costly Sampling

  • Francesco Leofante
  • Armando Tacchella
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10037)


Agents learning in physical domains face two problems: they must meet safety requirements, because their behaviour must not damage the environment, and they should learn from as few samples as possible, because acquiring new data requires costly interactions. Active learning strategies reduce sampling costs, as new data are requested only when and where they are deemed most useful to improve the agent's accuracy, but safety remains a standing challenge. In this paper we focus on active learning with support vector regression and introduce a methodology based on satisfiability modulo theories to prove that predictions are bounded as long as input patterns satisfy some preconditions. We present experimental results showing the feasibility of our approach, and compare our results with Gaussian processes, another class of kernel methods that natively provides bounds on predictions.
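The bounded-prediction property mentioned in the abstract can be illustrated with a minimal sketch, independent of the SMT machinery the paper actually uses: for an RBF-kernel SVR, every kernel value lies in (0, 1], so a coarse global bound on the decision function follows directly from the dual coefficients and the bias. All model values below (support vectors, coefficients, bias, kernel width) are hypothetical toy numbers, not taken from the paper.

```python
import math

# Hypothetical trained SVR model: support vectors, dual coefficients
# (alpha_i - alpha_i^*), bias, and RBF kernel width.
support_vectors = [0.0, 1.0, 2.0]
alphas = [0.5, -0.3, 0.8]
b = 0.1
gamma = 1.0

def predict(x):
    """SVR decision function f(x) = sum_i alpha_i * K(x_i, x) + b."""
    return sum(a * math.exp(-gamma * (sv - x) ** 2)
               for a, sv in zip(alphas, support_vectors)) + b

# Since 0 < K(x_i, x) <= 1 for the RBF kernel, each term a * K lies
# between min(a, 0) and max(a, 0), giving a bound valid for ALL inputs:
lower = b + sum(a for a in alphas if a < 0)
upper = b + sum(a for a in alphas if a > 0)

# Sanity check the bound on a few sample inputs.
for x in [-5.0, 0.0, 0.7, 3.0]:
    assert lower <= predict(x) <= upper

print(f"predictions bounded in [{lower:.2f}, {upper:.2f}]")
```

This interval is deliberately loose; the point of an SMT-based analysis, as in the paper, is to certify much tighter bounds that hold only under stated preconditions on the input patterns, rather than a trivial global envelope.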


Keywords: Machine learning · Automated reasoning · Formal verification



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. DIBRIS, Università degli Studi di Genova, Genova, Italy
