Learning Efficiently in Semantic Based Regularization

  • Michelangelo Diligenti (corresponding author)
  • Marco Gori
  • Vincenzo Scoca
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9852)

Abstract

Semantic Based Regularization (SBR) is a general framework that integrates semi-supervised learning with application-specific background knowledge, which is assumed to be expressed as a collection of first-order logic (FOL) clauses. While SBR has proved to be a useful tool in many applications, the underlying learning task often requires solving an optimization problem that has been empirically observed to be challenging. Heuristics and experience are therefore the key to achieving good results with SBR. The main contribution of this paper is to study why and when training in SBR is easy. In particular, this paper shows that there exists a large class of prior knowledge that can be expressed as convex constraints, which can be exploited during training in a very efficient and effective way. This class of constraints provides a natural way to break down the complexity of learning by building a training plan that uses the convex constraints as an effective initialization step for the final full optimization problem. Whereas previously published results on SBR have employed kernel machines to approximate the underlying unknown predicates, this paper employs neural networks for the first time, showing the flexibility of the framework. The experimental results show the effectiveness of the training plan on the categorization of real-world images.
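To make the core idea concrete, the following is a minimal sketch (not the paper's implementation) of how a FOL clause can be turned into a differentiable constraint penalty. It assumes a Łukasiewicz-style fuzzy semantics for the implication, predicate outputs in [0, 1], and hypothetical predicate names `is_dog` and `is_animal`; under these assumptions the resulting penalty is a convex hinge-like function of the predicate outputs, illustrating the class of convex constraints the paper exploits.

```python
import numpy as np

def implication_penalty(a, b):
    """Łukasiewicz implication a -> b has truth value min(1, 1 - a + b).
    The penalty 1 - truth = max(0, a - b) is a convex, hinge-like
    function of the predicate outputs a and b."""
    return np.maximum(0.0, a - b)

# Toy predicate outputs in [0, 1] over a small batch of examples.
is_dog = np.array([0.9, 0.2, 0.7])
is_animal = np.array([0.95, 0.1, 0.4])

# Average penalty for the clause  forall x: isDog(x) -> isAnimal(x);
# the clause is violated whenever isDog(x) > isAnimal(x).
penalty = implication_penalty(is_dog, is_animal).mean()
```

During training, a term like `penalty` would be added to the supervised loss, so the learner is pushed toward predicate outputs consistent with the background knowledge even on unlabeled data.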

Keywords

Statistical Relational Learning · First-Order Logic · Convex Optimization

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Michelangelo Diligenti (1, corresponding author)
  • Marco Gori (1)
  • Vincenzo Scoca (2)
  1. Dipartimento di Ingegneria dell'Informazione e Scienze Matematiche, Siena, Italy
  2. IMT School for Advanced Studies, Lucca, Italy