Support Vector Inductive Logic Programming

  • S. H. Muggleton
  • H. Lodhi
  • A. Amini
  • M. J. E. Sternberg
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 194)

Abstract

In this paper we explore a topic at the intersection of two areas of Machine Learning: Support Vector Machines (SVMs) and Inductive Logic Programming (ILP). We propose a general method for constructing kernels for Support Vector Inductive Logic Programming (SVILP). The kernel not only captures the semantic and syntactic relational information contained in the data but also provides the flexibility of using arbitrary forms of structured and non-structured data coded in a relational way. While specialised kernels have been developed for strings, trees and graphs, our approach uses declarative background knowledge to provide the learning bias. The use of explicitly encoded background knowledge distinguishes SVILP from existing relational kernels, which in ILP terms work purely at the atomic generalisation level. The SVILP approach is a form of generalisation relative to background knowledge, though the final combining function for the ILP-learned clauses is an SVM rather than a logical conjunction. We evaluate SVILP empirically against related approaches, including an industry-standard toxin predictor called TOPKAT. Evaluation is conducted on a new broad-ranging toxicity dataset (DSSTox). The experimental results demonstrate that our approach significantly outperforms all other approaches in the study.
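One way to picture the approach the abstract describes is a clause-coverage kernel: each ILP-learned clause acts as a boolean feature indicating whether it covers an example, and the kernel between two examples is the inner product of their coverage vectors, which an SVM then uses as its combining function. The sketch below is illustrative only, not the paper's exact formulation; the clauses and molecule attributes are hypothetical stand-ins for ILP output.

```python
# Illustrative sketch of a clause-coverage kernel in the spirit of SVILP.
# Each "clause" is modelled as a predicate over examples; the kernel counts
# clauses covering both examples, i.e. an inner product in the induced
# boolean feature space (hence a valid Mercer kernel).

def coverage_vector(example, clauses):
    """Map an example to its boolean clause-coverage feature vector."""
    return [1 if clause(example) else 0 for clause in clauses]

def svilp_kernel(x, y, clauses):
    """Inner product of the coverage vectors of x and y."""
    cx, cy = coverage_vector(x, clauses), coverage_vector(y, clauses)
    return sum(a * b for a, b in zip(cx, cy))

# Hypothetical clauses over small molecules represented as dicts.
clauses = [
    lambda m: m.get("has_benzene", False),
    lambda m: m.get("logP", 0.0) > 2.0,
    lambda m: m.get("n_chlorine", 0) >= 1,
]

m1 = {"has_benzene": True, "logP": 3.1, "n_chlorine": 0}
m2 = {"has_benzene": True, "logP": 1.2, "n_chlorine": 2}

print(svilp_kernel(m1, m2, clauses))  # -> 1 (only the benzene clause covers both)
```

The resulting kernel matrix can be passed to any kernel machine (e.g. an SVM with a precomputed kernel); the point of the construction is that the feature space is defined by clauses learned relative to background knowledge rather than by a fixed string, tree, or graph decomposition.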



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • S. H. Muggleton (1)
  • H. Lodhi (2)
  • A. Amini (2)
  • M. J. E. Sternberg (2)

  1. Department of Biological Sciences, Imperial College London, London
  2. Department of Computing, Imperial College London, London
