
Inductive Learning with Corroboration

  • Conference paper
Algorithmic Learning Theory (ALT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1720)


Abstract

The basis of inductive learning is the process of generating and refuting hypotheses. Natural approaches to this form of learning assume that a data item that causes refutation of one hypothesis opens the way for the introduction of a new (for now unrefuted) hypothesis, and so such data items have attracted the most attention. Data items that do not cause refutation of the current hypothesis have until now been largely ignored in these processes, but in practical learning situations they play the key role of corroborating those hypotheses that they do not refute.

We formalise a version of K.R. Popper’s concept of degree of corroboration for inductive inference and utilise it in an inductive learning procedure which has the natural behaviour of outputting the most strongly corroborated (non-refuted) hypothesis at each stage. We demonstrate its utility by providing characterisations of several of the commonest identification types in the case of learning from text over class-preserving hypothesis spaces and proving the existence of canonical learning strategies for these types. In many cases we believe that these characterisations make the relationships between these types clearer than the standard characterisations. The idea of learning with corroboration therefore provides a unifying approach for the field.
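
As a concrete illustration of the procedure described above, the sketch below is a toy rendering under loose assumptions, not the paper's formal construction: hypotheses are modelled as membership predicates, the degree of corroboration is approximated by a simple count of non-refuting data items, and all names (Hypothesis, learn_from_text) are hypothetical. At each stage the learner discards refuted hypotheses and outputs the most strongly corroborated survivor.

    from typing import Callable, Iterable, List, Optional

    # Toy sketch of learning with corroboration (illustrative only).
    # A hypothesis is a membership predicate over data items; its
    # corroboration score is the count of non-refuting items seen so far,
    # a crude stand-in for Popper's degree of corroboration.

    class Hypothesis:
        def __init__(self, name: str, accepts: Callable[[str], bool]):
            self.name = name
            self.accepts = accepts      # True iff the item is consistent with the hypothesis
            self.corroboration = 0      # number of items that have corroborated it
            self.refuted = False

    def learn_from_text(hypotheses: List[Hypothesis],
                        text: Iterable[str]) -> Iterable[Optional[str]]:
        """After each data item, yield the most strongly corroborated
        unrefuted hypothesis (or None if every candidate is refuted)."""
        for item in text:
            for h in hypotheses:
                if h.refuted:
                    continue
                if h.accepts(item):
                    h.corroboration += 1   # item corroborates the hypothesis
                else:
                    h.refuted = True       # item refutes it; drop it from consideration
            survivors = [h for h in hypotheses if not h.refuted]
            best = max(survivors, key=lambda h: h.corroboration, default=None)
            yield best.name if best else None

    # Toy usage: data drawn from the even numbers eventually refutes the
    # more specific candidate, leaving "even numbers" as the best
    # corroborated unrefuted hypothesis.
    if __name__ == "__main__":
        candidates = [
            Hypothesis("even numbers", lambda s: int(s) % 2 == 0),
            Hypothesis("multiples of 4", lambda s: int(s) % 4 == 0),
        ]
        for guess in learn_from_text(candidates, ["4", "8", "6", "10"]):
            print(guess)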



References

  1. D. Angluin, Inductive inference of formal languages from positive data, Information and Control 45, 117–135, 1980.

  2. D. Angluin, C.H. Smith, Inductive inference: theory and methods, Computing Surveys 15, 237–269, 1983.

  3. D. Gillies, Philosophy of Science in the Twentieth Century, Blackwell, 1993.

  4. D. Gillies, Artificial Intelligence and Scientific Method, Oxford University Press, 1996.

  5. E.M. Gold, Language identification in the limit, Information and Control 10, 447–474, 1967.

  6. K.P. Jantke, Monotonic and non-monotonic inductive inference, New Generation Computing 8, 349–460.

  7. S. Lange, P. Watson, Machine discovery in the presence of incomplete or ambiguous data, in S. Arikawa, K.P. Jantke (Eds.), Algorithmic Learning Theory, Proc. of the Fifth International Workshop on Algorithmic Learning Theory, Reinhardsbrunn, Germany, Springer LNAI 872, 438–452, 1994.

  8. S. Lange, T. Zeugmann, Set-driven and rearrangement-independent learning of recursive languages, in S. Arikawa, K.P. Jantke (Eds.), Algorithmic Learning Theory, Proc. of the Fifth International Workshop on Algorithmic Learning Theory, Reinhardsbrunn, Germany, Springer LNAI 872, 453–468, 1994.

  9. Y. Mukouchi, S. Arikawa, Inductive inference machines that can refute hypothesis spaces, in K.P. Jantke, S. Kobayashi, E. Tomita, T. Yokomori (Eds.), Algorithmic Learning Theory, Proc. of the Fourth International Workshop on Algorithmic Learning Theory, Tokyo, Japan, Springer LNAI 744, 123–136, 1993.

  10. K.R. Popper, The Logic of Scientific Discovery, 1997 Routledge reprint of the 1959 Hutchinson translation of the German original.

  11. K.R. Popper, Degree of confirmation, British Journal for the Philosophy of Science 5, 143ff, 334, 359, 1954.

  12. K.R. Popper, A second note on degree of confirmation, British Journal for the Philosophy of Science 7, 350ff, 1957.

  13. K.R. Popper, Conjectures and Refutations, Routledge, 1963 (Fifth Edition, 1989).

  14. P. Watson, Inductive Learning with Corroboration, Technical Report no. 6-99, Department of Computer Science, University of Kent at Canterbury, May 1999. Obtainable from http://www.cs.ukc.ac.uk/pubs/1999/782.

  15. K. Wexler, P. Culicover, Formal Principles of Language Acquisition, MIT Press, Cambridge, MA, 1980.



Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Watson, P. (1999). Inductive Learning with Corroboration. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science (LNAI), vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_12

  • DOI: https://doi.org/10.1007/3-540-46769-6_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66748-3

  • Online ISBN: 978-3-540-46769-4
