
Learning from multiple sources of inaccurate data

  • Ganesh Baliga
  • Sanjay Jain
  • Arun Sharma
Part of the Lecture Notes in Computer Science book series (LNCS, volume 642)

Abstract

Most theoretical studies of inductive inference model a situation in which a machine M learns its environment E along the following lines. M, placed in E, receives data about E and simultaneously conjectures a sequence of hypotheses. M is said to learn E just in case the sequence of hypotheses conjectured by M stabilizes to a final hypothesis that correctly represents E.
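As a purely illustrative sketch (not taken from the paper), the following Python fragment mimics this identification-in-the-limit protocol for a toy environment; the function names and the target function f(x) = 2x are hypothetical choices made only for the example.

    # Illustrative only: a Gold-style learner M fed a growing sample of E.
    # M emits a hypothesis after each datum; M learns E if the hypothesis
    # sequence converges to a correct one.

    def environment():
        """E, presented as an infinite stream of graph points (x, f(x))."""
        x = 0
        while True:
            yield (x, 2 * x)
            x += 1

    def learner(sample):
        """A toy M: conjecture the slope c of f(x) = c * x from the sample."""
        for x, y in sample:
            if x != 0:
                return ("linear", y // x)   # hypothesis: f(x) = c * x
        return ("unknown", None)            # no informative datum seen yet

    sample, stream = [], environment()
    for _ in range(5):
        sample.append(next(stream))
        print(learner(sample))              # output stabilizes at ('linear', 2)

After the first informative datum the conjecture stops changing, which is exactly the stabilization the definition requires.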

The above model makes the idealized assumption that the data M receives about E comes from a single, accurate source. An argument is made in favor of a more realistic learning model that accounts for data emanating from multiple sources, some or all of which may be inaccurate. Motivated by this argument, the present paper introduces and theoretically analyzes a number of inference criteria in which a machine is fed data from multiple sources, some of which may be infected with inaccuracies. The main parameters of the investigation are the number of data sources, the number of faulty data sources, and the kind of inaccuracies.
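The following sketch, again illustrative rather than drawn from the paper's formal definitions, simulates one such setting: k = 3 sources report data on a target function and at most j = 1 source is faulty (here, a noisy source emitting occasional spurious pairs). Majority voting over the reports is a hypothetical combiner chosen only for this example, not the paper's construction.

    # Illustrative only: k = 3 sources, at most j = 1 faulty. The faulty
    # ("noisy") source occasionally reports a spurious pair outside the
    # graph of the target f(x) = 2x. With j < k/2 faulty sources, the
    # true datum always wins a majority vote.

    def accurate_source(x):
        return (x, 2 * x)                       # always the true pair

    def noisy_source(x):
        return (x, 2 * x + 1) if x % 3 == 0 else (x, 2 * x)

    sources = [accurate_source, accurate_source, noisy_source]

    def majority_datum(x):
        reports = [s(x) for s in sources]       # one report per source
        return max(set(reports), key=reports.count)

    print([majority_datum(x) for x in range(5)])
    # -> [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]: spurious pairs filtered out

Varying the three parameters named above (the number of sources, how many may be faulty, and whether faults inject spurious data or omit correct data) yields the different inference criteria the paper compares.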

Keywords

Learning Machine · Inductive Inference · Inaccurate Data · Idealized Assumption · Computational Learning Theory



Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Ganesh Baliga (1)
  • Sanjay Jain (1)
  • Arun Sharma (2)

  1. Department of Computer and Information Sciences, University of Delaware, Newark, USA
  2. School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia
