Too much information can be too much for learning efficiently

  • Rolf Wiehagen
  • Thomas Zeugmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 642)

Abstract

In designing learning algorithms it seems quite reasonable to construct them so that all the data the algorithm has obtained so far are correctly and completely reflected in the description the algorithm outputs on these data. However, this approach may fail completely, i.e., it may render the learning problem unsolvable, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.
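The requirement discussed in the abstract is the classical consistency demand: the learner's current hypothesis must classify every example received so far correctly. The following toy sketch (illustrative only, not code from the paper; the example data and predicate names are invented) makes the distinction between a consistent hypothesis and one that ignores part of the data concrete:

```python
# Illustrative sketch of the consistency requirement: a hypothesis is
# consistent with the data seen so far iff it classifies every labelled
# example correctly. (Toy example; not taken from the paper.)

def is_consistent(hypothesis, examples):
    """Return True iff `hypothesis` agrees with all labelled examples."""
    return all(hypothesis(x) == label for (x, label) in examples)

# Toy data: positive/negative examples of strings starting with "ab".
examples = [("abba", True), ("abc", True), ("baa", False)]

def consistent_h(s):
    # Reflects all the data received so far.
    return s.startswith("ab")

def inconsistent_h(s):
    # Ignores the negative example ("baa", False).
    return True

print(is_consistent(consistent_h, examples))    # True
print(is_consistent(inconsistent_h, examples))  # False
```

The paper's point is that insisting on consistency at every step can make an otherwise tractable learning problem unsolvable or computationally infeasible, whereas a learner permitted to output temporarily inconsistent hypotheses may succeed in polynomial time.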


References

  1. Angluin, D. (1980), Finding Patterns Common to a Set of Strings, Journal of Computer and System Sciences 21, 46–62
  2. Angluin, D. and Smith, C.H. (1983), Inductive Inference: Theory and Methods, Computing Surveys 15, 3, 237–269
  3. Angluin, D. and Smith, C.H. (1987), Formal Inductive Inference, in Encyclopedia of Artificial Intelligence, St.C. Shapiro (Ed.), Vol. 1, pp. 409–418, Wiley-Interscience, New York
  4. Barzdin, Ya.M. (1974), Inductive Inference of Automata, Functions and Programs, Proc. International Congress of Mathematicians, Vancouver, pp. 455–460
  5. Blum, M. (1967), Machine Independent Theory of Complexity of Recursive Functions, Journal of the ACM 14, 322–336
  6. Blum, L. and Blum, M. (1975), Toward a Mathematical Theory of Inductive Inference, Information and Control 28, 122–155
  7. Fulk, M. (1988), Saving the Phenomena: Requirements that Inductive Inference Machines Not Contradict Known Data, Information and Computation 79, 193–209
  8. Garey, M.R. and Johnson, D.S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman and Company, San Francisco
  9. Gold, M.E. (1965), Limiting Recursion, Journal of Symbolic Logic 30, 28–48
  10. Gold, M.E. (1967), Language Identification in the Limit, Information and Control 10, 447–474
  11. Jantke, K.P. and Beick, H.R. (1981), Combining Postulates of Naturalness in Inductive Inference, Journal of Information Processing and Cybernetics (EIK) 17, 465–484
  12. Kearns, M. and Pitt, L. (1989), A Polynomial-Time Algorithm for Learning k-Variable Pattern Languages from Examples, in Proc. 2nd Annual Workshop on Computational Learning Theory, R. Rivest, D. Haussler and M.K. Warmuth (Eds.), pp. 57–70, Morgan Kaufmann
  13. Ko, K.-I., Marron, A. and Tzeng, W.G. (1990), Learning String Patterns and Tree Patterns from Examples, Proc. 7th Conference on Machine Learning, pp. 384–391
  14. Lange, S. and Wiehagen, R. (1991), Polynomial-Time Inference of Arbitrary Pattern Languages, New Generation Computing 8, 361–370
  15. Lange, S. and Zeugmann, T. (1991), Monotonic versus Non-monotonic Language Learning, in Proc. 2nd International Workshop on Nonmonotonic and Inductive Logic, December 1991, Reinhardsbrunn, to appear in Lecture Notes in Artificial Intelligence
  16. Lange, S. and Zeugmann, T. (1992), Types of Monotonic Language Learning and Their Characterization, in Proc. 5th Annual Workshop on Computational Learning Theory, Morgan Kaufmann
  17. Nix, R.P. (1983), Editing by Examples, Technical Report 280, Dept. of Computer Science, Yale University
  18. Osherson, D., Stob, M. and Weinstein, S. (1986), Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists, MIT Press, Cambridge, Massachusetts
  19. Porat, S. and Feldman, J.A. (1988), Learning Automata from Ordered Examples, in Proc. First Workshop on Computational Learning Theory, D. Haussler and L. Pitt (Eds.), pp. 386–396, Morgan Kaufmann
  20. Rogers, H. Jr. (1967), Theory of Recursive Functions and Effective Computability, McGraw-Hill, New York
  21. Shinohara, T. (1982), Polynomial Time Inference of Extended Regular Pattern Languages, RIMS Symposia on Software Science and Engineering, Kyoto, Lecture Notes in Computer Science 147, pp. 115–127, Springer-Verlag
  22. Solomonoff, R. (1964), A Formal Theory of Inductive Inference, Information and Control 7, 1–22 and 234–254
  23. Wiehagen, R. (1976), Limes-Erkennung rekursiver Funktionen durch spezielle Strategien [Limit identification of recursive functions by special strategies], Journal of Information Processing and Cybernetics (EIK) 12, 93–99
  24. Wiehagen, R. (1978a), Zur Theorie der algorithmischen Erkennung [On the theory of algorithmic identification], Dissertation B, Humboldt-Universität zu Berlin
  25. Wiehagen, R. (1978b), Characterization Problems in the Theory of Inductive Inference, in Proc. 5th Colloquium on Automata, Languages and Programming, Udine, July 17–21, G. Ausiello and C. Böhm (Eds.), Lecture Notes in Computer Science 62, pp. 494–508, Springer-Verlag
  26. Zeugmann, T. (1983), A-posteriori Characterizations in Inductive Inference of Recursive Functions, Journal of Information Processing and Cybernetics (EIK) 19, 559–594

Copyright information

© Springer-Verlag 1992

Authors and Affiliations

  • Rolf Wiehagen (1)
  • Thomas Zeugmann (2)
  1. Institut für Theoretische Informatik, Humboldt-Universität, Berlin
  2. Institut für Theoretische Informatik, TH Darmstadt, Darmstadt
