Abstract
This paper solves an important problem left open in the literature by showing that U-shapes are unnecessary in iterative learning. A U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. Iterative learning is a Gold-style learning model in which each of a learner’s output conjectures depends only upon the learner’s just previous conjecture and upon the most recent input element. Previous results had shown, for example, that U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct learning.
Work on the aforementioned problem led to the consideration of an iterative-like learning model, in which each of a learner’s conjectures may, in addition, depend upon the number of elements so far presented to the learner. Learners in this new model are strictly more powerful than traditional iterative learners, yet not as powerful as full explanatory learners. Can any class of languages learnable in this new model be learned without U-shapes? For now, this problem is left open.
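The distinction between the two models can be sketched in code. The following is a minimal illustration only, not the paper's formalism: real Gold-style learners output indices of grammars in an acceptable programming system, which is elided here, and all function names are illustrative.

```python
def run_iterative(update, text):
    """Feed a text (a sequence of elements) to an iterative learner.

    `update(conjecture, x)` may depend only upon the learner's just
    previous conjecture and the most recent input element x.
    """
    conjecture = None
    for x in text:
        conjecture = update(conjecture, x)
    return conjecture


def run_iterative_with_counter(update, text):
    """As above, but `update` additionally sees n, the number of
    elements presented to the learner so far."""
    conjecture = None
    for n, x in enumerate(text, start=1):
        conjecture = update(conjecture, x, n)
    return conjecture


# Toy example: the learner conjectures the largest element seen so far.
final = run_iterative(
    lambda c, x: x if c is None or x > c else c,
    [3, 1, 4, 1, 5],
)
```

The point of the counter model is visible in the second signature: the counter n is information an iterative learner cannot reconstruct from its conjecture and current element alone, which is what makes learners in the counter model strictly more powerful.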
Copyright information
© 2007 Springer Berlin Heidelberg
Cite this paper
Case, J., Moelius, S.E. (2007). U-Shaped, Iterative, and Iterative-with-Counter Learning. In: Bshouty, N.H., Gentile, C. (eds.) Learning Theory. COLT 2007. Lecture Notes in Computer Science, vol. 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_14
Print ISBN: 978-3-540-72925-9
Online ISBN: 978-3-540-72927-3