Abstract
A variant of iterative learning in the limit (cf. Lange and Zeugmann 1996) is studied in which a learner receives negative examples refuting conjectures that contain data in excess of the target language, and uses additional information of the following four types: (a) memorizing up to n input elements seen so far; (b) up to n feedback membership queries (testing whether an item is a member of the input seen so far); (c) the number of input elements seen so far; (d) the maximal element of the input seen so far. We explore how such additional information may help the learners defined and studied in Jain and Kinber (2007). In particular, we show that adding the maximal element or the number of elements seen so far enables such learners to infer any indexed class of languages class-preservingly (using a descriptive numbering defining the class); as proved in Jain and Kinber (2007), this is not possible without additional information. We also study how, in the given context, the different types of additional information fare against each other, and establish hierarchies separating learners that memorize n+1 versus n input elements and learners that make n+1 versus n feedback membership queries.
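To make the model concrete, here is a minimal toy sketch (not a construction from the paper) of an iterative learner for the indexed class of initial segments L_k = {0, ..., k}. The learner's state is only its current conjecture and the maximal element seen so far (additional information of type (d)); when a conjecture contains data in excess of the target, a teacher refutes it with a negative counterexample. The `teacher` and `learn` functions and the overshoot-by-one strategy are illustrative inventions, not the authors' method.

```python
def teacher(conjecture_k, target_k):
    """Return a negative counterexample if the conjecture L_k = {0, ..., k}
    contains an element outside the target language, else None."""
    return conjecture_k if conjecture_k > target_k else None

def learn(text, target_k):
    """Iterative learner: processes one positive example at a time, storing
    only the current conjecture and the maximal element seen so far."""
    max_seen = -1
    conjecture = None
    for x in text:
        max_seen = max(max_seen, x)
        guess = max_seen + 1            # deliberately overshoot by one
        if teacher(guess, target_k) is not None:
            guess = max_seen            # conjecture refuted: fall back
        conjecture = guess
    return conjecture

# On any text for L_5, the learner converges to index 5.
print(learn([3, 0, 5, 2, 1], target_k=5))  # prints 5
```

Without the refuting counterexamples, the overshooting learner would never converge; with them, one query per new maximum suffices, which is the kind of trade-off between counterexamples and additional information the paper quantifies.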
References
Angluin, D. (1980). Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21(1), 46–62.
Angluin, D. (1988). Queries and concept learning. Machine Learning, 2(4), 319–342.
Brachman, R., & Anand, T. (1996). The process of knowledge discovery in databases: a human centered approach. In U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, & R. Uthurusam (Eds.), Advances in knowledge discovery and data mining (pp. 37–58). Menlo Park: AAAI Press.
Blum, M. (1967). A machine-independent theory of the complexity of recursive functions. Journal of the ACM, 14(2), 322–336.
Blum, L., & Blum, M. (1975). Toward a mathematical theory of inductive inference. Information and Control, 28(2), 125–155.
Case, J. (1974). Periodicity in generations of automata. Mathematical Systems Theory, 8(1), 15–32.
Case, J., & Lynes, C. (1982). Machine inductive inference and language identification. In M. Nielsen & E. M. Schmidt (Eds.), Lecture notes in computer science: Vol. 140. Proceedings of the 9th international colloquium on automata, languages and programming (pp. 107–115). Berlin: Springer.
Case, J., & Moelius, S. (2008). U-shaped, iterative, and iterative-with-counter learning. Machine Learning, 72(1–2), 63–88.
Case, J., Jain, S., Lange, S., & Zeugmann, T. (1999). Incremental concept learning for bounded data mining. Information and Computation, 152(1), 74–110.
Fayyad, U. M., Piatetsky-Shapiro, G., & Smyth, P. (1996). From data mining to knowledge discovery. In U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, & R. Uthurusam (Eds.), Advances in knowledge discovery and data mining (pp. 1–34). Menlo Park: AAAI Press.
Fulk, M. (1990). Prudence and other conditions on formal language learning. Information and Computation, 85(1), 1–11.
Gold, E. M. (1967). Language identification in the limit. Information and Control, 10(5), 447–474.
Hopcroft, J., & Ullman, J. (1979). Introduction to automata theory, languages, and computation. Reading: Addison–Wesley.
Jain, S., & Kinber, E. (2007). Iterative learning from positive data and negative counterexamples. Information and Computation, 205(12), 1777–1805.
Jain, S., & Kinber, E. (2008). Learning languages from positive data and negative counterexamples. Journal of Computer and System Sciences, 74(4), 431–456. Special Issue: Carl Smith memorial issue.
Jain, S., & Kinber, E. (2009). Iterative learning from texts and counterexamples using additional information. In R. Gavaldà, G. Lugosi, T. Zeugmann, & S. Zilles (Eds.), Lecture notes in artificial intelligence: Vol. 5809. Algorithmic learning theory: 20th international conference (ALT’ 2009) (pp. 308–322). Berlin: Springer.
Jockusch, C. G. (1968). Semirecursive sets and positive reducibility. Transactions of the American Mathematical Society, 131, 420–436.
Lange, S., & Zeugmann, T. (1992). Types of monotonic language learning and their characterization. In Proceedings of the fifth annual workshop on computational learning theory (pp. 377–390). New York: ACM.
Lange, S., & Zeugmann, T. (1996). Incremental learning from positive data. Journal of Computer and System Sciences, 53(1), 88–103.
Lange, S., Zeugmann, T., & Zilles, S. (2008). Learning indexed families of recursive languages from positive data: a survey. Theoretical Computer Science, 397(1–3), 194–232.
Li, Y., & Zhang, W. (2006). Simplify support vector machines by iterative learning. Neural Information Processing—Letters and Reviews, 10(1), 11–17.
Osherson, D., Stob, M., & Weinstein, S. (1986). Systems that learn: an introduction to learning theory for cognitive and computer scientists. Cambridge: MIT Press.
Popper, K. (1968). The logic of scientific discovery (2nd ed.). New York: Harper Torch Books.
Rogers, H. (1967). Theory of recursive functions and effective computability. New York: McGraw–Hill. Reprinted by MIT Press in 1987.
Wiehagen, R. (1976). Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Journal of Information Processing and Cybernetics (EIK), 12(1–2), 93–99.
Additional information
Editor: A. Blum.
S. Jain was supported in part by NUS grant numbers R252-000-308-112 and C-252-000-087-001.
Cite this article
Jain, S., Kinber, E. Iterative learning from texts and counterexamples using additional information. Mach Learn 84, 291–333 (2011). https://doi.org/10.1007/s10994-011-5238-7