
Natural Recursion Doesn’t Work That Way: Automata in Planning and Syntax

Chapter in Fundamental Issues of Artificial Intelligence

Part of the book series: Synthese Library (volume 376)

Abstract

Natural recursion in syntax is recursion by linguistic value, which is not syntactic in nature but semantic. Syntax-specific recursion is not recursion by name as the term is understood in theoretical computer science. Recursion by name is probably not natural because of its infinite typeability. Natural recursion, or recursion by value, is not species-specific. Human recursion is not syntax-specific. The values on which it operates are most likely domain-specific, including those for syntax. Syntax seems to require no more (and no less) than the resource management mechanisms of an embedded push-down automaton (EPDA). We can conceive of the EPDA as a common automata-theoretic substrate for syntax, collaborative planning, i-intentions, and we-intentions, all of which manifest the same kind of dependencies. Syntactic uniqueness arguments for human behavior are therefore better explained if we take automata-constrained recursion to be the distinctively human capacity for such cognitive processes.

Notes

  1.

    All but one, that is. Everett (2005) argues that recursion is not a fact of all languages. That may be true, but the fact remains that some languages do have it, and all languages are equally likely to be acquirable. See Nevins et al. (2009) and Bozsahin (2012) for some criticism of Everett, and his response to some of the criticisms (Everett 2009). Even when syntactic recursion is not attested, there seems little doubt that semantic recursion, or recursion by value, is common to all humans, e.g. the ability to think a thought whose thinker is the agent and whose thinkee is another thought of the same type, manifested in English with complement clauses such as I think she thinks you like me. But it can be expressed nonrecursively as well: I think of the following: she thinks of it; it being that you like me. We shall have a closer look at such syntactic, semantic and anaphoric differences in recursive thoughts.

  2.

    The name is apt because, as the lambda calculus has shown us, reentrant knowledge can be captured without names if we want to, but the solution comes with a price (more on that later). In the current work, the term recursion by name (or label) is taken in its technical sense in computer science. Confusion can arise when we see the same term in linguistics, for example most recently in Chomsky (2013), where use of the same label in a recursive merger refers to the term ‘label’ in a different sense, namely to the occurrence of a value. A small programming sketch of the name/no-name contrast appears after these notes.

  3.

    Some comprehensive attempts in linguistics at accounting for the interaction are Grimshaw (1990), Manning (1996), and Hale and Keyser (2002).

  4.

    See Jackendoff and Pinker (2005) and Parker (2006) for counterarguments on the evolutionary basis of syntactic recursion.

  5.

    I am not suggesting that (12) is the universal schema for all plans. It is meant to show that collaborative plans may be LIG-serializable. The LIG-plan space remains to be worked out. For example, the base cases of individuated grammars, the \(S_{i}^{\prime}[S_{i}]\) rules, doing running—as part of a dance—and \(A_{i}\), would be by definition LIG-serializable too, but in a manner different from what α and β are intended to capture, viz. the contextualized knowledge states of the group constituting the we-intention. The \(A_{i}\)s may be LIG-realized action sequences, making the whole collection a we-plan.

  6.

    We note that the language {ww ∣ w ∈ {a, b, c}*} is fundamentally different from the double-copy language {www ∣ w ∈ {a, b, c}*}. The first one allows stack processing. Here is a LIG grammar for it: \(S_{[\ldots]} \rightarrow x\ S_{[x\ldots]},\ S_{[\ldots]} \rightarrow S^{\prime}_{[\ldots]},\ S^{\prime}_{[x\ldots]} \rightarrow S^{\prime}_{[\ldots]}\ x,\ S^{\prime}_{[\ ]} \rightarrow \epsilon\), for x ∈ {a, b, c}. A small rendering of this grammar in code appears after these notes.

  7.

    Continuing this way of thinking, we could factor recursion and other dependencies in a grammar, and incorporate word order as a lexically specifiable constraint. It might achieve the welcome result of self-constraining recursion and levels of embedding in parsing; see Joshi (2004: 662).

    Both LTAG and CCG avoid recursion by name, LTAG by employing adjunction in addition to substitution, and CCG by avoiding any use of paradoxical combinators such as Y, or generalized composition. That is how they stay well below the Turing equivalence that recursion by name might otherwise have brought about; see also Joshi (1990), Vijay-Shanker and Weir (1993), and Bozsahin (2012) for discussion of these aspects. Their restrictiveness (to LIG) becomes their explanatory force.

  8.

    The Swiss German facts are more direct because the language has overt case marking and stricter word order; see Bozsahin (2012) for a CCG grammar of some Swiss German examples.

  9.

    Notice that \(\{a^{n}b^{n}c^{n}d^{n}e^{n}\mid n \geq 0\}\) is not a linear-indexed language, hence such grammars make no use of a linear distance metric, or simple induction from patterns; see Joshi (1983).

  10.

    Notice that lazy evaluation is not a remedy here. By lazy evaluation, we can represent infinite streams by finite means (Abelson et al. 1985; Watt 2004), but for that to work the infinite streams must be enumerable. A sketch of such a stream appears after these notes.
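
The following sketch, referenced in note 2, is my own illustration in Python (not part of the chapter) of the contrast between recursion by name and recursion without names. Recursion by name lets a definition refer to itself through its bound name; the lambda-calculus alternative removes the name by means of a fixed-point combinator (here the strict Z variant of the paradoxical combinator Y). All function names below are hypothetical, chosen only for the example.

    # Recursion by name: the definition refers to itself through its bound name.
    def length_by_name(xs):
        return 0 if not xs else 1 + length_by_name(xs[1:])

    # Recursion without names: a fixed-point combinator supplies the self-reference.
    # Z is the strict (call-by-value) variant of the paradoxical combinator Y.
    def Z(f):
        return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # The "open" body takes its own recursive continuation as an argument,
    # so nothing inside it names the function being defined.
    length_open = lambda self: lambda xs: 0 if not xs else 1 + self(xs[1:])
    length_no_name = Z(length_open)

    sample = ["a", "b", "c", "d"]
    assert length_by_name(sample) == length_no_name(sample) == 4

The price mentioned in note 2 shows up as the self-application x(x) inside Z: it cannot be assigned a finite simple type, which is one way of reading the abstract's remark about the infinite typeability of recursion by name.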
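
The following sketch, referenced in note 6, renders that LIG grammar as a small Python enumerator (my own coding of the rules, not the chapter's). It generates every string derivable with at most three applications of the pushing rule and checks that they are exactly the strings ww with w over {a, b, c} up to that bound.

    from itertools import product

    ALPHABET = ("a", "b", "c")

    def lig_copy_strings(max_push):
        # Rules of the grammar in note 6:
        #   S[..]   -> x S[x..]    push x, emit x on the left
        #   S[..]   -> S'[..]      switch to the popping phase
        #   S'[x..] -> S'[..] x    pop x, emit x on the right
        #   S'[]    -> epsilon
        results = set()

        def derive_S(stack, left, pushes):
            derive_S_prime(stack, left, [])          # S[..]  -> S'[..]
            if pushes < max_push:                    # S[..]  -> x S[x..]
                for x in ALPHABET:
                    derive_S([x] + stack, left + [x], pushes + 1)

        def derive_S_prime(stack, left, right):
            if not stack:                            # S'[]    -> epsilon
                results.add("".join(left + right))
            else:                                    # S'[x..] -> S'[..] x
                derive_S_prime(stack[1:], left, [stack[0]] + right)

        derive_S([], [], 0)
        return results

    derived = lig_copy_strings(3)
    expected = {"".join(w) * 2 for n in range(4) for w in product(ALPHABET, repeat=n)}
    assert derived == expected  # exactly the strings ww with |w| <= 3

Because each popped symbol is emitted just to the left of the previously popped ones, the suffix comes out in the same order as the prefix, yielding ww rather than a palindrome; this is the stack processing that, according to the note, the double-copy language does not admit.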
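
The following sketch, referenced in note 10, shows in Python (my own illustration, not the chapter's) how lazy evaluation represents an infinite stream by finite means: only the rule for producing the next element is stored, and elements are computed on demand.

    from itertools import islice

    def naturals():
        # An infinite stream held in finite form: just the rule for the next element.
        n = 0
        while True:
            yield n
            n += 1

    def squares():
        # Another infinite stream, defined lazily in terms of the first.
        return (n * n for n in naturals())

    print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
    print(list(islice(squares(), 5)))   # [0, 1, 4, 9, 16]

The finite representation works only because the members can be enumerated one by one; that is the sense in which note 10 says lazy evaluation is no remedy when the streams are not enumerable.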

References

  • Aaronson, S. (2013). Why philosophers should care about computational complexity. In B. J. Copeland, C. J. Posy, & O. Shagrir (Eds.), Computability: Turing, Gödel, Church, and beyond. Cambridge: MIT.

  • Abelson, H., Sussman, G. J., & Sussman, J. (1985). Structure and interpretation of computer programs. Cambridge: MIT.

  • Berwick, R. C., Okanoya, K., Beckers, G. J., & Bolhuis, J. J. (2011). Songs to syntax: The linguistics of birdsong. Trends in Cognitive Sciences, 15(3), 113–121.

  • Berwick, R. C., Friederici, A. D., Chomsky, N., & Bolhuis, J. J. (2013). Evolution, brain, and the nature of language. Trends in Cognitive Sciences, 17(2), 89–98.

  • Bozsahin, C. (2012). Combinatory linguistics. Berlin/Boston: De Gruyter Mouton.

  • Bratman, M. E. (1992). Shared cooperative activity. The Philosophical Review, 101(2), 327–341.

  • Burns, S. R. (2009). The problem of deduction: Hume’s problem expanded. Dialogue, 52(1), 26–30.

  • Chomsky, N. (1995). The minimalist program. Cambridge: MIT.

  • Chomsky, N. (2005). Three factors in language design. Linguistic Inquiry, 36(1), 1–22.

  • Chomsky, N. (2013). Problems of projection. Lingua, 130, 33–49.

  • Curry, H. B., & Feys, R. (1958). Combinatory logic. Amsterdam: North-Holland.

  • Deacon, T. W. (1997). The symbolic species: The co-evolution of language and the human brain. London: The Penguin Press.

  • Everett, D. L. (2005). Cultural constraints on grammar and cognition in Pirahã. Current Anthropology, 46(4), 621–646.

  • Everett, D. L. (2009). Pirahã culture and grammar: A response to some criticisms. Language, 85(2), 405–442.

  • Fitch, T., Hauser, M., & Chomsky, N. (2005). The evolution of the language faculty: Clarifications and implications. Cognition, 97, 179–210.

  • Gazdar, G. (1988). Applicability of indexed grammars to natural languages. In U. Reyle & C. Rohrer (Eds.), Natural language parsing and linguistic theories (pp. 69–94). Dordrecht: Reidel.

  • Ghallab, M., Nau, D., & Traverso, P. (2004). Automated planning: Theory and practice. San Francisco: Morgan Kaufmann.

  • Gibson, J. (1966). The senses considered as perceptual systems. Boston: Houghton-Mifflin Co.

  • Grimshaw, J. (1990). Argument structure. Cambridge: MIT.

  • Grosz, B., & Kraus, S. (1993). Collaborative plans for group activities. In IJCAI, Chambéry (Vol. 93, pp. 367–373).

  • Grosz, B. J., Hunsberger, L., & Kraus, S. (1999). Planning and acting together. AI Magazine, 20(4), 23.

  • Hale, K., & Keyser, S. J. (2002). Prolegomenon to a theory of argument structure. Cambridge: MIT.

  • Hauser, M., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298, 1569–1579.

  • Huybregts, R., & van Riemsdijk, H. (1982). Noam Chomsky on the generative enterprise. Dordrecht: Foris.

  • Jackendoff, R., & Pinker, S. (2005). The nature of the language faculty and its implications for language evolution. Cognition, 97, 211–225.

  • Jaynes, J. (1976). The origin of consciousness in the breakdown of the bicameral mind. New York: Houghton Mifflin Harcourt.

  • Joshi, A. K. (1983). Factoring recursion and dependencies: An aspect of tree adjoining grammars (TAG) and a comparison of some formal properties of TAGs, GPSGs, PLGs, and LPGs. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Cambridge (pp. 7–15).

  • Joshi, A. (1985). How much context-sensitivity is necessary for characterizing complex structural descriptions—Tree adjoining grammars. In D. Dowty, L. Karttunen, & A. Zwicky (Eds.), Natural language parsing (pp. 206–250). Cambridge: Cambridge University Press.

  • Joshi, A. (1990). Processing crossed and nested dependencies: An automaton perspective on the psycholinguistic results. Language and Cognitive Processes, 5, 1–27.

  • Joshi, A. K. (2004). Starting with complex primitives pays off: Complicate locally, simplify globally. Cognitive Science, 28(5), 637–668.

  • Joshi, A., & Schabes, Y. (1992). Tree-adjoining grammars and lexicalized grammars. In M. Nivat & A. Podelski (Eds.), Definability and recognizability of sets of trees. Princeton: Elsevier.

  • Joshi, A., Vijay-Shanker, K., & Weir, D. (1991). The convergence of mildly context-sensitive formalisms. In P. Sells, S. Shieber, & T. Wasow (Eds.), Foundational issues in natural language processing (pp. 31–81). Cambridge: MIT.

  • Kanazawa, M., & Salvati, S. (2012). MIX is not a tree-adjoining language. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, Jeju Island (pp. 666–674). Association for Computational Linguistics.

  • Knuth, D. E. (1968). Fundamental algorithms (The art of computer programming, Vol. 1). Reading: Addison-Wesley.

  • Kok, A. (2013). Kant, Hegel, und die Frage der Metaphysik: Über die Möglichkeit der Philosophie nach der Kopernikanischen Wende. Wilhelm Fink.

  • Lashley, K. (1951). The problem of serial order in behavior. In L. Jeffress (Ed.), Cerebral mechanisms in behavior (pp. 112–136). New York: Wiley. Reprinted in Saporta (1961).

  • Lochbaum, K. E. (1998). A collaborative planning model of intentional structure. Computational Linguistics, 24(4), 525–572.

  • Manning, C. D. (1996). Ergativity: Argument structure and grammatical relations. Stanford: CSLI.

  • Nevins, A., Pesetsky, D., & Rodrigues, C. (2009). Pirahã exceptionality: A reassessment. Language, 85(2), 355–404.

  • Parker, A. R. (2006). Evolving the narrow language faculty: Was recursion the pivotal step? In The Evolution of Language: Proceedings of the 6th International Conference on the Evolution of Language (pp. 239–246). Singapore: World Scientific Press.

  • Petrick, R. P., & Bacchus, F. (2002). A knowledge-based approach to planning with incomplete information and sensing. In AIPS, Toulouse (pp. 212–222).

  • Peyton Jones, S. L. (1987). The implementation of functional programming languages. New York: Prentice-Hall.

  • Quine, W. v. O. (1960). Word and object. Cambridge: MIT.

  • Saporta, S. (Ed.). (1961). Psycholinguistics: A book of readings. New York: Holt Rinehart Winston.

  • Searle, J. R. (1990). Collective intentions and actions. In P. R. Cohen, M. E. Pollack, & J. L. Morgan (Eds.), Intentions in communication. Cambridge: MIT.

  • Shieber, S. (1985). Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8, 333–343.

  • Speas, M., & Roeper, T. (Eds.). (2009, forthcoming). Proceedings of the Conference on Recursion: Structural Complexity in Language and Cognition, University of Massachusetts, Amherst.

  • Stabler, E. (2013). Copying in mildly context-sensitive grammar. Informatics Seminars, Institute for Language, Cognition and Computation, University of Edinburgh, October 2013.

  • Steedman, M. (2000). The syntactic process. Cambridge: MIT.

  • Steedman, M. (2002). Plans, affordances, and combinatory grammar. Linguistics and Philosophy, 25, 723–753.

  • Steedman, M., & Petrick, R. P. (2007). Planning dialog actions. In Proceedings of the 8th SIGDIAL Workshop on Discourse and Dialogue (SIGdial 2007), Antwerp (pp. 265–272).

  • Tomasello, M., & Call, J. (1997). Primate cognition. New York: Oxford University Press.

  • Tomasello, M., Call, J., & Hare, B. (2003). Chimpanzees understand psychological states—the question is which ones and to what extent. Trends in Cognitive Sciences, 7(4), 153–156.

  • Turing, A. M. (1937). Computability and \(\lambda\)-definability. Journal of Symbolic Logic, 2(4), 153–163.

  • Valiant, L. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134–1142.

  • Valiant, L. (2013). Probably approximately correct: Nature’s algorithms for learning and prospering in a complex world. New York: Basic Books.

  • Van Heijningen, C. A., De Visser, J., Zuidema, W., & Ten Cate, C. (2009). Simple rules can explain discrimination of putative recursive syntactic structures by a songbird species. Proceedings of the National Academy of Sciences, 106(48), 20538–20543.

  • Vijay-Shanker, K. (1987). A study of tree adjoining grammars. PhD thesis, University of Pennsylvania.

  • Vijay-Shanker, K., & Weir, D. (1993). Parsing some constrained grammar formalisms. Computational Linguistics, 19, 591–636.

  • Watt, D. A. (2004). Programming language design concepts. Chichester: Wiley.

  • Zettlemoyer, L., & Collins, M. (2005). Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, Edinburgh.


Acknowledgements

Thanks to PT-AI reviewers and the audience at Oxford, İstanbul, and Ankara, and to Julian Bradfield, Aravind Joshi, Simon Kirby, Vincent Müller, Umut Özge, Geoffrey Pullum, Aaron Sloman, Mark Steedman, and Language Evolution and Computation Research Unit (LEC) at Edinburgh University, for comments and advice. I am to blame for all errors and for not heeding good advice. This research is supported by the GRAMPLUS project granted to Edinburgh University, EU FP7 Grant #249520.

Author information

Correspondence to Cem Bozşahin.


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Bozşahin, C. (2016). Natural Recursion Doesn’t Work That Way: Automata in Planning and Syntax. In: Müller, V.C. (eds) Fundamental Issues of Artificial Intelligence. Synthese Library, vol 376. Springer, Cham. https://doi.org/10.1007/978-3-319-26485-1_7
