
Computational Models and Psychological Reality

Psychosyntax

Part of the book series: Philosophical Studies Series (PSSP, volume 129)


Abstract

The main claim of this chapter and the next is that all psychologically plausible parsing models either represent or embody a grammar. I substantiate this claim by surveying top-down, bottom-up, and left-corner parsing algorithms, illustrating the ways in which they can draw on explicit representations of grammatical principles. I then discuss the Parsing as Deduction approach, wherein a proof procedure takes the rules of a grammar as axioms and derives MPMs as theorems, using a subpersonal analogue of natural deduction. This constitutes the most concrete implementation of the idea that the HSPM draws on syntactic principles as data. Finally, I turn to three strategies for dealing with the massive structural ambiguity that any parser will encounter in the input stream. Resource-based approaches emphasize parsing heuristics that minimize the use of computational resources, like short-term memory. Frequency-based approaches use statistical analyses of corpora and treebanks to guide parsing decisions. Grammar-based approaches appeal directly to Minimalist syntactic principles in accounting for the HSPM’s behavior in the face of ambiguity. The latter possibility is particularly exciting, as it would show that a Minimalist grammar is suitable for describing not only abstract formal relations but also the real-time operation of psychological mechanisms.


Notes

  1.

    Devitt (2006a) characterizes the notion of a structure rule as follows: “The outputs of a linguistic competence … are governed by a system of rules, just like the outputs of the chess player, the logic machine, and the bee. Something counts as a sentence only if it has a place in the linguistic structure defined by these structure rules. Something counts as a particular sentence, has its particular syntactic structure, in virtue of the particular structure rules that govern it, in virtue of its particular place in the linguistic structure. Like the theory of the idealized outputs of the chess player, logic machine, and bee, our theory can be used to make distinctions among the nonideal. Strings that are not sentences can differ in their degree of failure. For they can differ in the sort and number of linguistic structure rules that they fail to satisfy” (p. 24).

  2.

    Steedman (2000) actually makes a stronger commitment: “It is important to note that the strong competence hypothesis as stated by Bresnan and Kaplan imposes no further constraint on the processor. In particular, it does not limit the structures built by the processor to fully instantiated constituents. However, the Strict Competence Hypothesis proposed in this book imposes this stronger condition” (228).

  3.

    Constraints of space preclude a fuller discussion of a number of formalisms that are currently popular in parsing theory. These include the lexicalist grammars that rely on feature unification—e.g., Lexical Functional Grammar (Bresnan 2001) and Head-driven Phrase Structure Grammar (Pollard and Sag 1994)—as well as Tree-Adjoining Grammar (Schabes et al. 1988) and Combinatory Categorial Grammars (Steedman 2000). Throughout the discussion, I will occasionally mention these, but I reserve a detailed treatment for future work. The philosophical conclusions pertaining to the psychological reality issue are not affected by this omission.

  4.

    This claim pertains to the implementations of such algorithms in conventional computers. How such algorithms are implemented in the human brain, if indeed they are, is a separate question. As noted above, it may well be that the brain embodies the rules, without explicitly representing them.

  5.

    We can trade in the awkward subjunctive locutions for ordinary material conditionals, so long as we keep firmly in mind that the latter are lawlike, in the sense of Goodman (1954/1983). See Chaps. 1 and 2 for a discussion of why this matters for the ontology of linguistic theory.

  6.

    See Karttunen and Zwicky (1985: pp. 3–5) for a discussion of closely related issues.

  7.

    Strictly speaking, in order to secure this result, we would have to include an explicit rule to the effect that nothing else is a sentence of the language. Following convention, I have omitted this in the main text.

  8.

    There is a distinction between “recognizing” a sentence and assigning a syntactic structure to it. A system’s recognizing a string, or “accepting” it, amounts to no more than that system’s issuing a judgment to the effect that the string in question is grammatical, relative to the grammar with which the system is operating. Parsing, by contrast, involves constructing one or more representations of the string’s syntactic structure and, in the case of ambiguity, selecting one of these as the privileged representation—the one that constitutes the system’s “ultimate decision” about how the input should be interpreted. For a discussion of this distinction, see Berwick and Weinberg (1984), p. 252.
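    As a toy illustration of the distinction (the grammar, lexicon, and example sentence below are hypothetical, not taken from any of the works under discussion), note that a recognizer can be obtained from a parser simply by discarding the structural descriptions and keeping only the verdict:

        # Toy contrast between parsing and recognizing (hypothetical grammar/lexicon).
        GRAMMAR = {                      # binary rules only, for brevity
            "S": [("NP", "VP")],
            "NP": [("Det", "N")],
            "VP": [("V", "NP")],
        }
        LEXICON = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

        def parses(cat, words):
            """Return every tree of category `cat` spanning `words` (parsing)."""
            if len(words) == 1 and LEXICON.get(words[0]) == cat:
                return [(cat, words[0])]
            trees = []
            for left, right in GRAMMAR.get(cat, []):
                for i in range(1, len(words)):          # try every split point
                    for lt in parses(left, words[:i]):
                        for rt in parses(right, words[i:]):
                            trees.append((cat, lt, rt))
            return trees

        def recognizes(cat, words):
            """Return only a grammaticality verdict (recognition/'acceptance')."""
            return bool(parses(cat, words))

        sentence = "the dog saw the cat".split()
        print(recognizes("S", sentence))   # True -- the recognizer's verdict
        print(parses("S", sentence))       # the parser's structural description(s)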

  9.

    Fodor et al. (1974) refer to top-down and bottom-up techniques as analysis-by-synthesis and analysis-by-analysis, respectively. Computer scientists sometimes use the terms recursive-descent and shift-reduce, for reasons that will become apparent below.

  10.

    In contemporary syntactic theories in the P&P tradition (Chap. 9), the phrasal category S has been replaced by other phrasal types—different ones in different theories. The list includes, inter alia, inflectional phrase (IP), complementizer phrase (CP), and tense phrase (TP). For ease of exposition, I ignore this and related complications.

  11.

    In principle, a top-down parser could make predictions even about which specific lexical items will appear in the input stream. But such predictions, even if made on the basis of frequency information and pragmatic/contextual clues, would still be rather risky. I assume, then, that lexical retrieval is in significant measure a data-driven process. This raises a question about whether the “matching” involved in lexical recognition is brute-causal, in the sense introduced by Devitt (2006a). The answer is that it’s not. The HSPM constructs phonological representations prior to, and in the service of, lexical retrieval. As with syntactic processing, the activation of a phonological representation is highly context-sensitive and dependent on factors that are not present in the immediate stimulus, but scattered over discontinuous chunks of time (Fernández and Cairns 2011: ch. 6). Moreover, higher-level syntactic decisions exert a downward influence on lexical retrieval and actively guide the correction of errors in the retrieval process.

  12.

    The infinitude of the space is handled by a simple formal device that we shall introduce in our discussion of ATN parsers in Chap. 9. For a visual aid, the reader can skip down to Figs. 8.11, 8.12, 8.13, which illustrate the type of search space I have in mind here.

  13.

    The Earley algorithm is discussed, in various levels of detail, in Hale (2001), Jurafsky and Martin (2008), Kilbury (1985), Harkema (2001), Aho and Ullman (1972), Morawietz (2000), Pereira and Warren (1983), and Shieber, Schabes, and Pereira (1993).
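    For readers who want a concrete feel for how the algorithm works, here is a minimal sketch of an Earley recognizer of my own devising (the toy grammar and sentence are illustrative only); it runs the three standard operations, predict, scan, and complete, over a chart, and returns a verdict rather than a parse forest:

        # Minimal Earley recognizer (predict/scan/complete) over a toy grammar.
        GRAMMAR = {
            "S":  [["NP", "VP"]],
            "NP": [["Det", "N"]],
            "VP": [["V", "NP"]],
            "Det": [["the"]], "N": [["dog"], ["cat"]], "V": [["saw"]],
        }

        def earley_recognize(words):
            # A state is (lhs, rhs, dot, origin); chart[i] holds states ending at i.
            chart = [set() for _ in range(len(words) + 1)]
            chart[0].add(("GAMMA", ("S",), 0, 0))              # dummy start state
            for i in range(len(words) + 1):
                agenda = list(chart[i])
                while agenda:
                    lhs, rhs, dot, origin = agenda.pop()
                    if dot < len(rhs) and rhs[dot] in GRAMMAR:         # PREDICT
                        for expansion in GRAMMAR[rhs[dot]]:
                            new = (rhs[dot], tuple(expansion), 0, i)
                            if new not in chart[i]:
                                chart[i].add(new); agenda.append(new)
                    elif dot < len(rhs):                               # SCAN
                        if i < len(words) and words[i] == rhs[dot]:
                            chart[i + 1].add((lhs, rhs, dot + 1, origin))
                    else:                                              # COMPLETE
                        for l2, r2, d2, o2 in list(chart[origin]):
                            if d2 < len(r2) and r2[d2] == lhs:
                                new = (l2, r2, d2 + 1, o2)
                                if new not in chart[i]:
                                    chart[i].add(new); agenda.append(new)
            return ("GAMMA", ("S",), 1, 0) in chart[len(words)]

        print(earley_recognize("the dog saw the cat".split()))   # True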

  14.

    For details, see Jurafsky and Martin (2008: pp. 452–454). Kaplan (1973) contains an early but prescient discussion of various chart-parsing techniques.

  15.

    See also Schabes, Abeille, and Joshi (1988), who provide an instructive application of the Earley algorithm to lexicalized versions of context-free grammars, as well as to the mildly context-sensitive tree-adjoining grammar (TAG). The authors discuss the considerable gains in efficiency stemming from the lexicalization of these grammars.

  16.

    The terms “simple” and “complex” can be given a formal interpretation. Whereas programming languages tend to belong to the class of context-free languages, it has been known for some time that natural languages are slightly stronger than context-free. This was demonstrated by Shieber (1985), who discussed cases of cross-serial dependencies in Swiss German. Syntacticians are in the process of constructing formalisms that fit this specification while avoiding overgeneration—i.e., without allowing the formulation of grammars that are not attested by any known natural language. The Minimalist grammars currently being explored in one branch of syntactic theory are committed to the existence of discontinuous constituents—syntactic chains in which antecedents do not c-command the traces or copies that they leave behind after movement. A grammar that generates discontinuous constituents is stronger than context-free. Minimalist grammars thus belong to the class of mildly context-sensitive grammars (Stabler 2001). Shieber, Schabes, and Pereira (1993) discuss other mildly context-sensitive grammars, e.g., tree-adjoining grammars (TAGs), and provide a schema for constructing efficient parsers for these grammars.
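    To make the formal point concrete (the pattern and the code are my own illustration, not the author's), cross-serial dependencies pair items in the same order, as in the abstract pattern a^m b^n c^m d^n. No context-free grammar generates that language, even though checking membership procedurally is trivial; this is one reason such patterns motivate mildly context-sensitive formalisms like TAG and Minimalist grammars:

        # Schematic membership test for the non-context-free "crossed" pattern
        # a^m b^n c^m d^n: the a's pair with the c's and the b's with the d's,
        # in the same order, mimicking cross-serial dependencies.
        import re

        def is_cross_serial(s):
            m = re.fullmatch(r"(a*)(b*)(c*)(d*)", s)
            return bool(m) and len(m.group(1)) == len(m.group(3)) \
                           and len(m.group(2)) == len(m.group(4))

        print(is_cross_serial("aabccd"))   # True: two a's ~ two c's, one b ~ one d
        print(is_cross_serial("aabcdd"))   # False: the pairings do not line up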

  17.

    Though we’ll see in Sect. 8.4.2 that the differences between their concerns can also cause confusion.

  18.

    Another term for these parsers that may be familiar to computer scientists is ‘LR(k)’. The symbol ‘LR’ encodes the fact that the parser scans the input from left to right while constructing a rightmost derivation in reverse. The variable ‘k’ defines the size of the parser’s “look-ahead window”—i.e., how many items of the input it is allowed to take in before initiating some operation. An LR(2) parser, for instance, can look at two items of the input before deciding what to do next.
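    A toy sketch of the idea (the categories and the decision rule are invented for illustration, and no real LR table is involved): the parser peeks at the next k items of the input before committing to a shift or a reduce:

        # Illustrative k-token "look-ahead window" for a shift-reduce loop.
        def lookahead(tokens, i, k):
            """Return the next k tokens of the input, starting at position i."""
            return tuple(tokens[i:i + k])

        # Hypothetical decision rule: reduce "Det N" to NP only when the window
        # shows an upcoming verb; otherwise keep shifting.
        def choose_action(stack, window):
            if stack[-2:] == ["Det", "N"] and window[:1] == ("V",):
                return "reduce NP -> Det N"
            return "shift"

        tags = ["Det", "N", "V", "Det", "N"]     # POS tags for "the dog saw the cat"
        print(lookahead(tags, 2, 2))             # ('V', 'Det') -- an LR(2) window
        print(choose_action(["Det", "N"], lookahead(tags, 2, 2)))   # reduce NP -> Det N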

  19.

    There is a way in which this sort of remark can be misleading. Devitt (2006a: pp. 69–71) notes an important distinction between the expressions of a language, on the one hand, and structural descriptions of those expressions on the other. The latter can be derived from a theory of a language, but the former cannot. “[W]hat is derived from a grammar is not an expression of the language but a description of an expression, just as what is derived from an astronomical theory is not, say, a star, but a description of a star” (p. 69). If we are careful not to run afoul of this use/mention distinction, we must say that the parser uses its internally represented grammar—conceived now as (a subpersonal analogue of) a theory of a language—to generate structural descriptions of incoming linguistic stimuli. Hence, when we say that the parser produces “a structure that has an S node at its root,” we do not mean that it produces a sentence; rather, the parser produces a description of the incoming stimulus, thus characterizing the stimulus as a structure that has an S node at its root. The question then arises: How do we get from such a descriptive characterization to the final product of language comprehension? If the final representation uses words, rather than mentioning them, then what accounts for the transition from a description (which merely mentions the words) to the final product (which uses them)? This puzzle disappears if we say that the final representation merely “mentions” the words, though in a sense that’s closer to indirect discourse (e.g., “Kurt just said that it’s raining.”)

  20.

    Indeed, it had better not be the case, if psychological plausibility is our goal. As noted in Chaps. 5 and 6, the HSPM is an “eager” mechanism—the assignment of syntactic structure is never delayed; syntactic analysis begins at the very first hint of linguistic input and continues incrementally, morpheme by morpheme.

  21.

    Similar visual aids can be found in Abney and Johnson (1991) and Crocker (1999).

  22.

    Note that the CFG must first be transformed into a binary-branching format known as Chomsky Normal Form (CNF). This transformation is well-defined and computationally trivial. See Jurafsky and Martin (2008: pp. 441–2).
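    If the algorithm at issue is CKY-style chart parsing (the case Jurafsky and Martin discuss), the reason for the requirement is that each chart cell is filled by combining exactly two adjacent sub-constituents, so every rule must be binary. A minimal CKY recognizer over an already-binarized toy grammar (my example, not one drawn from the text) looks like this:

        # Minimal CKY recognizer over a toy grammar already in Chomsky Normal Form
        # (binary rules A -> B C plus lexical entries); illustrative only.
        BINARY = {("NP", "VP"): "S", ("Det", "N"): "NP", ("V", "NP"): "VP"}
        LEXICAL = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

        def cky_recognize(words):
            n = len(words)
            # table[i][j] holds the categories spanning words[i:j]
            table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
            for i, w in enumerate(words):
                table[i][i + 1] = set(LEXICAL.get(w, set()))
            for span in range(2, n + 1):
                for i in range(0, n - span + 1):
                    j = i + span
                    for k in range(i + 1, j):               # binary split point
                        for b in table[i][k]:
                            for c in table[k][j]:
                                if (b, c) in BINARY:
                                    table[i][j].add(BINARY[(b, c)])
            return "S" in table[0][n]

        print(cky_recognize("the dog saw the cat".split()))   # True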

  23.

    Similar visual aids can be found in Abney and Johnson (1991) and Crocker (1999).

  24.

    See Abney and Johnson (1991), Stabler (1994), Crocker (1999), Harkema (2001). Abney and Johnson (1991) distinguish between what they call arc-eager and arc-standard left-corner parsers. The latter, they argue, are less efficient than the former. I omit the details here. Note also that left-corner parsing bears a close resemblance to what Fodor and Frazier (1980) term ‘information-paced parsing’. Frazier and Fodor argue that strictly top-down and bottom-up routines are too rigid to deal effectively with locally ambiguous inputs.
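    For concreteness, here is a minimal backtracking left-corner recognizer over a toy grammar (my own sketch, closest to the arc-standard regime, since a constituent is connected to its parent only after all of its sisters have been found). A word's category is announced bottom-up, projected to a rule whose left corner it matches, and the rule's remaining daughters are then sought top-down:

        # Toy left-corner recognizer (backtracking via generators); illustrative only.
        RULES = [("S", ["NP", "VP"]), ("NP", ["Det", "N"]), ("VP", ["V", "NP"])]
        LEX = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

        def parse_cat(goal, i, words):
            """Yield every position j such that words[i:j] is a `goal` constituent."""
            if i < len(words):
                yield from complete(goal, LEX[words[i]], i + 1, words)

        def complete(goal, found, i, words):
            """`found` spans the input up to i; grow it corner-by-corner toward `goal`."""
            if found == goal:
                yield i                       # the constituent just built IS the goal
            for lhs, rhs in RULES:
                if rhs[0] == found:           # `found` is the left corner of lhs
                    yield from rest(goal, lhs, rhs[1:], i, words)

        def rest(goal, lhs, remaining, i, words):
            """Top-down: find lhs's remaining daughters, then keep climbing."""
            if not remaining:
                yield from complete(goal, lhs, i, words)
            else:
                for j in parse_cat(remaining[0], i, words):
                    yield from rest(goal, lhs, remaining[1:], j, words)

        words = "the dog saw the cat".split()
        print(any(j == len(words) for j in parse_cat("S", 0, words)))   # True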

  25.

    For an in-depth discussion of center-embedding, see Thomas (1995). Stabler (1994) discusses a range of center-embedded structures and related constructions from languages other than English.

  26.

    In principle-based parsers, to be discussed in the next chapter, “interleaving” the principles of a modular grammar such as GB leads to impressive gains in efficiency. See Berwick (1991a, b) and Merlo (1995).

  27.

    This formulation is a paraphrase of the one found in deVincenzi (1991). The second conjunct is equivalent to what Frazier and Clifton (1989) refer to as the “Active Filler Hypothesis.” I use the locution ‘gap/trace’ to avoid commitment to formalisms that have traces and movement operations in their theoretical toolkit.

  28.

    J. D. Fodor and L. Frazier (1978, 1980) do supply an additional argument for RT. Fodor and Frazier claim that their parsing model provides a principled explanation of why MA and LC are true of the parser, precisely in virtue of the model’s commitment to representing the grammar of a language in a separate data structure. They point out that the competing models, which implement the grammar “procedurally,” can build in such principles only in an ad hoc way, if at all. From this, they conclude that it “seems unavoidable that the well-formedness conditions on phrase markers are stored independently of the executive unit, and are accessed by it as needed.” (Frazier and Fodor 1978: 322n). We will return to this argument in Chap. 9.

  29.

    Some grammars are drawn from a hand-crafted corpus, which can be a fragment of natural language (Magerman 1995), or an artificial language generated by a hand-crafted grammar (Rohde 2002).

  30.

    Though see Weinberg (1999) for a discussion of some of the predictive failures of these models. I summarize Weinberg’s argument in Chap. 9, Sect. 9.6.

  31.

    Manning and Schütze (2000) comment on this point, providing valuable methodological guidance to computational linguists: “It is not hard to induce some form of structure over a corpus of text. Any algorithm for making chunks—such as recognizing common subsequences—will produce some form of representation of sentences, which we might interpret as a phrase structure tree. However, most often the representations one finds bear little resemblance to the kind of phrase structure that is normally proposed in linguistics and NLP. Now, there is enough argument and disagreement within the field of syntax that one might find someone who has proposed syntactic structures similar to the ones that the grammar induction procedure which you have sweated over happens to produce. This can and has been taken as evidence for that model of syntactic structure. However, such an approach has more than a whiff of circularity to it. The structures found depend on the implicit inductive bias of the learning program. This suggests another tack. We need to get straight what structure we expect our model to find before we start building it. This suggests that we should begin by deciding what we want to do with parsed sentences. There are various possible goals: using syntactic structure as a first step towards semantic interpretation, detecting phrasal chunks for indexing in an IR system, or trying to build a probabilistic parser that outperforms n-gram models as a language model. For any of these tasks, the overall goal is to produce a system that can place a provably useful structure over arbitrary sentences, that is, to build a parser. For this goal, there is no need to insist that one begins with a tabula rasa. If one just wants to do a good job at producing useful syntactic structure, one should use all the prior information that one has” (407–408).

  32.

    Here, “in principle” means “identification in the limit,” where no bounds are placed on the amount of data the learning model is allowed to see.

  33.

    Gibson cites the results of a number of psycholinguistic experiments that establish the reality of this processing difficulty: “The object extraction is more complex by a number of measures including phoneme monitoring, on-line lexical decision, reading times, and response-accuracy to probe questions (Holmes 1973; Hakes et al. 1976; Wanner and Maratsos 1978; Holmes and O’Regan 1981; Ford 1983; Waters et al. 1987; King and Just 1991). In addition, the volume of blood flow in the brain is greater in language areas for object-extractions than for subject-extractions (Just et al. 1996a, b; Stromswold et al. 1996), and aphasic stroke patients cannot reliably answer comprehension questions about object-extracted RCs, although they perform well on subject-extracted RCs (Caramazza and Zurif 1976; Caplan and Futter 1986; Grodzinsky 1989; Hickok et al. 1993)” (p. 2).

  34.

    Resolving the dispute between advocates of serial and parallel models would clearly yield a deeper understanding of the HSPM. For discussion, see Crocker et al. (2000), and references therein.

  35.

    Of course, a serial parser can likewise make use of statistical information contained in PCFGs, e.g., to determine which rule to apply or which lexical item to select in cases of ambiguity. See Jurafsky and Martin (2008: Chap. 14).
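    As a toy illustration (the rules and probabilities below are made up), a serial parser consulting a PCFG can simply commit to the most probable expansion of a category when more than one rule is licensed:

        # A serial choice among competing PCFG expansions (toy probabilities).
        PCFG = {
            "VP": [(0.6, ["V", "NP"]),          # transitive frame
                   (0.3, ["V"]),                # intransitive frame
                   (0.1, ["V", "NP", "PP"])],   # V NP PP frame
        }

        def best_expansion(category):
            """Return the single most probable right-hand side for `category`."""
            prob, rhs = max(PCFG[category], key=lambda pair: pair[0])
            return prob, rhs

        print(best_expansion("VP"))   # (0.6, ['V', 'NP'])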

  36.

    Though see Clark (2010) for a probabilistic parser that makes use of the sophisticated Combinatory Categorial Grammar (CCG). It seems likely that probabilistic extensions of such sophisticated grammars will emerge in the coming years.

  37.

    An interesting wrinkle: Manning (2003) argues that formal linguistics should itself take the statistical turn, so to speak, and cast its grammars in a probabilistic formalism. Thus, one way of securing a tight relation between the linguist’s descriptive grammar and the psycholinguist’s mental grammar is to adjust the former, not the latter.

References

  • Abney, S. P., & Johnson, M. (1991). Memory requirements and local ambiguities of parsing strategies. Journal of Psycholinguistic Research, 20(3), 233–250.
  • Aho, A. V., & Ullman, J. D. (1972). The theory of parsing, translation, and compiling (Vol. 1). Englewood Cliffs: Prentice-Hall.
  • Berwick, R. C. (1991a). Principles of principle-based parsing. In R. C. Berwick, S. P. Abney, & C. Tenny (Eds.), Principle-based parsing: Computation and psycholinguistics (pp. 1–37). Dordrecht: Kluwer Academic Publishers.
  • Berwick, R. C. (1991b). Principle-based parsing. In P. Sells, S. M. Shieber, & T. Wasow (Eds.), Foundational issues in natural language processing (pp. 115–226). Kluwer Academic Publishers.
  • Berwick, R. C., & Weinberg, A. S. (1984). The grammatical basis of linguistic performance. Cambridge, MA: MIT Press.
  • Bod, R. (1998). Beyond grammar: An experience-based theory of language. Stanford: CSLI Publications.
  • Bod, R., Hay, J., & Jannedy, S. (Eds.). (2003). Probabilistic linguistics. Cambridge, MA: MIT Press.
  • Bresnan, J. (Ed.). (1982). The mental representation of grammatical relations. Cambridge, MA: MIT Press.
  • Bresnan, J., & Kaplan, R. (1982a). Introduction: Grammars as mental representations of language. In J. Bresnan (Ed.), The mental representation of grammatical relations (pp. xvii–xlii). Cambridge, MA: MIT Press.
  • Bresnan, J. (2001). Lexical-functional syntax. Malden: Blackwell Publishers.
  • Carreiras, M., & Clifton, C. (1993). Relative clause interpretation preferences in Spanish and English. Language and Speech, 36(4), 353–372.
  • Charniak, E. (1993). Statistical language learning. Cambridge, MA: MIT Press.
  • Charniak, E. (1996). Tree-bank grammars. In Proceedings of the thirteenth national conference on artificial intelligence (pp. 1031–1036).
  • Charniak, E. (1997). Statistical parsing with a context-free grammar and word statistics. In Proceedings of the fourteenth national conference on artificial intelligence (pp. 598–603).
  • Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
  • Clark, A. (2010). Statistical parsing. In A. Clark, C. Fox, & S. Lappin (Eds.), Handbook of computational linguistics and natural language processing (pp. 333–363). Malden: Wiley-Blackwell.
  • Collins, M. (1999). Head-driven statistical models for natural language parsing. PhD thesis, University of Pennsylvania.
  • Costa, F., Frasconi, P., Lombardo, V., & Soda, G. (2000). Learning incremental syntactic structures with recursive neural networks. In Proceedings of the fourth international conference on knowledge-based intelligent engineering systems and allied technologies (Vol. 2, pp. 458–461). IEEE.
  • Crocker, M. W. (1999). Mechanisms for sentence processing. In S. Garrod & M. Pickering (Eds.), Language processing. Taylor and Francis.
  • Crocker, M. W., & Brants, T. (2000a). Probabilistic parsing and psychological plausibility. In Proceedings of the 18th conference on computational linguistics – Volume 1.
  • Crocker, M. W., & Brants, T. (2000b). Wide-coverage probabilistic sentence processing. Journal of Psycholinguistic Research, 29(6), 647–669.
  • Crocker, M. W., Pickering, M., & Clifton, C. (Eds.). (2000). Architectures and mechanisms for language processing. Cambridge: Cambridge University Press.
  • Cuetos, F., & Mitchell, D. (1988). Cross-linguistic differences in parsing: Restrictions on the use of the late closure strategy in Spanish. Cognition, 30, 73–105.
  • DeVincenzi, M. (1991). Filler-gap dependencies in a null subject language: Referential and non-referential WHs. Journal of Psycholinguistic Research, 20(3), 197–213.
  • Devitt, M. (2006a). Ignorance of language. Oxford: Oxford University Press.
  • Devitt, M. (2006b). Defending ignorance of language: Responses to the Dubrovnik papers. Croatian Journal of Philosophy, 6, 571–606.
  • Earley, J. (1970). An efficient context-free parsing algorithm. Communications of the Association for Computing Machinery, 13(2).
  • Fernández, E. M., & Cairns, H. S. (Eds.). (2011b). Fundamentals of psycholinguistics. Malden: Wiley-Blackwell.
  • Ferreira, F., & Clifton, C., Jr. (1986). The independence of syntactic processing. Journal of Memory and Language, 25, 348–368.
  • Filip, H., Tanenhaus, M. K., Carlson, G. N., Allopenna, P. D., & Blatt, J. (2002). Reduced relatives judged hard require constraint-based analyses. In P. Merlo & S. Stevenson (Eds.), Sentence processing and the lexicon: Formal, computational, and experimental perspectives (pp. 255–280). Amsterdam: Benjamins.
  • Fodor, J. A. (1998a). Concepts: Where cognitive science went wrong. Oxford: Oxford University Press.
  • Fodor, J. D., & Frazier, L. (1980). Is the human sentence parsing mechanism an ATN? Cognition, 8, 417–459.
  • Fodor, J. D., & Inoue, A. (1994). The diagnosis and cure of garden paths. Journal of Psycholinguistic Research, 23, 407–434.
  • Fodor, J. D., & Inoue, A. (1998). Attach anyway. In J. D. Fodor & F. Ferreira (Eds.), Reanalysis and sentence processing (p. xx). New York: Kluwer Academic Publishers.
  • Fodor, J. D., & Inoue, A. (2000a). Garden path repair: Diagnosis and triage. Language and Speech, 43(3), 261–271.
  • Fodor, J. D., & Inoue, A. (2000b). Syntactic features in reanalysis: Positive and negative features. Journal of Psycholinguistic Research, 29(1).
  • Fodor, J. A., Bever, T., & Garrett, M. (1974). The psychology of language. New York: McGraw Hill.
  • Frazier, L. (1979). On comprehending sentences: Syntactic parsing strategies. PhD dissertation. Available at: http://digitalcommons.uconn.edu/dissertations/AAI7914150/
  • Frazier, L., & Clifton, C. (1989). Successive cyclicity in the grammar and the parser. Language and Cognitive Processes, 4(2), 93–126.
  • Frazier, L., & Fodor, J. D. (1978). The sausage machine: A new two-stage parsing model. Cognition, 6, 291–325.
  • Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14, 178–210.
  • Gibson, E. A. F. (1991). A computational theory of human linguistic processing: Memory limitations and processing breakdown. Unpublished Ph.D. dissertation, Carnegie Mellon University. Available at: tedlab.mit.edu/tedlab_website/researchpapers/Gibson%201991.pdf
  • Gibson, E. A. F. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition, 68, 1–76.
  • Goodman, N. (1954/1983). Fact, fiction, and forecast (4th ed.). Cambridge, MA: Harvard University Press.
  • Hale, J. T. (1999). Dynamical parsing and harmonic grammar. Unpublished report. Available at: courses.cit.cornell.edu/jth99/prism.ps
  • Hale, J. T. (2001). A probabilistic parser as a psycholinguistic model. In Proceedings of the NAACL. Available at: http://www.aclweb.org/anthology/N/N01/N01-1021.pdf
  • Hale, J. T. (2003). Grammar, uncertainty and sentence processing. Unpublished Ph.D. dissertation, Johns Hopkins University.
  • Hale, J. T. (2011). What a rational parser would do. Cognitive Science, 35(3), 399–443.
  • Hale, J. T., & Smolensky, P. (2006). Harmonic grammars and harmonic parsers for formal languages. In P. Smolensky & G. Legendre (Eds.), The harmonic mind (Vol. 1, Ch. 10, pp. 393–415). Cambridge, MA: MIT Press.
  • Harkema, H. (2001). Parsing minimalist languages. PhD dissertation, UCLA. Available at: http://www.linguistics.ucla.edu/people/stabler/paris08/Harkema01.pdf
  • Hart, P. E., Nilsson, N. J., & Raphael, B. (1968). A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2), 100–107.
  • Johnson, M. (1989). Parsing as deduction: The use of knowledge of language. Journal of Psycholinguistic Research, 18(1), 105–128.
  • Johnson, M. (1991). Deductive parsing: The use of knowledge of language. In R. Berwick, S. Abney, & C. Tenny (Eds.), Principle-based parsing. Dordrecht: Kluwer Academic Publishers.
  • Jurafsky, D. (1996). A probabilistic model of lexical and syntactic disambiguation. Cognitive Science, 20, 137–194.
  • Jurafsky, D. (2003). Probabilistic modeling in psycholinguistics: Linguistic comprehension and production. In R. Bod, J. Hay, & S. Jannedy (Eds.), Probabilistic linguistics. Cambridge, MA: MIT Press.
  • Kaplan, R. M. (1973). A general syntactic processor. In R. Rustin (Ed.), Natural language processing (pp. 193–241). New York: Algorithmics Press.
  • Karttunen, L., & Zwicky, A. M. (1985). Introduction. In D. R. Dowty, L. Karttunen, & A. M. Zwicky (Eds.), Natural language parsing: Psychological, computational, and theoretical perspectives. Cambridge: Cambridge University Press.
  • Kilbury, J. (1985). Chart parsing and the Earley algorithm. In U. Klenk (Ed.), Kontextfreie Syntaxen und verwandte Systeme (pp. 76–89). Tübingen: Niemeyer. Available at: user.phil-fak.uni-duesseldorf.de/~kilbury/Publ/Kilbury1985-CPEA.pdf
  • Kimball, J. (1973). Seven principles of surface structure parsing in natural language. Cognition, 2, 15–47.
  • Knowles, J. (2000). Knowledge of grammar as a propositional attitude. Philosophical Psychology, 13(3), 325–353.
  • MacDonald, M. E., Pearlmutter, D., & Seidenberg, M. (1994). The lexical nature of syntactic ambiguity resolution. Psychological Review, 101, 678–703.
  • Magerman, D. M. (1994). Natural language parsing as statistical pattern recognition. PhD thesis, Stanford University.
  • Magerman, D. M. (1995). Statistical decision-tree models for parsing. In ACL 33 (pp. 276–283).
  • Magerman, D. M., & Marcus, M. P. (1991). Pearl: A probabilistic chart parser. In EACL 4.
  • Magerman, D. M., & Weir, C. (1992). Efficiency, robustness, and accuracy in picky chart parsing. ACL, 30, 40–47.
  • Manning, C., & Carpenter, B. (1997). Probabilistic parsing using left corner language models. In Proceedings of the fifth international workshop on parsing technologies.
  • Manning, C. D. (2003). Probabilistic syntax. In R. Bod, J. Hay, & S. Jannedy (Eds.), Probabilistic linguistics. Cambridge, MA: MIT Press.
  • Manning, C. D., & Schütze, H. (1999). Foundations of statistical natural language processing. Cambridge, MA: MIT Press.
  • Marcus, M. P. (1980). A theory of syntactic recognition for natural language. Cambridge, MA: MIT Press.
  • Marr, D. (1980). Vision: A computational investigation into the human representation and processing of visual information. Cambridge, MA: MIT Press.
  • McRoy, S., & Hirst, G. (1990). Race-based parsing and syntactic disambiguation. Cognitive Science, 14, 313–353.
  • Merlo, P. (1995). Modularity and information content classes in principle-based parsing. Computational Linguistics, 21(4), 515–541.
  • Moore, R. C. (2000). Improved left-corner chart parsing for large context-free grammars. In Proceedings of the sixth international workshop on parsing technologies. Available at: http://research.microsoft.com/apps/pubs/default.aspx?id=68866
  • Morawietz, F. (2000). Chart parsing and constraint programming. In Proceedings of the 18th conference on computational linguistics – Volume 1. Available at: http://portal.acm.org/citation.cfm?id=990900
  • O’Donnell, M. (1995). Sentence analysis and generation: A systemic perspective. Unpublished Ph.D. dissertation, University of Sydney. Available at: http://www.wagsoft.com/Papers/Thesis/
  • Pereira, F. C. N., & Warren, D. H. D. (1983). Parsing as deduction. In Proceedings of the 21st annual meeting of the Association for Computational Linguistics (pp. 137–144).
  • Pollard, C., & Sag, I. A. (1994). Head-driven phrase structure grammar. Chicago: University of Chicago Press.
  • Pritchett, B. (1992). Grammatical competence and parsing performance. Chicago: University of Chicago Press.
  • Rattan, G. (2002). Tacit knowledge of grammar: A reply to Knowles. Philosophical Psychology, 15(2), 135–154.
  • Roark, B., & Johnson, M. (1999). Efficient probabilistic top-down and left-corner parsing. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics (pp. 421–428). Available at: www.ldc.upenn.edu/acl/P/P99/P99-1054.pdf
  • Rohde, D. L. T. (2002). A connectionist model of sentence comprehension and production. Ph.D. dissertation, Department of Computer Science, Carnegie Mellon University.
  • Schabes, Y., Abeille, A., & Joshi, A. K. (1988). Parsing strategies with ‘lexicalized’ grammars: Application to tree adjoining grammars. In Proceedings of the 12th international conference on computational linguistics. Available at: acl.ldc.upenn.edu/C/C88/C88-2121.pdf
  • Shieber, S. M. (1985). Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8, 333–343.
  • Shieber, S. M., Schabes, Y., & Pereira, F. C. N. (1993). Principles and implementation of deductive parsing (Technical Report CRCT TR-11-94). Cambridge, MA: Computer Science Department, Harvard University. Available at: http://arXiv.org/
  • Shieber, S. M., & Johnson, M. (1993). Variations on incremental interpretation. Journal of Psycholinguistic Research, 22(2).
  • Smolensky, P., & Legendre, G. (Eds.). (2006). The harmonic mind. Cambridge, MA: MIT Press.
  • Stabler, E. P. (1994). The finite connectivity of linguistic structure. In C. Clifton Jr., L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing (pp. 303–336). Hillsdale: Lawrence Erlbaum Associates.
  • Stabler, E. P. (2001). Minimalist grammars and recognition. In C. Rohrer, A. Rossdeutscher, & H. Kamp (Eds.), Linguistic form and its computation. Stanford: CSLI Publications.
  • Steedman, M. (1992). Grammars and processors. University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-92-53. Available at: http://repository.upenn.edu/cisreports/475
  • Steedman, M. (2000). The syntactic process. Cambridge, MA: MIT Press.
  • Thomas, J. D. (1995). Center-embedding and self-embedding in human language processing. MS thesis, MIT.
  • Trueswell, J. C., Tanenhaus, M. K., & Garnsey, S. M. (1994). Semantic influences on parsing: Use of thematic role information in syntactic disambiguation. Journal of Memory and Language, 33, 285–318.
  • Wanner, E. (1980). The ATN and the sausage machine: Which one is baloney? Cognition, 8, 209–225.
  • Wanner, E., & Maratsos, M. (1978). An ATN approach to comprehension. In M. Halle, J. W. Bresnan, & G. A. Miller (Eds.), Linguistic theory and psychological reality. Cambridge, MA: MIT Press.
  • Weinberg, A. (1999). A minimalist theory of human sentence processing. In S. D. Epstein & N. Hornstein (Eds.), Working minimalism (pp. 283–315). Cambridge, MA: MIT Press.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Pereplyotchik, D. (2017). Computational Models and Psychological Reality. In: Psychosyntax. Philosophical Studies Series, vol 129. Springer, Cham. https://doi.org/10.1007/978-3-319-60066-6_8
