On Directionality of Phrase Structure Building


Minimalism in grammatical theorizing (Chomsky in The minimalist program. MIT Press, Cambridge, 1995) led to simpler linguistic devices and a sharper focus on the core properties of the structure building engine: a lexicon and a free (recursive) phrase formation operation, dubbed Merge, are the basic components that serve in building syntactic structures. Here I suggest that, by looking at the elementary restrictions that apply to Merge (i.e., selection and licensing of functional features), we can conclude that a re-orientation of the syntactic derivation (from bottom-up/right-left to top-down/left-right) is necessary to make the theory simpler, especially for long-distance (filler-gap) dependencies, and empirically more adequate. If the structure building operations assembled lexical items in the order in which they are pronounced (Phillips in Order and structure. PhD thesis, MIT, 1996; Chesi in Phases and cartography in linguistic computation: Toward a cognitively motivated computational model of linguistic competence. PhD thesis, Università di Siena, 2004; Chesi in Competence and computation: Toward a processing friendly minimalist grammar. Unipress, Padova, 2012), on-line performance data could better fit the grammatical model, without resorting to external “performance factors.” The phase-based, top-down (and, as a consequence, left-right) Minimalist Grammar discussed here goes in this direction, ultimately showing how strong Islands (Huang in Logical relations in Chinese and the theory of grammar. PhD thesis, MIT, 1982) and intervention effects (Gordon et al. in J Exp Psychol Learn Mem Cogn 27:1411–1423, 2001; Gordon et al. in J Mem Lang 51:97–114, 2004) could be better explained in structural terms under this unconventional derivational direction.


  1.

    In the examples below, Merge is a function taking two arguments (inside round brackets); curly brackets indicate unordered sets (i.e., the result of the Merge operation). Square brackets will be used for ordered constituents [D N] and ordered sets of features within a lexical item L: [\(_{\mathrm{F}1\,\mathrm{F}2\, \mathrm{F}3}\) ... L ].
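
    As a concrete (if simplified) illustration, this notation can be rendered in a few lines of Python; the names `LexicalItem` and `merge` below are mine, not part of the formalism:

```python
# Sketch of the notation above (illustrative only): Merge takes two
# arguments and returns an UNORDERED set; feature lists are ORDERED.

from dataclasses import dataclass

@dataclass(frozen=True)
class LexicalItem:
    phon: str             # phonetic form, e.g. "who"
    features: tuple = ()  # ordered feature list, e.g. ("=D", "=D", "V")

def merge(a, b):
    """Merge(a, b) = {a, b}: an unordered set, as the curly brackets indicate."""
    return frozenset({a, b})

who = LexicalItem("who", ("+wh", "D"))
kisses = LexicalItem("kisses", ("=D", "=D", "V"))

result = merge(who, kisses)
assert result == merge(kisses, who)  # unordered: argument order is irrelevant
```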

  2.

    Recent discussion on the “labeling” of tree nodes seems to me to go exactly in this direction (Cecchetto and Donati 2010): the “label” (i.e., the interpretable result of Merge, accessible to other Merge operations) is the “probe” (i.e., the merged item that selects the other).

  3.

    Note that “=” does not denote equivalence when employed in this way; “=X” means that X is selected.

  4.

    As I will discuss in the “Non-Local Dependencies” section, all other possible word orders would therefore result from Movement (i.e., displacement of certain items to obtain a configuration able to remove features that are uninterpretable at the C-I interface).

  5.

    Stabler (1997) uses the “+” (plus) and “\(-\)” (minus) signs to prefix “interpretable” and “uninterpretable” features, respectively (Chomsky 1995:280). Adapting Stabler’s formalism here to the probe-goal metaphor, “+” marks a probe feature and “\(-\)” a goal feature within a movement-based, non-local dependency.

  6.

    Such a relation between the criterial and thematic positions is not present in all wh-dependencies. For example, as discussed in Rizzi (2001) and Ko (2005), why is a wh-operator item generated in a left-peripheral position and not involved in a movement operation of the kind discussed here. Here I will only discuss argumental wh-items like who, what, and which.

  7.

    Being the most accessible feature means being the first (leftmost) feature in the ordered feature list associated with a lexical item.
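
    A toy rendering of this idea (the helper names `most_accessible` and `consume` are mine, not the paper's): the ordered feature list can be modeled as a tuple consumed from the left.

```python
# Toy model of feature accessibility: features form an ORDERED list, and
# only the first (leftmost) feature is accessible at any derivation step.

def most_accessible(features):
    """The first (leftmost) feature in the ordered list."""
    return features[0]

def consume(features):
    """Use the accessible feature, exposing the next one in the list."""
    return features[0], features[1:]

# e.g. a transitive verb carrying two selection features before its category:
f, rest = consume(("=D", "=D", "V"))  # f is "=D"; rest is ("=D", "V")
```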

  8.

    The null hypothesis would be that elements are pronounced as soon as they are merged, but this might be too strong, and other morpho-phonetic restrictions should be taken into consideration. A source of cross-linguistic variation (resulting in a difference of linear order) can be related to different lexical selection. As suggested by Choi and Yoon (2006), in languages like Japanese, case-marked DPs select (i.e., create expectations for) specific verb types, leading to head-final order. An additional source of variation in linear order is movement, discussed in the “Moving Unexpected Features” section.

  9.

    The diagram below shows the derivation history. Only the relevant features are considered (=D and D). Notice that multiple “copies” of the verbal head kisses are present in the diagram, but only one of them will be pronounced. While the selection mechanism is assumed to be universal, the option to spell out one specific copy could be parameterized cross-linguistically. The set of dominance relations among constituents resulting at step v. is the same as in (8).

  10.

    Step iv. is not a stipulation but an empirical point: expansions “to the right” will be lexicalized (and eventually pronounced) later than what has already been lexicalized (and pronounced).

  11.

    It would be interesting to derive the argument selection order from independent principles instead of postulating it; exploring this option is, however, beyond the scope of this article. Note that some parameterization options might apply to steps i. or iv., leading to the pronunciation of the verb at i. (base VSO order) or iv. (SVO), or, eventually, when the selection requirements are completed, i.e., after step v. (SOV). I will not explore these options here, and I will assume that the selecting verb is pronounced as soon as it is first merged, i.e., at step i. (As will be clear in the “Moving Unexpected Features” section, SVO order is derived by subject movement.)

  12.

    Within the Tree Adjoining Grammar framework (Joshi 1985) similar unlexicalized feature structures are dubbed “initial trees.”

  13.

    Only items that are not used or fully interpreted in a given position are entered into the memory buffer to be moved. The parenthesis notation “(who)” in the memory buffer in (16) and (17) indicates that the item that is re-merged at the argument position will be silent, since its phonetic features are already used in the criterial position where it was first introduced. I will not discuss semantic features here, but it is fair to assume (Sauerland 2004) that a crucial part of the lexical content of a moved DP must be interpreted in the selected argument position and not in the criterial position.
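
    The buffer mechanism described here can be sketched as follows (a simplification, with names of my own choosing): an item pronounced in its criterial position leaves a silent copy in the buffer, which is later re-merged at the selected argument position.

```python
# Sketch of the memory-buffer mechanism: phonetic features are spent in the
# criterial position, so the buffered copy re-merged at the argument
# position is silent, written "(who)". (Illustrative names, not the paper's.)

pronounced = []
memory_buffer = []

def merge_in_criterial_position(item):
    pronounced.append(item)             # phonetic features used here
    memory_buffer.append(f"({item})")   # silent copy awaiting re-merge

def remerge_at_argument_position():
    return memory_buffer.pop()          # re-merged, but not pronounced again

merge_in_criterial_position("who")
gap = remerge_at_argument_position()    # "(who)"
```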

  14.

    Also, an unlexicalized feature cluster (cf. an “initial tree”; Joshi 1985; see footnote 12) such as [+wh +D N] can be used to expand the +wh feature. This can result in lexicalizations such as: [\(_{+\mathrm{wh}+\mathrm{D}}\) which] [\(_\mathrm{N}\) boy]. Certain constraints apply in this case; see discussion in Chesi (2013).

  15.

    This is again an option that could be parameterized to explain the spell-out of “partial copies” (Felser 2004). As for wh-in-situ languages, the structure of the derivation does not change, only the lexical items do; for instance, the +wh operator can be lexicalized independently from the +D N argumental cluster, requiring a proper scope relation rather than movement through the memory buffer.

  16.

    +S is placed below +T because of semantic restrictions forcing +T to be next to +C (as in the standard understanding of T-to-C movement; Pesetsky and Torrego 2001). Here head movement is not needed to achieve “subject-aux inversion.”

  17.

    This, again, requires cross-linguistic parameterization. For instance, the subject criterial position, signaled by the +S feature, which is usually assumed to be a focalized position in English (Huck and Na 1990), is absent in (16) because of the focalized wh-item. As a result, the wh-item is interpreted as the topic of the predication (and it will be interpreted as the subject of the predication in the relevant selected position).

  18.

    There are several kinds of gap-first constructions, but they will not be discussed here.

  19.

    An anonymous reviewer has noted that this might conflict with resumption, i.e., “last resort” strategies that replace an unlicensed gap with a resumptive pronoun, to rescue an otherwise ill-formed sentence. Though this topic is beyond the scope of this article, it is interesting to note that, along the lines of Shlonsky (1992), it could be assumed that “gaps” (or “traces” in standard minimalist terms) and “resumptive pronouns” are in complementary distribution and that the gap versus pronoun alternation depends on edge-properties of the constituent in which each can appear; see Shlonsky (1992) for details. Note that this explanation is not readily available in a bottom-up approach, where the trace/pronoun alternation must be resolved before the edge features of the containing constituent are merged. See the proposal by Boeckx (2003) for a minimalist account of resumption, which however does not solve this problem of the order of operations.

  20.

    Evidence for this comes from Irish complementizer “agreement” when an A\(^{\prime }\)-dependency passes through multiple complementizer positions (McCloskey 2002, a.o.), and from the partial spell-out in German of copies of the moved constituent in intermediate positions (Felser 2004, a.o.).

  21.

    For the sake of simplicity, only CP phases are discussed here.

  22.

    See Chesi (2007:58–60) for a critique of this approach to cyclicity in terms of “edge features.”

  23.

    Several scholars have proposed ways of dealing with these problems. For instance, within a “multiple spell-out” framework (Uriagereka 1999), it is proposed to introduce the notions of “independent derivational workspaces” and “multiple numerations” (Nunes and Uriagereka 2000; see the section “Creating Phrase Structures Through Recursive Merge” above). Other theories postulate different movement triggers that do not need features (notably Moro’s 2000 “dynamic antisymmetry”). However, some problems remain concerning the nature of phases and the source of island phenomena.

  24.

    The Minimalist phases DP and CP (Chomsky 2008) exactly correlate with the topmost extended projections of N and V phase heads, respectively (i.e., the beginning of a phase, in top-down terms). On the other hand, “little v” is not a phase boundary here, contrary to standard Minimalism (Chomsky 2008), since it corresponds to the first thematic verbal shell, while it is the last verbal shell (i.e., the last selected complement) that creates the lower boundary of a verbal phase from a top-down perspective.

  25.

    Unselected adjuncts and (restrictive) relative clauses are introduced in the derivation as a result of functional feature expansions (e.g., “+R” for restrictive relative clause; “+MOD” for modal adjuncts like with absolute certainty). Their relative position should be universally determined (Cinque 1999), but their final order can be affected by shifting operations triggered by scope necessities or preferences for computational complexity minimization (“NP-shift”-like operations in head-initial languages or “scrambling” in head-final languages). Space limits prevent full discussion here, but see Chesi (2012:183).

  26.

    Prepositional phrases are considered to be extended nominal projections: [+P +D N]. +P is integrated in the phrase structure as a functional case marker.

  27.

    What is presented here as a stipulation can be derived from computational complexity considerations: i.e., the contrast between “true recursion” (nesting) and “tail recursion” (sequential). See Abelson and Sussman (1996), Bianchi and Chesi (2006), Chesi (2012:166–170).
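
    The contrast between the two kinds of recursion (cf. Abelson and Sussman 1996) can be illustrated with a minimal example: the nested version keeps pending work on the call stack, while the tail-recursive version carries its partial result in an accumulator and leaves nothing pending between calls.

```python
# "True" (nested) recursion vs tail recursion: both compute the length of a
# list, but only the nested version accumulates deferred "+1" operations.

def length_nested(xs):
    if not xs:
        return 0
    return 1 + length_nested(xs[1:])     # pending "+1" kept on the stack

def length_tail(xs, acc=0):
    if not xs:
        return acc
    return length_tail(xs[1:], acc + 1)  # nothing pending: a sequential process

assert length_nested([1, 2, 3]) == length_tail([1, 2, 3]) == 3
```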

  28.

    I wish to thank Janet Fodor who pointed out this example to me.

  29.

    See, for instance, for Subject Islands, Bianchi and Chesi (2006); for Adjunct Islands, Levine and Sag (2003) and Stepanov (2007); and Phillips (2013) for a comparison of formal and processing oriented accounts of island constraints.

  30.

    Adjunct and complex NP islands can be accounted for using the very same idea, assuming that the functional features licensing adjuncts and relative clauses are computed before the selected gap in the matrix clause; then adjuncts and RCs can be shifted, ending up rightward with respect to the phase head (Chesi 2012:183).


  1. Abelson, H., & Sussman, J. (1996). Structure and interpretation of computer programs (pp. 261–264). Cambridge, MA: MIT Press.

  2. Belletti, A., & Rizzi, L. (2013). Intervention in grammar and processing. In I. Caponigro & C. Cecchetto (Eds.), From grammar to meaning: The spontaneous logicality of language (pp. 293–311). Cambridge: Cambridge University Press.

  3. Bever, T. G. (1970). The cognitive basis for linguistic structures. In J. R. Hayes (Ed.), Cognition and the development of language (pp. 279–362). New York, NY: Wiley.

  4. Bianchi, V. (2009). A note on backward anaphora. Rivista di Grammatica Generativa, 34, 3–34.

  5. Bianchi, V., & Chesi, C. (2006). Phases, left branch islands, and computational islands. University of Pennsylvania Working Papers in Linguistics, 12(1), 15–28.

  6. Bianchi, V., & Chesi, C. (2010). Reversing the perspective on Quantifier Raising. Rivista di Grammatica Generativa, 35, 3–38.

  7. Bianchi, V., & Chesi, C. (2012). Subject islands and the Subject Criterion. In V. Bianchi & C. Chesi (Eds.), Enjoy Linguistics! Papers offered to Luigi Rizzi on the occasion of his 60th birthday (pp. 25–53). Siena: CISCL Press.

  8. Boeckx, C. (2003). Islands and chains: Resumption as stranding. Amsterdam: John Benjamins.

  9. Cecchetto, C., & Donati, C. (2010). On labeling: Principle C and head movement. Syntax, 13(3), 241–278.

  10. Chesi, C. (2004). Phases and cartography in linguistic computation: Toward a cognitively motivated computational model of linguistic competence. PhD thesis, Università di Siena.

  11. Chesi, C. (2007). An introduction to phase-based minimalist grammars: Why move is top-down from left-to-right. Studies in Linguistics, 1, 49–90.

  12. Chesi, C. (2012). Competence and computation: Toward a processing friendly minimalist grammar. Padova: Unipress.

  13. Chesi, C. (2013). Do the ‘right’ thing. Studies in Linguistics, 6, 131–164.

  14. Choi, Y., & Yoon, J. (2006). Argument cluster coordination and constituency test (non)-conflicts. Paper presented at NELS 37, University of Illinois at Urbana-Champaign.

  15. Chomsky, N. (1973). Conditions on transformations. In S. Anderson & P. Kiparsky (Eds.), A festschrift for Morris Halle (pp. 232–286). New York: Holt Rinehart, & Winston.

  16. Chomsky, N. (1981). Lectures on government and binding. Dordrecht: Foris.

  17. Chomsky, N. (1986). Barriers. Cambridge, MA: MIT Press.

  18. Chomsky, N. (1995). The minimalist program. Cambridge, MA: MIT Press.

  19. Chomsky, N. (1998). Minimalist inquiries: The framework (No. 15). MIT Working Papers in Linguistics, MIT, Department of Linguistics.

  20. Chomsky, N. (2001). Derivation by phase. In M. Kenstowicz (Ed.), Ken Hale : A life in language (pp. 1–52). Cambridge, MA: MIT Press.

  21. Chomsky, N. (2008). On phases. In R. Freidin, C. P. Otero, & M.-L. Zubizarreta (Eds.), Foundational issues in linguistic theory: Essays in honor of Jean-Roger Vergnaud. Cambridge, MA: MIT Press.

  22. Chomsky, N. (2013). Problems of projection. Lingua, 130, 33–49.

  23. Cinque, G. (1999). Adverbs and functional heads: A cross-linguistic perspective. Oxford: Oxford University Press.

  24. Collins, C. (1997). Local economy. Cambridge, MA: MIT Press.

  25. Collins, C. (2002). Eliminating labels. In S. D. Epstein & T. D. Seely (Eds.), Derivation and explanation in the minimalist program. Malden, MA: Blackwell.

  26. Culicover, P. W., & Postal, P. M. (2001). Parasitic gaps. Cambridge, MA: MIT Press.

  27. De Vincenzi, M. (1991). Syntactic parsing strategies in Italian: The minimal chain principle. Berlin: Springer.

  28. De Vincenzi, M., Arduino, L., Ciccarelli, L., & Job, R. (1999). Parsing strategies in children’s comprehension of interrogative sentences. In Proceedings of ECCS ’99. Siena.

  29. Engdahl, E. (1983). Parasitic gaps. Linguistics and Philosophy, 6, 5–34.

  30. Felser, C. (2004). Wh-copying, phases and successive cyclicity. Lingua, 114(5), 543–574.

  31. Fodor, J. D. (1978). Parsing strategies and constraints on transformations. Linguistic Inquiry, 9, 427–473.

  32. Fox, D., & Pesetsky, D. (2005). Cyclic linearization of syntactic structure. In K. E. Kiss (Ed.), Object shift, special issue, Theoretical Linguistics, 31(1–2), 1–46.

  33. Frazier, L., & Clifton, C., Jr. (1989). Successive cyclicity in the grammar and the parser. Language and Cognitive Processes, 4(2), 93–126.

  34. Frazier, L., & Fodor, J. D. (1978). The sausage machine: A new two-stage parsing model. Cognition, 6, 291–325.

  35. Frazier, L. (1978). On comprehending sentences: Syntactic parsing strategies. PhD Thesis, University of Connecticut.

  36. Friedmann, N., Belletti, A., & Rizzi, L. (2009). Relativized relatives: Types of intervention in the acquisition of A-bar dependencies. Lingua, 119, 67–88.

  37. Gibson, E., & Pearlmutter, N. J. (2000). Distinguishing serial and parallel parsing. Journal of Psycholinguistic Research, 29(2), 231–240.

  38. Gordon, P., Hendrick, R., & Johnson, M. (2004). Effects of noun phrase type on sentence complexity. Journal of Memory and Language, 51, 97–114.

  39. Gordon, P. C., Hendrick, R., & Levine, W. H. (2002). Memory-load interference in syntactic processing. Psychological Science, 13(5), 425–430.

  40. Gordon, P. C., Hendrick, R., & Johnson, M. (2001). Memory interference during language processing. Journal of Experimental Psychology: Learning, Memory and Cognition, 27, 1411–1423.

  41. Grimshaw, J. (1991). Extended projection. In P. Coopmans, M. Everaert, & J. Grimshaw (Eds.), Lexical specification and insertion (pp. 115–134). The Hague: Holland Academic Graphics.

  42. Hofmeister, P., & Sag, I. A. (2010). Cognitive constraints and island effects. Language, 86(2), 366–415.

  43. Hopcroft, J. E., Motwani, R., & Ullman, J. D. (2001). Introduction to automata theory, languages, and computation (2nd ed.). Reading: Addison-Wesley.

  44. Huang, C. T. J. (1982). Logical relations in Chinese and the theory of grammar. PhD thesis, MIT.

  45. Huck, G., & Na, Y. (1990). Extraposition and focus. Language, 66, 51–77.

  46. Joshi, A. K. (1985). Tree adjoining grammars: How much context-sensitivity is required to provide reasonable structural descriptions? (pp. 206–250). University of Pennsylvania, Moore School of Electrical Engineering, Department of Computer and Information Science.

  47. Kayne, R. (1983). Connectedness and binary branching. Dordrecht: Foris.

  48. Kayne, R. (1994). The antisymmetry of syntax. Cambridge, MA: MIT Press.

  49. Kempson, R., Meyer-Viol, W., & Gabbay, D. (2001). Dynamic syntax: The flow of language understanding. Oxford: Blackwell.

  50. Ko, H. (2005). Syntax of why-in-situ: Merge into [Spec, CP] in the overt syntax. Natural Language & Linguistic Theory, 23(4), 867–916.

  51. Larson, R. (1988). On the double object construction. Linguistic Inquiry, 19, 335–391.

  52. Lasnik, H., & Saito, M. (1992). Move alpha: Conditions on its application and output. Cambridge, MA: MIT Press.

  53. Levine, R., & Sag, I. A. (2003). Some empirical issues in the grammar of extraction. In S. Müller (Ed.), Proceedings of the HPSG03 Conference, Michigan State University, East Lansing. Stanford: CSLI Publications.

  54. Lewis, R. L. (2000). Falsifying serial and parallel parsing models: Empirical conundrums and an overlooked paradigm. Journal of Psycholinguistic Research, 29(2), 241–248.

  55. McCloskey, J. (2002). Resumption, successive cyclicity, and the locality of operations. In S. D. Epstein & T. D. Seely (Eds.), Derivation and explanation in the minimalist program (pp. 184–226). Malden, MA: Blackwell.

  56. McElree, B., & Griffith, T. (1998). Structural and lexical constraints on filling gaps during sentence processing: A time-course analysis. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24, 432–460.

  57. McElree, B., Foraker, S., & Dyer, L. (2003). Memory structures that subserve sentence comprehension. Journal of Memory and Language, 48(1), 67–91.

  58. McKinnon, R., & Osterhout, L. (1996). Event-related potentials and sentence processing: Evidence for the status of constraints on movement phenomena. Language and Cognitive Processes, 11, 495–523.

  59. Moro, A. (2000). Dynamic antisymmetry. Cambridge, MA: MIT Press.

  60. Nunes, J., & Uriagereka, J. (2000). Cyclicity and extraction domains. Syntax, 3, 20–43.

  61. Pesetsky, D., & Torrego, E. (2001). T-to-C movement: Causes and consequences. In M. Kenstowicz (Ed.), Ken Hale: A life in language (pp. 355–426). Cambridge, MA: MIT Press.

  62. Phillips, C. (1996). Order and structure. PhD thesis, MIT.

  63. Phillips, C. (2003). Linear order and constituency. Linguistic Inquiry, 34(1), 37–90.

  64. Phillips, C. (2006). The real-time status of Island phenomena. Language, 82, 795–823.

  65. Phillips, C. (2013). On the nature of island constraints. I: Language processing and reductionist accounts. In J. Sprouse & N. Hornstein (Eds.), Experimental syntax and island effects. Cambridge: Cambridge University Press.

  66. Pollard, C., & Sag, I. (1994). Head-driven phrase structure grammar. Stanford: CSLI.

  67. Rizzi, L. (1990). Relativized minimality. Cambridge, MA: MIT Press.

  68. Rizzi, L. (2001). On the position “int(errogative)” in the left periphery of the clause. In G. Cinque & G. Salvi (Eds.), Current studies in Italian syntax. Essays offered to Lorenzo Renzi (pp. 287–296). Amsterdam: Elsevier North-Holland.

  69. Rizzi, L. (2006). On the form of chains: Criterial positions and ECP effects. In L. Cheng & N. Corver (Eds.), Wh-movement: Moving on (pp. 97–134). Cambridge, MA: MIT Press.

  70. Ross, J. R. (1967). Constraints on variables in syntax. PhD thesis, MIT.

  71. Sauerland, U. (2004). The interpretation of traces. Natural Language Semantics, 12, 63–127.

  72. Shlonsky, U. (1992). Resumptive pronouns as a last resort. Linguistic Inquiry, 23(3), 443–468.

  73. Sprouse, J., Wagers, M., & Phillips, C. (2012). A test of the relation between working memory capacity and syntactic island effects. Language, 88(1), 82–123.

  74. Stabler, E. (1997). Derivational minimalism. In C. Retoré (Ed.), Logical aspects of computational linguistics. Berlin: Springer.

  75. Stepanov, A. (2007). The end of CED? Minimalism and extraction domains. Syntax, 10, 80–126.

  76. Tavakolian, S. L. (1981). The conjoined-clause analysis of relative clauses. In S. L. Tavakolian (Ed.), Language acquisition and linguistic theory (pp. 167–187). Cambridge, MA: MIT Press.

  77. Uriagereka, J. (1999). Multiple spell-out. In S. D. Epstein & N. Hornstein (Eds.), Working minimalism (pp. 251–282). Cambridge, MA: MIT Press.

  78. Van Dyke, J. A., & McElree, B. (2006). Retrieval interference in sentence comprehension. Journal of Memory and Language, 55(2), 157–166.

  79. Wanner, E., & Maratsos, M. (1978). An ATN approach to comprehension. In M. Halle, J. Bresnan, & G. A. Miller (Eds.), Linguistic theory and psychological reality, chapter 3 (pp. 119–161). Cambridge, MA: MIT Press.

  80. Warren, T., & Gibson, E. (2005). Effects of NP type in reading cleft sentences in English. Language and Cognitive Processes, 20, 751–767.

  81. Warren, T., & Gibson, E. (2002). The influence of referential processing on sentence complexity. Cognition, 85, 79–112.

  82. Weinberg, A. (2001). A minimalist theory of human sentence processing. In S. D. Epstein & N. Hornstein (Eds.), Working minimalism. Cambridge, MA: MIT Press.

Author information



Corresponding author

Correspondence to Cristiano Chesi.

Cite this article

Chesi, C. On Directionality of Phrase Structure Building. J Psycholinguist Res 44, 65–89 (2015).


  • Relative Clause
  • Lexical Item
  • Memory Buffer
  • Phase Head
  • Merge Operation