Bare phrase structure theory (Chomsky 1995a) reduces the machinery involved in the generation of syntactic trees to a minimum. Its basic assumption, Inclusiveness, states that properties of syntactic structure must derive from information stored in the lexicon. As a result, bar levels, coindexation, etc., must be dispensed with.
Chomsky's (1995b: 228) discussion suggests that Inclusiveness is intended to be uniform: it holds not only of the root node of a sentence, but of every subtree contained in it. This property is highlighted by (1) (where mapping procedures link a terminal to a lexical entry).
(1) Inclusiveness: The syntactic properties of a nonterminal node are fully recoverable from the structure it dominates; the syntactic properties of a terminal node are fully recoverable through mapping procedures.
The standard analysis of grammatical dependencies is at odds with Inclusiveness. Binding, movement, predication and the licensing of negative polarity items (NPIs) are typically thought of as involving a chain-like relation between a dependent and its c-commanding antecedent. This relation often affects properties of the antecedent or dependent. Predication, for example, has the consequence that one of the predicate's θ-roles is satisfied, while the argument cannot satisfy any of the predicate's other roles. Thus, both the argument and the predicate acquire new properties. However, these cannot be recovered from their internal structures; any subtree that contains the predicate but not its argument, or vice versa, is noninclusive in the sense of (1).
So, a condition central to phrase structure does not seem to hold of grammatical dependencies. There are two possible reactions to this problem. The first is to deny that dependencies are encoded by the syntax; they would be established syntax-externally instead. This is the line taken in Brody 1998a for movement chains. The second is to develop a new encoding of dependencies that does not violate Inclusiveness. An argument for the second option can be based on Koster's (1987) 'configurational matrix'.
Koster observes that grammatical dependencies share five properties. (i) The dependent must take an antecedent; (ii) the antecedent must be in a c-commanding position; (iii) the antecedent must be sufficiently close to the dependent; (iv) each dependent must take a unique antecedent; and (v) an antecedent can be linked to multiple dependents. This configurational matrix must be inherent in the way syntax operates, not only because it refers to syntactic configurations, but also because it does not hold of syntax-external phenomena such as logophoricity (Reinhart & Reuland 1991, 1993), coreference (Reinhart 1983, 1996) and bound variable anaphora (Reuland 1999). We will not discuss control, but differences between obligatory and nonobligatory control also demonstrate the contrast between syntactic and extra-syntactic relations (Williams 1980).
The configurational matrix provides an argument not only for the syntactic encoding of grammatical dependencies but also for revising the way in which this is done. Standard approaches do not explain why relations prima facie very different in nature display a constant set of properties. GB theory, for example, simply imposes conditions on the various rules that associate dependents with their antecedents. Such an approach is unparsimonious and arbitrary: why should such rules all be conditioned in the same way and why should they meet the conditions in question rather than others?
Progress can be made if some grammatical dependencies are reduced to others. Chomsky (1986b), for instance, reanalyzes binding as LF-movement, while Progovac (1994) suggests that NPI licensing is a subcase of binding. Similarly, Manzini & Roussou (2000) claim that θ-role assignment is movement of an aspectual feature. This trend is foreshadowed in Koster 1987, where a meta-relation R subsumes all dependencies.
But even if parsimony is thus guaranteed, it remains unclear why the meta-relation R is conditioned in the way it is. In addition, reductionism cannot easily deal with properties that distinguish grammatical dependencies. Movement, for example, obeys the Adjunct Island Condition but binding does not. Similarly, predication and movement are mutually exclusive in that they cannot share an antecedent (there is no raising to θ-positions), whereas shared antecedents are allowed in the case of predication and binding. If all grammatical dependencies were one, these contrasts would be hard to capture.
This course develops an alternative theory of grammatical dependencies based on four assumptions. The first three are Economy, Inclusiveness, as formulated in (1), and Accessibility, according to which relations between nodes require immediate domination. These assumptions, we argue, imply that dependencies must be decomposed into more primitive relations. More specifically, a dependency is established if a selectional function introduced by the dependent is copied upward until it directly dominates a node that satisfies it. This approach provides an account of why c-command and obligatoriness are part of the configurational matrix. Our fourth assumption, Distinctness, is that attributes of nodes (features and selectional functions) must be distinguished either by their inherent properties or by ordering. Distinctness explains why antecedents are unique, whereas dependents need not be. Our four assumptions also underlie aspects of locality, the final property of the configurational matrix, as well as various other phenomena.
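The upward-copying mechanism can be made concrete with a small computational sketch. The following Python fragment is our own toy illustration, not part of the theory itself: the `Node` class, the `attach` helper, and the `establish` function are hypothetical names, and features are reduced to plain strings. The sketch shows how copying a dependent's selectional function from node to dominating node, checking at each copy whether the current node immediately dominates a satisfying node, automatically restricts antecedents to c-commanding positions.

```python
# Toy model (illustrative only) of dependency formation as upward copying
# of a selectional function, under the simplifying assumption that a
# selectional function is just a predicate over nodes.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    features: set = field(default_factory=set)
    children: list = field(default_factory=list)
    parent: "Node" = None

def attach(parent, *children):
    """Make the given nodes daughters of parent and return parent."""
    for c in children:
        c.parent = parent
        parent.children.append(c)
    return parent

def establish(dependent, satisfies):
    """Copy the dependent's selectional function upward. At each copy,
    succeed if the current node immediately dominates a node (other than
    the dependent itself) that satisfies the function; otherwise copy on
    to the next dominating node. Any antecedent found this way is a
    sister of a node on the path from the dependent to the root, i.e.
    it c-commands the dependent."""
    node = dependent.parent
    path = []
    while node is not None:
        path.append(node.label)
        for daughter in node.children:
            if daughter is not dependent and satisfies(daughter):
                return daughter, path
        node = node.parent
    return None, path  # no antecedent: the dependency fails

# Toy structure [TP DP [VP V anaphor]]: the DP in Spec,TP c-commands
# the anaphor inside VP and so can serve as its antecedent.
john = Node("DP", {"phi"})
himself = Node("anaphor")
vp = attach(Node("VP"), Node("V"), himself)
tp = attach(Node("TP"), john, vp)

antecedent, path = establish(himself, lambda n: "phi" in n.features)
```

Here the function introduced by `himself` is not satisfied at VP, is copied up to TP, and is satisfied there because TP immediately dominates the φ-bearing DP; a DP buried inside a non-dominating branch would never be reached, which is the c-command effect.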