Christopher D. Green
Department of Psychology
York University
christo@yorku.ca
John Vervaeke
Department of Philosophy
University of Toronto
(1997). In D. M. Johnson & C. E. Erneling (Eds.), The future of the cognitive revolution (pp. 149-163). Oxford: Oxford University Press.
The problem with many contemporary criticisms of Chomsky and linguistic nativism is that they are based upon features of the theory that are no longer germane; aspects that have either been superseded by more adequate proposals, or that have been dropped altogether under the weight of contravening evidence. It is notable that, contrary to the misguided opinion of many of his critics, Chomsky has been more willing than the vast majority of psychological theorists to revise and extend his theory in the face of new evidence. His resistance to the proposals of those of his early students and colleagues who banded together under the name "generative semantics" was not, as is widely believed, a matter of his unwillingness to entertain evidence contrary to his own position. Rather, it was a matter of his unwillingness to entertain vague, ambiguous, and inconsistent theoretical claims. His ultimate victory over generative semantics was grounded squarely in his willingness to alter his theory to bring it more in line with new evidence, and the theory now bears only a modest resemblance to the one he developed in the 1950s and 1960s.
Thus, matters of "deep structure" and "transformations" of the sort described in Aspects of the theory of syntax (1965) are only of historical interest now, and criticism of them that claims to damage the current theory is simply off the mark. As Chomsky himself has written,
For about 25 years, I've been arguing that there are no phrase structure rules, just a general X-bar theoretic format.... As for transformational rules, for about 30 years I've been trying to show that they too reduce to an extremely simple form, something close to "do anything to anything" (interpreted, to be sure, in a very specific way). (personal communication, 1994)
It seems that many of those who were opposed to a formal theory of grammar in the 1960s (and their students) are still opposed to it today, but continue to trot out the same old critiques without bothering to keep up with new developments. Some have argued that Chomsky's continuous revision of his theory makes him impossible to "pin down," and imply that he is engaged in some sort of ruse to keep his critics "off balance." A close reading of the revisions, however, shows them to be well-motivated by and large, and indicative of a progressive research programme, in Lakatos' (1970) sense. We take his modifications, far from being some sort of ruse, to be simply good scientific practice; a practice that more psychological theoreticians should consider adopting.
In this paper, rather than rehashing old debates that are voluminously documented elsewhere, we intend to focus on more recent developments. To this end, we have put a premium on references from the 1990s and the latter half of the 1980s. It is our hope that, in doing so, we can shift the debate somewhat from the tired old bickering over whether John is eager or easy to please (Chomsky, 1965), to a more fruitful and relevant line of discussion. It is not that we believe Chomsky to be infallible, or his theory to be definitively or ultimately True. Rather, it is that arguing about issues that were either resolved or dissolved in the 1970s can serve little purpose other than to cloud the important issues that face us today.
We cannot hope to cover all the issues surrounding the innateness of language in the short space allowed us here. Thus, we have decided to focus on a small set of questions that have produced the most interesting work lately. First, we will describe exactly what is now thought to be innate about language, and why it is thought to be innate rather than learned. Second, we will examine the evidence that many people take to be the greatest challenge to the nativist claim: ape language. Third, we will briefly consider how an innate language organ might have evolved. Fourth, we will look at how an organism might communicate without benefit of the innate language structure proposed by Chomsky, and examine a number of cases in which this seems to be happening. Finally, we will try to sum up our claims and characterize what we believe will be the most fruitful course of debate for the immediate future.
When faced with the bald claim that "language is innate" most people immediately balk, and with good reason. It is obvious to everyone that babies born into English speaking families acquire English because that is what they hear and, loosely speaking, that is what they are taught. Mutatis mutandis for babies born to Turkish families, Cantonese families, Inuit families, etc. Of course, the problem is that Chomsky's claim has never been that all aspects of language are innate. It was, rather, that grammar is innate. Grammar is, very roughly speaking, syntax and as much of semantics as can be accounted for by the structure of language. Phonology and morphology have, of course, been the subjects of Chomskyan-style analysis as well, and are related in interesting ways to syntax and semantics, but these are not the focus of this paper. Chomsky does not claim--or more importantly, his linguistic theory does not imply--as some of his critics have imputed to him, that all knowledge of meaning (semantics) is innate, or that knowledge of appropriate language use (pragmatics) is innate.
There is a systematic ambiguity in the term "grammar," an ambiguity that has caused much confusion among Chomsky's critics. It refers both to the knowledge that is hypothesized to be "in the head" of the human, and to the linguist's theory of that knowledge. In an effort to stem the confusion somewhat, Chomsky has recently adopted the term "I-language" to refer to the knowledge of language the human is thought innately to have, the "I" standing for individual, internal, and intensional. Once again, the point here is not to reject the possibility that there are, by contrast, social, external, and extensional aspects to language. It is only to identify those features of language that Chomsky is interested in addressing.
So just what is thought to be innate? Not, as was once thought, a vast array of transformation rules for converting the deep structures of sentences into their surface structures. Chomsky has long since rejected that research program seeing, as did many of his early critics, that such an approach leads to a wildly unconstrained proliferation of rules to cover increasingly rare grammatical cases. This was precisely the tack taken by advocates of abstract syntax in the 1960s and the early generative semanticists of the 1970s (before they abandoned the possibility of a formal theory of language at all), and led to Chomsky actually working to reduce the generative power of his proposal for fear of falling into the same difficulties (Harris, 1993).
The theory now proposes that there is a relatively small set of linguistic principles whose parameters are given values by the early linguistic environment of the child. For instance, all languages seem to be either what is known as "head-first" or "head-last." That is, nouns and verbs either come before their complements (head-first) or after them (head-last). To give an example, in English, a head-first language, we say "Malcolm caught the train that was going to Winnipeg." Note that the object noun, "train," comes before its complement, "that was going to Winnipeg." In Japanese, a head-last language, the complement comes before. One can mock this up in English for the purposes of an example: it would be like saying, "...the was-going-to-Winnipeg train." There is some controversy over whether languages are head-first or head-last tout court or whether the parameter has to be set separately for each major grammatical category (nouns, verbs, adjectives, etc.). Most languages seem to be consistent across categories. There are some apparent exceptions, however.
Notice just what is being claimed to be innate here: simply the fact that languages are either head-first or head-last. The ultimate value of the parameter is set by the environment. Put conversely, what is being claimed is that young children do not have to learn that languages are head-first or head-last. That part is, as it were, innate. Of course, as with many cognitive activities, most notably perception, innate here does not necessarily mean "present at birth." It may take a few months or even years after birth for the brain to develop to the point where the mechanism is fully in place. What the environment contributes is the information necessary to establish the parameter's value; to set the parameter's "switch" to the correct position for the language being acquired. Good introductions to this material include Chomsky & Lasnik (1993), Cook (1988), Cowper (1992), and Radford (1988).
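The division of labor just described can be sketched, very schematically, in code. The following Python toy is our own invention for illustration only (the function name, the pair representation, and the "voting" rule are not part of any linguistic proposal); its point is simply that the two hypotheses are built in, and the input merely selects between them.

```python
# Toy sketch of parameter setting: the learner entertains exactly two
# innately given hypotheses (head-first, head-last); the environment's
# only job is to select one of them. All details here are invented.

def set_head_parameter(utterances):
    """Each utterance is a (first, second) pair in surface order:
    'H' = head, 'C' = complement."""
    votes = {"head-first": 0, "head-last": 0}
    for first, second in utterances:
        if (first, second) == ("H", "C"):
            votes["head-first"] += 1
        elif (first, second) == ("C", "H"):
            votes["head-last"] += 1
    # The innate contribution: only these two values are ever considered.
    return max(votes, key=votes.get)

english_like = [("H", "C"), ("H", "C"), ("H", "C")]   # "caught the train that..."
japanese_like = [("C", "H"), ("C", "H")]              # "...the was-going-to-Winnipeg train"
print(set_head_parameter(english_like))    # head-first
print(set_head_parameter(japanese_like))   # head-last
```

Note that nothing resembling hypothesis construction happens here: the learner never has to discover that head direction is the relevant dimension, which is precisely the nativist point.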
Chomsky's associate at MIT, Robert Berwick (1985), has been working for over a decade on precisely which principles and parameters are needed to account for the structures of all languages. He is currently working with computer simulations that employ 24 parameters, and believes there may be another 12 to be worked out before the basic structures of all languages are accounted for (Berwick, 1993). Other researchers, such as David Lightfoot (1991), have been working on just how environmental stimulation is able to set parameters without the conscious attention of the child. Among Lightfoot's discoveries is the apparent fact that the "triggering" stimuli must be robust and structurally simple, and that embedded material will not set linguistic parameters.
There is also some evidence that the "switches" are set to certain "default" positions at birth, and that only certain kinds of linguistic stimuli can change these default settings. There is a fair bit of debate about this claim, however. Although this would account for some of the most interesting features of creole languages, as we explain below, the default hypothesis has some, probably not insurmountable, trouble accounting for cases of deprived children not learning language at all.
The immediate question of doubters is, why must the principles be innate? Why can't the child figure out for him- or herself what the relevant structures of language are through general cognitive mechanisms, the way he or she might figure out that clouds mean rain or that blood means pain? The responses to such questions are quite straightforward.
2.1. Complexity of language.
First of all, the structures of language are very complex and, in large part, domain-specific. Constituent hierarchical structure, an almost definitional feature of language, is just not something, by and large, that we come up against in the everyday world; and even when we do, it is darn hard, even for the best and brightest among us, to figure it out. Witness, for instance, the struggles of linguists themselves to adequately characterize language. Moreover, in using general cognitive procedures, children (and adults, for that matter) are usually consciously aware of the hypotheses they are testing and rejecting. Bruner's and Wason's cognitive tasks of the 1950s and 1960s are the paradigmatic examples (see, e.g., Bruner, Goodnow, & Austin, 1956; Wason & Johnson-Laird, 1972), and it is clear to everyone that whatever children do to learn language, it bears little relation to what those subjects were doing.
Compare, for instance, children's learning of language to linguists' learning about language. In effect, the general cognition hypothesis says that children are "little linguists" trying to discover the rules that govern the structures of the utterances that are allowable within their language community. The fact is, however, that linguists have been unable to discover exactly what the rules are, even after dozens (one might argue hundreds or even thousands) of years of research. By contrast, virtually every child does it within a few years (with far less in the way of specialized cognitive machinery, and control over the quality of the incoming data, it is worth pointing out, than a Ph.D. in linguistics).
Some argue that the hypothesis-testing model of cognition is inadequate and have offered up various alternatives, such as the "Idealized Cognitive Models" of George Lakoff (1987) and his associates. Although we recognize the difficulties of hypothesis-testing models of cognition, and are open to new, more adequate models of thinking, we find most of the current alternatives vague and/or ambiguous; they simply don't have the formal rigor to make it as adequate theories of language acquisition (see, e.g., Green & Vervaeke, 1993; Vervaeke, Green, & Kennedy, 1993). Of course, the bottom line with people like Lakoff is that you just can't have a formal theory of language like you can of, say, the movements of the planets. Though we concede that there are difficulties, we are inclined to believe that Chomsky has cut up language into just about the right parts: one where a formal theory might be fruitfully applied (grammar), and one where such a theory probably will not (pragmatics and some aspects of semantics). We are not so willing to give up on the formalist project altogether just yet. Simple scepticism is rarely the last word on anything.
Connectionism is often suggested as an alternative model of learning. Undoubtedly it has enjoyed many explanatory successes over the last decade. There are some mistaken assumptions that are commonly attributed to connectionist models, however, that bear on its relevance to linguistic nativism. Chiefly, it is often assumed that connectionism implicitly favors an empiricist account over a nativist one. This is simply untrue. Although connectionism was embraced early by psychological empiricists, all connectionist networks begin life with a given, pre-set computational architecture, and with given activation and learning rules. These features are essentially innate. Moreover, they ultimately dictate what can and cannot be learned by the network.
It might be argued that this sort of nativism is very weak because the particular features of given computational architectures do not strongly constrain the learning capacities of the networks, and the initial connection weights are set to random values. Certain problems have come to light in connectionist research, however, for which plausible non-nativist solutions seem, at present, to be in profoundly short supply. Particularly in work on language acquisition, connectionist networks are often unable to generalize correctly to new cases of the linguistic structures they have supposedly learned, unless exposed to a "phased training" regime (Elman, 1991), in which training cases are presented in a specified order that is engineered ahead of time by the researcher explicitly to prevent the network from getting "lost in space(s)" as Andy Clark (1993, p. 142) has so aptly put it.
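The crucial feature of such a regime can be sketched schematically. The snippet below is our own simplification (Elman's actual networks and his measure of complexity were quite different): it shows only the point at issue, namely that the ordering of the training data is engineered by the researcher before learning begins, using a complexity measure the network itself has no access to.

```python
# Schematic sketch of a "phased training" regime: the researcher ranks
# the corpus by structural complexity and presents simple cases first.
# The bracket notation and the depth measure are our own inventions.

def embedding_depth(sentence):
    """Depth of nested (bracket-marked) relative clauses, e.g.
    'the cat [that the dog [that ran] chased] slept' has depth 2."""
    depth = max_depth = 0
    for ch in sentence:
        if ch == "[":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "]":
            depth -= 1
    return max_depth

def phased_batches(corpus, phases=3):
    """Split the corpus into training phases of increasing complexity."""
    ranked = sorted(corpus, key=embedding_depth)
    size = -(-len(ranked) // phases)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

corpus = [
    "the cat [that the dog chased] slept",
    "the boy ran",
    "the cat [that the dog [that barked] chased] slept",
]
for phase, batch in enumerate(phased_batches(corpus), 1):
    print(phase, batch)
```

The engineering is all in `phased_batches`: nothing in the network's own learning rule produces this ordering, which is why, as argued next, the regime cannot stand in for what children actually do.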
Whether or not such extraordinary procedures ultimately get the machine to generalize properly, they cannot be seriously countenanced as plausible theories of language acquisition. It is precisely such carefully constrained training regimes that children are not given when learning their first language. A system that cannot robustly capture the right linguistic generalizations despite being trained on multiple arrays of widely diverse input sets simply fails to do whatever it is that kids do when learning language. Put more plainly, the single most important datum to capture when modelling language learning is that children virtually never fail to learn language correctly, regardless of what kind of linguistic data they are exposed to early in life. Models that don't capture this feature are simply wrong.
The importance of the native aspects of connectionist networks is just now beginning to be recognized. Some researchers have built innate structure directly into their networks (e.g., Feldman, 1994). Paul Smolensky (in press) has explicitly endorsed a strong linguistic nativism, and is collaborating with a traditional linguist (Prince & Smolensky, in press) who was once highly critical of connectionist approaches to language learning (Pinker & Prince, 1988) in an effort to integrate connectionist and Chomskyan approaches to language.
Among the most interesting efforts to combine nativism and connectionism are "artificial life" models in which a large set of networks are trained, all beginning with random weights. Only those few that learn fastest and best, however, are allowed to "reproduce." The new generation of networks start out learning with the weights that their "parents" began with (plus a bit of randomness), but the competition is now a lot tougher than it was for the "parents." Again, only those that learn fastest and best are allowed to "reproduce." By iterating this procedure, a set of weights optimal for learning the task at hand can be developed in an evolutionarily realistic manner (see, e.g., Nolfi & Parisi, 1991). Clark (1993) has argued that these efforts constitute only a "minimal nativism," but there is no real reason to call it "minimal"; the result is a richly articulated native structure the absence of which would make reliable learning of particular and important kinds (such as of language) effectively impossible. Different as it may be from what Chomsky and his followers envision, it is quite strongly nativist in its thrust. Others have recently come to agree with this verdict. Quartz (1993), for instance, has argued that "PDP models are nativist in a robust sense" (p. 223).
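The iterated procedure described above amounts to a simple genetic algorithm over initial weights. The toy below is a sketch under our own assumptions (the "task," the fitness proxy, and all parameter values are stand-ins we invented; models such as Nolfi & Parisi's trained actual networks): fitness is approximated as proximity to a region of weight space from which learning would be easy, and the population evolves toward starting points that make learning reliable.

```python
# Minimal sketch of evolving INITIAL network weights: each generation,
# only the best "learners" reproduce, with small mutations. The target
# region and fitness proxy are invented for illustration.
import random

TARGET = [0.2, -0.5, 0.9]  # stand-in for "weights from which learning is easy"

def learning_cost(initial_weights):
    """Proxy for how hard learning is from this starting point."""
    return sum((w - t) ** 2 for w, t in zip(initial_weights, TARGET))

def evolve(pop_size=40, generations=60, keep=10, noise=0.05, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Only the fastest/best learners are allowed to "reproduce".
        survivors = sorted(population, key=learning_cost)[:keep]
        # Offspring inherit a parent's STARTING weights, plus a bit of randomness.
        population = [
            [w + rng.gauss(0, noise) for w in rng.choice(survivors)]
            for _ in range(pop_size)
        ]
    return min(population, key=learning_cost)

best = evolve()
print(learning_cost(best))  # small: later generations start near good weights
```

What evolves is the innate endowment itself (the starting weights), not any learned knowledge, which is why the outcome counts as a form of nativism rather than empiricism.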
2.2. Developmental dissociation of language and general cognition.
Another reason to believe that the principles of grammar are innate is that children all seem to learn the structures of language in pretty well the same order, and at very close to the same ages. Moreover, people who have, for one reason or another, been prevented from learning language at the usual time of life seem, later, to be virtually incapable of learning it despite having intact general cognitive abilities. Such cases include not just Genie, the girl from Los Angeles who was essentially locked in a closet until the age of twelve and never, thereafter, able to learn to speak anywhere near adequately. Although the facts of her case are suggestive, her life circumstances were such that one is wary of drawing firm conclusions from her particular case. More recently, better evidence has come from congenitally deaf children who were not taught sign language until adulthood. Such people typically never learn to employ the structures of their sign languages with the fluidity of those who learn it from birth (appropriately controlling, of course, for the simple length of time the sign language has been known to the person). Interestingly, however, their congenitally deaf children often do become syntactically fluid (see Pinker, 1994).
Even more importantly, Grimshaw, Adelstein, Bryden, & MacKinnon (1994) recently reported a case of a congenitally deaf boy from Mexico, E.M., who was not taught a sign language at home, but whose hearing was restored at the age of 15. Because there are no signs of abuse in his early life, unlike Genie, his linguistic progress is of crucial relevance to followers of the critical/sensitive period hypothesis. Grimshaw, et al. report that
After 2 years of exposure to verbal Spanish, E.M.'s language production is limited to single words, combined with gesture. His articulation is very poor. He has a large receptive vocabulary, consisting largely of concrete nouns, adjectives, and a few verbs. Despite his language limitations, he appears to function well in everyday situations.... E.M. has particular difficulty with pronouns, prepositions, word order, verb tense, and indefinite articles. (1994, p. 5)
Thus, he is having great difficulty learning anything but the most rudimentary grammar. His progress in coming years will be most interesting to watch.
In addition to cases of children with relatively normal intelligence, but severe language deficits, there have recently been reported cases of severely cognitively handicapped children whose syntactic abilities nevertheless seem to remain substantially intact. The most fascinating recent case is a woman called Laura (Yamada, 1990), whose testable IQ is in the mid-40s. She cannot even learn to count, and surely she is incapable of learning the structures of language by means of her general cognitive capacities, yet she freely uses a wide array of complex linguistic structures, including multiple embeddings (Yamada, 1990, p. 29). More interesting still, though her general cognitive handicap seems to have left her syntactic abilities relatively intact, her semantic capacities are seriously affected. This makes it highly unlikely that linguistic structures develop as the result of semantic learning, for Laura has pretty well mastered the former in spite of a serious impoverishment of the latter. Moreover, Laura's pragmatic linguistic ability--her ability to converse competently--is grossly impaired. Thus, over and above the problems Laura poses for the person who thinks that language acquisition can be explained in terms of general cognitive abilities, she also generates significant difficulties for the language theorist who considers communicative function--i.e., pragmatics--to be the basis for structural and semantic aspects of language (Yamada, 1990, p. 72). This would include, of course, the popular Anglo-American neo-Wittgensteinian approaches to language that are often extended in the attempt to account for linguistic structure, as well as many related Continental approaches (e.g., Habermas, 1976/1979). Whether they are useful in explaining aspects of language other than grammar is, of course, a matter that might well be decided on independent grounds.
The last several paragraphs effectively evince a double-dissociation between general cognition and language ability. On the one hand we have cognitively able individuals who are unable to learn standard linguistic structures that are picked up by any normal three-year-old. On the other we have a woman who has the general cognitive abilities of a toddler, but is able to employ the linguistic structures of an adult. Such anomalies must be explained by the advocate of general cognitivist accounts of language acquisition (e.g., Bates, 1976).
2.3. Comparison to other innate aspects of cognition.
Third, the development of children's linguistic abilities bears a strong relation to the development of other aspects of cognition --most notably in vision--that are widely thought to be essentially innate, but delayed in their onset due to maturational factors. For instance, one does not learn to see binocularly but, given sensory stimulation generally available early in life, cells in the visual cortex develop to mediate binocular stereopsis. Notice that even though there is environmental involvement (and no one denies that there is such for language either), children are not explicitly taught to see binocularly, nor are they able to learn to do so later if they are somehow robbed of the normal visual experiences early in life. The development of binocular stereopsis occurs in the natural course of development because of the innate presence of certain brain structures in the human. The current evidence is that they are built to mediate quite specific functions and, if not stimulated by the early environment, degenerate. In almost exactly the same respects as Chomsky claims certain aspects of language to be innate, binocular vision is innate as well.
For decades now there have been attempts to teach language to a wide variety of higher animals, primarily chimpanzees. One motivation for this project has been to show that there is no special organ unique to humans that mediates the learning of language. The reasoning is that if a chimpanzee, our closest evolutionary relative, can learn language then one is forced either to impute the same language-learning organ to chimps, or one is forced to say that language learning is mediated by general cognitive abilities that we share with chimps. Since chimps never learn language on their own, nor with the same degree of competence as even a human five-year-old, it seems doubtful that we have the same language-specific resources. Thus, it is argued, the reason humans learn language more naturally, and better, is that we have stronger general cognitive resources than the chimps, but nothing important that is specific to language (other than, perhaps, articulatory organs).
Everyone, by now, knows the story of how Herb Terrace (1979) tried to teach language to Nim Chimpsky, but later, after reviewing the video tapes of his research, reversed himself and argued that Nim had never really learned to do much more than imitate, and had certainly never learned syntax. For a while that appeared to spell the end of ape language studies, but recently there has been a flurry of excitement over a bonobo named Kanzi, trained by Susan Savage-Rumbaugh, who seems to understand a wide array of English language commands, and can produce sentences by using a touch-sensitive symbol board. In a recent experiment Kanzi was faced with 310 sentences of various types. There were (in order of frequency) action-object sentences (e.g. "Would you please carry the straw"), action-object-location sentences (e.g. "Put the tomato in the refrigerator") and action-object-recipient sentences (e.g., "Carry the cooler to Penny"). Of the 310 sentences tested, Kanzi got 298 correct. Savage-Rumbaugh concluded that Kanzi's sentence comprehension "appears to be syntactically based in that he responds differently to the same word depending upon its function in the sentence." (cited in Wallman, 1992, p. 103). Wallman (1992) argues, however, that this conclusion is ill-founded. Almost all the sentences with which Kanzi was presented were pragmatically constrained so that the relationships between agent, action, and object are clear simply from a list of the nouns used in the sentence. (For instance, how likely is it that Kanzi would be asked to carry Penny to the cooler? Or put the refrigerator in the tomato?) With respect even to those sentences that were pragmatically ambiguous enough to require a syntactic analysis, one still must be cautious about rejecting the pragmatic account because the sentences were given in everyday (i.e., not experimentally well-controlled) situations over a three month period.
Without knowing the contexts in which the sentences were presented, it is difficult to know what can be validly concluded about Kanzi's behavior. As to the matter of Kanzi's sentence production, similar doubts arise. According to Wallman (1992, p. 95) Kanzi's most frequent string consists of "only one lexigram in combination with one or more deictic gestures."
Consequently, Kanzi does not show evidence of the kinds of grammatical knowledge that would pose a serious threat to the Chomskyan view. In fact, as we shall see below, Kanzi's behavior is precisely what one would expect from an intelligent animal who is attempting to communicate, but does not have the grammatical resources normally available to human beings. What is truly fascinating about Kanzi's performance is its remarkable similarity to the behavior of human beings who, for one reason or another, have access to some basic linguistic vocabulary, but not to the structure of the language to which the vocabulary belongs. This is discussed extensively in Section 4, below.
The question of how humans happened to evolve a complex language organ, such as the one proposed by Chomsky and his associates, looms very large. Chomsky (1988) himself, as well as noted evolutionary biologist Stephen Jay Gould (1987, cited in Pinker & Bloom, 1990), have suggested that language may be something of an evolutionary accident; a by-product of relatively unrelated evolutionary factors. This is grossly unsatisfying to most. Bates, Thal, & Marchman (1989), for instance, have argued that if we were equipped with a language organ such as that proposed by Chomsky, one would expect to find precursors of it in animals that are closely related to us, such as chimpanzees. If Chomsky is right that we do not see any important linguistic abilities in chimps, then, Bates argues, the Chomskyans are left with a sort of "Big Bang" theory of language development; that we, of all species, were somehow accidentally blessed with a fully functioning prefabricated language organ to the exclusion of all other animals. Such a picture of the evolution of language, argues Bates, is hardly plausible, and many others have concurred. (Notice, ironically, that the real Big Bang theory is, as far as we now know, true! Bates, however, tries to invoke it rhetorically against Chomsky.)
The main problem with Bates' argument is the picture of evolution implicit in her account. If, in fact, humans were the direct descendants of chimpanzees, then you might expect the kind of evolutionary continuity for which she plumps. Even then, however, the continuity would be structural, not necessarily functional. That is, you would expect to see physiological continuity in the structure of the brain, but evolution has shown over and over again that a little bit of structural change can bring on a great deal of behavioral alteration, particularly if a number of small changes begin to interact (see, e.g., Pinker & Bloom, 1990).
All this notwithstanding, the basic premise of Bates' argument is demonstrably false. Humans are not direct descendants of chimps. Chimps and humans share a common ancestor. We are cousins, to extend the familial metaphor, and fairly distant cousins at that. There has been a fair bit of evolutionary water under the bridge since we and chimps went our separate evolutionary ways: three or four species of Australopithecus, and perhaps the same number of Homo species as well. What went on, linguistically speaking, during those thousands of millennia is difficult to pin down exactly, but it may well be the gradual development that Bates expects to find. More likely, assuming a more modern evolutionary view like punctuated equilibrium, there were a few jumps in the development of language: perhaps first words, then some structural rules for putting them together more effectively, and so on. In any case, the assumption of gradualism does not imply that the evidence of gradual change must be written on the faces of currently surviving species. As Steven Pinker (1994) has mused, if some gigantic cataclysm had wiped out all mammals between humans and mice, making them the animal most closely related to us, would we then be inclined to look for evidence of precursors of all human abilities in mice, and base negative conclusions on a failure to find such precursors? Of course not.
Derek Bickerton (1990) has put forward a persuasive account of how language might have developed in the human species. What makes it especially fascinating is that it makes empirical predictions about what sort of verbal behavior one would expect not just from "cave men," but also from people today who do not have access to normal grammatical knowledge. Bickerton believes that before full-fledged grammatically complex language developed, people spoke something he calls "protolanguage." Protolanguage contains many of the concrete content words of modern language (e.g., rock, sky, father, deer, etc.), but few of the function words (e.g., the, to, by, of, etc.). These words are crucial to making a grammatical language operate. They are what distinguishes, for instance, "Buy the painting by Philip" from "Buy the painting for Philip." Thus, according to Bickerton, before the development and dissemination of function words, people resorted to combining concrete nouns and verbs together with gestures and signs to get their meanings across.
Bickerton has found evidence of protolanguage being used today in situations in which people have access to a rudimentary vocabulary, but little in the way of grammar or function words. For instance, when one is in a foreign country, one must often resort to shouting a couple of nouns, repeatedly, and pointing. More interestingly, this is pretty well the level of language attained by Genie and E.M., discussed in Section 1 above. E.M.'s language production, it will be recalled, "is limited to single words, combined with gesture" (Grimshaw, et al., 1994, p. 5). This just is protolanguage. Perhaps even more interestingly, this also seems to be precisely what the chimps of the ape language studies attain after their linguistic training. To repeat Wallman's conclusion about Kanzi's language production: Kanzi's most frequent string consists of "only one lexigram in combination with one or more deictic gestures." This is protolanguage to a tee.
Indeed, protolanguage seems to pop up almost everywhere that someone is communicating but does not have the grammatical resources to use full-blown language. Consider, for instance, Radford's (1990) recent analysis of a corpus of over 100,000 early child English utterances. Children who are beyond the single-word stage, into the multi-word stages (ages 18-30 months, roughly) seem to speak protolanguage. To draw material directly from Radford's table of contents, they have not yet developed, or have only incompletely developed, a determiner system, a complementizer system, an inflection system, and a case system. These are precisely the things that are missing from protolanguage.
In fact, Bickerton himself independently points out the similarity. Nor does the relation seem to be coincidental. Bickerton shows that it stands up to a formal analysis and, moreover, that other kinds of language deficits, such as aphasia, seem to result in qualitatively different kinds of verbal behavior. That is, protolanguage is not just a grab-bag of language deficits; it seems to be a relatively well-defined kind of verbal behavior that occurs in a variety of situations that are tied together only by being instances in which language is robbed of grammar.
By far the most fascinating part of Bickerton's work, however, has been his examination of pidgins and creoles. Pidgins are languages, of sorts, developed by adults who are forced to operate under circumstances where no one around them speaks their native language, and they do not have access to resources that would allow them to properly learn the language of the place in which they find themselves. This was a quite common situation during the days of the colonial slave trade. All pidgins have features which are theoretically relevant. First, pidgins are syntactically very impoverished. They lack functional vocabulary; they evince no systematic use of determiners, no systematic use of auxiliary verbs, no systematic use of inflections, no subordinate clauses, no markers of tense or aspect, no prepositions (or very few with clear semantic content used unsystematically), and even the verbs are frequently omitted. Second, pidgin word order is extremely variable and unsystematic, and this is not, of course, compensated for by the presence of an inflectional system, as is found in many natural languages. Finally, pidgin utterances are restricted to short word strings because, without syntax, longer strings quickly become too ambiguous for unique interpretation.
Thus, pidgins provide extremely impoverished environments in which to learn syntax. In a very real sense there is no syntax present in a linguistic environment dominated by a pidgin. This is important since it bypasses the old debate about how impoverished the linguistic environment of the child really is. Here we have a situation in which it is clear that the linguistic environment is extremely impoverished. It is, thus, truly fascinating what children brought up in such an environment do. In a single generation (though not necessarily the first) they develop a creole. A creole is a full-blown language with its own complete syntax, showing none of the deficits of its "parent" pidgin. Its vocabulary is drawn from a pidgin, but its grammatical principles are not drawn from a pidgin because, to repeat, pidgins do not possess such principles in the first place. Nor does the creole, as Bickerton (1984) takes great pains to demonstrate, derive its grammatical principles from the target language (that of the "masters") or from any of the substratum languages (those spoken by subgroups of the labourers). The children appear not to have been using either of these as sources of syntactic information. Although the creole grammar does not reflect either the target or substratum grammars, Bickerton found that the grammars of creoles all around the world are remarkably similar even though they arise independently. Attempts to explain this in terms of similarities between target languages or substratum languages do not work. To take one example, creole languages are predominantly SVO (subject-verb-object word order) languages. One might account for this by saying that all the colonial powers were European and therefore used Indo-European languages. Although this might explain English- and French-based creoles, it will not explain Portuguese-based creoles since Portuguese is VSO, nor will it account for Dutch-based creoles since Dutch is SOV.
If one tries to explain the creole word order system on the basis of a common African language family one is faced with the fact that recent research suggests that "the underlying order of a number of West African languages may well be SOV" (Muysken, 1988, p. 290). Finally, there is the particular case of the creole Berbice Dutch, spoken in Surinam. The substratum language it is directly related to is Ijo, a completely SOV language, and its target language was the SOV language Dutch. Yet Berbice Dutch is "still a straightforward SVO system" (Muysken, 1988, p. 290). Such evidence indicates that creole similarities are not based on features common to target languages or substratum languages.
What we have, then, with creoles is a situation in which children are acquiring syntax in an environment that is impoverished to such a degree that an account of acquisition based on inductive learning is implausible to the point of incomprehensibility. The burden of proof, then, is on the person who believes that general cognitive abilities account for language acquisition. Such a person must explain the similarity of creole grammars separated by huge distances in space and time. The Chomskyan, however, has a ready answer: children possess something like a universal grammar; a set of innate principles with parameters that are pre-set to default values. Barring the regular occurrence of the kinds of grammatical inputs that would reset the "switches" (inputs one would be unlikely to get in a pidgin environment, given the irregularity of pidgin utterances), the defaults become fixed and, thus, all creoles have similar grammars.
Pinker (1994) points out that Bickerton's work has been "stunningly corroborated by two recent natural experiments in which creolization by children can be observed in real time" (Pinker, 1994, p. 36). Until recently, there was no sign system for the deaf in Nicaragua. After the revolution the deaf children were brought into schools for instruction. The instruction there, however, followed the oralist tradition in which the children were not taught a sign system but were forced to learn to read lips, etc. In the school yards the children communicated with a sign system they cobbled together from the makeshift sign systems they used at home (see Jackendoff, 1994, for a discussion of what is called "home sign"). The system used is called Lenguaje de Signos Nicaraguense (LSN). It is used with varying degrees of fluency by young adults who developed it when they were ten or older. LSN lacks a consistent grammar, however; it depends on suggestion, circumlocutions, and context. "Basically," says Pinker (1994, p. 36), "it is a pidgin."
Children who were four or younger when they joined the school were exposed early to the already existing LSN. As they grew, however, their mature sign system turned out to be quite different from LSN. Most importantly, it has a consistent grammar. It uses syntactic devices common to other developed sign languages. It is highly expressive and allows for the expression of complex abstract thought. Due to these significant differences this language is recognized as distinct from LSN and is referred to as Idioma de Signos Nicaraguense (ISN). Basically, ISN is a creole that the deaf Nicaraguan children spontaneously created from the pidgin LSN.
Pinker (1994) also describes an unpublished study, by psycholinguists Singleton and Newport, of a family made up of two deaf adults who acquired ASL late (and therefore use it in a fashion similar to pidgin speakers) and their deaf son, who is known by the pseudonym Simeon. What is astounding about Simeon is that, although he saw no ASL but his parents' defective ASL, his signing is far superior to theirs. He consistently uses grammatical principles that his parents use only inconsistently, and makes use of syntactic operations that appear not to be part of his parents' grammatical competence. As Pinker (1994, p. 39) puts it, with Simeon we see "an example of creolization by a single living child."
Simeon's case, and that of ISN, provide excellent corroborating evidence for Bickerton's thesis while also dealing with some potential criticisms of Bickerton's work. All this recent work on language acquisition in unusual circumstances provides important new evidence for the linguistic nativist that the general cognitivist is going to have a great deal of difficulty explaining. The overwhelming effect of all of these arguments is to shift the burden of proof onto the general cognitivist. She can no longer assume that her position has, prima facie, superior plausibility. Her claim to hold the more reasonable position requires new argumentation and evidence.
The primary aim of this paper was to bring the nativism debate about language up to date because far too much time is spent, in our opinion, pointlessly debating issues that were either resolved or dissolved some 20 years ago. A great deal of fascinating research is currently under way, however, that was not available when those debates began; evidence that bears directly on the current status of linguistic nativism. We have tried to review and integrate some of that evidence here.
First, we argued that debates about "deep structure," "transformations," and the like are simply no longer germane. Chomsky has long since given up on these concepts, largely as a result of penetrating criticism. Instead he, and associates such as Berwick, have advanced a small set of principles and parameters as the bases of universal grammar. It may be that the values of such parameters are set to defaults at birth, but that these can be changed across a small range of values by certain linguistic experiences.
Second, several lines of evidence suggest that these principles are innate and are relatively independent of general cognitive abilities. (1) Their complexity makes it unlikely that they could be learned by small children in the same way as facts about the natural world are learned. (2) The uniformity of both the order in which, and the time at which, children learn them across a widely diverse array of early linguistic and more broadly cognitive experiences suggest that innate factors are crucially involved. Moreover, the dissociations between linguistic and general cognitive abilities found in several individual cases belies the idea that they both spring from the same psychological ground. (3) The similarity of the developmental pattern of structural aspects of language to other uncontroversially innate aspects of psychological development, such as vision, suggests that these might also be innate.
Third, we argued that the evidence from Kanzi, which has provoked so much interest of late, is actually quite weak from the standpoint of demonstrating that an ape can learn complex linguistic structures on the basis of general cognitive capacities. Fourth, we showed that, contrary to widespread opinion, there is nothing particularly unusual, from an evolutionary standpoint, about claiming that humans possess a specific linguistic capacity that no other currently living species seems to have.
Finally, we described the main features of protolanguage, as advanced by Bickerton, and noted that it seems to characterize not only what early humans may have done to communicate, but also the behavior of children who have been raised without language, and animals in ape language experiments, as well as the utterances of small children before full-blown grammar has developed, and of speakers of pidgin languages. We also noted, with Bickerton, the rapid transformation of pidgins to creoles in children who are raised entirely within a pidgin environment, and the grammatical similarity of creoles the world over.
We believe that accounting for facts such as these is crucial to any adequate theory of language acquisition. As far as we can tell, only the Chomskyan program has come even close. This is not by any means to say that the job is done. Language is more complex than we could have imagined when we first began studying it in earnest. There is still a lot of work to do.
Bates, E. (1976). Language and context: Studies in the acquisition of pragmatics. New York: Academic Press.
Bates, E., Thal, D., & Marchman, V. (1989). Symbols and syntax: A Darwinian approach to language development. In N. Krasnegor, D. Rumbaugh, M. Studdert-Kennedy, & R. Schiefelbusch (Eds.), The biological foundations of language development. Oxford: Oxford University Press.
Berwick, R. C. (1985). The acquisition of syntactic knowledge. Cambridge, MA: MIT Press.
Berwick, R. C. (1993). [Interview with Jay Ingram]. In The Talk Show. Program 2: Born to Talk. Canadian Broadcasting Corporation.
Bickerton, D. (1984). The language bioprogram hypothesis. Behavioral and Brain Sciences, 7, 173-221.
Bickerton, D. (1990). Language and species. Chicago: University of Chicago Press.
Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York: John Wiley & Sons.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, N. (1988). Language and the problems of knowledge: The Managua lectures. Cambridge, MA: MIT Press.
Chomsky, N. & Lasnik, H. (1993). The theory of principles and parameters. In J. Jacobs, A. von Stechow, W. Sternfeld, & T. Vennemann (Eds.), Syntax: An international handbook of contemporary research (pp. 506-569). Berlin: Walter de Gruyter.
Clark, A. (1993). Associative engines: Connectionism, concepts, and representational change. Cambridge, MA: MIT Press.
Cook, V. J. (1988). Chomsky's universal grammar: An introduction. London: Blackwell.
Cowper, E. A. (1992). A concise introduction to syntactic theory: The government-binding approach. Chicago: University of Chicago Press.
Elman, J. (1991). Incremental learning, or the importance of starting small. (Tech. report 9101). San Diego: University of California, Center for Research in Language.
Feldman, J. A. (1994, July). Structured connectionist models. Paper presented at the First International Summer Institute for Cognitive Science, Buffalo, NY.
Gould, S. J. (1987). The limits of adaptation: Is language a spandrel of the human brain? Paper presented to the Cognitive Science Seminar, Center for Cognitive Science, MIT.
Green, C. D. & Vervaeke J. (1997). The experience of objects and the objects of experience. Metaphor and Symbolic Activity, 12, 3-17.
Grimshaw, G. M., Adelstein, A., Bryden, M. P., & MacKinnon, G. E. (1994, May). First language acquisition in adolescence: A test of the critical period hypothesis. Poster presented at the 5th annual conference of Theoretical and Experimental Neuropsychology/Neuropsychologie Experimentale et Theorique (TENNET), Montreal, Quebec.
Habermas, J. (1979). Communication and the evolution of society (T. McCarthy, Trans.). Boston: Beacon Press. (Original work published 1976)
Harris, R. A. (1993). The linguistics wars. Oxford: Oxford University Press.
Jackendoff, R. (1994). Patterns in the mind. New York: Basic Books.
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91-196). Cambridge: Cambridge University Press.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press.
Lightfoot, D. (1991). How to set parameters: Arguments from language change. Cambridge, MA: MIT Press.
Muysken, P. (1988). Are creoles a special type of language? In F. Newmeyer (Ed.), Linguistics: The Cambridge survey (Vol. 2). Cambridge: Cambridge University Press.
Nolfi, S. & Parisi, D. (1991). Auto-teaching: Networks that develop their own teaching input. (Tech. report PCIA91-03). Rome: Institute of Psychology, CNR.
Pinker, S. (1994). The language instinct. New York: Morrow.
Pinker, S. & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13, 707-784.
Pinker, S. & Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. In S. Pinker & J. Mehler (Eds.), Connections and symbols (pp. 73-193). Cambridge, MA: MIT Press.
Prince, A. & Smolensky, P. (in press). Optimality theory: Constraint interaction in generative grammar. Cambridge, MA: MIT Press.
Quartz, S. R. (1993). Neural networks, nativism, and the plausibility of constructivism. Cognition, 48, 223-242.
Radford, A. (1988). Transformational grammar: A first course. Cambridge: Cambridge University Press.
Radford, A. (1990). Syntactic theory and the acquisition of language. London: Basil Blackwell.
Smolensky, P. (in press). Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture. In C. Macdonald & G. Macdonald (Eds.), Connectionism: Debates on psychological explanation. Oxford: Basil Blackwell. [Author's note: Published in 1995, pp. 223-290]
Terrace, H. S. (1979). Nim. New York: Knopf.
Vervaeke, J., Green, C. D., & Kennedy, J. M. (1993, May). Women, fire, and dangerous theories. Paper presented at the Canadian Psychological Association Convention, Montreal. [Author's note: A longer version of this paper was published in 1997 under the names of the first two authors in Metaphor and Symbolic Activity, 12, 59-80.]
Wallman, J. (1992). Aping language. Cambridge: Cambridge University Press.
Wason, P. C. & Johnson-Laird, P. N. (1972). Psychology of reasoning: Structure and content. Cambridge, MA: Harvard University Press.
Yamada, J. E. (1990). Laura: A case for the modularity of language. Cambridge, MA: MIT Press.