As I reread The Structure of Magic recently, I was reminded that Transformational Grammar forms the core of the MetaModel, and yet transformational grammar is generally considered by linguists to be out of date. Years ago, I studied transformational grammar as part of a language teaching Masters program, but my knowledge of later models is very incomplete, so I started a discussion on an online forum about the underlying grammar of NLP and whether it has been updated in any way.
One reply correctly pointed out that this discussion could be considered dogma and has little impact on the ability of NLP to help people. This statement is congruent with the belief that NLP is what works.
… none of the changes in theory really alter the basics of what we do. We listen for places where people’s model of the world is underdefined (that is, where there are deep structure limitations that are causing them to have fewer choices than they would like), and we ask questions that cause them to define those areas. During the process, they wind up with more choices than they had before. Dogma like how language develops, how many unique language families there are, and how the deep structure of the brain functions to develop language (which is what changes the fashion in theory) is largely, at least in my opinion, not material to what we do.
While I agree that the practical implications of NLP are far more important than its theoretical bases, my concern is that, in the long term, claims for NLP will necessarily be weakened by continuing to tie it to a grammar model that is considered out of date. While the number of “unique language families” does indeed seem very peripheral to NLP, the ‘linguistic’ in the name of neuro-linguistic programming implies, to me at least, that the NLP community needs to apply energy to questions such as “how the deep structure of the brain functions to develop language” and to how this knowledge can be used to improve the efficacy of NLP and to move it in new directions.
Not all NLP commentators and writers agree that transformational grammar underlies NLP. For example, in an interview Charles Faulkner says:
That was 79′ and at that time the dominant model in linguistics was Chomsky, and Chomsky, not to go into transformational grammar, but that is also what Grinder claims is the basis of NLP. The uses to which Grinder was put in NLP were, in fact, not transmissional grammar, but in fact, were generative semantics.
The word ‘transmissional’ may be a mis-transcription of ‘transformational’ – the transcript of the interview contains quite a few typos – but this use of ‘transmissional grammar’ appears several times. Although it is not completely clear in places, Faulkner seems to be indicating that he disagrees with Grinder’s strong emphasis on transformational grammar and is suggesting that generative semantics had a much bigger influence.
Note that Generative Semantics is not the same thing as General Semantics. General Semantics is a field started by Korzybski (“The Map is not the Territory”) and is also another acknowledged influence on NLP.
Wikipedia gives a good description of Generative Semantics, the first paragraph of which is quoted below to show how it differs from Transformational Grammar.
Generative semantics is the name of a research program within linguistics, initiated by the work of various early students of Noam Chomsky: John R. Ross, Paul Postal and later James McCawley. George Lakoff was also instrumental in developing and advocating the theory. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his later students.
Generative Semantics may be contrasted with Interpretative Semantics. In Interpretative Semantics, the rules of syntax produced well-formed grammatical sentences, and these sentences were then evaluated using a separate model of semantics (meaning). In contrast, Generative Semantics postulated that interpretations were generated directly by the grammar as deep structures, which were subsequently transformed into recognizable sentences by transformations.
Faulkner explains it as follows:
Chomsky’s work claims that changes of syntactic structure will not mean changes of meaning. In fact, what George Lakoff and Paul Postal and some others have figured out was, in fact changes of syntactic structure did make changes of meaning, and that was one of the big points in NLP, was that how you talk about things and the different kind of grammatical structures you use would imply or infer certain different kinds of thinking processes. That is fundamental to NLP. Whereas in fact, Chomsky’s work did not support that.
Later in the interview, Faulkner suggests more strongly that NLP’s reliance on transformational grammar should be reconsidered:
is it time, for example that we ditched transformational grammar, at least in our brochures, because Chomsky ditched transformational grammar in 1980. Maybe we want to update the epistemological basis of what we do. Maybe we want to support, by donation or effort, some serious jury research into the veracity, the viability of certain claims, either of distinction, like the eye movements of language, or actual interventions in protocols. Like the phobia process or the switch pattern or something. Without that kind of an effort, why would it be credible?
Faulkner has taken his own advice and is currently involved with NLP research at the University of Surrey. The website introduces itself with the following description:
This website is an information hub for people interested in research into Neuro-Linguistic Programming, especially in fields of management, coaching and adult learning. Its purpose is to link practitioner and academic researchers to relevant resources.
This posting is just a small beginning to my thoughts on this matter. I have started reading further in cognitive linguistics and other areas of grammar in an attempt to bring myself more up to date, so that I can play a small part in the research-based approach advocated by Faulkner and others such as Dr Paul Tosey and Dr Jane Mathison at the University of Surrey. If NLP is to truly reach its full potential in the future, I believe that it needs to build up a solid base of evidence as well as a robust yet flexible theoretical framework.
***
©Copyright 2010 by Dr. Brian Cullen