NLP, as you may know, stands for neuro-linguistic programming: neuro (brain and thinking), linguistic (pertaining to language and its use) and programming (creating algorithms to run specific processes in response to identified cues). In case it is not obvious, this description refers to processes, and process instructions, for responding to patterns detected in the world. The content of the detected material does not contribute to the choice of algorithm; the form remains constant across different content expressions on different occasions.
Imagine a sunbeam shining in through a window. If the window is clean and the air is clear, you can see a window-shaped patch of sunlight on the floor. If the room is smoky, you can detect the whole length of the sunbeam from window to floor and see exactly where it starts and ends. To follow the content, we would fixate on the need for smoke to reveal the path of the sunbeam. When we follow form, or pattern, we need something off which the light can reflect to show where the sunbeam is. We could stick our arms into the light and wave them around, throw flour into the sunbeam, drop glitter or dead leaves through the air, or maybe stir up dust or talcum powder. Any of these would render the sunbeam visible, which was our outcome in this experiment.
Now imagine having a set of questions that respond to the form of a person’s use of language. The outcome is to gather high quality information about any content under discussion, and to do so without deep subject knowledge. This makes content-free coaching possible and effective. We can use linguistic form to help anyone, even experts, refine their thinking on their own subject, or to get user-friendly and accurate directions to someone’s office. We could map the form into another language and still follow the same cues.
Grinder and Bandler (mostly Grinder) developed neuro-linguistic programs for gathering high quality information in any context, and these follow linguistic form. The most comprehensive set of language patterns for information gathering is the Meta Model of Language, the first comprehensive, form-based linguistic model developed for this purpose. The meta model applies specific questions, known as ‘challenges’, to 13 linguistic forms, or ‘violations’, each of which belongs to the class of linguistic distortions, generalizations or deletions.
The intent of challenging meta model violations is to bring accuracy to distorted comments, specificity to over-generalised comments and restoration of information to deleted comments, regardless of the subject matter. This is designed to give the challenger the information they need, and/or to train the speaker or writer to think more clearly about the content under discussion. The meta model is applicable to anything that humans talk or write about.
Meta model challenges can be blunt. There are many stories of students learning the meta model and annoying the hell out of unsuspecting friends and relations when they first use the patterns outside class. Rapport-maintaining activity, softeners surrounding the questions and gentle voice tones can all help to keep the subject interested and comfortable while finding the additional information called for by a challenge. Framing (explaining one’s intentions and what one is doing) is a great rapport enhancer, as the subject is then included in the process instead of being at the sharp end of it.
This is a lot of material to teach in one go, but it is essential for anyone doing a comprehensive generic NLP training.
There is a shorter version, the ‘Precision Model’, described in a book of that name by Grinder and MacMaster. The precision model is a cut-down version of the meta model that covers challenges to generalization and deletion patterns. Like the newer specifier question model below, the precision model applies the questions ‘what, specifically?’ and ‘how, specifically?’ to unclear nouns and verbs, describing these challenges as ‘noun blockbusters’ and ‘verb blockbusters’, respectively. The precision model also includes meta model challenges to statements of belief, known as modal operators of possibility (can, may, could and their opposites) and necessity (have to, must, should and their opposites), and to universal quantifiers (all, every, never, no-one). The precision model was designed to give people in business a shorter skill set than the meta model, one that would enable them to communicate more effectively and give and receive better quality instruction, but with less training and practice time.
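For readers who think in code, the precision-model word classes listed above can be illustrated as a simple scan over a sentence. This is a hedged sketch, not anything from the book: the function name `find_violations` and the exact word lists are my own assumptions, limited to the cue words the paragraph names.

```python
import re

# Word classes named in the precision model (as listed above).
# 'cannot' is listed before 'can' is irrelevant here because each
# word is matched on its own word boundaries.
PATTERNS = {
    "modal operator of possibility": ["cannot", "can", "may", "could"],
    "modal operator of necessity": ["have to", "must", "should"],
    "universal quantifier": ["all", "every", "never", "no-one"],
}

def find_violations(sentence: str):
    """Return (class, cue word) pairs found in the sentence.

    Hypothetical helper for illustration only; real listening for
    these patterns is done by ear, in conversation.
    """
    text = sentence.lower()
    hits = []
    for label, words in PATTERNS.items():
        for word in words:
            # \b keeps 'can' from matching inside 'cannot',
            # and 'every' from matching inside 'everything'.
            if re.search(r"\b" + re.escape(word) + r"\b", text):
                hits.append((label, word))
    return hits

print(find_violations("I can never finish everything"))
# [('modal operator of possibility', 'can'), ('universal quantifier', 'never')]
```

A scan like this only flags candidate words; deciding whether a flagged word is worth challenging, and doing so with rapport, remains a human judgement.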
For the many other people who could use a hand with giving and receiving information, Grinder and Bostic have now pared down the meta model to just two questions. You can use this model straight away, again, with rapport, after reading this page. The instructions are very simple.
‘What (noun), specifically?’ is asked in response to nouns, both abstract and concrete, that could be clearer. ‘(Verb), how, specifically?’ is asked in response to unspecified and unclear verbs. Grinder recommends starting with the nouns. As with the meta model, a single question may not be adequate, but with repeated questioning with rapport, the desired specificity is obtainable, provided the subject knows the answers.
Altering the form weakens the effect of these questions. While you can ask ‘Which (noun), specifically?’ instead of ‘What (noun), specifically?’, if you ask ‘What kind of (noun), specifically?’ you are eliciting a different class of response and it is not going to produce results. Ask ‘Which car, specifically?’ or ‘What outcome, specifically?’. With verbs ask ‘Walk, how, specifically?’ or ‘Put it down, how, specifically?’.
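The two question forms above are strict templates, so they can be sketched as string builders. A minimal illustration, assuming made-up helper names (`specify_noun`, `specify_verb`); the templates themselves follow the wording given above.

```python
def specify_noun(noun: str) -> str:
    """Build the noun challenge: 'What (noun), specifically?'"""
    return f"What {noun}, specifically?"

def specify_verb(verb_phrase: str) -> str:
    """Build the verb challenge: '(Verb), how, specifically?'"""
    return f"{verb_phrase.capitalize()}, how, specifically?"

# The examples from the text:
print(specify_noun("outcome"))      # What outcome, specifically?
print(specify_verb("walk"))         # Walk, how, specifically?
print(specify_verb("put it down"))  # Put it down, how, specifically?
```

Note that the templates leave no room for variation such as ‘What kind of (noun)?’, which, as the paragraph above explains, elicits a different class of response.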
From the meta model, notice that the nouns and verbs being questioned contain linguistic deletions, and remember that the most effective order in which to challenge meta model violations is distortions first, then generalizations, and deletions last. With this specifier model, Grinder proposes using specifier questions on nouns and verbs wherever there is a need to know. This includes nouns and verbs present in distorted and generalized sentences, too.
It is possible and functional to use specifier questions as Grinder proposes, because meta model violations occur layered within a single sentence, so specifying nouns and verbs contributes to clarifying distortions and specifying generalizations, as well as restoring deleted material. Not only does every sentence derive from unspoken assumptions, every sentence also includes nouns and verbs that could be more specific, regardless of any overarching distortion or generalization in the larger text.
To find more on the specifier question model, follow up Grinder’s ‘Verbal Package’ in the New Code of NLP.
The Verbal package is taught as part of our one-day course, ‘The Rules of Engagement’.
© 2008 Jules Collingwood.