Today, one hallmark of a poor grasp of NLP is the notion that a person “is” a representational system, as in “you are a visual, he is an auditory, and I am a kinaesthetic.” A variation, “you are an auditory-to-visual-to-kinaesthetic and he is a visual-to-kinaesthetic-to-auditory,” is just as benighted; the box is merely bigger, and if the very idea confuses you, that confusion is part of how its user gets people to believe it. People do not have single, fixed sequences of thinking, however neatly others try to box them. It is facile to try to identify a person from a single or small sample of expression, but this class of labelling does make an excellent warning sign when assessing a possible source of NLP training or practice.
“Representational systems” is the name of a model of the way we code and order our thinking, memory, and imagination. The model proposes that people think in combinations and sequences of images, sounds, sensations, tastes, and scents. These internal representations match our external senses and, when elicited in an associated form, like the sensory experience of being there, use the same neurological circuits as sensory experience. We distinguish linguistically between live sensory experience and internal representation by referring to sensory or representational vision, sound, feeling, etc.
Everyone can use all internal representational systems simultaneously when attending internally, just as we can attend externally with all our senses, but often, only one system is in conscious awareness at any given moment. The supporting observations for this rely on personal reporting, choice of sensory-specific words, known as “predicates,” and the external evidence of eye-accessing cues.
The eye-accessing model proposes that people use location to gain access to the content of memory and imagination (this includes patterns). Material in different representations is accessed from particular locations by a flick of the eyes in the appropriate direction. The majority of people access visual representations by flicking their eyes above the eye line. Auditory or sound representations are sourced horizontally, and feeling, both sensation and proprioception, is found below eye level.
The distinction between accessing memory and constructed ideas is less clear-cut. While a majority keeps memory to the left of the body and imagination to the right, a sizeable minority does the reverse. Contrary to speculation in some NLP literature, the idea of a “normally organised right-handed person” is not reliable. Ideally, to use eye-accessing to help someone retrieve information, we need to know exactly where each class of information resides for that person. We do this by asking questions to elicit deliberate accessing in each representational system, with reference to the past or the future. Questioning for future accessing needs to seek completely fresh ideas, to ensure they have not already been transferred to memory.
When information is accessed, it can be reviewed with the eyes on its location, or it can be brought into our visual and/or auditory field and/or felt, smelled, tasted in the body. We can detect sequences of representation in someone else’s thinking through the sensory predicates they use and the directions of their eye movements.
There is a choice, usually exercised unconsciously, of being aware of one or more representations simultaneously. When a memory or proposed situation is activated, we can become totally engrossed in it as if we were present in real time. Then we can experience all representational systems at once. If we represent the information as if from a distance, we might only see it or hear it, but in both of these possibilities, the use of more than one representational system is simultaneous.
Synaesthesia is another option. This occurs when we experience a representation, usually in a different system, in response to a sensory input or representation. Examples include: see favourite pet – feel warm glow; hear scratch on blackboard – feel teeth stand to attention; hear piece of music – see selection of colours. Synaesthesia is also the structure of phobias; see or hear phobic stimulus – experience disproportionately nasty feeling. The eye accessing evidence can be a fast flick of the eyes from one system to another, but this is seen with rapid multi-representational thought as well. If the eyes are defocused and facing front, this usually indicates a synaesthesia is happening. Synaesthesia can include more than two representational systems, though most reporting refers to two.
Outside of NLP, most people are unaware of the way they use their internal representations or even that they have them. Synaesthesia is commonly defined as a condition a few people exhibit, not a choice. Some people are convinced they do not visualise and cannot learn to do it. In NLP, it is presupposed that we can learn to track our current uses of internal representations and learn to use the parts we have not known before. We can separate unwanted synaesthesias, create new and desirable ones, expand our repertoire of thinking by including habitually ignored representations and facilitate our capacity to learn with deliberate mental photographs and sound recordings. We can change the meaning we attribute to any content we think about by altering the size, volume, bandwidth, clarity, shape, brightness, temperature, distance, speed, etc. of our representations of it. This uses a related model called submodalities, which considers the packaging in which an image, sound, or sensation is presented to us.
When Grinder and Bandler first became aware of representational systems and eye-accessing cues, it was through observation and listening. Grinder describes in “Whispering in the Wind” hearing a conversation between two people in a petrol service station and becoming aware that they were using sensory-specific words to each other, but from different senses. This did not produce smooth communication, and it drew Grinder’s attention.
Grinder and Bandler conducted experiments with training groups, creating sub-groups based on sensory-specific language. When they put strangers together according to the representational system used in their greeting, conversations were freer and more spontaneous in the group than when people were placed with others who greeted in different sensory predicates.
Initially, the idea of a preferred representational system was postulated not to identify or label people, but as a basis for further research, which has been taking place ever since with excellent results. But the common tendency to take a single example, or an open proposal, and overgeneralise from it took hold, and the NLP community of the day welcomed the idea with open arms. Regardless of further observation and discovery over the last 35 years, including evidence that we shift between representations when thinking and use all of them in different sequences or strategies, the original postulate has become an icon.