The face recognition model developed by Bruce and Young has eight key components and suggests how we process familiar and unfamiliar faces, including facial expressions. The diagram below shows how these components are interconnected. Structural encoding is where facial features and expressions are encoded. This information is then transmitted simultaneously along two different pathways to various units. One is expression analysis, where the person's emotional state is inferred from their facial features. Another is facial speech analysis, which helps us process auditory information by combining it with lip movements. This was demonstrated by McGurk (1976), who created two video clips, one with lip movements indicating 'Ba' and the other indicating 'Fa'. The sound 'Ba' was played over both clips, yet participants heard two different sounds: those watching the 'Ba' lip movements heard 'Ba', while those watching the 'Fa' lip movements heard 'Fa'. This suggests that visual and auditory information are processed together. Other units include Face Recognition Units (FRUs) and Person Identity Nodes (PINs), where our previous knowledge of faces is stored. The cognitive system contains all additional information; for example, it takes into account your surroundings and who you are likely to see there.
fMRI scans conducted by Kanwisher et al. (1997) showed that the fusiform gyrus was more active during face recognition than object recognition, which supports the idea that face recognition involves a separate processing mechanism. The model suggests that we process familiar and unfamiliar faces differently: familiar faces are processed using structural encoding, FRUs, PINs and name generation, whereas unfamiliar faces are processed using structural encoding, expression analysis, facial speech analysis and directed visual processing.
However, there is evidence from Young et al. suggesting that support for a double dissociation is weak. They studied 34 brain-damaged men and found only weak evidence for any difference between recognising familiar and unfamiliar faces. An issue with this