We investigated how information from face features is combined by comparing sensitivity to individual features with sensitivity to external (head shape, hairline) and internal (nose, mouth, eyes, eyebrows) feature compounds. Discrimination thresholds were measured for synthetic faces under three conditions: (a) full faces; (b) individual features (e.g., the nose); and (c) feature compounds (either external or internal). Individual features and feature compounds were presented both in isolation and embedded within a fixed, task-irrelevant face context. Relative to the full-face baseline, the threshold elevation for the internal feature compound (2.41×) was comparable to that for the most sensitive individual feature (nose = 2.12×); external features showed the same pattern. A model that efficiently combined all available feature information within a single channel overestimated sensitivity to feature compounds. Embedding individual features within a task-irrelevant context reduced discrimination sensitivity relative to isolated presentation; sensitivity to feature compounds, however, was unaffected by embedding. The loss of sensitivity when individual features are embedded within a fixed face context is consistent with holistic processing, which limits access to information about individual features. However, the holistic combination of information across face features is not efficient: Sensitivity to feature compounds is no better than sensitivity to the best individual feature. The absence of an embedding effect when internal feature compounds are placed within task-irrelevant external features (or vice versa) suggests that external and internal features are processed independently.
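The "efficient combination" benchmark against which compound sensitivity is compared can be sketched as quadratic summation of single-feature sensitivities (S_i = 1/T_i), a standard formulation for efficient pooling across independent channels. This is a minimal sketch, not the paper's actual model: the nose threshold elevation (2.12×) is taken from the abstract, but the other individual-feature values are hypothetical placeholders.

```python
import math

def efficient_compound_threshold(feature_thresholds):
    """Predicted compound threshold if information from all features is
    combined efficiently within a single channel: sensitivities (1/T_i)
    add quadratically, so T_compound = 1 / sqrt(sum(1/T_i**2))."""
    return 1.0 / math.sqrt(sum((1.0 / t) ** 2 for t in feature_thresholds))

# Hypothetical threshold elevations relative to the full-face baseline;
# only the nose value (2.12) comes from the reported data.
internal = [2.12, 3.0, 3.5, 4.0]  # nose, mouth, eyes, eyebrows

predicted = efficient_compound_threshold(internal)
# Efficient combination predicts a compound threshold elevation below that
# of the best single feature (2.12), whereas the observed compound
# elevation (2.41) was no better than the nose alone.
print(round(predicted, 2))
```

Because the predicted compound threshold under efficient combination always falls below the best single-feature threshold, the finding that observed compound sensitivity merely matches the best feature is what rules the model out.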
- Face perception
- Face features