Mapping Brain Function

Face processing is an example of a cognitive function that may seem trivial but in fact depends on a complex neural network involving multiple processing steps.

Evidence for these multiple steps in face processing comes from recordings of event-related potentials (ERPs), which are neural responses measured while subjects perform a face/non-face discrimination task. Differences in ERP components make it possible to determine which stage(s) of processing are affected by a specific functional impairment.
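An ERP is typically extracted by averaging many EEG epochs time-locked to stimulus onset, so that random background activity cancels out and the stimulus-evoked component remains. The following is a minimal illustrative sketch using synthetic data (not elminda's BNA pipeline); the sampling rate, amplitudes, and the N170-like component are assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch: averaging stimulus-locked EEG epochs to recover an ERP.
rng = np.random.default_rng(0)

fs = 250                           # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)   # epoch window: -100 ms to 500 ms

# Hypothetical evoked component: a negative deflection peaking near 170 ms,
# loosely modeled on the face-sensitive N170.
evoked = -2.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))

# Single trials = evoked response buried in much larger background noise.
n_trials = 200
epochs = evoked + rng.normal(0.0, 5.0, size=(n_trials, t.size))

# Averaging across trials attenuates the noise (~1/sqrt(n_trials)),
# revealing the component that is invisible in any single trial.
erp = epochs.mean(axis=0)

peak_time = t[np.argmin(erp)]
print(f"ERP peak latency: {peak_time * 1000:.0f} ms")
```

The key design point is that the evoked response is phase-locked to the stimulus while the background activity is not, which is why simple averaging isolates it.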

Face processing is now believed to involve three separable levels of processing. First, general low-level feature analysis is performed (Bruce and Young, 1986). This leads to the second level: first-order relational configuration (eyes above nose, nose above mouth), which supports the holistic perception of faces (i.e., a face versus a non-face) (Maurer et al., 2002). The third level is second-order relational configuration (the spatial relations among facial features), which gives faces their individual distinctiveness and allows identity recognition (Maurer et al., 2002).

elminda’s BNA™ technology makes it possible to unravel, in a straightforward manner, the complex neural networks that underlie cognitive functions such as face processing, thereby opening a new window into our brain’s functionality.
