elminda’s BNA™ technology makes it possible to unravel, in a straightforward manner, the complex neural networks that underlie cognitive functions such as face processing.
Figure A below shows a BNA network representing the processing of a face previously presented to subjects during a delayed match-to-sample task. Observing how this network’s activation unfolds in time (Figure B) reveals three main time points at which a clear network could be observed: 120, 160 and 240 ms. The 120 ms and 160 ms time points correspond to the ERP components P1 and N170.
The P1 component is an early index of endogenous processing of visual stimuli, modulated primarily by their low-level physical characteristics. The N170 is thought to index a mechanism tuned to detect faces or face-related information in the visual field. The 240 ms time point corresponds to the ERP component N250 and is associated with processing of the second-order relational configuration (the spatial relations among facial features), which gives faces their individual distinctiveness and allows identity recognition (Maurer et al., 2002).
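The way component latencies are read off an ERP waveform can be illustrated with a short numpy sketch. Everything below is a synthetic stand-in, not BNA data or the BNA algorithm: the waveform is built from three Gaussian bumps placed at the P1, N170 and N250 latencies mentioned above, and the search windows and amplitudes are illustrative choices.

```python
import numpy as np

# Hypothetical single-channel ERP sampled at 1 kHz over 0-400 ms.
fs = 1000
t = np.arange(0, 0.4, 1.0 / fs)  # seconds

def gauss(center_ms, width_ms, amp):
    """A Gaussian bump standing in for one ERP component."""
    return amp * np.exp(-0.5 * ((t * 1000 - center_ms) / width_ms) ** 2)

# Synthetic P1 (positive, 120 ms), N170 (negative, 160 ms), N250 (negative, 240 ms).
erp = gauss(120, 15, 4.0) + gauss(160, 15, -5.0) + gauss(240, 20, -3.0)

def peak_latency_ms(signal, lo_ms, hi_ms, polarity):
    """Latency (ms) of the largest positive (+1) or negative (-1)
    deflection inside the window [lo_ms, hi_ms]."""
    mask = (t * 1000 >= lo_ms) & (t * 1000 <= hi_ms)
    windowed = signal[mask] * polarity  # flip sign to find negative peaks
    return t[mask][np.argmax(windowed)] * 1000

p1 = peak_latency_ms(erp, 90, 140, +1)
n170 = peak_latency_ms(erp, 140, 200, -1)
n250 = peak_latency_ms(erp, 200, 300, -1)
print(p1, n170, n250)  # peaks recovered at ~120, ~160, ~240 ms
```

Windowed peak picking of this kind is a standard generic ERP measurement technique; window boundaries would normally be chosen from the literature and the grand-average waveform.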
Network activation also reveals the type of spectral activity most dominant at these time points and its spatial location on the scalp. This sequence of events captures the face-processing stages known from the literature: posterior alpha activation appears at the early stages of face processing, in which visual information is encoded, while theta activity emerges at the later semantic stages, probably signifying perceptual memory representations for faces.
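The shift from early alpha to later theta activity can likewise be sketched in numpy. The signal below is entirely synthetic (a 10 Hz burst around 150 ms followed by a 5 Hz burst around 500 ms), and the FFT-based band-pass and analytic-signal envelope is a generic textbook method, not the BNA algorithm.

```python
import numpy as np

fs = 500
t = np.arange(0, 1.0, 1.0 / fs)

# Illustrative signal: an early alpha (10 Hz) burst followed by a
# later theta (5 Hz) burst, mimicking the stage sequence described above.
alpha_burst = np.cos(2 * np.pi * 10 * t) * np.exp(-0.5 * ((t - 0.15) / 0.08) ** 2)
theta_burst = np.cos(2 * np.pi * 5 * t) * np.exp(-0.5 * ((t - 0.50) / 0.10) ** 2)
x = alpha_burst + theta_burst

def band_envelope(signal, lo_hz, hi_hz):
    """Amplitude envelope of the band-limited signal via an FFT
    band-pass and analytic signal (numpy only)."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spec = np.fft.fft(signal)
    keep = (freqs >= lo_hz) & (freqs <= hi_hz)  # positive-frequency band only
    analytic = np.fft.ifft(np.where(keep, 2 * spec, 0))
    return np.abs(analytic)

alpha_env = band_envelope(x, 8, 12)   # alpha band
theta_env = band_envelope(x, 4, 7)    # theta band

print("alpha peak at %.0f ms" % (t[np.argmax(alpha_env)] * 1000))
print("theta peak at %.0f ms" % (t[np.argmax(theta_env)] * 1000))
```

The alpha envelope peaks early and the theta envelope peaks late, reproducing in miniature the temporal ordering of spectral activity described in the text.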
The BNA™ technology combines the time, location and frequency of the brain response with high temporal resolution, making it possible to follow the main stages of cognitive processing in a simple and straightforward manner.