Well, this is Nick’s Do-It-Yourself baby, mostly. It sure looks like we found a larger N170 response to faces (3 subjects, no stats) than to houses, cars or animals (a living/nonliving distinction was the task). We wanted to do an inversion study with the faces to see if the enhanced N170 would be abolished but the equipment broke.
Note, though, that the equipment did pick up other signatures, such as a pronounced and normal-looking auditory Mismatch Negativity (MMN) – again, only 2 subjects and no stats.
This is promising stuff on the waaaay cheap – we once saw a Grass 79D amplifier on eBay for $99!
I’m pretty excited about this work.
To summarize: we give people a face to look at for either 400 ms or 2 seconds. Then, after a brief delay, they get a probe face for only 120 ms and have to say whether it is the same as or different from the initial face. On manipulated trials, the probe is changed in the eye region on one side: either the eye alone is moved horizontally or vertically, or the eye and eyebrow are moved together horizontally or vertically. Because the probe is so brief, we presume the changed eye falls in the visual field projecting to the opposite brain hemisphere for initial processing.
The participants are given no explicit instructions about what to pay attention to, etc.
We know that the change is registered, because people are slower to say “same” to same-identity faces when the probe has been manipulated than when it has not.
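The trial structure above can be sketched in a few lines. This is just an illustrative mock-up of the design, not the actual stimulus code: the condition labels, counterbalancing, and repeat counts are my own assumptions.

```python
# Minimal sketch of the paradigm's trial list (illustrative assumptions only).
import itertools
import random

STUDY_MS = [400, 2000]        # initial face: brief vs. extended study time
PROBE_MS = 120                # probe is flashed too briefly for a saccade
MANIPULATIONS = [
    "none",                   # unchanged probe (baseline for the RT delay)
    "eye_horizontal",
    "eye_vertical",
    "eye_brow_horizontal",
    "eye_brow_vertical",
]
SIDES = ["left", "right"]     # side of the eye-region change; a change on one
                              # side projects first to the opposite hemisphere

def make_trials(n_repeats=2, seed=0):
    """Fully crossed, shuffled trial list (side is moot for unchanged probes)."""
    trials = []
    for study_ms, manip in itertools.product(STUDY_MS, MANIPULATIONS):
        for side in (SIDES if manip != "none" else ["n/a"]):
            for _ in range(n_repeats):
                trials.append({"study_ms": study_ms, "probe_ms": PROBE_MS,
                               "manipulation": manip, "side": side})
    random.Random(seed).shuffle(trials)
    return trials

trials = make_trials()
print(len(trials))  # 2 study x (4 changes x 2 sides + 1 unchanged) x 2 repeats = 36
```

The fixed seed just makes the shuffle reproducible across sessions; in a real run you would vary it per participant.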
But the key thing is that in the 400 ms initial-face condition, when you don’t have much time to scan and “memorize” the image, the disruption is right-hemispheric and is seen especially after combined eye-and-eyebrow movements. When you have more time to study the initial face (2 s), we think you commit some of the small metric distances to memory quite well, and that is why the “local” changes to the eye alone are noticed more in the left hemisphere, which is known to process “parts” more effectively. This all fits with the neuroimaging literature of the past 10 years.