FIXATION-CONTINGENT PRESENTATION OF STIMULI IN MEG
Laura Barca and I ran an MEG experiment in which participants were
instructed to fixate on a central point on a screen positioned 1 metre in
front of them. A stimulus (e.g. a word) was then presented very briefly
to the left or right of the fixation point. The aim was to track the
processing through the brain of words presented in the right visual field,
which project directly to the left (language) hemisphere, or in the left
visual field, which project first to the right (non-language) hemisphere
and presumably need to be transferred across to the left hemisphere via
the corpus callosum for processing.
In the absence of eye movement monitoring we had to trust our participants
to fixate centrally, and to rely on the brief presentations to argue that
they could not have re-fixated in the time that the stimulus was on the
screen. We can also point to differences in the patterns of brain
activation we observed as evidence that we succeeded in controlling
presentation as intended on most trials of the experiment.
There are, however, people who may end up reviewing grant applications for
further work and who get very animated about the need for accurate
fixation control in this kind of experiment. There are also two
strands of future research that may need more accurate monitoring. One is
work I would like to do following up Lisa Henderson's MSc project
comparing the responses of dyslexic and non-dyslexic students to words
presented in the left and right visual fields. The other is a project by
a PhD student of Richard Wise at Imperial College London, Zoe Woodhead (a
former undergraduate of ours), who wants to use the York MEG system to look
at word recognition in patients with hemianopias following occipital
strokes. Both dyslexic and hemianopic participants may be assumed to have
greater difficulty controlling their fixation than 'normal' participants,
and good fixation control would be especially helpful for those studies.
It would also help with studies I would like to do in which words will be
presented centrally on the assumption that certain letters fall to the
left of fixation while other letters fall to the right.
What would be nice to have, then, is a way of ensuring that stimuli are
only displayed on the screen when a participant is fixating on, or close
to, a central fixation point. We normally offset the inner edge of our
stimuli by 1.5 degrees, so it would be good to define a central fixation
sphere with a radius of 0.5 degrees at a viewing distance of 1 m from the
participant, and only to present the stimulus on a given trial when
fixation is within that sphere. So:
1. Would the resolution of the system allow us to know when someone is
fixating within a sphere that has a diameter of 1 degree at a distance of
1 metre? Would a finer resolution be possible?
2. What would be the minimum time between registering that fixation is
within the defined region and a stimulus appearing on the screen? I want
to avoid suggestions that participants may have moved their eyes in the
interval between fixation being registered and the stimulus being
presented. It might help to present the stimulus only after fixation has
remained within the central sphere for a certain period of time, in order
to exclude the possibility that participants were sweeping their eyes
through the sphere when presentation was triggered (the first sketch after
this list illustrates such a dwell-time criterion).
3. Richard Wise and Zoe Woodhead would be interested in a variant of this
procedure where a stimulus remains on the screen for as long as fixation
remains within the central region. This would allow more prolonged
presentation of stimuli to, for example, patients with hemianopias whose
processing of visual inputs may be relatively slow (the second sketch
after this list shows this gaze-contingent variant). I know that Richard
and Zoe have toyed with presenting "sentences" in one or other visual
field by displaying one word after another at the same position. That
would be OK to do if we could know that fixation remained central
throughout.
4. Finally, I gather from the meeting last night that only one eye is
monitored. There is quite a lot of discussion in the literature about how
often the two eyes focus on the same point, and how often there is either
'crossed' or 'uncrossed' fixation. We would need to think about this, and
whether we should, for example, put a patch over the unmonitored eye.
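
To make the geometry in question 1 and the dwell-time idea in question 2
concrete, here is a minimal sketch in Python. It is purely illustrative:
get_gaze_position() is a hypothetical placeholder for whatever gaze-sample
call the eye tracker actually provides, gaze coordinates are assumed to
arrive in mm relative to the screen centre, and the 100 ms dwell time is
an assumed value, not a recommendation. How fast such a loop could
realistically poll, and the latency between a sample and a display change,
are exactly the things I am asking YNiC about.

    import math
    import time

    VIEWING_DISTANCE_MM = 1000.0   # screen 1 m from the participant
    FIXATION_RADIUS_DEG = 0.5      # radius of the central fixation region
    DWELL_REQUIRED_S = 0.100       # assumed dwell time before triggering

    # 0.5 deg at 1 m is about 8.7 mm on the screen:
    # 1000 mm * tan(0.5 deg) = 8.73 mm, so a region 1 deg in diameter
    # spans roughly 17.5 mm.
    FIXATION_RADIUS_MM = VIEWING_DISTANCE_MM * math.tan(
        math.radians(FIXATION_RADIUS_DEG))

    def gaze_within_fixation_region(gaze_xy_mm):
        """True if gaze is within FIXATION_RADIUS_MM of screen centre (0, 0)."""
        x, y = gaze_xy_mm
        return math.hypot(x, y) <= FIXATION_RADIUS_MM

    def wait_for_stable_fixation(get_gaze_position):
        """Block until gaze has stayed inside the central region for
        DWELL_REQUIRED_S. Requiring continuous dwell excludes trials where
        the eyes were merely sweeping through the region when a single
        sample happened to catch them inside it."""
        dwell_start = None
        while True:
            if gaze_within_fixation_region(get_gaze_position()):
                if dwell_start is None:
                    dwell_start = time.monotonic()
                elif time.monotonic() - dwell_start >= DWELL_REQUIRED_S:
                    return
            else:
                dwell_start = None  # left the region: dwell clock restarts

    # Hypothetical trial sequence: trigger the display only once fixation
    # is stable. present_stimulus() stands in for the real display call.
    #   wait_for_stable_fixation(get_gaze_position)
    #   present_stimulus()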
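
Question 3's variant might then look like the following sketch, reusing
gaze_within_fixation_region() from above. Again, show_stimulus() and
hide_stimulus() are hypothetical placeholders for the real display calls,
and in practice the polling would need to be fast enough for the stimulus
to disappear within a frame or two of fixation being lost.

    import time

    def present_while_fixating(get_gaze_position, show_stimulus,
                               hide_stimulus, max_duration_s=5.0):
        """Keep a stimulus visible only while fixation stays central.

        Returns True if fixation stayed within the central region for the
        whole presentation, or False if the stimulus was removed early
        because gaze left the region."""
        onset = time.monotonic()
        show_stimulus()
        try:
            while time.monotonic() - onset < max_duration_s:
                if not gaze_within_fixation_region(get_gaze_position()):
                    return False  # gaze left the region: remove stimulus
        finally:
            hide_stimulus()       # always runs, even on early return
        return True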
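
For the word-by-word "sentences", the same routine could simply be called
once per word at a fixed eccentric position, abandoning the sequence as
soon as fixation is lost. draw_word() and clear_screen() below are, again,
hypothetical stand-ins:

    for word in ("the", "cat", "sat"):                # illustrative words
        held = present_while_fixating(
            get_gaze_position,
            show_stimulus=lambda w=word: draw_word(w),
            hide_stimulus=clear_screen,
            max_duration_s=0.4)                       # assumed per-word time
        if not held:
            break  # fixation left the central region: stop the sequence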
I am posting this on ynic-users because we were encouraged to do so. I am
hoping for a sober response from YNiC which is realistic about what could
and could not be done, any difficulties we may run into, and the time that
would be required to implement a system like this (assuming that it is
do-able). At the moment all I need to know is what could or could not be
achieved, so that I can write it with confidence into grant applications.
Other people with an interest in vision and MEG (Silvia, Piers, Andy Y
etc.) may want to chip in so that YNiC can get a fuller understanding of
what people would like to have in the way of integrated fixation
monitoring and stimulus presentation.
Andy Ellis
19 Oct 2007