Dear Users
Yesterday we had an extremely useful brainstorming session.
The discussion brought to light some issues about using beamforming
which have not yet been reported in the literature. As these
observations might affect other users, we thought it would be of
interest to bring them to the attention of all users.
Uzma Urooj, supervised by Andy Ellis with Michael Simpson as SLO,
identified two particularly puzzling issues when using the beamforming
approach to (a) localise time-locked activity and (b) create virtual
electrodes.
(a) The first observation is that if one constructs an NAI map from a
beamformer, the statistically significant blobs do not necessarily align
with the largest-amplitude averaged response seen with virtual
electrodes. The user performed a standard beamforming analysis of some
data and localised the most significant blob. They then placed virtual
electrodes at this point and around it, and at each of these virtual
electrode positions calculated the average response across epochs. The
largest-amplitude evoked response was not at the location of the
beamformer blob. This observation has also been replicated in simulation
studies by Mark Hymers, in which he placed dipoles at known locations in
a model brain with realistic background noise.
This occurs because of the way the beamformer works. The beamformer is
a set of spatial filters designed to measure the brain activity (the
NAI) from each brain location in turn. The filters are constructed to
ensure that activity comes from only one location at a time; all other
locations are suppressed. This is done by computing the power within the
MEG signal and ensuring that it is measured with a gain of one from a
specified location and a gain of zero from all other positions. This is
the key point: the beamformer is specified in terms of MEG power. The
power in an MEG signal can come from two forms of oscillation, those
that are time-locked to the stimulus and those power changes that are
produced by a stimulus but are not time-locked in terms of oscillatory
phase (often called stimulus-induced power). Thus the blobs created by
the beamformer programme are in terms of total power (time-locked plus
induced).

When the averaged virtual electrode responses were computed, these were
measures of time-locked activity only. The user who brought this to our
attention has made an important observation: within a specific region,
brain responses that are time-locked may be at a different location from
induced power changes. As we do not know the relationship between fMRI
haemodynamic changes and MEG oscillations (time-locked vs induced), it
may be that beamformer results will not always align with fMRI results
either. This does not mean that the beamformer is wrong in some way. It
means that we have to be very careful in how we interpret beamformer
output in terms of blobs and virtual electrodes. There is clearly a need
to be able to distinguish between time-locked and induced responses. To
this end, Will Woods has been working on an 'evoked' beamformer.
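To make the distinction concrete, below is a minimal numpy sketch of a
generic LCMV-style beamformer (unit gain at the target location, minimum
variance from everywhere else). It is only a sketch on synthetic data,
not code from any YNiC tool: one source carries both a time-locked
component (same phase every epoch) and an induced component (random
phase per epoch), and the virtual electrode's total power is compared
with the power of its epoch average.

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_times, n_epochs = 20, 100, 50
t = np.linspace(0.0, 0.2, n_times)

# One source with a time-locked part and an induced part, projected to
# the sensors through an illustrative random lead-field column.
lead = rng.standard_normal(n_sensors)
epochs = np.empty((n_epochs, n_sensors, n_times))
for k in range(n_epochs):
    evoked = np.sin(2 * np.pi * 10 * t)            # same phase every epoch
    induced = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
    noise = 0.5 * rng.standard_normal((n_sensors, n_times))
    epochs[k] = np.outer(lead, evoked + induced) + noise

# Sensor covariance over all epochs, then the LCMV weights
# w = C^-1 l / (l' C^-1 l): unit gain at the target, minimum power
# from all other locations.
C = np.cov(epochs.transpose(1, 0, 2).reshape(n_sensors, -1))
Ci_l = np.linalg.solve(C, lead)
w = Ci_l / (lead @ Ci_l)

ve = np.einsum('s,kst->kt', w, epochs)        # virtual electrode, per epoch
total_power = np.mean(ve ** 2)                # time-locked + induced (NAI-like)
evoked_power = np.mean(ve.mean(axis=0) ** 2)  # power of the epoch average
print(f"total power {total_power:.3f}, evoked power {evoked_power:.3f}")
# The induced (random-phase) part contributes to total power but largely
# cancels in the epoch average, so the two measures can disagree.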
Implications for users of the beamformer
NAI statistical maps reflect total power changes within a band and can
contain both induced and time-locked responses, just induced power
changes, or just time-locked responses. In the last case the virtual
electrode averages will align with the NAI maps; in the other two cases,
NAI maps may not align with VE maps. The issue for users is which form
of response should be used to test a specific hypothesis. This will be
down to the individual user, who will have a specified model for their
particular experiment in terms of how stimuli may be encoded.

NAI maps are estimates of statistical changes in total power; these maps
are related to the mean and variance of the power in the oscillations
produced by both induced and time-locked responses.

Averages of VEs provide estimates of the average time-locked response
only.
(b) The second interesting observation made by Uzma related to the
construction of time series of virtual electrode activity for every
epoch. Uzma used a specific time window (say 200 milliseconds) to
estimate weights for a beamformer, then used these weights to estimate
the virtual electrode time series for a much longer window, extending
well beyond the original 200 milliseconds, for every epoch. The virtual
electrode output (three orthogonal components) was manipulated to create
an estimate of power as a function of time, by squaring each VE
component and summing the squares. This gives a measure of power as a
function of time for each epoch. Uzma then averaged these power time
series, producing the average power change as a function of time. Uzma
made an observation that has not been reported before: beyond the end of
the original 200 milliseconds, a large slow change in power was
observed. If this was all repeated with a window 1000 milliseconds long,
the slow change in power was now observed for times greater than 1
second.
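For clarity, the power computation described above amounts to the
following small numpy sketch (the array shapes are illustrative
assumptions, not fixed by any YNiC tool):

import numpy as np

def average_power_timecourse(ve):
    """ve: (n_epochs, 3, n_times) virtual electrode component series.
    Square each orthogonal component, sum over components to get power
    as a function of time per epoch, then average across epochs."""
    power = np.sum(ve ** 2, axis=1)   # -> (n_epochs, n_times)
    return power.mean(axis=0)         # -> (n_times,)

# e.g. 50 epochs, 3 orthogonal components, 400 time samples
rng = np.random.default_rng(1)
mean_power = average_power_timecourse(rng.standard_normal((50, 3, 400)))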
This slow power change beyond the weight-estimation window is a very
important observation. The current thinking is that this is a property
of the beamformer, and it might arise as follows. The beamformer weights
are estimated from the covariance matrix of the original MEG data within
the specified window (say 200 milliseconds). Covariance is related to
power within a signal; that is why a set of beamformer weights is said
to detect the power from a specific location and no other. If a window
is only 200 milliseconds long, a very poor estimate will be made of
power below 5 Hz (1/0.2 s). Thus this beamformer will not be able to
deal correctly with slow changes in power, and estimates beyond the
original window may be inaccurate.
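As a rough rule of thumb (our extrapolation, not a result from the
session), a covariance window of length T seconds cannot characterise
oscillations much below f_min = 1/T: a 200 ms window gives 1/0.2 = 5 Hz,
and a 1000 ms window gives 1/1.0 = 1 Hz. This is consistent with the
slow power artefact appearing beyond the window length in both cases
Uzma examined.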
Implications for users of the beamformer
When using a beamformer to estimate changes in power (especially
induced power) over time, observations are only reliable when made
within the time constraints of the original window length used to
construct the beamformer. The problem with this is that a beamformer
works best if the window length matches the duration of the induced
power change, especially if performing a statistical comparison between
two conditions. If long windows are used and the induced power changes
occupy only a fraction of the window length, then statistical power will
be lost. In this case it would be best to use moving windows, as in the
Pammer and Cornelissen paper.
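A minimal sketch of what a moving-window analysis could look like,
reusing the LCMV-style weights from the sketch above. This is an
illustrative outline under the same synthetic assumptions, not the
method of the Pammer and Cornelissen paper itself:

import numpy as np

def moving_window_power(epochs, lead, win_len, step):
    """epochs: (n_epochs, n_sensors, n_times); lead: (n_sensors,).
    Recompute covariance, weights and mean VE power within each window,
    so the window length can match the expected induced response."""
    n_epochs, n_sensors, n_times = epochs.shape
    starts, powers = [], []
    for start in range(0, n_times - win_len + 1, step):
        seg = epochs[:, :, start:start + win_len]
        C = np.cov(seg.transpose(1, 0, 2).reshape(n_sensors, -1))
        Ci_l = np.linalg.solve(C, lead)
        w = Ci_l / (lead @ Ci_l)            # LCMV weights for this window
        ve = np.einsum('s,kst->kt', w, seg)
        starts.append(start)
        powers.append(np.mean(ve ** 2))
    return np.array(starts), np.array(powers)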
----------------
I believe this demonstrates that brainstorming sessions are quite
important activities. Some people might be concerned about bringing
problems to the attention of other users, but as the above shows, doing
so can be invaluable in highlighting problems that are not fully
appreciated.
Many thanks to the user who brought this to our attention.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
October 25th 4pm YNiC Open Plan
1. Project presentation: Michael Simpson
2. Project presentation: Laura Lee
3. Work in progress - a look at an ongoing project
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
I thought I would circulate the following talk to YNiC users as it
could be of interest to many. This is a talk presenting the results
of an fMRI experiment after extensive training with different kinds
of sounds.
Speech-like processing of nonspeech sounds following extensive
categorization training.
Dr James Keidel, University of Manchester
Monday, Oct 22nd, 2007, 12.30pm, C108, Dept of Psychology
Silvia
Silvia Gennari
Department of Psychology
University of York
Heslington, York
YO10 5DD
United Kingdom
FIXATION-CONTINGENT PRESENTATION OF STIMULI IN MEG
Laura Barca and I ran an MEG experiment in which participants were
instructed to fixate on a central point on a screen positioned 1 metre in
front of them. A stimulus (e.g. a word) was then presented very briefly
to the left or right of the fixation point. The aim was to track the
processing through the brain of words presented in the right visual field,
projecting directly to the left (language) hemisphere, or in the left
visual field, projecting first to the right (non-language) hemisphere and
presumably needing to be transferred across to the left hemisphere via the
corpus callosum for processing.
In the absence of eye movement monitoring we had to trust our participants
to fixate centrally, and to rely on the brief presentations to assert that
they could not have re-fixated in the time that the stimulus was on the
screen. We can also point to differences in the patterns of brain
activation we observed as evidence that, on most trials of the
experiment, we succeeded in controlling presentation as we wanted to.
There are, however, people out there who could end up reviewing grant
applications for further work who get very animated about the need for
accurate fixation control in this kind of experiment. There are also two
strands of future research that may need more accurate monitoring. One is
work I would like to do following up Lisa Henderson's MSc project
comparing the responses of dyslexic and non-dyslexic students to words
presented in the left and right visual fields. The other is a project by
a PhD student of Richard Wise at Imperial College London, Zoe Woodhead
(a former undergraduate of ours), who wants to use the York MEG system
to look
at word recognition in patients with hemianopias following occipital
strokes. Both dyslexic and hemianopic participants may be assumed to have
greater difficulty controlling their fixation than 'normal' participants,
and good fixation control would be especially helpful for those studies.
It would also help with studies I would like to do in which words will be
presented centrally on the assumption that certain letters fall to the
left of fixation while other letters fall to the right.
What would be nice to have, then, is a way of ensuring that stimuli are
only displayed on the screen when a participant is fixating on, or close
to, a central fixation point. We normally offset the inner edge of our
stimuli by 1.5 degrees, so it would be good to define a central fixation
sphere with a radius of 0.5 degrees at a distance of 1 m from the participant,
and only to present the stimulus on a given trial when fixation is within
that sphere. So:
1. Would the resolution of the system allow us to know when someone is
fixating within a sphere that has a diameter of 1 degree at 1 metre
distance? Is a finer resolution possible?
2. What would be the minimum time between registering that fixation is
within the defined region and a stimulus appearing on the screen? I want
to avoid suggestions that participants may have moved their eyes in the
interval between fixation being registered and the stimulus being
presented. It might help to present the stimulus only after fixation
has been within the central sphere for a certain period of time, to
exclude the possibility that participants were sweeping their eyes
through the sphere when presentation was triggered (a sketch of such a
gating scheme follows this list).
3. Richard Wise and Zoe Woodhead would be interested in a variant of this
procedure where a stimulus remains on the screen for as long as fixation
remains within the central region. This would allow more prolonged
presentation of stimuli to, for example, patients with hemianopias whose
processing of visual inputs may be relatively slow. I know that Richard
and Zoe have toyed with presenting "sentences" in one or other visual
field by displaying one word after another at the same position. That
would be OK to do if we could know that fixation remained central
throughout.
4. Finally, I gather from the meeting last night that only one eye is
monitored. There is quite a lot of discussion in the literature about how
often the two eyes focus on the same point, and how often there is either
'crossed' or 'uncrossed' fixation. We would need to think about this, and
whether we should, for example, put a patch over the unmonitored eye.
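By way of illustration, the gating logic in points 2 and 3 might look
like the following Python sketch. Everything here is hypothetical:
get_gaze_degrees(), show() and hide() stand in for whatever the eye
tracker and stimulus software actually provide, and the timing numbers
are placeholders. At 1 m viewing distance the 0.5 degree radius
corresponds to roughly tan(0.5 deg) x 1000 mm, i.e. about 8.7 mm on the
screen.

import math
import time

FIXATION_RADIUS_DEG = 0.5   # radius of the central fixation region
DWELL_S = 0.100             # placeholder dwell time before stimulus onset

def within_fixation(gaze_xy_deg):
    """True if gaze (x, y in degrees from the fixation point) lies
    inside the central fixation circle."""
    x, y = gaze_xy_deg
    return math.hypot(x, y) <= FIXATION_RADIUS_DEG

def wait_for_stable_fixation(get_gaze_degrees, dwell_s=DWELL_S):
    """Block until gaze has stayed inside the fixation region for
    dwell_s seconds, so an eye sweeping through the region does not
    trigger presentation (point 2)."""
    entered = None
    while True:
        if within_fixation(get_gaze_degrees()):
            if entered is None:
                entered = time.monotonic()
            elif time.monotonic() - entered >= dwell_s:
                return
        else:
            entered = None
        time.sleep(0.001)   # polling interval; real rate depends on tracker

def present_while_fixating(get_gaze_degrees, show, hide):
    """Show the stimulus only while fixation remains central (point 3).
    show() and hide() are placeholders for the real display calls."""
    wait_for_stable_fixation(get_gaze_degrees)
    show()
    while within_fixation(get_gaze_degrees()):
        time.sleep(0.001)
    hide()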
I am posting this on ynic-users because we were encouraged to do so. I am
hoping for a sober response from YNiC which is realistic about what could
and could not be done, any difficulties we may run into, and the time that
would be required to implement a system like this (assuming that it is
do-able). At the moment all I need to know is what could or could not
be achieved, so that I can write that with confidence into grant
applications.
Other people with an interest in vision and MEG (Silvia, Piers, Andy Y
etc) may want to chip in so that YNiC can get a fuller understanding of
what people would like to have in the way of integrated fixation
monitoring and stimulus presentation.
Andy Ellis
19 Oct 2007
October 18th 4pm YNiC Open Plan
1. This autumn in YNiC - the programme
2. What is new and what has changed
3. Booking and the database
4. Ethics and governance
5. New tools
6. The support system explained
7. What is happening in YNiC - a look at the future
8. A discussion about users' needs and a new forum for getting
users involved
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
You will probably be pleased to hear that MEG is now fully operational
again.
We have also taken the time to get the system recalibrated, the eye
trackers installed, and the mechanics of the bed completely overhauled.
The noise level is lower as the source of the small electrical pulse
artefacts has been identified and removed.
All MEG channels are working.
The original problem was due to a faulty interface between the fibre
optics taking data out of the shielded room and the data acquisition
rack. The board has been replaced.
Bookings can now be taken for MEG.
You may also be pleased to hear that MRI has had a routine maintenance
and all is well.
happy scanning!
--
Gary Green
York Neuroimaging Centre
>
>
>Please bring this conference to the attention of colleagues who may
>be interested:
>
>BPS Division of Neuropsychology
>
>
>Visual Dysfunction and Cognition in Childhood
>
>This day conference will increase knowledge of the psychological,
>neurological and neurobiological systems involved in vision and in
>visual defects of the brain and peripheral optic pathways. It will
>also increase knowledge of brain-cognition relationships and the
>cognitive patterns associated with visual defects, including spatial
>cognition and severe visual impairment. The conference introduces
>the latest theory and practice in these areas, including clinical
>assessment and interpretation of cognitive development and disorders
>in children with severe visual impairment.
>
>Date: 30 October 2007
>
>
>Who should attend?
>
>This event is open to clinical or educational practitioners and
>researchers in child neuropsychology, clinical child psychology,
>paediatric neurology, occupational therapy and related neuroscience
>disciplines.
>
>
>Programme
>
>09:30 Registration & Refreshments
>
>10:00 Two visual systems: agnosia, optic ataxia & neglect
>
>Professor David Milner (Professor of Cognitive Neuroscience,
>University of Durham): Co-author of Sight Unseen (winner of the BPS
>Book Award)
>
>11:15 Refreshments
>
>11:30 Neurobiological models of visual-spatial deficits in childhood
>
>Professor Janette Atkinson (Director of Visual Development Unit:
>University College London/ Oxford University): Author of The
>Developing Visual Brain
>
>12:45 Lunch
>
>14:00 Visual impairment and cognition: neurodevelopmental issues
>
>Dr Naomi Dale (Head of Psychology (Neurodisability), Great Ormond
>Street Hospital/ UCL Institute of Child Health): Co-lead in
>developing the governmental Early Support Developmental Journal for
>parents and children with VI
>
>14:45 Refreshments
>
>15:00 Visual impairment and cognition: clinical assessment and
>interpretation
>
>Dr Naomi Dale (Head of Psychology (Neurodisability), Great Ormond
>Street Hospital/ UCL Institute of Child Health): Co-lead in
>developing the governmental Early Support Developmental Journal for
>parents and children with VI
>
>15:45 Concluding Remarks
>
>
>Speakers
>
>Professor David Milner (University of Durham): Co-author of Sight
>Unseen (winner of the BPS Book Award)
>
>Professor Janette Atkinson (University College London/ Oxford
>University): Author of The Developing Visual Brain
>
>Dr Naomi Dale (Great Ormond Street Hospital/ UCL Institute of Child
>Health): Co-lead in developing the governmental Early Support
>Developmental Journal for parents and children with VI.
>Location: UCL Institute of Child Health, London
>
>For further details see:
>http://www.ich.ucl.ac.uk/education/short_courses/courses/2S18
>
>---------------------------------------------
>COGNEURO archives and subscription manager can be found at
>http://www.jiscmail.ac.uk/lists/COGNEURO.HTML
>---------------------------------------------
>List owner's email address: COGNEURO-request@jiscmail.ac.uk
--
Professor Andy Ellis
Department of Psychology
University of York
York YO10 5DD
England
Tel. +44 (0)1904 433140
http://www.york.ac.uk/depts/psych/www/people/biogs/awe1.html