Dear list,
If anyone else is considering using Freesurfer
(http://surfer.nmr.mgh.harvard.edu/) to reconstruct cortical surface
models from T1 volumes collected at YNiC, you might be interested in the
following information.
By default, if you feed a standard structural YNiC T1 into freesurfer it
will most likely produce spurious brain extractions and have great
difficulty (read: 40+ hours of processing!) reconstructing the cortical
surfaces. After exploring these problems for a while, I discovered two
issues which require some pre-processing to solve. The first is that
freesurfer does not like images with a FOV larger than 256mm in any
dimension. Standard YNiC T1 structurals have a FOV measuring
176x290x290mm (176x256x256 slices of 1 x 1.13 x 1.13mm).
In addition to this, the voxel dimensions are misinterpreted by
freesurfer when it attempts to resample the volume to 1x1x1mm voxels,
which it does as a standard part of the importing process. They are
incorrectly interpreted as 1.13 x 1.13 x 1mm! Needless to say, this
leads to all kinds of problems, resulting in spatially distorted output
surfaces.
Therefore, I would recommend that the following steps are taken before
trying to carry out any processing with freesurfer:
1) Manually resample the T1 to 1x1x1mm and force the correct dimensions
to be used, with the 'mri_convert' command (part of the freesurfer package):
mri_convert -iis 0.9999 -ijs 1.1328 -iks 1.1328 -ois 1 -ojs 1 -oks 1
-oic 176 -ojc 290 -okc 290 T1.nii.gz T1_1mm.nii.gz
Here, we specify the input voxel sizes (1x1.13x1.13mm), the output voxel
sizes (1x1x1mm), and I have also specified the number of output slices,
which is important because otherwise mri_convert will truncate them to a
maximum of 256.
N.B.: This command _should_ be identical for all standard YNIC T1
structurals. However, it is always important to check that your slice
counts and sizes are the same as the example given here, otherwise all
subsequent processing will be compromised.
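If you would rather script that check than eyeball it in fslview, something
along the following lines works. This is just a sketch: it assumes the
nibabel Python package is available (it is not part of FSL or freesurfer),
and the file names are the examples from this email.

# Quick sanity check of slice counts and voxel sizes before and after
# mri_convert. Assumes nibabel is installed; file names are examples.
import nibabel as nib

def report(path):
    img = nib.load(path)
    print(path, "dims:", img.shape,
          "voxel sizes (mm):", img.header.get_zooms()[:3])

report("T1.nii.gz")      # expect (176, 256, 256) and roughly (1.0, 1.13, 1.13)
report("T1_1mm.nii.gz")  # expect (176, 290, 290) and (1.0, 1.0, 1.0)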
2) Remove unnecessary slices from outside the head and the neck so that
the final number of slices is less than or equal to 256 in all
dimensions with avwroi (from fsl):
avwroi T1_1mm T1_1mm_reslice x_start x_size y_start y_size z_start z_size
where T1_1mm is the resampled MRI (no .nii.gz extension), T1_1mm_reslice
is the output volume (again, no .nii.gz extension), and the _start and
_size parameters specify the starting slice and the number of slices to
include in each dimension (use fslview to find the slice numbers; note
that avwroi counts slices from zero). For example, to keep all 176 X
slices and remove the first and last 17 slices in the Y and Z
dimensions, we would run:
avwroi T1_1mm T1_1mm_reslice 0 176 17 256 17 256
which would leave us with a FOV of 176x256x256mm, compatible with
freesurfer. Be careful that only redundant slices are removed!
Your pre-processed volume should now consist of no more than 256 slices
in each dimension, with 1x1x1mm voxels. This can be safely
processed with freesurfer using:
recon-all -i T1_1mm_reslice.nii.gz -subjid <subject id> -autorecon-all
Finally, make sure you apply all of this to a copy of the T1, as you
won't be able to modify the original in the mridata folder, and even if
you could, you really shouldn't!
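If you prefer to run the whole thing in one go, here is a rough Python
wrapper around the commands above. Treat it as a sketch: the paths, the
subject ID and the crop values are only the examples from this email, so
check them against your own data before using it.

# Rough wrapper around the preprocessing steps described above.
import shutil
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 0) Work on a copy, not the original in the mridata folder (example path).
shutil.copy("/path/to/mridata/T1.nii.gz", "T1.nii.gz")

# 1) Resample to 1x1x1mm, forcing the correct voxel sizes and slice counts.
run(["mri_convert", "-iis", "0.9999", "-ijs", "1.1328", "-iks", "1.1328",
     "-ois", "1", "-ojs", "1", "-oks", "1",
     "-oic", "176", "-ojc", "290", "-okc", "290",
     "T1.nii.gz", "T1_1mm.nii.gz"])

# 2) Crop to at most 256 slices in every dimension (zero-based starts).
run(["avwroi", "T1_1mm", "T1_1mm_reslice", "0", "176", "17", "256", "17", "256"])

# 3) Hand the cropped, resampled volume to freesurfer (example subject ID).
run(["recon-all", "-i", "T1_1mm_reslice.nii.gz",
     "-subjid", "subject01", "-autorecon-all"])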
I hope this will save someone a bit of time and a lot of headaches!
p.s. if you want to install freesurfer in your home folder on YNIC
machines, follow these steps:
1) Download the freesurfer PowerPC distribution and register for a
licence file on their website: http://surfer.nmr.mgh.harvard.edu/
2) Double-click on the image file, and a window will open which contains
a single package file.
3) Right-click on the package file and choose 'Show package contents'
4) In the new window that opens, navigate into the 'Contents' folder,
and drag the Archive.pax.gz to your desktop.
5) Double click on the archive file and wait for it to decompress (this
will take a long time)...
6) You should now have a freesurfer folder on your desktop, which you
can drag into your home folder. Now remove the archive files on your
desktop as they are quite large and are no longer needed.
7) Open a text editor and copy&paste the following:
export FREESURFER_HOME=~/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
and save the file as 'freesurfer-config' in your home folder. Note that
this script presumes that you have moved the freesurfer folder into your
home folder.
8) Copy your licence file into your freesurfer folder.
9) In an X terminal, type 'source ~/freesurfer-config'. You can now use
freesurfer commands. Make sure you run this command in X11 each time you
log in to have access to freesurfer.
Padraig.
--
Pádraig Kitterick
Graduate Student
Department of Psychology
University of York
Heslington
York YO10 5DD
UK
Tel: +44 (0) 1904 43 3170
Email: p.kitterick(a)psych.york.ac.uk
Dear users
We are, fortunately, increasingly being asked to provide clinical scanning
slots for the NHS and for Lodestone. This has the desirable effect of
providing income. It has the disadvantage that there is less flexibility
in being able to offer slots for research scanning, especially when the
research is part of a pilot project that is unfunded.
The agreement with Lodestone is that they will attempt to use free slots
left after the Thursday noon cut-off for booking by research PIs. Please
book your time for the week ahead before noon on Thursday.
Lodestone will then take the remaining slots for single case work. The
only exception to this is Lodestone NHS work which will continue to
occupy half day slots so that block booking of patients can be used
several weeks ahead of time.
There is little pressure on scanners in the evenings (at the moment). If
you are a trained operator then please consider using evening slots for
your research. If you wish to become trained in operating MRI then
please let us know so that we can start a training course for those
interested. The advantage is that evening scanning is cheaper, at the
moment.
Gary
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
Dear ynic-users
At the Thursday evening sessions this term we have been describing the
changes in YNiC and building up to the first official release of YNiC
software and documentation. We are very pleased to be able to announce
that the release date will be the 8th of January 2008. This is earlier
than we had originally planned and reflects the enormous amount of
effort that everyone has put into getting the software and documents
into a complete package. This could not have been done without the
co-operation of the users who carried out the beta testing. Further
information about the complete contents of the first release will be
circulated next week.
We had initially planned to hold masterclasses this term to go through
current software and applications. But as we can now confidently say
that the new software will be released early in the new year, it makes
sense to hold the masterclasses after that release so that everyone can
take full advantage of all the new features and the more complete help
documentation. Therefore we would like to postpone the masterclasses.
Instead we would like to hold the roundtable discussion on the use of
the eye-tracker. We also have a new user who would like to make a
project presentation.
We would like to suggest that the Thursday sessions now look like this
for the rest of term:
Thursday 29th Nov 2007 : Project presentation by Dr. Srimant Tripathy
from Bradford
Thursday 6th December : Roundtable discussion on use of the eye-trackers
Thursday 13th December : Project presentation by Gary Green
Thursday 20th December : Christmas drinks
and then
Thursday 10th January 2008 : The new software and documentation
Thursday 17th Jan : Masterclass on the new visualisation software
Thursday 24th Jan : Masterclass on MEG analysis techniques
Thursday 31st Jan : Masterclass on MRI analysis techniques
We then propose that we start the roundtable discussions about what
should be in the second release.
Comments welcome.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
November 22nd
1. Update on analysis tools at YNiC
2. MRI
FSL
2.1 Features: Atlases, Bedpostx and FIRST, avw2fsl
2.2 Scripts and batch processing (python demo)
2.3 Higher level analyses (?)
2.4 Cluster Feat and Cluster Other
OTHER (Freesurfer, SPM, mrVista)
3. MEG tools
3.1. Beamforming and permutation statistics
4. Visualisation tools at YNiC
4.1 FSLVIEW
4.2 YNICDV3D
5. Support & Documentation at YNiC
6. Users' requests and what we can do better
All welcome, refreshments will be provided afterwards.
--
Will Woods
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
Hi Laura,
Rapid event-related fMRI has many advantages for cognitive neuroscience
research. However, the main problem with this technique is that the
signal is very small compared to a block design. One way to increase
the signal-to-noise ratio is to increase the number of trials. This
typically means using a short ISI (otherwise your subjects could be in
the scanner for a long time!). It is also good to vary the ISI to avoid
expectation effects (i.e the subject predicting when the next event is
likely to occur). However, the problem with a short ISI is that the
response to one stimulus will likely overlap with the response to the
next. This is not a problem if the BOLD response is linear (i.e. the
response to two successive stimuli is the same as adding the responses to
two independent stimuli with an appropriate temporal offset). However,
a number of studies have found that there are significant
non-linearities when the ISI is less than ~5 sec (e.g. Dale and Buckner,
1997; Huettel and McCarthy, 2000). So, varying the ISI can have positive
and negative effects on the BOLD signal. I haven't used the programs
that Claire and Silvia are using, but I assume they are trying to find
an optimum balance between these effects.
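To make the linearity point concrete, here is a toy sketch in Python. It is
not the optseq algorithm, and the HRF shape, ISI ranges and trial counts are
made-up values for illustration only; it simply builds a predicted BOLD time
course by summing shifted responses (the linearity assumption) and compares
a crude estimation-efficiency measure for a fixed versus a jittered schedule.

# Toy illustration of the linearity assumption and of comparing designs.
import numpy as np
from scipy.stats import gamma

tr = 1.0                                    # sampling interval in seconds (assumed)
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # simple double-gamma HRF
hrf /= hrf.max()

def design_column(onsets, n_scans):
    """Sum of HRFs shifted to each onset, i.e. the linear superposition."""
    onsets = np.asarray(onsets)
    onsets = onsets[onsets < (n_scans - 1) * tr]
    s = np.zeros(n_scans)
    s[(onsets / tr).astype(int)] = 1.0
    return np.convolve(s, hrf)[:n_scans]

n_scans = 300
rng = np.random.default_rng(0)
fixed_onsets = np.arange(0, 200, 4.0)                        # fixed 4 s ISI
jittered_onsets = np.cumsum(rng.uniform(2.0, 6.0, size=50))  # jittered 2-6 s ISI

for name, onsets in [("fixed", fixed_onsets), ("jittered", jittered_onsets)]:
    x = design_column(onsets, n_scans)
    X = np.column_stack([x, np.ones(n_scans)])               # regressor + baseline
    efficiency = 1.0 / np.trace(np.linalg.inv(X.T @ X))
    print(f"{name} ISI: estimation efficiency = {efficiency:.3f}")

Programs like optseq search over very many candidate schedules and pick
ones that score well on efficiency-type measures of this general kind.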
Users - please feel free to correct or comment!
Tim
Laura Lee wrote:
> Hi MRI-support,
>
> I'm struggling along trying to work out how to create a 'stochastic'
> event-related design for fMRI. Claire Moody has passed on a program
> that searches for the optimum stimulus schedule (she & Silvia used it
> for their last project). I've read over all the bumpf but am still
> quite confused by all the new concepts. I think I know roughly what I
> want but then there are some parameters I am unsure about and don't
> really understand the implications of the settings. I'd be really
> grateful if you could give me a hand.
>
> This is the programme I downloaded...
> http://surfer.nmr.mgh.harvard.edu/optseq
> And there's a pretty comprehensive help page here...
> http://surfer.nmr.mgh.harvard.edu/optseq/optseq2.help.txt
>
> Thanks, Laura
>
--
Dr Tim Andrews
Department of Psychology
University of York
York, YO10 5DD
UK
Tel: 44-1904-434356
Fax: 44-1904-433181
http://www-users.york.ac.uk/~ta505/
http://www.york.ac.uk/depts/psych/www/admissions/cns/
Dear all,
There is another interesting paper in the 'Articles of interest' section of:
https://www.ynic.york.ac.uk/doc/Miscellaneous
This one is on combined MRI / MEG, which may or may not be of interest to you.
If you have any papers to add to this section, forward them to me and
I'll make them globally available.
Thanks,
Michael
--
Dr Michael Simpson
Science Liaison Officer
York Neuroimaging Centre
Innovation Way
York
YO10 5DG
Tel: 01904 567614
Web: http://www.ynic.york.ac.uk
Elena Solesio has been visiting YNiC from the imaging centre in Madrid.
Today, Thursday lunchtime, at 1:00pm, Elena will be giving a talk in
YNiC open plan on some of the work she's been doing, entitled:
Retroactive Interference Modulates Magnetic Brain Activity In Normal
Aging: A Magnetoencephalography Study.
All are welcome to come along.
-----------
At 4pm today, there is a YNiC update seminar
1. update on MEG
2. recap of MEG facilities at YNiC
3. what is new and what has changed
4. sensor space and source space
5. beamforming update
6. minimum norm and dipoles
7. Users' requests and what we can do better
All welcome
Refreshments will be served after the seminar
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
Hello all,
Elena Solesio has been visiting YNiC from the imaging centre in Madrid.
This Thursday lunchtime, at 1:00pm, Elena will be giving a talk in YNiC
open plan on some of the work she's been doing, entitled:
Retroactive Interference Modulates Magnetic Brain Activity In Normal
Aging: A Magnetoencephalography Study.
As ever all are welcome to come along.
Thanks,
Sam
Dear All,
I would really like to see a Thursday session on connectivity -
dynamic causal modelling, diffusion tensor imaging, etc. Is this
interesting to anyone else?
Best,
Johan Carlin
Dear All,
Recently, we have had some problems with registration using FSL. Some
of the registrations, particularly those to the standard brain, appear
to be quite poor. Jodie, Claire and Andre have done some work on this
recently and found that the best combination of registrations in FSL is:
Initial Structural: 15 degree search, 6 DOF
Main Structural: 15 degree search, 6 DOF
Standard Brain: 15 degree search, 12 DOF
If you have any views on this, please let us know. We have also
noticed that the brain extraction tool, which is run by default in FEAT,
has not always been removing the skull from the EPI images. If you have
been doing FEAT analyses, I would be grateful if you could let us know
whether BET has been working for you. If it hasn't, it could affect your
registrations. A related issue is that although structural brain images
are automatically skull stripped using BET, FLAIR images are not.
However, I believe that BET may be automatically applied to FLAIR images
in the near future. :-)
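For anyone scripting registration outside the FEAT GUI, one possible reading
of the settings above as explicit FLIRT calls is sketched below. The image
names are placeholders, and you should check the flags against your own FSL
version before relying on it.

# Sketch of the recommended search range / DOF settings as FLIRT calls.
# Image names are placeholders; adjust to your own files.
import subprocess

def flirt(infile, ref, out, dof, search=15):
    """Run FLIRT with a +/- `search` degree search range and the given DOF."""
    cmd = ["flirt", "-in", infile, "-ref", ref, "-out", out,
           "-omat", out + ".mat", "-dof", str(dof),
           "-searchrx", str(-search), str(search),
           "-searchry", str(-search), str(search),
           "-searchrz", str(-search), str(search)]
    subprocess.run(cmd, check=True)

# Functional -> initial structural: 15 degree search, 6 DOF
flirt("example_func", "initial_struct_brain", "func2initstruct", dof=6)
# Initial structural -> main structural: 15 degree search, 6 DOF
flirt("initial_struct_brain", "main_struct_brain", "initstruct2struct", dof=6)
# Main structural -> standard brain: 15 degree search, 12 DOF
flirt("main_struct_brain", "standard_brain", "struct2standard", dof=12)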
Cheers,
Tim
--
Dr Tim Andrews
Department of Psychology
University of York
York, YO10 5DD
UK
Tel: 44-1904-434356
Fax: 44-1904-433181
http://www-users.york.ac.uk/~ta505/
http://www.york.ac.uk/depts/psych/www/admissions/cns/
"Off-line consolidation during sleep and wakefulness"
Dr Penny Lewis
Psychology Department, University of Liverpool
Date: 16:00, Thursday 01 November 2007
Location: YNiC Open Plan
Host: Gary Green
Programme: York Neuroimaging Centre Seminars
All welcome
Refreshments will be served after the seminar
Penny Lewis will be in YNiC this afternoon if individuals would like to
meet her.
We will be taking Penny out to dinner after the seminar. If you would
like to join us for dinner please contact Gary Green.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
Dear Users
Yesterday we had an extremely useful brainstorming session.
The discussion brought to light some issues about using beamforming
which have not yet been reported in the literature. As these
observations might affect other users, we thought it would be of
interest to bring them to the attention of all users.
Uzma Urooj, supervised by Andy Ellis with Michael Simpson as SLO, had
identified two particularly puzzling issues when using the beamforming
approach to (a) localise time locked activity and (b) create virtual
electrodes.
(a) The first observation is that if one constructs an NAI map from a
beamformer, the statistically significant blobs do not necessarily align
with the largest amplitude averaged response seen with virtual
electrodes. What the user did was to perform a standard beamforming
analysis of some data and localise the most significant blob. They then
placed virtual electrodes at this point and around it. At each of these
virtual electrode positions they calculated the average response across
epochs. The largest amplitude evoked response was not at the location of
the beamformer blob. This observation has also been replicated in
simulation studies by Mark Hymers, where he placed dipoles at known
locations in a model brain which had realistic background noise.
The reason this occurs lies in the way the beamformer works. The
beamformer is a set of spatial filters which are designed to measure the
brain activity (the NAI) from each brain location in turn. The filters
are constructed to ensure that activity only comes from one location at
a time; all other locations are suppressed. This is done by computing
the power within the MEG signal and ensuring that this is measured with
a gain of one from a specified location and a gain of zero from all
other positions. This is the key point - the beamformer is specified in
terms of MEG power. The power in an MEG signal can come from two forms
of oscillation: those that are time locked to the stimulus, and those
power changes that are produced by a stimulus but are not time locked in
terms of the oscillatory phase (often called the stimulus induced
power). Thus the blobs created by the application of the beamformer
programme are in terms of total power (time locked and induced). When
the average virtual electrodes were computed, these measures were only
of time locked activity. The user who brought this to our attention has
made an important observation: within a specific region, brain responses
that are time locked may be at a different location to induced power
changes. As we do not know the relationship between fMRI haemodynamic
changes and MEG oscillations (time locked vs induced), it may be that
beamformer results will not always align with fMRI results either. This
does not mean that the beamformer is wrong in some way. It means that we
have to be very careful in how we interpret beamformer output in terms
of blobs and virtual electrodes. There is clearly a need to be able to
distinguish between time locked and induced responses. To this end, Will
Woods has been working on an 'evoked' beamformer.
implications for users of the beamformer
NAI statistical maps reflect total power changes within a band and can
contain both induced and time locked responses, just induced power
changes, or just time locked responses. Only in the last case will the
virtual electrode averages align with the NAI maps; in the other two
cases, NAI maps may not align with VE maps. The issue for users is what
form of response should be used to test a specific hypothesis. This will
be down to the individual user, who will have a specified model for
their particular experiment in terms of how stimuli may be encoded.
NAI maps are estimates of statistical changes in total power; these
maps are related to the mean and variance of the power in the
oscillations produced by both induced and time locked responses.
Averages of VEs provide estimates of the average time locked response
only.
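As a toy illustration of this distinction (simulated data with made-up
parameters, not output from the YNiC beamformer), the following compares
the power of the across-epoch average, which keeps only the time locked
part, with the average single-epoch power, which also contains the
induced part:

# Time locked vs total power at a single simulated virtual electrode.
import numpy as np

rng = np.random.default_rng(1)
fs, n_epochs = 250, 100                      # sample rate (Hz) and epoch count
t = np.arange(0, 1.0, 1 / fs)

epochs = []
for _ in range(n_epochs):
    evoked = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.3) ** 2) / 0.01)
    induced = (np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
               * np.exp(-((t - 0.6) ** 2) / 0.01))   # random phase each epoch
    epochs.append(evoked + induced + 0.2 * rng.standard_normal(t.size))
epochs = np.asarray(epochs)

power_of_average = np.mean(epochs.mean(axis=0) ** 2)   # time locked only
mean_epoch_power = np.mean(epochs ** 2)                # time locked + induced
print("power of the averaged VE (time locked only):", round(power_of_average, 3))
print("mean single-epoch power (time locked + induced):", round(mean_epoch_power, 3))

Because the induced burst has a random phase on each epoch, it largely
cancels in the average but not in the single-epoch power, which is why NAI
blobs (total power) need not sit at the location of the largest averaged VE
response.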
(b) The second interesting observation made by Uzma related to the
construction of time series of virtual electrode activity for every epoch.
Uzma used a specific time window (say 200 milliseconds) to estimate
weights for a beamformer, and then used these weights to estimate the
virtual electrode time series for every epoch over a much longer time
window than the original 200 milliseconds. The virtual electrode
output (three orthogonal components) was manipulated to create an
estimate of power as a function of time. This was done by taking the
square of each VE component and summing them, giving a measure of
the power as a function of time for each epoch. Uzma then averaged these
power time series, producing the average power change as a function of
time. Uzma made an observation that has not been reported before: beyond
the end of the original 200 milliseconds, a large slow change in power
was observed. If this was all repeated with a window that was 1000
milliseconds long, the slow change in power was instead observed when
the power was estimated for times greater than 1 second.
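In code, the power-with-time computation described above is roughly the
following. This is only a sketch with random placeholder numbers; the array
shape is an assumption about how the virtual electrode output is arranged.

# Sketch of the average power-as-a-function-of-time computation, assuming
# virtual electrode output with shape (epochs, 3 components, samples).
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n_components, n_samples = 80, 3, 500
ve = rng.standard_normal((n_epochs, n_components, n_samples))  # placeholder data

power_per_epoch = np.sum(ve ** 2, axis=1)  # sum of squared components, per epoch
mean_power = power_per_epoch.mean(axis=0)  # average power as a function of time
# Following the discussion below, values of mean_power for samples beyond the
# window used to estimate the beamformer weights should be treated with caution.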
This is a very important observation. The current thinking is that this
is a property of the beamformer, and it might arise as follows. The
beamformer weights are estimated from the covariance matrix of the
original MEG data within the specified window (say 200 milliseconds).
Covariance is related to power within a signal, which is why it is
stated that a set of beamformer weights is about detecting the power
from a specific location and not any other. If a window is only 200
milliseconds long, a very poor estimate will be made of power below 5 Hz
(1/0.2 s). Thus this beamformer will not be able to deal correctly with
slow changes in power, and estimates beyond the original window may be
inaccurate.
implications for users of the beamformer
When using a beamformer to estimate changes in power (especially
induced power) over time, observations are only reliable when made
within the time constraints of the original window length that was used
to construct the beamformer. The problem with this is that a beamformer
works best if the window length matches the duration of the induced
power change, especially when performing a statistical comparison
between two conditions. If long windows are used and the induced power
changes only occupy a small fraction of the window length, then
statistical power will be lost. In this case it would be best to use
moving windows, as in the Pammer and Cornelissen paper.
----------------
I believe this demonstrates that brainstorming sessions are quite
important activities. Some people might be concerned about bringing
problems to the attention of other users, but as the above demonstrates,
it can be invaluable as it can highlight problems that are not fully
appreciated.
Many thanks to the user who brought this to our attention.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
October 25th 4pm YNiC Open Plan
1. Project presentation: Michael Simpson
2. Project presentation: Laura Lee
3. Work in progress - a look at an ongoing project
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
I thought I would circulate the following talk to YNiC users, as it
could potentially be of interest to many of you. It is a talk
presenting results of an fMRI experiment after extensive training
with different kinds of sounds.
Speech-like processing of nonspeech sounds following extensive
categorization training.
Dr James Keidel, University of Manchester
Monday, Oct 22nd, 2007, 12.30pm, C108, Dept of Psychology
Silvia
Silvia Gennari
Department of Psychology
University of York
Heslington, York
YO10 5DD
United Kingdom
FIXATION-CONTINGENT PRESENTATION OF STIMULI IN MEG
Laura Barca and I ran an MEG experiment in which participants were
instructed to fixate on a central point on a screen positioned 1 metre in
front of them. A stimulus (e.g. a word) was then presented very briefly
to the left or right of the fixation point. The aim was to track the
processing through the brain of words presented in the right visual field,
projecting directly to the left (language) hemisphere, or in the left
visual field, projecting first to the right (non-language) hemisphere and
presumably needing to be transferred across to the left hemisphere via the
corpus callosum for processing.
In the absence of eye movement monitoring we had to trust our participants
to fixate centrally, and to rely on the brief presentations to assert that
they could not have re-fixated in the time that the stimulus was on the
screen. We can also point to differences in the patterns of brain
activation we observed as indicating that we were successful on most
trials of the experiment in controlling presentation as we wanted to.
There are, however, people out there who could end up reviewing grant
applications for further work who get very animated about the need for
accurate fixation control in this kind of experiment. There are also two
strands of future research that may need more accurate monitoring. One is
work I would like to do following up Lisa Henderson's MSc project
comparing the responses of dyslexic and non-dyslexic students to words
presented in the left and right visual fields. The other is a project by
a PhD student of Richard Wise at Imperial College London, Zoe Woodhead (a
former undergraduate of ours), who wants to use the York MEG system to look
at word recognition in patients with hemianopias following occipital
strokes. Both dyslexic and hemianopic participants may be assumed to have
greater difficulty controlling their fixation than 'normal' participants,
and good fixation control would be especially helpful for those studies.
It would also help with studies I would like to do in which words will be
presented centrally on the assumption that certain letters fall to the
left of fixation while other letters fall to the right.
What would be nice to have, then, is a way of ensuring that stimuli are
only displayed on the screen when a participant is fixating on, or close
to, a central fixation point. We normally offset the inner edge of our
stimuli by 1.5 degrees, so it would be good to define a central fixation
sphere with a radius of 0.5 degrees at a distance of 1 m from the participant,
and only to present the stimulus on a given trial when fixation is within
that sphere. So:
1. Would the resolution of the system allow us to know when someone is
fixating within a sphere that has a diameter of 1 degree at 1 metre
distance? Is a finer resolution possible?
2. What would be the minimum time between registering that fixation is
within the defined region and a stimulus appearing on the screen? I want
to avoid suggestions that participants may have moved their eyes in the
interval between fixation being registered and the stimulus being
presented. It might help to present the stimulus only after fixation has
been within the central sphere for a certain period of time, in order to
exclude the possibility that participants were sweeping their eyes
through the sphere when presentation was triggered (a rough sketch of
this kind of gating logic follows the list below).
3. Richard Wise and Zoe Woodhead would be interested in a variant of this
procedure where a stimulus remains on the screen for as long as fixation
remains within the central region. This would allow more prolonged
presentation of stimuli to, for example, patients with hemianopias whose
processing of visual inputs may be relatively slow. I know that Richard
and Zoe have toyed with presenting "sentences" in one or other visual
field by displaying one word after another at the same position. That
would be OK to do if we could know that fixation remained central
throughout.
4. Finally, I gather from the meeting last night that only one eye is
monitored. There is quite a lot of discussion in the literature about how
often the two eyes focus on the same point, and how often there is either
'crossed' or 'uncrossed' fixation. We would need to think about this, and
whether we should, for example, put a patch over the unmonitored eye.
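For what it is worth, the gating logic in points 2 and 3 is conceptually
simple. Here is a rough Python sketch; get_gaze_deg(), show() and hide() are
hypothetical stand-ins for whatever the eye tracker and stimulus software
actually provide, and the dwell time is an assumed value.

# Rough sketch of fixation-contingent presentation (points 2 and 3 above).
# get_gaze_deg() is hypothetical: it should return the current gaze position
# of the monitored eye in degrees relative to the central fixation point.
import math
import time

FIXATION_RADIUS_DEG = 0.5   # central window, as suggested above
DWELL_S = 0.1               # required dwell time before onset (assumed value)

def within_fixation(get_gaze_deg):
    x, y = get_gaze_deg()
    return math.hypot(x, y) <= FIXATION_RADIUS_DEG

def wait_for_stable_fixation(get_gaze_deg):
    """Trigger only after gaze has stayed inside the window for DWELL_S, so a
    saccade sweeping through the window cannot trigger presentation."""
    entered = None
    while True:
        if within_fixation(get_gaze_deg):
            if entered is None:
                entered = time.monotonic()
            if time.monotonic() - entered >= DWELL_S:
                return
        else:
            entered = None

def present_while_fixating(get_gaze_deg, show, hide):
    """Point 3: keep the stimulus on screen only while fixation holds."""
    wait_for_stable_fixation(get_gaze_deg)
    show()
    while within_fixation(get_gaze_deg):
        pass
    hide()

The real questions for YNiC are, of course, the sampling rate, accuracy and
latency numbers that would go into a loop like this, rather than the loop
itself.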
I am posting this on ynic-users because we were encouraged to do so. I am
hoping for a sober response from YNiC which is realistic about what could
and could not be done, any difficulties we may run into, and the time that
would be required to implement a system like this (assuming that it is
do-able). At the moment all I need to know is what could or could not be
achieved so that I can write it with confidence into grant applications.
Other people with an interest in vision and MEG (Silvia, Piers, Andy Y
etc) may want to chip in so that YNiC can get a fuller understanding of
what people would like to have in the way of integrated fixation
monitoring and stimulus presentation.
Andy Ellis
19 Oct 2007
October 18th 4pm YNiC Open Plan
1. This autumn in YNiC - the programme
2. What is new and what has changed
3. Booking and the database
4. Ethics and governance
5. New tools
6. The support system explained
7. What is happening in YNiC - a look at the future
8. A discussion about users' needs and a new forum for getting
users involved
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
You will probably be pleased to hear that MEG is now fully operational
again.
We have also taken the time to get the system recalibrated, the eye
trackers installed, and the mechanics of the bed completely overhauled.
The noise level is lower, as the source of the small electrical pulse
artefacts has been identified and removed.
All MEG channels are working.
The original problem was due to a faulty interface between the fibre
optics taking data out of the shielded room and the data acquisition
rack. The board has been replaced.
Bookings can now be taken for MEG.
You may also be pleased to hear that MRI has had a routine maintenance
and all is well.
happy scanning!
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
>
>
>Please bring this conference to the attention of colleagues who may
>be interested-
>
>BPS Division of Neuropsychology
>
>
>Visual Dysfunction and Cognition in Childhood
>
>This day conference will increase knowledge of the psychological,
>neurological and neurobiological systems involved in vision and in
>visual defects of the brain and peripheral optic pathways. It will also
>increase knowledge of brain:cognition relationships and of cognitive
>patterns associated with visual defects, including spatial
>cognition, and severe visual impairment. The conference introduces
>the latest theory and practice in these areas, including clinical
>assessment and interpretation of cognitive development and disorders
>in children with severe visual impairment.
>
>Date: 30 October 2007
>
>
>Who should attend?
>
>This event is open to clinical or educational practitioners and
>researchers in child neuropsychology, clinical child psychology,
>paediatrics, neurology, occupational therapy and related
>neuroscience disciplines. Relevant to practitioners and researchers.
>
>
>Programme
>
>09:30 Registration & Refreshments
>
>10:00 Two visual systems: agnosia, optic ataxia & neglect
>
>Professor David Milner (Professor of Cognitive Neuroscience,
>University of Durham): Co-author of Sight Unseen (winner of the BPS
>Book Award)
>
>11:15 Refreshments
>
>11:30 Neurobiological models of visual-spatial deficits in childhood
>
>Professor Janette Atkinson (Director of Visual Development Unit:
>University College London/ Oxford University): Author of The
>Developing Visual Brain
>
>12:45 Lunch
>
>14:00 Visual impairment and cognition: neurodevelopmental issues
>
>Dr Naomi Dale (Head of Psychology (Neurodisability), Great Ormond
>Street Hospital/ UCL Institute of Child Health): Co-lead in
>developing the governmental Early Support Developmental Journal for
>parents and children with VI
>
>14:45 Refreshments
>
>15:00 Visual impairment and cognition: clinical assessment and
>interpretation
>
>Dr Naomi Dale (Head of Psychology (Neurodisability), Great Ormond
>Street Hospital/ UCL Institute of Child Health): Co-lead in
>developing the governmental Early Support Developmental Journal for
>parents and children with VI
>
>15:45 Concluding Remarks
>
>
>Speakers
>
>Professor David Milner (University of Durham): Co-author of Sight
>Unseen (winner of the BPS Book Award)
>
>Professor Janette Atkinson (University College London/ Oxford
>University): Author of The Developing Visual Brain
>
>Dr Naomi Dale (Great Ormond Street Hospital/ UCL Institute of Child
>Health): Co-lead in developing the governmental Early Support
>Developmental Journal for parents and children with VI.
>Location Directions
>UCL Institute of Child Health, London
>
>For further details click here-
>http://www.ich.ucl.ac.uk/education/short_courses/courses/2S18
>
>---------------------------------------------
>COGNEURO archives and subscription manager can be found at
>http://www.jiscmail.ac.uk/lists/COGNEURO.HTML
>---------------------------------------------
>List owner's email address: COGNEURO-request(a)jiscmail.ac.uk
--
Professor Andy Ellis
Department of Psychology
University of York
York YO10 5DD
England
Tel. +44 (0)1904 433140
http://www.york.ac.uk/depts/psych/www/people/biogs/awe1.html
Attached is the 'draft' programme for YNiC Thursday evening seminars for
the autumn and early winter.
The current plan is that after Christmas the seminars would mainly
consist of invited speakers, internal speakers and journal club.
The third term would be devoted to people giving talks about work that
has been or is about to be published from YNiC projects.
ALL comments are welcome. Your ideas are absolutely essential to running
a Centre that will make imaging research possible and fun for all.
Please send your comments to me, Sam Johnson
(science.manager(a)ynic.york.ac.uk) or Tony Morland
(a.morland(a)psych.york.ac.uk)
Gary
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
Today we have two new project presentations (Thursday the
20th at 4pm in YNiC Open Plan)
Project presentations will be made by
Professor P. O'Higgins. HYMS. Scalp and head reconstruction from MRI.
Dr. T. Jellema, University of Hull, Department of Psychology. Imaging in
Autism.
We will also announce what is being discussed for YNiC for the autumn
term. Do come along and discuss training requirements, seminars and any
other issues you would like to see scheduled for a Thursday afternoon.
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
We are currently planning the Thursday evening sessions and other
training sessions for next term. If you have any thoughts or ideas we
would be keen to hear your views. The main sessions will start on the
18th of October.
This week though we have two new project presentations (Thursday the
20th at 4pm in YNiC Open Plan)
Project presentations will be made by
Professor P. O'Higgins. HYMS. Scalp and head reconstruction from MRI.
Dr. T. Jellema, University of Hull, Department of Psychology. Imaging in
Autism.
We will also announce what is being discussed for YNiC for the autumn
term. Do come along and discuss training requirements, seminars and any
other issues you would like to see scheduled for a Thursday afternoon.
We will circulate the draft timetable and ask for comments by email as well.
All welcome. Refreshments will be available.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
I am sorry to say that MEG is still unavailable. 4D are working on the
problem and believe that they have determined the cause. We expect an
engineer to visit this week and to make adjustments to the main power
supplies to the system.
As soon as MEG is available I will let you know.
MRI is working fine.
Gary
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954
I am sorry to have to report that MEG is currently unavailable. An
electronics board that is the interface between the scanner and the main
computer rack has failed.
4D have been informed and are trying to get a board to us as fast as
possible. Unfortunately this may take more than a week as the new board
will need substantial testing before it can be installed.
I will keep you informed as to when MEG will be available again.
I know this will cause some inconvenience and I apologise for the
problems that this will undoubtedly bring to some projects.
--
Gary Green
York Neuroimaging Centre
The Biocentre
York Science Park
Innovation Way
Heslington
York
YO10 5DG
http://www.ynic.york.ac.uk
tel. 01904 435349
fax 01904 435356
mobile 07986 778954