ICA on 275ch MEG... removing artifacts

Michael Wibral wibral at BIC.UNI-FRANKFURT.DE
Fri Mar 27 18:41:54 CET 2009

Dear Suresh,

I will try to answer your questions below to the best of my knowledge (we do both MEG and EEG ICA here).
Sorry if the response got a bit lengthy.


> -----Original Message-----
> From: "Suresh Muthukumaraswamy" <sdmuthu at CARDIFF.AC.UK>
> Sent: 27.03.09 17:22:35
> Subject: [FIELDTRIP] ICA on 275ch MEG... removing artifacts

> Hi Everybody,
>      I have started playing with the componentanalysis and rejectcomponent
> functions to try to remove eye artefacts from CTF 275 channel MEG data. The
> aim is to try to remove eye artefacts and then run SAM analyses.
> Technically, I have the approach working fine and it does seem to reduce the
> eye artefact in the MEG channels near the front of the helmet (I have EOG
> traces as well)
> Specifically I was wondering
>   Typically how many components do people normally estimate and then how
> many of these would normally get rejected as containing eye artefact? 270
> components is alot to look through! I see one can limit the number of
> components the function can return....

In EEG we saw a definite improvement in the removal of blink artifacts when going from 64 to 128 channels - this suggests estimating as many components as possible. There is a second reason to do so: in order to estimate fewer components than you have sensors, you need to reduce the dimensionality of your data first, typically using PCA as a preprocessing step. The directions of the PCA dimensions that you remove are at odd angles to the later ICA component axes - hence by doing PCA you change every later IC a little bit, something that does not happen without PCA. This information is lost, because the data you will backproject for beamforming are the cleaned data, with a dimensionality of (number of sensors) - (dimensions removed by PCA) - (dimensions removed as artifact components).

There are algorithms that can reduce dimensions by taking out ICs during the estimation process - in fact we are working on those, and you may consider trying them (contact Georg Turi: turi at mpih-frankfurt.mpg.de). Note that deflationary ICA - run only up to the desired number of components - is NOT an option, as the order in which components are found is undefined.
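A minimal numpy sketch of this dimensionality bookkeeping (simulated data; a plain projection onto the top PCA directions stands in for the full ICA unmixing, and all sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 20, 5000

# hypothetical mixture: 8 latent sources on 20 sensors plus sensor noise
S = rng.laplace(size=(8, n_samples))
A = rng.normal(size=(n_sensors, 8))
X = A @ S + 0.01 * rng.normal(size=(n_sensors, n_samples))

# PCA pre-reduction: keep only the top 8 principal directions
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = U[:, :8]                       # 20 x 8 projection matrix
Y = W.T @ Xc                       # reduced data, 8 x n_samples

# (an ICA would now unmix Y; here we simply zero one "artifact"
# dimension to show the bookkeeping)
Y[0, :] = 0.0
X_clean = W @ Y                    # back-projection to the 20 sensors

# rank = sensors - PCA-removed dims - rejected components = 20 - 12 - 1
print(np.linalg.matrix_rank(X_clean))   # 7
```

The cleaned sensor data thus no longer span the full sensor space, which is exactly what causes the covariance problem described next.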

By backprojecting these lower-dimensional data to your sensors you run into an additional mathematical problem when planning to do beamforming: technically, the covariance matrix of your data is rank deficient, because you have more sensors (i.e. rows/columns of the covariance matrix) than dimensions in your data (because you removed some of them). In theory this should cause an error in your analysis. That you did not encounter such an error is most likely because you use regularization (lambda in FieldTrip). This adds a scaled identity matrix to your covariance matrix and restores full rank.
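A small numpy illustration of the rank problem and of what the regularization does (the 15-dimensional data and the 5%-of-mean-variance lambda are invented values, not FieldTrip defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_samples = 20, 5000

# hypothetical cleaned data confined to a 15-dimensional subspace
B = rng.normal(size=(n_sensors, 15))
X = B @ rng.normal(size=(15, n_samples))

C = np.cov(X)                             # 20 x 20 covariance, but rank 15
print(np.linalg.matrix_rank(C))           # 15 -> inversion is ill-posed

# regularization in the spirit of FieldTrip's lambda option:
# add a scaled identity so the covariance becomes full rank
lam = 0.05 * np.trace(C) / n_sensors
C_reg = C + lam * np.eye(n_sensors)
print(np.linalg.matrix_rank(C_reg))       # 20 -> invertible again
C_inv = np.linalg.inv(C_reg)              # beamformer weights need this
```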

How many components should be removed for eye artefacts?
Difficult to say; there are usually one or two blink components. Eye movements are a totally different story. The signals from eye movements VIOLATE the ICA mixing model, which assumes stationary, not moving, sources - i.e. in theory they cannot be modelled as independent components at all. Sometimes you may be lucky and exact rotations of the eyeball can be modelled as two orthogonal stationary sources with sinusoidal modulations - if that is the case you should get roughly 4 components: 2 for each principal axis of rotation. But be careful, because this really means abusing the ICA mixing model!

How many components can you estimate at all? 
For algorithms like FastICA or Infomax a stable estimation requires a number of samples larger than 20*(number of sensors)^2 - that is the rule of thumb given in the EEGLAB tutorial. The mathematical limit is 3*(number of sensors)^2 (this is the Cramer-Rao lower bound); below this an estimation is mathematically impossible. Some more modern algorithms (e.g. EFICA) claim to almost reach the Cramer-Rao lower bound, so you may consider using these.
If you get close to this threshold, we strongly recommend a statistical validation of your results, e.g. using ICASSO or RICE (you could contact Georg Turi for this as well).
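For the 275-channel system in question, those two thresholds work out as follows (the 600 Hz sampling rate is an assumption purely for illustration):

```python
# sample-count thresholds for stable ICA, per the rule of thumb above
n_sensors = 275
rule_of_thumb = 20 * n_sensors ** 2    # 20 * 275^2 = 1,512,500 samples
hard_limit = 3 * n_sensors ** 2        # 3 * 275^2  =   226,875 samples

# at a hypothetical 600 Hz sampling rate, that much data corresponds to:
fs = 600.0
print(rule_of_thumb / fs / 60)   # ~42 minutes of recording
print(hard_limit / fs / 60)      # ~6.3 minutes of recording
```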

To sum up: use all possible components if you can afford it, i.e. if you have enough meaningful samples. If not, consider a non-PCA approach to dimension reduction. If you are close to the Cramer-Rao lower bound you should statistically validate your results by running the ICA from random starting points several hundred times (e.g. with ICASSO). And be careful with contributions from eye movements - moving sources are not covered by the ICA mixing model you are trying to estimate, so anything is possible there.

>  Do people normally reject solely on topography or do they do a frequency
> analysis of the component time-course or perhaps other things?

You would reject on the basis of both the topography and the IC time course.

> Moreoever, I was also wondering if anyone had carried out any kind of
> systematic comparison between this approach for MEG compared to traditional
> EEG approaches to this problem (e.g. projecting out the EOG channels)?

For blinks ICA works as well on MEG as it does on EEG, and it works very well! Projecting out the EOG channels will definitely distort the signal (a detailed discussion of what to expect can be found in the BESA help for example).
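A toy numpy example of that distortion, assuming the EOG channel also picks up some brain signal (all mixing coefficients here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
t = np.arange(n) / 1000.0
brain = np.sin(2 * np.pi * 10 * t)       # a 10 Hz "brain" rhythm
blink = rng.laplace(size=n)              # blink/eye activity

# hypothetical recordings: the EOG channel is not pure eye signal
meg = brain + 0.5 * blink
eog = blink + 0.2 * brain

# "projecting out" the EOG channel = regressing it out of the MEG channel
beta = np.dot(meg, eog) / np.dot(eog, eog)
meg_clean = meg - beta * eog

# the blink is removed, but part of the brain rhythm went with it
residual_brain = np.dot(meg_clean, brain) / np.dot(brain, brain)
print(residual_brain)   # noticeably below 1: brain signal attenuated
```

Because the regression weight is fit to the whole EOG trace, any brain activity leaking into the EOG gets subtracted from the MEG channels along with the blinks - which is the distortion referred to above.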

The aim of this list is to facilitate the discussion between users of the FieldTrip  toolbox, to share experiences and to discuss new ideas for MEG and EEG analysis. See also http://listserv.surfnet.nl/archives/fieldtrip.html and http://www.ru.nl/neuroimaging/fieldtrip.
