[FieldTrip] ICA/PCA EOG artifact removal

Hamid Mohseni hamid.mohseni at eng.ox.ac.uk
Thu Oct 10 12:51:07 CEST 2013


This is an interesting question!

I think PCA is better than ICA for eye-blink removal, because what ICA does
is first ESTIMATE the eye-blink component and then linearly remove that
component from the data set. In the PCA approach, however, a better estimate
of the eye-blink component is obtained (using, for example, a direct
eye-blink measurement such as the Eyelink system) and then linearly removed
from the data set; note that this component also does not contain brain
activity. In cases where we do not have a direct eye-blink measurement, ICA
is better.
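
To make the "linearly remove that component" step concrete, here is a minimal
plain-MATLAB sketch (the variable names X and w are illustrative, not from
this thread): given a blink topography w estimated from the direct eye-blink
measurement, the data are projected onto the subspace orthogonal to w.

    % X : nChannels x nSamples data matrix
    % w : nChannels x 1 blink topography, e.g. the dominant spatial pattern
    %     of the data averaged around the measured blink onsets
    P      = eye(size(X,1)) - (w * w') / (w' * w);  % projector orthogonal to w
    Xclean = P * X;                                 % blink direction removed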

Other ideas?

Thanks





On 10 October 2013 10:50, Alik Widge <alik.widge at gmail.com> wrote:

> Thank you for posting this, as I'm in the middle of this same processing
> step, and I've been pondering methods as well while following the
> tutorials. (In my case, 60-channel EEG plus one bipolar EOG in a diagonal
> configuration.) Your response to Craig contains something that's been
> bugging me a little bit:
>
> When doing the ICA on a test recording, I find (and it sounds like he
> finds as well) that there is not a single component that captures eyeblinks
> well (and I have tried various adjustments, such as altering number of
> components via pre-PCA, clipping or not clipping out epochs that look to me
> to have substantial slow eye-roll, turning runica.extended on/off). I do
> get components that have the classic "pair of eyeglasses" or "single
> eyeball" look... but I get four or five of them on a 58-component
> decomposition, and that's before we talk about the components that are
> almost pure 60-cycle noise or temporal EMG.
>
> You seem to be telling us, in your comment about orthogonality and
> percentage of variance captured, that this is actually a *good* thing,
> because it reduces the chance of removing activity from the frontal pole.
> Can you help me understand that a bit better? I've felt very nervous about
> the sheer number of components I'm removing; it feels as though I'm killing
> a big chunk of the dataset, and doing so somewhat blindly.
>
>
> Thanks,
>
> Alik Widge, MD, PhD
> Massachusetts General Hospital
> Charlestown, MA, USA
> alik.widge at gmail.com
> (206) 866-5435
>
>
>
> On Thu, Oct 10, 2013 at 4:43 AM, Robert Oostenveld <
> r.oostenveld at donders.ru.nl> wrote:
>
>> Hi Craig,
>>
>> Let me forward this to the email discussion list.
>>
>>
>> On 9 Oct 2013, at 23:27, CR wrote:
>>
>> > Hi Robert,
>> > I wanted to see what your thoughts were on the merits of two different
>> methods of removing blinks. I have a 12-minute resting-state segment of
>> data, so it has required me to do some things a little differently.
>> >
>> > Method 1:  ICA
>> >
>> > I break the 12-minute segment into 2-second intervals, since doing ICA
>> on the whole segment gave a poor result.
>>
>> Why does it give you a poor result? Has the subject been moving? Is there
>> something else that makes the data not compatible with the stationary
>> mixing assumption?
>>
>> Or is it the difference in the preprocessing? 12 minutes of data
>> represented in one segment can have drift, whereas 12 minutes of data
>> represented in 2-second snippets will not have the drift (assuming you use
>> the default cfg.demean = 'yes'). Applying a high-pass filter to the
>> continuous data would have a similar effect as segmenting it and demeaning
>> the 2-second snippets.
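
A minimal FieldTrip sketch of these two alternatives, assuming 'data' holds
the continuous 12-minute recording (variable names are illustrative):

    % cut the continuous data into 2-second snippets and demean each snippet
    cfg        = [];
    cfg.length = 2;
    data_seg   = ft_redefinetrial(cfg, data);

    cfg        = [];
    cfg.demean = 'yes';
    data_seg   = ft_preprocessing(cfg, data_seg);

    % or: remove the drift from the continuous data with a high-pass filter
    cfg          = [];
    cfg.hpfilter = 'yes';
    cfg.hpfreq   = 0.5;   % illustrative cutoff
    data_hp      = ft_preprocessing(cfg, data);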
>>
>> > I apply the resulting unmixing matrix to the 12 minute segment and
>> correlate each component with the EOG to find the most relevant components,
>> and reject these based on a threshold.
>>
>> so a bit like
>>
>> http://fieldtrip.fcdonders.nl/example/use_independent_component_analysis_ica_to_remove_eog_artifacts
>> with the correlation method of
>>
>> http://fieldtrip.fcdonders.nl/example/use_independent_component_analysis_ica_to_remove_ecg_artifacts
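
A minimal sketch along the lines of those example pages, assuming comp_2s is
the decomposition estimated on the 2-second snippets, 'data' is the full
12-minute segment containing the same channels, and data_eog holds the EOG
channel (the names and the 0.5 threshold are illustrative):

    % apply the unmixing matrix estimated on the snippets to the full data
    cfg           = [];
    cfg.unmixing  = comp_2s.unmixing;
    cfg.topolabel = comp_2s.topolabel;
    comp          = ft_componentanalysis(cfg, data);

    % correlate each component time course with the EOG
    ic  = cell2mat(comp.trial);        % nComponents x nSamples
    eog = cell2mat(data_eog.trial);    % 1 x nSamples
    r   = corr(ic', eog');             % needs the Statistics Toolbox
    bad = find(abs(r) > 0.5);          % threshold-based selection

    % remove the selected components from the data
    cfg           = [];
    cfg.component = bad;
    data_clean    = ft_rejectcomponent(cfg, comp, data);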
>>
>>
>> > Method 2: PCA
>> >
>> > I do a timelock analysis based on the blink onset points returned by the
>> Eyelink system. I then run PCA on the resulting blink ERF, and I reject the
>> component(s) that account for, say, 98% of the total variance.
>> >
>> > Obviously option 2 is much faster.  What do you see as the relative
>> merits/problems with the techniques?  Technique 1 is largely what the FT
>> tutorials suggest, so what about method 2?
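
A minimal FieldTrip sketch of this PCA route, assuming data_blink contains
segments cut around the Eyelink blink onsets and 'data' is the full recording
with the same channels (names are illustrative):

    % average the blink-locked segments
    cfg      = [];
    blinkERF = ft_timelockanalysis(cfg, data_blink);

    % PCA of the averaged blink response
    cfg        = [];
    cfg.method = 'pca';
    comp       = ft_componentanalysis(cfg, blinkERF);

    % PCA components are ordered by variance; count how many account for ~98%
    ctc  = comp.trial{1};                        % component time courses
    v    = var(ctc, 0, 2);
    nrej = find(cumsum(v) / sum(v) >= 0.98, 1, 'first');

    % project those components out of the full data
    cfg           = [];
    cfg.component = 1:nrej;
    data_clean    = ft_rejectcomponent(cfg, comp, data);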
>>
>> Option 2 makes the large-variance component orthogonal to the remainder
>> of the components, whereas in option 1 the eye component and the frontal
>> brain components are both estimated, are not orthogonal, and removing the
>> eye component does not remove the frontal brain components.
>>
>> Option 1 is better, as it is less aggressive in removing brain components.
>>
>> If speed is a concern, you could:
>> - ft_resampledata to e.g. 250 Hz or even less, estimate the components
>> based on that, and project them out of the original high-Fsample data (see
>> the sketch after this list)
>> - do ft_componentanalysis on a subset of the data (say every 4th data
>> segment after cutting it in pieces), and project the components out of the
>> original segmented data
>> - a combination of the two
>> - try another ICA algorithm (fastica versus runica)
>> - experiment with the options of the ICA algorithm, especially the stopping
>> options
>> - get a faster computer
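
A minimal sketch of the first suggestion, assuming data_seg holds the
segmented data (names are illustrative); the resulting components can then be
identified and removed as in the correlation sketch above:

    % downsample and estimate the components on the downsampled data ...
    cfg            = [];
    cfg.resamplefs = 250;
    data_ds        = ft_resampledata(cfg, data_seg);

    cfg        = [];
    cfg.method = 'runica';   % or 'fastica'
    comp_ds    = ft_componentanalysis(cfg, data_ds);

    % ... and apply that unmixing to the original high-Fsample data
    cfg           = [];
    cfg.unmixing  = comp_ds.unmixing;
    cfg.topolabel = comp_ds.topolabel;
    comp          = ft_componentanalysis(cfg, data_seg);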
>>
>> best regards
>> Robert
>>
>>
>>
>>
>>
>>
>
>



-- 
Hamid R. Mohseni, PhD
Post-Doctoral Research Fellow
Institute of Biomedical Engineering
University of Oxford, OX3 7DQ, UK
Tel: +44 (0) 1865 2 83826