[FieldTrip] Source statistics on spatio-temporal source reconstruction data (MNE)

smoratti at psi.ucm.es
Wed Jun 12 18:58:40 CEST 2013


I think Jan-Mathijs's alternative suggestion is quite attractive. With the neighbors on a cortical sheet I also had the problem that the vertices are not equidistant, so clustering can be biased towards smaller or larger clusters because the number of neighbors does not guarantee comparable cluster sizes. With the interpolation onto a 3D grid, you won't have that problem.
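
Just to make that concrete, a minimal sketch of the 3D-grid route could look roughly like the following. It is untested; mri, template_grid and source_mne are placeholder variable names, the nearest-neighbour projection is only one of several possible interpolation schemes, and the exact ft_prepare_sourcemodel options may differ between FieldTrip versions:

  % 1) subject-specific regular 3D grid, warped from a template grid defined in MNI space,
  %    so that grid point k corresponds to the same template location in every subject
  cfg                = [];
  cfg.grid.warpmni   = 'yes';
  cfg.grid.template  = template_grid;    % e.g. a standard sourcemodel shipped with FieldTrip
  cfg.grid.nonlinear = 'yes';
  cfg.mri            = mri;              % individual anatomical MRI
  grid3d             = ft_prepare_sourcemodel(cfg);

  % 2) project the sheet-based MNE result onto the grid by nearest-neighbour assignment
  %    (assuming source_mne.pos and grid3d.pos are in the same coordinate system and units)
  nearest = knnsearch(grid3d.pos, source_mne.pos);   % Statistics Toolbox; or compute the distances by hand
  pow3d   = nan(size(grid3d.pos, 1), size(source_mne.avg.pow, 2));
  for k = 1:size(grid3d.pos, 1)
    sel = (nearest == k);                            % vertices whose closest grid point is k
    if any(sel)
      pow3d(k, :) = mean(source_mne.avg.pow(sel, :), 1);
    end
  end

  source3d         = [];
  source3d.pos     = grid3d.pos;
  source3d.dim     = grid3d.dim;
  source3d.inside  = find(~isnan(pow3d(:, 1)));      % grid points that received at least one vertex
  source3d.time    = source_mne.time;
  source3d.avg.pow = pow3d;

Because every subject ends up on the same template-aligned grid, the resulting structures can then go into ft_sourcestatistics with cfg.method = 'montecarlo' and cfg.correctm = 'cluster', and the regular 3D grid provides the spatial neighbourhood for the clustering.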

best,

Stephan


________________________________________________________
Stephan Moratti, PhD

see also: http://web.me.com/smoratti/

Universidad Complutense de Madrid
Facultad de Psicología
Departamento de Psicología Básica I
Campus de Somosaguas
28223 Pozuelo de Alarcón (Madrid)
Spain

and

Center for Biomedical Technology
Laboratory for Cognitive and Computational Neuroscience
Parque Científico y Tecnológico de la Universidad Politecnica de Madrid
Campus Montegancedo
28223 Pozuelo de Alarcón (Madrid)
Spain


email: smoratti at psi.ucm.es
Tel.:    +34 679219982

On 12/06/2013, at 18:00, jan-mathijs schoffelen wrote:

> An alternative would be to interpolate the cortical sheet onto a 3D grid (where the grid is defined for each subject based on a warped template grid defined in a standard space), and then do the clustering using a regular 3D spatial neighbourhood structure. The rationale is that two vertices on the sheet may appear disconnected (e.g. lying on opposite sides of a sulcus) whereas, given the poor spatial resolution, they actually belong to the same spatial blob.
> 
> Best,
> Jan-Mathijs
> 
> On Jun 12, 2013, at 5:44 PM, smoratti at psi.ucm.es wrote:
> 
>> Dear Nicolai,
>> 
>> Indeed I have used ft_timelockstatistics for minimum norm source data. The trick is to put the source-level data into an ERF structure. Determining the neighbors of the vertices on a source surface is not trivial; however, I used tess_vertconn.m from the Brainstorm toolbox to get the connectivity matrix that tells you which vertices are neighbors. This you can then feed into ft_timelockstatistics.
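>> 
>> Something along these lines should do the trick (just a rough, untested sketch: here vertconn stands for the [Nvert x Nvert] connectivity matrix returned by tess_vertconn, and source.avg.pow for the [Nvert x Ntime] MNE output):
>> 
>>   nvert  = size(vertconn, 1);
>>   labels = cellstr(num2str((1:nvert)', 'vertex%04d'));     % fake "channel" labels, one per vertex
>> 
>>   % pack the source time courses as if they were an averaged ERF
>>   tlck        = [];
>>   tlck.label  = labels;
>>   tlck.time   = source.time;        % 1 x Ntime
>>   tlck.avg    = source.avg.pow;     % Nvert x Ntime
>>   tlck.dimord = 'chan_time';
>> 
>>   % translate the connectivity matrix into FieldTrip's neighbours structure
>>   neighbours = struct('label', {}, 'neighblabel', {});
>>   for k = 1:nvert
>>     idx = find(vertconn(k, :));                            % vertices connected to vertex k
>>     neighbours(k).label       = labels{k};
>>     neighbours(k).neighblabel = labels(idx);
>>   end
>> 
>> One such timelock structure per subject (and condition), together with cfg.neighbours = neighbours and cfg.correctm = 'cluster', then goes into ft_timelockstatistics.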
>> 
>> Hope that helps,
>> 
>> Stephan
>> 
>> 
>> On 12/06/2013, at 15:44, Nicolai Mersebak wrote:
>> 
>>> Dear all,
>>> 
>>> I have a question concerning the usage of ft_sourcegrandaverage and ft_sourcestatistics. 
>>> 
>>> After using ft_sourceanalysis (method: MNE), I get spatio-temporal source-reconstructed data in source.avg.pow (4050 x 897): 4050 sources and 897 time points.
>>> 
>>> Now I would like to use the cluster-based permutation test on my source-reconstructed data. However, it seems that ft_sourcegrandaverage and ft_sourcestatistics don't support source-level time courses. E.g. when I use ft_sourcegrandaverage I get the following error:
>>> 
>>> Error in ft_sourcegrandaverage (line 158)
>>>   dat(:,i) = tmp(:);
>>> 
>>> Looking into the code:
>>> 
>>>   for i=1:Nsubject
>>>     tmp = getsubfield(varargin{i}, parameterselection(cfg.parameter, varargin{i}));
>>>     dat(:,i) = tmp(:);
>>>     tmp = getsubfield(varargin{i}, 'inside');
>>>     inside(tmp,i) = 1;
>>>   end
>>> 
>>> I see that "tmp" receives the [N_sources x timepoints] matrix from source.avg.pow for one subject, whereas "dat" expects a column of size [N_sources x 1].
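>>> 
>>> (Collapsing the time dimension beforehand would of course make "tmp" a column and avoid the error, e.g. something like the sketch below over a hypothetical time window, but that throws away exactly the temporal information I want to keep.)
>>> 
>>>   toi = source.time >= 0.1 & source.time <= 0.2;     % hypothetical window of interest
>>>   source.avg.pow = mean(source.avg.pow(:, toi), 2);  % now N_sources x 1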
>>> 
>>> I searched the mailing list for similar issues and found this thread:
>>> 
>>> http://mailman.science.ru.nl/pipermail/fieldtrip/2010-September/003122.html
>>> 
>>> Since I am interested in using the temporal dimension in my statistics, I would like to know whether it is still the case that spatio-temporal source-reconstructed data cannot be used in ft_sourcestatistics and ft_sourcegrandaverage?
>>> 
>>> Or has anyone succeeded in using the cluster-based permutation test at the source level while also including the temporal dimension?
>>> 
>>> Alternatively, I was thinking that I could use ft_timelockstatistics and substitute the channels with sources, i.e. instead of having 64 channels I would now have 4050 "channels".
>>> If so, I would need to construct a label structure and an appropriate neighbor structure, which I guess is possible since I have the 3D coordinates of each source, e.g. in leadfield.pos (a rough sketch of what I mean is below).
>>> I know this is a workaround, but has anyone tried it, or does anyone have experience with such an approach?
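>>> 
>>> Something like the following is roughly what I have in mind for the neighbor structure (just an untested sketch; I am assuming here that leadfield.pos is in cm, so the radius would need to be adapted to the actual units and grid spacing):
>>> 
>>>   pos    = leadfield.pos(leadfield.inside, :);             % keep only the sources inside the brain
>>>   nsrc   = size(pos, 1);
>>>   labels = cellstr(num2str((1:nsrc)', 'src%04d'));         % fake channel labels
>>> 
>>>   maxdist    = 1.5;                                        % neighborhood radius (assumed to be cm)
>>>   neighbours = struct('label', {}, 'neighblabel', {});
>>>   for k = 1:nsrc
>>>     d   = sqrt(sum(bsxfun(@minus, pos, pos(k, :)).^2, 2)); % distance of every source to source k
>>>     sel = find(d > 0 & d <= maxdist);
>>>     neighbours(k).label       = labels{k};
>>>     neighbours(k).neighblabel = labels(sel);
>>>   end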
>>> 
>>> Best,
>>> 
>>> Nicolai
>>> 
>> 
> 
> Jan-Mathijs Schoffelen, MD PhD 
> 
> Donders Institute for Brain, Cognition and Behaviour, 
> Centre for Cognitive Neuroimaging,
> Radboud University Nijmegen, The Netherlands
> 
> Max Planck Institute for Psycholinguistics,
> Nijmegen, The Netherlands
> 
> J.Schoffelen at donders.ru.nl
> Telephone: +31-24-3614793
> 
> http://www.hettaligebrein.nl
> 
