LORETA to fieldtrip

Robert Oostenveld r.oostenveld at FCDONDERS.RU.NL
Mon Mar 27 22:14:10 CEST 2006


Hi Vladimir

On 27 Mar 2006, at 18:42, Vladimir Litvak wrote:
> I tried to do some simple calculation and it doesn't look good. The size
> of a box defined by MNI coordinates used in LORETA is 141x166x116. Even
> in single precision it takes 10 MB. In order to do the analysis I can't
> do in LORETA (regression) I need to store 4 conditions for 10 subjects.
> That gives us 400 MB per time frame.

Yes, I can follow the math (141x166x116 voxels x 4 bytes x 4 conditions x
10 subjects ≈ 400 MB). That means that for 20 timepoints (assuming some
temporal downsampling and selection of a latency window of interest) you
would need about 8 GB of RAM just to hold the data.
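As a back-of-the-envelope check, the same arithmetic in MATLAB (the numbers
are just the ones above):

  % memory estimate for the full-resolution LORETA volumes
  nvoxels  = 141*166*116;          % voxels in the MNI box
  onevol   = nvoxels * 4;          % single precision, one volume
  perframe = onevol * 4 * 10;      % 4 conditions x 10 subjects
  total    = perframe * 20;        % 20 timepoints
  fprintf('%.0f MB per timeframe, %.1f GB for 20 timepoints\n', ...
          perframe/2^20, total/2^30);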

If you could a priori identify a time window of interest, then you could
average over that time window, limiting the analysis to a single 400 MB
volume. Alternatively, you could spatially downsample by a factor of
2x2x2 (using the DOWNSAMPLEVOLUME function), which would result in the
data fitting in 1 GB of RAM. After identifying the time of interest in
this spatially downsampled volume, you could go back to the full
resolution volume and only work on the (average) time of interest to
localize the effect in that time window.
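A minimal sketch of that downsampling step (the exact cfg options of
DOWNSAMPLEVOLUME are an assumption here and may differ per FieldTrip
version):

  % sketch of 2x2x2 spatial downsampling of a source volume
  cfg            = [];
  cfg.downsample = 2;                            % reduce each dimension by 2
  source_lowres  = downsamplevolume(cfg, source); % roughly 1/8th of the memory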

> We had an idea here to solve this but I'm not sure it's statistically
> sound. The idea is to do the analysis for each timeframe separately and
> then look for clusters continuous in time and compare them to what you
> would get with reshuffled timeframes. The problem is that the null
> hypothesis in the second step is not the same as in the first step.
> What do you think?

You cannot give an unambiguous interpretation of the probability that
you get out of the second step. The hypothesis of interest is not about
the timecourses (which have a natural autocorrelation that of course
will be destroyed by the shuffling), but about the data in the different
conditions. As far as I can tell, shuffling the time windows does not
correspond to a hypothesis that relates in a meaningful way to the data
that you observed in the 4 conditions.

> In any case, I can define some segments to average if there is no choice.

Furthermore, applying a Bonferroni correction for the multiple time
frames should not be too much of a problem.
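For example, with 20 retained time frames the corrected per-frame
threshold would simply be:

  ntime      = 20;             % assumed number of retained time frames
  alpha      = 0.05;
  alpha_corr = alpha / ntime;  % Bonferroni-corrected threshold per frame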

Alternatively, you can do a massive univariate test on chunks of the
data (clustering only in time and not in space), store the
probabilities per voxel, and use the false discovery rate method for
multiple comparison correction. I have code for FDR (but not in FT
yet). The idea in pseudocode would be

for slice = 1:116
   read the 10x4 slices for this level, each with 141x166 voxels
   do statistics on each slice, without multiple comparison correction
     over voxels, but with correction over timepoints: that gives you
     an interpretable probability per voxel
   save the probabilities
end
for all slices, read all voxel probabilities and apply FDR to control
for the expected proportion of false alarms over all voxels
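As an illustration (this is not the FDR code mentioned above), a
Benjamini-Hochberg step on the collected per-voxel probabilities could
look roughly like:

  % illustrative Benjamini-Hochberg FDR step; 'pvals' is assumed to be
  % the vector of per-voxel probabilities collected from all slices
  q        = 0.05;                      % desired false discovery rate
  pvals    = pvals(:);
  [ps, ix] = sort(pvals);               % sort p-values ascending
  m        = numel(ps);
  below    = find(ps <= (1:m)'/m * q);  % step-up criterion p(k) <= k/m*q
  sig      = false(m, 1);
  if ~isempty(below)
    sig(ix(1:below(end))) = true;       % reject all hypotheses up to the largest passing rank
  end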

best regards,
Robert


