[FieldTrip] Speeding up Statistics using multiple cores

Antony Passaro antony.passaro at gmail.com
Wed Mar 19 14:22:51 CET 2014


I have seen this issue come up a few times on the mailing list, especially with regard to computing statistics for large datasets (usually analyses with several dimensions and many subjects, trials, and/or channels/sources). Running even 1000 iterations of a Monte Carlo simulation with cluster correction to compare two large datasets typically takes a long time, and often the computer does not have enough memory to perform the operation. One possible solution we found takes advantage of multi-core computers: the number of iterations per run is reduced to only 50 or 100, and these smaller runs are spread across the cores simultaneously. To compute the statistical significance of a cluster, one only needs the distribution of cluster t-values across all iterations. That being the case, one can simply save the cluster t-values across iterations and cores and then compute a significance value for each cluster based on the pooled distribution. In our experience this tends to reduce the processing time by a factor of the number of cores available on a given computer. Perhaps this is something that could be implemented in a future release of FieldTrip?
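
For illustration, a rough sketch of what I mean (using MATLAB's parfor; the exact output field names for the permutation distributions, stat.posdistribution / stat.negdistribution, are an assumption and may differ across FieldTrip versions):

% Sketch: split the Monte Carlo randomizations over workers and pool the
% null distributions of the maximum cluster statistic afterwards.
% Assumes data1/data2 are timelock structures and that cfg.design,
% cfg.ivar, cfg.neighbours, etc. have already been set up.

nchunks   = 10;     % e.g. one chunk per core
nperchunk = 100;    % 10 x 100 = 1000 randomizations in total

cfg.method           = 'montecarlo';
cfg.statistic        = 'indepsamplesT';
cfg.correctm         = 'cluster';
cfg.numrandomization = nperchunk;

posnull = cell(nchunks, 1);
negnull = cell(nchunks, 1);
stats   = cell(nchunks, 1);

parfor k = 1:nchunks
  stats{k}   = ft_timelockstatistics(cfg, data1, data2);
  posnull{k} = stats{k}.posdistribution(:);  % max cluster stat per randomization (assumed field)
  negnull{k} = stats{k}.negdistribution(:);
end

posnull = cat(1, posnull{:});  % pooled null distribution, nchunks*nperchunk values
negnull = cat(1, negnull{:});

% The observed clusters depend only on the data, not on the randomizations,
% so they are identical across chunks; take them from the first chunk and
% re-estimate their p-values against the pooled distribution.
stat = stats{1};
for c = 1:numel(stat.posclusters)
  stat.posclusters(c).prob = mean(posnull >= stat.posclusters(c).clusterstat);
end
for c = 1:numel(stat.negclusters)
  stat.negclusters(c).prob = mean(negnull <= stat.negclusters(c).clusterstat);
end

The p-value estimate here is a simplified proportion rather than FieldTrip's exact internal formula, but with 1000 pooled randomizations the difference is negligible.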

-Tony