about cluster randomization analysis

Marco Buiatti marco.buiatti at GMAIL.COM
Tue Nov 8 13:58:57 CET 2005

Hi Eric,

On 11/8/05, Eric Maris <maris at nici.ru.nl> wrote:
> Hi Marco,
> thank you for your accurate responses. I fully understand from your
> arguments that temporally zooming in on clusters is definitely wrong. Still, I
> wonder whether and how it is possible to use cluster randomization analysis in
> cases in which it is difficult to formulate a precise hypothesis about when
> to expect an effect (for example, in infants), or cases in which an
> unexpected effect arises from a t-test. Do you think it would be correct to
> slide a relatively large window (200 ms? 400 ms wide? to be chosen a priori, of
> course) through the epochs and compute the cluster randomization analysis
> at each latency, to explore t-test clusters of dubious significance?
>  If you have no hypothesis about where to expect an effect, you should use
> the complete latency window in which it may occur. Of course, this will
> reduce the sensitivity (statistical power) of your test (in comparison with
> the situation in which you do know when the effect can occur). As a rule,
> prior knowledge increases sensitivity.

>  Another related question: I computed a post-hoc, non-kosher tuning of the
> window around the most significant cluster in my data, and I saw that it
> is significant (p<0.05) if the window edges extend about 50 ms beyond the
> cluster edges (since the cluster is about 70 ms long, the whole window is
> about 170 ms long); but if I take longer windows, the p-value increases
> quite rapidly (I'm running at least 500 random draws for each window, and
> checking that the result does not depend on the number of draws). Do you
> see such instabilities in your data, or should I conclude that the effect
> underlying my cluster is simply too weak? Or maybe my data are not
> clean enough?
>  This phenomenon is not an instability, it is what I would expect. Imagine
> your trials are 10 seconds long and there is an effect in the latency window
> between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length).
> If you ask clusterrandanalysis to compare the conditions over the complete
> trial length, it may very well miss the effect in the window between 1.3 and
> 1.35 seconds, because it has to use a large critical value in order to
> control for false positives in the time window where there is no effect (
> i.e., 99 percent of the 10 second trial).
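[Editor's note] The mechanism Eric describes can be sketched with a toy paired cluster-based permutation test (sign-flipping across subjects; cluster mass = sum of |t| over a contiguous suprathreshold run), applied to simulated data with an effect confined to a short latency window. All numbers here (time points, effect size and location) are illustrative assumptions, not taken from the thread, and the real FieldTrip clusterrandanalysis is considerably more elaborate:

```python
import numpy as np

def cluster_perm_test(cond_a, cond_b, t_thresh=2.0, n_perm=500, seed=0):
    """Toy paired cluster-based permutation test on 1-D data.

    cond_a, cond_b: (n_subjects, n_times) arrays. Returns the permutation
    p-value of the largest cluster; positive and negative t are pooled
    via |t| for brevity.
    """
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b
    n_sub = diff.shape[0]

    def max_cluster_mass(d):
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_sub))
        best = run = 0.0
        for ti in np.abs(t):
            run = run + ti if ti > t_thresh else 0.0  # extend or reset the cluster
            best = max(best, run)
        return best

    observed = max_cluster_mass(diff)
    # Null distribution: randomly flip the sign of each subject's difference.
    count = sum(
        max_cluster_mass(diff * rng.choice([-1.0, 1.0], size=(n_sub, 1))) >= observed
        for _ in range(n_perm)
    )
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(42)
n_sub, n_times = 8, 500                      # 8 subjects, as in Marco's data
data_a = rng.normal(size=(n_sub, n_times))
data_b = rng.normal(size=(n_sub, n_times))
data_a[:, 130:165] += 1.5                    # effect confined to a short window

p_narrow = cluster_perm_test(data_a[:, 100:200], data_b[:, 100:200])
p_wide = cluster_perm_test(data_a, data_b)
# Tested over the full trial, the same effect competes against chance
# clusters from many more time points, so p_wide is typically >= p_narrow.
```

With the search widened to ten times as many time points, the maximum chance cluster in the permutation distribution grows, which is exactly the loss of sensitivity Eric describes.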
I also expected the significance to decrease as the time window widens, for
the same reason, but I was surprised to see the p-value increase so
rapidly. Let me pose the question more clearly: from your experience,
would you say that the effect I described can be considered significant or
not? (A few other details: I have 128 electrodes, 8 subjects, and the window
I'm choosing is the window where I expect an effect from the literature.) A
related question is: how much do artifacts influence this kind of test?
Thank you again,
Marco Buiatti - Post Doc

Cognitive Neuroimaging Unit - INSERM U562
Service Hospitalier Frederic Joliot, CEA/DRM/DSV
4 Place du general Leclerc, 91401 Orsay cedex, France
Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16
E-mail: marco.buiatti at gmail.com Web: http://www.unicog.org
