[FieldTrip] Single-subject Monte Carlo PLV or WPLI test?
sunyata at gmail.com
Mon Feb 21 21:39:56 CET 2011
>> I'd already tried clustering across only time/frequency and not across
>> channel, but what I found was that the strongest channels "set the
>> bar" for all the others, so to speak. I would see 2-3 strong channels
>> with long significant durations reaching significance, and everything
>> else would be silenced. Whereas with parametric stats, I had enough
>> strong signals to detect changes under FDR and Bonferroni correction
>> across a wide range of times and channels. Would z-scoring to
>> compensate for electrode sensitivity differences have helped?
> Are the channel-time-frequency-specific parametric p-values smaller than the
> corresponding permutation p-values (for the same test statistic, of course)?
> If that is the case, then I would be suspicious w.r.t. the validity of the
> parametric test (in this case, its false alarm rate control). (This does not
> mean, of course, that a smaller parametric p-value implies poor false alarm
> rate control.)
The parametric p-vals are much smaller, but wouldn't they have to be?
To reach significance after Bonferroni correction, they'd have to be
on the order of .0005. FWIW, when I ran the permutation tests, I did
see p-vals at the minimum possible values.
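To make the resolution point concrete, here is a small sketch (plain Python, not FieldTrip; the channel count and permutation counts are illustrative) of the smallest p-value a Monte Carlo test can report versus a Bonferroni-adjusted threshold:

```python
# Smallest achievable Monte Carlo p-value with n_perm permutations,
# when the observed statistic is included in the reference distribution:
def min_mc_pvalue(n_perm):
    return 1.0 / (n_perm + 1)

alpha = 0.05
n_channels = 100                       # illustrative channel count
bonferroni_alpha = alpha / n_channels  # per-channel threshold: 0.0005

for n_perm in (1000, 2000, 20000):
    floor = min_mc_pvalue(n_perm)
    print(n_perm, floor, floor < bonferroni_alpha)
```

A channel can only survive Bonferroni correction if the Monte Carlo p-value floor lies below the adjusted threshold; 2000 permutations barely clears 0.0005, which is why something like 20000 is needed to leave any resolution margin.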
> It's not clear to me what you mean by z-scoring. Does this amount to a
> linear transform of the dependent variable (the same one for the conditions
> that are compared)?
This would have involved z-scoring the log power. With intracranial
data, some electrodes are simply less sensitive because they're not
making good contact with the brain (because of the way they're
embedded in plastic sheets, each electrode is not individually
placed), and not because they're picking up a more distal/weaker
signal. So, one way to compensate would be to normalize the power. I
asked just in case, but I didn't really suspect this would work. Even
if power were normalized, the cluster-max permutation distribution
would still be dominated by the electrodes that were significant for
the longest period of time.
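For what it's worth, the kind of normalization meant here can be sketched in a few lines of NumPy (illustrative only; the array shape and the simulated "insensitive" channel are assumptions, not real data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative power array: trials x channels x times, with one
# channel scaled down to mimic poor electrode contact.
power = rng.gamma(shape=2.0, scale=1.0, size=(50, 4, 20))
power[:, 3, :] *= 0.1

log_power = np.log(power)  # log-transform (cf. Kiebel et al.)

# z-score each channel across trials and time, removing per-electrode
# gain/offset differences:
mu = log_power.mean(axis=(0, 2), keepdims=True)
sd = log_power.std(axis=(0, 2), keepdims=True)
z = (log_power - mu) / sd

print(z.mean(axis=(0, 2)))  # per-channel means, ~0 after z-scoring
```

This equalizes the per-electrode scale, but, as noted above, it would not change which electrodes dominate the cluster-max distribution, since that is driven by the duration of the effects rather than their raw amplitude.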
>> I considered doing Monte Carlo stats for each channel independently,
>> and then adjusting critical p-vals via FDR or Bonferroni, but for 100
>> channels, I would need at least 20000 permutations just to have the
>> Monte Carlo p-val resolution *approach* the adjusted Bonferroni
>> p-vals, and would probably need more to be safe. Factor in several
>> subjs and contrasts, and I computed my analysis would take a few weeks
>> to run.
>> This was why I asked about a parametric method; while I'd prefer
>> permutation methods, I fear the same problem will occur with my
>> connectivity analysis. I know your focus is on permutation stats, but
>> do you have any insight into how to proceed? Think I could generate a
>> permutation distribution of the WPLI differences from a random
>> sampling of electrodes and contrasts, and then, if they look
>> sufficiently close to normal (or transformable via something like a
>> log transform), use that as an argument for using t-tests if I have to.
> If a parametric test solves your problem, then you should definitely use it.
> However, for statistical tests outside the normal theory parametric
> framework, it is typically a big challenge to come up with an appropriate
> parametric reference distribution. I expect this to hold for the WPLI too.
> Statistical testing of differences at the level of channel pairs (e.g.,
> differences in coupling strength) is a big methodological challenge, for
> many reasons (a huge multiple comparison problem, lack of specificity of the
> coupling measure that is used for testing, difficulty of clustering in the
> space formed by channel pairs). A discussion of these issues is beyond the
> scope of the FT discussion list.
> It is not clear to me what you mean by "generate a permutation distribution
> of the WPLI differences from a random sampling of electrodes and contrasts".
> Constructing a permutation distribution in a single subject study (which I
> assume you are conducting, because you have ECoG data) involves random
> partitioning of trials (and not electrodes and contrasts).
Ehh, this was a bit more ad hoc. I didn't have a formal method in
mind; I was just thinking about the Kiebel, Tallon-Baudry, and
Friston HBM paper, where they show that log-transforming power renders
it approximately normal. It should, however, be legitimate for any given
electrode pair to permute trials, compute the connectivity metrics and
their difference, generate a permutation distribution of the
difference, and make an inference from that, yes? (Although it doesn't
address the MCP.)
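For a single electrode pair, the trial-permutation scheme I have in mind might look like the following sketch (plain NumPy rather than FieldTrip; the PLV-style metric, phase data, and trial counts are all illustrative, and, as said, this does not address the MCP):

```python
import numpy as np

rng = np.random.default_rng(1)

def plv(phase_diff):
    """Phase-locking value over trials for one electrode pair."""
    return np.abs(np.exp(1j * phase_diff).mean())

# Illustrative per-trial phase differences for two conditions:
cond_a = rng.vonmises(mu=0.0, kappa=1.0, size=60)  # phase-locked
cond_b = rng.uniform(-np.pi, np.pi, size=60)       # unlocked

observed = plv(cond_a) - plv(cond_b)

# Permutation distribution: randomly repartition trials over conditions.
pooled = np.concatenate([cond_a, cond_b])
n_a = len(cond_a)
n_perm = 2000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)
    null[i] = plv(perm[:n_a]) - plv(perm[n_a:])

# Monte Carlo p-value, with the observed statistic counted in the
# reference distribution:
p = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(observed, p)
```

The same recipe would apply to a WPLI difference by swapping in that estimator; the key point is that the random partitioning is over trials, not over electrodes or contrasts.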
Thanks for all your help, btw.