[FieldTrip] Cluster-based permutation tests on time-frequency and size of conditions
Maris, E.G.G. (Eric)
e.maris at donders.ru.nl
Thu Mar 26 12:27:17 CET 2015
I would like to reply to this post by David:
Balanced sample sizes are typically recommended for conventional parametric independent-samples tests (e.g., t-tests, ANOVAs) because balance makes the tests less sensitive to differences in variance between the populations being compared. If the populations differ in variance, having more observations from the population with less variability will make these tests overly permissive (i.e., the true false-positive rate will exceed your nominal alpha level). If you have more observations from the population with greater variability, the tests become overly conservative.
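This effect is easy to reproduce in a small simulation. The sketch below (plain Python/SciPy, not FieldTrip code; all sample sizes, standard deviations, and repetition counts are illustrative) draws both groups from populations with the same mean, so the null hypothesis is true, but gives the larger sample to the less variable population. The pooled-variance t-test should then reject well above the nominal 5% level:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reps, alpha = 2000, 0.05
false_positives = 0
for _ in range(n_reps):
    a = rng.normal(0, 1.0, size=50)  # many observations, small SD
    b = rng.normal(0, 3.0, size=10)  # few observations, large SD
    # pooled-variance (Student) t-test; the pooled variance is dominated
    # by the large, low-variance sample, so the SE is underestimated
    _, p = stats.ttest_ind(a, b, equal_var=True)
    false_positives += p < alpha

rate = false_positives / n_reps
print(f"empirical false-positive rate: {rate:.3f} (nominal {alpha})")
```

Reversing the sample sizes (few observations from the low-variance population) produces the opposite, conservative behavior.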
A few years ago, my colleagues and I simulated some EEG data and found that permutation tests exhibit a qualitatively similar sensitivity to differences in variance between populations (see below). If you're concerned about such a difference in your data, you could do as has already been suggested and use a subset of the data so that the number of observations is the same in both samples. Alternatively, you could use a permutation test based on a variant of the t-statistic that is less sensitive to differences in variance. In our paper below, we investigated two variants, Welch's t and t_dif. Welch's t proved somewhat less sensitive to differences in variance and was only slightly less powerful than the conventional t-statistic. t_dif was markedly insensitive to differences in variance but was considerably less powerful. However, I would guess that using t_dif or Welch's t is likely more powerful than discarding trials (though we didn't investigate that option in the paper).
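As a rough sketch of the suggested alternative, the function below runs a two-sample permutation test that uses Welch's t (one of the variance-robust variants mentioned above) as its test statistic. This is illustrative Python, not FieldTrip code, and the function name, sample sizes, and permutation count are assumptions:

```python
import numpy as np
from scipy import stats

def welch_perm_test(a, b, n_perm=1000, rng=None):
    """Two-sided permutation p-value using Welch's t as the statistic."""
    if rng is None:
        rng = np.random.default_rng()
    pooled = np.concatenate([a, b])
    na = len(a)
    t_obs = stats.ttest_ind(a, b, equal_var=False).statistic
    count = 0
    for _ in range(n_perm):
        # randomly reassign observations to the two conditions
        perm = rng.permutation(pooled)
        t_perm = stats.ttest_ind(perm[:na], perm[na:],
                                 equal_var=False).statistic
        count += abs(t_perm) >= abs(t_obs)
    # +1 counts the observed labeling itself, so p is never exactly 0
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=40)  # condition A
b = rng.normal(1.0, 1.0, size=40)  # condition B: true mean shift of 1
p_val = welch_perm_test(a, b, rng=rng)
print("permutation p-value:", p_val)
```

Swapping `equal_var=False` for `equal_var=True` gives the conventional pooled-variance statistic within the same permutation scheme.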
I agree with David that the main issue with unequal sizes of the experimental conditions is that they reduce your statistical sensitivity (as compared to the situation where the number of subjects is distributed equally over the conditions; the equal-n case). However, one should NEVER remove subjects from one experimental condition in order to obtain this equal-n case, at least not when a permutation test is being used. A permutation test controls the false alarm rate regardless of how the subjects/trials are distributed across the experimental conditions. So, the permutation test is not less sensitive, as mentioned by David, but completely INSENSITIVE to this aspect of your design, at least when it comes to false alarm rate control. However, for every statistical test I know of, its statistical sensitivity (power) IS sensitive (notice the different meaning of the word sensitive in this second occurrence) to how the subjects/trials are distributed over the conditions.
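The false-alarm-rate claim can be illustrated with a small simulation: when the two conditions are drawn from the same distribution (so exchangeability holds), the permutation test rejects at roughly the nominal rate even with a strongly unbalanced design. This is a hypothetical sketch in plain Python; all sample sizes and repetition counts are illustrative:

```python
import numpy as np

def perm_pvalue(a, b, n_perm, rng):
    """Permutation p-value for the pooled-variance t statistic."""
    def tstat(x, y):
        sp2 = (((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1))
               / (len(x) + len(y) - 2))
        return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / len(x) + 1 / len(y)))

    pooled = np.concatenate([a, b])
    na = len(a)
    t_obs = tstat(a, b)
    count = sum(abs(tstat((p := rng.permutation(pooled))[:na], p[na:]))
                >= abs(t_obs) for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
alpha, reps = 0.05, 400
# unbalanced design (n = 40 vs n = 10), identical null distributions
hits = sum(perm_pvalue(rng.normal(size=40), rng.normal(size=10), 200, rng) < alpha
           for _ in range(reps))
rate = hits / reps
print(f"false-alarm rate with n=40 vs n=10: {rate:.3f} (nominal {alpha})")
```

Note that this guarantee concerns false-alarm-rate control under exchangeability; it does not contradict David's observation about sensitivity to variance differences between the populations.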