[FieldTrip] bias in cluster stats from density of frequency sampling?
nick.ketz at gmail.com
Fri Aug 18 23:40:11 CEST 2017
I've recently gotten into an internal debate about what the optimal
frequency sampling (using wavelet decomposition) would be for a given range
of frequencies, and it has given rise to a few points of discussion.
Overall, my general question is this:
Is it possible to bias the cluster-based permutation statistics by
oversampling the frequency domain?
Consider the case where you have two true signals, one at 2Hz and one at
10Hz. Now, in an absurd scenario, you sample every 1Hz in the delta band
(i.e. 1, 2 and 3Hz), and you sample every 0.1Hz in alpha (i.e. 8:.1:12 =
41 bins). If you then do a cluster-based analysis, are you more likely
to find the 10Hz signal because you have 10 frequency bins within that
range (10.0 to 10.9Hz), compared to the 2Hz signal for which you only have
one bin? My intuition is that, yes, you would be biased, because the
t-values from 10.0 to 10.9Hz would get grouped and summed together to
characterize the 10Hz cluster, while only the single 2Hz bin's t-value
would characterize the 2Hz signal.
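To make that intuition concrete, here is a minimal simulation sketch in plain NumPy (not FieldTrip's actual cluster routine; all names, grids, and effect sizes are hypothetical). Two effects of identical per-bin size are placed on the two grids from the example above, and the cluster mass (sum of t-values over the bins carrying the effect) is compared:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sampling grids from the example:
# delta sampled every 1 Hz, alpha oversampled every 0.1 Hz.
delta_freqs = np.arange(1.0, 4.0, 1.0)     # 1, 2, 3 Hz -> 3 bins
alpha_freqs = np.arange(8.0, 12.01, 0.1)   # 8.0 ... 12.0 -> 41 bins

n_subj = 20
effect = 0.5  # identical true effect size at 2 Hz and at 10 Hz


def simulate_tvals(freqs, effect_freq, halfwidth=0.55):
    """One-sample t-values per frequency bin for a simulated effect.

    The same effect is added to every bin within +/- halfwidth of the
    true frequency, mimicking spectral smearing of a single source.
    """
    data = rng.standard_normal((n_subj, len(freqs)))
    mask = np.abs(freqs - effect_freq) < halfwidth
    data[:, mask] += effect
    m = data.mean(axis=0)
    s = data.std(axis=0, ddof=1)
    return m / (s / np.sqrt(n_subj)), mask


t_delta, mask_d = simulate_tvals(delta_freqs, 2.0)   # 1 bin carries the effect
t_alpha, mask_a = simulate_tvals(alpha_freqs, 10.0)  # 11 bins carry the effect

# Cluster mass = summed t-values over the contributing bins.
mass_delta = t_delta[mask_d].sum()
mass_alpha = t_alpha[mask_a].sum()

print(mask_d.sum(), mask_a.sum())
print(mass_delta, mass_alpha)
```

Even though the per-bin effect is identical, the alpha cluster's mass is roughly an order of magnitude larger simply because more bins get summed, which is exactly the asymmetry the question is about.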
This question arises from a sampling strategy that seems optimal and
least biased to me, which is to space your frequency bins at intervals
related to the uncertainty in the frequency estimate (i.e.
frequency/wavelet-width). Using this strategy you end up with far fewer
bins in the mid to higher frequency bands, beta and gamma, and many more
in the lower bands (things get more complicated when your wavelet widths
are frequency dependent). Will this bias cluster-based analyses against
finding clusters in these higher frequencies, compared to a linear
sampling scheme?
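For reference, one way to build such a bandwidth-matched grid (a sketch under the assumption of a fixed m-cycle Morlet wavelet, whose spectral standard deviation is sigma_f = f/m; the helper name is made up):

```python
import numpy as np


def bandwidth_spaced_freqs(fmin, fmax, cycles=7.0, step_frac=1.0):
    """Frequency grid spaced by the wavelet's spectral bandwidth.

    For an m-cycle Morlet wavelet, sigma_f = f / m, so stepping by
    step_frac * sigma_f gives a geometric (log-spaced) grid:
        f_next = f * (1 + step_frac / m)
    i.e. dense bins at low frequencies, sparse bins at high ones.
    """
    ratio = 1.0 + step_frac / cycles
    freqs = [fmin]
    while freqs[-1] * ratio <= fmax:
        freqs.append(freqs[-1] * ratio)
    return np.array(freqs)


freqs = bandwidth_spaced_freqs(1.0, 100.0, cycles=7.0)
print(len(freqs))
print(freqs[:5])
```

The bin spacing grows in proportion to frequency, so a 1-100 Hz range ends up with only a few dozen bins, most of them below 20 Hz, which is what raises the question of whether the sparser high-frequency coverage penalizes clusters there.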