calculating frequency interaction (now with the complete message)

Robert Oostenveld r.oostenveld at FCDONDERS.RU.NL
Tue Aug 16 09:29:07 CEST 2005


Hi Thomas,

On 15-aug-2005, at 23:41, Thomas Thesen wrote:

> Rob, your comments regarding the appropriate approach for testing the
> interaction are very insightful. Thank you. Considering that MEG/EEG data
> contain a lot from both of these options, it seems unwise to do this type
> of testing. One couldn't tell whether a different phase relationship or a
> true effect might be responsible for non-linear summation. Or am I wrong
> there?

You always assume the null hypothesis to be true, which here means no
effects and no coherences. Then you test the probability of observing
your data under this hypothesis. If that probability is small, you
conclude that the null hypothesis does not hold. Therefore, knowing
that there are all sorts of complex dynamics in the brain does not
harm the correctness of the inference based on the statistical test,
since the test is based on the absence of the effect. Of course you
should be careful and explicit in phrasing your null hypothesis; my
phrasing here is too vague to operationalize.

> I just saw at a conference a poster where they tried the same thing using
> bootstrapping. The web link above contains a copy of that poster
> "Senkowski...jpg". What do you think about this approach? Could that be
> done using FieldTrip?

The approach could in principle be done in FieldTrip without much
additional programming effort. It slightly resembles the statistical
test based on randomization theory that is implemented in
clusterrandanalysis (see also
http://www2.ru.nl/fcdonders/fieldtrip/uploads/media/RandBioTheory.pdf).
However, the test suggested by Senkowski on the poster that you sent
is not very clear about the hypothesis that is being tested. They
focus on the sensitivity of the test, but not on the validity of
rejecting the null hypothesis.
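
To make the randomization idea concrete, here is a minimal sketch in
Python/NumPy of a label-shuffling test on two sets of trials. It is my
own illustration, not the clusterrandanalysis implementation; the
function name, the simple max-of-mean-difference statistic and the
trial-by-sample data layout are assumptions made only for the example.

    import numpy as np

    def randomization_test(cond_a, cond_b, n_perm=1000, seed=0):
        # cond_a, cond_b: arrays of shape (n_trials, n_samples).
        # Null hypothesis: the trials of both conditions are exchangeable,
        # i.e. they originate from the same distribution.
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([cond_a, cond_b], axis=0)
        n_a = cond_a.shape[0]
        observed = np.abs(cond_a.mean(axis=0) - cond_b.mean(axis=0)).max()
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled.shape[0])
            a, b = pooled[perm[:n_a]], pooled[perm[n_a:]]
            count += np.abs(a.mean(axis=0) - b.mean(axis=0)).max() >= observed
        # Monte-Carlo p-value for a statistic at least this large under
        # random re-assignment of trials to the two conditions.
        return (count + 1) / (n_perm + 1)

The resulting p-value expresses how probable a difference as large as
the observed one is when the trials are randomly re-assigned to the
two conditions, which is exactly the "assume the null hypothesis and
test the probability of the data" logic described above.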

I would rephrase their approach as follows: they assume that the AV
trials originate from the same (unknown) distribution as the manually
constructed A+V trials. They have created NA times NV of these A+V
trials (i.e. every possible combination) and they randomly draw
subsets from this A+V distribution with the same number of trials as
they have in their AV average. The error that they make is that they
do not consider the noise that is present in the measurement. In the
absence of any signal of interest in the EEG, you would still have
(electrode) noise. That noise is present in the AV condition, in the
A condition and in the V condition. If you add an A trial to a V
trial, the noise adds as well. Since the noise here is assumed to be
uncorrelated over trials, the noise in A+V will be sqrt(2) times the
noise in either A or V alone. But the noise in AV is similar to that
in either A or V (since these are all raw trials from the recorded
data), hence the noise in their A+V trials is sqrt(2) times the noise
in the AV trials. Therefore I conclude that there is a trivial
difference between the two sets of data, irrespective of the true
underlying difference, which results in a systematic overestimation
of the A+V power with respect to the AV power.
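
The sqrt(2) argument is easy to verify with a small simulation. The
following Python/NumPy sketch (again my own illustration, with assumed
trial counts and noise level) contains no signal at all, only
independent noise of equal size in the A, V and AV conditions, yet the
manufactured A+V trials come out with twice the single-trial noise
power of the AV trials.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_samples, sigma = 100, 500, 1.0

    # Pure electrode noise, identical in size for all three conditions,
    # and no evoked signal anywhere.
    A  = rng.normal(0, sigma, (n_trials, n_samples))
    V  = rng.normal(0, sigma, (n_trials, n_samples))
    AV = rng.normal(0, sigma, (n_trials, n_samples))

    # Manufactured A+V trials: each one is the sum of an A and a V trial.
    ApV = A + V[rng.permutation(n_trials)]

    print(AV.std())                          # about sigma
    print(ApV.std())                         # about sqrt(2) * sigma
    print((ApV**2).mean() / (AV**2).mean())  # single-trial power ratio, about 2

Because the variances of independent noise sources add, the summed
trials are noisier by construction, and any test on power will be
biased toward A+V > AV even when the null hypothesis is true.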

In their case, the more noise their signals contain, the more likely
it is that they will reject the hypothesis that the A+V and the AV
trials originate from the same distribution. The alternative
hypothesis that they then incorrectly accept is subadditivity, i.e.
that the AV condition has less power than the A+V condition.

best regards,
Robert


