[FieldTrip] Interactions

Eric Maris e.maris at psych.ru.nl
Tue Feb 4 12:16:34 CET 2014


Hi Josh,

 

You may have found something that is worth exploring. However, it will be a
challenge to provide a formal proof that the permutation analysis controls
the false alarm rate under the null hypothesis of no interaction effect. The
crucial design here is the fully between-subjects design, because for the
other designs a valid permutation analysis exists (as demonstrated by formal
proof).

 

I would embark on a testing-the-limits simulation study:

 

between/between. Normally-distributed data in each of the 4 cells. The effect
sizes for e1 and e2 were each set to 1, with an SD of 1. The interaction (if
present) was .75 with an SD of 1. There were 40 subjects per cell.

 

Set the effect sizes to 20, reduce the number of subjects to 5 per cell,
and simulate data without an interaction effect. I'm curious what the
simulated false alarm rate of the permutation test looks like.

 

(After this email, I will not continue this discussion any further. It is
becoming a scientific project; interesting, though.)

 

Best,

 

Eric

within/within. For each participant, I generated a random intercept (M=0,
SD=.25) and random slopes for both e1 and e2 (M=1, SD=.25) and for the
interaction, if present (M=.75, SD=.25). Having generated those, for each
participant in each condition I drew a single sample whose mean was the sum
of the effects just listed, with SD=1.
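
For concreteness, a minimal R sketch of this within/within generative model
(illustrative only; names such as gen_ww are mine, and the number of subjects
is an assumption since it is not stated for this design):

gen_ww <- function(n_subj = 40, interaction = TRUE) {
  ## one row per subject per cell of the 2x2 within/within design
  d <- expand.grid(subj = seq_len(n_subj), f1 = 0:1, f2 = 0:1)
  b0  <- rnorm(n_subj, mean = 0, sd = .25)    # random intercepts
  b1  <- rnorm(n_subj, mean = 1, sd = .25)    # random slopes for e1
  b2  <- rnorm(n_subj, mean = 1, sd = .25)    # random slopes for e2
  b12 <- if (interaction) rnorm(n_subj, mean = .75, sd = .25) else rep(0, n_subj)
  mu  <- b0[d$subj] + b1[d$subj] * d$f1 + b2[d$subj] * d$f2 +
         b12[d$subj] * d$f1 * d$f2
  d$y <- rnorm(nrow(d), mean = mu, sd = 1)    # one sample per subject per cell
  d
}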

 

within/between. The first factor was within subjects and the second was
between subjects. Each subject had a random intercept (SD=.25) and a random
slope for factor 1 (M=1, SD=.25). Subjects at level 1 of both factors
additionally had a random intercept (M=.75, SD=.25). The effect of the
between-subjects factor was 1. As above, I then generated a single datapoint
from a normal distribution (M=1, SD=1).

 

In each case, I ran 500 simulations with an interaction and 500 without.
I analyzed each dataset both with an ANOVA (ezANOVA in R) and with a
500-sample permutation test, as follows. Permutations respected the
structure of the data: in the between/between case, condition labels were
permuted freely. In the within/within case, for each subject I randomly
flipped the levels of each factor while preserving the structure. That is,
each subject had two cells where factor 1 was 0 and two where factor 1 was
1; if the codes switched, both 1s were turned to 0s and both 0s were turned
to 1s. The same was done for factor 2. I dealt with the within/between data
in an analogous fashion, with the constraint that the same number of
subjects remain in each of the between-subject conditions. Having done my
permuting, I calculated the F-statistic for the interaction and compared
the actual F-statistic against the resulting permutation distribution.
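
A minimal R sketch of this structure-respecting permutation for the
within/within case (again illustrative, not the original script; aov() stands
in here for ezANOVA to obtain the interaction F):

interaction_F <- function(d) {
  d$subj <- factor(d$subj); d$f1 <- factor(d$f1); d$f2 <- factor(d$f2)
  s <- summary(aov(y ~ f1 * f2 + Error(subj / (f1 * f2)), data = d))
  s[[length(s)]][[1]][1, "F value"]   # F for f1:f2 from its own error stratum
}

flip_codes <- function(d) {
  ## per subject, each factor's 0/1 codes are either kept or flipped as a whole
  for (s in unique(d$subj)) {
    rows <- d$subj == s
    if (runif(1) < .5) d$f1[rows] <- 1 - d$f1[rows]
    if (runif(1) < .5) d$f2[rows] <- 1 - d$f2[rows]
  }
  d
}

perm_p_interaction <- function(d, nperm = 500) {
  f_obs  <- interaction_F(d)
  f_perm <- replicate(nperm, interaction_F(flip_codes(d)))
  mean(f_perm >= f_obs)   # permutation p-value for the interaction
}
# p <- perm_p_interaction(gen_ww(interaction = FALSE))  # gen_ww from sketch above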

 

The short version is that I got essentially the same results from the
permutation tests and the ANOVA. For instance, in the within/within case,
when there was an actual interaction in the generative model, I got an
average p-value of .0844 using ezANOVA and .0858 using permutations. The
Type II error was .318 and .312, respectively. When there was no
interaction, I got average p-values of .5052 and .5066, respectively, and a
Type I error of .036 and .038, respectively. I got analogous results for
between/between and within/between.

 

Incidentally, I understand that it isn't strictly necessary to permute
condition codes for both factors. But it doesn't seem to do any harm,
either. I actually tried the within/between case permuting only the
between factor, with similar results. 

 

Thanks,

Josh

 

 

On Sun, Jan 26, 2014 at 4:44 AM, <fieldtrip-request at science.ru.nl> wrote:

Hi Steve and Josh,


Josh writes

> > labels. I'm sure there's a proof somewhere for why this doesn't work,
> > and it would be great to see it.

In general, questions like these are very hard to answer satisfactorily on
a discussion list. They are dealt with much more easily in person, say at
one of the FieldTrip courses. However, let me give it a try.

To prove that something does not work, it suffices to produce a single
counterexample.

Try the following:

Generate random data in a 2-by-2 between-subjects design (say, normally
distributed within every cell). Add large main effects (relative to the
within-cell variance; say, MS_between 50 times larger than MS_within) and
no interaction effect. Take a small number of subjects (say, 5 per cell).
Now, calculate a permutation p-value for the interaction-effect F-statistic
by permuting across all 4 cells. Do this for a large number of simulated
data sets. My prediction is that, on average, the F-statistic p-value is
less than 0.05, which it should not be (because there is no interaction
effect).

I have not run this simulation study myself. Let me know if it does not
produce the predicted result. (I cannot guarantee that I'm not missing
something when producing this recipe.)



Best,

Eric






> -----Original Message-----
> From: Stephen Politzer-Ahles [mailto:politzerahless at gmail.com]
> Sent: zondag 26 januari 2014 8:25
> To: fieldtrip at science.ru.nl
> Subject: Re: [FieldTrip] interactions
>
> Hi Josh,
>
> Have you seen this [admittedly pretty old now] message from the archives:
> http://mailman.science.ru.nl/pipermail/fieldtrip/2011-January/003447.html ?
> My understanding was that it is ok to test interactions in within-subjects
> designs, and that you could do it by faking a dataset that represents the
> interaction (step 3 in that message) and then doing a dependent-samples
> t-test. I had never heard before that interactions can't be tested in a
> within-subjects design, but also it's been a long time since I've looked at
> this issue--I'd definitely be interested to hear if this is no longer the
> recommended way to test interactions. I have seen messages saying that it
> doesn't work for between-subjects designs (e.g.
> http://mailman.science.ru.nl/pipermail/fieldtrip/2011-September/004244.html),
> but I'm not sure if that's still current. Hopefully someone on the list can
> offer more insight about the second question.
>
> Best,
> Steve
>
> >
> > Message: 2
> > Date: Fri, 24 Jan 2014 10:54:10 -0500
> > From: Joshua Hartshorne <jkhartshorne at gmail.com>
> > To: fieldtrip at science.ru.nl
> > Subject: [FieldTrip] interactions
> > Message-ID:
> >
> > <CA+3amhe+x4+TNUY1tf0aXe-cf-AB1kTE+ZHTpuRJxNQ=bNioUQ at mail.gmail.com>
> > Content-Type: text/plain; charset="iso-8859-1"
> >
> > Hi List!
> >
> > I have seen around a dozen comments in the archives that interactions
> > can't be tested by permutation for within-subject designs. I haven't
> > been able to find a thread that explains why not. It seems like in a
> > 2x2 design, you could still pick one of the conditions and permute the
> > labels. I'm sure there's a proof somewhere for why this doesn't work,
> > and it would be great to see it.
> >
> > Similarly, for the mixed design, why permute the between-subject labels?
> > Why not permute the within-subject labels instead? Actually, why not
> > do both? I follow the reasoning why permuting both is overkill, but
> > not why it's wrong.
> >
> > If someone could explain, it would be much appreciated. Knowing what
> > to do is good, but it would be even better to understand why.
> >
> > Thanks,
> > Josh



------------------------------

Message: 2
Date: Sun, 26 Jan 2014 10:43:58 +0100
From: Azeez Adebimpe <ayobimpe2004 at hotmail.com>
To: FieldTrip discussion list <fieldtrip at science.ru.nl>
Subject: Re: [FieldTrip] Urgent: Error in Source Statistics, Group level
Message-ID: <DUB111-W130FC2F5C9CE7B2035F0BE4CDA30 at phx.gbl>
Content-Type: text/plain; charset="iso-8859-1"

Hi Chaitanya,
I would suggest you try cfg.method = 'analytic' instead of 'montecarlo', and
call stat = ft_sourcestatistics(cfg, source1a, source2a, ...,
source1b, source2b, ...); a and b stand for the two conditions.
Azeez Adebimpe


Date: Sun, 26 Jan 2014 09:46:03 +0100
From: chaitanya.pro at gmail.com
To: fieldtrip at science.ru.nl
Subject: Re: [FieldTrip] Urgent: Error in Source Statistics, Group level

Hi Eelke,

No significant results in my data, then. I wonder how my boss will take it :P.
Anyway, thanks for your help, and on a Sunday too.
From your reply I also understand that the code doesn't have any mistakes :)

===============================================
Best Regards
Chaitanya Srinivas Lanka

Wiss. Mitarbeiter, Functional and Restorative Neurosurgery
Neurosurgical University Hospital, Eberhard Karls University
Otfried-Mueller-Str. 45, D-72076 Tuebingen

PhD Student, Neural Information Processing
Graduate Training Center for Neuroscience, Eberhard Karls University
Österbergstr. 3, D-72074 Tuebingen

Mobile Phone Number: +49-176-79035731
===============================================





On Sun, Jan 26, 2014 at 9:40 AM, Eelke Spaak <eelke.spaak at donders.ru.nl>
wrote:

Hi Chaitanya,
stat.prob reflects the p-values resulting from your statistical test. So
voxels with, e.g., stat.prob < 0.05 can be considered to reflect a
significant difference between conditions. The NaNs correspond to voxels
outside the brain.


Since stat.mask is all zeros (which by default is just stat.prob < 0.05),
this indicates there are no significant differences between your
conditions. There is nothing we can help you with in this respect :)


Best,
Eelke

On 26 January 2014 09:06, Chaitanya Srinivas <chaitanya.pro at gmail.com>
wrote:


Hi Eelke,

I looked at the stat.stat values, if that is what you mean. There are some
NaNs, but also some values. Similarly, in stat.prob there are some 1s. The
stat.mask is all zeros, as you say.




Any further suggestions from you?
Thank you

On Sun, Jan 26, 2014 at 8:53 AM, Eelke Spaak <eelke.spaak at donders.ru.nl>
wrote:



Dear Chaitanya,
Perhaps an obvious question: do you find any significant differences in
the statistics step (inspect the stat structure)? If not, the mask will
consist of all zeroes, hence giving you a 'blank' plot.




Best,
Eelke

On 26 January 2014 08:46, Chaitanya Srinivas <chaitanya.pro at gmail.com>
wrote:




Dear FieldTrip users,
I would like to do source statistics on a group level with EEG data. I have
a pre- and a post-intervention measurement for each of my 10 subjects. After
source reconstruction using a DICS beamformer and volume normalization, I
calculated the source grand average for the pre and post conditions, and I
have avg.pow for each subject.

However, when I use the grand-average results in ft_sourcestatistics with
the configuration shown below and plot the result, I just get a blank
anatomical MRI. It only runs with cfg.parameter = 'pow'; when I try
cfg.parameter = 'avg.pow' it doesn't run.
Do I have to set any additional parameters, or am I making some mistake?


cfg = [];
cfg.dim              = grandAVGsourcePre.dim;
cfg.method           = 'montecarlo';
cfg.statistic        = 'depsamplesT';
cfg.parameter        = 'pow';
cfg.correctm         = 'cluster';
cfg.numrandomization = 1000;
cfg.alpha            = 0.05;
cfg.tail             = 0;

nsubj           = length(sourcePre.trial);
cfg.design(1,:) = [1:nsubj 1:nsubj];               % subject number
cfg.design(2,:) = [ones(1,nsubj) ones(1,nsubj)*2]; % condition: 1 = pre, 2 = post
cfg.uvar        = 1;                               % row of the design containing the unit variable
cfg.ivar        = 2;                               % row of the design containing the independent variable
stat = ft_sourcestatistics(cfg, grandAVGsourcePre, grandAVGsourcePost);
And next the interpolation:

cfg              = [];
cfg.voxelcoord   = 'no';
cfg.parameter    = 'mask';
cfg.interpmethod = 'nearest';
cfg.coordsys     = 'mni';

mask          = ft_sourceinterpolate(cfg, stat, mri);
statplot.mask = mask.mask;


And then for plotting:

cfg               = [];
cfg.method        = 'slice';
cfg.funparameter  = 'stat';
cfg.maskparameter = 'mask';
cfg.funcolorlim   = [-0.1 0.1];
cfg.opacitylim    = [-0.1 0.1];
figure
ft_sourceplot(cfg, statplot);









_______________________________________________
fieldtrip mailing list
fieldtrip at donders.ru.nl
http://mailman.science.ru.nl/mailman/listinfo/fieldtrip


End of fieldtrip Digest, Vol 38, Issue 49
*****************************************

 
