[FieldTrip] equal number of trials across conditions

Gio Piantoni g.piantoni at nin.knaw.nl
Mon May 21 11:19:12 CEST 2012


Hi Vitória,

I like the intuitive appeal of your approach of keeping only the
"most representative" trials. However, I have serious concerns that
it might not be valid.

If you reject only the noisiest trials from condition A, you are in
effect applying an extra preprocessing step to that condition alone,
and this makes the comparison between conditions A and B hard to
interpret: you cannot tell whether a difference between A and B
reflects a real difference between the experimental conditions or
just the extra preprocessing step.
More critically, if you reject noisy trials only in condition A, you
will systematically reduce the variance of condition A relative to
condition B, i.e. you introduce heteroscedasticity into your data;
this violates one of the assumptions of parametric testing, namely
equal variances across conditions.

I agree with Arjen's suggestion to randomly subsample the trials of
condition A. Depending on what you want to do next, you can even get
standard errors from this randomization, similarly to the bootstrap
approach. I find this a very elegant way to deal with very unequal
numbers of trials.
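
For example, something along these lines (an untested sketch; dataA
and dataB are my placeholder names for your preprocessed single-trial
data structures, with dataA the condition that has more trials):

  nA = numel(dataA.trial);   % condition with more trials
  nB = numel(dataB.trial);   % condition with fewer trials

  % draw a random subset of condition-A trials, matched in number to B
  idx        = randperm(nA);
  cfg        = [];
  cfg.trials = idx(1:nB);
  avgA       = ft_timelockanalysis(cfg, dataA);

  cfg  = [];
  avgB = ft_timelockanalysis(cfg, dataB);

  % repeating the subsampling gives a bootstrap-like estimate of how
  % much the condition-A average depends on the particular subset
  nrand  = 100;
  allavg = cell(1, nrand);
  for i = 1:nrand
    idx        = randperm(nA);
    cfg        = [];
    cfg.trials = idx(1:nB);
    allavg{i}  = ft_timelockanalysis(cfg, dataA);
  end
  avgs  = cellfun(@(t) t.avg, allavg, 'UniformOutput', false);
  avgse = std(cat(3, avgs{:}), 0, 3);   % chan x time spread across subsamples

Note that cfg.trials is accepted by ft_timelockanalysis and several
other FieldTrip functions that take single-trial data, so you can
apply the same selection at whichever stage suits your pipeline.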

Hope this helps,

Best,

Gio

-- 
Giovanni Piantoni, MSc
Dept. Sleep & Cognition
Netherlands Institute for Neuroscience
Meibergdreef 47
1105 BA Amsterdam (NL)

+31 20 5665492
gio at gpiantoni.com
www.gpiantoni.com


On Sun, May 20, 2012 at 11:53 AM, Vitória Magalhães Piai
<vitoria.piai at gmail.com> wrote:
> Hi everyone,
>
> I'm working on a dataset (ERPs, but this is not that relevant for the
> question, I believe) for which one condition elicited more errors than the
> other.
>
> I'd like to have both conditions with the same number of trials in the
> analyses.
> Ideally, I'd throw away the noisiest trials from one condition, instead of
> just throwing away trials at random.
>
> I was thinking of using z-scores for that, but I was wondering whether any
> of you have done this before, and how. What would be the best way to go?
> Take the mean amplitude across all trials (collapsed over condition or
> not?), calculate the z-score for each trial individually, and then take out
> the ones with the largest scores? How does this approach sound?
> Does FT keep information about the variance for each trial somewhere in the
> output of an artefact rejection function? Or would I have to compute that
> myself?
>
> I'd appreciate any suggestions or feedback.
>
> Cheers, Vitória