[FieldTrip] equal number of trials across conditions

Andre Cravo andrecravo at gmail.com
Mon May 21 17:08:31 CEST 2012


Hi Vitória,

Although random sampling is a good approach, having a different number
of trials across conditions is not a big problem, especially in ERP
studies.

If you are just going to run classical analyses (mean, grand mean, and
statistics on the means), it might be better to simply keep the
different numbers of trials. There was a similar discussion on the
EEGLAB list, which might be worth reading for those interested (
http://sccn.ucsd.edu/pipermail/eeglablist/2010/003240.html)

There is also a nice essay by Steve Luck on this topic, which can be
found on his website: http://erpinfo.org/Members/sjluck
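
If you do decide to equate the trial counts by random subsampling, a
minimal sketch in FieldTrip/MATLAB could look like this (untested, and
dataA/dataB stand for hypothetical outputs of ft_preprocessing for the
two conditions, with A the larger one):

% number of trials per condition
nA = numel(dataA.trial);
nB = numel(dataB.trial);

% draw a random subset of A's trials, matched in size to B
sel        = randperm(nA);
cfg        = [];
cfg.trials = sort(sel(1:nB));
erpA       = ft_timelockanalysis(cfg, dataA);

% average condition B as usual
cfg  = [];
erpB = ft_timelockanalysis(cfg, dataB);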

Best,

-- 
Andre M. Cravo
Postdoctoral Researcher
University of Sao Paulo, Brazil



On 21 May 2012 06:19, Gio Piantoni <g.piantoni at nin.knaw.nl> wrote:

> Hi Vitória,
>
> I like the intuitive appeal of your approach of keeping only the
> "most representative" trials. However, I have serious concerns that
> it might not be valid.
>
> If you reject only the noisiest trials from condition A, you are
> applying an extra preprocessing step to one condition only, which
> makes the comparison between conditions A and B hard to interpret:
> you cannot tell whether a difference between A and B reflects a real
> difference between the experimental conditions or merely the extra
> preprocessing step.
> More critically, if you reject noisy trials only in condition A, you
> will systematically shrink the variance of A relative to B, thereby
> introducing heteroscedasticity into your data; this violates one of
> the assumptions of parametric testing.
>
> I agree with Arjen's suggestion to randomly subsample the trials of
> condition A. Depending on what you want to do next, you can even get
> standard errors from this randomization, similar to the bootstrap
> approach (see the sketch below). I find this a very elegant way to
> deal with very unequal numbers of trials.
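>
> For instance, a rough and untested sketch (dataA and dataB being
> hypothetical ft_preprocessing outputs, with B the smaller condition,
> and looking at a single example channel):
>
> nA   = numel(dataA.trial);
> nB   = numel(dataB.trial);        % size to match
> nrep = 100;                       % number of random subsamples
> nsmp = numel(dataA.time{1});      % assumes equal-length trials
> chan = 1;                         % example channel index
> avgs = zeros(nrep, nsmp);
> for i = 1:nrep
>   sel        = randperm(nA);
>   cfg        = [];
>   cfg.trials = sort(sel(1:nB));   % random subset of A, same size as B
>   tl         = ft_timelockanalysis(cfg, dataA);
>   avgs(i,:)  = tl.avg(chan,:);
> end
> erpA = mean(avgs, 1);             % subsample-averaged ERP of A
> seA  = std(avgs, 0, 1);           % spread across subsamples, akin to
>                                   % a bootstrap standard error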
>
> Hope this helps,
>
> Best,
>
> Gio
>
> --
> Giovanni Piantoni, MSc
> Dept. Sleep & Cognition
> Netherlands Institute for Neuroscience
> Meibergdreef 47
> 1105 BA Amsterdam (NL)
>
> +31 20 5665492
> gio at gpiantoni.com
> www.gpiantoni.com
>
>
> On Sun, May 20, 2012 at 11:53 AM, Vitória Magalhães Piai
> <vitoria.piai at gmail.com> wrote:
> > Hi everyone,
> >
> > I'm working on a dataset (ERPs, but this is not that relevant to
> > the question, I believe) in which one condition elicited more
> > errors than the other.
> >
> > I'd like both conditions to have the same number of trials in the
> > analyses. Ideally, I would throw away the noisiest trials from one
> > condition, rather than throwing away trials at random.
> >
> > I was thinking of using z-scores for that, but I was wondering
> > whether any of you have done this before and how. What would be the
> > best way to go? Take the mean amplitude across all trials (collapsed
> > over condition or not?), calculate the z-score for each trial
> > individually, and then take out the ones with the largest scores?
> > How does this approach sound?
> > Does FT keep information about the variance of each trial somewhere
> > in the output of an artefact rejection function? Or would I have to
> > compute that myself?
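> >
> > Concretely, I was imagining something along these lines (just an
> > untested sketch, with "data" one preprocessed data structure from
> > ft_preprocessing):
> >
> > nt = numel(data.trial);
> > v  = zeros(nt, 1);
> > for i = 1:nt
> >   % variance over time, averaged over channels, for each trial
> >   v(i) = mean(var(data.trial{i}, 0, 2));
> > end
> > z = (v - mean(v)) / std(v);          % z-score per trial
> > [~, noisiest] = sort(z, 'descend');  % noisiest trials first
> >
> > and then dropping trials from the top of that list.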
> >
> > I'd appreciate any suggestions or feedback.
> >
> > Cheers, Vitória