[FieldTrip] Questions relating to MEG nonparametric testing and uneven trial numbers

Heng-Ru May Tan Heng-RuMay.Tan at glasgow.ac.uk
Sat Oct 12 00:38:15 CEST 2013


Hi there,
I am resending my question below to the FieldTrip discussion list. Many 
thanks in advance!
> On 11/10/2013 20:10, Eric Maris wrote:
>> Hi May,
>>
>> Could you post your question on the FieldTrip discussion list? That way, many more people can participate in the discussion. Moreover, the discussion will be archived, allowing people with the same question to inform themselves.
>>
>> Best,
>>
>> Eric

I would like some insight into the non-parametric statistical testing I 
am planning to do on MEG data at the source level.

For the most part, I have a good grasp of the permutation strategy (the 
concept of random partitioning) described in Maris & Oostenveld (2007), 
which is implemented in FieldTrip.

I do, however, have a concern about differences in trial numbers between 
the subject groups when running permutation statistics.
After preprocessing, artefact removal, etc., the data look roughly as 
follows:

2 subject groups: Controls (C) and Patients (P).
For a particular experimental condition (A), on average, subjects within 
each group have the following number of trials:
condA_Ntrials_C  ~= 100
condA_Ntrials_P  ~=  80

I plan to perform a two-step statistical test for differences between 
the C and P groups in a particular experimental condition: a first-level 
within-subject t-test (active vs. baseline) for each subject, yielding 
subject-specific t-statistics, which are then entered into a second-level 
nonparametric between-group test (see the sketch just below).
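
For concreteness, this is roughly what I have in mind for the 
second-level step (only a minimal sketch; srcC/srcP are placeholder 
names for the per-subject source structures holding the first-level 
t-maps in a 'stat' field):

cfg                  = [];
cfg.parameter        = 'stat';            % 1st-level t-maps as input
cfg.method           = 'montecarlo';
cfg.statistic        = 'indepsamplesT';   % between-group (C vs P) contrast
cfg.correctm         = 'cluster';
cfg.clusteralpha     = 0.05;
cfg.numrandomization = 1000;
nC = numel(srcC); nP = numel(srcP);
cfg.design           = [ones(1,nC) 2*ones(1,nP)];   % row 1: group membership
cfg.ivar             = 1;                            % independent variable = group
stat = ft_sourcestatistics(cfg, srcC{:}, srcP{:});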

Question 1) Does it matter that the average number of trials per subject 
differs between the groups? Should I try to equalize the trial numbers, 
e.g. by randomly removing trials before computing any statistics (a 
sketch of what I mean follows below)? Is this advisable?
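
If equalizing is advisable, I assume something along these lines, run 
per subject before the first-level test, would do (a sketch only; 
ntarget and 'data' are placeholders):

ntarget    = 60;                              % target trial count, e.g. minimum across subjects
ntrl       = numel(data.trial);               % 'data' = one subject's preprocessed raw data
cfg        = [];
cfg.trials = sort(randperm(ntrl, ntarget));   % random subset without replacement
data_eq    = ft_selectdata(cfg, data);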

Question 2) Minimum trials required for source-level significance?
The supplementary information accompanying the 2007 paper illustrated 
the minimum number of trials required for obtaining a significant effect 
in a sensor-level analysis, given some threshold (e.g. cluster > 250 
sensor-time pairs).
Has anyone systematically shown the minimum number of trials required to 
obtain significant clusters/FDR statistics in a source-level analysis?
If not, is there a sensible way to find out, presumably involving some 
form of subsampling/bootstrapping and simulation? A rough sketch of what 
I have in mind follows below.
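
If nothing systematic exists, I imagine one could probe it empirically 
along these lines (a rough sketch; run_first_and_second_level is a 
hypothetical wrapper around my own 1st- and 2nd-level steps): for each 
candidate trial count, repeatedly subsample that many trials per 
subject, rerun the whole pipeline, and record how often a significant 
cluster turns up.

trialcounts = 20:10:80;                 % candidate trials-per-subject (illustrative)
nrep        = 20;                       % repetitions per trial count
hitrate     = zeros(size(trialcounts));
for i = 1:numel(trialcounts)
  nhit = 0;
  for r = 1:nrep
    stat = run_first_and_second_level(alldata, trialcounts(i));  % hypothetical wrapper
    if any(stat.mask(:))                % significant cluster found in this repetition?
      nhit = nhit + 1;
    end
  end
  hitrate(i) = nhit / nrep;             % proportion of repetitions with a significant effect
end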

This relates to another issue I have regarding a 'within-subject' 
comparison of two experimental conditions A and B, where there are 
average trial number differences both within and between subject groups:
condA_Ntrials_C ~= 100    condB_Ntrials_C ~= 60  (min. = 35)
condA_Ntrials_P ~=  80    condB_Ntrials_P ~= 50  (min. = 35)

A similar concern about uneven trial numbers arises if, say, I wish to 
perform a within-subject comparison between the two experimental 
conditions; a sketch of how I would set that up follows below.
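
For that within-subject contrast I would have expected a paired setup 
along these lines (again only a sketch; srcA/srcB are placeholder names 
for the per-subject source estimates of conditions A and B, and 
cfg.parameter depends on what the first level produces):

nsubj                = numel(srcA);
cfg                  = [];
cfg.parameter        = 'pow';             % or 'stat', depending on the 1st-level output
cfg.method           = 'montecarlo';
cfg.statistic        = 'depsamplesT';     % paired, within-subject A-vs-B contrast
cfg.correctm         = 'cluster';
cfg.numrandomization = 1000;
cfg.design           = [ones(1,nsubj) 2*ones(1,nsubj);   % row 1: condition (ivar)
                        1:nsubj        1:nsubj       ];  % row 2: subject   (uvar)
cfg.ivar             = 1;
cfg.uvar             = 2;
stat = ft_sourcestatistics(cfg, srcA{:}, srcB{:});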

In general: would the intended two-step statistical test (as described 
above) be appropriate, or would it be best to control for an 'equal' 
number of trials across all subjects and conditions of interest?

I would really appreciate it if someone could kindly comment or offer 
advice where appropriate.

Thank you very much in advance for your time.

Yours sincerely,
May


-- 

Heng-Ru/May/ Tan

Institute of Neuroscience and Psychology (INP) ▫ Centre for Cognitive 
Neuroimaging (CCNi)▫ University of Glasgow

58 Hillhead Street, Glasgow G12 8QB▫ +44 (0)141-330-5090▫ 
Heng-RuMay.Tan at glasgow.ac.uk
