[FieldTrip] Spike-Field-PLV z-scoring and comparison between three conditions of unequal sample sizes
Hähnke, Daniel
daniel.haehnke at tum.de
Wed Nov 9 14:30:04 CET 2016
Hi Florian,
thanks for your reply! No, I haven't had any other replies yet.
You’re right, it’s important to form a null hypothesis before doing statistical tests.
Explicitly, my null hypothesis is that the PLV is the same for all conditions. For that I’d need to shuffle the condition labels of the spike phases across conditions.
The z-scoring was meant as a normalisation of the spike-LFP combinations, so that I can pool combinations by averaging. Of course, the trial-association shuffling implicitly also tests the null hypothesis of zero PLV.
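In MATLAB terms, the label shuffling I have in mind would look roughly like the following sketch, here for two of the conditions (phases, condlabel and nPerm are placeholder names, not code I already have):

% Sketch of a condition-label permutation test for the null hypothesis
% that the PLV is equal across conditions (placeholders: 'phases' = spike
% phases in radians pooled over conditions, 'condlabel' = condition index
% of each spike, nPerm = number of permutations)
plv      = @(ph) abs(mean(exp(1i*ph)));                 % phase-locking value
obsDiff  = plv(phases(condlabel==1)) - plv(phases(condlabel==2));
nullDiff = nan(nPerm,1);
for k = 1:nPerm
  lab         = condlabel(randperm(numel(condlabel)));  % shuffle labels across conditions
  nullDiff(k) = plv(phases(lab==1)) - plv(phases(lab==2));
end
p = mean(abs(nullDiff) >= abs(obsDiff));                % two-sided p-value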
Best,
Daniel
On 9 Nov 2016, at 01:51, Florian Gerard-Mercier <florian at brain.riken.jp> wrote:
Dear Daniel,
Did you get an answer already?
I’m just wondering, what is the null hypothesis?
Do you want to see if individual PLV values are significantly different from the null value, or do you want to compare PLV values between conditions?
Best,
Florian Gerard-Mercier
Lab. for Cognitive Brain Mapping
RIKEN Brain Science Institute
2-1 Hirosawa, Wako, Saitama
351-0198, Japan
Tel: 048 462 1111 and 7106
Mob: 080 3213 6851
On 7 Nov, 2016, at 10:49 PM, Hähnke, Daniel <daniel.haehnke at tum.de> wrote:
Dear FT community,
I’m currently working on spike and LFP data from a behavioural experiment that contained three different stimulus conditions. The conditions were unequally distributed across trials: condition A occurred in 60% of trials, and conditions B and C each occurred in 20% of trials.
I want to compare the spike-field PLV between the conditions using a z-scoring approach similar to Buschman et al. 2012, Neuron (http://download.cell.com/neuron/pdf/PIIS0896627312008823.pdf). In that paper they shuffle the trial associations between spike trials and LFP trials to generate a null distribution from which they compute the z-score.
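As I understand it, the core of that approach is something like the following sketch (this is just my reading of the idea, not their code; spikePhase, nShuf and the helper recompute_phases are placeholders):

% Sketch of the trial-shuffling z-score idea (placeholders: spikePhase{t}
% = observed spike phases of trial t, nShuf = number of shuffles,
% recompute_phases = hypothetical helper that re-extracts the phases after
% pairing the spikes of trial t with the LFP of trial perm(t))
plv     = @(ph) abs(mean(exp(1i*ph)));
obsPLV  = plv(cat(1, spikePhase{:}));                  % observed PLV
nullPLV = nan(nShuf,1);
for k = 1:nShuf
  perm       = randperm(numel(spikePhase));           % shuffle spike-LFP trial pairing
  shufPhase  = recompute_phases(perm);                 % hypothetical helper
  nullPLV(k) = plv(cat(1, shufPhase{:}));
end
z = (obsPLV - mean(nullPLV)) / std(nullPLV);           % z-scored PLV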
Since I have unequal numbers of trials across conditions, I also need to equalise the number of spikes across conditions. I have tried two methods to accomplish this.
Method 1:
1. Within each condition, shuffle the trial association between spike trials and LFP trials (this is for the null distribution). Do this e.g. 100 times. Compute STS.
2. From each trial shuffle (see 1.) use a random subset of spike phases (matched to the condition with the lowest number of spikes) to compute the PLV. Do this random subsampling e.g. 1000 times.
3. For each trial shuffle (see 1.) average across subsamples (see 2.).
4. Compute the z-score using the mean and SD across the trial shuffles’ subsampling averages (see 3.).
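For one condition and one spike-LFP combination, the procedure corresponds roughly to this sketch (phases, nMin, nShuf, nSub and the hypothetical helper trialshuffled_phases are placeholder names):

% Sketch of Method 1 (placeholders: phases = observed spike phases,
% nMin = spike count of the smallest condition, nShuf/nSub = number of
% trial shuffles / subsamples, trialshuffled_phases = hypothetical helper
% returning the spike phases after the s-th trial-association shuffle)
plv     = @(ph) abs(mean(exp(1i*ph)));
nullPLV = nan(nShuf,1);
for s = 1:nShuf                                   % step 1: trial shuffles
  shufPh = trialshuffled_phases(s);
  tmp    = nan(nSub,1);
  for b = 1:nSub                                  % step 2: spike-matched subsamples
    tmp(b) = plv(shufPh(randperm(numel(shufPh), nMin)));
  end
  nullPLV(s) = mean(tmp);                         % step 3: average across subsamples
end
obs = nan(nSub,1);                                % observed PLV, subsampled the same way
for b = 1:nSub
  obs(b) = plv(phases(randperm(numel(phases), nMin)));
end
z = (mean(obs) - mean(nullPLV)) / std(nullPLV);   % step 4: z-score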
Method 2:
1. Within each condition, use a random subset of trials (matched to the condition with the lowest number of trials). Do this e.g. 1000 times.
2. For each subsample (see 1.) shuffle the trial associations between spike trials and LFP trials. Do this e.g. 100 times. Compute STS and PLV.
3. For each trial subset (see 1.) compute the z-score using the mean and SD across the trial shuffles (see 2.).
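Again for one condition and one spike-LFP combination, Method 2 corresponds roughly to this sketch (trialPhase, nMinTrials, nSub, nShuf and the hypothetical helper trialshuffled_phases are placeholders):

% Sketch of Method 2 (placeholders: trialPhase{t} = observed spike phases
% of trial t, nMinTrials = trial count of the smallest condition,
% nSub/nShuf = number of trial subsets / trial shuffles,
% trialshuffled_phases = hypothetical helper returning the phases of the
% kept trials after the s-th trial-association shuffle)
plv = @(ph) abs(mean(exp(1i*ph)));
z   = nan(nSub,1);
for b = 1:nSub
  keep    = randperm(numel(trialPhase), nMinTrials);  % step 1: trial subsample
  obsPLV  = plv(cat(1, trialPhase{keep}));
  nullPLV = nan(nShuf,1);
  for s = 1:nShuf                                     % step 2: trial-association shuffles
    shufPh     = trialshuffled_phases(keep, s);
    nullPLV(s) = plv(cat(1, shufPh{:}));
  end
  z(b) = (obsPLV - mean(nullPLV)) / std(nullPLV);     % step 3: z-score per trial subset
end
% the per-subset z-scores could then be summarised, e.g. by averaging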
Now I see that with method 1 the SD is lower for condition A, which is why I get higher z-scores there. With method 2 I get implausibly low z-scores.
Besides the differences in the steps, the two methods also differ in the following ways.
In method 1 I shuffled the spike trains such that a spike train could also be assigned to an LFP trial that didn’t contain any spikes (i.e. I didn’t limit the LFP trials to those in which the units were recorded). This of course gives condition A a much bigger “shuffling pool” than the other two conditions. In method 2 I only shuffled within the LFP trials that actually contained spikes.
Another difference is that in method 2 the spike numbers are only approximately equal, not identical, since I only equalised the trial numbers.
Is there another approach to accomplish what I am looking for? Basically, I want to reduce PLV bias by equalising the spike numbers and I also want to normalise the PLV.
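The spike-count dependence I'm worried about is easy to reproduce in simulation; even for uniformly distributed phases (i.e. no true locking), the raw PLV scales roughly with 1/sqrt(n):

% Quick illustration of the sample-size bias of the raw PLV: with purely
% random phases the expected PLV is roughly sqrt(pi)/(2*sqrt(n)), so the
% condition with fewer spikes tends to get the larger raw PLV
plv = @(ph) abs(mean(exp(1i*ph)));
for n = [50 200 1000]
  sim = arrayfun(@(k) plv(2*pi*rand(n,1)), 1:5000);   % 5000 simulated spike sets
  fprintf('n = %4d: mean null PLV = %.3f (approx. %.3f)\n', ...
          n, mean(sim), sqrt(pi)/(2*sqrt(n)));
end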
I could imagine that limiting the “shuffling pool” in method 1 might equalise the conditions better, but I’m not sure whether the general approach is statistically sound.
It would be great if someone could comment on the methods above and/or propose another method (e.g. would bootstrapping be alright for the generation of the null distribution?).
Best wishes,
Daniel
--
Daniel Hähnke
PhD student
Technische Universität München
Institute of Neuroscience
Translational NeuroCognition Laboratory
Biedersteiner Straße 29, Bau 601
80802 Munich
Germany
Email: daniel.haehnke at tum.de
Phone: +49 89 4140 3356
_______________________________________________
fieldtrip mailing list
fieldtrip at donders.ru.nl
https://mailman.science.ru.nl/mailman/listinfo/fieldtrip