Hi Christine,

If you want to try something Bayesian, and only if you have strong a priori hypotheses for a particular effect in a specific data window of interest, you could extract values from that window and do Bayesian analyses in JASP (https://jasp-stats.org), which is really easy to use and has a range of Bayesian alternatives to the classical F- and t-tests available.

Alternatively, though this is probably much more complicated and time-consuming, you could do full Bayesian analyses with SPM12 (http://www.fil.ion.ucl.ac.uk/spm/software/spm12/), which allows you to calculate posterior probabilities for scalp, scalp x frequency or scalp x time maps. This is a great way of assessing the evidence in your data, but I think it is less popular than permutation tests. You can convert between FieldTrip and SPM formats using spm_eeg_ft2spm.m (rough sketches of both suggestions are in the P.S. below).

I totally agree with Eelke that doing Bayesian tests post hoc, after you have obtained convincing results with permutations, seems like an unnecessary methodological flourish. Yet it is often hard to argue with reviewers without compromising a bit. You might want to use one of the above, e.g. for your most important analysis, and show in the rebuttal that it does not alter your conclusions (hopefully!). This should convince them that the results are robust to methodological choices, and spare you from having to recalculate the whole analysis.

Hope that helps,

Eugenio
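P.S. In case it is useful, here is a rough, untested sketch (MATLAB/FieldTrip) of what I mean by extracting values for JASP: average each subject's data over an a priori channel selection and time window with ft_selectdata, then write one row per subject to a CSV file that JASP can read for a Bayesian paired-samples t-test. The condition variables, channels and latency window below are only placeholders for your own data and hypothesis.

% One value per subject and condition from an a-priori window, exported for JASP.
% condA{i} and condB{i} are assumed to be per-subject ft_timelockanalysis outputs.
cfg             = [];
cfg.channel     = {'Cz', 'CPz', 'Pz'};  % placeholder: your hypothesized channels
cfg.latency     = [0.30 0.50];          % placeholder: your hypothesized window (s)
cfg.avgoverchan = 'yes';
cfg.avgovertime = 'yes';

nsubj = numel(condA);
valA  = nan(nsubj, 1);
valB  = nan(nsubj, 1);
for i = 1:nsubj
  tmpA    = ft_selectdata(cfg, condA{i});
  tmpB    = ft_selectdata(cfg, condB{i});
  valA(i) = tmpA.avg;  % scalar after averaging over channels and time
  valB(i) = tmpB.avg;
end

% One row per subject, one column per condition, which is the layout JASP
% expects for a Bayesian paired-samples t-test.
writetable(table(valA, valB, 'VariableNames', {'condA', 'condB'}), ...
           'window_means_for_jasp.csv');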
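The FieldTrip-to-SPM conversion itself should be more or less a one-liner (I am writing this from memory, so please check the SPM12 documentation for the exact call); the variable and file names are arbitrary:

% 'timelock' is a FieldTrip timelock (or raw) data structure for one subject.
D = spm_eeg_ft2spm(timelock, 'sub01_spm.mat');
% D is then an SPM MEEG object that can be converted to scalp x time images
% and entered into SPM12's Bayesian (posterior probability map) analyses.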

On 12 Mar 2018, at 09:19, Eelke Spaak <e.spaak@donders.ru.nl> wrote:

Dear Christine,

Bayes factors etc. are computed from the posterior distribution over some model parameters (e.g. means of Gaussians in the case analogous to the t-test). As the cluster-based permutation approach is inherently non-parametric (i.e. it tests the exchangeability of data between conditions), I think it would be quite esoteric to try something Bayesian with the cluster test. I think your best bet would be to figure out *why* the reviewer wants this, and then come up with an alternative answer that does not depend on Bayesian measures.

Of course, one could "zoom in" on the effect you found and compute parametric Bayesian stats for that region of interest, but that would constitute "double dipping" if you don't have an independent contrast. In case you find evidence in favour of a null effect (one circumstance under which reviewers might ask for Bayesian evidence), this approach and result might still be valid (as it goes against the bias introduced by the preselection).

Best,
Eelke

On 11 March 2018 at 18:21, Blume Christine <christine.blume@sbg.ac.at> wrote:
> Dear FT-Community,
>
> In the analysis of high-density EEG data for a recent manuscript
> (https://www.biorxiv.org/content/early/2017/12/06/187195) we have used the
> cluster-based permutation approach. While the reviewers commended the choice
> of this approach, one reviewer would like us to calculate a Bayesian measure
> in addition to the Monte Carlo p values. Does anyone have a recommendation
> on how best to approach this, any "best practice" to share?
>
> It is quite easy to calculate a Bayes factor as a follow-up on classic
> t-tests, for example (e.g. see here:
> http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm).
> However, even though the permutation approach uses a t-value as a test
> statistic, it is not a "t-test"...
>
> Best,
> Christine

_______________________________________________
fieldtrip mailing list
fieldtrip@donders.ru.nl
https://mailman.science.ru.nl/mailman/listinfo/fieldtrip