cluster statistic on one sample

Eric Maris e.maris at DONDERS.RU.NL
Mon Oct 20 21:15:30 CEST 2008


Dear Guillaume,

I am not sure I understand your point against the bootstrap. My main source
of information about robust statistics is Wilcox, R. R. (2005). Introduction
to Robust Estimation and Hypothesis Testing (2nd ed.). Academic Press.

In this book, Wilcox makes extensive use of the bootstrap technique. The
validation of the technique, when it has been performed, relies on
Monte Carlo simulations. Also, over almost 600 pages, Wilcox spends only one
page on permutations, basically saying that the permutation test is a special
case of the bootstrap and that there is no particular reason to use it.

Do you have references showing validation tests with a direct comparison of
bootstrap and permutation? My understanding is that such comparisons do not
exist for EEG/MEG data.

Also, one must keep in mind that the bootstrap is particularly efficient when
applied to robust measures of central tendency, like trimmed means and
M-estimators; see, for instance, my recent EEG paper
(http://www.journalofvision.org/8/12/3/).
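
A minimal sketch of this kind of percentile bootstrap, for a 20% trimmed mean,
in plain MATLAB (the data vector x is a placeholder, trimmean and prctile are
Statistics Toolbox functions, and none of this is code from the paper):

  % Percentile-bootstrap confidence interval for a 20% trimmed mean.
  % x is placeholder data; replace it with your own single-subject values.
  x      = randn(30, 1);
  nboot  = 2000;
  tmboot = zeros(nboot, 1);
  for b = 1:nboot
      xb        = x(randi(numel(x), numel(x), 1));  % resample with replacement
      tmboot(b) = trimmean(xb, 40);                 % 40% total = 20% per tail
  end
  ci = [prctile(tmboot, 2.5), prctile(tmboot, 97.5)];  % 95% percentile CI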

Finally, Wilcox provides a large number of recipes for testing the significance
of linear-regression results, which could be applied to the problem outlined
earlier about the hypothesis test against zero.

Please do not misunderstand me. I would love to see rigorous mathematical
proofs substantiating the use of the bootstrap distribution for false alarm
rate control under some scientifically interesting null hypothesis. Also, I
am not married to permutation tests (fortunately!). However, I cannot ignore
the fact that a very nice proof exists (which is even very intuitive)
showing that permutation tests control the false alarm rate for any test
statistic and for any correlation pattern in data of arbitrary
dimensionality. Moreover, they do so under a null hypothesis that is
scientifically interesting (data from multiple experimental conditions
governed by the same probability distribution).
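
For readers who have not seen such a test in practice, the logic can be
sketched in a few lines of plain MATLAB: under the null hypothesis that the
two conditions share one probability distribution, the condition labels are
exchangeable, so any test statistic can be referred to its permutation
distribution. The data and variable names below are illustrative only, not
FieldTrip code:

  % Permutation test for two conditions, using the difference of means
  % as the test statistic (any other statistic would do just as well).
  condA  = randn(20, 1);        % placeholder data, condition A
  condB  = randn(20, 1) + 0.3;  % placeholder data, condition B
  alldat = [condA; condB];
  nA     = numel(condA);
  n      = numel(alldat);

  obs      = mean(condA) - mean(condB);
  nperm    = 1000;
  permstat = zeros(nperm, 1);
  for p = 1:nperm
      idx         = randperm(n);  % randomly reassign the condition labels
      permstat(p) = mean(alldat(idx(1:nA))) - mean(alldat(idx(nA+1:end)));
  end
  pval = mean(abs(permstat) >= abs(obs));  % two-sided permutation p-value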



Please forgive me for being explicit, but I do not think it is scientifically
appropriate to ask me for "references showing validation tests with a direct
comparison of bootstrap and permutation". The burden of proof is
completely on the side of the advocates of the bootstrap. They
have to show that a statistical test based on a bootstrap p-value controls
the false alarm rate under a scientifically interesting null hypothesis.
With such a proof on the table, I will become a vigorous defender of the
bootstrap, but until then I will only present it as a procedure with an
intuitively appealing rationale.



After showing false alarm rate control by the bootstrap (i.e., under some
null hypothesis), there is the issue of comparing it with permutation tests
with respect to statistical sensitivity (i.e., under the alternative
hypothesis). I have no idea about the relative performance of bootstrap and
permutation tests in this respect, but intuitively I do not expect a big
difference. However, what will make a big difference is the type of test
statistic that is evaluated under either the permutation or the bootstrap
distribution.

Greetings,



Eric Maris

Best,



GAR

On 20 Oct 2008, at 15:02, Eric Maris wrote:

Dear Fieldtrip-list-readers,

What about performing a nonparametric test, based on the bootstrap
distribution of the beta weights under the null hypothesis?

This problem sounds similar to one I came across recently (and which I still
have to write something about on FieldTrip's wiki page (sorry Eric)), which
has to do with testing the significance of the F-value for the interaction in
a 2x2 repeated-measures ANOVA. In this case, one also wants to test a
parametric null hypothesis, as Eric phrased it in his last e-mail. One way to
test this (I don't have the reference at hand) is to test the observed
F-statistic against a null distribution obtained by bootstrapping your data,
which you precondition so as to impose the null hypothesis (in the case of an
ANOVA this would mean removing from each observation the mean of the cell to
which it belongs). I don't know yet how to impose the null hypothesis in the
regression case, but would this line of thought be a possibility?

As to a potential implementation: Robert and I are pretty close to having the
bootstrapping implemented.
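
A minimal sketch of the preconditioning idea described above, in plain MATLAB
rather than FieldTrip code, and with the interaction summarised by a simple
contrast instead of the F-value; the matrix data (subjects in rows, the four
cells of the 2x2 design in columns) and all names are purely illustrative:

  % Bootstrap test of a 2x2 repeated-measures interaction under an
  % imposed null hypothesis. data is placeholder input: nsubj x 4,
  % one column per cell of the design.
  data  = randn(14, 4);
  nsubj = size(data, 1);

  interact = @(x) mean((x(:,1) - x(:,2)) - (x(:,3) - x(:,4)));
  obs      = interact(data);

  % Impose the null hypothesis: remove the mean of each cell.
  datanull = data - repmat(mean(data, 1), nsubj, 1);

  nboot    = 1000;
  bootstat = zeros(nboot, 1);
  for b = 1:nboot
      idx         = randi(nsubj, nsubj, 1);   % resample subjects with replacement
      bootstat(b) = interact(datanull(idx, :));
  end
  pval = mean(abs(bootstat) >= abs(obs));     % two-sided bootstrap p-value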


Again, I can only try to clarify some points here. I will not be able to
offer a solution for your problems.

1. Contrary to the permutation test, there is no useful statistical theory
for statistical tests based on the bootstrap distribution. By "useful", I
mean a theory that allows one to specify a scientifically interesting null
hypothesis (such as "an expected value equal to 0") under which the false
alarm rate of a bootstrap-p-value-based test can be controlled.

2. The bootstrap distribution has a nice intuitive appeal, because the
procedure to generate it (sampling with replacement) mimics the sampling
process behind the sampling distribution (which is the ultimate "thing to
get" if you want to quantify the reliability of some quantity). But that is
not a proof of false alarm rate control!

3. I think the bootstrap distribution can be useful in situations where
parametric statistical tests do not exist, but I know of no rigorous
statistical argument to substantiate this claim.


Greetings,

Eric Maris


Yours,



Jan-Mathijs

On Oct 20, 2008, at 11:05 AM, Vladimir Litvak wrote:



Dear Floris and Eric,

Parametric tests at the scalp level that take into account the spatial
relationship between sensors can be done in SPM (with RFT correction).
That will require using some low-level functions to convert the
coefficients to images, but in principle it shouldn't be that difficult.

Best,

Vladimir



On Mon, Oct 20, 2008 at 10:17 AM, Eric Maris <e.maris at donders.ru.nl> wrote:

Dear Floris,


I have a question about statistical analysis on the sensor level.
I would like to make use of the cluster-size thresholding of the
clusterrand routine in FieldTrip. Unfortunately, in the current wrapper,
there seems to be no option for a one-sample T-test. There is an
activation-versus-baseline test, and an (in)dependent-samples test between
two conditions, but what I want to do is simply test whether a 14
(subjects) x 275 (channels) matrix is different from zero, taking into
account the spatial relations between adjacent sensors. (The data points
are regression weights from a multiple-regression analysis, so there is no
easy way to split them into two parts.)

I assume this should be easy to tweak, but I couldn't come up with any
smart ideas on how to do it. Does anyone have any ideas?



I'm afraid that I have to disappoint you, Floris. Your null hypothesis is a
typical parametric null hypothesis: the expected value of some
(matrix-valued) variable being equal to zero. The null hypothesis that is
tested by a nonparametric permutation test is equality, across experimental
conditions, of the probability distribution from which the
(condition-specific) data are drawn. Since you have only a single condition,
I see no way of applying the theory behind nonparametric permutation testing
(of the type described by Maris & Oostenveld, 2007) to your data.



To solve your problem we need a brilliant theoretical insight.

Greetings,



Eric

Thanks in advance!



Floris

****************************************************************************

Guillaume A. Rousselet, Ph.D.
Lecturer

Centre for Cognitive Neuroimaging (CCNi)
Department of Psychology
Faculty of Information & Mathematical Sciences (FIMS)
University of Glasgow
58 Hillhead Street
Glasgow, UK
G12 8QB

The University of Glasgow, charity number SC004401

http://web.me.com/rousseg/GARs_website/

Email: g.rousselet at psy.gla.ac.uk
Fax. +44 (0)141 330 4606
Tel. +44 (0)141 330 6652
Cell +44 (0)791 779 7833

"no test based upon a theory of probability can by itself provide any
valuable evidence of the truth or falsehood of a hypothesis.

But we may look at the purpose of tests from another viewpoint. Without
hoping to know whether each separate hypothesis is true or false, we may
search for rules to govern our behaviour with regard to them, in following
which we insure that, in the long run of experience, we shall not often
be wrong."

             Neyman J & Pearson E, 1933

****************************************************************************




----------------------------------
The aim of this list is to facilitate the discussion between users of the FieldTrip toolbox, to share experiences and to discuss new ideas for MEG and EEG analysis. See also http://listserv.surfnet.nl/archives/fieldtrip.html and http://www.ru.nl/fcdonders/fieldtrip.