From pbikle at NIH.GOV Thu Nov 3 22:19:16 2005
From: pbikle at NIH.GOV (Philip C. Bikle)
Date: Thu, 3 Nov 2005 22:19:16 +0100
Subject: Source analysis error
Message-ID: 

I get the following error when attempting source analysis:

Warning: cross-spectral density matrix is rank deficient
> In fieldtrip-20051027/private/beamformer at 253
  In sourceanalysis at 701
??? Error using ==> mtimes
Inner matrix dimensions must agree.

Error in ==> fieldtrip-20051027/private/beamformer at 365
filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf;

Error in ==> sourceanalysis at 701
dip(i) = beamformer(grid, sens, vol, [], squeeze(Cf(i,:,:)), 'method', 'dics',
'feedback', cfg.feedback, 'projectnoise', cfg.projectnoise, 'keepfilter',
cfg.keepfilter, 'supdip', cfg.supdip,
>>

Can anyone tell me what I am doing wrong? I am attaching the script that I am using to examine coherence. The script errors at line 77 ([source] = sourceanalysis(cfg, freq);).

-------------- next part --------------
%function docoh(subj, cond, pre, post, dip)

addpath /usr/local/fieldtrip-20051027/

ds   = sprintf('/media/usbdisk/MEG/AC/test.ds');
subj = 'AC';
f    = 10;
smo  = 5.;
r    = .7;
loc  = 'RFus';
cond = 'FACES';
pre  = 0.0;
post = 0.5;
dip  = [-5.5,-3.7,-1.7];

cfg = [];
cfg.dataset            = ds;
cfg.trialdef.eventtype = cond;
cfg.trialdef.prestim   = pre;
cfg.trialdef.poststim  = post;
[data] = preprocessing(cfg);

cfg = [];
cfg.method     = 'fft';
cfg.output     = 'powandcsd';
cfg.tapsmofrq  = smo;
cfg.foilim     = [f f];
cfg.keeptrials = 'yes';
cfg.sgncmb     = channelcombination({'MEG' 'MEG'}, data.label);
[freq] = freqanalysis(cfg, data);

cfg = [];
%cfg.xgrid = -10:r:10;
%cfg.ygrid = -10:r:10;
%cfg.zgrid = -2:r:14;
cfg.xgrid   = -12:1:12;
cfg.ygrid   = -10:1:10;
cfg.zgrid   = -3:1:14;
cfg.dim     = [length(cfg.xgrid) length(cfg.ygrid) length(cfg.zgrid)];
N = prod(cfg.dim);
cfg.inside  = 1:N;
cfg.outside = [];
cfg.hdmfile = strcat(ds, '/localSpheres.hdm');
[grid] = PREPARE_LEADFIELD(cfg, freq);
%[grid] = precompute_leadfield(cfg, freq);
%[grid] = source2sparse(grid);

cfg = [];
cfg.channel       = 'MEG';
cfg.method        = 'coh_refdip';
cfg.refdip        = dip;
cfg.projectnoise  = 'yes';
cfg.hdmfile       = strcat(ds, '/localSpheres.hdm');
cfg.grid          = grid;
%cfg.rawtrial     = 'yes';
cfg.jacknife      = 'yes';
cfg.frequency     = f;
cfg.lambda        = .000000000000000000000000001;
cfg.keepleadfield = 'no';
cfg.feedback      = 'none';
[source] = sourceanalysis(cfg, freq);

[source] = sourcedescriptives([], source);
%[source] = source2full(source);

brikname = sprintf('%s-%s-%s-%gHz', subj, cond, loc, f);
[err, errmsg, info] = writesourcebrik(source, source.avg.coh, brikname);
if err
  disp(errmsg)
end
-------------- next part --------------
Warning: higher order synthetic gradiometer configuration
> In fieldtrip-20051027/private/prepare_vol_sens at 202
  In sourceanalysis at 442
2684 dipoles inside, 6766 dipoles outside brain
1 conditions, each with 3 data objects
constructing 14 jacknife replications
scanning repetition 1
Warning: cross-spectral density matrix is rank deficient
> In fieldtrip-20051027/private/beamformer at 253
  In sourceanalysis at 701
??? Error using ==> mtimes
Inner matrix dimensions must agree.

Error in ==> fieldtrip-20051027/private/beamformer at 365
filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf;

Error in ==> sourceanalysis at 701
dip(i) = beamformer(grid, sens, vol, [], squeeze(Cf(i,:,:)), 'method', 'dics',
'feedback', cfg.feedback, 'projectnoise', cfg.projectnoise, 'keepfilter',
cfg.keepfilter, 'supdip', cfg.supdip,
>>

From r.oostenveld at FCDONDERS.RU.NL Mon Nov 7 14:34:31 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Mon, 7 Nov 2005 14:34:31 +0100
Subject: Source analysis error
In-Reply-To: 
Message-ID: 

Hi Philip,

I have tried to replicate your problem. I do not have the same dataset, but a 151 channel dataset gave the same error.

On 3-nov-2005, at 22:19, Philip C.
Bikle wrote:
> I get the following error when attempting source analysis:
>
> Warning: cross-spectral density matrix is rank deficient
> > In fieldtrip-20051027/private/beamformer at 253
> In sourceanalysis at 701
> ??? Error using ==> mtimes
> Inner matrix dimensions must agree.
>
> Error in ==> fieldtrip-20051027/private/beamformer at 365
> filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf;

I looked in the code and set a breakpoint at the corresponding line. It turned out that lf2 was 184x3 instead of 151x3. Our 151 channel system has 184 channels in total, including the reference channels.

> cfg = [];
> cfg.xgrid = -12:1:12;
> cfg.ygrid = -10:1:10;
> cfg.zgrid = -3:1:14;
> cfg.dim = [length(cfg.xgrid) length(cfg.ygrid) length(cfg.zgrid)];
> N = prod(cfg.dim);
> cfg.inside = 1:N;
> cfg.outside = [];

(Side note: these should be cfg.grid.inside and cfg.grid.outside to have any effect.)

> cfg.hdmfile = strcat(ds, '/localSpheres.hdm');
> [grid] = PREPARE_LEADFIELD(cfg, freq);

It turns out that you are pre-computing the leadfields on all channels, including the reference channels. Instead, you should only compute them on the channels that you want to use for source analysis. If you set cfg.channel = 'MEG' in prepare_leadfield, the right channels will be selected.

best,
Robert

From marco.buiatti at GMAIL.COM Tue Nov 8 12:25:51 2005
From: marco.buiatti at GMAIL.COM (Marco Buiatti)
Date: Tue, 8 Nov 2005 12:25:51 +0100
Subject: about cluster randomization analysis
In-Reply-To: <00b301c5dbd0$8cab3c40$de2cae83@fcdc195>
Message-ID: 

Dear Vladimir and Eric,

thank you for your accurate responses. I fully understand from your arguments that temporally zooming in on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test.
Do you think it would be correct to slide a relatively large window (width of 200 ms? 400 ms? to be chosen a priori, of course) through the epochs and compute cluster randomization analysis at each latency to explore dubious significant t-test clusters?

Another related question: I computed a post-hoc, non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data, or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough?

About the minimum number of channels: I understand and agree that it is set only in space. Maybe it would help to say so explicitly in the tutorial.

About the reference: my non-kosher approach does not include changing the reference to chase a significant effect! My previous e-mail was probably misleading about that.

Thank you and have a good day,

Marco

On 10/28/05, Eric Maris wrote:
>
> Dear Marco,
>
> > The procedure I am following now is a sort of two-step method: in the
> > first place, I choose a wide time interval and a low minimum number of
> > channels. I end up with many clusters that are far from being
> > significant. I then shorten the time interval to include just one
> > cluster (starting from the most significant one), increase the minimum
> > number of channels, and run the analysis again. In this case, I eventually
> > got a significant cluster where I was expecting it from a simple
> > observation of the t-test. Do you think this procedure is right or am I
> > doing something wrong?
> > Is it correct to temporally focus on a cluster to
> > check its significance?

> Clusterrandanalysis only controls the false alarm (type I error) rate if you
> choose the "tuning parameters" (latency interval, channel subset, the
> minnbchan parameter; and, if you work on TFRs, also the frequency interval)
> independent of the data. Instead, if you play around with these tuning
> parameters until you find a cluster whose p-value exceeds the critical
> alpha-level, you are not controlling the false alarm rate. In this case, the
> chosen tuning parameters depend on the data.
>
> An extreme example illustrates this even better. Assume you calculate
> T-statistics for all (channel, time point)-pairs and you select the pair
> with the largest T-statistic. Then, you select the latency interval that
> only contains this time point and the channel subset that only contains this
> channel. With these tuning parameters, you reduce your data to a single cell
> in the spatiotemporal matrix, and clusterrandanalysis will produce a
> p-value that is very close to the p-value of a T-test. Since you have
> selected this (channel, time point)-pair on the basis of its T-statistic,
> this p-value is strongly biased.

> > Another couple of questions:
> > 1) Minnbchan. I understood it is the minimum number of significant
> > neighboring (channel, time) points for a (channel, time) point to enter a
> > cluster, no matter whether adjacency is more in channel space or in the
> > time direction. Am I right? Since time and channel space are quite different
> > dimensions, would it be better to set a minimum channel number separately
> > for the two?

> Minnbchan should also be chosen independent of the data. I introduced this
> tuning parameter because it turned out that in 3-dimensional analyses on
> TFRs (involving the dimensions time, space (i.e., sensors) and frequency),
> sometimes a cluster appeared that consisted of two or more 3-dimensional
> "blobs" that were connected by a single (channel, time, frequency)-element.
> From a physiological perspective, such a cluster does not make sense. To
> remove these physiologically implausible (and therefore probably random)
> connections, I introduced the minnbchan parameter. Because of this
> physiological rationale, I apply the minimum-number criterion to the
> spatial, and not to the temporal, dimension. Short-lived phenomena are
> entirely possible from a physiological perspective, whereas effects at
> spatially isolated sensors are not.

> > 2) Maybe because my data are average-referenced, I often end up with a
> > positive and a negative cluster emerging almost at the same time. Have you
> > thought about any way to include the search for dipole-like configurations?

> I have not thought about it, but it certainly makes sense to incorporate
> biophysical constraints (such as dipolar patterns) in the test statistic.
>
> One should be aware of the fact that different hypotheses are tested before
> and after rereferencing. This is a physical and not a statistical issue. As
> you most certainly know, EEG signals are potential DIFFERENCES, and therefore
> the underlying physiological events that are measured by EEG depend on the
> reference channel(s). If the experimental manipulation affects the current
> reference channel, then rereferencing to another channel (or set of
> channels) that is not affected by the experimental manipulation makes a
> difference for the result of the statistical test.
> greetings,
>
> Eric Maris

--
Marco Buiatti - Post Doc
**************************************************************
Cognitive Neuroimaging Unit - INSERM U562
Service Hospitalier Frederic Joliot, CEA/DRM/DSV
4 Place du general Leclerc, 91401 Orsay cedex, France
Telephone: +33 1 69 86 77 65
Fax: +33 1 69 86 78 16
E-mail: marco.buiatti at gmail.com
Web: www.unicog.org
***************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maris at NICI.RU.NL Tue Nov 8 13:17:50 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Tue, 8 Nov 2005 13:17:50 +0100
Subject: about cluster randomization analysis
Message-ID: 

Hi Marco,

thank you for your accurate responses. I fully understand from your arguments that temporally zooming in on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test. Do you think it would be correct to slide a relatively large window (width of 200 ms? 400 ms? to be chosen a priori, of course) through the epochs and compute cluster randomization analysis at each latency to explore dubious significant t-test clusters?

If you have no hypothesis about where to expect an effect, you should use the complete latency window in which it may occur. Of course, this will reduce the sensitivity (statistical power) of your test (in comparison with the situation in which you do know when the effect can occur). As a rule, prior knowledge increases sensitivity.

Another related question: I computed a post-hoc, non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data, or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough?

This phenomenon is not an instability; it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of the trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10-second trial).

greetings,
Eric Maris

On 10/28/05, Eric Maris wrote:

Dear Marco,

> The procedure I am following now is a sort of two-step method: in the
> first place, I choose a wide time interval and a low minimum number of
> channels. I end up with many clusters that are far from being
> significant. I then shorten the time interval to include just one
> cluster (starting from the most significant one), and increase the minimum
> number of channels, and run the analysis again. In this case, I eventually
> got a significant cluster where I was expecting it from a simple
> observation of the t-test. Do you think this procedure is right or am I
> doing something wrong? Is it correct to temporally focus on a cluster to
> check its significance?
Clusterrandanalysis only controls the false alarm (type I error) rate if you choose the "tuning parameters" (latency interval, channel subset, the minnbchan parameter; and, if you work on TFRs, also the frequency interval) independent of the data. Instead, if you play around with these tuning parameters until you find a cluster whose p-value exceeds the critical alpha-level, you are not controlling the false alarm rate. In this case, the chosen tuning parameters depend on the data.

An extreme example illustrates this even better. Assume you calculate T-statistics for all (channel, time point)-pairs and you select the pair with the largest T-statistic. Then, you select the latency interval that only contains this time point and the channel subset that only contains this channel. With these tuning parameters, you reduce your data to a single cell in the spatiotemporal matrix, and clusterrandanalysis will produce a p-value that is very close to the p-value of a T-test. Since you have selected this (channel, time point)-pair on the basis of its T-statistic, this p-value is strongly biased.

> Another couple of questions:
> 1) Minnbchan. I understood it is the minimum number of significant
> neighboring (channel, time) points for a (channel, time) point to enter a
> cluster, no matter whether adjacency is more in channel space or in the
> time direction. Am I right? Since time and channel space are quite different
> dimensions, would it be better to set a minimum channel number separately
> for the two?

Minnbchan should also be chosen independent of the data. I introduced this tuning parameter because it turned out that in 3-dimensional analyses on TFRs (involving the dimensions time, space (i.e., sensors) and frequency), sometimes a cluster appeared that consisted of two or more 3-dimensional "blobs" that were connected by a single (channel, time, frequency)-element. From a physiological perspective, such a cluster does not make sense. To remove these physiologically implausible (and therefore probably random) connections, I introduced the minnbchan parameter. Because of this physiological rationale, I apply the minimum-number criterion to the spatial, and not to the temporal, dimension. Short-lived phenomena are entirely possible from a physiological perspective, whereas effects at spatially isolated sensors are not.

> 2) Maybe because my data are average-referenced, I often end up with a
> positive and a negative cluster emerging almost at the same time. Have you
> thought about any way to include the search for dipole-like configurations?

I have not thought about it, but it certainly makes sense to incorporate biophysical constraints (such as dipolar patterns) in the test statistic.

One should be aware of the fact that different hypotheses are tested before and after rereferencing. This is a physical and not a statistical issue. As you most certainly know, EEG signals are potential DIFFERENCES, and therefore the underlying physiological events that are measured by EEG depend on the reference channel(s). If the experimental manipulation affects the current reference channel, then rereferencing to another channel (or set of channels) that is not affected by the experimental manipulation makes a difference for the result of the statistical test.

greetings,
Eric Maris

--
Marco Buiatti - Post Doc
**************************************************************
Cognitive Neuroimaging Unit - INSERM U562
Service Hospitalier Frederic Joliot, CEA/DRM/DSV
4 Place du general Leclerc, 91401 Orsay cedex, France
Telephone: +33 1 69 86 77 65
Fax: +33 1 69 86 78 16
E-mail: marco.buiatti at gmail.com
Web: www.unicog.org
***************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
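Eric's advice above (fix all tuning parameters independently of the data) can be summarized as a configuration sketch. The field names below are assumptions reconstructed from this thread, not a verified clusterrandanalysis API:

```matlab
% Sketch only: all "tuning parameters" are fixed a priori, before looking
% at the data; none of them may be adjusted after inspecting the clusters.
% Field names are assumptions based on this discussion.
cfg = [];
cfg.statistic = 'depsamplesT';  % paired T-statistic for a within-subject design
cfg.latency   = [0.1 0.5];      % latency interval chosen from prior knowledge
cfg.channel   = 'all';          % channel subset, likewise fixed in advance
cfg.minnbchan = 2;              % minimum-number criterion (spatial dimension only)
cfg.alpha     = 0.05;           % critical alpha-level (two-sided)
% [stat] = clusterrandanalysis(cfg, dataCond1, dataCond2);
```

The point of the sketch is only that every value is committed to before the data are inspected.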
URL:

From marco.buiatti at GMAIL.COM Tue Nov 8 13:58:57 2005
From: marco.buiatti at GMAIL.COM (Marco Buiatti)
Date: Tue, 8 Nov 2005 13:58:57 +0100
Subject: about cluster randomization analysis
In-Reply-To: <008f01c5e45e$70087b70$d72cae83@fcdc195>
Message-ID: 

Hi Eric,

On 11/8/05, Eric Maris wrote:
>
> Hi Marco,
>
> thank you for your accurate responses. I fully understand from your
> arguments that temporally zooming in on clusters is definitely wrong. Still, I
> wonder whether and how it is possible to use cluster randomization analysis
> in cases in which it is difficult to formulate a precise hypothesis about when
> to expect an effect (for example, in infants), or cases in which an
> unexpected effect arises from a t-test. Do you think it would be correct to
> slide a relatively large window (width of 200 ms? 400 ms? to be chosen a priori,
> of course) through the epochs and compute cluster randomization analysis
> at each latency to explore dubious significant t-test clusters?
>
> If you have no hypothesis about where to expect an effect, you should use
> the complete latency window in which it may occur. Of course, this will
> reduce the sensitivity (statistical power) of your test (in comparison with
> the situation in which you do know when the effect can occur). As a rule,
> prior knowledge increases sensitivity.

OK

> Another related question: I computed a post-hoc, non-kosher tuning of the
> window around the most significant cluster in my data, and I saw that it
> is significant (p<0.05) if the window edges exceed the cluster edges by
> about 50 ms (since the cluster is about 70 ms long, the whole window is
> about 170 ms long); but if I take longer windows, the p-value increases
> quite rapidly (I'm running at least 500 random draws for each window, and
> checking that the result does not depend on the number of draws). Do you
> have such instabilities in your data or should I think that the effect
> relative to my cluster is definitely too weak? Or maybe my data are not
> clean enough?
>
> This phenomenon is not an instability; it is what I would expect. Imagine
> your trials are 10 seconds long and there is an effect in the latency window
> between 1.3 and 1.35 seconds (i.e., less than 1 percent of the trial length).
> If you ask clusterrandanalysis to compare the conditions over the complete
> trial length, it may very well miss the effect in the window between 1.3 and
> 1.35 seconds, because it has to use a large critical value in order to
> control for false positives in the time window where there is no effect
> (i.e., 99 percent of the 10-second trial).

I also expected the significance to decrease with an increasing time window, for the same reason, but I was surprised to see the p-value increase so rapidly. Let me pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (A few other details: I have 128 electrodes and 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature.) A related question is: how much do artifacts influence this kind of test?

thank you again,

Marco

--
Marco Buiatti - Post Doc
**************************************************************
Cognitive Neuroimaging Unit - INSERM U562
Service Hospitalier Frederic Joliot, CEA/DRM/DSV
4 Place du general Leclerc, 91401 Orsay cedex, France
Telephone: +33 1 69 86 77 65
Fax: +33 1 69 86 78 16
E-mail: marco.buiatti at gmail.com
Web: www.unicog.org
***************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maris at NICI.RU.NL Tue Nov 8 15:54:22 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Tue, 8 Nov 2005 15:54:22 +0100
Subject: about cluster randomization analysis
Message-ID: 

Hi Marco,

Another related question: I computed a post-hoc, non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data, or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough?

This phenomenon is not an instability; it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of the trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10-second trial).

I also expected the significance to decrease with an increasing time window, for the same reason, but I was surprised to see the p-value increase so rapidly. Let me pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (A few other details: I have 128 electrodes and 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature.) A related question is: how much do artifacts influence this kind of test?

The question of significance can only be answered on the basis of probability calculations. My own experience is irrelevant in this respect. With respect to the artifacts, you must be aware of the fact that the power of statistical tests is adversely affected by eye-blinks and all other non-neuronal factors in the signal.

greetings,
Eric

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 9 09:28:13 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Wed, 9 Nov 2005 09:28:13 +0100
Subject: about cluster randomization analysis
In-Reply-To: <22f732b0511080325i731f02c4odfc1776ef1503e56@mail.gmail.com>
Message-ID: 

Hi Marco,

On 8-nov-2005, at 12:25, Marco Buiatti wrote:
> Do you think it would be correct to slide a relatively large (width
> of 200 ms? 400 ms? to be chosen a priori, of course) window through
> the epochs and compute cluster randomization analysis at each
> latency to explore dubious significant t-test clusters?

You can use such an approach, but then you have to consider each position of the window that you are sliding as a separate statistical comparison of the data in the experimental conditions. The multiple comparison problem over channels and timepoints within the window is then automatically taken care of by clusterrandanalysis, but the multiple comparisons that arise from the multiple positions of the window in which you are "interrogating" your data are not treated by clusterrandanalysis. That means that, for this approach to be statistically completely sound, you should do a Bonferroni correction on the alpha threshold, dividing it by the number of window positions.

Probably you will lose a lot of your statistical power, especially if you slide the window in small steps, so I doubt whether it is useful.
Given that you have expressed your doubts about potential artifacts in some of your subjects and the influence of the artifacts on the outcome of the statistical test, I would guess that putting more effort into making the data itself cleaner is probably more worthwhile.

best regards,
Robert

=======================================================
Robert Oostenveld, PhD
F.C. Donders Centre for Cognitive Neuroimaging
Radboud University Nijmegen
phone: +31-24-3619695
http://www.ru.nl/fcdonders/

From marco.buiatti at GMAIL.COM Wed Nov 9 17:57:43 2005
From: marco.buiatti at GMAIL.COM (Marco Buiatti)
Date: Wed, 9 Nov 2005 17:57:43 +0100
Subject: about cluster randomization analysis
In-Reply-To: <023AE4AB-BFD6-45EE-ADB8-0A80E3905DE3@fcdonders.ru.nl>
Message-ID: 

Dear FieldTrip Masters,

thank you again for your clear and rapid answers. Another question about clusterrandanalysis. As I told you, I'm performing a cluster randomization test for a within-subject experiment, using a two-sided t-test as the pair statistic. The tutorial says that clustering is performed separately for thresholded positive and negative t-statistics, and that the critical value for the cluster-level statistic is also two-sided. I understood that the positive (negative) critical value corresponds to the 95% portion of the randomization distribution of the maximum (minimum) of the positive (negative) cluster statistics. Then why do I obtain two identical (in absolute value) critical values? What am I missing?

thank you,

Marco

On 11/9/05, Robert Oostenveld wrote:
>
> Hi Marco,
>
> On 8-nov-2005, at 12:25, Marco Buiatti wrote:
> > Do you think it would be correct to slide a relatively large (width
> > of 200 ms? 400 ms? to be chosen a priori, of course) window through
> > the epochs and compute cluster randomization analysis at each
> > latency to explore dubious significant t-test clusters?
>
> You can use such an approach, but then you have to consider each
> position of the window that you are sliding as a separate statistical
> comparison of the data in the experimental conditions. The multiple
> comparison problem over channels and timepoints within the window is
> then automatically taken care of by clusterrandanalysis, but the
> multiple comparisons that arise from the multiple positions of the
> window in which you are "interrogating" your data are not treated by
> clusterrandanalysis. That means that, for this approach to be
> statistically completely sound, you should do a Bonferroni correction
> on the alpha threshold, dividing it by the number of window positions.
>
> Probably you will lose a lot of your statistical power, especially if
> you slide the window in small steps, so I doubt whether it is
> useful. Given that you have expressed your doubts about potential
> artifacts in some of your subjects and the influence of the artifacts
> on the outcome of the statistical test, I would guess that putting
> more effort into making the data itself cleaner is probably more
> worthwhile.
>
> best regards,
> Robert
>
> =======================================================
> Robert Oostenveld, PhD
> F.C. Donders Centre for Cognitive Neuroimaging
> Radboud University Nijmegen
> phone: +31-24-3619695
> http://www.ru.nl/fcdonders/

--
Marco Buiatti - Post Doc
**************************************************************
Cognitive Neuroimaging Unit - INSERM U562
Service Hospitalier Frederic Joliot, CEA/DRM/DSV
4 Place du general Leclerc, 91401 Orsay cedex, France
Telephone: +33 1 69 86 77 65
Fax: +33 1 69 86 78 16
E-mail: marco.buiatti at gmail.com
Web: www.unicog.org
***************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
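Robert's Bonferroni point in the quoted reply above can be made concrete with a small piece of arithmetic. This is a sketch with made-up numbers, only to show how quickly the corrected per-window threshold shrinks:

```matlab
% Bonferroni correction for a sliding cluster-randomization window
% (sketch, hypothetical numbers).
alpha    = 0.05;          % overall false alarm rate to control
epoch    = [0 2.0];       % 2 s epoch of interest
winwidth = 0.4;           % 400 ms window, chosen a priori
winstep  = 0.1;           % slide the window in 100 ms steps
npos     = floor((epoch(2) - epoch(1) - winwidth) / winstep) + 1;  % 17 positions
alpha_corrected = alpha / npos;   % about 0.003 per window position
```

With a 100 ms step the per-window alpha already drops to about 0.003, which illustrates why sliding in small steps costs so much statistical power.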
URL:

From p.medendorp at NICI.RU.NL Thu Nov 10 12:31:24 2005
From: p.medendorp at NICI.RU.NL (Pieter Medendorp)
Date: Thu, 10 Nov 2005 12:31:24 +0100
Subject: Comparing waveforms
In-Reply-To: <00d301c5dae1$ce26c030$de2cae83@fcdc195>
Message-ID: 

Eric, may I ask you a question: I have 10 subjects, each with their own data set. For each subject, I look for correlations in their data, in two different ways. So this yields 2 correlation coefficients per subject. With the 10 subjects, I want to test whether the 10 correlation coefficients found in the one way differ from the 10 found in the other way. Do you know the appropriate test (Fisher or the like)? Thanks.

Pieter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From CAFJ.Miller at PSY.UMCN.NL Mon Nov 14 10:55:17 2005
From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller)
Date: Mon, 14 Nov 2005 10:55:17 +0100
Subject: clusterrandanalysis
Message-ID: 

Dear Eric,

I have two questions concerning clusterrandanalysis:

First, I performed a frequency analysis with Brain Vision Analyzer and exported these data into an Excel and SPSS file. How can I import these data into Matlab in order to obtain a format on which I can perform a cluster-level randomization test for a within-subjects experiment?

Second, I want to compare three conditions: two drug conditions and a placebo condition. In all conditions, a baseline measurement was made before drug intake. I want to take these baseline measurements into account. In a parametric test like MANOVA this is usually done with a covariate or the introduction of an extra factor (time). How can I do this in clusterrandanalysis?
Thanks in advance,

Christopher Miller, MSc
Unit for Clinical Psychopharmacology and Neuropsychiatry
Department of Psychiatry 974
Radboud University Nijmegen Medical Centre
PO Box 9101
6500 HB Nijmegen
The Netherlands
Tel.: + 31 24 3613204
Email: CAFJ.Miller at psy.umcn.nl

From maris at NICI.RU.NL Mon Nov 14 17:13:01 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Mon, 14 Nov 2005 17:13:01 +0100
Subject: clusterrandanalysis
Message-ID: 

Hi Christopher,

> I have two questions concerning clusterrandanalysis:
>
> First, I performed a frequency analysis with Brain Vision Analyzer and
> exported these data into an Excel and SPSS file. How can I import these
> data into Matlab in order to obtain a format on which I can perform a
> cluster-level randomization test for a within-subjects experiment?

This is not a question about clusterrandanalysis but about how to import preprocessed data from another package such that it is compatible with FieldTrip functions. Although I am not an expert in these issues (Robert Oostenveld is our expert), I think it is complicated and intellectually not very satisfying (because of all the bookkeeping that is probably involved). I advise you to import your non-preprocessed BVA data files into FieldTrip (we have import routines for this) and do your frequency analysis in FieldTrip. Besides sound statistics, FieldTrip also offers state-of-the-art spectral density estimation. Learning the FieldTrip function freqanalysis will probably take less time than importing your BVA power spectra.

> Second, I want to compare three conditions: two drug conditions and a
> placebo condition. In all conditions, a baseline measurement was made
> before drug intake. I want to take these baseline measurements into account.
> In a parametric test like MANOVA this is usually done with a covariate or
> the introduction of an extra factor (time). How can I do this in
> clusterrandanalysis?

1. Divide the activation power by the baseline power (and, optionally, take the log of this ratio) and submit this to clusterrandanalysis.
2. Compare each of the drug conditions with the placebo condition (using a T-statistic) with respect to this baseline-normalized dependent variable.

greetings,
Eric Maris

From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 14 17:44:48 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Mon, 14 Nov 2005 17:44:48 +0100
Subject: problems importing elp files
Message-ID: 

Dear List Users,

I'm trying to import some .avr files exported from BESA. However, the read_besa_avr function returns an error like this:

??? Error using ==> strrep
Cell elements must be character arrays.

Error in ==> fieldtrip-20051113\private\read_besa_avr at 61
avr.label = strrep(lbl.textdata(:,2) ,'''', '');

Error in ==> besa2fieldtrip at 44
tmp = read_besa_avr(filename);

My .elp file looks like this:

EEG Fp1'   -89.51   -74.20
EEG Fpz'    89.49    90.00
EEG Fp2'    89.51    74.20
EEG Nz'    108.96    90.00
EEG AF9'  -113.26   -50.72
EEG AF7'   -89.61   -55.88
EEG AF3'   -73.15   -69.74
EEG AFz'    67.74    90.00
EEG AF4'    73.15    69.74
EEG AF8'    89.61    55.88
EEG AF10'  113.27    50.72
EEG F9'   -113.98   -38.43
EEG F7'    -89.65   -40.32
EEG F5'    -72.42   -45.38
EEG F3'    -58.13   -55.16
EEG F1'    -49.40   -70.86
EEG Fz'     46.01    90.00
EEG F2'     49.40    70.86
EEG F4'     58.13    55.16
EEG F6'     72.42    45.38
EEG F8'     89.65    40.32
EEG F10'   113.98    38.43
(truncated...)
When I look into the intermediate output of lbl = importdata(elpfile) inside the crashing function read_besa_avr, I get something like this

lbl =
        data: [71x1 double]
    textdata: {81x3 cell}

    [1x23 char]    []    []
    [1x21 char]    []    []
    [1x21 char]    []    []
    [1x22 char]    []    []
    [1x24 char]    []    []
    [1x23 char]    []    []
    [1x23 char]    []    []
    [1x21 char]    []    []
    [1x21 char]    []    []
    [1x21 char]    []    []
    'EEG'    'AF10'    '113.27'
    'EEG'    'F9'      '-113.98'
    'EEG'    'F7'      '-89.65'
    'EEG'    'F5'      '-72.42'
    'EEG'    'F3'      '-58.13'
    'EEG'    'F1'      '-49.40'
    'EEG'    'Fz'      '46.01'
    'EEG'    'F2'      '49.40'
    'EEG'    'F4'      '58.13'
    'EEG'    'F6'      '72.42'
    'EEG'    'F8'      '89.65'
    'EEG'    'F10'     '113.98'
    'EEG'    'FT9'     '-114.79'
    'EEG'    'FT7'     '-89.84'
    'EEG'    'FC5'     '-67.69'
    'EEG'    'FC3'     '-46.94'
    (truncated...)

Does anybody know what's wrong here? Thank you very much for your help, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327

From r.oostenveld at FCDONDERS.RU.NL Tue Nov 15 22:08:05 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Tue, 15 Nov 2005 22:08:05 +0100 Subject: problems importing elp files In-Reply-To: <4378BF00.6060006@mpih-frankfurt.mpg.de> Message-ID:

Hi Michael On 14-nov-2005, at 17:44, Michael Wibral wrote: > I'm trying to import some .avr files exported from BESA. However > the read_besa_avr function returns an error like this: > ... I copied and pasted your truncated elp file content from your mail into a local file and had no problem reading it in. Looking at the output of Matlab, it seems to me that the importdata function (which is standard Matlab) is not able to detect the boundaries between the columns. Some lines in the file are read as 22 chars, some lines are read as a few chunks and one line seems to be parsed as a large number of chunks. Therefore I suspect that the spaces and tabs are messed up in your elp file.
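Robert's diagnosis above (importdata failing on a mix of tabs and spaces) can also be checked and worked around from within Matlab itself. A minimal sketch, in which the file name 'channels.elp' is only a placeholder:

```matlab
% Sketch: normalize whitespace in an .elp file so that importdata can
% detect the column boundaries (the filename is a placeholder).
elpfile = 'channels.elp';
fid = fopen(elpfile, 'r');
txt = fscanf(fid, '%c');                % read the complete file as characters
fclose(fid);
txt = strrep(txt, sprintf('\t'), ' ');  % replace every tab with a space
fid = fopen(elpfile, 'w');
fprintf(fid, '%s', txt);
fclose(fid);
lbl = importdata(elpfile);              % textdata should now be a clean Nx3 cell
```

This rewrites the file in place, so keep a backup copy before running it.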
Please try copying and pasting the content into a new file, make sure that there are no tabs but only spaces, and save it again to disk with the original name. best Robert

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 12:21:40 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 12:21:40 +0100 Subject: Problem with data from BESA Message-ID:

Hi, I have imported averaged EEG data from BESA (std 81 electrodes, average reference) using the .mul format and the corresponding .sfp file to import the electrode locations. The import into fieldtrip seems to work fine with these formats (it didn't when I tried .avr and .elp...). However, the maps look very different from what I see in BESA (more like something differentiated / inverted from the BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip somewhere that this is EEG data, so that it doesn't do the things it would when dealing with MEG gradiometer data? Or is there something I have to do to let fieldtrip know that the data are average reference data? I can't find anything in the tutorials on this matter. Thank you very much for any help on this, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From stephan.moratti at UNI-KONSTANZ.DE Wed Nov 16 13:13:33 2005 From: stephan.moratti at UNI-KONSTANZ.DE (Stephan Moratti) Date: Wed, 16 Nov 2005 13:13:33 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID:

Hi Michael, I often use BESA exported data with many different tools. One problem I encountered often is that the coordinate system applied was not compatible. Sometimes I had to shift the whole thing by 90 degrees or so.
As you are using sfp files (containing x,y,z coordinates), this could be the problem. But I am not sure, as I haven't imported into fieldtrip yet. Maybe just a hint, maybe not. Stephan At 12:21 16.11.2005 +0100, you wrote: > Hi, > > I have imported averaged EEG data from BESA (std 81 electrodes, average reference) using the .mul >format and the corresponding .sfp file to import the electrode locations. >The import into fieldtrip seems to work fine with these formats (it didn't >when I tried .avr and .elp...). However, the maps look very different from >what I see in BESA (more like something differentiated / inverted from >the BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip >somewhere that this is EEG data, so that it doesn't do the things it would >when dealing with MEG gradiometer data? Or is there something I have to do >to let fieldtrip know that the data are average reference data? I can't >find anything in the tutorials on this matter. > > Thank you very much for any help on this, > > Michael Wibral > > M. Wibral Dipl. Phys. > Max Planck Institute for Brain Research > Dept. Neurophysiology > Deutschordenstrasse 46 > 60528 Frankfurt am Main > Germany > > +49(0)69/6301-83849 > +49(0)173/4966728 > +49(0)69/96769-327 > ----------------------------- Dr. Stephan Moratti (PhD) Dept. of Psychology University of Konstanz P.O Box D25 Phone: +40 (0)7531 882385 Fax: +49 (0)7531 884601 D-78457 Konstanz, Germany e-mail: Stephan.Moratti at uni-konstanz.de http://www.clinical-psychology.uni-konstanz.de/

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 14:16:46 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 14:16:46 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID:

Hi Michael > I see in BESA (more like something differentiated / inverted from > the BESA maps - the foci are clearly shifted).
The projection of the 3D electrode locations towards the 2D plane (in which the color-coded data has to be represented on screen or paper) might be quite different. Fieldtrip uses layout files in which you can specify the location of each sensor in the 2D plane (have a look at one of the *.lay files). If you do not specify a layout file, the 2D layout is constructed on the fly from the 3D electrode locations that are represented as an NelecX3 matrix in data.elec.pnt. I suggest that you turn on the electrode labels in topoplotER (cfg.showlabels option) and see whether that makes sense. If you are using standard labels of the extended 10-20 system in your EEG data, you can also try topoplotting with a predefined 2D layout, e.g.

cfg = ...
cfg.layout = 'elec1020.lay' % or elec1010.lay
topoplotER(cfg, avg)

> Do I have to tell Fieldtrip somewhere that this is EEG data, so > that it doesn't do the things it would when dealing with MEG > gradiometer data? No, the topoplotting of EEG data and MEG data is done just the same. > Or is there something I have to do to let fieldtrip know that the > data are average reference data. I can't find anything in the > tutorials on this matter. No, referencing of EEG data does not influence the spatial topographical distribution. It might change the global color (depending on the coloraxis), but not the pattern. Re-referencing your data at one timepoint just subtracts a constant value (the potential at the reference electrode) from all electrodes. A geographical map of the Himalayas would also look the same if you expressed the height with respect to the foot of the mountain range instead of with respect to sea level.
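Robert's point about re-referencing can be verified numerically. A minimal sketch with random numbers standing in for real data (all variable names here are illustrative):

```matlab
% Sketch: re-referencing subtracts the same value from all channels at
% each timepoint, so differences between channels (the topography) are
% unchanged.
nchan = 81; nsample = 500;
avg   = randn(nchan, nsample);          % stand-in for an ERP average
ref   = avg(1, :);                      % take channel 1 as the new reference
reref = avg - repmat(ref, nchan, 1);    % re-referenced data
% the potential difference between any pair of channels is identical:
d_before = avg(5, :)   - avg(10, :);
d_after  = reref(5, :) - reref(10, :);
max(abs(d_before - d_after))            % zero up to rounding error
```

Only the overall level shifts; the spatial pattern, which is what the topoplot color-codes, stays the same.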
best regards, Robert

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 16:29:41 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 16:29:41 +0100 Subject: Problem with data from BESA In-Reply-To: <28A6AA31-A895-4DCE-BFEF-645AECA62F63@fcdonders.ru.nl> Message-ID:

Hi Robert, thank you very much for the quick reply. I noticed that I supplied insufficient information. I actually switched on the electrode labels in the display and the peaks sit at the wrong electrodes. I therefore assume it is not a problem of the layout file (alone). I actually took into account that the data look heavily distorted and tried to check whether it is just a projection problem by playing around with different scalings of elec.pnt (albeit this didn't seem to affect the plot??). I should have also mentioned that I'm using version 20051113. However I imported the electrode positions with read_fcdc_elec from the version 0.9.6 (there doesn't seem to be a read_fcdc_elec version supplied with 20051113...) - I hope this doesn't cause the trouble. Meanwhile I also tried to use the elec1010.lay layout file, which works fine. However, in fieldtrip I find a negative peak between electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which has no counterpart in BESA, so contour lines don't match exactly), whereas in BESA a positive peak is found on CP4. This looks like an inversion of signs, an inversion of left/right and a difference in the interpolation algorithm.
Below you'll find the code I used (most of it is copied from the BESA tfc sample page on the web site):

% this is the list of BESA datafiles in one condition
filename_AM = { 'AA_AMU2M.mul' 'AK_AMU2M.mul' 'CB_AMU2M.mul' 'HDR01_AMU2M.mul' 'KRT28_AMU2M.mul' 'KSN14_AMU2M.mul' 'LM_AMU2M.mul' 'MN_AMU2M.mul' 'MW_AMU2M.mul' 'MWA_AMU2M.mul' };

% this is the list of BESA datafiles in the other condition
filename_vAM = { 'AA_vAMU2M.mul' 'AK_vAMU2M.mul' 'CB_vAMU2M.mul' 'HDR01_vAMU2M.mul' 'KRT28_vAMU2M.mul' 'KSN14_vAMU2M.mul' 'LM_vAMU2M.mul' 'MN_vAMU2M.mul' 'MW_vAMU2M.mul' 'MWA_vAMU2M.mul' };

nsubj = length(filename_AM);

% collect all single subject data in a convenient cell-array
for i=1:nsubj
  AM{i}  = besa2fieldtrip(filename_AM{i});
  vAM{i} = besa2fieldtrip(filename_vAM{i});
end

% load electrode configuration
elec = read_fcdc_elec('AA_AMU2M.sfp');
elec.pnt = 10.*elec.pnt; % scale, doesn't seem to affect the plotting?

cfg = [];
cfg.keepindividual = 'yes';
AMdata  = timelockgrandaverage(cfg, AM{:});
vAMdata = timelockgrandaverage(cfg, vAM{:});

DiffData = AMdata; % create dummy structure to hold results of the difference calculation
% calculate grand average difference
DiffData.individual = AMdata.individual - vAMdata.individual;
cfg = [];
DiffDataGA = timelockgrandaverage(cfg, DiffData);

% plot the differences
figure;
plotdata1.elec      = elec;
plotdata1.time      = DiffDataGA.time;
plotdata1.label     = DiffDataGA.label;
plotdata1.data2plot = DiffDataGA.avg;
cfg = [];
cfg.layout     = elec;
cfg.showlabels = 'yes';
cfg.zparam     = 'data2plot';
cfg.colorbar   = 'no';
cfg.xlim = [0.5595:0.001:0.5605]; % to zoom in on 560 ms, as BESA only gives data at discrete timepoints
topoplotER(cfg, plotdata1);

Best Regards, Michael M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept.
Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 Robert Oostenveld wrote: > Hi Michael > >> I see in BESA (more like something differentiated / inverted from >> the BESA maps - the foci are clearly shifted). > > > The projection of the 3D electrode locations towards the 2D plane (in > which the color-coded data has to be represented on screen or paper) > might be quite different. Fieldtrip uses layout files in which you > can specify the location of each sensor in the 2D plane (have a look > at one of the *.lay files). If you do not specify a layout file, the > 2D layout is constructed on the fly from the 3D electrode locations > that are represented as NelecX3 matrix in data.elec.pnt. > > I suggest that you turn on the electrodes in topoplotER > (cfg.showlabels option) and see whether that makes sense. > > If you are using standard labels of the extended 10-20 system in your > EEG data, you can also try topoplotting with a predefined 2D layout, > e.g. > > cfg = ... > cfg.layout = 'elec1020.lay' % or elec1010.lay > topoplotER(cfg, avg) > >> Do I have to tell Fieldtrip somewhere that this is EEG data, so that >> it doesn't do the things it would when dealing with MEG gradiometer >> data? > > > No, the topoplotting of EEG data and MEG data is done just the same. > >> Or is there something I have to do to let fieldtrip know that the >> data are average reference data. I can't find anything in the >> tutorials on this matter. > > > No, referencing of EEG data does not influence the spatial > topographical distribution. It might change the global color > (depending on the coloraxis), but not the pattern. Re-referencing > your data at one timepoint just subtracts a constant value (the > potential at the reference electrode) from all electrodes.
A > geographical map of the Himalayas would also look the same if you > would express the height with respect to the foot of the mountain > range instead of with respect to sea level. > > best regards, > Robert > > . >

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 18:00:52 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 18:00:52 +0100 Subject: Problem with data from BESA In-Reply-To: <437B5065.3040307@mpih-frankfurt.mpg.de> Message-ID:

Hi Michael > I actually switched on the electrode labels in the display and the > peaks sit at the wrong electrodes. That seems to indicate that there is a mismatch between the channel names and the electrode names. If you see a peak at a specific electrode in the topoplot, you should be able to confirm its value by looking in the data. Could it be that the ordering of the channels is different in the two conditions that you are reading in (compare AM{1}.label and vAM{1}.label)? > I therefore assume it is not a problem of the layout file (alone). > I actually took into account that the data look heavily distorted > and tried to check whether it is just a projection problem by > playing around with different scalings of elec.pnt (albeit this > didn't seem to affect the plot??). The scaling of the radius of the electrodes does not affect the location towards which it is projected in the 2D plane. What would matter however w.r.t. the 2D projection is if you would shift them. The interpolation algorithm that is used in topoplotER is certainly different from the one that is used in BESA. But I would not expect that to make such a big difference that peaks start shifting around. Maybe Ole can comment on the interpolation, since he supplied the topoplotER function based upon some code from EEGLAB (Ole should read along on the mailing list, but I also CCed him).
> I should have also mentioned that I'm using version 20051113. > However I imported the electrode positions with read_fcdc_elec from > the version 0.9.6 (there doesn't seem to be a read_fcdc_elec > version supplied with 20051113...) - I hope this doesn't cause the > trouble. It indeed was missing. I have tagged the read_fcdc_elec file to be included in the upcoming daily release versions (which are updated every evening on the ftp server). You can pick it up tomorrow at ftp://ftp.fcdonders.nl/pub/fieldtrip/ > Meanwhile I also tried to use the elec1010.lay layout file, which > works fine. However, in fieldtrip I find a negative peak between > electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz > (which has no counterpart in BESA, so contour lines don't match > exactly), whereas in BESA a positive peak is found on CP4. This > looks like an inversion of signs, an inversion of left/right and a > difference in the interpolation algorithm. Does the peak lie on top of an electrode or in between the electrodes? If it is at the electrode, you should be able to verify its actual value. I am concerned that there might be an ordering/naming problem with your EEG channels. Please try the two low-level functions that you find attached. They work like this:

topoplot(cfg, X, Y, datavector, Labels)

and

triplot([X Y zeros(Nchan,1)], [], Labels, datavector)

You can get the X and Y value from the layout file. With the triplot, you can also plot 3D (just use elec.pnt, i.e. [x y z] as the first argument). The triplot does linear interpolation over the triangles that connect the electrodes. It might look coarse, but with it you are guaranteed not to overinterpret the data (i.e. there cannot be any spurious peaks between the electrodes). best, Robert PS if you still cannot figure it out, send me a private mail with your plotdata1 structure and, if not too large, the AM and vAM data.

-------------- next part -------------- A non-text attachment was scrubbed...
Name: topoplot.m Type: application/octet-stream Size: 15694 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: triplot.m Type: application/octet-stream Size: 10679 bytes Desc: not available URL: -------------- next part --------------

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 18:56:20 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 18:56:20 +0100 Subject: Problem with data from BESA In-Reply-To: Message-ID:

Hi Robert, thanks for your help. There does indeed seem to be a problem with the ordering of the electrodes in the mul-files themselves and the corresponding sfp files, which contain some additional fiducials - so if electrodes and their positions are matched not by name but by order during import, that will of course go wrong. Both files also have a different order from the 10-10 layout used in Fieldtrip, but I guess layout files match electrodes by name, don't they? I will try to figure out a workaround. Best, Michael Robert Oostenveld wrote: > Hi Michael > >> I actually switched on the electrode labels in the display and the >> peaks sit at the wrong electrodes. > > > That seems to indicate that there is a mismatch between the channel > names and the electrode names. If you see a peak at a specific > electrode in the topoplot, you should be able to confirm its value by > looking in the data. Could it be that the ordering of the channels > is different in the two conditions that you are reading in (compare AM{1}.label and vAM{1}.label)? > >> I therefore assume it is not a problem of the layout file (alone). I >> actually took into account that the data look heavily distorted and >> tried to check whether it is just a projection problem by playing >> around with different scalings of elec.pnt (albeit this didn't seem >> to affect the plot??).
> > The scaling of the radius of the electrodes does not affect the > location towards which it is projected in the 2D plane. What would > matter however w.r.t. the 2D projection is if you would shift them. > The interpolation algorithm that is used in topoplotER is certainly > different from the one that is used in BESA. But I would not expect > that to make such a big difference that peaks start shifting around. > Maybe Ole can comment on the interpolation, since he supplied the > topoplotER function based upon some code from EEGLAB (Ole should read > along on the mailing list, but I also CCed him). > >> I should have also mentioned that I'm using version 20051113. >> However I imported the electrode positions with read_fcdc_elec from >> the version 0.9.6 (there doesn't seem to be a read_fcdc_elec version >> supplied with 20051113...) - I hope this doesn't cause the trouble. > > It indeed was missing. I have tagged the read_fcdc_elec file to be > included in the upcoming daily release versions (which are updated > every evening on the ftp server). You can pick it up tomorrow at > ftp://ftp.fcdonders.nl/pub/fieldtrip/ > >> Meanwhile I also tried to use the elec1010.lay layout file, which >> works fine. However, in fieldtrip I find a negative peak between >> electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which >> has no counterpart in BESA, so contour lines don't match exactly), >> whereas in BESA a positive peak is found on CP4. This looks like an >> inversion of signs, an inversion of left/right and a difference in >> the interpolation algorithm. > > Does the peak lie on top of an electrode or in between the electrodes? > If it is at the electrode, you should be able to verify its actual > value. I am concerned that there might be an ordering/naming problem > with your EEG channels. Please try the two low-level functions that > you find attached.
> They work like this: > topoplot(cfg,X,Y,datavector,Labels) > and > triplot([X Y zeros(Nchan,1)], [], Labels, datavector) > You can get the X and Y value from the layout file. With the triplot, > you can also plot 3D (just use elec.pnt, i.e. [x y z] as the first > argument). The triplot does linear interpolation over the triangles > that connect the electrodes. It might look coarse, but with it you > are guaranteed not to overinterpret the data (i.e. there cannot be > any spurious peaks between the electrodes). > > best, > Robert > > PS if you still cannot figure it out, send me a private mail with > your plotdata1 structure and, if not too large, the AM and vAM data. >

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 22:25:41 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 22:25:41 +0100 Subject: Problem with data from BESA In-Reply-To: <437B72C4.7070105@mpih-frankfurt.mpg.de> Message-ID:

On 16-nov-2005, at 18:56, Michael Wibral wrote: > Both files also have a different order from the 10-10 layout used > in Fieldtrip, but I guess layout files match electrodes by name, > don't they? I will try to figure out a workaround. Channel matching is indeed done on name and not on number/index. This applies to the channel names in the layout file, but also to the channel names in the electrode file. It means that the channel ordering in either layout-file or elec-structure can be different from the channel ordering in the data, since both the data and the elec contain labels that can be matched when needed (e.g. when plotting or dipole fitting). The elec-structure can also contain more or fewer electrode positions+labels than the EEG itself, e.g. when you have measured bipolar ECG or EOG along (without position), or when you have additional fiducials or electrodes in your cap that were recorded with a Polhemus but not recorded as EEG channel.
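For readers who hit the same ordering problem, the name-based matching described above can also be done by hand before plotting. A minimal sketch, assuming data.label and elec.label are cell arrays of channel name strings as in the structures discussed in this thread:

```matlab
% Sketch: reorder the electrode structure so that its channels follow
% data.label, matching by name instead of by position in the file.
[found, order] = ismember(data.label, elec.label);
if any(~found)
  error('some data channels have no matching electrode position');
end
elec.label = elec.label(order);   % labels now in the same order as the data
elec.pnt   = elec.pnt(order, :);  % Nx3 positions reordered accordingly
```

Note that this also drops any fiducials or extra positions in the sfp file that have no counterpart in the data, which is usually what you want for plotting.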
Since the sfp file is very simple and can hardly be read incorrectly, I suspect that the error in the assignment of channel names occurs in reading the ERP file. Robert

From CAFJ.Miller at PSY.UMCN.NL Thu Nov 17 16:48:12 2005 From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller) Date: Thu, 17 Nov 2005 16:48:12 +0100 Subject: reformat processed data Message-ID:

Dear Robert, I have a question about reformatting (pre)processed data: I performed a frequency analysis with Brain Vision Analyzer and exported the data into Excel. These data are multidimensional (27 channels X 5 frequency bands) and thus consist of 27 X 5 numbers for each of the 16 subjects. Each number represents the power of every channel-frequency combination. Since this was a within design, I have two sets of 27 X 5 numbers for each subject. I want to compare these two sets with a Cluster-level Randomization Test for a Within Subjects experiment, just like the test which is performed in the tutorial on Cluster-level Randomization Tests, page 16-17. In the tutorial this can be done after "load gravgerfcporig;". When this command is executed, two variables appear in the workspace: "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format "<1x1 struct> struct". However, when I import my data with the import wizard there appears only one variable in the workspace, named "data", with the format "<160x30 double> double". The numbers 160 and 30 represent the data as needed for analyzing them in SPSS: 160 rows (16 subjects, with 5 frequency bands in 2 conditions). The number 30 represents 30 columns (27 channels and 3 columns that label (1) the subject (1-16), (2) the frequency band (1-5) and (3) the condition (1-2)). I know that just saving my imported file as a .mat file doesn't change the structure of the file, since I tried this. My question is, how can I reformat these data in such a way that I can perform a Cluster-level Randomization Test for a Within Subjects experiment?
Thanks in advance, Christopher Miller, MSc Unit for Clinical Psychopharmacology and Neuropsychiatry Department of Psychiatry 974 Radboud University Nijmegen Medical Centre PO Box 9101 6500 HB Nijmegen The Netherlands Tel.: + 31 24 3613204 Email: CAFJ.Miller at psy.umcn.nl

From r.oostenveld at FCDONDERS.RU.NL Fri Nov 18 09:28:21 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Fri, 18 Nov 2005 09:28:21 +0100 Subject: reformat processed data In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD0@umcnet13.umcn.nl> Message-ID:

Dear Christopher, The tutorial data that you refer to contains a structure. In general, all data in fieldtrip is represented as a structure. A structure is a collection of variables that belong together, and the "freq" structure, i.e. the structure that results from the freqanalysis function, contains all elements (but not more) that are required to fully describe the data. The file gravgerfcporig.mat contains a grand average Event Related Field (ERF) structure, which is the result of the timelockgrandaverage function:

>> clear all
>> load gravgerfcporig
>> whos
  gravg_erf_cp_FC     1x1    11352992    struct array
  gravg_erf_cp_FIC    1x1    11352992    struct array
>> gravg_erf_cp_FC
         label: {152x1 cell}
          time: [1x900 double]
        dimord: 'repl_chan_time'
          grad: [1x1 struct]
    individual: [10x152x900 double]

(hmmm, the average itself seems to be missing; I was expecting that it would also contain an avg field of 152x900 double. Maybe Eric deleted it. Also the cfg field is missing, so it seems like it was hand-made and not made using timelockgrandaverage.) But that is not the data that you are interested in.
Have a look in the file containing the time-frequency representation of the data

>> load TFRorig
>> whos
  TFRFC     1x1    20540072    struct array
  TFRFIC    1x1    20775680    struct array
>> TFRFC
        label: {151x1 cell}
       dimord: 'rpt_sgncmb_frq_tim'
    powspctrm: [4-D double]
          foi: [5 10 20 40 80]
          toi: [1x39 double]
         grad: [1x1 struct]
          cfg: [1x1 struct]

There you see that there is a structure TFRFC, which contains a powspctrm field, with the order of dimensions (dimord) repetitions-channels-frequency-time. There is a vector describing the values along the time axis (toi) and the frequency axis (foi), and a cell-array with the channel labels (label). Furthermore, there is a "grad" structure which contains the positions of the MEG gradiometers. If you want to copy your data from Excel into fieldtrip, you should create a similar structure in which all sub-elements correspond with the data, since that is what clusterrandanalysis expects (that is the "bookkeeping" that Eric referred to). You currently only have a data matrix of <160x30 double>, but clusterrandanalysis does not know whether it has 160 channels or 30, whether it contains the power at a single frequency that was estimated at multiple timepoints or the power at many frequencies that was estimated at a single timepoint, or what the frequencies actually are. You also have to tell it (through the elec structure) what the locations of your electrodes are, since clusterrandanalysis needs to know which electrodes are neighbours. Although converting the data from Excel to a fieldtrip-compatible structure is possible, I think that it will be easier to do your complete analysis in fieldtrip. Fieldtrip can read Brainvision files, and you can follow all steps in the clusterrandanalysis tutorial, but then, instead of doing a time-frequency analysis (mtmconvol), do only a frequency analysis (mtmfft).
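If one nevertheless wants to attempt the conversion, the bookkeeping amounts to reshaping the 160x30 matrix into a structure with fields like those shown above. A rough sketch for one condition; the channel labels, band centre frequencies and exact field names are assumptions that must be filled in and checked against the FieldTrip version in use, and an elec structure with electrode positions would still have to be added:

```matlab
% Sketch: turn the 160x30 Excel matrix into a freq-like structure.
% Columns 1-27 hold channel power; columns 28-30 hold the subject (1-16),
% frequency band (1-5) and condition (1-2) labels, as described above.
nsubj = 16; nband = 5; nchan = 27;
subj = data(:, 28); band = data(:, 29); cond = data(:, 30);
freqFC = [];
freqFC.label     = chanlabels;           % 27x1 cellstr, e.g. {'Fp1'; 'Fp2'; ...}
freqFC.dimord    = 'rpt_sgncmb_frq_tim';
freqFC.foi       = [2 6 10 20 40];       % placeholder band centres in Hz
freqFC.toi       = 0;                    % a single dummy "timepoint"
freqFC.powspctrm = zeros(nsubj, nchan, nband, 1);
for s = 1:nsubj
  for b = 1:nband
    row = find(subj == s & band == b & cond == 1);  % condition 1 only
    freqFC.powspctrm(s, :, b, 1) = data(row, 1:nchan);
  end
end
```

A second structure for condition 2 would be built the same way with cond == 2; as Robert notes, redoing the spectral analysis in fieldtrip is likely less error-prone.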
best regards, Robert On 17-nov-2005, at 16:48, Christopher Miller wrote: > Dear Robert, > > I have a question about reformating (pre)processed data: > > I performed a frequency analysis with Brain Vision Analyzer and > exported the data into Excel. These data are multidimensional (27 > channels X 5 frequencybands) and thus consist of 27 X 5 numbers for > each of the 16 subjects. Each number represents the power of every > channel-frequency combination. Since this was a within design, I > have two sets of 27 X 5 for each subject. I want to compare these > two sets with a Cluster-level Randomization Test for a Within > Subjects experiment, just like the test which is performed in the > tutorial on Cluster-level Randomization Tests, page 16-17. In the > tutorial this can be done after "load gravgerfcporig;". When this > command is executed, two files appear in the workspace: > "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format: > "<1x1 struct> struct". However, when I import my data with the > import wizard there appears only one file in the workspace, > named :"data" with the format: "<160x30 double> double". The > numbers 160 and 30 represent the data as needed for analyzing them > in SPSS: 160 rows (16 subjects, with 5 frequencybands in 2 > conditions). The number 30 represents 30 columns (27 channels and 3 > channels to label: (1) the subject(1-16), (2) the frequencybands > (1-5) and (3) the condition(1-2). > I know that just saving my imported file as a .mat file doesn't > change the structure of the file, since I tried this. My question > is, how can I reformat these data in such a way that I can perform > a Cluster-level Randomization Test for a Within Subjects experiment? 
> > Thanks in advance, > > > Christopher Miller, MSc > Unit for Clinical Psychopharmacology and Neuropsychiatry > Department of Psychiatry 974 > Radboud University Nijmegen Medical Centre > PO Box 9101 > 6500 HB Nijmegen > The Netherlands > Tel.: + 31 24 3613204 > Email: CAFJ.Miller at psy.umcn.nl > ======================================================= Robert Oostenveld, PhD F.C. Donders Centre for Cognitive Neuroimaging Radboud University Nijmegen phone: +31-24-3619695 http://www.ru.nl/fcdonders/

From CAFJ.Miller at PSY.UMCN.NL Fri Nov 18 15:37:45 2005 From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller) Date: Fri, 18 Nov 2005 15:37:45 +0100 Subject: reformat processed data Message-ID:

Dear Robert, How can I read BrainVision files into Matlab? Can I export BVA preprocessed data into Fieldtrip (according to the background information at the FCdonders website, Fieldtrip supports the .dat files from BVA)? How can this be done? Or must I do all the preprocessing over again? Greetings, Christopher

-----Original Message----- From: FieldTrip discussion list [mailto:FIELDTRIP at NIC.SURFNET.NL] On Behalf Of Robert Oostenveld Sent: Friday, 18 November 2005 9:28 To: FIELDTRIP at NIC.SURFNET.NL Subject: Re: [FIELDTRIP] reformat processed data

Dear Christopher, The tutorial data that you refer to contains a structure. In general, all data in fieldtrip is represented as a structure. A structure is a collection of variables that belong together, and the "freq" structure, i.e. the structure that results from the freqanalysis function, contains all elements (but not more) that are required to fully describe the data.
The file gravgerfcporig.mat contains a grand average Event Related Field (ERF) structure, which is the result of the timelockgrandaverage function:

>> clear all
>> load gravgerfcporig
>> whos
  gravg_erf_cp_FC    1x1    11352992  struct array
  gravg_erf_cp_FIC   1x1    11352992  struct array
>> gravg_erf_cp_FC
         label: {152x1 cell}
          time: [1x900 double]
        dimord: 'repl_chan_time'
          grad: [1x1 struct]
    individual: [10x152x900 double]

(hmmm, the average itself seems to be missing, I was expecting that it also would contain an avg-field of 152x900 double. Maybe Eric deleted it. Also the cfg field is missing, so it seems like it was hand-made and not using timelockgrandaverage.) But that is not the data that you are interested in. Have a look in the file containing the time-frequency representation of the data:

>> load TFRorig
>> whos
  TFRFC    1x1    20540072  struct array
  TFRFIC   1x1    20775680  struct array
>> TFRFC
        label: {151x1 cell}
       dimord: 'rpt_sgncmb_frq_tim'
    powspctrm: [4-D double]
          foi: [5 10 20 40 80]
          toi: [1x39 double]
         grad: [1x1 struct]
          cfg: [1x1 struct]

There you see that there is a structure TFRFC, which contains a powspctrm field, with the order of dimensions (dimord) repetitions-channels-frequency-time. There is a vector describing the values along the time-axis (toi) and a frequency-axis (foi) and a cell-array with the channel labels (label). Furthermore, there is a "grad" structure which contains the position of the MEG gradiometers. If you want to copy your data from Excel into fieldtrip, you should create a similar structure in which all sub-elements correspond with the data, since that is what clusterrandanalysis expects (that is the "bookkeeping" that Eric referred to). 
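To make that bookkeeping concrete, here is a rough sketch of how the 160x30 matrix from Excel could be reshaped into one such structure per condition. Everything here is illustrative, not from the original mail: the file name, channel labels and band centre frequencies are made up, and the exact fields that clusterrandanalysis expects should be checked against the tutorial.

```matlab
% Sketch: reshape the 160x30 Excel export into two freq-like structures.
raw  = xlsread('mydata.xls');   % 160x30 double, as described in the thread
pow  = raw(:, 1:27);            % power values, one row per subject/band/condition
subj = raw(:, 28);              % subject index (1-16)
band = raw(:, 29);              % frequency band index (1-5)
cond = raw(:, 30);              % condition index (1-2)

foi = [4 8 12 20 40];           % hypothetical centre frequency of each band

for c = 1:2
  freq           = [];
  freq.label     = cellstr(num2str((1:27)', 'chan%02d'));  % 27x1 cell of channel names
  freq.dimord    = 'subj_chan_freq';
  freq.foi       = foi;
  freq.powspctrm = nan(16, 27, 5);
  for s = 1:16
    for b = 1:5
      % pick the single row for this subject/band/condition
      freq.powspctrm(s, :, b) = pow(subj == s & band == b & cond == c, :);
    end
  end
  freqpercond{c} = freq;  % the two conditions to be compared
end
```

An elec structure with the real electrode labels and positions still has to be added, since clusterrandanalysis needs it to determine neighbouring channels.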
You currently only have a data matrix of <160x30 double>, but clusterrandanalysis does not know whether it has 160 channels or 30, and whether it contains the power at a single frequency that was estimated at multiple timepoints, or the power at many frequencies that was estimated at a single timepoint, or what the frequencies actually are. You also have to tell (through the elec structure) what the locations of your electrodes are, since clusterrandanalysis needs to know which electrodes are neighbours. Although converting the data from Excel to a fieldtrip-compatible structure is possible, I think that it will be easier to do your complete analysis in fieldtrip. Fieldtrip can read Brainvision files, and you can follow all steps in the clusterrandanalysis tutorial, but then, instead of doing a time-frequency analysis (mtmconvol), do only a frequency analysis (mtmfft). best regards, Robert On 17-nov-2005, at 16:48, Christopher Miller wrote: > Dear Robert, > > I have a question about reformatting (pre)processed data: > > I performed a frequency analysis with Brain Vision Analyzer and > exported the data into Excel. These data are multidimensional (27 > channels X 5 frequency bands) and thus consist of 27 X 5 numbers for > each of the 16 subjects. Each number represents the power of every > channel-frequency combination. Since this was a within design, I > have two sets of 27 X 5 for each subject. I want to compare these > two sets with a Cluster-level Randomization Test for a Within > Subjects experiment, just like the test which is performed in the > tutorial on Cluster-level Randomization Tests, page 16-17. In the > tutorial this can be done after "load gravgerfcporig;". When this > command is executed, two variables appear in the workspace: > "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format: > "<1x1 struct> struct". 
However, when I import my data with the > import wizard there appears only one variable in the workspace, > named "data" with the format: "<160x30 double> double". The > numbers 160 and 30 represent the data as needed for analyzing them > in SPSS: 160 rows (16 subjects, with 5 frequency bands in 2 > conditions). The number 30 represents 30 columns (27 channels and 3 > columns to label: (1) the subject (1-16), (2) the frequency bands > (1-5) and (3) the condition (1-2)). > I know that just saving my imported file as a .mat file doesn't > change the structure of the file, since I tried this. My question > is, how can I reformat these data in such a way that I can perform > a Cluster-level Randomization Test for a Within Subjects experiment? > > Thanks in advance, > > > Christopher Miller, MSc > Unit for Clinical Psychopharmacology and Neuropsychiatry > Department of Psychiatry 974 > Radboud University Nijmegen Medical Centre > PO Box 9101 > 6500 HB Nijmegen > The Netherlands > Tel.: + 31 24 3613204 > Email: CAFJ.Miller at psy.umcn.nl > ======================================================= Robert Oostenveld, PhD F.C. Donders Centre for Cognitive Neuroimaging Radboud University Nijmegen phone: +31-24-3619695 http://www.ru.nl/fcdonders/ From r.oostenveld at FCDONDERS.RU.NL Mon Nov 21 13:05:15 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Mon, 21 Nov 2005 13:05:15 +0100 Subject: reformat processed data In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD1@umcnet13.umcn.nl> Message-ID: On 18-nov-2005, at 15:37, Christopher Miller wrote: > Dear Robert, > > How can I read BrainVision-files into matlab? > Can I export BVA preprocessed data into Fieldtrip (according to the > background information at the FCdonders website, Fieldtrip supports > the .dat files from BVA)? How can this be done? Or must I do all > the preprocessing over? Fieldtrip automatically detects the type of data. 
If you have done filtering and artefact removal in BVA, you should save the result in a *.dat file and still do the preprocessing in Fieldtrip (which involves reading in the data, which is what you want, and optionally filtering, which you do not want), except that you can keep the cfg-options of PREPROCESSING empty to prevent it from doing the filtering. You still have to specify the cfg-settings for DEFINETRIAL. If you specify cfg.trialdef.eventtype='?' a list with the events in your data file will be displayed on screen. best Robert From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 21 15:21:48 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Mon, 21 Nov 2005 15:21:48 +0100 Subject: Problem with data from BESA In-Reply-To: Message-ID: Hi Robert, thanks to your help I have meanwhile figured out and solved the problems with the topographies, i.e. the maps look fine now as far as geometry is concerned. I guess the error was that the BESA files contained " ' " at the end of the electrode names (the interpolation to a common set of 81 electrodes was done using digitized individual coordinates). I removed the extra " ' " and - just to make sure nothing goes wrong - also made an ordered layout file for my configuration. What remains puzzling however is the inversion of amplitudes (+ -> - ). The exported .mul file from BESA and the BESA data seem to match, however the plotted data seem to be inverted. I want to check this using simpler data, though, and then come back to this if I can confirm it. I have however another question regarding the interpretation of cluster analysis results. Am I correct in saying that the family wise error rate (alpha) tells me the risk of obtaining a false positive statement of the type that I specify previously with alphathresh? 
For example, if I specify an alphathresh of 0.1 (let's call this a trend for short) in the first pass of the analysis (multiple testing) before clustering, then the clusterrandomization using alpha = 0.05 tells me that I run a risk of 5% of wrongly identifying at least one of these 'trend clusters'. (Or else, if the above is incorrect, what is the reason not to use a very lenient criterion in the first pass to feed the clusterrandomization with as many clusters as possible?) Best regards, Michael Robert Oostenveld wrote: > On 16-nov-2005, at 18:56, Michael Wibral wrote: > >> Both files also have a different order from the 10-10 layout used >> in Fieldtrip, but I guess layout files match electrodes per name, >> don't they. I will try to figure out a workaround. > > > Channel matching is indeed done on name and not on number/index. This > applies for the channel names in the layout file, but also for the > channel names in the electrode file. It means that the channel > ordering in either layout-file or elec-structure can be different > from the channel ordering in the data, since both the data and the > elec contain labels that can be matched when needed (e.g. when > plotting or dipole fitting). The elec-structure can also contain more > or less electrode positions+labels than the EEG itself, e.g. when you > have measured bipolar ECG or EOG along (without position), or when > you have additional fiducials or electrodes in your cap that were > recorded with a polhemus but not recorded as EEG channel. > > Since the sfp file is very simple and can hardly be read incorrectly, > I suspect that the error in the assignment in channel names occurs in > reading the ERP file. > > Robert > > . > From maris at NICI.RU.NL Mon Nov 21 15:59:10 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Mon, 21 Nov 2005 15:59:10 +0100 Subject: Problem with data from BESA Message-ID: Hi Michael, > I have however another question regarding the interpretation of > cluster analysis results. 
Am I correct in saying that the family wise error > rate (alpha) tells me the risk of obtaining a false positive statement of > the type that I specify previously with alphathresh? For example, if I > specify an alphathresh of 0.1 (let's call this a trend for short) in the > first pass of the analysis (multiple testing) before clustering, then the > clusterrandomization using alpha = 0.05 tells me that I run a risk of 5% of > wrongly identifying at least one of these 'trend clusters'. > (Or else, if the above is incorrect, what is the reason not to use a very > lenient criterion in the first pass to feed the clusterrandomization with > as many clusters as possible?) The issue is statistical power (sensitivity). If you use a very lenient criterion (say, alphathresh=0.2) to select candidate cluster members, this will result in large clusters purely by chance. If the effect in your data is strong but confined to a small number of sensors and timepoints, clusterrandanalysis may not pick it up. This is because the reference distribution is dominated by these weak but large "chance clusters". You will not encounter this problem if you make alphathresh stricter (i.e. choose a smaller value). On the other hand, a strict (small) alphathresh will miss weak but widespread effects. To sum up, alphathresh determines the relative sensitivity to "strong but small" and "weak but large" clusters. greetings, Eric Maris From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 23 13:26:13 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 23 Nov 2005 13:26:13 +0100 Subject: Problem with data from BESA In-Reply-To: <00c001c5eeac$20c1bb00$d92cae83@fcdc195> Message-ID: Hi Eric, thank you for the explanation, things are much clearer now. 
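[Eric's tradeoff can be seen in a toy numeric example. This is not FieldTrip code and not part of the original exchange; it only thresholds a made-up row of t-values at a lenient and at a strict level and prints the resulting cluster masses (the sum of t-values within each cluster). With the lenient threshold, the broad-but-weak run forms a sizeable cluster of its own, which pads the randomization distribution; the strict threshold keeps only the strong focal cluster.]

```matlab
% Toy illustration of the alphathresh tradeoff (not FieldTrip code).
t = [0.5 1.1 1.3 1.2 0.9 4.8 5.2 0.4];   % strong but focal effect at samples 6-7
for tcrit = [1.0 2.5]                    % lenient vs strict cluster-forming threshold
  above  = t > tcrit;                    % candidate cluster members
  d      = diff([0 above 0]);
  starts = find(d == 1);                 % first sample of each cluster
  stops  = find(d == -1) - 1;            % last sample of each cluster
  for k = 1:numel(starts)
    fprintf('tcrit %.1f: cluster %d-%d, mass %.1f\n', ...
            tcrit, starts(k), stops(k), sum(t(starts(k):stops(k))));
  end
end
```

With tcrit = 1.0 the weak run at samples 2-4 becomes a cluster of mass 3.6 next to the focal cluster at 6-7; with tcrit = 2.5 only the focal cluster survives.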
In the meantime I have encountered another problem with clusterrandanalysis: When using maxsum as a test statistic everything works fine, but using maxsumtminclustersize with the same specified (maximum) alpha the fields clusrand.posclusters and clusrand.negclusters stay empty - although there seem to be large enough clusters in posclusterslabelmat and negclusterslabelmat (I used cfg.smallestcluster=2). Is this a bug, or does the use of maxsumtminclustersize somehow reduce sensitivity (from the description in the 2005 tutorial I thought it was similar to using FDR or the 'Holmes' method for computing critical p values)? Maybe I am also missing something on a conceptual level that makes the information in posclusters invalid if I use maxsumtminclustersize as a test statistic? Below I pasted the code that produced this behaviour.

%Clusterrandomization analysis
cfg = [];
cfg.elec = elec;
cfg.statistic = 'depsamplesT';
cfg.alphathresh = 0.05;
cfg.makeclusters = 'yes';
cfg.minnbchan = 1; % 1 neighbour, i.e. 2 channels
cfg.smallestcluster = 2;
cfg.clusterteststat = 'maxsumtminclustersize'; % replace with maxsum to get lots of entries in clusrand.posclusters
cfg.onetwo = 'twosided';
cfg.alpha = 0.05;
cfg.nranddraws = 1000;
cfg.latency = [0.40 0.65];
[clusrand] = clusterrandanalysis(cfg, AMdata, vAMdata);
clusrand.elec = elec;

Best, Michael Eric Maris wrote: > Hi Michael, > > >> I have however another question regarding the interpretation of >> cluster analysis results. Am I correct in saying that the family wise >> error rate (alpha) tells me the risk of obtaining a false positive >> statement of the type that I specify previously with alphathresh? For >> example, if I specify an alphathresh of 0.1 (let's call this a trend for >> short) in the first pass of the analysis (multiple testing) >> before clustering, then the clusterrandomization using alpha = 0.05 >> tells me that I run a risk of 5% of wrongly identifying at least one >> of these 'trend clusters'. 
>> (Or else, if the above is incorrect, what is the reason not to use a >> very lenient criterion in the first pass to feed the >> clusterrandomization with as many clusters as possible?) > > > > The issue is statistical power (sensitivity). If you use a very > lenient criterion (say, alphathresh=0.2) to select candidate cluster > members, this will result in large clusters purely by chance. If the > effect in your data is strong but confined to a small number of > sensors and timepoints, clusterrandanalysis may not pick it up. This > is because the reference distribution is dominated by these weak but > large "chance clusters". You will not encounter this problem if you > make alphathresh stricter (i.e. choose a smaller value). On the other hand, a strict (small) alphathresh will miss > weak but widespread effects. > > To sum up, alphathresh determines the relative sensitivity to "strong > but small" and "weak but large" clusters. > > > greetings, > > Eric Maris > > . > From maris at NICI.RU.NL Wed Nov 23 13:56:28 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Wed, 23 Nov 2005 13:56:28 +0100 Subject: Problem with data from BESA Message-ID: Hi Michael, > thank you for the explanation, things are much clearer now. In the > meantime I have encountered another problem with clusterrandanalysis: > When using maxsum as a test statistic everything works fine, but using > maxsumtminclustersize with the same specified (maximum) alpha the fields > clusrand.posclusters and clusrand.negclusters stay empty - although there > seem to be large enough clusters in posclusterslabelmat and > negclusterslabelmat (I used cfg.smallestcluster=2). Is this a bug, or > does the use of maxsumtminclustersize somehow reduce sensitivity (from > the description in the 2005 tutorial I thought it was similar to using FDR > or the 'Holmes' method for computing critical p values)? 
Maybe I am also > missing something on a conceptual level that makes the information in > posclusters invalid if I use maxsumtminclustersize as a test statistic? Yes, there is a bug in Clusterrandanalysis when the option cfg.clusterteststat = 'maxsumtminclustersize' is used. I know where it is, but I need some time to fix it (due to dependencies in the code). Give me a week to fix it. greetings, Eric > Below I pasted the code that produced this behaviour. > > %Clusterrandomization analysis > cfg=[]; > cfg.elec =elec; > cfg.statistic = 'depsamplesT'; > cfg.alphathresh = 0.05; > cfg.makeclusters = 'yes'; > cfg.minnbchan = 1; %1 neighbour i.e. 2 channels > cfg.smallestcluster = 2; > cfg.clusterteststat = 'maxsumtminclustersize'; % replace with maxsum to > get lots of entries in clusrand.posclusters > cfg.onetwo = 'twosided'; > cfg.alpha = 0.05; > cfg.nranddraws = 1000; > cfg.latency = [0.40 0.65]; > [clusrand] = clusterrandanalysis(cfg, AMdata, vAMdata); > clusrand.elec=elec; > > Best, > Michael > > Eric Maris wrote: > >> Hi Michael, >> >> >>> I have however another question regarding the interpretation of >>> cluster analysis results. Am I correct in saying that the family wise >>> error rate (alpha) tells me the risk of obtaining a false positive >>> statement of the type that I specify previously with alphathresh? For >>> example, if I specify an alphathresh of 0.1 (let's call this a trend for >>> short) in the first pass of the analysis (multiple testing) >>> before clustering, then the clusterrandomization using alpha = 0.05 tells >>> me that I run a risk of 5% of wrongly identifying at least one of these >>> 'trend clusters'. >>> (Or else, if the above is incorrect, what is the reason not to use a very >>> lenient criterion in the first pass to feed the clusterrandomization >>> with as many clusters as possible?) >> >> >> >> The issue is statistical power (sensitivity). 
If you use a very lenient >> criterion (say, alphathresh=0.2) to select candidate cluster members, >> this will result in large clusters purely by chance. If the effect in >> your data is strong but confined to a small number of sensors and >> timepoints, clusterrandanalysis may not pick it up. This is because the >> reference distribution is dominated by these weak but large "chance >> clusters". You will not encounter this problem if you make alphathresh >> stricter (i.e. choose a smaller value). On the other hand, a strict (small) alphathresh will miss weak but >> widespread effects. >> >> To sum up, alphathresh determines the relative sensitivity to "strong but >> small" and "weak but large" clusters. >> >> >> greetings, >> >> Eric Maris >> >> . >> From marie at PSY.GLA.AC.UK Fri Nov 25 11:50:37 2005 From: marie at PSY.GLA.AC.UK (Marie Smith) Date: Fri, 25 Nov 2005 10:50:37 +0000 Subject: meg realign In-Reply-To: <4837D13C-376B-4167-991F-1AE32768B562@fcdonders.ru.nl> Message-ID: Hi, I was wondering if someone could clarify for me in some more detail how the meg-realign function works. From the function help it seems to perform a coarse source reconstruction and then re-project back to a standard gradiometer array. I am most curious about how this coarse source representation is implemented. Thanks, Marie Smith From r.oostenveld at FCDONDERS.RU.NL Mon Nov 28 15:53:00 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Mon, 28 Nov 2005 15:53:00 +0100 Subject: meg realign In-Reply-To: <53C0453E-2753-4AF7-A2C4-BEC35B4631CC@psy.gla.ac.uk> Message-ID: Hi Marie, On 25-nov-2005, at 11:50, Marie Smith wrote: > I was wondering if someone could clarify for me in some more detail > how the meg-realign function works. From the function help it seems > to perform a coarse source reconstruction and then re-project back > to a standard gradiometer array. I am most curious about how this > coarse source representation is implemented. 
You are right, it involves projecting the measured activity on a sheet of dipoles that approximates the cortex, followed by a forward computation of the field of those dipoles at the template gradiometer locations. The algorithm is described in combination with a simulation study in the paper T.R. Knosche, Transformation of whole-head MEG recordings between different sensor positions. Biomed Tech (Berl). 2002 Mar;47(3):59-62. A similar algorithm, with the main difference being a different source model, is described in the appendix of the paper de Munck JC, Verbunt JP, Van't Ent D, Van Dijk BW. The use of an MEG device as 3D digitizer and motion monitoring system. Phys Med Biol. 2001 Aug;46(8):2041-52. I will send you a pdf version of both papers in a separate mail addressed directly to you. best regards, Robert From h.f.kwok at BHAM.AC.UK Tue Nov 29 16:07:30 2005 From: h.f.kwok at BHAM.AC.UK (Hoi Fei Kwok) Date: Tue, 29 Nov 2005 16:07:30 +0100 Subject: importing my own data Message-ID: Dear Robert, In the FAQ section of the FieldTrip website, it is said that if I import my own data, I have to define the following fields: data.label, data.trial, data.fsample, data.time and data.cfg. The first four are straightforward enough. However, how do I set up data.cfg? What are the subfields? Regards, Hoi Fei From r.oostenveld at FCDONDERS.RU.NL Wed Nov 30 17:10:33 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 30 Nov 2005 17:10:33 +0100 Subject: importing my own data In-Reply-To: Message-ID: Hi Hoi Fei, data.cfg can be empty (i.e. []) in your case. It is used to remember the configuration details of all steps that you take in FT. Some functions assume that data.cfg is present and want to copy it over in their output (e.g. timelock.cfg.previous), therefore it should be present to be sure. The most recent versions of FT however should check whether it is present or not, and only attempt to copy it if present. 
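[Putting the FAQ fields together, a minimal hand-made data structure could look like the following sketch; the channel names, sampling rate and the random signal are purely illustrative and not from the original mail.]

```matlab
% Minimal hand-made raw data structure along the lines of the FAQ.
fsample = 256;                              % sampling rate in Hz (illustrative)
nchan   = 4;  ntrial = 2;  nsamp = 2 * fsample;
data         = [];
data.label   = {'C3'; 'C4'; 'Cz'; 'Pz'};    % one label per channel (illustrative)
data.fsample = fsample;
data.trial   = cell(1, ntrial);             % one chan x samples matrix per trial
data.time    = cell(1, ntrial);             % matching time axis per trial, in seconds
for k = 1:ntrial
  data.trial{k} = randn(nchan, nsamp);      % placeholder signal; use your own data here
  data.time{k}  = (0:nsamp-1) / fsample;
end
data.cfg     = [];                          % can simply stay empty, as noted above
```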
best, Robert PS Given that you have biosemi amplifiers, you are probably working with the BDF format. It would also be nice to implement that natively in fieldtrip. That should not be too hard; if you would be interested in taking that approach instead of constructing a data structure, please contact me directly. On 29-nov-2005, at 16:07, Hoi Fei Kwok wrote: > Dear Robert, > > In the FAQ section of the FieldTrip website, it is said that if I > import my > own data, I have to define the following fields: data.label, > data.trial, > data.fsample, data.time and data.cfg. The first four are > straightforward > enough. However, how do I set up data.cfg? What are the subfields? > > Regards, > Hoi Fei > From r.oostenveld at FCDONDERS.RU.NL Mon Nov 7 14:34:31 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Mon, 7 Nov 2005 14:34:31 +0100 Subject: Source analysis error In-Reply-To: Message-ID: Hi Philip, I have tried to replicate your problem. I do not have the same dataset, but a 151-channel dataset gave the same error. On 3-nov-2005, at 22:19, Philip C. Bikle wrote: > I get the following error when attempting to source analysis: > > Warning: cross-spectral density matrix is rank deficient >> In fieldtrip-20051027/private/beamformer at 253 > In sourceanalysis at 701 > ??? Error using ==> mtimes > Inner matrix dimensions must agree. > > Error in ==> fieldtrip-20051027/private/beamformer at 365 > filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf; I looked in the code and set a breakpoint at the corresponding line. It turned out that lf2 was 184x3 instead of 151x3. Our 151 channel system has 184 channels in total, including the reference channels. > cfg = []; > cfg.xgrid = -12:1:12; > cfg.ygrid = -10:1:10; > cfg.zgrid = -3:1:14; > cfg.dim = [length(cfg.xgrid) length(cfg.ygrid) length(cfg.zgrid)]; > N=prod(cfg.dim); > cfg.inside = 1:N; > cfg.outside = []; (Side note, this should be cfg.grid.inside and cfg.grid.outside to have effect.) > cfg.hdmfile = strcat(ds, '/localSpheres.hdm'); > [grid] = PREPARE_LEADFIELD(cfg, freq); It turns out that you are pre-computing the leadfields on all channels, including the reference channels. 
Instead, you should only compute it on the channels which you want to use for source analysis. If you do cfg.channel = 'MEG' in prepare_leadfield, the right channels will be selected. best, Robert From marco.buiatti at GMAIL.COM Tue Nov 8 12:25:51 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Tue, 8 Nov 2005 12:25:51 +0100 Subject: about cluster randomization analysis In-Reply-To: <00b301c5dbd0$8cab3c40$de2cae83@fcdc195> Message-ID: Dear Vladimir and Eric, thank you for your accurate responses. I fully understand from your arguments that temporally zooming on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test. Do you think it would be correct to slide a relatively large (width of 200ms? 400ms? to be chosen a priori of course) window through the epochs and compute cluster randomization analysis for each latency to explore dubious significant t-test clusters? Another related question: I computed a post-hoc non kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? About the minimum number of channels: I understand and agree that it is set only in space. Maybe it would help to say this explicitly in the tutorial. 
About the reference: my non kosher approach does not include changing the reference to chase a significant effect! My previous e-mail was probably misleading about that. Thank you and have a good day, Marco On 10/28/05, Eric Maris wrote: > > Dear Marco, > > > > The procedure I am following now is a sort of two-steps method: in the > > first place, I choose a wide time interval and a low minimum number of > > channels. I end up with many clusters that are far from being > > significant. I then shorten the time interval to include just one > > cluster (starting from the most significant one), and increase the > minimum > > number of channels, and run the analysis again. In this case, I > eventually > > got a significant cluster where I was expecting it from a simple > > observation of the t-test. Do you think this procedure is right or am I > > doing something wrong? Is it correct to temporally focus on a cluster to > > check its significance? > > > Clusterrandanalysis only controls the false alarm (type I error) rate if > you > choose the "tuning parameters" (latency interval, channel subset, the > minnbchan-parameter; and, if you work on TFRs, also the frequency interval) > independent of the data. Instead, if you play around with these tuning > parameters until you find a cluster whose p-value falls below the critical > alpha-level, you are not controlling the false alarm rate. In this case, > the > chosen tuning parameters depend on the data. > > An extreme example illustrates this even better. Assume you calculate > T-statistics for all (channel, time point)-pairs and you select the pair > with the largest T-statistic. Then, you select the latency interval that > only contains this time point and the channel subset that only contains > this > channel. With these tuning parameters, you reduce your data to a single > cell > in the spatiotemporal matrix, and clusterrandanalysis will produce a > p-value that is very close to the p-value of a T-test. 
Since you have > selected this (channel, time point)-pair on the basis of its T-statistic, > this p-value is strongly biased. > > > > Another couple of questions: > > 1) Minnbchan. I understood it is the minimum number of significant > > neighbor (channel,time) points for a (channel,time) point to enter a > > cluster, no matter if adjacency is more in channel space or time > > direction. Am I right? Since time and channel space are quite different > > dimensions, would it be better to set a minimum channel number separately > > for the two? > > Minnbchan should also be chosen independent of the data. I introduced this > tuning parameter because it turned out that in 3-dimensional analyses on > TFRs (involving the dimensions time, space (i.e., sensors) and frequency), > sometimes a cluster appeared that consisted of two or more 3-dimensional > "blobs" that were connected by a single (channel, time, > frequency)-element. > From a physiological perspective, such a cluster does not make sense. To > remove these physiologically implausible (and therefore probably random) > connections, I introduced the minnbchan parameter. Because of this > physiological rationale, I apply the minimum number criterion to the > spatial, and not to the temporal dimension. Short-lived phenomena are very > well possible from a physiological perspective, whereas effects at > spatially > isolated sensors are not. > > > > 2) Maybe because my data are average-referenced, I often end up with a > > positive and negative cluster emerging almost at the same time. Have you > > thought about any way to include the search of dipole-like > configurations? > > I have not thought about it, but it certainly makes sense to incorporate > biophysical constraints (such as dipolar patterns) in the test statistic. > > One should be aware of the fact that different hypotheses are tested > before > and after rereferencing. This is a physical and not a statistical issue. 
As > you most certainly know, EEG-signals are potential DIFFERENCES and > therefore > the underlying physiological events that are measured by EEG depend on the > reference channel(s). If the experimental manipulation affects the current > reference channel, then rereferencing to another channel (or set of > channels) that is not affected by the experimental manipulation makes a > difference for the result of the statistical test. > > > greetings, > > Eric Maris > -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From maris at NICI.RU.NL Tue Nov 8 13:17:50 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Tue, 8 Nov 2005 13:17:50 +0100 Subject: about cluster randomization analysis Message-ID: Hi Marco, thank you for your accurate responses. I fully understand from your arguments that temporally zooming on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test. Do you think it would be correct to slide a relatively large (width of 200ms? 400ms? to be chosen a priori of course) window through the epochs and compute cluster randomization analysis for each latency to explore dubious significant t-test clusters? If you have no hypothesis about where to expect an effect, you should use the complete latency window in which it may occur.
Of course, this will reduce the sensitivity (statistical power) of your test (in comparison with the situation in which you do know when the effect can occur). As a rule, prior knowledge increases sensitivity. Another related question: I computed a post-hoc non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges extend about 50 ms beyond the cluster edges (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? This phenomenon is not an instability, it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10 second trial). greetings, Eric Maris
-------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.buiatti at GMAIL.COM Tue Nov 8 13:58:57 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Tue, 8 Nov 2005 13:58:57 +0100 Subject: about cluster randomization analysis In-Reply-To: <008f01c5e45e$70087b70$d72cae83@fcdc195> Message-ID: Hi Eric, On 11/8/05, Eric Maris wrote: > > Hi Marco, > > thank you for your accurate responses. I fully understand from your > arguments that temporally zooming on clusters is definitely wrong. Still, I > wonder whether and how it is possible to use cluster randomization analysis > cases in which it is difficult to formulate a precise hypothesis about when > to expect an effect (for example, in infants), or cases in which an > unexpected effect arises from a t-test. Do you think it would be correct to > slide a relatively large (width of 200ms? 400ms? to be chosen a priori of > course) window through the epochs and compute cluster randomization analysis > for each latency to explore dubious significant t-test clusters? > > If you have no hypothesis about where to expect an effect, you should use > the complete latency window in which it may occur. Of course, this will > reduce the sensitivity (statistical power) of your test (in comparison with > the situation in which you do know when the effect can occur). As a rule, > prior knowledge increases sensitivity.
> OK > Another related question: I computed a post-hoc non-kosher tuning of the > window around the most significant cluster in my data, and I saw that it > is significant (p<0.05) if the window edges extend about 50 ms beyond the > cluster edges (since the cluster is about 70 ms long, the whole window is > about 170 ms long); but if I take longer windows, the p-value increases > quite rapidly (I'm running at least 500 random draws for each window, and > checking that the result does not depend on the number of draws). Do you > have such instabilities in your data or should I think that the effect > relative to my cluster is definitely too weak? Or maybe my data are not > clean enough? > > This phenomenon is not an instability, it is what I would expect. Imagine > your trials are 10 seconds long and there is an effect in the latency window > between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). > If you ask clusterrandanalysis to compare the conditions over the complete > trial length, it may very well miss the effect in the window between 1.3 and > 1.35 seconds, because it has to use a large critical value in order to > control for false positives in the time window where there is no effect ( > i.e., 99 percent of the 10 second trial). > > I also expected the significance to decrease while increasing the time window for the same reason, but I was surprised to see the p-value increase so rapidly. I may pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (a few other details: I have 128 electrodes, 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature) A related question is: how much do artifacts influence this kind of test?
thank you again, Marco -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From maris at NICI.RU.NL Tue Nov 8 15:54:22 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Tue, 8 Nov 2005 15:54:22 +0100 Subject: about cluster randomization analysis Message-ID: Hi Marco, Another related question: I computed a post-hoc non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges extend about 50 ms beyond the cluster edges (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? This phenomenon is not an instability, it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10 second trial).
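[Editor's note] The mechanism Eric describes (a longer window forces a larger critical value in order to control false positives over it) can be sketched numerically. This is an editor's illustration in numpy with hypothetical cell counts, not FieldTrip code: under pure noise, the 5% critical value of the maximum |t| over 100 (channel, time) cells is necessarily at least as large as over a 10-cell subwindow, so a fixed-size real effect is harder to detect in the longer window.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws, n_subj, n_cells = 2000, 10, 100

# simulate many pure-noise "experiments" and compute a one-sample
# t-statistic for every (channel, time) cell
x = rng.standard_normal((n_draws, n_subj, n_cells))
t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n_subj))

# 5% critical value of the maximum |t| over a short and a long window;
# the short window is a subset of the long one, so per draw its maximum
# can never exceed the long-window maximum
crit_short = np.quantile(np.abs(t[:, :10]).max(axis=1), 0.95)
crit_long = np.quantile(np.abs(t).max(axis=1), 0.95)
print(crit_short, crit_long)
```

Both critical values sit well above the single-cell value of about 2.26, and the long-window one is larger still, which is why widening the window makes the observed cluster's p-value grow.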
I also expected the significance to decrease while increasing the time window for the same reason, but I was surprised to see the p-value increase so rapidly. I may pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (a few other details: I have 128 electrodes, 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature) A related question is: how much do artifacts influence this kind of test? The question of significance can only be answered on the basis of probability calculations. My own experience is irrelevant in this respect. With respect to the artifacts, you must be aware of the fact that the power of statistical tests is adversely affected by eye-blinks and all other non-neuronal factors in the signal. greetings, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.oostenveld at FCDONDERS.RU.NL Wed Nov 9 09:28:13 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 9 Nov 2005 09:28:13 +0100 Subject: about cluster randomization analysis In-Reply-To: <22f732b0511080325i731f02c4odfc1776ef1503e56@mail.gmail.com> Message-ID: Hi Marco, On 8-nov-2005, at 12:25, Marco Buiatti wrote: > Do you think it would be correct to slide a relatively large (width > of 200ms? 400ms? to be chosen a priori of course) window through > the epochs and compute cluster randomization analysis for each > latency to explore dubious significant t-test clusters? You can use such an approach, but then you have to consider each position of the window that you are sliding as a separate statistical comparison of the data in the experimental conditions.
The multiple comparison problem over channels and timepoints within the window is then automatically taken care of by clusterrandanalysis, but the multiple comparisons that arise due to the multiple locations of the window in which you are "interrogating" your data are not treated by clusterrandanalysis. That means that, for this approach to be statistically completely sound, you should do a Bonferroni correction on the alpha threshold, dividing it by the number of window positions. Probably you will lose a lot of your statistical power, especially if you slide the window in small steps, so I doubt whether it is useful. Given that you have expressed your doubts about potential artifacts in some of your subjects and the influence of the artifacts on the outcome of the statistical test, I would guess that putting more effort into making the data itself cleaner is probably more worthwhile. best regards, Robert ======================================================= Robert Oostenveld, PhD F.C. Donders Centre for Cognitive Neuroimaging Radboud University Nijmegen phone: +31-24-3619695 http://www.ru.nl/fcdonders/ From marco.buiatti at GMAIL.COM Wed Nov 9 17:57:43 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Wed, 9 Nov 2005 17:57:43 +0100 Subject: about cluster randomization analysis In-Reply-To: <023AE4AB-BFD6-45EE-ADB8-0A80E3905DE3@fcdonders.ru.nl> Message-ID: Dear FieldTrip Masters, thank you again for your clear and rapid answers. Another question about clusterrandanalysis. As I told you, I'm performing a cluster randomization test for a within-subject experiment, using a two-sided t-test as pair statistics. The tutorial says that clustering is performed separately for thresholded positive and negative t-statistics, and that the critical value for the cluster level statistics is also two-sided.
I understood that the positive(negative) critical value corresponds to the 95% portion of the randomization distribution of the maximum(minimum) of the positive(negative) clusters statistics. Then, why do I obtain two identical (in absolute value) critical values? What am I missing? thank you, Marco On 11/9/05, Robert Oostenveld wrote: > > Hi Marco, > > On 8-nov-2005, at 12:25, Marco Buiatti wrote: > > Do you think it would be correct to slide a relatively large (width > > of 200ms? 400ms? to be chosen a priori of course) window through > > the epochs and compute cluster randomization analysis for each > > latency to explore dubious significant t-test clusters? > > You can use such an approach, but then you have to consider each > position of the window that you are sliding as a separate statistical > comparison of the data in the experimental conditions. The multiple > comparison problem over channels and timepoints within the window is > then automatically taken care of by clusterrandanalysis, but the > multiple comparisons that arise due to the multiple locations of the > window in which you are "interrogating" your data are not treated by > clusterrandanalysis. That means that, for this approach to be > statistically completely sound, you should do a Bonferroni correction > on the alpha threshold, dividing it by the number of window positions. > > Probably you will lose a lot of your statistical power especially if > you slide the window in small steps, so I doubt whether it is > useful. Given that you have expressed your doubts about potential > artifacts in some of your subjects and the influence of the artifacts > on the outcome of the statistical test, I would guess that putting > more effort into making the data itself cleaner is probably more > worthwhile. > > best regards, > Robert > > > ======================================================= > Robert Oostenveld, PhD > F.C.
Donders Centre for Cognitive Neuroimaging > Radboud University Nijmegen > phone: +31-24-3619695 > http://www.ru.nl/fcdonders/ > -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.medendorp at NICI.RU.NL Thu Nov 10 12:31:24 2005 From: p.medendorp at NICI.RU.NL (Pieter Medendorp) Date: Thu, 10 Nov 2005 12:31:24 +0100 Subject: Comparing waveforms In-Reply-To: <00d301c5dae1$ce26c030$de2cae83@fcdc195> Message-ID: Eric, may I ask you a question: I have 10 subjects, each with their own data set. For each subject, I look for correlations in their data, in two different ways. So this yields 2 correlation coefficients per subject. I have 10 subjects, and I want to compare whether the 10 correlation coefficients found the one way differ from the 10 found the other way. Do you know the appropriate test (Fisher or something similar?). Thanks. Pieter -------------- next part -------------- An HTML attachment was scrubbed... URL: From CAFJ.Miller at PSY.UMCN.NL Mon Nov 14 10:55:17 2005 From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller) Date: Mon, 14 Nov 2005 10:55:17 +0100 Subject: clusterrandanalysis Message-ID: Dear Eric, I have two questions concerning clusterrandanalysis: First, I performed a frequency analysis with Brain Vision Analyzer and exported these data into an Excel and SPSS file. How can I import these data into Matlab in order to obtain a format on which I can perform a Cluster-level Randomization Test for a Within Subjects experiment?
Second, I want to compare three conditions, two drug conditions and a placebo condition. In all conditions, a baseline measurement was made before drug intake. I want to take into account these baseline measurements. In a parametric test like MANOVA this is usually done with a covariate or the introduction of an extra factor (time). How can I perform this in clusterrandanalysis? Thanks in advance, Christopher Miller, MSc Unit for Clinical Psychopharmacology and Neuropsychiatry Department of Psychiatry 974 Radboud University Nijmegen Medical Centre PO Box 9101 6500 HB Nijmegen The Netherlands Tel.: + 31 24 3613204 Email: CAFJ.Miller at psy.umcn.nl From maris at NICI.RU.NL Mon Nov 14 17:13:01 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Mon, 14 Nov 2005 17:13:01 +0100 Subject: clusterrandanalysis Message-ID: Hi Christopher, > I have two questions concerning clusterrandanalysis: > > > First, I performed a frequency analysis with Brain Vision Analyzer and > exported these data into an Excel and SPSS file. How can I import these > data into Matlab in order to obtain a format on which I can perform a > Cluster-level Randomization Test for a Within Subjects experiment? This is not a question about clusterrandanalysis but about how to import preprocessed data from another package such that it is compatible with Fieldtrip functions. Although I am not an expert in these issues (Robert Oostenveld is our expert), I think it is complicated and intellectually not very satisfying (because of all the bookkeeping that is probably involved). I advise you to import your non-preprocessed BVA data files into Fieldtrip (we have import routines for this) and do your frequency analysis in Fieldtrip. Besides sound statistics, Fieldtrip also offers state-of-the-art spectral density estimation. Learning the Fieldtrip function freqanalysis will probably take less time than importing your BVA power spectra.
> > Second, I want to compare three conditions, two drug conditions and a > placebo condition. In all conditions, a baseline measurement was made > before drug intake. I want to take in account these baseline measurements. > In a parametric test like MANOVA this is usually done with a covariate or > the introduction of an extra factor (time). How can I perform this in > clusterrandanalysis? 1. Divide the activation power by the baseline power (and, optionally, take the log of this ratio) and submit this to clusterrandanalysis. 2. Compare each of the drug conditions with the placebo condition (using a T-statistic) with respect to this baseline-normalized dependent variable. greetings, Eric Maris From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 14 17:44:48 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Mon, 14 Nov 2005 17:44:48 +0100 Subject: problems importing elp files Message-ID: Dear List Users, I'm trying to import some .avr files exported from BESA. However the read_besa_avr function returns an error like this: ??? Error using ==> strrep Cell elements must be character arrays. Error in ==> fieldtrip-20051113\private\read_besa_avr at 61 avr.label = strrep(lbl.textdata(:,2) ,'''', ''); Error in ==> besa2fieldtrip at 44 tmp = read_besa_avr(filename); My .elp files look like this: EEG Fp1' -89.51 -74.20 EEG Fpz' 89.49 90.00 EEG Fp2' 89.51 74.20 EEG Nz' 108.96 90.00 EEG AF9' -113.26 -50.72 EEG AF7' -89.61 -55.88 EEG AF3' -73.15 -69.74 EEG AFz' 67.74 90.00 EEG AF4' 73.15 69.74 EEG AF8' 89.61 55.88 EEG AF10' 113.27 50.72 EEG F9' -113.98 -38.43 EEG F7' -89.65 -40.32 EEG F5' -72.42 -45.38 EEG F3' -58.13 -55.16 EEG F1' -49.40 -70.86 EEG Fz' 46.01 90.00 EEG F2' 49.40 70.86 EEG F4' 58.13 55.16 EEG F6' 72.42 45.38 EEG F8' 89.65 40.32 EEG F10' 113.98 38.43 (truncated...) 
When I look into the intermediate output of lbl = importdata(elpfile) inside the crashing function read_besa_avr, I get something like this lbl = data: [71x1 double] textdata: {81x3 cell} [1x23 char] [] [] [1x21 char] [] [] [1x21 char] [] [] [1x22 char] [] [] [1x24 char] [] [] [1x23 char] [] [] [1x23 char] [] [] [1x21 char] [] [] [1x21 char] [] [] [1x21 char] [] [] 'EEG' 'AF10' '113.27' 'EEG' 'F9' '-113.98' 'EEG' 'F7' '-89.65' 'EEG' 'F5' '-72.42' 'EEG' 'F3' '-58.13' 'EEG' 'F1' '-49.40' 'EEG' 'Fz' '46.01' 'EEG' 'F2' '49.40' 'EEG' 'F4' '58.13' 'EEG' 'F6' '72.42' 'EEG' 'F8' '89.65' 'EEG' 'F10' '113.98' 'EEG' 'FT9' '-114.79' 'EEG' 'FT7' '-89.84' 'EEG' 'FC5' '-67.69' 'EEG' 'FC3' '-46.94' (truncated...) Does anybody know what's wrong here? Thank you very much for your help, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 From r.oostenveld at FCDONDERS.RU.NL Tue Nov 15 22:08:05 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Tue, 15 Nov 2005 22:08:05 +0100 Subject: problems importing elp files In-Reply-To: <4378BF00.6060006@mpih-frankfurt.mpg.de> Message-ID: Hi Michael On 14-nov-2005, at 17:44, Michael Wibral wrote: > I'm trying to import some .avr files exported from BESA. However > the read_besa_avr function returns an error like this: > ... I copied and pasted your truncated elp file content from your mail into a local file and had no problem reading it in. Looking at the output of matlab, it seems to me that the importdata function (which is standard matlab) is not able to detect the boundaries between the columns. Some lines in the file are read as 22 chars, some lines are read as a few chunks and one line seems to be parsed as a large number of chunks. Therefore I suspect that the spaces and tabs are messed up in your elp file.
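[Editor's note] Robert's diagnosis (whitespace-sensitive column parsing) is easy to check outside Matlab. A minimal Python sketch, using two hypothetical .elp-style lines, shows that splitting on arbitrary whitespace runs is robust to exactly the tab/space mixture that trips up importdata:

```python
# two lines in the style of the .elp file above: one tab-separated,
# one with runs of spaces (the mixture Robert suspects in the file)
lines = ["EEG\tFp1'\t-89.51\t-74.20", "EEG  Fpz'   89.49  90.00"]

parsed = []
for ln in lines:
    # str.split() with no argument splits on any run of whitespace,
    # so tabs and multiple spaces are handled identically
    sensor_type, label, theta, phi = ln.split()
    parsed.append((sensor_type, label, float(theta), float(phi)))

print(parsed)
```

A fixed-delimiter parser would choke on the second line; whitespace-run splitting (or, in Matlab, normalizing tabs to single spaces before calling importdata, as Robert suggests below) avoids the problem.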
Please try copy and paste the content into a new file, make sure that there are no tabs but only spaces, and save it again to disk with the original name. best Robert From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 12:21:40 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 12:21:40 +0100 Subject: Problem with data from BESA Message-ID: Hi, I have imported averaged EEG data from BESA (std 81 electrodes, average reference) using the .mul format and the corresponding .sfp file to import the electrode locations. The import into fieldtrip seems to work fine with these formats (it didn't when I tried .avr and .elp...). However, the maps look very different from what I see in BESA (more like something differentiated / inverted from the BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip somewhere that this is EEG data, so that it doesn't do the things it would when dealing with MEG gradiometer data? Or is there something I have to do to let fieldtrip know that the data are average reference data. I can't find anything in the tutorials on this matter. Thank you very much for any help on this, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephan.moratti at UNI-KONSTANZ.DE Wed Nov 16 13:13:33 2005 From: stephan.moratti at UNI-KONSTANZ.DE (Stephan Moratti) Date: Wed, 16 Nov 2005 13:13:33 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID: Hi Michael, I often use BESA exported data with many different tools. One problem I often encountered is that the coordinate system applied was not compatible. Sometimes I had to shift the whole thing by 90 degrees or so.
As you are using sfp files (containing x,y,z coordinates), this could be the problem. But I am not sure as I haven't imported to fieldtrip yet. Maybe just a hint, maybe not. Stephan At 12:21 16.11.2005 +0100, you wrote: > Hi, > > EEG data from BESA (std 81 electrodes, average reference) using the .mul >format and the corresponding .sfp file to import the electrode locations. >The import into fieldtrip seems to work fine with these formats (it didn't >when I tried .avr and .elp...). However, the maps look very different from >what I see in BESA (more like something differentiated / inverted from the >BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip >somewhere that this is EEG data, so that it doesn't do the things it would >when dealing with MEG gradiometer data? Or is there something I have to do >to let fieldtrip know that the data are average reference data. I can't >find anything in the tutorials on this matter. > > Thank you very much for any help on this, > > Michael Wibral > > M. Wibral Dipl. Phys. > Max Planck Institute for Brain Research > Dept. Neurophysiology > Deutschordenstrasse 46 > 60528 Frankfurt am Main > Germany > > +49(0)69/6301-83849 > +49(0)173/4966728 > +49(0)69/96769-327 > ----------------------------- Dr. Stephan Moratti (PhD) Dept. of Psychology University of Konstanz P.O Box D25 Phone: +40 (0)7531 882385 Fax: +49 (0)7531 884601 D-78457 Konstanz, Germany e-mail: Stephan.Moratti at uni-konstanz.de http://www.clinical-psychology.uni-konstanz.de/ From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 14:16:46 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 14:16:46 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID: Hi Michael > I see in BESA (more like something differentiated / inverted from > the BESA maps - the foci are clearly shifted).
The projection of the 3D electrode locations towards the 2D plane (in which the color-coded data has to be represented on screen or paper) might be quite different. Fieldtrip uses layout files in which you can specify the location of each sensor in the 2D plane (have a look at one of the *.lay files). If you do not specify a layout file, the 2D layout is constructed on the fly from the 3D electrode locations that are represented as NelecX3 matrix in data.elec.pnt. I suggest that you turn on the electrodes in topoplotER (cfg.showlabels option) and see whether that makes sense. If you are using standard labels of the extended 10-20 system in your EEG data, you can also try topoplotting with a predefined 2D layout, e.g. cfg = ... cfg.layout = 'elec1020.lay' % or elec1010.lay topoplotER(cfg, avg) > Do I have to tell Fieldtrip somewhere that this is EEG data, so > that it doesn't do the things it would when dealing with MEG > gradiometer data? No, the topoplotting of EEG data and MEG data is done just the same. > Or is there something I have to do to let fieldtrip know that the > data are average reference data. I can't find anything in the > tutorials on this matter. No, referencing of EEG data does not influence the spatial topographical distribution. It might change the global color (depending on the coloraxis), but not the pattern. Re-referencing your data at one timepoint just subtracts a constant value (the potential at the reference electrode) from all electrodes. A geographical map of the Himalayas would also look the same if you would express the height with respect to the foot of the mountain range instead of with respect to the sea level.
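[Editor's note] Robert's point can be verified in a few lines: re-referencing subtracts the same constant from every channel at a given timepoint, so all between-channel differences (the topographic pattern) are unchanged. A minimal numpy sketch with hypothetical data, not FieldTrip code:

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(64)   # potentials at one timepoint, arbitrary original reference
v_avgref = v - v.mean()       # re-referenced to the average of all channels

# all pairwise channel differences, before and after re-referencing;
# the constant offset cancels, so the "pattern" is identical
diff_orig = v[:, None] - v[None, :]
diff_avg = v_avgref[:, None] - v_avgref[None, :]
print(np.allclose(diff_orig, diff_avg))
```

So a sign flip or a left/right shift of the foci, as reported above, cannot be caused by the choice of reference; it has to come from something else, such as the sensor layout or coordinate system.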
best regards, Robert From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 16:29:41 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 16:29:41 +0100 Subject: Problem with data from BESA In-Reply-To: <28A6AA31-A895-4DCE-BFEF-645AECA62F63@fcdonders.ru.nl> Message-ID: Hi Robert, thank you very much for the quick reply. I noticed that I supplied insufficient information. I actually switched on the electrode labels in the display and the peaks sit at the wrong electrodes. I therefore assume it is not a problem of the layout file (alone). I actually took into account that the data look heavily distorted and tried to check whether it is just a projection problem by playing around with different scalings of elec.pnt (albeit this didn't seem to affect the plot??). I should have also mentioned that I'm using version 20051113. However, I imported the electrode positions with read_fcdc_elec from the version 0.9.6 (there doesn't seem to be a read_fcdc_elec version supplied with 20051113..) - I hope this doesn't cause the trouble. Meanwhile I also tried to use the elec1010.lay layout file which works fine. However, in fieldtrip I find a negative peak between electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which has no counterpart in BESA, so contourlines don't match exactly), whereas in BESA a positive peak is found on CP4. This looks like an inversion of signs, an inversion of left/right and a difference in the interpolation algorithm.
Below you'll find the code I used (most of it is copied from the BESA tfc sample page on the web site):

  % this is the list of BESA datafiles in one condition
  filename_AM = { 'AA_AMU2M.mul' 'AK_AMU2M.mul' 'CB_AMU2M.mul' ...
                  'HDR01_AMU2M.mul' 'KRT28_AMU2M.mul' 'KSN14_AMU2M.mul' ...
                  'LM_AMU2M.mul' 'MN_AMU2M.mul' 'MW_AMU2M.mul' 'MWA_AMU2M.mul' };

  % this is the list of BESA datafiles in the other condition
  filename_vAM = { 'AA_vAMU2M.mul' 'AK_vAMU2M.mul' 'CB_vAMU2M.mul' ...
                   'HDR01_vAMU2M.mul' 'KRT28_vAMU2M.mul' 'KSN14_vAMU2M.mul' ...
                   'LM_vAMU2M.mul' 'MN_vAMU2M.mul' 'MW_vAMU2M.mul' 'MWA_vAMU2M.mul' };

  nsubj = length(filename_AM);

  % collect all single subject data in a convenient cell-array
  for i=1:nsubj
    AM{i}  = besa2fieldtrip(filename_AM{i});
    vAM{i} = besa2fieldtrip(filename_vAM{i});
  end

  % load electrode configuration
  elec = read_fcdc_elec('AA_AMU2M.sfp');
  elec.pnt = 10.*elec.pnt;  % scale, doesn't seem to affect the plotting?

  cfg = [];
  cfg.keepindividual = 'yes';
  AMdata  = timelockgrandaverage(cfg, AM{:});
  vAMdata = timelockgrandaverage(cfg, vAM{:});

  % create a dummy structure to hold the results of the difference calculation
  DiffData = AMdata;
  % calculate the grand average difference
  DiffData.individual = AMdata.individual - vAMdata.individual;

  cfg = [];
  DiffDataGA = timelockgrandaverage(cfg, DiffData);

  % plot the differences
  figure;
  plotdata1.elec      = elec;
  plotdata1.time      = DiffDataGA.time;
  plotdata1.label     = DiffDataGA.label;
  plotdata1.data2plot = DiffDataGA.avg;

  cfg = [];
  cfg.layout     = elec;
  cfg.showlabels = 'yes';
  cfg.zparam     = 'data2plot';
  cfg.colorbar   = 'no';
  cfg.xlim       = [0.5595:0.001:0.5605];  % to zoom in on 560 ms, as BESA only gives data at timepoints
  topoplotER(cfg, plotdata1);

Best Regards,
Michael

M. Wibral Dipl. Phys.
Max Planck Institute for Brain Research
Dept.
Neurophysiology
Deutschordenstrasse 46
60528 Frankfurt am Main
Germany
Phone: +49(0)69/6301-83849 +49(0)173/4966728
Fax: +49(0)69/96769-327

Robert Oostenveld schrieb:
> Hi Michael
>
>> I see in BESA (more like something differentiated / inverted from
>> the BESA maps - the foci are clearly shifted).
>
> The projection of the 3D electrode locations onto the 2D plane (in
> which the color-coded data has to be represented on screen or paper)
> might be quite different. Fieldtrip uses layout files in which you
> can specify the location of each sensor in the 2D plane (have a look
> at one of the *.lay files). If you do not specify a layout file, the
> 2D layout is constructed on the fly from the 3D electrode locations
> that are represented as an Nelec x 3 matrix in data.elec.pnt.
>
> I suggest that you turn on the electrode labels in topoplotER
> (cfg.showlabels option) and see whether that makes sense.
>
> If you are using standard labels of the extended 10-20 system in your
> EEG data, you can also try topoplotting with a predefined 2D layout,
> e.g.
>
>   cfg = ...
>   cfg.layout = 'elec1020.lay'   % or elec1010.lay
>   topoplotER(cfg, avg)
>
>> Do I have to tell Fieldtrip somewhere that this is EEG data, so that
>> it doesn't do the things it would when dealing with MEG gradiometer
>> data?
>
> No, the topoplotting of EEG data and MEG data is done just the same.
>
>> Or is there something I have to do to let fieldtrip know that the
>> data are average reference data. I can't find anything in the
>> tutorials on this matter.
>
> No, referencing of EEG data does not influence the spatial
> topographical distribution. It might change the global color
> (depending on the color axis), but not the pattern. Re-referencing
> your data at one timepoint just subtracts a constant value (the
> potential at the reference electrode) from all electrodes.
> A geographical map of the Himalayas would also look the same if you
> expressed the height with respect to the foot of the mountain range
> instead of with respect to sea level.
>
> best regards,
> Robert

-------------- next part --------------
An HTML attachment was scrubbed... URL:

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 18:00:52 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Wed, 16 Nov 2005 18:00:52 +0100
Subject: Problem with data from BESA
In-Reply-To: <437B5065.3040307@mpih-frankfurt.mpg.de>
Message-ID:

Hi Michael

> I actually switched on the electrode labels in the display and the
> peaks sit at the wrong electrodes.

That seems to indicate that there is a mismatch between the channel names and the electrode names. If you see a peak at a specific electrode in the topoplot, you should be able to confirm its value by looking in the data. Could it be that the ordering of the channels is different in the two conditions that you are reading in (compare AM{1}.label and vAM{1}.label)?

> I therefore assume it is not a problem of the layout file (alone).
> I actually took into account that the data look heavily distorted
> and tried to check whether it is just a projection problem by
> playing around with different scalings of elec.pnt (albeit this
> didn't seem to affect the plot??).

The scaling of the radius of the electrodes does not affect the location towards which each electrode is projected in the 2D plane. What would matter w.r.t. the 2D projection, however, is if you would shift them. The interpolation algorithm that is used in topoplotER is certainly different from the one that is used in BESA, but I would not expect that to make such a big difference that peaks start shifting around. Maybe Ole can comment on the interpolation, since he supplied the topoplotER function based upon some code from EEGLAB (Ole should read along on the mailing list, but I also CCed him).
> I should have also mentioned that I'm using version 20051113.
> However I imported the electrode positions with read_fcdc_elec from
> version 0.9.6 (there doesn't seem to be a read_fcdc_elec version
> supplied with 20051113) - I hope this doesn't cause the trouble.

It indeed was missing. I have tagged the read_fcdc_elec file to be included in the upcoming daily release versions (which are updated every evening on the ftp server). You can pick it up tomorrow at ftp://ftp.fcdonders.nl/pub/fieldtrip/

> Meanwhile I also tried the elec1010.lay layout file, which works
> fine. However, in fieldtrip I find a negative peak between
> electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz
> (which has no counterpart in BESA, so contour lines don't match
> exactly), whereas in BESA a positive peak is found on CP4. This
> looks like an inversion of signs, an inversion of left/right and a
> difference in the interpolation algorithm.

Does the peak lie on top of an electrode or in between the electrodes? If it is at an electrode, you should be able to verify its actual value. I am concerned that there might be an ordering/naming problem with your EEG channels. Please try the two low-level functions that you find attached. They work like this:

  topoplot(cfg, X, Y, datavector, Labels)

and

  triplot([X Y zeros(Nchan,1)], [], Labels, datavector)

You can get the X and Y values from the layout file. With triplot you can also plot in 3D (just use elec.pnt, i.e. [x y z], as the first argument). The triplot does linear interpolation over the triangles that connect the electrodes. It might look coarse, but with it you are guaranteed not to overinterpret the data (i.e. there cannot be any spurious peaks between the electrodes).

best,
Robert

PS if you still cannot figure it out, send me a private mail with your plotdata1 structure and, if not too large, the AM and vAM data.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: topoplot.m Type: application/octet-stream Size: 15694 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: triplot.m Type: application/octet-stream Size: 10679 bytes Desc: not available URL:
-------------- next part --------------

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 18:56:20 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Wed, 16 Nov 2005 18:56:20 +0100
Subject: Problem with data from BESA
In-Reply-To:
Message-ID:

Hi Robert,

thanks for your help. There does indeed seem to be a problem with the ordering of the electrodes in the mul-files themselves and the corresponding sfp files, which contain some additional fiducials - so if electrodes and their positions are matched not by name but by order during import, that will of course go wrong. Both files also have a different order from the 10-10 layout used in Fieldtrip, but I guess layout files match electrodes by name, don't they? I will try to figure out a workaround.

Best,
Michael

Robert Oostenveld schrieb:
> Hi Michael
>
>> I actually switched on the electrode labels in the display and the
>> peaks sit at the wrong electrodes.
>
> That seems to indicate that there is a mismatch between the channel
> names and the electrode names. If you see a peak at a specific
> electrode in the topoplot, you should be able to confirm its value by
> looking in the data. Could it be that the ordering of the channels
> is different in the two conditions that you are reading in (compare
> AM{1}.label and vAM{1}.label)?
>
>> I therefore assume it is not a problem of the layout file (alone). I
>> actually took into account that the data look heavily distorted and
>> tried to check whether it is just a projection problem by playing
>> around with different scalings of elec.pnt (albeit this didn't seem
>> to affect the plot??).
> The scaling of the radius of the electrodes does not affect the
> location towards which it is projected in the 2D plane. What would
> matter however w.r.t. the 2D projection is if you would shift them.
> The interpolation algorithm that is used in topoplotER is certainly
> different from the one that is used in BESA, but I would not expect
> that to make such a big difference that peaks start shifting around.
> Maybe Ole can comment on the interpolation, since he supplied the
> topoplotER function based upon some code from EEGLAB (Ole should read
> along on the mailing list, but I also CCed him).
>
>> I should have also mentioned that I'm using version 20051113.
>> However I imported the electrode positions with read_fcdc_elec from
>> version 0.9.6 (there doesn't seem to be a read_fcdc_elec version
>> supplied with 20051113) - I hope this doesn't cause the trouble.
>
> It indeed was missing. I have tagged the read_fcdc_elec file to be
> included in the upcoming daily release versions (which are updated
> every evening on the ftp server). You can pick it up tomorrow at
> ftp://ftp.fcdonders.nl/pub/fieldtrip/
>
>> Meanwhile I also tried the elec1010.lay layout file, which works
>> fine. However, in fieldtrip I find a negative peak between
>> electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which
>> has no counterpart in BESA, so contour lines don't match exactly),
>> whereas in BESA a positive peak is found on CP4. This looks like an
>> inversion of signs, an inversion of left/right and a difference in
>> the interpolation algorithm.
>
> Does the peak lie on top of an electrode or in between the electrodes?
> If it is at an electrode, you should be able to verify its actual
> value. I am concerned that there might be an ordering/naming problem
> with your EEG channels. Please try the two low-level functions that
> you find attached.
> They work like this:
>
>   topoplot(cfg, X, Y, datavector, Labels)
>
> and
>
>   triplot([X Y zeros(Nchan,1)], [], Labels, datavector)
>
> You can get the X and Y values from the layout file. With triplot
> you can also plot in 3D (just use elec.pnt, i.e. [x y z], as the
> first argument). The triplot does linear interpolation over the
> triangles that connect the electrodes. It might look coarse, but
> with it you are guaranteed not to overinterpret the data (i.e.
> there cannot be any spurious peaks between the electrodes).
>
> best,
> Robert
>
> PS if you still cannot figure it out, send me a private mail with
> your plotdata1 structure and, if not too large, the AM and vAM data.

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 22:25:41 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Wed, 16 Nov 2005 22:25:41 +0100
Subject: Problem with data from BESA
In-Reply-To: <437B72C4.7070105@mpih-frankfurt.mpg.de>
Message-ID:

On 16-nov-2005, at 18:56, Michael Wibral wrote:

> Both files also have a different order from the 10-10 layout used
> in Fieldtrip, but I guess layout files match electrodes per name,
> don't they. I will try to figure out a workaround.

Channel matching is indeed done on name and not on number/index. This applies for the channel names in the layout file, but also for the channel names in the electrode file. It means that the channel ordering in either the layout file or the elec structure can be different from the channel ordering in the data, since both the data and the elec contain labels that can be matched when needed (e.g. when plotting or dipole fitting). The elec structure can also contain more or fewer electrode positions+labels than the EEG itself, e.g. when you have measured bipolar ECG or EOG along (without position), or when you have additional fiducials or electrodes in your cap that were recorded with a Polhemus but not recorded as EEG channels.
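The name-based matching described above can be illustrated with a short sanity check. This is a hypothetical sketch using plain MATLAB set functions, not FieldTrip's internal matching code; 'data' and 'elec' are assumed to be a preprocessed data structure and an electrode structure, both carrying a label field.

```matlab
% Check that every channel in the data has an electrode position,
% independent of the ordering in the two structures
[common, idat, iel] = intersect(data.label, elec.label);
missing = setdiff(data.label, elec.label);
if ~isempty(missing)
  warning('no electrode position found for:%s', sprintf(' %s', missing{:}));
end
% elec.pnt(iel,:) now holds the positions of the channels data.label(idat)
```

Because the matching is done on the labels, a different ordering in the elec structure is harmless; a different or misspelled label is not.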
Since the sfp file is very simple and can hardly be read incorrectly, I suspect that the error in the assignment of channel names occurs in reading the ERP file.

Robert

From CAFJ.Miller at PSY.UMCN.NL Thu Nov 17 16:48:12 2005
From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller)
Date: Thu, 17 Nov 2005 16:48:12 +0100
Subject: reformat processed data
Message-ID:

Dear Robert,

I have a question about reformatting (pre)processed data:

I performed a frequency analysis with Brain Vision Analyzer and exported the data into Excel. These data are multidimensional (27 channels x 5 frequency bands) and thus consist of 27 x 5 numbers for each of the 16 subjects. Each number represents the power of one channel-frequency combination. Since this was a within-subjects design, I have two sets of 27 x 5 for each subject. I want to compare these two sets with a Cluster-level Randomization Test for a Within Subjects experiment, just like the test which is performed in the tutorial on Cluster-level Randomization Tests, page 16-17.

In the tutorial this can be done after "load gravgerfcporig;". When this command is executed, two variables appear in the workspace, "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format "<1x1 struct> struct". However, when I import my data with the import wizard, only one variable appears in the workspace, named "data", with the format "<160x30 double> double". The numbers 160 and 30 represent the data as needed for analyzing them in SPSS: 160 rows (16 subjects, with 5 frequency bands in 2 conditions). The 30 columns are 27 channels plus 3 columns that label (1) the subject (1-16), (2) the frequency band (1-5) and (3) the condition (1-2).

I know that just saving my imported file as a .mat file doesn't change the structure of the file, since I tried this. My question is, how can I reformat these data in such a way that I can perform a Cluster-level Randomization Test for a Within Subjects experiment?
Thanks in advance,

Christopher Miller, MSc
Unit for Clinical Psychopharmacology and Neuropsychiatry
Department of Psychiatry 974
Radboud University Nijmegen Medical Centre
PO Box 9101
6500 HB Nijmegen
The Netherlands
Tel.: + 31 24 3613204
Email: CAFJ.Miller at psy.umcn.nl

From r.oostenveld at FCDONDERS.RU.NL Fri Nov 18 09:28:21 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Fri, 18 Nov 2005 09:28:21 +0100
Subject: reformat processed data
In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD0@umcnet13.umcn.nl>
Message-ID:

Dear Christopher,

The tutorial data that you refer to contains a structure. In general, all data in fieldtrip is represented as a structure. A structure is a collection of variables that belong together; the "freq" structure, i.e. the structure that results from the freqanalysis function, contains all elements (but not more) that are required to fully describe the data.

The file gravgerfcporig.mat contains a grand average Event Related Field (ERF) structure, which is the result of the timelockgrandaverage function:

  >> clear all
  >> load gravgerfcporig
  >> whos
    gravg_erf_cp_FC    1x1   11352992  struct array
    gravg_erf_cp_FIC   1x1   11352992  struct array
  >> gravg_erf_cp_FC
         label: {152x1 cell}
          time: [1x900 double]
        dimord: 'repl_chan_time'
          grad: [1x1 struct]
    individual: [10x152x900 double]

(hmmm, the average itself seems to be missing; I was expecting that it would also contain an avg field of 152x900 double. Maybe Eric deleted it. Also the cfg field is missing, so it seems like it was hand-made and not created with timelockgrandaverage.)

But that is not the data that you are interested in.
Have a look in the file containing the time-frequency representation of the data:

  >> load TFRorig
  >> whos
    TFRFC    1x1   20540072  struct array
    TFRFIC   1x1   20775680  struct array
  >> TFRFC
        label: {151x1 cell}
       dimord: 'rpt_sgncmb_frq_tim'
    powspctrm: [4-D double]
          foi: [5 10 20 40 80]
          toi: [1x39 double]
         grad: [1x1 struct]
          cfg: [1x1 struct]

There you see that there is a structure TFRFC, which contains a powspctrm field with the order of dimensions (dimord) repetitions-channels-frequency-time. There is a vector describing the values along the time axis (toi), a frequency axis (foi), and a cell-array with the channel labels (label). Furthermore, there is a "grad" structure which contains the positions of the MEG gradiometers.

If you want to copy your data from Excel into fieldtrip, you should create a similar structure in which all sub-elements correspond with the data, since that is what clusterrandanalysis expects (that is the "bookkeeping" that Eric referred to). You currently only have a data matrix of <160x30 double>, but clusterrandanalysis does not know whether it has 160 channels or 30, whether it contains the power at a single frequency that was estimated at multiple timepoints or the power at many frequencies that was estimated at a single timepoint, or what the frequencies actually are. You also have to tell it (through the elec structure) what the locations of your electrodes are, since clusterrandanalysis needs to know which electrodes are neighbours.

Although converting the data from Excel to a fieldtrip-compatible structure is possible, I think that it will be easier to do your complete analysis in fieldtrip. Fieldtrip can read BrainVision files, and you can follow all steps in the clusterrandanalysis tutorial, but then instead of doing a time-frequency analysis (mtmconvol) only doing a frequency analysis (mtmfft).
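Purely as an illustration of the bookkeeping described above, the 160x30 Excel matrix could be reshaped into a freq-like structure roughly as follows. This is a sketch under assumptions: the variable names (data, elec), the dimord string and the band centre frequencies are illustrative, not from the original messages, and should be adapted to the conventions in the tutorial.

```matlab
% Hypothetical reshaping of the 160x30 export: columns 1-3 code the
% subject (1-16), frequency band (1-5) and condition (1-2); columns
% 4-30 hold the power values of the 27 channels.
nsubj = 16; nband = 5; nchan = 27;
chanlabels = cellstr(num2str((1:nchan)', 'chan%02d'));  % replace with your real channel names
for cond = 1:2
  freqdata(cond).label     = chanlabels;
  freqdata(cond).dimord    = 'rpt_chan_frq';
  freqdata(cond).foi       = [2 6 10 20 40];  % illustrative band centre frequencies
  freqdata(cond).elec      = elec;            % electrode positions, e.g. from read_fcdc_elec
  freqdata(cond).powspctrm = zeros(nsubj, nchan, nband);
  for s = 1:nsubj
    for b = 1:nband
      row = find(data(:,1)==s & data(:,2)==b & data(:,3)==cond);
      freqdata(cond).powspctrm(s,:,b) = data(row, 4:30);
    end
  end
end
% freqdata(1) and freqdata(2) would then be the two inputs to clusterrandanalysis
```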
best regards,
Robert

On 17-nov-2005, at 16:48, Christopher Miller wrote:

> Dear Robert,
>
> I have a question about reformatting (pre)processed data:
>
> I performed a frequency analysis with Brain Vision Analyzer and
> exported the data into Excel. These data are multidimensional (27
> channels x 5 frequency bands) and thus consist of 27 x 5 numbers for
> each of the 16 subjects. Each number represents the power of one
> channel-frequency combination. Since this was a within design, I
> have two sets of 27 x 5 for each subject. I want to compare these
> two sets with a Cluster-level Randomization Test for a Within
> Subjects experiment, just like the test which is performed in the
> tutorial on Cluster-level Randomization Tests, page 16-17. In the
> tutorial this can be done after "load gravgerfcporig;". When this
> command is executed, two variables appear in the workspace,
> "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format
> "<1x1 struct> struct". However, when I import my data with the
> import wizard, only one variable appears in the workspace, named
> "data", with the format "<160x30 double> double". The numbers 160
> and 30 represent the data as needed for analyzing them in SPSS:
> 160 rows (16 subjects, with 5 frequency bands in 2 conditions), and
> 30 columns (27 channels plus 3 columns that label (1) the subject
> (1-16), (2) the frequency band (1-5) and (3) the condition (1-2)).
> I know that just saving my imported file as a .mat file doesn't
> change the structure of the file, since I tried this. My question
> is, how can I reformat these data in such a way that I can perform
> a Cluster-level Randomization Test for a Within Subjects experiment?
> Thanks in advance,
>
> Christopher Miller, MSc
> Unit for Clinical Psychopharmacology and Neuropsychiatry
> Department of Psychiatry 974
> Radboud University Nijmegen Medical Centre
> PO Box 9101
> 6500 HB Nijmegen
> The Netherlands
> Tel.: + 31 24 3613204
> Email: CAFJ.Miller at psy.umcn.nl

=======================================================
Robert Oostenveld, PhD
F.C. Donders Centre for Cognitive Neuroimaging
Radboud University Nijmegen
phone: +31-24-3619695
http://www.ru.nl/fcdonders/

From CAFJ.Miller at PSY.UMCN.NL Fri Nov 18 15:37:45 2005
From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller)
Date: Fri, 18 Nov 2005 15:37:45 +0100
Subject: reformat processed data
Message-ID:

Dear Robert,

How can I read BrainVision files into Matlab? Can I export BVA-preprocessed data into Fieldtrip (according to the background information at the FC Donders website, Fieldtrip supports the .dat files from BVA)? How can this be done? Or must I do all the preprocessing over?

Greetings,
Christopher

-----Original Message-----
From: FieldTrip discussion list [mailto:FIELDTRIP at NIC.SURFNET.NL] On Behalf Of Robert Oostenveld
Sent: Friday, 18 November 2005 9:28
To: FIELDTRIP at NIC.SURFNET.NL
Subject: Re: [FIELDTRIP] reformat processed data

[Robert's reply of 18 November, quoted in full in the original message]

From r.oostenveld at FCDONDERS.RU.NL Mon Nov 21 13:05:15 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Mon, 21 Nov 2005 13:05:15 +0100
Subject: reformat processed data
In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD1@umcnet13.umcn.nl>
Message-ID:

On 18-nov-2005, at 15:37, Christopher Miller wrote:

> Dear Robert,
>
> How can I read BrainVision-files into matlab?
> Can I export BVA preprocessed data into Fieldtrip (according to the
> background information at the FCdonders website, Fieldtrip supports
> the .dat files from BVA). How can this be done? Or must I do all
> the preprocessing over?

Fieldtrip automatically detects the type of data.
If you have done filtering and artefact removal in BVA, you should save the result in a *.dat file and still do the preprocessing in Fieldtrip (which involves reading in the data, which is what you want, and optionally filtering, which you do not want), except that you can keep the cfg-options of PREPROCESSING empty to prevent it from doing the filtering. You still do have to specify the cfg-settings for DEFINETRIAL. If you specify cfg.trialdef.eventtype='?', a list with the events in your data file will be displayed on screen.

best
Robert

From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 21 15:21:48 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Mon, 21 Nov 2005 15:21:48 +0100
Subject: Problem with data from BESA
In-Reply-To:
Message-ID:

Hi Robert,

thanks to your help I have meanwhile figured out and solved the problems with the topographies, i.e. the maps look fine now as far as the geometry is concerned. I guess the error was that the BESA files contained " ' " at the end of the electrode names (as the interpolation to a common 81 electrodes was done using digitized individual coordinates). I removed the extra " ' " and - just to make sure nothing goes wrong - also made an ordered layout file for my configuration.

What remains puzzling, however, is the inversion of amplitudes (+ -> -). The exported .mul file from BESA and the BESA data seem to match, yet the plotted data seem to be inverted. I want to check this using simpler data, though, and then come back to this if I can confirm it.

I have however another question regarding the interpretation of clusteranalysis results. Am I correct in saying that the family-wise error rate (alpha) tells me the risk of obtaining a false positive statement of the type that I specify beforehand with alphathresh?
For example, if I specify an alphathresh of 0.1 (let's call this a trend, for brevity) in the first pass of the analysis (multiple testing) before clustering, then the cluster randomization using alpha = 0.05 tells me that I run a 5% risk of wrongly identifying at least one of these 'trend clusters'. (Or else, if the above is incorrect, what is the reason not to use a very lenient criterion in the first pass, to feed the cluster randomization with as many clusters as possible?)

Best regards,
Michael

Robert Oostenveld schrieb:
> On 16-nov-2005, at 18:56, Michael Wibral wrote:
>
>> Both files also have a different order from the 10-10 layout used
>> in Fieldtrip, but I guess layout files match electrodes per name,
>> don't they. I will try to figure out a workaround.
>
> Channel matching is indeed done on name and not on number/index. This
> applies for the channel names in the layout file, but also for the
> channel names in the electrode file. It means that the channel
> ordering in either the layout file or the elec structure can be
> different from the channel ordering in the data, since both the data
> and the elec contain labels that can be matched when needed (e.g.
> when plotting or dipole fitting). The elec structure can also contain
> more or fewer electrode positions+labels than the EEG itself, e.g.
> when you have measured bipolar ECG or EOG along (without position),
> or when you have additional fiducials or electrodes in your cap that
> were recorded with a Polhemus but not recorded as EEG channels.
>
> Since the sfp file is very simple and can hardly be read incorrectly,
> I suspect that the error in the assignment of channel names occurs in
> reading the ERP file.
>
> Robert

From maris at NICI.RU.NL Mon Nov 21 15:59:10 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Mon, 21 Nov 2005 15:59:10 +0100
Subject: Problem with data from BESA
Message-ID:

Hi Michael,

> I have however another question regarding the interpretation of
> clusteranalysis results.
> Am I correct in saying that the family-wise error
> rate (alpha) tells me the risk of obtaining a false positive statement
> of the type that I specify beforehand with alphathresh? For example,
> if I specify an alphathresh of 0.1 (let's call this a trend) in the
> first pass of the analysis (multiple testing) before clustering, then
> the cluster randomization using alpha = 0.05 tells me that I run a 5%
> risk of wrongly identifying at least one of these 'trend clusters'.
> (Or else, if the above is incorrect, what is the reason not to use a
> very lenient criterion in the first pass, to feed the cluster
> randomization with as many clusters as possible?)

The issue is statistical power (sensitivity). If you use a very lenient criterion (say, alphathresh = 0.2) to select candidate cluster members, this will result in large clusters purely by chance. If the effect in your data is strong but confined to a small number of sensors and timepoints, clusterrandanalysis may not pick it up, because the reference distribution is dominated by these weak but large "chance clusters". You will not encounter this problem if you make alphathresh stricter (i.e. lower). On the other hand, a strict (low) alphathresh will miss weak but widespread effects.

To sum up, alphathresh determines the relative sensitivity to "strong but small" versus "weak but large" clusters.

greetings,

Eric Maris

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 23 13:26:13 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Wed, 23 Nov 2005 13:26:13 +0100
Subject: Problem with data from BESA
In-Reply-To: <00c001c5eeac$20c1bb00$d92cae83@fcdc195>
Message-ID:

Hi Eric,

thank you for the explanation, things are much clearer now.
In the meantime I have encountered another problem with clusterrandanalysis: when using maxsum as a test statistic everything works fine, but with maxsumtminclustersize and the same specified (maximum) alpha the fields clusrand.posclusters and clusrand.negclusters stay empty, although there seem to be large enough clusters in posclusterslabelmat and negclusterslabelmat (I used cfg.smallestcluster=2). Is this a bug, or does the use of maxsumtminclustersize somehow reduce sensitivity? (From the description in the 2005 tutorial I thought it was similar to using FDR or Holm's method for computing critical p-values.) Maybe I am also missing something on a conceptual level that makes the information in posclusters invalid if I use maxsumtminclustersize as a test statistic? Below I pasted the code that produced this behaviour.

%Clusterrandomization analysis
cfg = [];
cfg.elec = elec;
cfg.statistic = 'depsamplesT';
cfg.alphathresh = 0.05;
cfg.makeclusters = 'yes';
cfg.minnbchan = 1; % 1 neighbour, i.e. 2 channels
cfg.smallestcluster = 2;
cfg.clusterteststat = 'maxsumtminclustersize'; % replace with maxsum to get lots of entries in clusrand.posclusters
cfg.onetwo = 'twosided';
cfg.alpha = 0.05;
cfg.nranddraws = 1000;
cfg.latency = [0.40 0.65];
[clusrand] = clusterrandanalysis(cfg, AMdata, vAMdata);
clusrand.elec = elec;

Best, Michael Eric Maris schrieb: > Hi Michael, > > >> I have however another question regarding the interpretation of >> clusteranalysis results. Am I correct in saying that the family wise >> error rate (alpha) tells me the risk in obtaining a false positive >> statement of the type that I specify previously with alphatresh? For >> example if I specify alphathresh of 0.1 (lets calls this trend for >> abbreviation) in the first pass of the analysis (multiple testing) >> before clustering then the clusterrandomization using alpha =0.05 >> tells me that I run a risk of 5% of identifying wrongly at least one >> of these 'trend clusters'.
>> (Or else, if the above is incorrect what is the reason not to use a >> very lenient criterion in the first pass to feed the >> clusterrandomization with as many clusters as possible?) > > > > The issue is statistical power (sensitivity). If you use a very > lenient criterion (say, alphathresh=0.2) to select candidate cluster > members, this will result in large clusters purely by chance. If the > effect in your data is strong but confined to a small number of > sensors and timepoints, clusterrandanalysis may not pick it up. This > is because the reference distribution is dominated by these weak but > large "chance clusters". You will not encounter this problem if you > put alphathresh higher. On the other hand, a high aphathresh will miss > weak but widespread effects. > > To sum up, alphathresh determines the relative sensitivity to "strong > but small" and "weak but large" clusters. > > > greetings, > > Eric Maris > > . > From maris at NICI.RU.NL Wed Nov 23 13:56:28 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Wed, 23 Nov 2005 13:56:28 +0100 Subject: Problem with data from BESA Message-ID: Hi Michael, > thank you for the explanation, things are much clearer now. In the > meantime I have encountered another problem with clusterrandanalysis: > When using maxsum as a test statistic everything works fine, but using > maxsumtminclustersize with the same specified (maximum) alpha the fields > clusrand.posclusters and clusrand.negclusters stay empty - although there > seem to be large enough clusters in posclusterslabelmat and > negclusterslabelmat (I used cfg.smallestcluster=2..) . Is this a bug or > does the use of maxsumtminclustersize somehow reduce sensitivity (- from > the description in the 2005 tutorial I thought it is similar to using FDR > or 'Holmes' method for computing critical p values?). 
Maybe I am also > missing something on a conceptual level that makes the information in > posclusters invalid if I use maxsumtminclustersize as a test statistic ?? Yes, there is a bug in Clusterrandanalysis when the option cfg.clusterteststat = 'maxsumtminclustersize' is used. I know where it is, but I need some time to fix it (due to dependencies in the code). Give me a week to fix it. greetings, Eric > Below I pasted the code that produced this behaviour. > > %Clusterrandomization analysis > cfg=[]; > cfg.elec =elec; > cfg.statistic = 'depsamplesT'; > cfg.alphathresh = 0.05; > cfg.makeclusters = 'yes'; > cfg.minnbchan = 1; %1 neighbour i.e. 2 channels > cfg.smallestcluster = 2; > cfg.clusterteststat = 'maxsumtminclustersize'; % replace with maxsum to > get lots of entries in clusrand.posclusters > cfg.onetwo = 'twosided'; > cfg.alpha = 0.05; > cfg.nranddraws = 1000; > cfg.latency = [0.40 0.65]; > [clusrand] = clusterrandanalysis(cfg, AMdata, vAMdata); > clusrand.elec=elec; > > > Best, > Michael > > Eric Maris schrieb: > >> Hi Michael, >> >> >>> I have however another question regarding the interpretation of >>> clusteranalysis results. Am I correct in saying that the family wise >>> error rate (alpha) tells me the risk in obtaining a false positive >>> statement of the type that I specify previously with alphatresh? For >>> example if I specify alphathresh of 0.1 (lets calls this trend for >>> abbreviation) in the first pass of the analysis (multiple testing) >>> before clustering then the clusterrandomization using alpha =0.05 tells >>> me that I run a risk of 5% of identifying wrongly at least one of these >>> 'trend clusters'. >>> (Or else, if the above is incorrect what is the reason not to use a very >>> lenient criterion in the first pass to feed the clusterrandomization >>> with as many clusters as possible?) >> >> >> >> The issue is statistical power (sensitivity). 
If you use a very lenient >> criterion (say, alphathresh=0.2) to select candidate cluster members, >> this will result in large clusters purely by chance. If the effect in >> your data is strong but confined to a small number of sensors and >> timepoints, clusterrandanalysis may not pick it up. This is because the >> reference distribution is dominated by these weak but large "chance >> clusters". You will not encounter this problem if you put alphathresh >> higher. On the other hand, a high alphathresh will miss weak but >> widespread effects. >> >> To sum up, alphathresh determines the relative sensitivity to "strong but >> small" and "weak but large" clusters. >> >> >> greetings, >> >> Eric Maris >> >> . >> From marie at PSY.GLA.AC.UK Fri Nov 25 11:50:37 2005 From: marie at PSY.GLA.AC.UK (Marie Smith) Date: Fri, 25 Nov 2005 10:50:37 +0000 Subject: meg realign In-Reply-To: <4837D13C-376B-4167-991F-1AE32768B562@fcdonders.ru.nl> Message-ID: Hi, I was wondering if someone could clarify for me in some more detail how the meg-realign function works. From the function help it seems to perform a coarse source reconstruction and then re-project back to a standard gradiometer array. I am most curious about how this coarse source representation is implemented. Thanks, Marie Smith From r.oostenveld at FCDONDERS.RU.NL Mon Nov 28 15:53:00 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Mon, 28 Nov 2005 15:53:00 +0100 Subject: meg realign In-Reply-To: <53C0453E-2753-4AF7-A2C4-BEC35B4631CC@psy.gla.ac.uk> Message-ID: Hi Marie, On 25-nov-2005, at 11:50, Marie Smith wrote: > I was wondering if someone could clarify for me in some more detail > how the meg-realign function works. From the function help it seems > to perform a coarse source reconstruction and then re-project back > to a standard gradiometer array. I am most curious about how this > coarse source representation is implemented.
You are right, it involves projecting the measured activity on a sheet of dipoles that approximates the cortex, followed by a forward computation of the field of those dipoles at the template gradiometer locations. The algorithm is described in combination with a simulation study in the paper T.R. Knosche, Transformation of whole-head MEG recordings between different sensor positions. Biomed Tech (Berl). 2002 Mar;47(3):59-62. A similar algorithm, with the main difference being a different source model, is described in the appendix of the paper de Munck JC, Verbunt JP, Van't Ent D, Van Dijk BW. The use of an MEG device as 3D digitizer and motion monitoring system. Phys Med Biol. 2001 Aug;46(8):2041-52. I will send you a pdf version of both papers in a separate mail addressed directly to you. best regards, Robert From h.f.kwok at BHAM.AC.UK Tue Nov 29 16:07:30 2005 From: h.f.kwok at BHAM.AC.UK (Hoi Fei Kwok) Date: Tue, 29 Nov 2005 16:07:30 +0100 Subject: importing my own data Message-ID: Dear Robert, In the FAQ section of the FieldTrip website, it is said that if I import my own data, I have to define the following fields: data.label, data.trial, data.fsample, data.time and data.cfg. The first four are straightforward enough. However, how do I set up data.cfg? What are the subfields? Regards, Hoi Fei From r.oostenveld at FCDONDERS.RU.NL Wed Nov 30 17:10:33 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 30 Nov 2005 17:10:33 +0100 Subject: importing my own data In-Reply-To: Message-ID: Hi Hoi Fei, data.cfg can be empty (i.e. []) in your case. It is used to remember the configuration details of all steps that you take in FT. Some functions assume that data.cfg is present and want to copy it over in their output (e.g. timelock.cfg.previous), therefore it should be present to be sure. The most recent versions of FT however should check whether it is present or not, and only attempt to copy it if present.
best, Robert PS Given that you have biosemi amplifiers, you are probably working with the BDF format. It would also be nice to implement that natively in fieldtrip. That should not be too hard, if you would be interested in taking that approach instead of constructing a data structure, please contact me directly. On 29-nov-2005, at 16:07, Hoi Fei Kwok wrote: > Dear Robert, > > In the FAQ section of the FieldTrip website, it is said that if I > import my > own data, I have to define the following fields: data.label, > data.trial, > data.fsample, data.time and data.cfg. The first four are > straighforward > enough. However, how to set up data.cfg. What are the subfields? > > Regards, > Hoi Fei > From r.oostenveld at FCDONDERS.RU.NL Mon Nov 7 14:34:31 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Mon, 7 Nov 2005 14:34:31 +0100 Subject: Source analysis error In-Reply-To: Message-ID: Hi Philip, I have tried to replicate your problem. I do not have the same dataset, but a 151 channel dataset gave the same error. On 3-nov-2005, at 22:19, Philip C. Bikle wrote: > I get the following error when attempting to source analysis: > > Warning: cross-spectral density matrix is rank deficient >> In fieldtrip-20051027/private/beamformer at 253 > In sourceanalysis at 701 > ??? Error using ==> mtimes > Inner matrix dimensions must agree. > > Error in ==> fieldtrip-20051027/private/beamformer at 365 > filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf; I looked in the code and set a breakpoint at the corresponding line. It turned out that lf2 was 184x3 instead of 151x3. Our 151 channel system has 184 channels in total, including the reference channels. > cfg = []; > cfg.xgrid = -12:1:12; > cfg.ygrid = -10:1:10; > cfg.zgrid = -3:1:14; > cfg.dim = [length(cfg.xgrid) length(cfg.ygrid) length(cfg.zgrid)]; > N=prod(cfg.dim); > cfg.inside = 1:N; > cfg.outside = []; (Side note, this should be cfg.grid.inside and cfg.grid.outside to have effect.) > cfg.hdmfile = strcat(ds, '/localSpheres.hdm'); > [grid] = PREPARE_LEADFIELD(cfg, freq); It turns out that you are pre-computing the leadfields on all channels, including the reference channels.
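The shape clash behind the "Inner matrix dimensions must agree" error can be reproduced with plain matrix algebra. This is a NumPy sketch, not the actual FieldTrip code; only the channel counts (151 data channels vs. 184 including references) come from the thread, and the matrices are dummy stand-ins:

```python
import numpy as np

n_meg, n_all = 151, 184  # MEG channels in the data vs. all channels incl. references

invCf = np.eye(n_meg)              # stand-in for the inverted 151x151 cross-spectral density
lf2_wrong = np.ones((n_all, 3))    # leadfield mistakenly computed for all 184 channels
lf2_right = np.ones((n_meg, 3))    # leadfield restricted to the 151 MEG channels

mismatch = False
try:
    # filt2 = pinv(lf2' * invCf * lf2) * lf2' * invCf  -> (3x184) times (151x151) fails
    _ = lf2_wrong.T @ invCf
except ValueError:
    mismatch = True
print('dimension mismatch raised:', mismatch)

# With a leadfield on the same channels as the CSD, the DICS filter line goes through:
filt2 = np.linalg.pinv(lf2_right.T @ invCf @ lf2_right) @ lf2_right.T @ invCf
print(filt2.shape)  # (3, 151): one 151-channel spatial filter per dipole orientation
```

The fix Robert gives below amounts to making the leadfield's channel dimension match the cross-spectral density's before the filter is computed.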
Instead, you should only compute it on the channels which you want to use for source analysis. If you do cfg.channel = 'MEG' in prepare_leadfield, the right channels will be selected. best, Robert From marco.buiatti at GMAIL.COM Tue Nov 8 12:25:51 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Tue, 8 Nov 2005 12:25:51 +0100 Subject: about cluster randomization analysis In-Reply-To: <00b301c5dbd0$8cab3c40$de2cae83@fcdc195> Message-ID: Dear Vladimir and Eric, thank you for your accurate responses. I fully understand from your arguments that temporally zooming on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test. Do you think it would be correct to slide a relatively large (width of 200ms? 400ms? to be chosen a priori of course) window through the epochs and compute cluster randomization analysis for each latency to explore dubious significant t-test clusters? Another related question: I computed a post-hoc non kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? About the minimum number of channels: I understand and agree that it is set only in space. Maybe it would help to say this explicitly in the tutorial.
About the reference: my non kosher approach does not include changing the reference to chase a significant effect! My previous e-mail was probably misleading about that. Thank you and have a good day, Marco On 10/28/05, Eric Maris wrote: > > Dear Marco, > > > > The procedure I am following now is a sort of two-steps method: in the > > first place, I choose a wide time interval and a low minimum number of > > channels. I end up with many clusters that are far from being > > significative. I then shorten the time interval to include just one > > cluster (starting from the most significant one), and increase the > minimum > > number of channels, and run the analysis again. In this case, I > eventually > > got a significative cluster where I was expecting it from a simple > > observation of the t-test. Do you think this procedure is right or am I > > doing something wrong? Is it correct to temporally focus on a cluster to > > check its significance? > > > Clusterrandanalysis only controls the false alarm (type I error) rate if > you > choose the "tuning parameters" (latency interval, channel subset, the > minnbchan-parameter; and if you use on TFRs, also the frequency interval) > independent of the data. Instead, if you play around with these tuning > parameters until you find a cluster whose p-value exceeds the critical > alpha-level, you are not controlling the false alarm rate. In this case, > the > chosen tuning parameters depend on the data. > > An extreme example illustrates this even better. Assume you calculate > T-statistics for all (channel, time point)-pairs and you select the pair > with the largest T-statistic. Then, you select the latency interval that > only contains this time point and the channel subset that only contains > this > channel. With these tuning parameters, you reduce your data to a single > cell > in the spatiotemporal matrix, and clusterrrandanalysis will produce a > p-value that is very close to the p-value of a T-test. 
Since you have > selected this (channel, time point)-pair on the basis of its T-statistic, > this p-value is strongly biased. > > > > Another couple of questions: > > 1) Minnbchan. I understood it is the minimum number of significative > > neighbor (channel,time) points for a (channel,time) point to enter a > > cluster, no matter if adjacency is more in channel space or time > > direction. Am I right? Since time and channel space are quite different > > dimension, would it be better to set a minimum channel number separately > > for the two? > > Minnbchan should also be chosen independent of the data. I introduced this > tuning parameter because it turned out that in 3-dimensional analyses on > TFRs (involving the dimensions time, space (i.e., sensors) and frequency), > sometimes a cluster appeared that consisted of two or more 3-dimensional > "blobs" that were connected by a single (channel, time, > frequency)-element. > From a physiological perspective, such a cluster does not make sense. To > remove these physiologically implausible (and therefore probably random) > connections, I introduced the minnbchan parameter. Because of this > physiological rationale, I apply the minimum number criterium to the > spatial, and not to the temporal dimension. Short-lived phenomena are very > well possible from a physiological perspective, whereas effects at > spatially > isolated sensors are not. > > > > 2) Maybe because my data are average-referenced, I often end up with a > > positive and negative cluster emerging almost at the same time. Have you > > thought about any way to include the search of dipole-like > configurations? > > I have not thought about it, but it certainly makes sense to incorporate > biophysical constraints (such dipolar patterns) in the test statistic. > > One should be aware of the fact that different hypotheses are tested > before > and after rereferencing. This is physical and not a statistical issue. 
As > you most certainly know, EEG-signals are potential DIFFERENCES and > therefore > the underlying physiological events that are measured by EEG depend on the > reference channel(s). If the experimental manipulation affects the current > reference channel, then rereferencing to another channel (or set of > channels) that is not affected by the experimental manipulation makes a > difference for the result of the statistical test. > > > greetings, > > Eric Maris > -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From maris at NICI.RU.NL Tue Nov 8 13:17:50 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Tue, 8 Nov 2005 13:17:50 +0100 Subject: about cluster randomization analysis Message-ID: Hi Marco, thank you for your accurate responses. I fully understand from your arguments that temporally zooming on clusters is definitely wrong. Still, I wonder whether and how it is possible to use cluster randomization analysis in cases in which it is difficult to formulate a precise hypothesis about when to expect an effect (for example, in infants), or cases in which an unexpected effect arises from a t-test. Do you think it would be correct to slide a relatively large (width of 200ms? 400ms? to be chosen a priori of course) window through the epochs and compute cluster randomization analysis for each latency to explore dubious significant t-test clusters? If you have no hypothesis about where to expect an effect, you should use the complete latency window in which it may occur.
Of course, this will reduce the sensitivity (statistical power) of your test (in comparison with the situation in which you do know when the effect can occur). As a rule, prior knowledge increases sensitivity. Another related question: I computed a post-hoc non kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? This phenomenon is not an instability; it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10 second trial). greetings, Eric Maris On 10/28/05, Eric Maris wrote: Dear Marco, > The procedure I am following now is a sort of two-steps method: in the > first place, I choose a wide time interval and a low minimum number of > channels. I end up with many clusters that are far from being > significative. I then shorten the time interval to include just one > cluster (starting from the most significant one), and increase the minimum > number of channels, and run the analysis again.
In this case, I eventually > got a significative cluster where I was expecting it from a simple > observation of the t-test. Do you think this procedure is right or am I > doing something wrong? Is it correct to temporally focus on a cluster to > check its significance? Clusterrandanalysis only controls the false alarm (type I error) rate if you choose the "tuning parameters" (latency interval, channel subset, the minnbchan-parameter; and if you use it on TFRs, also the frequency interval) independent of the data. Instead, if you play around with these tuning parameters until you find a cluster whose p-value falls below the critical alpha-level, you are not controlling the false alarm rate. In this case, the chosen tuning parameters depend on the data. An extreme example illustrates this even better. Assume you calculate T-statistics for all (channel, time point)-pairs and you select the pair with the largest T-statistic. Then, you select the latency interval that only contains this time point and the channel subset that only contains this channel. With these tuning parameters, you reduce your data to a single cell in the spatiotemporal matrix, and clusterrandanalysis will produce a p-value that is very close to the p-value of a T-test. Since you have selected this (channel, time point)-pair on the basis of its T-statistic, this p-value is strongly biased. > Another couple of questions: > 1) Minnbchan. I understood it is the minimum number of significative > neighbor (channel,time) points for a (channel,time) point to enter a > cluster, no matter if adjacency is more in channel space or time > direction. Am I right? Since time and channel space are quite different > dimension, would it be better to set a minimum channel number separately > for the two? Minnbchan should also be chosen independent of the data.
I introduced this tuning parameter because it turned out that in 3-dimensional analyses on TFRs (involving the dimensions time, space (i.e., sensors) and frequency), sometimes a cluster appeared that consisted of two or more 3-dimensional "blobs" that were connected by a single (channel, time, frequency)-element. From a physiological perspective, such a cluster does not make sense. To remove these physiologically implausible (and therefore probably random) connections, I introduced the minnbchan parameter. Because of this physiological rationale, I apply the minimum number criterion to the spatial, and not to the temporal dimension. Short-lived phenomena are entirely possible from a physiological perspective, whereas effects at spatially isolated sensors are not. > 2) Maybe because my data are average-referenced, I often end up with a > positive and negative cluster emerging almost at the same time. Have you > thought about any way to include the search of dipole-like configurations? I have not thought about it, but it certainly makes sense to incorporate biophysical constraints (such as dipolar patterns) in the test statistic. One should be aware of the fact that different hypotheses are tested before and after rereferencing. This is physical and not a statistical issue. As you most certainly know, EEG-signals are potential DIFFERENCES and therefore the underlying physiological events that are measured by EEG depend on the reference channel(s). If the experimental manipulation affects the current reference channel, then rereferencing to another channel (or set of channels) that is not affected by the experimental manipulation makes a difference for the result of the statistical test.
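Eric's remark that rereferencing changes the hypothesis under test can be made concrete with three hypothetical electrodes (a plain NumPy sketch with made-up numbers, not real EEG data): if the manipulation shifts the potential at the reference electrode itself, every rereferenced channel inherits a condition difference.

```python
import numpy as np

# "True" potentials at three electrodes (hypothetical numbers, arbitrary units).
# The experimental manipulation affects ONLY electrode 0.
cond_a = np.array([1.0, 4.0, 2.0])
cond_b = np.array([3.0, 4.0, 2.0])   # +2 at electrode 0 only

# Recordings referenced to the affected electrode 0:
ref0_a, ref0_b = cond_a - cond_a[0], cond_b - cond_b[0]
print(ref0_b - ref0_a)   # [ 0. -2. -2.]  -> every other channel shows an "effect"

# Recordings referenced to the unaffected electrode 2:
ref2_a, ref2_b = cond_a - cond_a[2], cond_b - cond_b[2]
print(ref2_b - ref2_a)   # [ 2.  0.  0.]  -> the effect is confined to electrode 0
```

The same underlying physiology yields a widespread effect under one reference and a focal effect under another, so a statistical test on the rereferenced data is testing a different hypothesis.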
greetings, Eric Maris -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.buiatti at GMAIL.COM Tue Nov 8 13:58:57 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Tue, 8 Nov 2005 13:58:57 +0100 Subject: about cluster randomization analysis In-Reply-To: <008f01c5e45e$70087b70$d72cae83@fcdc195> Message-ID: Hi Eric, On 11/8/05, Eric Maris wrote: > > Hi Marco, > > thank you for your accurate responses. I fully understand from your > arguments that temporally zooming on clusters is definitely wrong. Still, I > wonder whether and how it is possible to use cluster randomization analysis > cases in which it is difficult to formulate a precise hypothesis about when > to expect an effect (for example, in infants), or cases in which an > unexpected effect arises from a t-test. Do you think it would be correct to > slide a relatively large (width of 200ms? 400ms? to be chosen a priori of > course) window through the epochs and compute cluster randomization analysis > for each latency to explore dubious significant t-test clusters? > > If you have no hypothesis about where to expect an effect, you should use > the complete latency window in which it may occur. Of course, this will > reduce the sensitivity (statistical power) of your test (in comparison with > the situation in which you do know when the effect can occur). As a rule, > prior knowledge increases sensitivity. 
> OK > Another related question: I computed a post-hoc non kosher tuning of the > window around the most significative cluster in my data, and I saw that it > is significative (p<0.05) if the window edges exceed of about 50 ms the > cluster edges (since the cluster is about 70 ms long, the whole window is > about 170 ms long); but if I take longer windows, the p-value increases > quite rapidly (I'm running at least 500 random draws for each window, and > checking that the result does not depend on the number of draws). Do you > have such instabilities in your data or should I think that the effect > relative to my cluster is definitely too weak? Or maybe my data are not > clean enough? > > This phenomenon is not an instability, it is what I would expect. Imagine > your trials are 10 seconds long and there is an effect in the latency window > between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). > If you ask clusterrandanalysis to compare the conditions over the complete > trial length, it may very well miss the effect in the window between 1.3 and > 1.35 seconds, because it has to use a large critical value in order to > control for false positives in the time window where there is no effect ( > i.e., 99 percent of the 10 second trial). > > I also expected the significance to decrease while increasing the time window for the same reason, but I was surprised to see the p-value increase so rapidly. I may pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (a few other details: I have 128 electrodes, 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature) A related question is: how much do artifacts influence this kind of test?
thank you again, Marco -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From maris at NICI.RU.NL Tue Nov 8 15:54:22 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Tue, 8 Nov 2005 15:54:22 +0100 Subject: about cluster randomization analysis Message-ID: Hi Marco, Another related question: I computed a post-hoc, non-kosher tuning of the window around the most significant cluster in my data, and I saw that it is significant (p<0.05) if the window edges exceed the cluster edges by about 50 ms (since the cluster is about 70 ms long, the whole window is about 170 ms long); but if I take longer windows, the p-value increases quite rapidly (I'm running at least 500 random draws for each window, and checking that the result does not depend on the number of draws). Do you have such instabilities in your data or should I think that the effect relative to my cluster is definitely too weak? Or maybe my data are not clean enough? This phenomenon is not an instability, it is what I would expect. Imagine your trials are 10 seconds long and there is an effect in the latency window between 1.3 and 1.35 seconds (i.e., less than 1 percent of trial length). If you ask clusterrandanalysis to compare the conditions over the complete trial length, it may very well miss the effect in the window between 1.3 and 1.35 seconds, because it has to use a large critical value in order to control for false positives in the time window where there is no effect (i.e., 99 percent of the 10 second trial). 
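This critical-value argument can be illustrated with a toy max-statistic simulation. The sketch below is illustrative Python only (not FieldTrip's clusterrandanalysis, which clusters over channels and timepoints): under pure noise, the 95% critical value of the maximum statistic over a wide search window is necessarily at least as large as the one over a narrow sub-window, so a brief real effect has a higher bar to clear when the window grows.

```python
import numpy as np

rng = np.random.default_rng(42)
n_perm, n_time = 2000, 100  # random permutations x timepoints, null data only

# surrogate t-statistics at every timepoint for each permutation
null_t = rng.standard_normal((n_perm, n_time))

# max |t| over a narrow 10-sample window vs. over the full 100-sample window
max_narrow = np.abs(null_t[:, :10]).max(axis=1)
max_wide = np.abs(null_t).max(axis=1)

# 95% critical values of the two max-statistic distributions
crit_narrow = np.quantile(max_narrow, 0.95)
crit_wide = np.quantile(max_wide, 0.95)
# crit_wide >= crit_narrow always, since the wide-window maximum
# dominates the narrow-window maximum in every permutation
```

With these settings the narrow-window critical value comes out around 2.8 and the wide-window one around 3.5, which is the "larger critical value" referred to above.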
I also expected the significance to decrease while increasing the time window, for the same reason, but I was surprised to see the p-value increase so rapidly. Let me pose the question more clearly: from your experience, would you say that the effect I described can be considered significant or not? (A few other details: I have 128 electrodes, 8 subjects, and the window I'm choosing is the window where I expect an effect from the literature.) A related question is: how much do artifacts influence this kind of test? The question of significance can only be answered on the basis of probability calculations. My own experience is irrelevant in this respect. With respect to the artifacts, you must be aware of the fact that the power of statistical tests is adversely affected by eye-blinks and all other non-neuronal factors in the signal. greetings, Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.oostenveld at FCDONDERS.RU.NL Wed Nov 9 09:28:13 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 9 Nov 2005 09:28:13 +0100 Subject: about cluster randomization analysis In-Reply-To: <22f732b0511080325i731f02c4odfc1776ef1503e56@mail.gmail.com> Message-ID: Hi Marco, On 8-nov-2005, at 12:25, Marco Buiatti wrote: > Do you think it would be correct to slide a relatively large (width > of 200ms? 400ms? to be chosen a priori of course) window through > the epochs and compute cluster randomization analysis for each > latency to explore dubious significant t-test clusters? You can use such an approach, but then you have to consider each position of the window that you are sliding as a separate statistical comparison of the data in the experimental conditions. 
The multiple comparison problem over channels and timepoints within the window is then automatically taken care of by clusterrandanalysis, but the multiple comparisons that arise due to the multiple locations of the window in which you are "interrogating" your data are not treated by clusterrandanalysis. That means that, for this approach to be statistically completely sound, you should do a Bonferroni correction on the alpha threshold, dividing it by the number of window positions. Probably you will lose a lot of your statistical power, especially if you slide the window in small steps, so I doubt whether it is useful. Given that you have expressed your doubts about potential artifacts in some of your subjects and the influence of the artifacts on the outcome of the statistical test, I would guess that putting more effort into making the data itself cleaner is probably more worthwhile. best regards, Robert ======================================================= Robert Oostenveld, PhD F.C. Donders Centre for Cognitive Neuroimaging Radboud University Nijmegen phone: +31-24-3619695 http://www.ru.nl/fcdonders/ From marco.buiatti at GMAIL.COM Wed Nov 9 17:57:43 2005 From: marco.buiatti at GMAIL.COM (Marco Buiatti) Date: Wed, 9 Nov 2005 17:57:43 +0100 Subject: about cluster randomization analysis In-Reply-To: <023AE4AB-BFD6-45EE-ADB8-0A80E3905DE3@fcdonders.ru.nl> Message-ID: Dear FieldTrip Masters, thank you again for your clear and rapid answers. Another question about clusterrandanalysis. As I told you, I'm performing a cluster randomization test for a within-subject experiment, using a two-sided t-test as pair statistics. The tutorial says that clustering is performed separately for thresholded positive and negative t-statistics, and that the critical value for the cluster level statistics is also two-sided. 
I understood that the positive(negative) critical value corresponds to the 95% portion of the randomization distribution of the maximum(minimum) of the positive(negative) cluster statistics. Then, why do I obtain two identical (in absolute value) critical values? What am I missing? thank you, Marco On 11/9/05, Robert Oostenveld wrote: > > Hi Marco, > > On 8-nov-2005, at 12:25, Marco Buiatti wrote: > > Do you think it would be correct to slide a relatively large (width > > of 200ms? 400ms? to be chosen a priori of course) window through > > the epochs and compute cluster randomization analysis for each > > latency to explore dubious significant t-test clusters? > > You can use such an approach, but then you have to consider each > position of the window that you are sliding as a separate statistical > comparison of the data in the experimental conditions. The multiple > comparison problem over channels and timepoints within the window is > then automatically taken care of by clusterrandanalysis, but the > multiple comparisons that arise due to the multiple locations of the > window in which you are "interrogating" your data are not treated by > clusterrandanalysis. That means that, for this approach to be > statistically completely sound, you should do a Bonferroni correction > on the alpha threshold, dividing it by the number of window positions. > > Probably you will lose a lot of your statistical power, especially if > you slide the window in small steps, so I doubt whether it is > useful. Given that you have expressed your doubts about potential > artifacts in some of your subjects and the influence of the artifacts > on the outcome of the statistical test, I would guess that putting > more effort into making the data itself cleaner is probably more > worthwhile. > > best regards, > Robert > > > ======================================================= > Robert Oostenveld, PhD > F.C. 
Donders Centre for Cognitive Neuroimaging > Radboud University Nijmegen > phone: +31-24-3619695 > http://www.ru.nl/fcdonders/ > -- Marco Buiatti - Post Doc ************************************************************** Cognitive Neuroimaging Unit - INSERM U562 Service Hospitalier Frederic Joliot, CEA/DRM/DSV 4 Place du general Leclerc, 91401 Orsay cedex, France Telephone: +33 1 69 86 77 65 Fax: +33 1 69 86 78 16 E-mail: marco.buiatti at gmail.com Web: www.unicog.org *************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.medendorp at NICI.RU.NL Thu Nov 10 12:31:24 2005 From: p.medendorp at NICI.RU.NL (Pieter Medendorp) Date: Thu, 10 Nov 2005 12:31:24 +0100 Subject: Comparing waveforms In-Reply-To: <00d301c5dae1$ce26c030$de2cae83@fcdc195> Message-ID: Eric, may I ask you a question: I have 10 subjects, each with their own data set. For each subject, I look for correlations in their data, in two different ways. So this yields 2 correlation coefficients per subject. I have 10 subjects, and I want to compare whether the 10 correlation coefficients found in one way differ from the 10 found in the other way. Do you know the appropriate test (Fisher or the like?). Thanks. Pieter -------------- next part -------------- An HTML attachment was scrubbed... URL: From CAFJ.Miller at PSY.UMCN.NL Mon Nov 14 10:55:17 2005 From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller) Date: Mon, 14 Nov 2005 10:55:17 +0100 Subject: clusterrandanalysis Message-ID: Dear Eric, I have two questions concerning clusterrandanalysis: First, I performed a frequency analysis with Brain Vision Analyzer and exported these data into an Excel and SPSS file. How can I import these data into Matlab in order to obtain a format on which I can perform a Cluster-level Randomization Test for a Within Subjects experiment? 
Second, I want to compare three conditions, two drug conditions and a placebo condition. In all conditions, a baseline measurement was made before drug intake. I want to take into account these baseline measurements. In a parametric test like MANOVA this is usually done with a covariate or the introduction of an extra factor (time). How can I perform this in clusterrandanalysis? Thanks in advance, Christopher Miller, MSc Unit for Clinical Psychopharmacology and Neuropsychiatry Department of Psychiatry 974 Radboud University Nijmegen Medical Centre PO Box 9101 6500 HB Nijmegen The Netherlands Tel.: + 31 24 3613204 Email: CAFJ.Miller at psy.umcn.nl From maris at NICI.RU.NL Mon Nov 14 17:13:01 2005 From: maris at NICI.RU.NL (Eric Maris) Date: Mon, 14 Nov 2005 17:13:01 +0100 Subject: clusterrandanalysis Message-ID: Hi Christopher, > I have two questions concerning clusterrandanalysis: > > > First, I performed a frequency analysis with Brain Vision Analyzer and > exported these data into an Excel and SPSS file. How can I import these > data into Matlab in order to obtain a format on which I can perform a > Cluster-level Randomization Test for a Within Subjects experiment? This is not a question about clusterrandanalysis but about how to import preprocessed data from another package such that it is compatible with Fieldtrip functions. Although I am not an expert in these issues (Robert Oostenveld is our expert), I think it is complicated and intellectually not very satisfying (because of all the bookkeeping that is probably involved). I advise you to import your non-preprocessed BVA data files into Fieldtrip (we have import routines for this) and do your frequency analysis in Fieldtrip. Besides sound statistics, Fieldtrip also offers state-of-the-art spectral density estimation. Learning the Fieldtrip function freqanalysis will probably take less time than importing your BVA power spectra. 
> > Second, I want to compare three conditions, two drug conditions and a > placebo condition. In all conditions, a baseline measurement was made > before drug intake. I want to take into account these baseline measurements. > In a parametric test like MANOVA this is usually done with a covariate or > the introduction of an extra factor (time). How can I perform this in > clusterrandanalysis? 1. Divide the activation power by the baseline power (and, optionally, take the log of this ratio) and submit this to clusterrandanalysis. 2. Compare each of the drug conditions with the placebo condition (using a T-statistic) with respect to this baseline-normalized dependent variable. greetings, Eric Maris From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 14 17:44:48 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Mon, 14 Nov 2005 17:44:48 +0100 Subject: problems importing elp files Message-ID: Dear List Users, I'm trying to import some .avr files exported from BESA. However, the read_besa_avr function returns an error like this:

??? Error using ==> strrep
Cell elements must be character arrays.
Error in ==> fieldtrip-20051113\private\read_besa_avr at 61
avr.label = strrep(lbl.textdata(:,2) ,'''', '');
Error in ==> besa2fieldtrip at 44
tmp = read_besa_avr(filename);

My .elp files look like this:

EEG Fp1' -89.51 -74.20
EEG Fpz' 89.49 90.00
EEG Fp2' 89.51 74.20
EEG Nz' 108.96 90.00
EEG AF9' -113.26 -50.72
EEG AF7' -89.61 -55.88
EEG AF3' -73.15 -69.74
EEG AFz' 67.74 90.00
EEG AF4' 73.15 69.74
EEG AF8' 89.61 55.88
EEG AF10' 113.27 50.72
EEG F9' -113.98 -38.43
EEG F7' -89.65 -40.32
EEG F5' -72.42 -45.38
EEG F3' -58.13 -55.16
EEG F1' -49.40 -70.86
EEG Fz' 46.01 90.00
EEG F2' 49.40 70.86
EEG F4' 58.13 55.16
EEG F6' 72.42 45.38
EEG F8' 89.65 40.32
EEG F10' 113.98 38.43
(truncated...)
When I look into the intermediate output of lbl = importdata(elpfile) inside the crashing function read_besa_avr, I get something like this:

lbl =
        data: [71x1 double]
    textdata: {81x3 cell}

[1x23 char] [] []
[1x21 char] [] []
[1x21 char] [] []
[1x22 char] [] []
[1x24 char] [] []
[1x23 char] [] []
[1x23 char] [] []
[1x21 char] [] []
[1x21 char] [] []
[1x21 char] [] []
'EEG' 'AF10' '113.27'
'EEG' 'F9' '-113.98'
'EEG' 'F7' '-89.65'
'EEG' 'F5' '-72.42'
'EEG' 'F3' '-58.13'
'EEG' 'F1' '-49.40'
'EEG' 'Fz' '46.01'
'EEG' 'F2' '49.40'
'EEG' 'F4' '58.13'
'EEG' 'F6' '72.42'
'EEG' 'F8' '89.65'
'EEG' 'F10' '113.98'
'EEG' 'FT9' '-114.79'
'EEG' 'FT7' '-89.84'
'EEG' 'FC5' '-67.69'
'EEG' 'FC3' '-46.94'
(truncated...)

Does anybody know what's wrong here? Thank you very much for your help, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 From r.oostenveld at FCDONDERS.RU.NL Tue Nov 15 22:08:05 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Tue, 15 Nov 2005 22:08:05 +0100 Subject: problems importing elp files In-Reply-To: <4378BF00.6060006@mpih-frankfurt.mpg.de> Message-ID: Hi Michael On 14-nov-2005, at 17:44, Michael Wibral wrote: > I'm trying to import some .avr files exported from BESA. However > the read_besa_avr function returns an error like this: > ... I copied and pasted your truncated elp file content from your mail into a local file and had no problem reading it in. Looking at the output of matlab, it seems to me that the importdata function (which is standard matlab) is not able to detect the boundaries between the columns. Some lines in the file are read as 22 chars, some lines are read as a few chunks and one line seems to be parsed as a large number of chunks. Therefore I suspect that the spaces and tabs are messed up in your elp file. 
Please try copying and pasting the content into a new file, make sure that there are no tabs but only spaces, and save it again to disk with the original name. best Robert From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 12:21:40 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 12:21:40 +0100 Subject: Problem with data from BESA Message-ID: Hi, I have imported averaged EEG data from BESA (std 81 electrodes, average reference) using the .mul format and the corresponding .sfp file to import the electrode locations. The import into fieldtrip seems to work fine with these formats (it didn't when I tried .avr and .elp...). However, the maps look very different from what I see in BESA (more like something differentiated / inverted from the BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip somewhere that this is EEG data, so that it doesn't do the things it would when dealing with MEG gradiometer data? Or is there something I have to do to let fieldtrip know that the data are average reference data? I can't find anything in the tutorials on this matter. Thank you very much for any help on this, Michael Wibral M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephan.moratti at UNI-KONSTANZ.DE Wed Nov 16 13:13:33 2005 From: stephan.moratti at UNI-KONSTANZ.DE (Stephan Moratti) Date: Wed, 16 Nov 2005 13:13:33 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID: Hi Michael, I often use BESA exported data with many different tools. One problem I encountered often is that the coordinate system applied was not compatible. Sometimes I had to shift the whole thing by 90 degrees or so. 
As you are using sfp files (containing x,y,z coordinates), this could be the problem. But I am not sure, as I haven't imported to fieldtrip yet. Maybe just a hint, maybe not. Stephan At 12:21 16.11.2005 +0100, you wrote: > Hi, > > I have imported averaged EEG data from BESA (std 81 electrodes, average reference) using the .mul >format and the corresponding .sfp file to import the electrode locations. >The import into fieldtrip seems to work fine with these formats (it didn't >when I tried .avr and .elp...). However, the maps look very different from >what I see in BESA (more like something differentiated / inverted from the >BESA maps - the foci are clearly shifted). Do I have to tell Fieldtrip >somewhere that this is EEG data, so that it doesn't do the things it would >when dealing with MEG gradiometer data? Or is there something I have to do >to let fieldtrip know that the data are average reference data. I can't >find anything in the tutorials on this matter. > > Thank you very much for any help on this, > > Michael Wibral > > M. Wibral Dipl. Phys. > Max Planck Institute for Brain Research > Dept. Neurophysiology > Deutschordenstrasse 46 > 60528 Frankfurt am Main > Germany > > +49(0)69/6301-83849 > +49(0)173/4966728 > +49(0)69/96769-327 > ----------------------------- Dr. Stephan Moratti (PhD) Dept. of Psychology University of Konstanz P.O Box D25 Phone: +40 (0)7531 882385 Fax: +49 (0)7531 884601 D-78457 Konstanz, Germany e-mail: Stephan.Moratti at uni-konstanz.de http://www.clinical-psychology.uni-konstanz.de/ From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 14:16:46 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 14:16:46 +0100 Subject: Problem with data from BESA In-Reply-To: <437B1644.9080300@mpih-frankfurt.mpg.de> Message-ID: Hi Michael > I see in BESA (more like something differentiated / inverted from > the BESA maps - the foci are clearly shifted). 
The projection of the 3D electrode locations towards the 2D plane (in which the color-coded data has to be represented on screen or paper) might be quite different. Fieldtrip uses layout files in which you can specify the location of each sensor in the 2D plane (have a look at one of the *.lay files). If you do not specify a layout file, the 2D layout is constructed on the fly from the 3D electrode locations that are represented as NelecX3 matrix in data.elec.pnt. I suggest that you turn on the electrodes in topoplotER (cfg.showlabels option) and see whether that makes sense. If you are using standard labels of the extended 10-20 system in your EEG data, you can also try topoplotting with a predefined 2D layout, e.g.

cfg = ...
cfg.layout = 'elec1020.lay' % or elec1010.lay
topoplotER(cfg, avg)

> Do I have to tell Fieldtrip somewhere that this is EEG data, so > that it doesn't do the things it would when dealing with MEG > gradiometer data? No, the topoplotting of EEG data and MEG data is done just the same. > Or is there something I have to do to let fieldtrip know that the > data are average reference data. I can't find anything in the > tutorials on this matter. No, referencing of EEG data does not influence the spatial topographical distribution. It might change the global color (depending on the coloraxis), but not the pattern. Re-referencing your data at one timepoint just subtracts a constant value (the potential at the reference electrode) from all electrodes. A geographical map of the Himalayas would also look the same if you would express the height with respect to the foot of the mountain range instead of with respect to the sea level. 
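This point about re-referencing can be checked with a few made-up numbers (an illustrative Python sketch, not tied to any real dataset): subtracting one and the same reference value from every channel leaves all between-channel differences, and hence the topographic pattern, unchanged.

```python
import numpy as np

# made-up potentials at four electrodes for one timepoint, original reference
v = np.array([3.0, 1.0, -2.0, 0.5])

# re-reference to the common average: subtract one constant from all channels
v_avg = v - v.mean()

# every pairwise channel difference is identical before and after,
# so the spatial pattern of the map cannot change
diff_before = np.subtract.outer(v, v)
diff_after = np.subtract.outer(v_avg, v_avg)
print(np.allclose(diff_before, diff_after))  # True
```

Only the overall offset (the "sea level" in the Himalaya analogy) changes; any reference choice, average or single electrode, behaves the same way.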
best regards, Robert From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 16:29:41 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 16:29:41 +0100 Subject: Problem with data from BESA In-Reply-To: <28A6AA31-A895-4DCE-BFEF-645AECA62F63@fcdonders.ru.nl> Message-ID: Hi Robert, thank you very much for the quick reply. I noticed that I supplied insufficient information. I actually switched on the electrode labels in the display and the peaks sit at the wrong electrodes. I therefore assume it is not a problem of the layout file (alone). I actually took into account that the data look heavily distorted and tried to check whether it is just a projection problem by playing around with different scalings of elec.pnt (albeit this didn't seem to affect the plot??). I should have also mentioned that I'm using version 20051113. However, I imported the electrode positions with read_fcdc_elec from the version 0.9.6 (there doesn't seem to be a read_fcdc_elec version supplied with 20051113..) - I hope this doesn't cause the trouble. Meanwhile I also tried to use the elec1010.lay layout file, which works fine. However, in fieldtrip I find a negative peak between electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which has no counterpart in BESA, so contourlines don't match exactly), whereas in BESA a positive peak is found on CP4. This looks like an inversion of signs, an inversion of left/right and a difference in the interpolation algorithm. 
Below you'll find the code I used (most of it is copied from the BESA tfc sample page on the web site):

% this is the list of BESA datafiles in one condition
filename_AM = { 'AA_AMU2M.mul' 'AK_AMU2M.mul' 'CB_AMU2M.mul' 'HDR01_AMU2M.mul' 'KRT28_AMU2M.mul' 'KSN14_AMU2M.mul' 'LM_AMU2M.mul' 'MN_AMU2M.mul' 'MW_AMU2M.mul' 'MWA_AMU2M.mul' };

% this is the list of BESA datafiles in the other condition
filename_vAM = { 'AA_vAMU2M.mul' 'AK_vAMU2M.mul' 'CB_vAMU2M.mul' 'HDR01_vAMU2M.mul' 'KRT28_vAMU2M.mul' 'KSN14_vAMU2M.mul' 'LM_vAMU2M.mul' 'MN_vAMU2M.mul' 'MW_vAMU2M.mul' 'MWA_vAMU2M.mul' };

nsubj = length(filename_AM);

% collect all single subject data in a convenient cell-array
for i=1:nsubj
  AM{i} = besa2fieldtrip(filename_AM{i});
  vAM{i} = besa2fieldtrip(filename_vAM{i});
end

% load electrode configuration
elec = read_fcdc_elec('AA_AMU2M.sfp');
elec.pnt = 10.*elec.pnt; % scale, doesn't seem to affect the plotting ?

cfg = [];
cfg.keepindividual = 'yes';
AMdata = timelockgrandaverage(cfg, AM{:});
vAMdata = timelockgrandaverage(cfg, vAM{:});

DiffData = AMdata; % create dummy structure to hold results of the difference calculation
% calculate grand average difference
DiffData.individual = AMdata.individual - vAMdata.individual;
cfg = [];
DiffDataGA = timelockgrandaverage(cfg, DiffData);

% plot the differences
figure;
plotdata1.elec = elec;
plotdata1.time = DiffDataGA.time;
plotdata1.label = DiffDataGA.label;
plotdata1.data2plot = DiffDataGA.avg;
cfg = [];
cfg.layout = elec;
cfg.showlabels = 'yes';
cfg.zparam = 'data2plot';
cfg.colorbar = 'no';
cfg.xlim = [0.5595:0.001:0.5605]; % to zoom in on 560 ms, as BESA only gives data at timepoints
topoplotER(cfg, plotdata1);

Best Regards, Michael M. Wibral Dipl. Phys. Max Planck Institute for Brain Research Dept. 
Neurophysiology Deutschordenstrasse 46 60528 Frankfurt am Main Germany Phone: +49(0)69/6301-83849 +49(0)173/4966728 Fax: +49(0)69/96769-327 Robert Oostenveld schrieb: > Hi Michael > >> I see in BESA (more like something differentiated / inverted from >> the BESA maps - the foci are clearly shifted). > > > The projection of the 3D electrode locations towards the 2D plane (in > which the color-coded data has to be represented on screen or paper) > might be quite different. Fieldtrip uses layout files in which you > can specify the location of each sensor in the 2D plane (have a look > at one of the *.lay files). If you do not specify a layout file, the > 2D layout is constructed on the fly from the 3D electrode locations > that are represented as NelecX3 matrix in data.elec.pnt. > > I suggest that you turn on the electrodes in topoplotER > (cfg.showlabels option) and see whether that makes sense. > > If you are using standard labels of the extended 10-20 system in your > EEG data, you can also try topoplotting with a predefined 2D layout, > e.g. > > cfg = ... > cfg.layout = 'elec1020.lay' % or elec1010.lay > topoplotER(cfg, avg) > >> Do I have to tell Fieldtrip somewhere that this is EEG data, so that >> it doesn't do the things it would when dealing with MEG gradiometer >> data? > > > No, the topoplotting of EEG data and MEG data is done just the same. > >> Or is there something I have to do to let fieldtrip know that the >> data are average reference data. I can't find anything in the >> tutorials on this matter. > > > No, referencing of EEG data does not influence the spatial > topographical distribution. It might change the global color > (depending on the coloraxis), but not the pattern. Re-referencing > your data at one timepoint just subtracts a constant value (the > potential at the reference electrode) from all electrodes. 
> A geographical map of the Himalayas would also look the same if you > would express the height with respect to the foot of the mountain > range instead of with respect to the sea level. > > best regards, > Robert > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 18:00:52 2005 From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld) Date: Wed, 16 Nov 2005 18:00:52 +0100 Subject: Problem with data from BESA In-Reply-To: <437B5065.3040307@mpih-frankfurt.mpg.de> Message-ID: Hi Michael > I actually switched on the electrode labels in the display and the > peaks sit at the wrong electrodes. That seems to indicate that there is a mismatch between the channel names and the electrode names. If you see a peak at a specific electrode in the topoplot, you should be able to confirm its value by looking in the data. Could it be that the ordering of the channels is different in the two conditions that you are reading in (compare AM {1}.label and vAM{1}.label)? > I therefore assume it is not a problem of the layout file (alone). > I actually took into account that the data look heavily distorted > and tried to check whether it is just a projection problem by > playing around with different scalings of elec.pnt (albeit this > didn't seem to affect the plot??). The scaling of the radius of the electrodes does not affect the location towards which it is projected in the 2D plane. What would matter however w.r.t. the 2D projection is if you would shift them. The interpolation algorithm that is used in topoplotER is certainly different from the one that is used in BESA. But I would not expect that to make such a big difference that peaks start shifting around. Maybe Ole can comment on the interpolation, since he supplied the topoplotER function based upon some code from EEGLAB (Ole should read along on the mailing list, but I also CCed to him). 
> I should have also mentioned that I'm using version 20051113. > However I imported the electrode positions with read_fcdc_elec from > the version 0.9.6 (there doesn't seem to be a read_fcdc_elec > version supplied with 20051113..) - I hope this doesn't cause the > trouble. It indeed was missing. I have tagged the read_fcdc_elec file to be included in the upcoming daily release versions (which are updated every evening on the ftp server). You can pick it up tomorrow at ftp://ftp.fconders.nl/pub/fieldtrip/ > Meanwhile I also tried to use the elec1010.lay layout file which > works fine. However, in fieldtrip I find a negative peak between > electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz > (which has no counterpart in BESA, so contourlines don't match > exactly), whereas in BESA a positive peak is found on CP4. This > looks like an inversion of signs, an inversion of left/right and a > difference in the interpolation algorithm. Does the peak lie on top of an electrode or in between the electrodes? If it is at the electrode, you should be able to verify its actual value. I am concerned that there might be an ordering/naming problem with your EEG channels. Please try the two low-level functions that you find attached. They work like this: topoplot(cfg,X,Y,datavector,Labels) and triplot([X Y zeros(Nchan,1)], [], Labels, datavector) You can get the X and Y value from the layout file. With the triplot, you can also plot 3D (just use elec.pnt, i.e. [x y z] as the first argument). The triplot does linear interpolation over the triangles that connect the electrodes. It might look coarse, but with it you are guaranteed not to overinterpret the data (i.e. there cannot be any spurious peaks between the electrodes). best, Robert PS if you still cannot figure it out, send me a private mail with your plotdata1 structure and, if not too large, the AM and vAM data. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: topoplot.m Type: application/octet-stream Size: 15694 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: triplot.m Type: application/octet-stream Size: 10679 bytes Desc: not available URL: -------------- next part -------------- From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 16 18:56:20 2005 From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral) Date: Wed, 16 Nov 2005 18:56:20 +0100 Subject: Problem with data from BESA In-Reply-To: Message-ID: Hi Robert, thanks for your help. There indeed seems to be a problem with the ordering of the electrodes in the mul-files themselves and the corresponding sfp files, which contain some additional fiducials - so if electrodes and their positions are matched not by name but by order during imports, that will of course go wrong. Both files also have a different order from the 10-10 layout used in Fieldtrip, but I guess layout files match electrodes by name, don't they. I will try to figure out a workaround. Best, Michael Robert Oostenveld schrieb: > Hi Michael > >> I actually switched on the electrode labels in the display and the >> peaks sit at the wrong electrodes. > > > That seems to indicate that there is a mismatch between the channel > names and the electrode names. If you see a peak at a specific > electrode in the topoplot, you should be able to confirm its value by > looking in the data. Could it be that the ordering of the channels > is different in the two conditions that you are reading in (compare AM > {1}.label and vAM{1}.label)? > >> I therefore assume it is not a problem of the layout file (alone). I >> actually took into account that the data look heavily distorted and >> tried to check whether it is just a projection problem by playing >> around with different scalings of elec.pnt (albeit this didn't seem >> to affect the plot??). 
> > The scaling of the radius of the electrodes does not affect the > location towards which it is projected in the 2D plane. What would > matter however w.r.t. the 2D projection is if you would shift them. > The interpolation algorithm that is used in topoplotER is certainly > different from the one that is used in BESA. But I would not expect > that to make such a big difference that peaks start shifting around. > Maybe Ole can comment on the interpolation, since he supplied the > topoplotER function based upon some code from EEGLAB (Ole should read > along on the mailing list, but I also CCed to him). > >> I should have also mentioned that I'm using version 20051113. >> However I imported the electrode positions with read_fcdc_elec from >> the version 0.9.6 (there doesn't seem to be a read_fcdc_elec version >> supplied with 20051113..) - I hope this doesn't cause the trouble. > > > It indeed was missing. I have tagged the read_fcdc_elec file to be > included in the upcoming daily release versions (which are updated > every evening on the ftp server). You can pick it up tomorrow at > ftp://ftp.fconders.nl/pub/fieldtrip/ > >> Meanwhile I also tried to use the elec1010.lay layout file which >> works fine. However, in fieldtrip I find a negative peak between >> electrodes P5 P3 PO7 PO3 and a positive central one at CPz Cz (which >> has no counterpart in BESA, so contourlines don't match exactly), >> whereas in BESA a positive peak is found on CP4. This looks like an >> inversion of signs, an inversion of left/right and a difference in >> the interpolation algorithm. > > > Does the peak lie on top of an electrode or in between the electrodes? > If it is at the electrode, you should be able to verify its actual > value. I am concerned that there might be an ordering/naming problem > with your EEG channels. Please try the two low-level functions that > you find attached. 
> They work like this:
>   topoplot(cfg, X, Y, datavector, Labels)
> and
>   triplot([X Y zeros(Nchan,1)], [], Labels, datavector)
> You can get the X and Y values from the layout file. With the triplot,
> you can also plot in 3D (just use elec.pnt, i.e. [x y z], as the first
> argument). The triplot does linear interpolation over the triangles
> that connect the electrodes. It might look coarse, but with it you
> are guaranteed not to overinterpret the data (i.e. there cannot be
> any spurious peaks between the electrodes).
>
> best,
> Robert
>
> PS if you still cannot figure it out, send me a private mail with
> your plotdata1 structure and, if not too large, the AM and vAM data.

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 16 22:25:41 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Wed, 16 Nov 2005 22:25:41 +0100
Subject: Problem with data from BESA
In-Reply-To: <437B72C4.7070105@mpih-frankfurt.mpg.de>
Message-ID:

On 16-nov-2005, at 18:56, Michael Wibral wrote:
> Both files also have a different order from the 10-10 layout used
> in Fieldtrip, but I guess layout files match electrodes per name,
> don't they. I will try to figure out a workaround.

Channel matching is indeed done on name and not on number/index. This applies for the channel names in the layout file, but also for the channel names in the electrode file. It means that the channel ordering in either the layout file or the elec-structure can be different from the channel ordering in the data, since both the data and the elec contain labels that can be matched when needed (e.g. when plotting or dipole fitting). The elec-structure can also contain more or fewer electrode positions+labels than the EEG itself, e.g. when you have measured bipolar ECG or EOG along (without position), or when you have additional fiducials or electrodes in your cap that were recorded with a polhemus but not recorded as EEG channel.
Since the sfp file is very simple and can hardly be read incorrectly, I suspect that the error in the assignment of channel names occurs in reading the ERP file.

Robert

From CAFJ.Miller at PSY.UMCN.NL Thu Nov 17 16:48:12 2005
From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller)
Date: Thu, 17 Nov 2005 16:48:12 +0100
Subject: reformat processed data
Message-ID:

Dear Robert,

I have a question about reformatting (pre)processed data:

I performed a frequency analysis with Brain Vision Analyzer and exported the data into Excel. These data are multidimensional (27 channels X 5 frequency bands) and thus consist of 27 X 5 numbers for each of the 16 subjects. Each number represents the power of every channel-frequency combination. Since this was a within design, I have two sets of 27 X 5 for each subject. I want to compare these two sets with a Cluster-level Randomization Test for a Within Subjects experiment, just like the test which is performed in the tutorial on Cluster-level Randomization Tests, page 16-17. In the tutorial this can be done after "load gravgerfcporig;". When this command is executed, two variables appear in the workspace: "gravg_erf_cp_FC" and "gravg_erf_cp_FIC", both with the format "<1x1 struct> struct". However, when I import my data with the import wizard there appears only one variable in the workspace, named "data", with the format "<160x30 double> double". The numbers 160 and 30 represent the data as needed for analyzing them in SPSS: 160 rows (16 subjects, with 5 frequency bands in 2 conditions). The number 30 represents 30 columns (27 channels and 3 columns to label (1) the subject (1-16), (2) the frequency band (1-5) and (3) the condition (1-2)). I know that just saving my imported file as a .mat file doesn't change the structure of the file, since I tried this. My question is, how can I reformat these data in such a way that I can perform a Cluster-level Randomization Test for a Within Subjects experiment?
Thanks in advance,

Christopher Miller, MSc
Unit for Clinical Psychopharmacology and Neuropsychiatry
Department of Psychiatry 974
Radboud University Nijmegen Medical Centre
PO Box 9101
6500 HB Nijmegen
The Netherlands
Tel.: + 31 24 3613204
Email: CAFJ.Miller at psy.umcn.nl

From r.oostenveld at FCDONDERS.RU.NL Fri Nov 18 09:28:21 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Fri, 18 Nov 2005 09:28:21 +0100
Subject: reformat processed data
In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD0@umcnet13.umcn.nl>
Message-ID:

Dear Christopher,

The tutorial data that you refer to contains a structure. In general, all data in fieldtrip is represented as a structure. A structure is a collection of variables that belong together, and the "freq" structure, i.e. the structure that results from the freqanalysis function, contains all elements (but not more) that are required to fully describe the data. The file gravgerfcporig.mat contains a grand average Event Related Field (ERF) structure, which is the result of the timelockgrandaverage function:

>> clear all
>> load gravgerfcporig
>> whos
  gravg_erf_cp_FC    1x1   11352992  struct array
  gravg_erf_cp_FIC   1x1   11352992  struct array
>> gravg_erf_cp_FC
         label: {152x1 cell}
          time: [1x900 double]
        dimord: 'repl_chan_time'
          grad: [1x1 struct]
    individual: [10x152x900 double]

(hmmm, the average itself seems to be missing, I was expecting that it would also contain an avg-field of 152x900 double. Maybe Eric deleted it. Also the cfg field is missing, so it seems like it was hand-made and not using timelockgrandaverage.) But that is not the data that you are interested in.
Have a look in the file containing the time-frequency representation of the data:

>> load TFRorig
>> whos
  TFRFC    1x1   20540072  struct array
  TFRFIC   1x1   20775680  struct array
>> TFRFC
        label: {151x1 cell}
       dimord: 'rpt_sgncmb_frq_tim'
    powspctrm: [4-D double]
          foi: [5 10 20 40 80]
          toi: [1x39 double]
         grad: [1x1 struct]
          cfg: [1x1 struct]

There you see that there is a structure TFRFC, which contains a powspctrm field with the order of dimensions (dimord) repetitions-channels-frequency-time. There is a vector describing the values along the time axis (toi) and a frequency axis (foi), and a cell-array with the channel labels (label). Furthermore, there is a "grad" structure which contains the positions of the MEG gradiometers.

If you want to copy your data from Excel into fieldtrip, you should create a similar structure in which all sub-elements correspond with the data, since that is what clusterrandanalysis expects (that is the "bookkeeping" that Eric referred to). You currently only have a data matrix of <160x30 double>, but clusterrandanalysis does not know whether it has 160 channels or 30, whether it contains the power at a single frequency that was estimated at multiple timepoints or the power at many frequencies that was estimated at a single timepoint, or what the frequencies actually are. You also have to tell it (through the elec structure) what the locations of your electrodes are, since clusterrandanalysis needs to know which electrodes are neighbours.

Although converting the data from Excel to a fieldtrip-compatible structure is possible, I think that it will be easier to do your complete analysis in fieldtrip. Fieldtrip can read Brainvision files, and you can follow all steps in the clusterrandanalysis tutorial, but then instead of doing a time-frequency analysis (mtmconvol) only doing a frequency analysis (mtmfft).
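For completeness, such a hand-made structure could look roughly like the sketch below. This is only an illustration, not tested code: the variable names (powFC, chanlabels, elecpos), the channel names and the band frequencies are placeholders for your own values, and the field names simply mirror the TFRFC example above.

```matlab
% Sketch: wrap exported band-power values in a freq-like structure that
% mimics the TFRFC fields shown above. All concrete values (channel names,
% band frequencies, electrode positions) are placeholders.
freqFC           = [];
freqFC.label     = chanlabels;             % {27x1 cell} with your channel names
freqFC.dimord    = 'rpt_sgncmb_frq_tim';   % subjects x channels x bands x time
freqFC.powspctrm = powFC;                  % [16x27x5x1] double, one condition
freqFC.foi       = [2 6 10 20 40];         % center frequency of each band (made up)
freqFC.toi       = 0;                      % a single pseudo-timepoint for band power
freqFC.cfg       = [];

% clusterrandanalysis also needs the electrode positions for neighbourhood:
elec       = [];
elec.pnt   = elecpos;                      % [27x3] matrix with [x y z] per electrode
elec.label = chanlabels;                   % same names, same order as elec.pnt
```

You would build a second such structure for the other condition and pass both to clusterrandanalysis, but as said, doing the whole analysis in fieldtrip is easier.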
best regards,
Robert

On 17-nov-2005, at 16:48, Christopher Miller wrote:
> Dear Robert,
>
> [...]
> Thanks in advance,
>
> Christopher Miller, MSc
> Unit for Clinical Psychopharmacology and Neuropsychiatry
> Department of Psychiatry 974
> Radboud University Nijmegen Medical Centre
> PO Box 9101
> 6500 HB Nijmegen
> The Netherlands
> Tel.: + 31 24 3613204
> Email: CAFJ.Miller at psy.umcn.nl

=======================================================
Robert Oostenveld, PhD
F.C. Donders Centre for Cognitive Neuroimaging
Radboud University Nijmegen
phone: +31-24-3619695
http://www.ru.nl/fcdonders/

From CAFJ.Miller at PSY.UMCN.NL Fri Nov 18 15:37:45 2005
From: CAFJ.Miller at PSY.UMCN.NL (Christopher Miller)
Date: Fri, 18 Nov 2005 15:37:45 +0100
Subject: reformat processed data
Message-ID:

Dear Robert,

How can I read BrainVision files into matlab? Can I export BVA preprocessed data into Fieldtrip (according to the background information at the FCdonders website, Fieldtrip supports the .dat files from BVA)? How can this be done? Or must I do all the preprocessing over?

Greetings,
Christopher

-----Original message-----
From: FieldTrip discussion list [mailto:FIELDTRIP at NIC.SURFNET.NL] On behalf of Robert Oostenveld
Sent: Friday, 18 November 2005 9:28
To: FIELDTRIP at NIC.SURFNET.NL
Subject: Re: [FIELDTRIP] reformat processed data

Dear Christopher,

The tutorial data that you refer to contains a structure. In general all data in fieldtrip is represented as a structure. A structure is a collection of variables that belong together, and the "freq" structure, i.e. the structure that results from the freqanalysis function contains all elements (but not more) that are required to fully describe the data.
[...]

From r.oostenveld at FCDONDERS.RU.NL Mon Nov 21 13:05:15 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Mon, 21 Nov 2005 13:05:15 +0100
Subject: reformat processed data
In-Reply-To: <4CD85D348E46984983185B911CBF3ED1483BD1@umcnet13.umcn.nl>
Message-ID:

On 18-nov-2005, at 15:37, Christopher Miller wrote:
> Dear Robert,
>
> How can I read BrainVision files into matlab?
> Can I export BVA preprocessed data into Fieldtrip (according to the
> background information at the FCdonders website, Fieldtrip supports
> the .dat files from BVA)? How can this be done? Or must I do all
> the preprocessing over?

Fieldtrip automatically detects the type of data.
If you have done filtering and artefact removal in BVA, you should save the result in a *.dat file and still do the preprocessing in Fieldtrip (which involves reading in the data, which is what you want, and optionally filtering, which you do not want), except that you can keep the cfg-options of PREPROCESSING empty to prevent it from doing the filtering. You still do have to specify the cfg-settings for DEFINETRIAL. If you specify cfg.trialdef.eventtype='?', a list with the events in your data file will be displayed on screen.

best
Robert

From wibral at MPIH-FRANKFURT.MPG.DE Mon Nov 21 15:21:48 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Mon, 21 Nov 2005 15:21:48 +0100
Subject: Problem with data from BESA
In-Reply-To: Message-ID:

Hi Robert,

thanks to your help I have meanwhile figured out and solved the problems with the topographies, i.e. the maps look fine now as far as geometry is concerned. I guess the error was that the BESA files contained " ' " at the end of the electrode names (as the interpolation to a common 81 electrodes was done using digitized individual coordinates). I removed the extra " ' " and - just to make sure nothing goes wrong - also made an ordered layout file for my configuration. What remains puzzling however is the inversion of amplitudes (+ -> -). The exported .mul file from BESA and the BESA data seem to match, however the plotted data seem to be inverted. I want to check this using simpler data, though, and then come back to this if I can confirm it.

I have however another question regarding the interpretation of clusteranalysis results. Am I correct in saying that the family wise error rate (alpha) tells me the risk of obtaining a false positive statement of the type that I specify previously with alphathresh?
For example, if I specify alphathresh of 0.1 (let's call this a trend for abbreviation) in the first pass of the analysis (multiple testing) before clustering, then the clusterrandomization using alpha=0.05 tells me that I run a risk of 5% of wrongly identifying at least one of these 'trend clusters'. (Or else, if the above is incorrect, what is the reason not to use a very lenient criterion in the first pass to feed the clusterrandomization with as many clusters as possible?)

Best regards,
Michael

Robert Oostenveld schrieb:
> On 16-nov-2005, at 18:56, Michael Wibral wrote:
>
> [...]

From maris at NICI.RU.NL Mon Nov 21 15:59:10 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Mon, 21 Nov 2005 15:59:10 +0100
Subject: Problem with data from BESA
Message-ID:

Hi Michael,

> I have however another question regarding the interpretation of
> clusteranalysis results.
> Am I correct in saying that the family wise error
> rate (alpha) tells me the risk of obtaining a false positive statement of
> the type that I specify previously with alphathresh? For example if I
> specify alphathresh of 0.1 (let's call this a trend for abbreviation) in the
> first pass of the analysis (multiple testing) before clustering then the
> clusterrandomization using alpha=0.05 tells me that I run a risk of 5% of
> wrongly identifying at least one of these 'trend clusters'.
> (Or else, if the above is incorrect, what is the reason not to use a very
> lenient criterion in the first pass to feed the clusterrandomization with
> as many clusters as possible?)

The issue is statistical power (sensitivity). If you use a very lenient criterion (say, alphathresh=0.2) to select candidate cluster members, this will result in large clusters purely by chance. If the effect in your data is strong but confined to a small number of sensors and timepoints, clusterrandanalysis may not pick it up. This is because the reference distribution is dominated by these weak but large "chance clusters". You will not encounter this problem if you set alphathresh lower. On the other hand, a low alphathresh will miss weak but widespread effects.

To sum up, alphathresh determines the relative sensitivity to "strong but small" and "weak but large" clusters.

greetings,
Eric Maris

From wibral at MPIH-FRANKFURT.MPG.DE Wed Nov 23 13:26:13 2005
From: wibral at MPIH-FRANKFURT.MPG.DE (Michael Wibral)
Date: Wed, 23 Nov 2005 13:26:13 +0100
Subject: Problem with data from BESA
In-Reply-To: <00c001c5eeac$20c1bb00$d92cae83@fcdc195>
Message-ID:

Hi Eric,

thank you for the explanation, things are much clearer now.
In the meantime I have encountered another problem with clusterrandanalysis: when using maxsum as a test statistic everything works fine, but using maxsumtminclustersize with the same specified (maximum) alpha, the fields clusrand.posclusters and clusrand.negclusters stay empty - although there seem to be large enough clusters in posclusterslabelmat and negclusterslabelmat (I used cfg.smallestcluster=2). Is this a bug, or does the use of maxsumtminclustersize somehow reduce sensitivity (from the description in the 2005 tutorial I thought it is similar to using FDR or Holmes' method for computing critical p values)? Maybe I am also missing something on a conceptual level that makes the information in posclusters invalid if I use maxsumtminclustersize as a test statistic?

Below I pasted the code that produced this behaviour.

%Clusterrandomization analysis
cfg = [];
cfg.elec = elec;
cfg.statistic = 'depsamplesT';
cfg.alphathresh = 0.05;
cfg.makeclusters = 'yes';
cfg.minnbchan = 1; % 1 neighbour, i.e. 2 channels
cfg.smallestcluster = 2;
cfg.clusterteststat = 'maxsumtminclustersize'; % replace with maxsum to get lots of entries in clusrand.posclusters
cfg.onetwo = 'twosided';
cfg.alpha = 0.05;
cfg.nranddraws = 1000;
cfg.latency = [0.40 0.65];
[clusrand] = clusterrandanalysis(cfg, AMdata, vAMdata);
clusrand.elec = elec;

Best,
Michael

Eric Maris schrieb:
> [...]

From maris at NICI.RU.NL Wed Nov 23 13:56:28 2005
From: maris at NICI.RU.NL (Eric Maris)
Date: Wed, 23 Nov 2005 13:56:28 +0100
Subject: Problem with data from BESA
Message-ID:

Hi Michael,

> thank you for the explanation, things are much clearer now. In the
> meantime I have encountered another problem with clusterrandanalysis:
> when using maxsum as a test statistic everything works fine, but using
> maxsumtminclustersize with the same specified (maximum) alpha the fields
> clusrand.posclusters and clusrand.negclusters stay empty - although there
> seem to be large enough clusters in posclusterslabelmat and
> negclusterslabelmat (I used cfg.smallestcluster=2). Is this a bug or
> does the use of maxsumtminclustersize somehow reduce sensitivity (from
> the description in the 2005 tutorial I thought it is similar to using FDR
> or Holmes' method for computing critical p values)?
> Maybe I am also
> missing something on a conceptual level that makes the information in
> posclusters invalid if I use maxsumtminclustersize as a test statistic?

Yes, there is a bug in Clusterrandanalysis when the option cfg.clusterteststat = 'maxsumtminclustersize' is used. I know where it is, but I need some time to fix it (due to dependencies in the code). Give me a week to fix it.

greetings,
Eric

> Below I pasted the code that produced this behaviour.
>
> [...]

From marie at PSY.GLA.AC.UK Fri Nov 25 11:50:37 2005
From: marie at PSY.GLA.AC.UK (Marie Smith)
Date: Fri, 25 Nov 2005 10:50:37 +0000
Subject: meg realign
In-Reply-To: <4837D13C-376B-4167-991F-1AE32768B562@fcdonders.ru.nl>
Message-ID:

Hi,

I was wondering if someone could clarify for me in some more detail how the megrealign function works. From the function help it seems to perform a coarse source reconstruction and then re-project back to a standard gradiometer array. I am most curious about how this coarse source representation is implemented.

Thanks,
Marie Smith

From r.oostenveld at FCDONDERS.RU.NL Mon Nov 28 15:53:00 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Mon, 28 Nov 2005 15:53:00 +0100
Subject: meg realign
In-Reply-To: <53C0453E-2753-4AF7-A2C4-BEC35B4631CC@psy.gla.ac.uk>
Message-ID:

Hi Marie,

On 25-nov-2005, at 11:50, Marie Smith wrote:
> I was wondering if someone could clarify for me in some more detail
> how the megrealign function works. From the function help it seems
> to perform a coarse source reconstruction and then re-project back
> to a standard gradiometer array. I am most curious about how this
> coarse source representation is implemented.
You are right, it involves projecting the measured activity onto a sheet of dipoles that approximates the cortex, followed by a forward computation of the field of those dipoles at the template gradiometer locations. The algorithm is described, in combination with a simulation study, in the paper

T.R. Knosche, Transformation of whole-head MEG recordings between different sensor positions. Biomed Tech (Berl). 2002 Mar;47(3):59-62.

A similar algorithm, with the main difference being a different source model, is described in the appendix of the paper

de Munck JC, Verbunt JP, Van't Ent D, Van Dijk BW. The use of an MEG device as 3D digitizer and motion monitoring system. Phys Med Biol. 2001 Aug;46(8):2041-52.

I will send you a pdf version of both papers in a separate mail addressed directly to you.

best regards,
Robert

From h.f.kwok at BHAM.AC.UK Tue Nov 29 16:07:30 2005
From: h.f.kwok at BHAM.AC.UK (Hoi Fei Kwok)
Date: Tue, 29 Nov 2005 16:07:30 +0100
Subject: importing my own data
Message-ID:

Dear Robert,

In the FAQ section of the FieldTrip website, it is said that if I import my own data, I have to define the following fields: data.label, data.trial, data.fsample, data.time and data.cfg. The first four are straightforward enough. However, how do I set up data.cfg? What are the subfields?

Regards,
Hoi Fei

From r.oostenveld at FCDONDERS.RU.NL Wed Nov 30 17:10:33 2005
From: r.oostenveld at FCDONDERS.RU.NL (Robert Oostenveld)
Date: Wed, 30 Nov 2005 17:10:33 +0100
Subject: importing my own data
In-Reply-To: Message-ID:

Hi Hoi Fei,

data.cfg can be empty (i.e. []) in your case. It is used to remember the configuration details of all steps that you take in FT. Some functions assume that data.cfg is present and want to copy it over into their output (e.g. timelock.cfg.previous), therefore it should be present to be sure. The most recent versions of FT however should check whether it is present or not, and only attempt to copy it if present.
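As an illustration, a minimal hand-made structure with the five fields from the FAQ could look like the sketch below. The channel names, sampling rate and the random data are placeholders; substitute your own recordings.

```matlab
% Sketch: a minimal hand-made raw-data structure. Two trials of random
% data for two channels at 512 Hz; all concrete values are placeholders.
data         = [];
data.label   = {'Chan1'; 'Chan2'};              % {Nchan x 1 cell} channel names
data.fsample = 512;                             % sampling rate in Hz
data.trial   = {randn(2,512), randn(2,512)};    % one [Nchan x Nsamples] matrix per trial
data.time    = {(0:511)/512, (0:511)/512};      % matching time axes in seconds
data.cfg     = [];                              % may simply remain empty
```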
best,
Robert

PS Given that you have biosemi amplifiers, you are probably working with the BDF format. It would also be nice to implement that natively in fieldtrip. That should not be too hard; if you would be interested in taking that approach instead of constructing a data structure, please contact me directly.

On 29-nov-2005, at 16:07, Hoi Fei Kwok wrote:
> Dear Robert,
>
> In the FAQ section of the FieldTrip website, it is said that if I
> import my own data, I have to define the following fields: data.label,
> data.trial, data.fsample, data.time and data.cfg. The first four are
> straightforward enough. However, how do I set up data.cfg? What are
> the subfields?
>
> Regards,
> Hoi Fei