[FieldTrip] TRENTOOL pipeline help

Max Cantor mcantor at umich.edu
Tue Sep 9 15:19:55 CEST 2014


This is immensely helpful, thank you. I was very confused about why some
versions of the pipeline I saw used TEgroup_calculate while others used
interaction delay reconstruction, and what that difference meant; I now have
a clearer idea of what the different steps of the pipeline are doing. A few
things still confuse me, though: for instance, whether I need to run
TEprepare before TEgroup_prepare, and whether I need to run graph analysis
(which I'm not sure I fully understand and haven't looked into deeply)
before the group statistics.

If you don't mind me taking you up on your offer, I think seeing your
example script might help clarify some of these issues.

Thank you!

On Tue, Sep 9, 2014 at 8:16 AM, Patricia Wollstadt <
Patricia.Wollstadt at gmx.de> wrote:

>  Hello Max,
>
> I added a few comments to the questions regarding individual parameters
> below. To address the general problem of TRENTOOL telling you that there
> are not enough sample points in your data: from what I can see in your
> script, you probably don't have enough data points in each time series to
> robustly estimate TE. You analyze 800 ms of data sampled at 300 Hz, which
> gives you 240 samples per time series. Can you maybe avoid downsampling to
> 300 Hz and downsample to 600 Hz instead? Or could you analyze a longer time
> window of interest?
> Note that you also 'lose' data to embedding and the interaction delay: The
> first point that can be used for TE estimation is at max. embedding length
> + max. interaction delay in samples. For example: max. embedding length =
> dim * tau_factor * ACT = 10 * 0.4 * 5 = 20 samples plus the max interaction
> delay of 30 ms = 9 samples. In this example, you would be left with 240 -
> 29 = 211 samples for TE estimation per trial. It is also possible to
> estimate time-resolved TE, i.e. TE for shorter time windows of interest
> (see section 4.4 in the manual); however, this method requires a GPU for
> TE estimation.
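>
> As a minimal sketch, that sample budget can be checked with a few lines of
> MATLAB (the numbers are the ones from this example, not TRENTOOL defaults):
>
> fsample    = 300;                      % sampling rate in Hz
> nSamples   = 0.8 * fsample;            % 800 ms time window -> 240 samples
> dim        = 10;                       % assumed worst-case embedding dimension
> tau_factor = 0.4;                      % tau as a factor of the ACT
> ACT        = 5;                        % autocorrelation time in samples
> embLength  = dim * tau_factor * ACT;   % max. embedding length = 20 samples
> uMax       = round(0.030 * fsample);   % max. interaction delay, 30 ms = 9 samples
> nLeft      = nSamples - (embLength + uMax);   % 211 samples left per trial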
>
> I would further recommend using the new pipeline for group statistics
> described in section 4.5 of the manual (the function 'TEgroup_calculate' is
> deprecated). The new pipeline allows you to reconstruct the interaction
> delay and uses the following functions (see also the comments in the script):
>
> TEgroup_prepare -> prepares all data sets (all subjects/all conditions)
> for group analysis, i.e. finds common embedding parameters such that
> estimates are not biased between groups
> InteractionDelayReconstruction_calculate -> estimates TE for each
> individual data set and all assumed interaction delays u
> InteractionDelayReconstruction_analyze -> reconstructs the interaction
> delay by selecting the u that maximizes TE for each channel
> TEgroup_stats -> calculates group statistics using a permutation test
> (a rough call order is sketched below)
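>
> As a rough outline only -- the cfg names here are placeholders, and the
> argument lists of the InteractionDelayReconstruction_* functions are only
> assumed (section 4.5 of the manual has the exact signatures); the
> TEgroup_* calls follow the usage shown further down in this thread:
>
> TEgroup_prepare(cfgGroup, fileCell);    % find common embedding parameters
> for k = 1:length(fileCell)
>     % per data set: estimate TE for every assumed interaction delay u ...
>     InteractionDelayReconstruction_calculate(cfgTEP, cfgTESS, fileCell{k}); % signature assumed
>     % ... then keep the u that maximizes TE for each channel
>     InteractionDelayReconstruction_analyze(cfgID, fileCell{k});             % signature assumed
> end
> TEgroup_stats(cfgStat, fileCell);       % permutation-based group statistics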
>
> I can send you an example script for group TE analysis using this pipeline
> to get you started. I hope this helps you to get the group analysis
> running. Just write again if you're having trouble setting up the pipeline
> or something is not clear about the parameters/my comments.
>
> Best,
> Patricia
>
>
>
>
> On 09/04/2014 08:30 PM, Max Cantor wrote:
>
>   Hi fieldtrippers,
>
>  I know TRENTOOL is not produced by the Donders Institute, so I'm not 100%
> sure whether it is appropriate to ask questions about it here, but to the
> best of my knowledge there is no dedicated TRENTOOL mailing list and I saw
> a few TRENTOOL questions in the archives, so I'm going to assume it's ok...
>
>  In any case, below is my current pipeline (slightly modified for
> comprehensibility):
>
>  (notes in bold are comments/questions made in this email, not present in
> the pipeline. Sorry in advance for the long post! Any help would be greatly
> appreciated as I'm a bit over my head on this but I think I'm close!)
>
> *****
>
> % Prepare group TE data
>
> cfgP              = [];
> cfgP.Path2TSTOOL  = *TSTOOLPATH*;
> cfgP.TEcalctype   = 'VW_ds';
> cfgP.channel      = {'ctfdip_LAC' 'ctfdip_RAC'};
>
> *I'm trying to find the transfer entropy between the left and right
> auditory cortices in my experiment. The input is virtual sensor data that
> was produced using SAM in fieldtrip on real MEG data. *
>
> % specify u to be scanned
>
> cfgP.predicttime_u = 30;
> cfgP.toi           = [-0.4 0.4];
>
>    *For clarification, the predicttime_u is in seconds but the toi is in
> milliseconds. If I understand correctly, the predicttime_u must fit within
> the toi, but beyond that are there any benefits to it being earlier or
> later?* PW: The predicttime_u is in milliseconds and the toi is in
> seconds. The prediction time is the assumed interaction delay between your
> two sources and should fit within your toi. In general it is preferable to
> use the method for interaction delay reconstruction for TE estimation,
> because it allows you to reconstruct the actual delay between your source
> and target time series. A non-optimal u/interaction delay may cause an
> underestimation of TE, so it is recommended to use the pipeline for
> interaction delay reconstruction whenever estimating TE for unknown delays.
> If you use the methods for interaction delay reconstruction,
> 'predicttime_u' is replaced by
> cfgTEP.predicttimemin_u    % minimum u to be scanned
> cfgTEP.predicttimemax_u    % maximum u to be scanned
> cfgTEP.predicttimestepsize % time steps between u values to be scanned
> A large range of u values to be scanned increases computing time a lot,
> so it is best to limit the u range to values that are physiologically
> plausible.
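>
> For example (the values below are only illustrative -- choose a
> physiologically plausible range for your own sources):
>
> cfgTEP.predicttimemin_u    = 10;   % minimum u to be scanned, in ms
> cfgTEP.predicttimemax_u    = 40;   % maximum u to be scanned, in ms
> cfgTEP.predicttimestepsize = 2;    % step between scanned u values (assumed to be in ms as well)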
>
>
>  % ACT (Autocorrelation Time) estimation and constraints
>
> cfgP.maxlag      = 150;
> cfgP.actthrvalue = 7.5;
> cfgP.minnrtrials = 5;
>
>  *My understanding is that maxlag should be 1/2 the sampling rate, so since
> the data are downsampled to 300 Hz, it should be 150. I know that the sample
> rate and filters are used to determine the actthrvalue, but I don't
> actually know the calculation; 7.5 was a rough guess just to test the
> pipeline. I'm also uncertain of what minnrtrials should be.* PW: You can
> set the actthrvalue based on the filtering you did prior to TE analysis. If
> you, for example, high-pass filtered at 10 Hz, you shouldn't find an ACT
> higher than 30 samples, because you filtered out any components of the
> signal slower than 10 Hz, i.e. slower than 30 samples at your sampling
> frequency of 300 Hz. So in this scenario the actthrvalue would be 30.
> A good value for cfgP.minnrtrials is 12 (a minimum number of trials is
> needed to run the permutation test for estimated TE values).
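>
> As a minimal sketch of that rule of thumb (with the numbers from this
> example):
>
> fsample = 300;                         % sampling rate in Hz
> hpfreq  = 10;                          % high-pass cutoff used before TE analysis, in Hz
> cfgP.actthrvalue = fsample / hpfreq;   % slowest surviving component = 30 samples
> cfgP.minnrtrials = 12;                 % minimum number of trials for the permutation test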
>
>
> % Optimization
>
> cfgP.optimizemethod = 'ragwitz';
> cfgP.ragdim         = 4:8;
> cfgP.ragtaurange    = [0.2 0.4];
> cfgP.ragtausteps    = 15;
> cfgP.repPred        = 100;
>
>  *I am completely at a loss for this. I've done some reading into
> transfer entropy, mutual information, etc., as cited in TRENTOOL, but I have
> yet to understand how exactly this optimization works and what the
> configuration should be, given my data and experimental intentions.* PW:
> The Ragwitz criterion tries to find optimal embedding parameters dim and
> tau for the data. To do that, the method iteratively takes all possible
> combinations of the dim and tau values provided in cfgP.ragdim and
> cfgP.ragtaurange/.ragtausteps and tests how well each combination embeds
> the data. To test an embedding, the method builds the embedding vectors
> from the data; it then tests for each point how well the next point in time
> can be predicted from the reference point's nearest neighbours. So for each
> embedded point, the method searches for the nearest neighbours and
> calculates the average of those neighbours' next points in time. The
> difference between this averaged/predicted point and the actual next point
> is the error of the local predictor. The Ragwitz criterion then returns the
> parameter combination for which this error over all points is minimal.
> The parameters set the following: 'ragdim' are the dimensions to be tested
> by the method (I would recommend starting with 2:10); 'ragtaurange' together
> with 'ragtausteps' specifies the tau values to be tested (TRENTOOL will
> build a vector from 0.2 to 0.4 in 15 steps). Note that the values here are
> factors that are later multiplied with the ACT to obtain the actual tau.
> 'repPred' is the number of points that will be used for the local
> prediction, i.e. the Ragwitz criterion will test the local prediction and
> calculate the error for the first 100 points in your time series. The two
> parameters 'flagNei' and 'sizeNei' below specify the type of neighbour
> search conducted by the Ragwitz criterion: 'flagNei' tells the method to
> conduct either a kNN or a range search; 'sizeNei' specifies the number of
> neighbours or the radius to be searched in a range search.
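>
> Purely to illustrate the principle (this is not TRENTOOL's implementation),
> the local prediction error for one candidate dim/tau combination could be
> computed along these lines, e.g. saved as ragwitz_error_sketch.m:
>
> % Illustrative sketch of the local predictor behind the Ragwitz criterion.
> % x: single time series (column vector); dim, tau: candidate embedding
> % parameters in samples; k: number of neighbours; repPred: number of points
> % used for the local prediction.
> function err = ragwitz_error_sketch(x, dim, tau, k, repPred)
> firstIdx = (dim - 1) * tau + 1;            % first sample with a full history
> lastIdx  = numel(x) - 1;                   % leave one sample to predict
> pts      = firstIdx:min(firstIdx + repPred - 1, lastIdx);
> emb      = zeros(numel(pts), dim);         % one embedding vector per row
> for d = 1:dim
>     emb(:, d) = x(pts - (d - 1) * tau);
> end
> err = 0;
> for i = 1:numel(pts)
>     dists    = sum(bsxfun(@minus, emb, emb(i, :)).^2, 2); % distances to all other points
>     dists(i) = inf;                        % exclude the reference point itself
>     [~, nn]  = sort(dists);
>     nn       = nn(1:k);                    % k nearest neighbours
>     pred     = mean(x(pts(nn) + 1));       % average of the neighbours' next samples
>     err      = err + (x(pts(i) + 1) - pred)^2;   % local prediction error
> end
> err = err / numel(pts);                    % mean error for this dim/tau combination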
>
>
> % Kernel-based TE estimation
>
> cfgP.flagNei        = 'Mass';
> cfgP.sizeNei        = 4; % default
>
> cfgP.ensemblemethod = 'no';
> cfgP.outputpath     = *OUTPUT PATH*;
>
> if ~exist(*Path for TEprepare data object*)
>     load VSdat;
>     TE_Wrd = {};
>     for i = 1:nConds
>         for j = 1:Nsub
>             TE_Wrd{i}{j} = TEprepare(cfgP, VSdat{i}{j});
>         end
>     end
>     clear VSdat;
>     save('TE_Wrd', 'TE_Wrd');
> end
>
>  *The configuration and virtual sensor data, organized in a 3 x 15 cell
> of structures (condition by subject), are the input. The TEprepare
> substructure is added to the data structure of each individual condition x
> subject .mat file; these files are stored on disk independently.*
>
> % Use object_to_mat_conversion.m to replace individual condition x subject
> virtual sensor data
> % .mat files with their TE_Wrd equivalent
>
>  *I'm using a separate script to make some manipulations to the objects
> from disk; this will all eventually be integrated into the main pipeline.
> TRENTOOL seems to handle data output very differently from FieldTrip, and
> I've had trouble thinking through the most logical way to handle the data,
> so it's a bit haphazard right now.*
>
> load cond080sub01.mat
>
> cfgG     = [];
> cfgG.dim = cond080sub01.TEprepare.optdim;
> cfgG.tau = cond080sub01.TEprepare.opttau;
>
> if isfield(cond080sub01, 'TEprepare')
>     TEgroup_prepare(cfgG, fileCell);
> else
>     error('Need to run TEprepare before TEgroup_prepare');
> end
>
>  *For clarification, fileCell is a cell array with the name of each
> condition x subject .mat file, which, as I said before, collectively holds
> the same data as the 3 x 15 VSdat structure (condition x subject).*
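>
> A minimal way to build such a list (the cond*sub* filename pattern is taken
> from the example above; the condition numbers are placeholders):
>
> nConds   = 3;                             % 3 conditions
> Nsub     = 15;                            % 15 subjects
> condNums = [80 90 100];                   % placeholder condition labels
> fileCell = cell(1, nConds * Nsub);
> n = 0;
> for i = 1:nConds
>     for j = 1:Nsub
>         n = n + 1;
>         fileCell{n} = sprintf('cond%03dsub%02d.mat', condNums(i), j);
>     end
> end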
>
> % Replace .mat files with '_for_TEgroup_calculate' version in
> % object_to_mat_conversion.m
>
> % TE Group Calculate
>
> load cond080sub01.mat
> if isfield(cond080sub01, 'TEgroupprepare')
>     for i = 1:length(fileCell)
>         TEgroup_calculate(fileCell{i});
>     end
> else
>     error('Need to run TEgroup_prepare before TEgroup_calculate');
> end
>
>
>  *At this step I get the following error:*
>
> Error using transferentropy (line 337)
> TRENTOOL ERROR: not enough data points left after embedding
> Error in TEgroup_calculate (line 133)
> [TEresult] = transferentropy(cfg,data);
>
> % TE Group Stats
>
> cfgGSTAT                    = [];
> cfgGSTAT.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)];
> cfgGSTAT.design(2,1:2*Nsub) = [1:Nsub 1:Nsub];
>
> cfgGSTAT.uvar      = 1;
> cfgGSTAT.ivar      = 2;
> cfgGSTAT.fileidout = 'test_groupstats';
>
> TEgroup_stats(cfgGSTAT, fileCell);
>
>  *Given the error above, I have yet to reach this step, but it does not
> seem fundamentally different from normal FieldTrip statistics.*
>
> *****
>
>  In case my notes were not clear or you skipped to the bottom, *my
> primary concern is whether the error I'm getting in TEgroup_calculate is a
> pipeline issue* (I noticed that the example pipeline in TRENTOOL, the
> manual, and the published methods articles all seem to have slightly or
> significantly different pipeline compositions), *or whether the error is*
> due to ACT, Ragwitz optimization, or some other faulty parameterization *on
> my part, due to a lack of understanding of how transfer entropy works on a
> more theoretical/mathematical level*. If the latter is the case, is there
> any relatively straightforward way to conceptualize this, or is this
> something where I'm just going to have to keep reading and rereading until
> it eventually makes sense? I've already done quite a bit of that and it
> hasn't pierced my thick skull yet, but I'm sure it will eventually!
>
>  Thank you so much,
>
>  Max Cantor
>
>
> --
> Max Cantor
> Lab Manager
> Computational Neurolinguistics Lab
> University of Michigan
>
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> http://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>
>
> --
> ------------------------------------------------------------
>
> Patricia Wollstadt, PhD Student
>
> MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
>
> Heinrich Hoffmann Strasse 10, Haus 93 B, D-60528 Frankfurt am Main
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> http://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>



-- 
Max Cantor
Lab Manager
Computational Neurolinguistics Lab
University of Michigan

