[FieldTrip] Estimate coherence between conditions?

Maria Hakonen maria.hakonen at gmail.com
Fri Jul 7 08:08:10 CEST 2017


Hi Maria,

I think there is more than one solution for what you are aiming to do.
Maybe a more experienced user or developer could show you the most
straightforward way (?).

IMO, using LCMV is more direct for this application because with DICS you
will need to provide the reference signal (i.e., the source timecourse from
the other condition). Therefore, you will need to apply LCMV anyway.

You could apply a band-pass filter to the channel data before running LCMV,
so that the virtual channels already contain only the frequency band of
interest. Alternatively, you could obtain the virtual channels without
band-pass filtering and select the frequency bands of interest when computing
the coherence, as in the tutorial (see the ft_freqanalysis steps at
http://www.fieldtriptoolbox.org/tutorial/coherence#computing_the_coherence).
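
Something along these lines is what I have in mind for the second option. It
is only a rough, untested sketch; data_cond1, data_cond2, headmodel,
sourcemodel and dipidx are placeholders for your own raw data (same MEG
channels and the same number of trials in both conditions), volume conduction
model, source grid, and the index of the voxel of interest:

% 1) LCMV spatial filters, one per condition
cfg            = [];
cfg.covariance = 'yes';
tlck1 = ft_timelockanalysis(cfg, data_cond1);
tlck2 = ft_timelockanalysis(cfg, data_cond2);

cfg                 = [];
cfg.method          = 'lcmv';
cfg.headmodel       = headmodel;
cfg.grid            = sourcemodel;
cfg.lcmv.keepfilter = 'yes';
cfg.lcmv.fixedori   = 'yes';
src1 = ft_sourceanalysis(cfg, tlck1);
src2 = ft_sourceanalysis(cfg, tlck2);

% 2) virtual channels at the voxel of interest, combined trial-wise
%    into one "raw" data structure with two channels
vc       = [];
vc.label = {'roi_cond1'; 'roi_cond2'};
vc.time  = data_cond1.time;
for k = 1:numel(data_cond1.trial)
  vc.trial{k} = [src1.avg.filter{dipidx} * data_cond1.trial{k}; ...
                 src2.avg.filter{dipidx} * data_cond2.trial{k}];
end

% 3) coherence between the two virtual channels in the band of interest
cfg           = [];
cfg.method    = 'mtmfft';
cfg.output    = 'fourier';
cfg.taper     = 'dpss';
cfg.tapsmofrq = 3;
cfg.foilim    = [14 22];
freq = ft_freqanalysis(cfg, vc);

cfg        = [];
cfg.method = 'coh';
coh = ft_connectivityanalysis(cfg, freq);
% coh.cohspctrm contains the coherence between 'roi_cond1' and 'roi_cond2'
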
With DICS, in this case, the procedure seems more intricate to me: 1) obtain
the source timecourses of condition 2 with LCMV; 2) compute the cross-spectral
density between all data channels and each source timecourse (the reference
signal); 3) compute DICS for each reference signal. Of course, you don't need
to compute the coherence for the whole brain, but only for the source of
interest: for each reference signal, you could set cfg.grid.inside so that it
includes only the position of the voxel of interest (the same voxel as the
reference signal).
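
A rough, untested sketch of this DICS route, assuming the condition-2 virtual
channel 'roi_cond2' (obtained with LCMV as above) has already been appended
trial-wise as an extra channel to the condition-1 raw data in data_combined:

% cross-spectral density between all channels, including the reference
cfg           = [];
cfg.method    = 'mtmfft';
cfg.output    = 'fourier';
cfg.taper     = 'dpss';
cfg.tapsmofrq = 3;
cfg.foi       = 18;
freq = ft_freqanalysis(cfg, data_combined);

% DICS with the condition-2 source timecourse as reference channel
cfg           = [];
cfg.method    = 'dics';
cfg.refchan   = 'roi_cond2';
cfg.frequency = 18;
cfg.headmodel = headmodel;
cfg.grid      = sourcemodel;
source = ft_sourceanalysis(cfg, freq);
% source.avg.coh then holds, for every grid point, the coherence with the
% condition-2 reference; keep only the voxel of interest if you like
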
I hope this helps.

Best,
Maité

Hi Maité,

Many thanks for your advice again!

I have been wondering whether I could calculate the coherence directly from
the cross-spectra, without reference signals or virtual channels, by using
beamformer_dics. In beamformer_dics, it seems to be possible to define the
location of the dipole with which coherence is computed (i.e., cfg.refdip).
However, I am not sure whether it is possible to calculate the coherence
between the same brain region in two different conditions.
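
Something like the following (untested) is what I had in mind;
pos_of_interest, headmodel, sourcemodel and freq_cond1 are placeholders for
the grid position of the region of interest, the volume conductor, the source
grid and the sensor-level Fourier data of condition 1:

cfg           = [];
cfg.method    = 'dics';
cfg.refdip    = pos_of_interest;  % [x y z] of the reference dipole, head coordinates
cfg.frequency = 18;
cfg.headmodel = headmodel;
cfg.grid      = sourcemodel;
source = ft_sourceanalysis(cfg, freq_cond1);
% source.avg.coh would then be the coherence of every grid point with the
% reference dipole, but the reference and the scanned data both come from
% condition 1, so I don't see how to bring in condition 2 this way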

Best,
Maria

2017-06-29 12:57 GMT+03:00 Maria Hakonen <maria.hakonen at gmail.com>:

> Hi Maria,
> For obtaining the source timecourses (aka virtual channels), you can follow the tutorials pasted below.
>
> Best,
> Maité
> http://www.fieldtriptoolbox.org/tutorial/connectivity#extract_the_virtual_channel_time-series
> http://www.fieldtriptoolbox.org/tutorial/shared/virtual_sensors#extract_the_virtual_channel_time-series
>
> Hi Maité,
>
> Thanks for your answer again!
>
> However, I would need to calculate coherence within certain frequency bands and, therefore, I would like to use DICS. The examples in the links seem to use LCMV. Could you please let me know how I can compute coherence between conditions using DICS?
>
> Best,
>
> Maria
>
>
> 2017-06-26 12:45 GMT+03:00 Maria Hakonen <maria.hakonen at gmail.com>:
>
>> Hi Maria,
>> maybe in this case it is better to export the source timecourses, build a data matrix with them, and treat them in the same way as you did with the channels.
>>
>> Best,
>> Maité
>>
>>
>> Hi Maité,
>>
>> Could you please also let me know how to get the source timecourses?
>> source = ft_sourceanalysis(cfg, freq); only gives:
>> source =
>>
>>          freq: 18
>>     cumtapcnt: [180x1 double]
>>           dim: [19 15 15]
>>        inside: [4275x1 logical]
>>           pos: [4275x3 double]
>>        method: 'average'
>>           avg: [1x1 struct]
>>           cfg: [1x1 struct]
>>
>> Best,
>> Maria
>>
>> 2017-06-25 13:53 GMT+03:00 Maria Hakonen <maria.hakonen at gmail.com>:
>>
>>> Hi Maité,
>>>
>>> Thank you for your answer!
>>>
>>> I have managed to calculate the coherence between two conditions in sensor space in the way you suggested. However, I haven't managed to calculate the coherence between conditions in source space (i.e., Appendix 1 in http://www.fieldtriptoolbox.org/tutorial/coherence). ft_sourceanalysis doesn't have a channelcmb option. I wonder if anyone has a solution for this?
>>>
>>> BTW, I didn't receive the answer to my question by email but found it in the FieldTrip archive. However, I have received some other emails from the FieldTrip discussion list.
>>>
>>> Best,
>>>
>>> Maria
>>>
>>>
>>> Hi Maria,
>>> Here is a possible solution. First, rename the channels of one of the two conditions: for example, for condition 2, {'ch01cond2', 'ch02cond2', ...}. Then, append the data from both conditions. In ft_freqanalysis, specify all the channel combinations you want:
>>>
>>> cfg.channel    = {'MEG' 'ch01cond2' 'ch02cond2' ...};  % original MEG channels plus the renamed condition-2 channels
>>> cfg.channelcmb = {'ch01' 'ch01cond2'; 'ch02' 'ch02cond2'};  % pair each channel with its condition-2 counterpart
>>> As I understand, you could use the same channelcmb later on in ft_connectivityanalysis.
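>>>
>>> Untested, but roughly like this (data_cond1 and data_cond2 stand for your two raw data sets, with matching trials in both conditions):
>>>
>>> % rename the channels of condition 2
>>> data_cond2_ren       = data_cond2;
>>> data_cond2_ren.label = strcat(data_cond2.label, 'cond2');
>>>
>>> % append the two conditions channel-wise (trial k of condition 1 is
>>> % paired with trial k of condition 2)
>>> data_both = ft_appenddata([], data_cond1, data_cond2_ren);
>>>
>>> % cross-spectra for the channel pairs of interest
>>> cfg            = [];
>>> cfg.method     = 'mtmfft';
>>> cfg.output     = 'powandcsd';
>>> cfg.taper      = 'dpss';
>>> cfg.tapsmofrq  = 2;
>>> cfg.foilim     = [5 40];
>>> cfg.channelcmb = {'ch01' 'ch01cond2'; 'ch02' 'ch02cond2'};
>>> freq = ft_freqanalysis(cfg, data_both);
>>>
>>> % coherence for the same channel pairs
>>> cfg        = [];
>>> cfg.method = 'coh';
>>> coh = ft_connectivityanalysis(cfg, freq);
>>> % coh.cohspctrm has one row per channel pair listed in coh.labelcmb
>>>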
>>> I hope it helps.
>>> Best wishes,
>>> Maité
>>>
>>>
>>>
>>> Dear FieldTrip experts,
>>>
>>> I have just started to use FieldTrip and would like to estimate
>>> coherence between MEG responses measured in two different conditions from
>>> the same cortical areas. The example in Appendix 1 is close to what I would
>>> like to do:
>>> http://www.fieldtriptoolbox.org/tutorial/coherence
>>>
>>> However, in the example, coherence is calculated between the reference
>>> signal (EMG) and all MEG channels. Would it be possible to calculate
>>> coherence between each MEG channel in one condition and the same MEG
>>> channel in the other condition, that is:
>>> ch1 in cond1 vs. ch1 in cond2, ch2 in cond1 vs. ch2 in cond2, ...?
>>>
>>> As far as I understand, the example in Appendix 1 would instead do this:
>>> ch1 in cond1 vs. all channels in cond2, ch2 in cond1 vs. all channels in
>>> cond2, ...
>>>
>>> Best,
>>> Maria
>>>
>>>
>>
>