From N.vanKlink-2 at umcutrecht.nl Wed Mar 1 11:43:09 2017 From: N.vanKlink-2 at umcutrecht.nl (Klink-3, N.E.C. van) Date: Wed, 1 Mar 2017 10:43:09 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields Message-ID:

Dear all,

I want to do SAM beamformer source localization on single-trial EEG data. I would like to normalize the leadfields to correct for depth, as mentioned in the LCMV beamformer tutorial (http://www.fieldtriptoolbox.org/tutorial/beamformer_lcmv):

cfg = [];
cfg.elec      = hdr.elec;  % electrode positions
cfg.headmodel = vol;       % volume conduction headmodel
cfg.grid      = grid;      % normalized grid positions
cfg.channel   = {'EEG'};
cfg.normalize = 'yes';     % to remove depth bias (Q in eq. 27 of Van Veen et al., 1997)
lf = ft_prepare_leadfield(cfg);

However, when I look at what happens with cfg.normalize = 'yes', the following code is executed in ft_compute_leadfield, from line 570:

case 'yes'
  for ii=1:Ndipoles
    tmplf = lf(:, (3*ii-2):(3*ii));
    if normalizeparam==0.5
      % normalize the leadfield by the Frobenius norm of the matrix
      % this is the same as below in case normalizeparam is 0.5
      nrm = norm(tmplf, 'fro');
    else
      % normalize the leadfield by sum of squares of the elements of the leadfield matrix to the power "normalizeparam"
      % this is the same as the Frobenius norm if normalizeparam is 0.5
      nrm = sum(tmplf(:).^2)^normalizeparam;
    end
    if nrm>0
      tmplf = tmplf ./ nrm;
    end
    lf(:, (3*ii-2):(3*ii)) = tmplf;
  end

This seems to me to be independent of the dipole location, and it does not use an estimate of the noise spectrum as in eq. 27 of Van Veen et al., 1997. The DICS beamformer has the option to estimate the noise spectrum with 'projectnoise', but the SAM beamformer does not have that option. SAM does something with noise and a lambda, which I guess is noise regularization (beamformer_sam, from line 102). I use FieldTrip 20170212.

My main question: how do I correct the leadfields for depth bias?
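As a sanity check on the quoted code (plain MATLAB, no FieldTrip needed), the two branches are indeed equivalent for the default normalizeparam of 0.5:

```matlab
% Toy leadfield: 64 channels x 3 dipole orientations
tmplf = randn(64, 3);

% Branch 1: Frobenius norm
nrm1 = norm(tmplf, 'fro');

% Branch 2: sum of squares to the power normalizeparam, with the default 0.5
nrm2 = sum(tmplf(:).^2)^0.5;

% The two agree to machine precision, so cfg.normalize = 'yes' with the
% default normalizeparam is exactly a per-dipole Frobenius-norm scaling
assert(abs(nrm1 - nrm2) < 1e-10*nrm1)
```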
Thanks in advance, Nicole ------------------------------------------------------------------------------ De informatie opgenomen in dit bericht kan vertrouwelijk zijn en is uitsluitend bestemd voor de geadresseerde. Indien u dit bericht onterecht ontvangt, wordt u verzocht de inhoud niet te gebruiken en de afzender direct te informeren door het bericht te retourneren. Het Universitair Medisch Centrum Utrecht is een publiekrechtelijke rechtspersoon in de zin van de W.H.W. (Wet Hoger Onderwijs en Wetenschappelijk Onderzoek) en staat geregistreerd bij de Kamer van Koophandel voor Midden-Nederland onder nr. 30244197. Denk s.v.p aan het milieu voor u deze e-mail afdrukt. ------------------------------------------------------------------------------ This message may contain confidential information and is intended exclusively for the addressee. If you receive this message unintentionally, please do not use the contents but notify the sender immediately by return e-mail. University Medical Center Utrecht is a legal person by public law and is registered at the Chamber of Commerce for Midden-Nederland under no. 30244197. Please consider the environment before printing this e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 1 13:08:55 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 1 Mar 2017 12:08:55 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields In-Reply-To: References: Message-ID: <07BC6EA1-15AE-4528-B9CC-05BA838317F0@cfin.au.dk> Hi Nicole, Lead field normalization is a different approach than Van Veen’s method, which is often called the Neural Activity Index (NAI) and closely related to the “unit noise gain” or “weight normalization” concept you might see in some literature. 
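For reference, and in my own notation (a hedged sketch of how I read eqs. 26-27 of Van Veen et al., 1997, so please check against the paper): the NAI normalizes the estimated source variance by the variance the same spatial filter would produce from noise alone, with C the data covariance, Q the noise covariance, and L(q) the leadfield at location q:

```latex
\hat{\mathrm{Var}}(q) = \operatorname{tr}\!\left\{ \left[ L^{T}(q)\, C^{-1} L(q) \right]^{-1} \right\},
\qquad
\mathrm{NAI}(q) = \frac{\operatorname{tr}\!\left\{ \left[ L^{T}(q)\, C^{-1} L(q) \right]^{-1} \right\}}
                       {\operatorname{tr}\!\left\{ \left[ L^{T}(q)\, Q^{-1} L(q) \right]^{-1} \right\}}
```

Note that this is exactly what lead field normalization does not do: it rescales L(q) without any reference to C or Q.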
I have implemented the NAI in beamformer_lcmv.m, which you can run with:

cfg.method = 'lcmv';
cfg.lcmv.weightnorm = 'nai';

However, the equivalent has not been implemented in the other beamformer variants yet (SAM, DICS). You can still get output equivalent to SAM using the LCMV method if you use cfg.keeptrials = 'yes' and average the power of the resulting time series (in source.avg.mom). This would give you a measure of induced power changes (rather than evoked), like the SAM procedure would. Unfortunately this procedure is not yet documented, but it's not too tricky. (Please use a brand-new version of FieldTrip if you'd like to try this, as an old bug in the NAI orientation selection was inadvertently re-introduced in FieldTrip versions between September 2016 and last week.)

I personally find that the NAI gives more sensible results if you are contrasting something like post-stimulus activity to a pre-stimulus baseline. If you are instead contrasting two conditions against each other rather than a baseline, then the different normalization approaches should give (almost) the same results anyway.

Anyway, regarding lead field normalization: it does indeed do a voxel-by-voxel normalization, since it cycles through all the voxels in a for loop ('for ii=1:Ndipoles' on the second line). It is purely based on the properties of the lead field and, as you noticed, is unlike Van Veen's method in that it does not use the noise estimate at all.

BTW, I believe that the lead field "column normalization" approach has been more popular in the literature. This normalizes the x/y/z components of the lead field independently, rather than all together. You can try this with cfg.normalize = 'column' and see how the results compare.

Cheers,
Sarang

On 01 Mar 2017, at 11:43, Klink-3, N.E.C. van wrote: Dear all, I want to do SAM beamformer source localization on single trial EEG data.
_______________________________________________
fieldtrip mailing list
fieldtrip at donders.ru.nl
https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From anne.urai at gmail.com Wed Mar 1 19:38:44 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 1 Mar 2017 10:38:44 -0800 Subject: [FieldTrip] compiling ft_volumenormalise Message-ID:

Hi FieldTrippers,

I compile my code to run on the supercomputer cluster (which does not have many MATLAB licenses), which usually works fine when I do something like:

addpath('~/Documents/fieldtrip');
ft_defaults;
addpath('~/Documents/fieldtrip/external/spm8');
mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
    '-R', '-nodisplay', '-R', '-singleCompThread', fname);

However, compiling the ft_volumenormalise function gives me some problems.
Specifically, if source is the result of my beamformer analysis, this code

cfg = [];
cfg.parameter  = 'pow';
cfg.nonlinear  = 'no'; % can warp back to individual
cfg.template   = '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii';
cfg.write      = 'no';
cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage will bug
source = ft_volumenormalise(cfg, source);

works fine when running it within MATLAB. However, when I run the executable after compiling (which completes without error), a low-level SPM function throws the following error:

the input is source data with 16777216 brainordinates on a [256 256 256] grid
Warning: could not reshape "freq" to the expected dimensions
> In ft_datatype_volume (line 136)
  In ft_checkdata (line 350)
  In ft_volumenormalise (line 98)
  In B6b_sourceContrast_volNormalise (line 57)
Converting the coordinate system from ctf to spm
Undefined function 'fname' for input arguments of type 'struct'
Error in file_array (line 32)
Error in spm_create_vol>create_vol (line 77)
Error in spm_create_vol (line 16)
Error in volumewrite_spm (line 71)
Error in ft_write_mri (line 65)
Error in align_ctf2spm (line 168)
Error in ft_convert_coordsys (line 95)
Error in ft_volumenormalise (line 124)
Error in B6b_sourceContrast_volNormalise (line 57)
MATLAB:UndefinedFunction

I'd be very grateful for hints from anyone who's successfully compiled the ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer at compilation does not solve the problem.

Thanks,
— Anne E. Urai, MSc PhD student | Institut für Neurophysiologie und Pathophysiologie Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | Hamburg, Germany www.anneurai.net / @AnneEUrai
-------------- next part --------------
An HTML attachment was scrubbed...
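One untested guess, in case it helps anyone searching the archives: mcc's static dependency analysis can miss class methods that are only dispatched at run time, such as SPM's @file_array methods (file_array is where the error above originates). Explicitly bundling that class directory with mcc's '-a' option might help; the path below assumes Anne's layout and is only a speculative sketch, not a verified fix:

```matlab
% Speculative sketch (untested): add SPM class directories that the
% compiler's dependency analysis may have missed
spmpath = '~/Documents/fieldtrip/external/spm8';
mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
    '-a', fullfile(spmpath, '@file_array'), ...  % class dir assumed missing
    '-R', '-nodisplay', '-R', '-singleCompThread', fname);
```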
URL:

From braingirl at gmail.com Wed Mar 1 20:22:48 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 1 Mar 2017 14:22:48 -0500 Subject: [FieldTrip] error in filter_with_correction In-Reply-To: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> References: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> Message-ID:

Did you also try searching the mailing list archives? The same error has come up a few times.

What is your trial duration and sampling frequency? You'll need several seconds to get an accurate idea of what's going on in such low frequencies.

Have you tried just detrending and applying a 2 Hz low-pass filter? It seems like that might have essentially the same effect.

Hope one of those helps,
Teresa

On Fri, Feb 24, 2017 at 8:29 PM, Wong-Barnum, Mona wrote:
> Hello fellow FieldTrip'er:
>
> Can someone help me understand and hopefully fix the following runtime
> error message I am seeing (I searched a bit on the website documentation
> but didn't find anything):
>
> Error using filter_with_correction (line 51)
> Calculated filter coefficients have poles on or outside the unit circle and
> will not be stable. Try a higher cutoff frequency or a different
> type/order of filter.
>
> Error in filter_with_correction (line 51)
> error('Calculated filter coefficients have poles on or outside the unit
> circle and will not be stable. Try a higher cutoff frequency or a different
> type/order of filter.');
>
> Error in ft_preproc_bandpassfilter (line 286)
> filt = filter_with_correction(B,A,dat,dir,usefftfilt);
>
> Error in preproc (line 324)
> if strcmp(cfg.bpfilter, 'yes'), dat = ft_preproc_bandpassfilter(dat,
> fsample, cfg.bpfreq, cfg.bpfiltord, cfg.bpfilttype, cfg.bpfiltdir,
> cfg.bpinstabilityfix, cfg.bpfiltdf, cfg.bpfiltwintype, cfg.bpfiltdev,
> cfg.plotfiltresp, cfg.usefftfilt); end
>
> Error in ft_preprocessing (line 592)
> [cutdat{i}, label, time{i}, cfg] = preproc(dat, hdr.label(rawindx), tim,
> cfg, begpadding, endpadding);
>
> Error in test (line 25)
> data = ft_preprocessing ( cfg );
>
> Error in run (line 96)
> evalin('caller', [script ';']);
>
> Here is my script:
>
> addpath /path/to/my/fieldtrip
> ft_defaults
>
> % 1. MEG
> disp ( 'Reading 1.fif...' )
> cfg = [];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> disp ( 'Getting MEG channel 1...' )
> meg_channel = ft_channelselection ( 'MEG0111', data.label );
> cfg = [];
> cfg.channel = meg_channel;
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving meg...' )
> save meg.mat meg -v7.3;
> clearvars cfg meg;
>
> % 2. Low delta MEG
> disp ( 'Low delta MEG...' )
> cfg = [];
> cfg.bpfilter = 'yes';
> cfg.bpfreq = [0.1 2];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> cfg = [];
> cfg.channel = meg_channel;
> cfg.frequency = [0.1 2];
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving low delta meg...' )
> save low_delta_meg.mat meg -v7.3;
> clearvars cfg meg;
>
> Line #25 is the last "data = ft_preprocessing ( cfg );" line.
>
> If I do cfg.bpfreq = [2 4] then there is no error but I really like to get
> this low [0.1 2] range...any tips?
>
> Mona
>
> *********************************************
> Mona Wong
> Web & Mobile Application Developer
> San Diego Supercomputer Center
>
> Believing we are in control is an
> illusion that brings suffering.
> ********************************************* > > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Wed Mar 1 21:55:03 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 1 Mar 2017 15:55:03 -0500 Subject: [FieldTrip] marking artifacts by channel + trial Message-ID: Hello All, When performing visual artifact rejection, I want to be able to mark artifacts that occur during some specific trials and only on some specific channels. In the tutorials I see only ways to mark bad channels (i.e. across all trials) or bad trials (i.e. across all channels). Does FieldTrip handle marking artifacts restricted to some channel/trial combination? Thanks, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris.burle at univ-amu.fr Thu Mar 2 15:19:18 2017 From: boris.burle at univ-amu.fr (Boris BURLE) Date: Thu, 2 Mar 2017 15:19:18 +0100 Subject: [FieldTrip] Post-doc position in development of cognitive control, Marseille, France Message-ID: <0ce55d3b-c424-633f-225b-2495e615a120@univ-amu.fr> Dear colleagues, Please find below a post-doc position offer that may be of interest to Fieldtrip users: B. 
Burle

-------------------------------------
Post-doc research position in Developmental Psychology/Cognitive Neuroscience in Marseille, France

We are seeking a highly motivated fellow for a 2-year (potentially renewable) post-doc position to conduct EEG and structural MRI studies from children to young adults. This position is open within a larger project aiming at tracking the development of cognitive control from childhood to adulthood. In the first phase of the project, we have so far collected behavioral data in a large cohort of more than 400 participants (from 5 to 14 years old) performing conflict tasks. The second phase, in which oculometry (to extract pupil dilation and eye movements) and electromyography (to extract the so-called "partial errors") are recorded in another group of children (comparable age span), is currently being completed. Capitalizing on the results of the first two phases, the hired fellow will be mainly involved in the third phase of this project, which will study the cortical components related to executive control maturation. EEG (and EMG) will be recorded in children performing conflict tasks to track the maturation of the different electrophysiological markers of executive control. The same children will undergo a structural MRI scan to get precise anatomy and connectivity, along with resting-state activity. The recruited fellow will be in charge of the acquisition and processing of those data. The evolution of the EEG markers and of performance will be related to the maturation state of the different brain areas of interest and their connectivity. Candidates should hold a PhD in cognitive/developmental psychology/neuroscience. Expertise in either EEG or structural MRI is required. Experience with children is a real plus, and if this experience is in association with one of the two techniques listed above, that is a major advantage.
However, candidates having a strong background in one of those techniques but no experience with children are still encouraged to apply. Knowledge of a high-level programming language (Python, MATLAB, R...) is a real plus. The daily work language will be English, but given the large amount of interaction with children, non-French-speaking applicants would have to speak a minimum amount of French (French courses can be attended on site). The project is interdisciplinary, at the crossroads of developmental psychology, the cognitive neuroscience of cognitive control, and neuroimaging; the recruited fellow will hence interact with researchers in all three domains. Besides, the project is embedded in the vibrant, second-biggest "behavioral and brain sciences" community in France. State-of-the-art methodologies are accessible (research-dedicated MRI - Siemens latest-generation 3T Prisma scanner, MEG, robotized TMS, high-resolution EEG, etc.). Marseille is located in the south of France (Provence), on the shore of the Mediterranean Sea, and is known for its very nice weather and surroundings: it is bordered by the beautiful "Calanques", the Alps are within a 1h30 ride, and so are the major cultural cities of Provence (Aix-en-Provence, Avignon, Arles...). Salary is based on experience according to the CNRS (French National Center for Scientific Research) rules (and will be around 2000 € net). Applications are encouraged immediately, and will remain open until the position is filled. The position is available immediately. Please send applications (and/or requests for more information) to boris.burle at univ-amu.fr with [Post-Doc Devel] in the subject of the mail.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From braingirl at gmail.com Thu Mar 2 15:53:06 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Thu, 2 Mar 2017 09:53:06 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID:

No, not really.
The only way I've found to do that is to loop through my artifact rejection process on each trial individually, then merge them back together with NaNs filling in where there are artifacts, but then that breaks every form of analysis I want to do. :-P I wonder if it would work to fill in the artifacts with 0s instead of NaNs....I might play with that. Let me know if you're interested in some example code. ~Teresa On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > Hello All, > > When performing visual artifact rejection, I want to be able to mark > artifacts that occur during some specific trials and only on some specific > channels. In the tutorials I see only ways to mark bad channels (i.e. > across all trials) or bad trials (i.e. across all channels). Does FieldTrip > handle marking artifacts restricted to some channel/trial combination? > > Thanks, > Tim > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Thu Mar 2 15:55:14 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 2 Mar 2017 09:55:14 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Hi Teresa, Thanks for the reply. I'll take a look at your example if you don't mind sharing. Thanks! Tim On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote: > No, not really. 
The only way I've found to do that is to loop through my > artifact rejection process on each trial individually, then merge them back > together with NaNs filling in where there are artifacts, but then that > breaks every form of analysis I want to do. :-P > > I wonder if it would work to fill in the artifacts with 0s instead of > NaNs....I might play with that. Let me know if you're interested in some > example code. > > ~Teresa > > > On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > >> Hello All, >> >> When performing visual artifact rejection, I want to be able to mark >> artifacts that occur during some specific trials and only on some specific >> channels. In the tutorials I see only ways to mark bad channels (i.e. >> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >> handle marking artifacts restricted to some channel/trial combination? >> >> Thanks, >> Tim >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > Teresa E. Madsen, PhD > Research Technical Specialist: *in vivo *electrophysiology & data > analysis > Division of Behavioral Neuroscience and Psychiatric Disorders > Yerkes National Primate Research Center > Emory University > Rainnie Lab, NSB 5233 > 954 Gatewood Rd. NE > Atlanta, GA 30329 > (770) 296-9119 > braingirl at gmail.com > https://www.linkedin.com/in/temadsen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From martabortoletto at yahoo.it Fri Mar 3 09:16:53 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Fri, 3 Mar 2017 08:16:53 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy In-Reply-To: <1398938405.1509799.1488528843896@mail.yahoo.com> References: <1398938405.1509799.1488528843896.ref@mail.yahoo.com> <1398938405.1509799.1488528843896@mail.yahoo.com> Message-ID: <89519094.172506.1488529013260@mail.yahoo.com>

Dear all,

Please find below an announcement for a post-doc position to work on a project of TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to possibly interested candidates.

Cheers,
Marta Bortoletto and Anna Fertonani

Marta Bortoletto, PhD
Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli
Via Pilastroni 4, 25125 Brescia, Italy
Phone number: (+39) 0303501594
E-mail: marta.bortoletto at cognitiveneuroscience.it
web: http://www.cognitiveneuroscience.it/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Job description.pdf
Type: application/pdf
Size: 18043 bytes
Desc: not available
URL:

From marc.lalancette at sickkids.ca Fri Mar 3 18:22:22 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Fri, 3 Mar 2017 17:22:22 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B964C34@SKMBXX01.sickkids.ca>

Normalizing columns of the leadfield separately is not recommended.
It is not a rotationally invariant operation, meaning you will get different results depending on your choice of coordinate system, which in short means that it introduces a physically meaningless bias, thus potentially amplitude and localization distortions. Note that this is also true of the unit-noise-gain normalization formula for the vector beamformer of Sekihara (which may still be used in some software, but is not in Fieldtrip). I was planning on writing a short paper on this, but unfortunately never found the time. I had a poster at Biomag 2014. Here's the link, but note that I later found errors in the computations for the "source bias and resolution figures" so it's probably best to ignore them, though the general idea that there are orientation and possibly location biases in most vector formulae is still valid http://dx.doi.org/10.6084/m9.figshare.1148970 . Maybe I'll redo the figures and post a "corrected" version at some point. Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. From v.litvak at ucl.ac.uk Fri Mar 3 18:59:54 2017 From: v.litvak at ucl.ac.uk (Vladimir Litvak) Date: Fri, 3 Mar 2017 17:59:54 +0000 Subject: [FieldTrip] SPM course for MEG/EEG in London: May 8-10, 2017 Message-ID: Dear all, We are pleased to announce that our annual SPM course for MEG/EEG will take place this year from Monday May 8 to Wednesday May 10 2017. 
Hosted by University College London, the course will be held at Queen Square, a very central location in London (UK). The course will present instruction on the analysis of MEG and EEG data. The first two days will combine theoretical presentations with practical demonstrations of the different data analysis methods implemented in SPM. On the last day participants will have the opportunity to work on SPM tutorial data sets under the supervision of the course faculty. We also invite students to bring their own data for analysis. The course is suitable for both beginners and more advanced users. The topics that will be covered range from pre-processing and statistical analysis to source localization and dynamic causal modelling. The program is listed below.

Registration is now open. For full details see http://www.fil.ion.ucl.ac.uk/spm/course/london/ where you can also register. Available places are limited so please register as early as possible if you would like to attend!

----------------------
Monday May 8th (33 Queen square, basement)
9.00 - 9.30 Registration
9.30 - 9.45 SPM introduction and resources - Guillaume Flandin
9.45 - 10.30 What are we measuring with M/EEG? - Saskia Heibling
10.30 - 11.15 Data pre-processing - Hayriye Cagnan
Coffee
11.45 - 12.30 Data pre-processing demo - Sofie Meyer, Misun Kim
12.30 - 13.15 General linear model and classical inference - Christophe Phillips
Lunch
14.15 - 15.00 Multiple comparisons problem and solutions - Guillaume Flandin
15.00 - 15.45 Bayesian inference - Christophe Mathys
Coffee
16.15 - 17.45 Group M/EEG dataset analysis demo - Jason Taylor, Martin Dietz
17.45 - 18.30 Advanced applications of the GLM - Ashwani Jha, Bernadette van Wijk

Tuesday May 9th (33 Queen square, basement)
9.30 - 10.15 M/EEG source analysis - Gareth Barnes
10.15 - 11.15 M/EEG source analysis demo - Jose Lopez, Leonardo Duque
Coffee
11.45 - 12.30 The principles of dynamic causal modelling - Bernadette van Wijk
12.30 - 13.15 DCM for evoked responses - Ryszard Auksztulewicz
Lunch
14.15 - 15.00 DCM for steady state responses - Rosalyn Moran
15.00 - 15.45 DCM demo - Richard Rosch, Tim West
Coffee
16.15 - 17.00 Bayesian model selection and averaging - Peter Zeidman
17.00 - 18.30 Clinic, questions & answers - Karl Friston
19.00 - ... Social Event

Wednesday May 10th
9.30 - 17.00 Practical hands-on session in UCL computer class rooms. Participants can either work on SPM tutorial datasets or on their own data with the help of the faculty. There will also be an opportunity to ask questions in small tutorial groups for further discussions on the topics of the lectures.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From braingirl at gmail.com Fri Mar 3 23:31:04 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Fri, 3 Mar 2017 17:31:04 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID:

Here's a rough sketch of my approach, with one custom function attached. If you or others find it useful, maybe we can think about ways to incorporate it into the FieldTrip code.
I've been working mostly with scripts, but you've inspired me to work on functionizing the rest of it so it's more shareable. So, assuming raw multichannel data has been loaded into FieldTrip structure 'data' with unique trial identifiers in data.trialinfo... for ch = 1:numel(data.label) %% pull out one channel at a time cfg = []; cfg.channel = data.label{ch}; datch{ch} = ft_selectdata(cfg, data); %% identify large z-value artifacts and/or whatever else you might want cfg = []; cfg.artfctdef.zvalue.channel = 'all'; cfg.artfctdef.zvalue.cutoff = 15; cfg.artfctdef.zvalue.trlpadding = 0; cfg.artfctdef.zvalue.fltpadding = 0; cfg.artfctdef.zvalue.artpadding = 0.1; cfg.artfctdef.zvalue.rectify = 'yes'; [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); %% replace artifacts with NaNs cfg = []; cfg.artfctdef.zvalue.artifact = artifact.zvalue; cfg.artfctdef.reject = 'nan'; datch{ch} = ft_rejectartifact(cfg,datch{ch}); end %% re-merge channels data = ft_appenddata([],datch); %% mark uniform NaNs as artifacts when they occur across all channels % and replace non-uniform NaNs (on some but not all channels) with zeroes, saving times [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, see attached %% reject artifacts by breaking into sub-trials cfg = []; cfg.artfctdef.nan2zero.artifact = artifact; cfg.artfctdef.reject = 'partial'; data = ft_rejectartifact(cfg,data); %% identify real trials trlinfo = unique(data.trialinfo,'rows','stable'); for tr = 1:size(trlinfo,1) %% calculate trial spectrogram cfg = []; cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); cfg.keeptrials = 'no'; % refers to sub-trials cfg.method = 'mtmconvol'; cfg.output = 'powandcsd'; cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz cfg.tapsmofrq = cfg.foi/10; % smooth by 10% cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W cfg.toi = '50%'; cfg.pad = 'nextpow2'; freq = ft_freqanalysis(cfg,data); %% replace powspctrm & crsspctrum values with NaNs 
  % where t_ftimwin (or wavlen for wavelets) overlaps with artifact
  for ch = 1:numel(freq.label)
    badt = [times{tr,ch}];
    if ~isempty(badt) && any(...
        badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ...
        badt < (max(freq.time) + max(freq.cfg.t_ftimwin)))
      ci = find(any(strcmp(freq.label{ch}, freq.labelcmb), 2)); % all channel pairs involving this channel
      for t = 1:numel(freq.time)
        for f = 1:numel(freq.freq)
          mint = freq.time(t) - freq.cfg.t_ftimwin(f);
          maxt = freq.time(t) + freq.cfg.t_ftimwin(f);
          if any(badt > mint & badt < maxt)
            freq.powspctrm(ch,f,t) = NaN;
            freq.crsspctrm(ci,f,t) = NaN;
          end
        end
      end
    end
  end

  %% save corrected output
  save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3');
end

On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote:
> Hi Teresa,
>
> Thanks for the reply. I'll take a look at your example if you don't mind
> sharing. Thanks!
>
> Tim
>
> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote:
>
>> No, not really. The only way I've found to do that is to loop through my
>> artifact rejection process on each trial individually, then merge them back
>> together with NaNs filling in where there are artifacts, but then that
>> breaks every form of analysis I want to do. :-P
>>
>> I wonder if it would work to fill in the artifacts with 0s instead of
>> NaNs....I might play with that. Let me know if you're interested in some
>> example code.
>>
>> ~Teresa
>>
>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote:
>>
>>> Hello All,
>>>
>>> When performing visual artifact rejection, I want to be able to mark
>>> artifacts that occur during some specific trials and only on some specific
>>> channels. In the tutorials I see only ways to mark bad channels (i.e.
>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip
>>> handle marking artifacts restricted to some channel/trial combination?
>>>
>>> Thanks,
>>> Tim
>>>
>>> _______________________________________________
>>> fieldtrip mailing list
>>> fieldtrip at donders.ru.nl
>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>>>

-- 
Teresa E. Madsen, PhD
Research Technical Specialist: *in vivo* electrophysiology & data analysis
Division of Behavioral Neuroscience and Psychiatric Disorders
Yerkes National Primate Research Center
Emory University
Rainnie Lab, NSB 5233
954 Gatewood Rd. NE
Atlanta, GA 30329
(770) 296-9119
braingirl at gmail.com
https://www.linkedin.com/in/temadsen

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
function [artifact,data,times] = artifact_nan2zero_TEM(data)
% ARTIFACT_NAN2ZERO_TEM marks NaNs that occur uniformly across all channels
% as artifacts, taking FT format data structure & returning the same
% format as ft_artifact_xxx, for input to ft_rejectartifact. Non-uniform
% NaNs (those not present on all channels) are replaced with 0s to avoid
% breaking analysis functions. Also returns times of replaced NaNs by
% trial & channel, so they can be changed back to NaNs in freq output.
%
% written 3/2/17 by Teresa E. Madsen

artifact = [];
times    = cell(numel(data.trial), numel(data.label));

for tr = 1:numel(data.trial)
  % find NaNs to mark as artifacts (present uniformly across all channels)
  trlnan = isnan(data.trial{tr}); % identify NaNs by channel & timepoint
  allnan = all(trlnan,1);         % need to specify dim in case of single channel

  % find, save timepoints, & replace non-uniform NaNs (not on all chs) w/ 0
  replacenan = trlnan & repmat(~allnan,size(trlnan,1),1);
  for ch = 1:numel(data.label)
    times{tr,ch} = data.time{tr}(replacenan(ch,:)); % ID before replacing
  end
  data.trial{tr}(replacenan) = 0; % replace these w/ 0s

  if any(allnan)
    % determine the file sample #s for this trial
    trsamp = data.sampleinfo(tr,1):data.sampleinfo(tr,2);
    while any(allnan) % start from the end so sample #s don't shift
      endnan = find(allnan,1,'last');
      allnan = allnan(1:endnan); % remove any non-NaNs after this
      % find last non-NaN before the NaNs
      beforenan = find(~allnan,1,'last');
      if isempty(beforenan) % if no more non-NaNs
        begnan = 1;
        allnan = false; % while loop ends
      else % still more to remove - while loop continues
        begnan = beforenan + 1;
        allnan = allnan(1:beforenan); % remove the identified NaNs
      end
      % identify file sample #s that correspond to beginning and end of
      % this chunk of NaNs and append to artifact
      artifact = [artifact; trsamp(begnan) trsamp(endnan)]; %#ok
    end % while any(allnan)
  end % if any(allnan)
end % for tr = 1:numel(data.trial)
end

From gaur-p at email.ulster.ac.uk Mon Mar 6 13:49:16 2017
From: gaur-p at email.ulster.ac.uk (Pramod Gaur)
Date: Mon, 6 Mar 2017 12:49:16 +0000
Subject: [FieldTrip] Problem in buffer connection
Message-ID: <518001d29678$12cbe540$3863afc0$@email.ulster.ac.uk>

Dear community,

My name is Pramod Gaur and I am a PhD student at Ulster University in the UK, working on Brain-Computer Interfaces. Currently I am trying to implement the real-time classification problem mentioned in the tutorials. We have a Neuromag Elekta MEG machine.
I tried to execute the following commands, but it hangs:

strcom = 'buffer://ip-address-of-acquisition-machine:1972';
hdr = ft_read_header(strcom, 'cache', true);

I executed the command ./neuromag2ft on the acquisition computer.

Can anybody please suggest how this problem could be resolved? Any help would be highly appreciated.

Best Regards,
Pramod Gaur

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From changa5 at mcmaster.ca Mon Mar 6 19:04:31 2017
From: changa5 at mcmaster.ca (Andrew Chang)
Date: Mon, 6 Mar 2017 13:04:31 -0500
Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix?
Message-ID: 

Dear Fieldtrip users,

I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using the colin27 template) to the EEG coordinate system, and then reslicing the MRI onto a cubic grid. However, I found that ft_volumereslice rotates the MRI image, which seems weird.

This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below):

[image: Inline image 1]

However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below):

[image: Inline image 3]

I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on the xyz dimensions when I use the 'headshape' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters seems not to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem.

Any advice or help would be much appreciated! Thank you all in advance!
Here is the .mat file of what I have done:
https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0

Here is my code:

%% load MRI
[mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii');

%% load elec locations
% I do not have the channel location or the headshape file, so I use a
% template cap to build the channel locations and headshape
load('chanCfg')
sphcoor = [Theta,Phi]';
cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coordinates into xyz
elec.elecpos = cartcoor;
elec.chanpos = cartcoor;
elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels
elec.unit = 'cm';

shape.pos = elec.elecpos;
shape.label = elec.label;
shape.unit = elec.unit;
shape.coordsys = 'spm';

%% Coregister the anatomical MRI to the EEG coordinate system
cfg = [];
cfg.method = 'interactive';
cfg.coordsys = 'spm';
[mri_realigned1] = ft_volumerealign(cfg, mri_orig);

cfg = [];
mri_realigned2 = [];
cfg.method = 'headshape';
cfg.coordsys = 'spm';
cfg.headshape = shape;
[mri_realigned2] = ft_volumerealign(cfg, mri_orig);
% key in the following parameters for controlling the alignment
% rotation:  [0, 0, 0.5]
% scale:     [0.95, .8, .8]
% translate: [0, 15, 0]

cfg = [];
cfg.resolution = 1;
cfg.xrange = [-100 100];
cfg.yrange = [-110 110];
cfg.zrange = [-50 120];
mri_resliced = ft_volumereslice(cfg, mri_realigned2);

Best,
Andrew

-- 
Andrew Chang, Ph.D. Candidate
Vanier Canada Graduate Scholar
http://changa5.wordpress.com/
Auditory Development Lab
Department of Psychology, Neuroscience & Behaviour
McMaster University

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: realigned.jpg
Type: image/jpeg
Size: 157286 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: resliced.jpg
Type: image/jpeg
Size: 145531 bytes
Desc: not available
URL: 

From bqrosen at ucsd.edu Tue Mar 7 03:38:22 2017
From: bqrosen at ucsd.edu (Burke Rosen)
Date: Tue, 7 Mar 2017 02:38:22 +0000
Subject: [FieldTrip] units of the leadfield matrix
Message-ID: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU>

Hello,

What are the units of the leadfield matrix produced by ft_compute_leadfield for EEG, gradiometers, and magnetometers? In particular, when using the OpenMEEG BEM method.

Thank you,

Burke Rosen

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rikkert.hindriks at upf.edu Tue Mar 7 08:53:51 2017
From: rikkert.hindriks at upf.edu (HINDRIKS, RIKKERT)
Date: Tue, 7 Mar 2017 08:53:51 +0100
Subject: [FieldTrip] units of the leadfield matrix
In-Reply-To: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU>
References: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU>
Message-ID: 

https://mailman.science.ru.nl/pipermail/fieldtrip/2015-August/009561.html

On Tue, Mar 7, 2017 at 3:38 AM, Burke Rosen wrote:
> Hello,
>
> What are the units of the leadfield matrix produced by
> ft_compute_leadfield for EEG, gradiometers, and magnetometers?
>
> In particular, when using the OpenMEEG BEM method.
>
> Thank you,
>
> Burke Rosen
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jan.schoffelen at donders.ru.nl Tue Mar 7 09:17:50 2017
From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs))
Date: Tue, 7 Mar 2017 08:17:50 +0000
Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix?
In-Reply-To: 
References: 
Message-ID: <33D0E3BA-5A8B-4262-80D1-99A5AB94C268@donders.ru.nl>

Hi Andrew,

What's the point in doing the second, headshape-based alignment?
I suppose that the template electrode positions are defined in a different coordinate system than 'spm'? If so, be aware that these template positions probably do not nicely match the reconstructed head surface from the template MRI, so you need to do the headshape-based alignment by hand, since the automatic ICP algorithm will probably get caught in an inappropriate local minimum. As long as you don't rotate around the z-axis, I would assume that the 'rotation' would go away.

Note that the rotation of the image itself (as per ft_volumereslice) is not the problem, but the fact that it is rotated probably is, because that suggests that your coregistration between anatomy and electrodes does not make sense.

Best,
Jan-Mathijs

J.M.Schoffelen, MD PhD
Senior Researcher, VIDI-fellow - PI, language in interaction
Telephone: +31-24-3614793
Physical location: room 00.028
Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands

On 06 Mar 2017, at 19:04, Andrew Chang > wrote:

Dear Fieldtrip users,

I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using the colin27 template) to the EEG coordinate system, and then reslicing the MRI onto a cubic grid. However, I found that ft_volumereslice rotates the MRI image, which seems weird.

This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below):

However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below):

I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on the xyz dimensions when I use the 'headshape' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters seems not to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem.

Any advice or help would be much appreciated!
Thank you all in advance! Here is the .mat file of what I have done: https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0 Here is my code %% load MRI [mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii'); %% load elec locations % I do not have the channel location or the headshape file, so I use a template cap to build the channel locations and headshape load('chanCfg') sphcoor = [Theta,Phi]'; cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coorfinates into xyz elec.elecpos = cartcoor; elec.chanpos = cartcoor; elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels elec.unit = 'cm'; shape.pos = elec.elecpos; shape.label = elec.label; shape.unit = elec.unit ; shape.coordsys = 'spm'; %% Coregister the anatomical MRI to the EEG coordinate system cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'spm'; [mri_realigned1] = ft_volumerealign(cfg, mri_orig); cfg = []; mri_realigned2 = []; cfg.method = 'headshape'; cfg.coordsys = 'spm'; cfg.headshape = shape; [mri_realigned2] = ft_volumerealign(cfg, mri_orig); % key in the following parameter for controlling the alignment % rotation: [0,0,0.5] % scale: [0.95, .8, .8] % translate: [0, 15, 0] cfg = []; cfg.resolution = 1; cfg.xrange = [-100 100]; cfg.yrange = [-110 110]; cfg.zrange = [-50 120]; mri_resliced = ft_volumereslice(cfg, mri_realigned2); Best, Andrew -- Andrew Chang, Ph.D. Candidate Vanier Canada Graduate Scholar http://changa5.wordpress.com/ Auditory Development Lab Department of Psychology, Neuroscience & Behaviour McMaster University _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jens.klinzing at uni-tuebingen.de Tue Mar 7 23:36:04 2017
From: jens.klinzing at uni-tuebingen.de (=?ISO-8859-1?Q?=22Jens_Klinzing=2C_Universit=E4t_T=FCbingen=22?=)
Date: Tue, 07 Mar 2017 23:36:04 +0100
Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni
Message-ID: <58BF35D4.7060400@uni-tuebingen.de>

Dear Fieldtrip community,

When calling ft_prepare_sourcemodel to create an individual sourcemodel, I get quite different 'inside' definitions for the same subject when
a) providing an unsegmented MRI and warping to the template MNI (see attachment: green)
b) providing an already segmented MRI (see attachment: blue)

In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel).

Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? Fieldtrip does not allow providing an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment, but they seem to be used only when I already provide a segmented MRI (so they can't help me here).

I could just use the cfg.inwardshift option to fix the issue, but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation.

Thanks in advance for your help!

All the best,
Jens

-------------- next part --------------
A non-text attachment was scrubbed...
Name: sourcemodel green_warpmni blue_nowarp_onsegmentedmri.PNG Type: image/png Size: 55645 bytes Desc: not available URL: From m.chait at ucl.ac.uk Wed Mar 8 01:04:46 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Wed, 8 Mar 2017 00:04:46 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London Key Requirements The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk). 
Maria Chait PhD
m.chait at ucl.ac.uk
Reader in Auditory Cognitive Neuroscience
Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/

UCL Ear Institute
332 Gray's Inn Road
London WC1X 8EE

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ainsley.temudo at nyu.edu Wed Mar 8 07:52:03 2017
From: ainsley.temudo at nyu.edu (Ainsley Temudo)
Date: Wed, 8 Mar 2017 10:52:03 +0400
Subject: [FieldTrip] Source Reconstruction
Message-ID: 

Hi FieldTrip Experts,

I am trying to perform source reconstruction, and I am having trouble coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong.

First I read in my MRI and determine the coordinate system, which is LPS:

mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii');
mri = ft_determine_coordsys(mriunknown, 'interactive', 'yes')

Next I realign to the CTF coordinate system by marking the NAS, LPA, RPA:

cfg = [];
cfg.method = 'interactive';
cfg.coordsys = 'ctf';
mri_ctf = ft_volumerealign(cfg, mri);

I read in the sensor information and added the coordinates for the marker positions. We have five marker positions; the three I picked were the left and right ear markers and the middle forehead marker.
grad = ft_read_sens('srcLocTest01_FT_01.con');

grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10;
grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10;
grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10;
grad.fid.label = {'NAS' 'LPA' 'RPA'};

I then put the template marker point coordinates into the configuration, which were taken from the mri_ctf:

cfg = [];
cfg.method = 'fiducial';
cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10;
cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10;
cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10;
cfg.target.label = {'NAS' 'LPA' 'RPA'};

grad_aligned = ft_sensorrealign(cfg, grad);

When I use ft_sensorrealign I get the following errors:

Undefined function or variable 'lab'.

Error in channelposition (line 314)
n = size(lab,2);

Error in ft_datatype_sens (line 328)
[chanpos, chanori, lab] = channelposition(sens);

Error in ft_sensorrealign (line 212)
elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011)

When I use ft_electroderealign I get the following errors:

Error using ft_fetch_sens (line 192)
no electrodes or gradiometers specified.

Error in ft_electroderealign (line 195)
elec_original = ft_fetch_sens(cfg);

Hope you can help me figure out why I'm getting these errors.

Thanks,
Ainsley

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jan.schoffelen at donders.ru.nl Wed Mar 8 08:26:38 2017
From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs))
Date: Wed, 8 Mar 2017 07:26:38 +0000
Subject: [FieldTrip] Source Reconstruction
In-Reply-To: 
References: 
Message-ID: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl>

Hi Ainsley,

Why would you want to use sensorrealign/electroderealign since you have MEG data? The former functions may be needed for EEG electrodes, not for MEG sensors.
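For MEG the coregistration usually goes the other way: the anatomy is brought into the coordinate system of the sensors (as was already done with ft_volumerealign above), and the gradiometer array itself is left untouched. A rough, untested sketch of a visual sanity check, reusing the variable names from the message above and assuming the standard FieldTrip plotting options:

```matlab
% sketch: check MEG sensor / MRI coregistration visually (untested)
grad = ft_convert_units(grad, 'mm');  % make sensor units match the realigned MRI
figure; hold on
ft_plot_ortho(mri_ctf.anatomy, 'transform', mri_ctf.transform, 'style', 'intersect');
ft_plot_sens(grad);                   % the sensor array should surround the head
```

If the sensors do not enclose the head in this plot, the fiducial definitions (grad.fid vs. the points clicked in ft_volumerealign) are the first thing to re-check.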
Best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 07:52, Ainsley Temudo > wrote: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. first I read in my MRI and determine the coordinate system which is LPS. mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') next I realign to the CTF coordinate system by marking the NAS LPA, RPA cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mir); I read in the sensor information and added in the coordinates for the marker positions. we have five marker positions, the three I picked were the left and right ear markers and the middle forehead marker. grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point cordinates into the configuration which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); when I use ft_sensorrealign I get the following errors : Undefined function or variable 'lab'. 
Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) when I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. Thanks, Ainsley _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 08:27:30 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 07:27:30 +0000 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni References: Message-ID: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Hi Jens, What does the ‘green’ point cloud look like relative to the blue points when you switch off the non-linear step in recipe a)? 
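For reference, recipe a) with the non-linear step switched off would look roughly as follows in FieldTrip versions of that era (a sketch; template_grid stands for a template sourcemodel loaded from fieldtrip/template/sourcemodel, and mri is the unsegmented individual MRI):

```matlab
% sketch: MNI-warped grid with only the linear (affine) part of the warp,
% for comparison against the full non-linear warp
cfg = [];
cfg.grid.warpmni   = 'yes';
cfg.grid.template  = template_grid;  % e.g. standard_sourcemodel3d10mm
cfg.grid.nonlinear = 'no';           % 'yes' would enable the non-linear step
cfg.mri            = mri;
sourcemodel_lin    = ft_prepare_sourcemodel(cfg);
```

Comparing sourcemodel_lin.pos against the two point clouds should show whether the non-linear warp (and hence the underlying segmentation) is responsible for the oversized inside definition.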
JM > On 07 Mar 2017, at 23:36, Jens Klinzing, Universität Tübingen wrote: > > Dear Fieldtrip community, > when calling ft_prepare_sourcemodel to create an individual sourcemodel I get quite different 'inside' definitions for the same subject when > a) providing an unsegmented MRI and warping to the template MNI (see attachment: green) > b) when providing an already segmented MRI (see attachment: blue) > > In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel). > > Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? > Fieldtrip does not allow to provide an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment but they seem to be used only when I already provide a segmented MRI (so they can't help me here). > > I could just use the cfg.inwardshift option to fix the issue but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation. > > Thanks in advance for your help! > > All the best, > Jens > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From ainsley.temudo at nyu.edu Wed Mar 8 09:03:58 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 12:03:58 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
I originally used Ft_sensoralign, but I got the error messages one of which said 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead' so thats why I used electrode realign instead, even though it's MEG data. I've been following this page to do the realignment. http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa if I use sensorrealign how should I deal with these error messages? Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) is there another way to realign my anatomical with my MEG sensors without using ft_sensorrealign? Thanks, Ainsley On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M. (Jan Mathijs) < jan.schoffelen at donders.ru.nl> wrote: > Hi Ainsley, > > Why would you want to use sensorrealign/electroderealign since you have > MEG-data? The former functions may be needed for EEG electrodes, not for > MEG sensors. > > Best wishes, > Jan-Mathijs > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 <+31%2024%20361%204793> > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > On 08 Mar 2017, at 07:52, Ainsley Temudo wrote: > > Hi FieldTrip Experts, > > I am trying to perform source reconstruction, and I am having trouble with > coregistering my anatomical with the sensors. The MEG system we're using is > Yokogawa and the anatomical is a NIFTI file. I get some errors when using > ft_sensorrealign and ft_electroderealign. I will go through the steps I > took before getting to this stage, as maybe I have done something wrong. > > first I read in my MRI and determine the coordinate system which is LPS. 
> > mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); > mri = ft_determine_coordsys(mriunknown, 'interactive','yes') > > next I realign to the CTF coordinate system by marking the NAS LPA, RPA > > cfg = []; > cfg.method = 'interactive'; > cfg.coordsys = 'ctf'; > > mri_ctf = ft_volumerealign(cfg, mir); > > I read in the sensor information and added in the coordinates for the > marker positions. we have five marker positions, the three I picked were > the left and right ear markers and the middle forehead marker. > > grad=ft_read_sens('srcLocTest01_FT_01.con'); > > > > > grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; > grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; > grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; > > > grad.fid.label = {'NAS' 'LPA' 'RPA'}; > > I then put the template marker point cordinates into the configuration > which were taken from the mri_ctf > > cfg = []; > cfg.method = 'fiducial'; > cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; > cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; > cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; > > > cfg.target.label = {'NAS' 'LPA' 'RPA'}; > > > > grad_aligned = ft_sensorrealign(cfg, grad); > > when I use ft_sensorrealign I get the following errors : > > Undefined function or variable 'lab'. > > Error in channelposition (line 314) > n = size(lab,2); > > Error in ft_datatype_sens (line 328) > [chanpos, chanori, lab] = channelposition(sens); > > Error in ft_sensorrealign (line 212) > elec_original = ft_datatype_sens(elec_original); % ensure up-to-date > sensor description (Oct 2011) > > > when I use ft_electroderealign I get the following errors: > > Error using ft_fetch_sens (line 192) > no electrodes or gradiometers specified. > > Error in ft_electroderealign (line 195) > elec_original = ft_fetch_sens(cfg); > > > Hope you can help me figure out why I'm getting these errors. 
> Thanks, > Ainsley > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > > > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 <+31%2024%20361%204793> > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- *Ainsley Temudo* Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 09:15:48 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 08:15:48 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Ainsley, I have never worked with ‘yokogawa’ data myself, so I can’t be of much help. The documentation you point to is already several years old, and appears not to have been actively maintained. Most likely the error you get is caused by incompatibility between the current version of the FieldTrip code, and the example code provided. Perhaps someone that has recently done coregistration between anatomical data and yokogawa data can chime in? Best wishes, Jan-Mathijs On 08 Mar 2017, at 09:03, Ainsley Temudo > wrote: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
I originally used ft_sensorrealign, but I got error messages, one of which said 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead', so that's why I used ft_electroderealign instead, even though it's MEG data. I've been following this page to do the realignment: http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa If I use sensorrealign, how should I deal with these error messages? Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) Is there another way to realign my anatomical with my MEG sensors without using ft_sensorrealign? Thanks, Ainsley On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M. (Jan Mathijs) wrote: Hi Ainsley, Why would you want to use sensorrealign/electroderealign since you have MEG data? The former functions may be needed for EEG electrodes, not for MEG sensors. Best wishes, Jan-Mathijs On 08 Mar 2017, at 07:52, Ainsley Temudo wrote: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. First I read in my MRI and determine the coordinate system, which is LPS.
mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') Next I realign to the CTF coordinate system by marking the NAS, LPA, RPA: cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mri); I read in the sensor information and added in the coordinates for the marker positions. We have five marker positions; the three I picked were the left and right ear markers and the middle forehead marker. grad = ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point coordinates, which were taken from the mri_ctf, into the configuration: cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); When I use ft_sensorrealign I get the following errors: Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) When I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors.
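A fiducial-based alignment can also be attempted without ft_sensorrealign by constructing the head-coordinate transform directly. The following is an untested sketch, not a verified fix: it assumes the ft_headcoordinates and ft_transform_geometry helpers from FieldTrip's utilities folder behave as documented, and that all positions are in cm:

```matlab
% Untested sketch: align the gradiometer description to CTF head coordinates
% using the three measured fiducials (NAS, LPA, RPA). Verify the behavior of
% ft_headcoordinates and ft_transform_geometry in your FieldTrip version.
nas = grad.fid.pnt(1,:);
lpa = grad.fid.pnt(2,:);
rpa = grad.fid.pnt(3,:);
T = ft_headcoordinates(nas, lpa, rpa, 'ctf');   % device space -> ctf head space
grad_aligned = ft_transform_geometry(T, grad);  % apply to the whole grad struct
```

Since the MRI was realigned to CTF coordinates using the same anatomical landmarks, sensors expressed in this fiducial-defined CTF frame should, up to digitization error, line up with mri_ctf.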
Thanks, Ainsley

From jens.klinzing at uni-tuebingen.de Wed Mar 8 10:22:30 2017 From: jens.klinzing at uni-tuebingen.de (Jens Klinzing, Uni Tübingen) Date: Wed, 08 Mar 2017 10:22:30 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Message-ID: <58BFCD56.2080508@uni-tuebingen.de> Hi Jan-Mathijs, the size difference is still there with cfg.grid.nonlinear = no. Best, Jens > Schoffelen, J.M. (Jan Mathijs) > Wednesday, 8 March 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)? > > JM -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 101843 bytes Desc: not available URL: From seymourr at aston.ac.uk Wed Mar 8 11:19:13 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Wed, 8 Mar 2017 10:19:13 +0000 Subject: [FieldTrip] Source Reconstruction Message-ID: Hi Ainsley, Good to see you're using Fieldtrip + Yokogawa data as well :D As I'm sure you're aware the issue is that "unlike other systems, the Yokogawa system software does not automatically analyse its sensor locations relative to fiducial coils". One workaround option is to do your coregistration in the Yokogawa/KIT software MEG160 and then export the sensor locations. You can then follow a more standard FT coregistration route without the need to use ft_sensorrealign. As Jan Mathijs said the http://www.fieldtriptoolbox.org/getting_started/yokogawa page is very outdated, so I will update it at some point in the future with more relevant info + updated code for sensor realignment. Many thanks, Robert Seymour -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 8 11:34:03 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 8 Mar 2017 10:34:03 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: <1488969241.5011.5.camel@cfin.au.dk> Hi Ainsley, You might consider realigning your MEG to your MRI, rather than the other way around. Our group typically does it this way to also simplify some other aspects of our pipeline, in particular, to simplify (re-)use of BEM head models and plotting the final source maps on the participant's own MRI or the MNI template. You can find examples of our scripts on github: https://github.com/meeg-cfin/nemolab Check out basics/nemo_mriproc.m -- you may need to add your particular yokogawa system to line 45. 
(Note that I've tested this procedure on Elekta, CTF, and 4D/BTi data, but not yet Yokogawa.) An example of how to put all the pipeline pieces together for a basic LCMV source analysis and visualization of ERF data is given in: basics/nemo_sourcelocER.m Best wishes, Sarang On Wed, 2017-03-08 at 08:15 +0000, Schoffelen, J.M. (Jan Mathijs) wrote: > Hi Ainsley, > > I have never worked with ‘yokogawa’ data myself, so I can’t be of > much help. The documentation you point to is already several years > old, and appears not to have been actively maintained. Most likely > the error you get is caused by incompatibility between the current > version of the FieldTrip code and the example code provided. Perhaps > someone that has recently done coregistration between anatomical data > and yokogawa data can chime in? > > Best wishes, > Jan-Mathijs
> > Thanks,
> > Ainsley

From ainsley.temudo at nyu.edu Wed Mar 8 10:36:22 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 13:36:22 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs, I managed to get ft_electroderealign to work after some debugging, which involved commenting out parts of the script. Could you take a look at the two images I've attached? The first is my volume conduction model and the sensors before realignment (unaligned.fig) and the second is after realignment (aligned.fig). 
Any idea why the MRI markers which were used as the template (green) are so far apart from the MEG marker coil positions (red)? Also, it seems that before realignment everything looks okay; how is that possible if my volume conduction model is in CTF coordinates and my MEG sensors are not? Thanks, Ainsley

-------------- next part -------------- A non-text attachment was scrubbed... Name: aligned.fig Type: application/octet-stream Size: 451127 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unaligned.fig Type: application/octet-stream Size: 449086 bytes Desc: not available URL: From nick.peatfield at gmail.com Wed Mar 8 17:48:27 2017 From: nick.peatfield at gmail.com (Nicholas A. 
Peatfield) Date: Wed, 8 Mar 2017 08:48:27 -0800 Subject: [FieldTrip] BTI freesurfer surface Message-ID: Hi Fieldtrippers, I want to reconstruct cortical sources using a FreeSurfer surface; rather than an equidistant grid, I will use the points from the surface. To do so I use ft_read_headshape to read the .surf file and use it as the points for the leadfield. However, the MEG data and headmodel are in 'bti' coordinates, thus the grid points are not aligned to the headmodel and sensor points. I read the minimum norm estimate tutorial on the FieldTrip webpage for transforming SPM coordinates to bti, but in my case I am using a surface file in which there are only the points and the triangulation, and the tutorial doesn't apply. How can I convert the surface points to bti? This is HCP data and I thought I would find some help on this somewhere but couldn't. Cheers, Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From scho at med.ovgu.de Wed Mar 8 17:52:46 2017 From: scho at med.ovgu.de (Michael Scholz) Date: Wed, 8 Mar 2017 17:52:46 +0100 (CET) Subject: [FieldTrip] different gradiometer-units from fiff-file Message-ID: Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux system. I was going to use FieldTrip to create some simulation data to test Elekta software. Therefore I read data from a fiff file acquired by the Elekta MEG system, including 102 magnetometer channels and 2x102 gradiometer channels. Reading fiff files with ft_read_data creates output with magnetometer data in Tesla (T) and gradiometer data in T/m, just as in the fiff file. Reading the same fiff file with ft_read_sens creates a structure with header info specifying T/cm units for the gradiometer sensors. 
This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on the ft_read_sens output and the ft_read_data output, the result is unusable, since the scaling of magnetometer data and gradiometer data won't match. How can I prevent ft_read_sens from reading gradiometers in units different from those given in the source fiff file? best, Michael

From jan.schoffelen at donders.ru.nl Wed Mar 8 17:55:22 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 16:55:22 +0000 Subject: [FieldTrip] BTI freesurfer surface In-Reply-To: References: Message-ID: Hi Nick, Sounds like you need a transformation matrix from freesurfer space to MEG headspace, true? Is there a c_ras.mat file in your freesurfer/mri directory? This may provide you with the missing link. Best, JM On 08 Mar 2017, at 17:48, Nicholas A. Peatfield wrote: Hi Fieldtrippers, I want to reconstruct cortical sources using a FreeSurfer surface; rather than an equidistant grid, I will use the points from the surface. 
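The mapping Jan-Mathijs hints at can be outlined in a few lines. Everything below is a hedged, untested sketch: mri_bti and its transformorig field are assumed names, and the exact handling of the c_ras offset should be checked against the minimum-norm estimate tutorial before use:

```matlab
% Untested outline: map a FreeSurfer surface into 'bti' head coordinates.
% Assumes the anatomical was realigned with ft_volumerealign so that
% mri_bti.transform maps voxels -> bti head space, and mri_bti.transformorig
% (an assumed field name) maps voxels -> the scanner space of the original MRI.
sourcespace = ft_read_headshape('lh.white');          % vertices + triangulation
% Note: FreeSurfer surface RAS can differ from scanner RAS by the c_ras
% offset, which is what the c_ras.mat file mentioned above would supply.
T = mri_bti.transform / mri_bti.transformorig;        % scanner space -> bti
sourcespace = ft_transform_geometry(T, sourcespace);  % now matches headmodel/sensors
```

With the surface in 'bti' coordinates, its vertices can be passed as source positions to ft_prepare_leadfield in the usual way.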
Cheers, Nick _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Wed Mar 8 18:04:27 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 8 Mar 2017 12:04:27 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Thanks for sharing! I'm just taking a look now. It looks like you're doing mostly automated rejection. Or are you also doing visual rejection along with the z-value thresholding? Thanks again, Tim On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen wrote: > Here's a rough sketch of my approach, with one custom function attached. > If you or others find it useful, maybe we can think about ways to > incorporate it into the FieldTrip code. I've been working mostly with > scripts, but you've inspired me to work on functionizing the rest of it so > it's more shareable. > > So, assuming raw multichannel data has been loaded into FieldTrip > structure 'data' with unique trial identifiers in data.trialinfo... 
> > for ch = 1:numel(data.label) > %% pull out one channel at a time > cfg = []; > cfg.channel = data.label{ch}; > > datch{ch} = ft_selectdata(cfg, data); > > %% identify large z-value artifacts and/or whatever else you might want > > cfg = []; > cfg.artfctdef.zvalue.channel = 'all'; > cfg.artfctdef.zvalue.cutoff = 15; > cfg.artfctdef.zvalue.trlpadding = 0; > cfg.artfctdef.zvalue.fltpadding = 0; > cfg.artfctdef.zvalue.artpadding = 0.1; > cfg.artfctdef.zvalue.rectify = 'yes'; > > [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); > > %% replace artifacts with NaNs > cfg = []; > cfg.artfctdef.zvalue.artifact = artifact.zvalue; > cfg.artfctdef.reject = 'nan'; > > datch{ch} = ft_rejectartifact(cfg,datch{ch}); > end > > %% re-merge channels > data = ft_appenddata([],datch); > > %% mark uniform NaNs as artifacts when they occur across all channels > % and replace non-uniform NaNs (on some but not all channels) with zeroes, > saving times > [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, see > attached > > %% reject artifacts by breaking into sub-trials > cfg = []; > cfg.artfctdef.nan2zero.artifact = artifact; > cfg.artfctdef.reject = 'partial'; > > data = ft_rejectartifact(cfg,data); > > %% identify real trials > trlinfo = unique(data.trialinfo,'rows','stable'); > > for tr = 1:size(trlinfo,1) > > %% calculate trial spectrogram > > cfg = []; > > cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); > cfg.keeptrials = 'no'; % refers to sub-trials > > cfg.method = 'mtmconvol'; > > cfg.output = 'powandcsd'; > > cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz > cfg.tapsmofrq = cfg.foi/10; % smooth by 10% > cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W > cfg.toi = '50%'; > cfg.pad = 'nextpow2'; > > > freq = ft_freqanalysis(cfg,data); > > %% replace powspctrm & crsspctrum values with NaNs > % where t_ftimwin (or wavlen for wavelets) overlaps with artifact > for ch = 1:numel(freq.label) > badt = 
[times{tr,ch}]; > if ~isempty(badt) && any(... > badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... > badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) > ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); > for t = 1:numel(freq.time) > for f = 1:numel(freq.freq) > mint = freq.time(t) - freq.cfg.t_ftimwin(f); > maxt = freq.time(t) + freq.cfg.t_ftimwin(f); > if any(badt > mint & badt < maxt) > freq.powspctrm(ch,f,t) = NaN; > freq.crsspctrm(ci,f,t) = NaN; > end > end > end > end > end > > %% save corrected output > > save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); > end > > > > On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: > >> Hi Teresa, >> >> Thanks for the reply. I'll take a look at your example if you don't mind >> sharing. Thanks! >> >> Tim >> >> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >> wrote: >> >>> No, not really. The only way I've found to do that is to loop through >>> my artifact rejection process on each trial individually, then merge them >>> back together with NaNs filling in where there are artifacts, but then that >>> breaks every form of analysis I want to do. :-P >>> >>> I wonder if it would work to fill in the artifacts with 0s instead of >>> NaNs....I might play with that. Let me know if you're interested in some >>> example code. >>> >>> ~Teresa >>> >>> >>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>> >>>> Hello All, >>>> >>>> When performing visual artifact rejection, I want to be able to mark >>>> artifacts that occur during some specific trials and only on some specific >>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>> handle marking artifacts restricted to some channel/trial combination? 
>>>> Thanks, >>>> Tim >>> -- >>> Teresa E. Madsen, PhD >>> Research Technical Specialist: *in vivo* electrophysiology & data analysis >>> Division of Behavioral Neuroscience and Psychiatric Disorders >>> Yerkes National Primate Research Center >>> Emory University >>> Rainnie Lab, NSB 5233 >>> 954 Gatewood Rd. NE >>> Atlanta, GA 30329 >>> (770) 296-9119 >>> braingirl at gmail.com >>> https://www.linkedin.com/in/temadsen

From murphyk5 at aston.ac.uk Wed Mar 8 18:49:55 2017 From: murphyk5 at aston.ac.uk (Murphy, Kelly (Research Student)) Date: Wed, 8 Mar 2017 17:49:55 +0000 Subject: [FieldTrip] different gradiometer-units from fiff-file In-Reply-To: References: Message-ID: Hi Michael, You could try reading the sensor description with ft_read_sens as per usual, then convert the units to the desired ones after. 
For example: "grad = ft_read_sens(MEG_data); % get fiducial coordinates under grad.chan sens = ft_convert_units(grad, 'mm');" Kelly ________________________________________ From: fieldtrip-bounces at science.ru.nl [fieldtrip-bounces at science.ru.nl] on behalf of Michael Scholz [scho at med.ovgu.de] Sent: 08 March 2017 16:52 To: fieldtrip at science.ru.nl Subject: [FieldTrip] different gradiometer-units from fiff-file Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux System. I was going to use fieldtrip to create some simulation data to test Elekta software. Therefore I read data from a fiff-file acquired by the Elekta-MEG-system including 102 magnetometer-data and 2x102 gradiometer data. Reading fiff-files with ft_read_data creates output with magnetometer-data in Tesla (T) and gradiometer-data in T/m just as in the fiff-file. Reading the same fiff-file by ft_read_sens creates a structure with header-info including T/cm-unit-info for the gradiometer-sensors. This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on ft_read_sens-output and ft_read_data-output, the result is unusable, since the scaling of magnetometer-data and gradiometer-data won't match. How can I prevent ft_read_sens from reading gradiometers in different units than given in the source fiff-file?
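For concreteness, the size of the mismatch can be sanity-checked in a few lines. This is plain Python rather than FieldTrip code, and the gradient value is invented: the same physical gradient expressed in T/cm is a factor of 100 smaller than in T/m.

```python
# Illustrative only: one physical gradient in two unit conventions.
grad_T_per_m = 2.5e-11                # hypothetical gradiometer value in T/m
grad_T_per_cm = grad_T_per_m / 100.0  # the same gradient expressed in T/cm

# Mixing the two conventions mis-scales gradiometers relative to
# magnetometers by exactly this factor:
scale_error = grad_T_per_m / grad_T_per_cm
print(scale_error)
```

So any pipeline that combines ft_read_data output (T/m) with a sensor definition in T/cm needs one explicit rescaling step, as Kelly's ft_convert_units suggestion provides.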
best, Michael _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From braingirl at gmail.com Wed Mar 8 21:35:12 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 8 Mar 2017 15:35:12 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: I actually do a mix of approaches: a quick look using ft_databrowser with all channels for irregular artifacts like disconnection events, then a channel-by-channel search for large z-value artifacts and clipping artifacts, then I remove all those and do one last ft_databrowser review of all channels together. I'll attach the function I was working on, but it's more complex than you originally asked for and not fully tested yet, so use at your own risk. Do you use ft_databrowser or ft_rejectvisual for visual artifact rejection? ~Teresa On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: > Thanks for sharing! I'm just taking a look now. It looks like you're doing > mostly automated rejection. Or are you also doing visual rejection along > with the z-value thresholding? > > Thanks again, > Tim > > On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen wrote: > >> Here's a rough sketch of my approach, with one custom function attached. >> If you or others find it useful, maybe we can think about ways to >> incorporate it into the FieldTrip code. I've been working mostly with >> scripts, but you've inspired me to work on functionizing the rest of it so >> it's more shareable. >> >> So, assuming raw multichannel data has been loaded into FieldTrip >> structure 'data' with unique trial identifiers in data.trialinfo... 
>> >> for ch = 1:numel(data.label) >> %% pull out one channel at a time >> cfg = []; >> cfg.channel = data.label{ch}; >> >> datch{ch} = ft_selectdata(cfg, data); >> >> %% identify large z-value artifacts and/or whatever else you might want >> >> cfg = []; >> cfg.artfctdef.zvalue.channel = 'all'; >> cfg.artfctdef.zvalue.cutoff = 15; >> cfg.artfctdef.zvalue.trlpadding = 0; >> cfg.artfctdef.zvalue.fltpadding = 0; >> cfg.artfctdef.zvalue.artpadding = 0.1; >> cfg.artfctdef.zvalue.rectify = 'yes'; >> >> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >> >> %% replace artifacts with NaNs >> cfg = []; >> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >> cfg.artfctdef.reject = 'nan'; >> >> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >> end >> >> %% re-merge channels >> data = ft_appenddata([],datch); >> >> %% mark uniform NaNs as artifacts when they occur across all channels >> % and replace non-uniform NaNs (on some but not all channels) with >> zeroes, saving times >> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >> see attached >> >> %% reject artifacts by breaking into sub-trials >> cfg = []; >> cfg.artfctdef.nan2zero.artifact = artifact; >> cfg.artfctdef.reject = 'partial'; >> >> data = ft_rejectartifact(cfg,data); >> >> %% identify real trials >> trlinfo = unique(data.trialinfo,'rows','stable'); >> >> for tr = 1:size(trlinfo,1) >> >> %% calculate trial spectrogram >> >> cfg = []; >> >> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >> cfg.keeptrials = 'no'; % refers to sub-trials >> >> cfg.method = 'mtmconvol'; >> >> cfg.output = 'powandcsd'; >> >> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >> cfg.toi = '50%'; >> cfg.pad = 'nextpow2'; >> >> >> freq = ft_freqanalysis(cfg,data); >> >> %% replace powspctrm & crsspctrum values with NaNs >> % where t_ftimwin (or wavlen for wavelets) 
overlaps with artifact >> for ch = 1:numel(freq.label) >> badt = [times{tr,ch}]; >> if ~isempty(badt) && any(... >> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >> for t = 1:numel(freq.time) >> for f = 1:numel(freq.freq) >> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >> if any(badt > mint & badt < maxt) >> freq.powspctrm(ch,f,t) = NaN; >> freq.crsspctrm(ci,f,t) = NaN; >> end >> end >> end >> end >> end >> >> %% save corrected output >> >> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >> end
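The masking rule in the quoted loop (blank any time-frequency bin whose analysis window overlaps an artifact time) can be sketched compactly. This is plain Python with invented bin centres, window lengths and artifact times, not Teresa's actual code:

```python
# Toy version of the NaN-masking logic: bin (f, t) is flagged when any
# artifact time falls inside the window (tc - w, tc + w) around the bin
# centre tc, where w depends on frequency (as t_ftimwin does).
times = [0.0, 0.5, 1.0, 1.5]   # bin centres in seconds
winlen = [0.4, 0.2]            # per-frequency window length w(f), seconds
bad_t = [0.55]                 # artifact times in seconds

masked = set()
for f, w in enumerate(winlen):
    for t, tc in enumerate(times):
        if any(tc - w < b < tc + w for b in bad_t):
            masked.add((f, t))  # the real script sets these bins to NaN

print(sorted(masked))  # [(0, 1), (1, 1)]
```

The real script takes the per-frequency window from freq.cfg.t_ftimwin and applies the same test to both powspctrm and crsspctrm.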
-- Teresa E.
Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- function [data] = AutoArtReject_TEM(cfg,data) % AutoArtReject_TEM performs automated artifact rejection, processing each % channel independently, removing clipping & large zvalue artifacts based % on automated thresholds (best to run on all data from a given subject % in one data structure, so the same threshold is applied consistently % across conditions), and returning the data structure re-merged across % channels, with NaNs in place of artifacts. % % Input cfg structure should contain: % interactsubj = true or false, whether to select visual artifacts per % subject (i.e., per call to this function) & review all channels after % automated detection % interactch = true or false, whether to preview detected artifacts % & select visual artifacts per channel % artfctdef.clip = struct as defined in ft_artifact_clip, but absdiff is % applied before the data is passed to that function, so it's actually % comparing these thresholds to the 2nd derivative of the original data % artfctdef.zvalue = struct as defined in ft_artifact_zvalue % artfctdef.minaccepttim = scalar as defined in ft_rejectartifact % % To facilitate data-handling and distributed computing you can use % cfg.inputfile = ... % cfg.outputfile = ... % If you specify one of these (or both) the input data will be read from a % *.mat file on disk and/or the output data will be written to a *.mat % file. These *.mat files should contain only a single variable 'data', % corresponding with ft_datatype_raw. % % written 3/2/17 by Teresa E. 
Madsen %% load data, if needed if nargin < 2 && isfield(cfg,'inputfile') load(cfg.inputfile); end %% preview data & mark unusual cross-channel artifacts if cfg.interactsubj cfgtmp = []; cfgtmp = ft_databrowser(cfgtmp,data); visual = cfgtmp.artfctdef.visual.artifact; % not a field of artifact % because this will be reused for all channels, while the rest of % artifact is cleared when starting each new channel else visual = []; end %% perform artifact detection on each channel separately excludech = false(size(data.label)); datch = cell(size(data.label)); for ch = 1:numel(data.label) artifact = []; %% divide data into channels cfgtmp = []; cfgtmp.channel = data.label{ch}; datch{ch} = ft_selectdata(cfgtmp,data); %% identify large zvalue artifacts cfgtmp = []; cfgtmp.artfctdef.zvalue = cfg.artfctdef.zvalue; if ~isfield(cfgtmp.artfctdef.zvalue,'interactive') if cfg.interactch cfgtmp.artfctdef.zvalue.interactive = 'yes'; else cfgtmp.artfctdef.zvalue.interactive = 'no'; end end [~, artifact.zvalue] = ft_artifact_zvalue(cfgtmp,datch{ch}); %% take 1st derivative of signal cfgtmp = []; cfgtmp.absdiff = 'yes'; datd1 = ft_preprocessing(cfgtmp,datch{ch}); %% define clipping artifacts % applies absdiff again, so it's actually working on 2nd derivative data cfgtmp = []; cfgtmp.artfctdef.clip = cfg.artfctdef.clip; [~, artifact.clip] = ft_artifact_clip(cfgtmp,datd1); %% review artifacts if needed cfgtmp = []; cfgtmp.artfctdef.clip.artifact = artifact.clip; cfgtmp.artfctdef.zvalue.artifact = artifact.zvalue; cfgtmp.artfctdef.visual.artifact = visual; if cfg.interactch % any new visual artifacts will be automatically added to cfgtmp cfgtmp = ft_databrowser(cfgtmp,datch{ch}); keyboard % dbcont when satisfied % excludech(ch) = true; % exclude this channel if desired end clearvars datd1 %% replace artifactual data with NaNs cfgtmp.artfctdef.reject = 'nan'; cfgtmp.artfctdef.minaccepttim = cfg.artfctdef.minaccepttim; datch{ch} = ft_rejectartifact(cfgtmp,datch{ch}); % if any trials were
rejected completely, exclude this channel, or it % won't merge properly if numel(datch{ch}.trial) ~= numel(data.trial) excludech(ch) = true; end end % for ch = 1:numel(data.label) %% remerge each channel file into one cleaned data file cfgtmp = []; if isfield(cfg,'outputfile') cfgtmp.outputfile = cfg.outputfile; end data = ft_appenddata(cfgtmp,datch(~excludech)); %% visualize result if cfg.interactsubj cfgtmp = []; cfgtmp = ft_databrowser(cfgtmp,data); %#ok just for debugging keyboard % dbcont when satisfied end end From martabortoletto at yahoo.it Thu Mar 9 11:22:30 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Thu, 9 Mar 2017 10:22:30 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy References: <1430592770.3251961.1489054950545.ref@mail.yahoo.com> Message-ID: <1430592770.3251961.1489054950545@mail.yahoo.com> Dear all, Please find below an announcement for a post-doc position to work on a project of TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by Prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to possibly interested candidates. Cheers, Marta Bortoletto and Anna Fertonani ------------------------------------------------------------- Job description The Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, led by Prof. Carlo Miniussi, is seeking to recruit a post-doctoral research fellow to work on a project of TMS-EEG coregistration. This is part of projects funded by the BIAL foundation and the FISM foundation, in collaboration with the University of Genova, ASST Spedali Civili di Brescia and the Center for Mind/Brain Sciences CIMeC of the University of Trento.
The research focus of these projects is the effects of non-invasive brain stimulation (TMS and tES) on cortical networks of the human brain during motor and perceptual tasks, and their contributions to learning. The post is available from May 2017 and is funded for one year in the first instance, with the possibility of extension for a further 2 years. Key Requirements · We are seeking aspiring individuals with substantial experience in TMS-EEG or EEG research and strong computational abilities. · The applicants should also be interested in studying cortical networks and their disorders. · Successful candidates should have a background and PhD degree in a neuroscience-related field, broadly specified, and skills for working with complex empirical data and human subjects. · Applicants should have experience with conducting experimental research, hands-on knowledge of EEG methods, and documented skills in at least one programming language (preferably Matlab). · Good command of the English language (written and oral), as well as skills for teamwork in a multidisciplinary research group, are required. · Experience with advanced EEG signal processing, EEG source localization, connectivity analyses and a strong publication record are an advantage. What we offer · Gross salary: 25,000-28,000 euro per annum · Excellent working environment · Opportunity to work in a motivated, skilled, inspired and supportive group · A chance to work in Italy – one of the most beautiful countries in the world. To apply, please send the following items, as ONE PDF FILE and via email to Dr. Anna Fertonani (anna.fertonani at cognitiveneuroscience.it) preferably by March 31st 2017. Later applications will be considered until the position is filled. · A letter of intent including a brief description of your past and current research interests · Curriculum vitae including the list of your publications and degrees · Names and contact information of 2 referees.
For further information please contact Anna Fertonani, IRCCS Centro San Giovanni di Dio Fatebenefratelli, anna.fertonani at cognitiveneuroscience.it About the employer The IRCCS San Giovanni di Dio Fatebenefratelli has been operating for 120 years and has been appointed and funded as a national centre of excellence in research and care by the Italian Ministry of Health since 1996. More than 4500 patients with Alzheimer's dementia or associated disorders and about 1700 patients with psychiatric diseases are treated each year. The research division, besides the Cognitive Neuroscience Section, includes the laboratories of Genetics, Neuropsychopharmacology, Neurobiology, Proteomics, Neuroimaging, Ethics and Epidemiology, and employs about fifty professional researchers. The Cognitive Neuroscience Unit is equipped with several state-of-the-art devices necessary for the application of brain stimulation techniques (transcranial magnetic stimulation: TMS, rTMS, and transcranial electrical stimulation: tDCS, tACS and tRNS) and for the recording and analysis of electrophysiological signals (EEG, EMG), as well as neuropsychological testing. The simultaneous co-registration of electroencephalography and TMS application is also available, a field in which we have been national pioneers. Marta Bortoletto, PhD Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli Via Pilastroni 4, 25125 Brescia, Italy Phone number: (+39) 0303501594 E-mail: marta.bortoletto at cognitiveneuroscience.it web: http://www.cognitiveneuroscience.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Thu Mar 9 16:37:02 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 9 Mar 2017 10:37:02 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: So far I've just been using ft_databrowser in the same way you mention to look for artifacts that affect most or all channels.
But I think I will need to also visually check each channel for bad trials. Since I'm working with iEEG, these will be mainly those picking up any epileptic discharges. Of course the channels around the seizure foci I will throw out entirely. I'm a bit daunted by how much work it will be to do a channel x trial visual rejection since I have ~1700 trials and ~100 channels for our one subject so far. In fact just typing those numbers makes me think it may not be feasible. Do you find the automated rejection works satisfactorily for you? -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Thu Mar 9 17:24:04 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Thu, 9 Mar 2017 11:24:04 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: I have spent a lot of time tweaking it for my own purposes (16-32 channels of rat LFP data with lots of motion artifact), so yes, it works reasonably well for me. I greatly prefer to have some sort of objective way of defining artifacts, and only supplement with visual marking when something irregular slips by. It's both faster and makes me more confident that I'm not inadvertently changing my standards across rats/channels/trials. Since epileptic activity tends to have (reasonably?) consistent spatio-temporal patterns, have you considered trying ICA artifact rejection, as demonstrated for EOG and ECG artifacts? That may allow you to retain more "real" neural signal, rather than invalidating whole chunks of time. Then again, maybe it makes more sense to eliminate the whole signal when epileptic activity occurs, since that region of the brain is obviously not functioning normally at that moment. That's a judgement call for you to make in consultation with experienced people in your field.
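The objective criterion Teresa prefers follows the z-value idea used throughout the thread: standardize the signal and flag samples whose z-score exceeds a cutoff. A minimal sketch in plain Python, with an invented signal and cutoff (the real ft_artifact_zvalue additionally supports rectification and the padding options shown in the quoted cfg):

```python
# Toy z-value detector: flag samples whose deviation from the mean
# exceeds `cutoff` standard deviations. Signal and cutoff are invented.
signal = [0.1, -0.2, 0.05, 8.0, 0.0, -0.1, 0.15, -0.05]
n = len(signal)
mean = sum(signal) / n
std = (sum((x - mean) ** 2 for x in signal) / n) ** 0.5

cutoff = 2.0
artifact_idx = [i for i, x in enumerate(signal)
                if abs(x - mean) / std > cutoff]
print(artifact_idx)  # [3]
```

The appeal of this kind of rule is exactly what Teresa describes: the same threshold is applied consistently across channels and trials, rather than depending on the reviewer's eye.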
~Teresa On Thu, Mar 9, 2017 at 10:37 AM, Tim Meehan wrote: > So far I've just been using ft_databrowser in the same way you mention to > look for artifacts that affect most or all channels. But I think I will > need to also visually check each channel for bad trials. Since I'm working > with iEEG, these will be mainly those picking up any epileptic discharges. > Of course the channels around the seizure foci I will throw out entirely. > > I'm a bit daunted by how much work it will be to do a channel x trial > visual rejection since I have ~1700 trials and ~ 100 channels for our one > subject so far. In fact just typing those numbers makes me think it may not > be feasible. Do you find the automated rejection works satisfactorily for > you? > > On Wed, Mar 8, 2017 at 3:35 PM, Teresa Madsen wrote: > >> I actually do a mix of approaches: a quick look using ft_databrowser >> with all channels for irregular artifacts like disconnection events, then a >> channel-by-channel search for large z-value artifacts and clipping >> artifacts, then I remove all those and do one last ft_databrowser review of >> all channels together. I'll attach the function I was working on, but it's >> more complex than you originally asked for and not fully tested yet, so use >> at your own risk. >> >> Do you use ft_databrowser or ft_rejectvisual for visual artifact >> rejection? >> >> ~Teresa >> >> >> On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: >> >>> Thanks for sharing! I'm just taking a look now. It looks like you're >>> doing mostly automated rejection. Or are you also doing visual rejection >>> along with the z-value thresholding? >>> >>> Thanks again, >>> Tim >>> >>> On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen >>> wrote: >>> >>>> Here's a rough sketch of my approach, with one custom function >>>> attached. If you or others find it useful, maybe we can think about ways >>>> to incorporate it into the FieldTrip code. 
I've been working mostly with >>>> scripts, but you've inspired me to work on functionizing the rest of it so >>>> it's more shareable. >>>> >>>> So, assuming raw multichannel data has been loaded into FieldTrip >>>> structure 'data' with unique trial identifiers in data.trialinfo... >>>> >>>> for ch = 1:numel(data.label) >>>> %% pull out one channel at a time >>>> cfg = []; >>>> cfg.channel = data.label{ch}; >>>> >>>> datch{ch} = ft_selectdata(cfg, data); >>>> >>>> %% identify large z-value artifacts and/or whatever else you might >>>> want >>>> >>>> cfg = []; >>>> cfg.artfctdef.zvalue.channel = 'all'; >>>> cfg.artfctdef.zvalue.cutoff = 15; >>>> cfg.artfctdef.zvalue.trlpadding = 0; >>>> cfg.artfctdef.zvalue.fltpadding = 0; >>>> cfg.artfctdef.zvalue.artpadding = 0.1; >>>> cfg.artfctdef.zvalue.rectify = 'yes'; >>>> >>>> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >>>> >>>> %% replace artifacts with NaNs >>>> cfg = []; >>>> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >>>> cfg.artfctdef.reject = 'nan'; >>>> >>>> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >>>> end >>>> >>>> %% re-merge channels >>>> data = ft_appenddata([],datch); >>>> >>>> %% mark uniform NaNs as artifacts when they occur across all channels >>>> % and replace non-uniform NaNs (on some but not all channels) with >>>> zeroes, saving times >>>> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >>>> see attached >>>> >>>> %% reject artifacts by breaking into sub-trials >>>> cfg = []; >>>> cfg.artfctdef.nan2zero.artifact = artifact; >>>> cfg.artfctdef.reject = 'partial'; >>>> >>>> data = ft_rejectartifact(cfg,data); >>>> >>>> %% identify real trials >>>> trlinfo = unique(data.trialinfo,'rows','stable'); >>>> >>>> for tr = 1:size(trlinfo,1) >>>> >>>> %% calculate trial spectrogram >>>> >>>> cfg = []; >>>> >>>> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >>>> cfg.keeptrials = 'no'; % refers to sub-trials >>>> >>>> cfg.method = 'mtmconvol'; >>>> 
>>>> cfg.output = 'powandcsd'; >>>> >>>> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >>>> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >>>> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >>>> cfg.toi = '50%'; >>>> cfg.pad = 'nextpow2'; >>>> >>>> >>>> freq = ft_freqanalysis(cfg,data); >>>> >>>> %% replace powspctrm & crsspctrum values with NaNs >>>> % where t_ftimwin (or wavlen for wavelets) overlaps with artifact >>>> for ch = 1:numel(freq.label) >>>> badt = [times{tr,ch}]; >>>> if ~isempty(badt) && any(... >>>> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >>>> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >>>> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >>>> for t = 1:numel(freq.time) >>>> for f = 1:numel(freq.freq) >>>> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >>>> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >>>> if any(badt > mint & badt < maxt) >>>> freq.powspctrm(ch,f,t) = NaN; >>>> freq.crsspctrm(ci,f,t) = NaN; >>>> end >>>> end >>>> end >>>> end >>>> end >>>> >>>> %% save corrected output >>>> >>>> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >>>> end >>>> >>>> >>>> >>>> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >>>> >>>>> Hi Teresa, >>>>> >>>>> Thanks for the reply. I'll take a look at your example if you don't >>>>> mind sharing. Thanks! >>>>> >>>>> Tim >>>>> >>>>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >>>>> wrote: >>>>> >>>>>> No, not really. The only way I've found to do that is to loop >>>>>> through my artifact rejection process on each trial individually, then >>>>>> merge them back together with NaNs filling in where there are artifacts, >>>>>> but then that breaks every form of analysis I want to do. :-P >>>>>> >>>>>> I wonder if it would work to fill in the artifacts with 0s instead of >>>>>> NaNs....I might play with that. Let me know if you're interested in some >>>>>> example code. 
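On the zeros-versus-NaNs question raised just above, a toy sketch (plain Python rather than MATLAB; the data are made up) of why the choice matters: zero-filling artifact samples drags the trial average toward zero, whereas excluding NaNs preserves the mean of the clean samples.

```python
# Averaging across trials when some samples are masked as artifacts.
NAN = float("nan")

trials = [
    [1.0, 1.0, 1.0],  # clean trial
    [1.0, NAN, 1.0],  # second sample marked as artifact
    [1.0, 1.0, NAN],  # third sample marked as artifact
]

def zero_filled_mean(column):
    """Treat masked (NaN) samples as zeros: biased toward zero."""
    vals = [0.0 if v != v else v for v in column]  # NaN != NaN
    return sum(vals) / len(vals)

def nan_excluded_mean(column):
    """Ignore masked samples entirely (like MATLAB's nanmean)."""
    vals = [v for v in column if v == v]
    return sum(vals) / len(vals) if vals else NAN

columns = list(zip(*trials))
print([zero_filled_mean(c) for c in columns])   # masked bins pulled toward 0
print([nan_excluded_mean(c) for c in columns])  # clean-sample mean preserved
```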
>>>>>> >>>>>> ~Teresa >>>>>> >>>>>> >>>>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan >>>>>> wrote: >>>>>> >>>>>>> Hello All, >>>>>>> >>>>>>> When performing visual artifact rejection, I want to be able to mark >>>>>>> artifacts that occur during some specific trials and only on some specific >>>>>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>>>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>>>>> handle marking artifacts restricted to some channel/trial combination? >>>>>>> >>>>>>> Thanks, >>>>>>> Tim >>>>>>> >>>>>>> _______________________________________________ >>>>>>> fieldtrip mailing list >>>>>>> fieldtrip at donders.ru.nl >>>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Teresa E. Madsen, PhD >>>>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>>>> analysis >>>>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>>>> Yerkes National Primate Research Center >>>>>> Emory University >>>>>> Rainnie Lab, NSB 5233 >>>>>> 954 Gatewood Rd. NE >>>>>> Atlanta, GA 30329 >>>>>> (770) 296-9119 >>>>>> braingirl at gmail.com >>>>>> https://www.linkedin.com/in/temadsen >>>>>> >>>>>> _______________________________________________ >>>>>> fieldtrip mailing list >>>>>> fieldtrip at donders.ru.nl >>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> fieldtrip mailing list >>>>> fieldtrip at donders.ru.nl >>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>> >>>> >>>> >>>> >>>> -- >>>> Teresa E. Madsen, PhD >>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>> analysis >>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>> Yerkes National Primate Research Center >>>> Emory University >>>> Rainnie Lab, NSB 5233 >>>> 954 Gatewood Rd. 
NE >>>> Atlanta, GA 30329 >>>> (770) 296-9119 >>>> braingirl at gmail.com >>>> https://www.linkedin.com/in/temadsen >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>> -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tim.curran at colorado.edu Thu Mar 9 22:39:41 2017 From: tim.curran at colorado.edu (Tim Curran) Date: Thu, 9 Mar 2017 21:39:41 +0000 Subject: [FieldTrip] Postdoc at Northwestern In-Reply-To: References: Message-ID: <4CA846C9-15B4-42D5-9305-880F4182DA16@colorado.edu> POSTDOCTORAL POSITION AVAILABLE IN PRODROME/EARLY PSYCHOSIS: NORTHWESTERN UNIVERSITY’S ADAPT PROGRAM The Northwestern University Adolescent Development and Preventive Treatment (ADAPT) program is seeking applications for a full-time postdoctoral fellow. We are looking for someone with background in cognitive/affective neuroscience or clinical psychology, with interests and experience in electrophysiological assessment and/or neuroimaging. Currently we are running a number of NIMH/Foundation/University funded multimodal protocols (structural/diffusion-tensor/functional imaging, ERP, brain stimulation, eye tracking, instrumental motor assessment) with prodromal syndrome and early psychosis populations focusing on: brain, immune, and endocrine changes in response to aerobic exercise; neurobiology of motor dysfunction; timing of affective processing dysfunction (in new collaboration with Tim Curran). Please see our website for more details: http://www.adaptprogram.com. The ideal candidate will be a person who is interested in applying a cognitive/affective neuroscience background (e.g., Cognition /Cog-Neuro or related Ph.D.) to investigate early psychosis and the psychosis prodrome. Clinical Psychology Ph.D.’s with related interests and training experiences are also highly encouraged to apply. Preference will be given to candidates with a proven track record of good productivity, as well as strong computer programming skills (e.g., MATLAB/Python). We also strongly encourage diversity, and will give preference to applicants from populations underrepresented in STEM. 
The successful applicant will join Vijay Mittal and an active research team and will be responsible for designing/running experiments, analyzing and processing data, and disseminating findings. In addition, the applicant will work on collaborative studies with Vijay Mittal and Robin Nusslock, examining shared and distinct pathophysiology underlying reward-processing abnormalities in psychosis and affective disorders. There will also be ample opportunities to take courses (Northwestern has a number of in-depth advanced training opportunities, covering a range of methodological and quantitative methods), collaborate (benefit from a number of active ADAPT collaborations as well as the vibrant Northwestern research community), help to mentor/train graduate students, and develop and follow independent research questions. Significant attention will be placed on career development (e.g., regular conference attendance/participation, training in grant writing, mentorship, teaching, presentations/job-talks, etc.); this is an ideal position for someone interested in preparing for a tenure-track position. For questions or to submit an application, please contact Vijay Mittal (vijay.mittal at northwestern.edu). Applicants should send a C.V., a brief letter describing interests and prior experience, and two publications (that best reflect contributions of the candidate). Salary is based on the NIMH post-doctoral scale, and funding is available for up to two years (appointments are for one year, but renewable for two years, based on progress/merit). There is a flexible start date (Spring, Summer, or Fall 2017), and review of applications will begin March 1st. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jean-michel.badier at univ-amu.fr Fri Mar 10 16:07:06 2017 From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier) Date: Fri, 10 Mar 2017 16:07:06 +0100 Subject: [FieldTrip] Doc and Post-doc positions in Marseille France Message-ID: <0f5e65c1-2800-8703-2eb3-c23774626895@univ-amu.fr> Dear all, Please find below a call for doc and post-doc positions in Marseille. Note that both offer access to the fMRI and MEG platforms. Best regards *Call for Applications* */3 Post-docs; 3 PhD Grants/* *Three 2-year postdoc positions* *At Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer: *Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain: 1. A full description of the research project (~5 pages): a. Title b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV with complete list of publications 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Duration: 2 years (1 year, extendable for another year) * Monthly salary: ~2000 € net (depending on experience) * Deadline: June 11, 2017 Applications should be sent to: nadera.bureau at blri.fr For supplementary information: Johannes.Ziegler at univ-amu.fr *Three PhD grants* *at Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. Priority is given to interdisciplinary co-directions and to projects that involve two different laboratories of the institute.
The application should contain: 1. A full description of the PhD project (~5 pages): a. Title b. Name of the PhD supervisor(s) c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV and master's degree grades (if available) 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Deadline for submission: June 11, 2017 * Pre-selection of candidates for audition: June 28, 2017 * Auditions: July 3-7, 2017 (international candidates might be interviewed via Skype) * Start: September 1, 2017 * Monthly salary: 1 685 € (1 368 € net) for a period of 3 years Applications should be sent to: nadera.bureau at blri.fr For supplementary information contact: Johannes.Ziegler at univ-amu.fr ------------------------------------------------------------------------ Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21 Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44 5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr 13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/ France | http://www.blri.fr/ ------------------------------------------------------------------------ -- Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/ Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14 Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr /To protect the environment, please print this email only if necessary./ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: logo_amu.jpg Type: image/jpeg Size: 17847 bytes Desc: not available URL: From dmatthes at cbs.mpg.de Fri Mar 10 16:25:52 2017 From: dmatthes at cbs.mpg.de (Daniel Matthes) Date: Fri, 10 Mar 2017 16:25:52 +0100 Subject: [FieldTrip] Estimation of the phase locking value Message-ID: <0bcd8e53-28ba-68fa-ddcd-ab6f3e385f72@cbs.mpg.de> Hi, In our upcoming studies we want to investigate inter-brain couplings at the source level. I therefore have some questions about the calculation of the phase locking value (PLV), mainly about its implementation in FieldTrip. Since I am a beginner both in analysing EEG data and in FieldTrip, I may be asking already-answered questions, so my apologies in advance. Based on the paper of Dumas et al. (08/2010), I initially tried to compute the PLV in FieldTrip by using the Hilbert transform to determine the instantaneous phase (ft_preprocessing) and subsequently ft_connectivityanalysis. I quickly recognized that this is not possible, since only freq data is a valid input for ft_connectivityanalysis in connection with the parameter 'plv'. Thus, I implemented the PLV calculation myself in MATLAB from scratch and tried to get similar results using the ft_connectivityanalysis function. I have solved this issue by now, but along the way several questions came up. The first one is about the result of ft_connectivityanalysis in connection with the parameter 'plv'. The function returns the phase difference of the compared components and not the phase locking value as defined in Lachaux et al. (1999). What is the reason for this implementation? Are there plans for closing this gap? The second question is related to my initial problem. Why is it not possible to use the instantaneous phase as input data for ft_connectivityanalysis in connection with the parameter 'plv'? I think this would make the calculation less complex.
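For reference, the Lachaux et al. (1999) definition mentioned above is the modulus of the trial-averaged phase-difference phasor, PLV = |(1/N) sum_n exp(i(phi1_n - phi2_n))|. A minimal sketch (plain Python with cmath rather than MATLAB; the phase arrays are made-up examples):

```python
import cmath
import math

def plv(phases_a, phases_b):
    """Phase-locking value across trials (Lachaux et al., 1999):
    modulus of the mean phase-difference phasor, in [0, 1]."""
    assert len(phases_a) == len(phases_b)
    mean_phasor = sum(cmath.exp(1j * (pa - pb))
                      for pa, pb in zip(phases_a, phases_b)) / len(phases_a)
    return abs(mean_phasor)

# Constant phase lag across trials -> perfect locking, PLV == 1
locked_a = [0.1, 1.2, 2.3, 3.4]
locked_b = [p - 0.5 for p in locked_a]
print(plv(locked_a, locked_b))      # 1.0 (up to rounding)

# Uniformly scattered phase differences -> PLV near 0
scattered_b = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
print(plv([0.0] * 4, scattered_b))  # ~0.0
```

With single-trial Fourier (freq) data, the same quantity can be obtained by averaging unit-amplitude-normalized cross-spectra across trials.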
Finally, I wonder why the configuration of cfg.channel and cfg.channelcmb has no effect in ft_connectivityanalysis in connection with the parameter 'plv'. It is only possible to focus on two channels if these definitions are made during the preceding ft_freqanalysis. I would be thankful for some advice. All the best, Daniel From andrea.brovelli at univ-amu.fr Fri Mar 10 14:31:37 2017 From: andrea.brovelli at univ-amu.fr (Andrea Brovelli) Date: Fri, 10 Mar 2017 14:31:37 +0100 Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France) In-Reply-To: References: Message-ID: *Call for Applications* */3 Post-docs; 3 PhD Grants/* *Three 2-year postdoc positions* *At Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer: *Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain: 1. A full description of the research project (~5 pages): a. Title b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV with complete list of publications 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Duration: 2 years (1 year, extendable for another year) * Monthly salary: ~2000 € net (depending on experience) * Deadline: June 11, 2017 Applications should be sent to: nadera.bureau at blri.fr For supplementary information: Johannes.Ziegler at univ-amu.fr *Three PhD grants* *at Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. Priority is given to interdisciplinary co-directions and to projects that involve two different laboratories of the institute.
The application should contain: 1. A full description of the PhD project (~5 pages): a. Title b. Name of the PhD supervisor(s) c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV and master's degree grades (if available) 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Deadline for submission: June 11, 2017 * Pre-selection of candidates for audition: June 28, 2017 * Auditions: July 3-7, 2017 (international candidates might be interviewed via Skype) * Start: September 1, 2017 * Monthly salary: 1 685 € (1 368 € net) for a period of 3 years Applications should be sent to: nadera.bureau at blri.fr For supplementary information contact: Johannes.Ziegler at univ-amu.fr ------------------------------------------------------------------------ Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21 Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44 5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr 13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/ France | http://www.blri.fr/ ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-michel.badier at univ-amu.fr Fri Mar 10 17:05:37 2017 From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier) Date: Fri, 10 Mar 2017 17:05:37 +0100 Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France) In-Reply-To: References: Message-ID: <76a895ff-1ab5-47a6-5339-c81754dede50@univ-amu.fr> Hello Andrea, It looks like we had the same idea! I hope you are doing well. See you soon, JM On 10/03/2017 at 14:31, Andrea Brovelli wrote: > *Call for Applications* > */3 Post-docs; 3 PhD Grants/* > [...] -- Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/ Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14 Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr /To protect the environment, please print this email only if necessary./ -------------- next part
-------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_amu.jpg Type: image/jpeg Size: 17847 bytes Desc: not available URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 08:49:27 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 07:49:27 +0000 Subject: [FieldTrip] Fwd: ft_volumerealign with headshape References: Message-ID: <3F59D5AD-E147-4B9A-995F-E8ADBBC72452@donders.ru.nl> Hi Ainsley, I forward your message to the discussion list. Dear list, Please have a look at Ainsley’s question below. Did anyone encounter this issue and has a solution? The error is a low-level MATLAB one, so apparently the input arguments to the ismember function call are not what they should be. Thanks and with best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: Ainsley Temudo > Subject: ft_volumerealign with headshape Date: 13 March 2017 at 07:30:23 GMT+1 To: > Hi Jan-Mathijs, I'm trying to realign an MRI to a headshape. I found a previous discussion mail from someone who had a similar problem a couple of years ago and you replied saying you fixed it locally with a dirty hack. 
https://mailman.science.ru.nl/pipermail/fieldtrip/2015-November/009828.html I am doing it the same way: mri = ft_read_mri('WMCP1011+22+t1mprage.nii'); cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_realigned = ft_volumerealign(cfg, mri); hs = ft_read_headshape('headscan.hsp'); cfg = []; cfg.method = 'headshape'; cfg.coordsys = 'ctf'; cfg.headshape.headshape = hs; mri_realigned2 = ft_volumerealign(cfg, mri_realigned); and I get the following errors: doing interactive realignment with headshape Error using cell/ismember (line 34) Input A of class cell and input B of class cell must be cell arrays of strings, unless one is a string. Error in ft_senstype (line 303) if (mean(ismember(ft_senslabel('ant128'), sens.label)) > 0.8) Error in ft_datatype_sens (line 138) ismeg = ft_senstype(sens, 'meg'); Error in ft_checkconfig (line 250) cfg.elec = ft_datatype_sens(cfg.elec); Error in ft_interactiverealign (line 71) cfg.template = ft_checkconfig(cfg.template, 'renamed', {'vol', 'headmodel'}); Error in ft_volumerealign (line 691) tmpcfg = ft_interactiverealign(tmpcfg); Is it the same issue as before? If this issue was fixed, any idea why I'm getting these errors? I'm using FieldTrip version 20160313. Kind regards, Ainsley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 09:13:50 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 08:13:50 +0000 Subject: [FieldTrip] Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? References: Message-ID: <6B401176-3DCA-4858-AE3B-30DA4F0E331A@donders.ru.nl> Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip related, but could also be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45-degree tilt looks odd.
If this image was produced after reslicing the to-MNI-coregistered-image something went wrong with the realignment. If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don’t know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen > Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" > Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized the I had visualized the registration of the headshape to the scalp surface and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts and found where the MNI registration could be visualized and discovered the 45 degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. My hcp_anatomy_egi.m with our local HCP-pipeline produced T1 2. original hcp_anatomy.m with our local T1 3. original hcp_anatomy.m with downloaded HCM_MEG_pipeline produced T1 All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2 which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M. 
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well, it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the head-surface mesh created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI system has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined). It could be that changing the required coordinate system (coordsys) to ‘neuromag’ (rather than ‘bti’) while specifying the fiducials in ft_volumerealign would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the FieldTrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 494751 bytes Desc: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx URL: From mpcoll at mac.com Mon Mar 13 13:47:25 2017 From: mpcoll at mac.com (MP Coll) Date: Mon, 13 Mar 2017 12:47:25 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3@mac.com> Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's College London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the FieldTrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list but somehow I can't find clear answers to these basic questions.
1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage)? If not, what would be the appropriate number of channels? Can you suggest a reference discussing this issue?
2) When not using a contrast to perform the beamforming, is normalisation of the lead field an adequate procedure to correct for the depth/centre bias? The FieldTrip tutorial suggests it is, but other posts on the mailing list suggest that it is not.
3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus)? Is it correct to use a longer time window (e.g. 2 seconds) that is representative of the duration of the effect?
I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll From n.molinaro at bcbl.eu Tue Mar 14 17:21:43 2017 From: n.molinaro at bcbl.eu (Nicola Molinaro) Date: Tue, 14 Mar 2017 17:21:43 +0100 (CET) Subject: [FieldTrip] RESEARCH FACULTY POSITIONS at the BCBL Message-ID: <1648301278.309469.1489508503037.JavaMail.zimbra@bcbl.eu> Dear Fieldtrip community I am forwarding this message from the BCBL direction Nicola ------------- RESEARCH FACULTY POSITIONS at the BCBL- Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) www.bcbl.eu (Center of excellence Severo Ochoa) The Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) together with IKERBASQUE (Basque Foundation for Science) offer 3 permanent IKERBASQUE Research Professor positions in the following areas: - Language acquisition - Any area of Language processing and/or disorders with advanced experience in MRI - Any area of Language processing and/or disorders with advanced experience in MEG The BCBL Center (recently awarded the label of excellence Severo Ochoa) promotes a vibrant research environment without substantial teaching obligations. It provides access to the most advanced behavioral and neuroimaging techniques, including 3 Tesla MRI, a whole-head MEG system, four ERP labs, a NIRS lab, a baby lab including eyetracker, EEG and NIRS, two eyetracking labs, and several well-equipped behavioral labs. There are excellent technical support staff and research personnel (PhD and postdoctoral students). The senior positions are permanent appointments. We are looking for cognitive neuroscientists or experimental psychologists with a background in psycholinguistics and/or neighboring cognitive neuroscience areas, and physicists and/or engineers with fMRI or MEG expertise. 
Individuals interested in undertaking research in the fields described in http://www.bcbl.eu/research/lines/ should apply through the BCBL web page (www.bcbl.eu/jobs). The successful candidate will be working within the research lines of the BCBL whose main aim is to develop high-risk/high gain projects at the frontiers of Cognitive Neuroscience. We expect high readiness to work with strong engagement and creativity in an interdisciplinary and international environment. Deadline June 30th We encourage immediate applications as the selection process will be ongoing and the appointment may be made before the deadline. Only senior researchers with a strong record of research experience will be considered. Women candidates are especially welcome. To submit your application please follow this link: http://www.bcbl.eu/jobs applying for Ikerbasque Research Professor 2017 and upload: Your curriculum vitae. A cover letter/statement describing your research interests (4000 characters maximum) The names of two referees who would be willing to write letters of recommendation Applicants should be fluent in English. Knowledge of Spanish and/or Basque will be considered useful but is not compulsory. For more information, please contact the Director of BCBL, Manuel Carreiras (m.carreiras at bcbl.eu). From marc.lalancette at sickkids.ca Tue Mar 14 17:46:50 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Tue, 14 Mar 2017 16:46:50 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for LCMV beamformer in general, and normalization in particular, is the book by Sekihara and Nagarajan. 
For a scalar beamformer, yes normalizing the leadfield ("array-gain") will correct depth bias, but I find these absolute values harder to interpret. Dividing instead by projected noise ("unit-noise-gain") also corrects depth bias, and has better spatial resolution. For a vector beamformer, things get a bit more complicated as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) Fieldtrip does not by default use these normalizations, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 -----Original Message----- Date: Mon, 13 Mar 2017 12:47:25 +0000 From: MP Coll To: fieldtrip at science.ru.nl Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3 at mac.com> Content-Type: text/plain; charset=utf-8; format=flowed Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's college London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the Fiedltrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list but somehow I can't find clear answers to these basic questions. 
1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage) ? If not, what would be the appropriate number of channels ? Can you suggest a reference discussing this issue ? 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias ? The Fieldtrip tutorial suggest it is but other posts on the mailing list suggest that it is not. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus) ? Is it correct to use a longer time-window (e.g. 2 seconds) that is representative of the duration of the effect ? I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll ------------------------------ _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip End of fieldtrip Digest, Vol 76, Issue 14 ***************************************** ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. From hamedtaheri at yahoo.com Tue Mar 14 18:26:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Tue, 14 Mar 2017 17:26:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> Message-ID: <687200850.5575467.1489512414151@mail.yahoo.com> Hello allMy name is Hamed a Ph.D. 
candidate from the Sapienza University of Rome. I have EEG data recorded with 64 channels in .eeg format. How can I see my data in FieldTrip?
cfg = [];
cfg.dataset = 'mydata........';
...
Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailtome.2113 at gmail.com Wed Mar 15 07:16:28 2017 From: mailtome.2113 at gmail.com (Arti Abhishek) Date: Wed, 15 Mar 2017 17:16:28 +1100 Subject: [FieldTrip] Epoching between 1 to 30 seconds Message-ID: Dear fieldtrip community, I have EEG recorded in an auditory steady-state paradigm and I want to epoch between 1 and 30 seconds. I don't want any prestimulus time in my epoch, nor the first second of the stimulus (to remove the onset response). I was wondering how I can epoch like this in FieldTrip? Can I epoch without using the cfg.trialdef.prestim and cfg.trialdef.poststim parameters? Thanks, Arti -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni.rbaena at gmail.com Wed Mar 15 08:26:05 2017 From: toni.rbaena at gmail.com (Antonio Rodriguez) Date: Wed, 15 Mar 2017 08:26:05 +0100 Subject: [FieldTrip] Epoching between 1 to 30 seconds In-Reply-To: References: Message-ID: Hello Arti, maybe you can try setting your pre-stim time to a negative value (so you will start after the event), and then set the post-stim time to your final epoch end. Like this:
cfg = [];
cfg.datafile = datafile;
cfg.headerfile = headerfile;
cfg.trialdef.eventtype = 'Stimulus';
cfg.trialdef.eventvalue = 'S 19';
cfg.trialdef.prestim = -1;  % start 1 second AFTER stim
cfg.trialdef.poststim = 30; % end at second 30 AFTER stim
td = ft_definetrial(cfg);   % my epochs are 29 seconds long
Hope this helps.
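If the data have already been segmented from stimulus onset (say 0-30 s), the first second can also be trimmed afterwards. A minimal sketch, assuming a preprocessed raw-data structure called data (the variable name is illustrative):

```matlab
% Hedged sketch: cut already-defined trials down to the 1-30 s window
% with ft_redefinetrial; "data" is an illustrative preprocessed structure.
cfg        = [];
cfg.toilim = [1 30];                      % keep 1 s to 30 s after onset
data_trim  = ft_redefinetrial(cfg, data); % trials are now 29 s long
```

This avoids re-running ft_definetrial when the trial definition already exists.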
2017-03-15 7:16 GMT+01:00 Arti Abhishek : > Dear fieldtrip community, > > I have EEG recorded in an auditory steady state paradigm and I want to > epoch between 1-30 seconds. I don't want in my epoch any prestimulus time > or the first second of the stimulus (to remove the onset response). I was > wondering how I can epoch like this in fieldtrip? Can I epoch without using > cfg.trialdef.prestim and cfg.trialdef.poststim parameters? > > Thanks, > Arti > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 08:52:23 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 07:52:23 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <687200850.5575467.1489512414151@mail.yahoo.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . 
Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From hamedtaheri at yahoo.com Wed Mar 15 09:10:23 2017 From: hamedtaheri at yahoo.com (hamed taheri) Date: Wed, 15 Mar 2017 09:10:23 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: > > Hi Hamed, > > The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. > > Best, > Stan > > -- > Stan van Pelt, PhD > Donders Institute for Brain, Cognition and Behaviour > Radboud University > Montessorilaan 3, B.01.34 > 6525 HR Nijmegen, the Netherlands > tel: +31 24 3616288 > > From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri > Sent: dinsdag 14 maart 2017 18:27 > To: fieldtrip at science.ru.nl > Subject: [FieldTrip] How can i see EEG > > Hello all > My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. > I have an EEG data that recorded in 64 channel with .eeg format. > How can I see my data in Fieldtrip. > > cfg = [] > cfg.dataset = 'mydata........' > . > . > . > . 
> > > > Hamed Taheri Gorji > PhD Candidate > Brain Imaging Laboratory > > DEPARTMENT OF PSYCHOLOGY > FACULTY OF MEDICINE AND PSYCHOLOGY > SAPIENZA > University of Rome > > Santa Lucia Foundation, Via > Ardeatina 306, 00179 Rome > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 09:16:46 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 08:16:46 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E95E@exprd03.hosting.ru.nl> Try http://www.fieldtriptoolbox.org/tutorial/preprocessing_erp Or the excellent walkthrough: http://www.fieldtriptoolbox.org/walkthrough From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of hamed taheri Sent: woensdag 15 maart 2017 9:10 To: FieldTrip discussion list Subject: Re: [FieldTrip] How can i see EEG I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. 
candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 09:27:29 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 08:27:29 +0000 Subject: [FieldTrip] Fwd: How can i see EEG References: <0CE090DC-AF2A-4133-83E8-F52895A64ECC@gmail.com> Message-ID: Hamed, To add to Stan’s excellent suggestions: If your question is about visualization, you could have a look at the plotting tutorial, or familiarize yourself with matlab’s basic plotting functionality, functions such as plot etc. Perhaps you could also check with colleagues in your lab who might know how to do this. Good luck Jan-Mathijs On 15 Mar 2017, at 09:10, hamed taheri > wrote: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. 
Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.hauswald at me.com Wed Mar 15 09:37:49 2017 From: anne.hauswald at me.com (anne Hauswald) Date: Wed, 15 Mar 2017 09:37:49 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Hi Hamed, as Stan pointed out, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. ft_databrowser to view your data. For example:
cfg = [];
cfg.dataset = 'path to your eeg data';
ft_databrowser(cfg)
For more options, see the reference documentation for this function.
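Spelling that out a bit more for a 64-channel recording, a hedged sketch (the file name and display settings are illustrative, not prescriptive):

```matlab
% Hedged sketch: browse a multichannel EEG recording page by page.
% 'mydata.vhdr' is a made-up file name (the BrainVision header file of
% the .eeg/.vhdr/.vmrk triplet).
cfg           = [];
cfg.dataset   = 'mydata.vhdr';
cfg.viewmode  = 'vertical';   % stack all 64 channels in one window
cfg.blocksize = 10;           % seconds of data shown per page
ft_databrowser(cfg);
```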
best anne > Am 15.03.2017 um 09:10 schrieb hamed taheri : > > I saw this tutorial but I couldn't find viewing code > I want to see my 64 channels > > > Sent from my iPhone > > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: > >> Hi Hamed, >> >> The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started . >> >> Best, >> Stan >> >> -- >> Stan van Pelt, PhD >> Donders Institute for Brain, Cognition and Behaviour >> Radboud University >> Montessorilaan 3, B.01.34 >> 6525 HR Nijmegen, the Netherlands >> tel: +31 24 3616288 >> >> From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl ] On Behalf Of Hamed Taheri >> Sent: dinsdag 14 maart 2017 18:27 >> To: fieldtrip at science.ru.nl >> Subject: [FieldTrip] How can i see EEG >> >> Hello all >> My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. >> I have an EEG data that recorded in 64 channel with .eeg format. >> How can I see my data in Fieldtrip. >> >> cfg = [] >> cfg.dataset = 'mydata........' >> . >> . >> . >> . >> >> >> >> Hamed Taheri Gorji >> PhD Candidate >> Brain Imaging Laboratory >> >> DEPARTMENT OF PSYCHOLOGY >> FACULTY OF MEDICINE AND PSYCHOLOGY >> SAPIENZA >> University of Rome >> >> Santa Lucia Foundation, Via >> Ardeatina 306, 00179 Rome >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Herring at donders.ru.nl Wed Mar 15 09:44:56 2017 From: J.Herring at donders.ru.nl (Herring, J.D. 
(Jim)) Date: Wed, 15 Mar 2017 08:44:56 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation In-Reply-To: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> References: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Message-ID: <6F9804CE79B042468FDC7E8C86CF4CBC500CF390@exprd04.hosting.ru.nl> Dear Michel-Pierre, Allow me to add some additional (unfortunately non-referenced) advice. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage) ? If not, what would be the appropriate number of channels ? Can you suggest a reference discussing this issue ? First, make sure your data are referenced to the common-average as the forward model assumes this. Then, the appropriate number of channels depends on the required spatial resolution; If you wish to source localize posterior alpha activity 60 channels is fine. If you wish to parcellate your brain into 100 regions and do whole-brain connectivity, 60 channels is not fine and you might want to consider switching to MEG as well. 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias ? The Fieldtrip tutorial suggest it is but other posts on the mailing list suggest that it is not. You say that you are looking at a change of alpha in response to a visual stimulus? It seems like you do have a contrast. You can compare to the baseline. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus) ? Is it correct to use a longer time-window (e.g. 2 seconds) that is representative of the duration of the effect ? Together with the previous point, you can compare your time window of interest to your baseline. 
Here it is important that you take the same window length from the baseline period as you take during the activation period, to prevent a bias towards the window with more data when calculating the common filter. However, according to http://www.fieldtriptoolbox.org/example/common_filters_in_beamforming it is fine to have an unequal number of trials in each condition, so if your baseline period is only 1 second, you could cut your 'active' period into 1 s segments using ft_redefinetrial so you can still use all of the data. Best, Jim -----Original Message----- From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Marc Lalancette Sent: Tuesday, March 14, 2017 5:47 PM To: fieldtrip at science.ru.nl Cc: mpcoll at mac.com Subject: Re: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for LCMV beamformer in general, and normalization in particular, is the book by Sekihara and Nagarajan. For a scalar beamformer, yes normalizing the leadfield ("array-gain") will correct depth bias, but I find these absolute values harder to interpret. Dividing instead by projected noise ("unit-noise-gain") also corrects depth bias, and has better spatial resolution. For a vector beamformer, things get a bit more complicated as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) Fieldtrip does not by default use these normalizations, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 -----Original Message----- Date: Mon, 13 Mar 2017 12:47:25 +0000 From: MP Coll > To: fieldtrip at science.ru.nl Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3 at mac.com> Content-Type: text/plain; charset=utf-8; format=flowed Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's college London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the Fiedltrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list but somehow I can't find clear answers to these basic questions.
2 seconds) that is representative of the duration of the effect ? I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll ------------------------------ _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip End of fieldtrip Digest, Vol 76, Issue 14 ***************************************** ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:11:39 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 09:11:39 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> tja, een beetje broekafzakkerig. T On 15 Mar 2017, at 09:37, anne Hauswald > wrote: Hi Hamed, as Stan pointed you to, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. 
ft_databrowser to view your data. for example cfg=[]; cfg.dataset='path to your eeg data‘; ft_databrowser(cfg) for more options see the references for this function. best anne Am 15.03.2017 um 09:10 schrieb hamed taheri >: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:16:47 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. 
(Jan Mathijs)) Date: Wed, 15 Mar 2017 09:16:47 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> Message-ID: <01F7D3C2-7726-46A0-93FC-6025F919319E@donders.ru.nl> Hi all, Apologies — I replied to the wrong e-mail, so please ignore it. It’s out of context (and impossible to understand without context). Best wishes, Jan-Mathijs > On 15 Mar 2017, at 10:11, Schoffelen, J.M. (Jan Mathijs) wrote: > > well, a bit cringeworthy. > T From jens.klinzing at uni-tuebingen.de Wed Mar 15 11:31:19 2017 From: jens.klinzing at uni-tuebingen.de ("Jens Klinzing, Uni Tübingen") Date: Wed, 15 Mar 2017 11:31:19 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <58BFCD56.2080508@uni-tuebingen.de> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> <58BFCD56.2080508@uni-tuebingen.de> Message-ID: <58C917F7.5040009@uni-tuebingen.de> I realized the problem also occurs when processing the FieldTrip example brain and filed it as bug 3271. Best, Jens > Jens Klinzing, Uni Tübingen > Wednesday, 8 March 2017 10:22 > Hi Jan-Mathijs, > the size difference is still there with cfg.grid.nonlinear = 'no'. > > > > > Best, > Jens > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > Schoffelen, J.M. (Jan Mathijs) > Wednesday, 8 March 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)?
> > JM > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 111863 bytes Desc: not available URL: From kirsten.petras at uclouvain.be Wed Mar 15 13:04:57 2017 From: kirsten.petras at uclouvain.be (Kirsten Petras) Date: Wed, 15 Mar 2017 12:04:57 +0000 Subject: [FieldTrip] ft_electroderealign Reference to non-existent field error Message-ID: <2194f3f6bdc14ab6be094db21ad3487f@ucl-mbx02.OASIS.UCLOUVAIN.BE> Dear Fieldtrippers, I am a PhD student at UC Louvain and am currently working on source-space analysis of 256-channel EEG data. I am having trouble using ft_electroderealign as follows to project my electrodes onto the surface of the scalp. cfg = []; cfg.method = 'headshape'; cfg.elec = elec_prealigned; cfg.warp = 'rigidbody'; cfg.headshape = mesh; elec_aligned = ft_electroderealign(cfg); This fails with the following error message: Reference to non-existent field 'pos'. Error in ft_warp_error (line 57) el = project_elec(input, target.pos, target.tri); Error in fminunc (line 253) f = feval(funfcn{3},x,varargin{:}); Error in ft_warp_optim (line 129) rf = optimfun(errorfun, ri, options, pos1, pos2, 'rigidbody'); Error in ft_electroderealign (line 361) [norm.chanpos, norm.m] = ft_warp_optim(elec.chanpos, headshape, cfg.warp); Caused by: Failure in initial user-supplied objective function evaluation. FMINUNC cannot continue. The electrode positions come in the format of the EGI template; however, the coordinates have been replaced with the actual coordinates of the individual participant (determined manually from the MRI). The fiducial positions have been removed.
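One workaround that might be worth trying — purely an untested sketch, where the field renaming and the quad-to-triangle split are my guesses at what ft_warp_error expects (it calls project_elec(input, target.pos, target.tri), while the surface derived from the hexahedral mesh seems to carry 'pnt' points and quadrilateral 'poly' faces instead):

```matlab
% Untested sketch: build a headshape with the 'pos' and 'tri' fields that
% ft_warp_error expects. All field names below are assumptions based on the
% structures printed further down in this message.
bnd        = mesh2edge(mesh);            % outward surface of the hexahedral mesh
shape      = [];
shape.pos  = bnd.pnt;                    % legacy 'pnt' -> expected 'pos'
shape.tri  = [bnd.poly(:, [1 2 3]); ...
              bnd.poly(:, [1 3 4])];     % split each quad into two triangles
shape.unit = bnd.unit;

cfg           = [];
cfg.method    = 'headshape';
cfg.elec      = elec_prealigned;
cfg.warp      = 'rigidbody';
cfg.headshape = shape;
elec_aligned  = ft_electroderealign(cfg);
```

I have not verified whether mesh2edge keeps the fields under exactly these names, so treat every field access above as an assumption.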
So the struct looks like this: >> disp (elec_prealigned) chanpos: [256x3 double] elecpos: [256x3 double] homogeneous: [4x4 double] label: {256x1 cell} type: 'egi256' unit: 'cm' cfg: [1x1 struct] Mesh is the following structure: hex: [4794932x8 double] pnt: [4940731x3 double] labels: [4794932x1 double] tissue: [4794932x1 double] tissuelabel: {'air' 'csf' 'gray' 'scalp' 'skull' 'white'} unit: 'mm' cfg: [1x1 struct] At the point where it crashes, 'target' looks like this: disp(target) pnt: [4940731x3 double] poly: [291320x4 double] unit: 'cm' It looks like the headshape created in ft_electroderealign lines 249-259 ( if isstruct(cfg.headshape) && isfield(cfg.headshape, 'hex') cfg.headshape = fixpos(cfg.headshape); headshape = mesh2edge(cfg.headshape); ) is used as the 'target' input for ft_warp_optim(elec.chanpos, headshape, cfg.warp); in line 361. I tried replacing the input by cfg.headshape, but then the .tri field is still missing... Would anyone have a suggestion as to what I am doing wrong here? Thanks a lot! Kirsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.urai at gmail.com Wed Mar 15 13:29:25 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 15 Mar 2017 13:29:25 +0100 Subject: [FieldTrip] compiling ft_volumenormalise In-Reply-To: References: Message-ID: If anyone encounters the same problem, compilation works if I manually add a bunch of spm functions (which are not recognised by mcc, probably because they are in a class definition folder). Specifically, including '-a', '~/Documents/fieldtrip/external/spm8/spm.m', ... '-a', '~/Documents/fieldtrip/external/spm8/templates/T1.nii', ... '-a', '~/Documents/fieldtrip/external/freesurfer/MRIread', ... '-a', '~/code/Tools/spmbug/dim.m', ... '-a', '~/code/Tools/spmbug/dtype.m', ... '-a', '~/code/Tools/spmbug/fname.m', ... '-a', '~/code/Tools/spmbug/offset.m', ... '-a', '~/code/Tools/spmbug/scl_slope.m', ... '-a', '~/code/Tools/spmbug/scl_inter.m', ... 
'-a', '~/code/Tools/spmbug/permission.m', ... '-a', '~/code/Tools/spmbug/niftistruc.m', ... '-a', '~/code/Tools/spmbug/read_hdr.m', ... '-a', '~/code/Tools/spmbug/getdict.m', ... '-a', '~/code/Tools/spmbug/read_extras.m', ... '-a', '~/code/Tools/spmbug/read_hdr_raw.m', ... does the trick. Happy compiling, Anne On 1 March 2017 at 19:38, Anne Urai wrote: > Hi FieldTrippers, > > I compile my code to run on the supercomputer cluster (without many Matlab > licenses), which usually works fine when I do something like: > > addpath('~/Documents/fieldtrip'); > ft_defaults; > addpath('~/Documents/fieldtrip/external/spm8'); > mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ... > '-R', '-nodisplay', '-R', '-singleCompThread', fname); > > However, compiling the ft_volumenormalise function gives me some problems. > Specifically, if source is the result of my beamformer analysis, this code > > cfg = []; > cfg.parameter = 'pow'; > cfg.nonlinear = 'no'; % can warp back to individual > cfg.template = > '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii'; > cfg.write = 'no'; > cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage > will bug > source = ft_volumenormalise(cfg, source); > > works fine when running it within Matlab.
However, when I run the > executable after compiling (which completes without error), a low-level spm > function throws the following error: > > the input is source data with 16777216 brainordinates on a [256 256 256] > grid > Warning: could not reshape "freq" to the expected dimensions > > In ft_datatype_volume (line 136) > In ft_checkdata (line 350) > In ft_volumenormalise (line 98) > In B6b_sourceContrast_volNormalise (line 57) > Converting the coordinate system from ctf to spm > Undefined function 'fname' for input arguments of type 'struct' > Error in file_array (line 32) > Error in spm_create_vol>create_vol (line 77) > Error in spm_create_vol (line 16) > Error in volumewrite_spm (line 71) > Error in ft_write_mri (line 65) > Error in align_ctf2spm (line 168) > Error in ft_convert_coordsys (line 95) > Error in ft_volumenormalise (line 124) > Error in B6b_sourceContrast_volNormalise (line 57) > MATLAB:UndefinedFunction > > I'd be very grateful for hints from anyone who's successfully compiled the > ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer > at compilation does not solve the problem. > Thanks, > > — > Anne E. Urai, MSc > PhD student | Institut für Neurophysiologie und Pathophysiologie > Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | > Hamburg, Germany > www.anneurai.net / @AnneEUrai > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hamedtaheri at yahoo.com Wed Mar 15 14:33:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Wed, 15 Mar 2017 13:33:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <174832299.566947.1489584834648@mail.yahoo.com> Thanks, dear Anne. With ft_databrowser(cfg) I saw my signal, but it's not as good as EEGLAB. Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome On Wednesday, March 15, 2017 9:43 AM, anne Hauswald wrote: Hi Hamed, as Stan pointed out, you will find information on visual data inspection at http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your eeg data'; ft_databrowser(cfg). For more options, see the reference documentation of this function. Best, anne On 15.03.2017 at 09:10, hamed taheri wrote: I saw this tutorial but I couldn't find the viewing code. I want to see my 64 channels. Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: Tuesday, 14 March 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all, My name is Hamed, a Ph.D.
candidate from the Sapienza University of Rome. I have EEG data recorded in 64 channels in .eeg format. How can I see my data in FieldTrip? cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From elam4hcp at gmail.com Wed Mar 15 17:41:03 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Wed, 15 Mar 2017 11:41:03 -0500 Subject: [FieldTrip] HCP Course 2017: Accommodations Reservations now available Message-ID: Spaces are beginning to fill for the 2017 HCP Course: "Exploring the Human Connectome", to be held June 19-23 at the Djavad Mowafagian Centre for Brain Health at the University of British Columbia (UBC) in Vancouver, BC, Canada! Reservations for on-site accommodations for those attending the course are now available. The 5-day intensive HCP course will provide training in acquisition, processing, analysis and visualization of whole-brain imaging and behavioral data using methods and software tools developed by the WU-Minn-Oxford Human Connectome Project (HCP) consortium. The HCP Course is the best place to learn directly from HCP investigators and to explore HCP data and methods.
This year's course will cover where HCP is heading with the advent of the Lifespan HCP development (ages 5-18) and aging (ages 35-90+) projects and will provide hands-on experience in working with the multi-modal human cortical parcellation (Glasser et al. 2016, Nature) and with the “HCP-Style” paradigm for data acquisition, analysis, and sharing (Glasser et al. 2016, Nature Neuroscience). For more info and to register, visit the HCP Course 2017 website. If you have any questions, please contact us at: hcpcourse at humanconnectome.org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From eriksenj at ohsu.edu Wed Mar 15 23:56:21 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Wed, 15 Mar 2017 22:56:21 +0000 Subject: [FieldTrip] why realignment tilted in hcp_anatomy? Message-ID: All HCP_MEG users: In the hope of getting some responses, let me simplify this to the bare minimum. By setting a flag in [hcp_anatomy.m] to allow visualization of the realignment result, I have discovered something that appears wrong. The coronal view in the attached “realignment result” is tilted at a 45-degree angle. My first question is simply: is this what I should see? If so, why is it tilted? [cid:image001.png at 01D29D9F.396AFF00] I have not modified the script except to turn on this visualization. The input file (T1w_acpc_dc_restore.nii) is from one of the publicly available HCP_MEG subjects (177746) that I downloaded. So there can be no “user error” at this point on my account, unless it is using [hcp_anatomy] outside the context of the whole HCP_MEG pipeline. The above plot occurs on line 156 of [hcp_anatomy.m]. Thanks, -Jeff PS.
Just in case I am marking the ac, pc, zx, and r landmark points wrong, here is what I marked: [cid:image002.png at 01D29DA0.14D6DF00] And here is all the console output up to the point of drawing the realignment result: dicomfile = A:\HCP_MEG_subs\HCP-MEG-177746\MEG\anatomy\T1w_acpc_dc_restore.nii executing the anatomy pipeline for subject 177746 not using the high quality structural preprocessing results ------------------------------------------------------------------------- Running the interactive part of the anatomy pipeline Rescaling NIFTI: slope = 1, intercept = 0 Please identify the Anterior Commissure, Posterior Commissure, a point on the positive Z and X axes, and a point on the right part of the head the input is volume data with dimensions [260 311 260] 1. To change the slice viewed in one plane, either: a. click (left mouse) in the image on a different plane. Eg, to view a more superior slice in the horizontal plane, click on a superior position in the coronal plane, or b. use the arrow keys to increase or decrease the slice number by one 2. To mark a fiducial position or anatomical landmark, do BOTH: a. select the position by clicking on it in any slice with the left mouse button b. identify it by pressing the letter corresponding to the fiducial/landmark: press a for ac, p for pc, z for xzpoint press r for an extra control point that should be on the right side You can mark the fiducials multiple times, until you are satisfied with the positions. 3. To change the display: a. press c on keyboard to toggle crosshair visibility b. press f on keyboard to toggle fiducial visibility c. press + or - on (numeric) keyboard to change the color range's upper limit 4. 
To finalize markers and quit interactive mode, press q on keyboard ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected ac ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected pc ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm 
================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected xzpoint ================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected right ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 
================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 615 cfg.fiducial = opt.fiducial; Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 120) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 134) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 148) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) K>> From: Schoffelen, J.M. (Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Monday, March 13, 2017 1:14 AM To: FieldTrip discussion list; K Jeffrey Eriksen Subject: Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip related, but also could be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45 degrees tilt looks odd. If this image was produced after reslicing the to-MNI-coregistered-image something went wrong with the realignment. 
If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don’t know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen > Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" > Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized that I had visualized the registration of the headshape to the scalp surface, and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts, found where the MNI registration could be visualized, and discovered that the 45-degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. my hcp_anatomy_egi.m with our local HCP-pipeline-produced T1, 2. the original hcp_anatomy.m with our local T1, 3. the original hcp_anatomy.m with the downloaded HCP_MEG_pipeline-produced T1. All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2, which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M.
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version, substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well, it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the headsurface mesh, created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion, and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined). It could be that changing the required coordinate system (coordsys) to ‘neuromag’ while specifying the fiducials in ft_volumerealign (rather than ‘bti’) would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 that I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the fieldtrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 92989 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 122781 bytes Desc: image002.png URL: From eriksenj at ohsu.edu Thu Mar 16 01:59:01 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Thu, 16 Mar 2017 00:59:01 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? Message-ID: I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many points are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From da434 at cam.ac.uk Thu Mar 16 17:16:35 2017 From: da434 at cam.ac.uk (D.Akarca) Date: Thu, 16 Mar 2017 16:16:35 +0000 Subject: [FieldTrip] Neighbouring issue with ft_timelockstatistics Message-ID: Dear all, My name is Danyal Akarca; I’m a Master’s student at Cambridge University, working at the MRC Cognition and Brain Sciences Unit. I’m currently doing some MEG data analysis, using ft_timelockstatistics and ft_clusterplot to determine clustering of neuromag magnetometers for task-related data. My neighbour structure is defined as follows: cfg = []; cfg.method = 'distance'; cfg.neighbourdist = 0.13; cfg.template = 'neuromag306mag_neighb'; cfg.layout = 'NM306mag.lay'; cfg.channel = 'all'; neighbours = ft_prepare_neighbours(cfg, MagGM_Control_Deviant); % The input data here is one of the grand means computed with ft_timelockgrandaverage This provides me with an average of 5.5 neighbours per channel, and upon inspection with ft_neighbourplot, it looks very reasonable.
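For reference, the average neighbour count can be computed with a small sketch like this (assuming 'neighbours' is the output of ft_prepare_neighbours above, with its standard neighblabel field):

```matlab
% Count the neighbours per channel and summarise, to check that the chosen
% distance threshold yields a sensible neighbourhood size.
nnb = cellfun(@numel, {neighbours.neighblabel});
fprintf('on average %.1f neighbours per channel (range %d to %d)\n', ...
        mean(nnb), min(nnb), max(nnb));
```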
I then went on to compute statistics using ft_timelockstatistics as follows: cfg = []; cfg.channel = 'all'; cfg.neighbours = neighbours; cfg.latency = [0.1 0.54]; cfg.method = 'montecarlo'; cfg.numrandomization = 1000; cfg.correctm = 'cluster'; cfg.correcttail = 'prob'; cfg.ivar = 1; cfg.uvar = 2; cfg.statistic = 'ft_statfun_depsamplesT'; Nsub = 14; cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; stat = ft_timelockstatistics(cfg, cw{:}, cw1{:}); % cw and cw1 are cells containing my files When I run this, I obtain 52 positive clusters and 100 negative clusters, of which 6 negative clusters are significant. However, I have realised that this assumes each channel can form an independent cluster. These 6 'clusters' are very close to each other when plotted using ft_clusterplot, so I thought that this should actually be one big cluster rather than six independent clusters very close to each other. I therefore added cfg.minnbchan = 2; However, when I do this, no clusters are generated at all. This occurs no matter how large I make cfg.neighbourdist (even when I make it so that each magnetometer is a neighbour of every other magnetometer, I still get no clusters forming). I was wondering if anyone had any thoughts, or could help me with this? I am still new to FieldTrip, so any help would be very much appreciated. I hope that I've included all the relevant information above. All the best, Danyal Akarca MPhil Neuroscience, Cambridge University MRC Cognition and Brain Sciences Unit From SXM1085 at student.bham.ac.uk Thu Mar 16 17:30:55 2017 From: SXM1085 at student.bham.ac.uk (Sebastian Michelmann) Date: Thu, 16 Mar 2017 16:30:55 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use?
In-Reply-To: References: Message-ID: <2D9C9145AF1E4D4799ADDB2C0F996AE8019EF96FF9@EX13.adf.bham.ac.uk> Hi Jeff, we are currently taking >500 points. Best, Sebastian From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of K Jeffrey Eriksen Sent: 16 March 2017 00:59 To: hcp-users at humanconnectome.org; fieldtrip at science.ru.nl Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many points are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From seymourr at aston.ac.uk Thu Mar 16 18:58:36 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Thu, 16 Mar 2017 17:58:36 +0000 Subject: [FieldTrip] Granger Causality & ft_timelockstatistics Message-ID: Hi all, I'm currently using ft_timelockstatistics to compute the group-level statistical difference between two Granger causality spectra (I'm substituting freq for time data). My question is whether my current cfg settings for ft_timelockstatistics (see code below) will cluster my data over time. I assume that by selecting cfg.avgovertime = 'no', FT_STATISTICS_MONTECARLO will cluster over time rather than space, but I just wanted to double check...
Many thanks, Robert Seymour (Aston Brain Centre) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% cfg = []; cfg.avgovertime = 'no'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'ft_statfun_depsamplesT'; cfg.alpha = 0.05; cfg.clusteralpha = 0.05; cfg.correctm = 'cluster'; cfg.numrandomization = 1000; Nsub = numel(grandavgA); cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; cfg.ivar = 1; % the 1st row in cfg.design contains the independent variable cfg.uvar = 2; % the 2nd row in cfg.design contains the subject number stat = ft_timelockstatistics(cfg,grandavgB{:},grandavgA{:}); figure; plot(stat.stat); xlabel('Freq (Hz)'); ylabel('t-value'); figure; plot(stat.prob);xlabel('Freq (Hz)'); ylabel('p-value'); %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -------------- next part -------------- An HTML attachment was scrubbed... URL: From Umla-Runge at cardiff.ac.uk Thu Mar 16 19:15:39 2017 From: Umla-Runge at cardiff.ac.uk (Katja Umla-Runge) Date: Thu, 16 Mar 2017 18:15:39 +0000 Subject: [FieldTrip] PhD studentship at Cardiff University Message-ID: Applications are invited for a PhD studentship on functional and structural properties of spatial processing networks in the brain at Cardiff University starting from July 2017. 
Please see here for more details on the project and do contact me if you would like to know more: https://www.findaphd.com/search/ProjectDetails.aspx?PJID=82152 http://psych.cf.ac.uk/degreeprogrammes/postgraduate/research/ Regards, Katja Katja Umla-Runge Lecturer CUBRIC, School of Psychology (College of Biomedical and Life Sciences) Cardiff University Maindy Road Cardiff, CF24 4HQ Tel: +44 (0)29 2087 0715 Email: Umla-Runge at cardiff.ac.uk Katja Umla-Runge Darlithydd CUBRIC, Yr Ysgol Seicoleg (Coleg y Gwyddorau Biofeddygol a Bywyd) Prifysgol Caerdydd Maindy Road Caerdydd, CF24 4HQ Ffôn : +44 (0)29 2087 0715 E-bost: Umla-Runge at caerdydd.ac.uk Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmatthes at cbs.mpg.de Fri Mar 17 13:35:10 2017 From: dmatthes at cbs.mpg.de (Daniel Matthes) Date: Fri, 17 Mar 2017 13:35:10 +0100 Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m Message-ID: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> Hi fieldtrip developers, I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' markers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is the absence of the trialinfo variable. In detail, if no 'Stimulus' marker is defined, the numstim variable becomes 0 (line 99), yet the query 'if all(numstim==numstim(1))' in line 100 still evaluates to true. I would recommend changing line 100 to: if ((numstim > 0) && (all(numstim==numstim(1)))) This way the else branch will be executed if numstim is 0. Furthermore, the mentioned function also crashes if the stimulus markers in the marker file either have no value or a malformed value. These cases should be caught with a more obvious error message. All the best, Daniel From jan.schoffelen at donders.ru.nl Fri Mar 17 13:45:16 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M.
(Jan Mathijs))
Date: Fri, 17 Mar 2017 12:45:16 +0000
Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m
In-Reply-To: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de>
References: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de>
Message-ID: <68BDEFCD-EA25-459F-8465-B3D850838B67@donders.ru.nl>

Thanks for your input, Daniel. May I suggest you follow this up through GitHub? http://www.fieldtriptoolbox.org/development/git The best thing for you to do would be to make a pull request with the suggested changes.

Thanks, and keep up the good work,
Jan_Mathijs

> On 17 Mar 2017, at 13:35, Daniel Matthes wrote:
>
> Hi fieldtrip developers,
>
> I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' makers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is absence of the trialinfo variable.
>
> In detail, if no 'Stimulus' is defined the numstim variable gets 0 (line 99), otherwise the query 'if all(numstim==numstim(1))' in line 100 results in true.
>
> I would recommend to change line 100 to:
>
> if ((numstim > 0 ) && (all(numstim==numstim(1))))
>
> Hereby the else branch will be executed, if numstim = 0.
>
> Furthermore, the mentioned function also crashes, if the stimulus markers in the marker file either have no value or a value with wrong letters. This cases should be captured with a more obvious error message.
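The length check in the suggested fix matters because `all` applied to an empty array is vacuously true, in MATLAB as in Python, so an empty list of marker counts slips through the original test. A minimal sketch of the fixed guard, written here in Python for illustration (the function name is hypothetical, not part of FieldTrip):

```python
def stimuli_consistent(numstim):
    """Return True only if numstim is non-empty and all entries are equal.

    all(...) over an empty sequence is vacuously True (in Python and in
    MATLAB alike), so the length check must come first.
    """
    return len(numstim) > 0 and all(n == numstim[0] for n in numstim)
```

With this guard, an empty marker list falls through to the else branch instead of being treated as "all counts equal".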
> > All the best, > > Daniel > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From r.oostenveld at donders.ru.nl Tue Mar 21 12:01:50 2017 From: r.oostenveld at donders.ru.nl (Robert Oostenveld) Date: Tue, 21 Mar 2017 12:01:50 +0100 Subject: [FieldTrip] Donders training courses: "Tool-kits" of Cognitive Neuroscience Message-ID: <05323DBF-FF87-4F0E-AE08-1E59B82EBEA3@donders.ru.nl> > Begin forwarded message: > > From: "Stijns, M.H. (Tildie)" > Subject: Announcing Donders Tool-kits 2017 > Date: 15 March 2017 at 14:42:02 GMT+1 > > > Are you interested in learning neuroimaging techniques directly from the experts? > Do you like courses that take a practical hands-on approach to training? > To help you become proficient in modern neuroimaging methods, the Donders Institute offers “Tool-kits” of Cognitive Neuroscience, held annually at Radboud University, Nijmegen, the Netherlands. > Donders Tool-kits in 2017 : > Advanced MEG/EEG : (3-7 April 2017) - NOTE: registration is closed > Advanced (f)MRI: (15-18 May 2017) > Brain Stimulation : (30 May-2 June 2017) > Neuroimaging : (28 August-1 September 2017) > > Bests, Tildie -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Verhoef at donders.ru.nl Tue Mar 21 12:51:46 2017 From: J.Verhoef at donders.ru.nl (Verhoef, J.P. 
(Julia)) Date: Tue, 21 Mar 2017 11:51:46 +0000 Subject: [FieldTrip] Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' Message-ID: <11E9E0B371DBAE4EB859A9CC30606A04023E199F@exprd04.hosting.ru.nl> Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' (1.0 FTE) Dutch Research Consortium 'Language in Interaction' Maximum salary: € 4,691 gross/month Vacancy number: 30.01.17 Application deadline: 17 April 2017 [Logo NWO] [Logo] Responsibilities The Language in Interaction research consortium invites applications for a senior postdoctoral position. You will contribute to the integration of empirical research in our consortium. You will act in close collaboration with Peter Hagoort, programme director of the consortium. This position provides the opportunity for conducting world-class research as a member of an interdisciplinary team. Moreover, it will provide the opportunity to contribute to developing a theoretical framework for our understanding of the human language faculty. Work environment The Netherlands has an outstanding track record in the language sciences. The Language in Interaction research consortium, which is sponsored by a large grant from the Netherlands Organization for Scientific research (NWO), brings together many of the excellent research groups in the Netherlands in a research programme on the foundations of language. In addition to excellence in the domain of language and related relevant fields of cognition, our consortium provides state-of-the-art research facilities and a research team with ample experience in the complex research methods that will be invoked to address the scientific questions at the highest level of methodological sophistication. These include methods from genetics, neuroimaging, computational modelling, and patient-related research. This consortium realises both quality and critical mass for studying human language at a scale not easily found anywhere else. 
We have identified five Big Questions (BQ) that are central to our understanding of the human language faculty. These questions are interrelated at multiple levels. Teams of researchers will collaborate to collectively address these key questions of our field. Our five Big Questions are: BQ1: The nature of the mental lexicon: How to bridge neurobiology and psycholinguistic theory by computational modelling? BQ2: What are the characteristics and consequences of internal brain organization for language? BQ3: Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people? BQ4: Variability in language processing and in language learning: Why does the ability to learn language change with age? How can we characterise and map individual language skills in relation to the population distribution? BQ5: How are other cognitive systems shaped by the presence of a language system in humans? You will be appointed at the Donders Institute, Centre for Cognitive Neuroimaging (Radboud University, Nijmegen). The research is conducted in an international setting at all participating institutions. English is the lingua franca. What we expect from you We are looking for a highly motivated, creative and talented candidate to enrich a unique consortium of researchers that aims to unravel the neurocognitive mechanisms of language at multiple levels. The goal is to understand both the universality and the variability of the human language faculty from genes to behaviour. The selection criteria include: · a PhD in an area related to the neurobiology of language and/or language sciences; · expertise/interest in theoretical neuroscience and language; · an integrative mindset; · a theory-driven approach; · good communication skills; · excellent proficiency in written and spoken English. 
What we have to offer · employment: 1.0 FTE; · a maximum gross monthly salary of € 4,691 based on a 38-hour working week (salary scale 11); · in addition to the salary: an 8% holiday allowance and an 8.3% end-of-year bonus; · you will be appointed for an initial period of 18 months, after which your performance will be evaluated. If the evaluation is positive, the contract will be extended by 30 months; · the Collective Labour Agreement (CAO) of Dutch Universities is applicable; · you will be classified as Researcher, level 3 in the Dutch university job-ranking system (UFO); · the Dutch universities and institutes involved have a number of regulations that enable employees to create a good work-life balance. Are you interested in our excellent employment conditions? Other Information The institute involved is an equal opportunity employer, committed to building a culturally diverse intellectual community, and as such encourages applications from women and minorities. Would you like to know more? Further information on: Language in Interaction Further information on: Donders Institute for Brain, Cognition and Behaviour For more information about this vacancy, please contact: Prof. dr. Peter Hagoort, programme director Language in Interaction and director of DCCN Telephone: +31 24 3610648, +31 24 3521301 E-mail: p.hagoort at donders.ru.nl Are you interested? You should upload your application (attn. of Prof. dr. P. Hagoort) exclusively using the button 'Apply' below. Your application should include (and be limited to) the following attachment(s): · a cover letter · your curriculum vitae, including a list of publications and the names of at least two people who can provide references Please apply before 17 April 2017, 23:59 CET. [Apply] No commercial propositions please. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From mailtome.2113 at gmail.com Thu Mar 23 07:09:07 2017
From: mailtome.2113 at gmail.com (Arti Abhishek)
Date: Thu, 23 Mar 2017 17:09:07 +1100
Subject: [FieldTrip] Channel order after interpolation
Message-ID:

Dear list,

I am working with 128-channel EEG data recorded from infants and young children. As they had a few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can restore the interpolated channels to their original order? I want to run some scripts outside FieldTrip on the data, and the channel order is important. Any help would be greatly appreciated.

Thanks,
Arti

cfg = [];
cfg.layout = 'GSN-Hydrocel-129.sfp';
lay = ft_prepare_layout(cfg);

cfg = [];
cfg_neighb.layout = lay;
cfg_neighb.method = 'triangulation';
cfg.feedback = 'yes';
EEG_neighbours = ft_prepare_neighbours(cfg_neighb);

load('NJ_24_ica_artrej.mat')
badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label);

cfg = [];
cfg.layout = lay;
cfg.neighbours = EEG_neighbours;
cfg.badchannel = badchannels;
cfg.method = 'spline';
cfg.senstype = 'EEG';
NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej);
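A generic way to rebuild the original order is to look up the position of each label of the target layout in the current label list and use the result as a row index into the data matrix. A sketch in Python with made-up channel labels (the same permutation can be built in MATLAB, e.g. with a loop over the target labels):

```python
def reorder_index(current_labels, target_labels):
    # Position of each desired label within the current (shuffled) label list;
    # applying this index to the data rows restores the target order.
    return [current_labels.index(lab) for lab in target_labels]

current = ["Cz", "Pz", "Fz"]   # e.g. interpolated channels appended at the end
target = ["Fz", "Cz", "Pz"]    # original layout order
order = reorder_index(current, target)
# data = data[order, :] would then hold the rows in the target order
```

This assumes every target label is present in the current data; labels that were dropped entirely would need to be handled separately.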
From julian.keil at gmail.com Thu Mar 23 09:44:38 2017
From: julian.keil at gmail.com (Julian Keil)
Date: Thu, 23 Mar 2017 09:44:38 +0100
Subject: [FieldTrip] Channel order after interpolation
In-Reply-To: References: Message-ID: <96928AB6-212E-4AD8-B30E-184B252A7465@gmail.com>

Dear Arti,

if you know exactly where your channels are, and where they ought to be, you can simply build a vector with the index of the current channel at the position where it should be, and assign this vector as a new matrix index. So for example, if you have channels A, B and C, but they should be ordered B-C-A, you can use something like this:

neworder = [2 3 1]; % element 2 should now be at the beginning, then the third element, and then the first
data.avg = data.avg(neworder,:); % assign neworder to the 2D matrix of - for example - trial-averaged data

Hope this helps,

Julian

Am 23.03.2017 um 07:09 schrieb Arti Abhishek:
> Dear list,
>
> I am working with 128 channel EEG data recorded from infants and young children. As they had few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can add the interpolated channels in the original order? I want to run some scripts outside fieldtrip on the data and the channel order is important. Any help would be greatly appreciated.
> > Thanks, > Arti > > cfg =[]; > cfg.layout = 'GSN-Hydrocel-129.sfp'; > lay = ft_prepare_layout(cfg); > cfg = []; > cfg_neighb.layout = lay; > cfg_neighb.method = 'triangulation'; > cfg.feedback = 'yes'; > EEG_neighbours = ft_prepare_neighbours(cfg_neighb); > > load('NJ_24_ica_artrej.mat') > badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label); > > > cfg = []; > cfg.layout = lay; > cfg.neighbours = EEG_neighbours; > cfg.badchannel = badchannels; > cfg.method ='spline'; > cfg.senstype = 'EEG'; > NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej); > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From elinor.tzvi at neuro.uni-luebeck.de Thu Mar 23 11:37:38 2017 From: elinor.tzvi at neuro.uni-luebeck.de (Elinor Tzvi-Minker) Date: Thu, 23 Mar 2017 10:37:38 +0000 Subject: [FieldTrip] =?utf-8?q?OPEN_PHD_POSITION_University_of_L=C3=BCbeck?= =?utf-8?q?=2C_GERMANY?= Message-ID: <00444adb22804d07814214e640867971@hermes.neuro.uni-luebeck.de> The Cognitive Neuroscience Group at the Neurology department of the University of Lübeck offers a PhD position (65% E13 TV-L) starting immediately. The candidate will be working on a project that develops and implements neuromodulation techniques (tDCS) in combination with fMRI and then translates these methods to social neuroscience paradigms. ​ We offer The department of Neurology is part of the Center for Brain, Behavior and Metabolism (CBBM), which offers an excellent and state-of the-art research environment. The research group “Cognitive Neuroscience” (headed by Prof. 
Ulrike Krämer) is working on different topics related to cognitive and affective control (anger and aggression, response inhibition, regulation of eating behavior) and motor control. Our researchers use diverse and complex methods to analyze brain-behavior relationships. At the CBBM, a 3T Skyra MRI research scanner, several EEG labs, fNIRS, TMS and tDCS are available. Thus, we offer an excellent environment for interdisciplinary research.

We require

The successful candidate will hold an MSc/MA/Dipl. in Psychology or related fields (cognitive science, neuroscience or other). Experience in the acquisition and analysis of human neuroimaging data (fMRI, EEG, MEG or NIRS) and programming skills in Matlab (or equivalent) are preferred. Interest and/or experience in the field of cognitive neuroscience is obligatory. We are looking for a motivated, analytic and problem-solving-oriented candidate who enjoys interdisciplinary challenges. The candidate will work in the “Cognitive Neuroscience Group” under the co-supervision of Dr. Elinor Tzvi-Minker and Prof. Ulrike M. Krämer. Applicants with disabilities are preferred if equally qualified. The University of Lübeck is an equal opportunity employer, aiming to increase the proportion of women in science. Applications by women are particularly welcome. For questions about the details of the position, please contact Dr. Elinor Tzvi-Minker (elinor.tzvi at neuro.uni-luebeck.de). Please send your application (letter of motivation, CV, contact information of two references, relevant certificates) as one single complete PDF file to the email address mentioned above. Applications will be considered until the position has been filled. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From m.chait at ucl.ac.uk Thu Mar 23 23:20:03 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Thu, 23 Mar 2017 22:20:03 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward; deadline next week) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London Key Requirements The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk). 
Maria Chait PhD
m.chait at ucl.ac.uk
Reader in Auditory Cognitive Neuroscience
Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/
UCL Ear Institute
332 Gray's Inn Road
London WC1X 8EE

From alexandrina.guran at uni-luebeck.de Mon Mar 27 11:34:19 2017
From: alexandrina.guran at uni-luebeck.de (Alexandrina Guran)
Date: Mon, 27 Mar 2017 09:34:19 +0000
Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection
Message-ID: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de>

Dear FieldTrip Community,

My name is Alexandrina Guran, I am a PhD student at the University of Lübeck and I have recently started working with FieldTrip in order to preprocess (and later analyse) EEG data. I have encountered an odd problem that neither I nor the people I asked in the lab could solve, even with the help function and Google:

After running the epoching (trial length 5 s), filtering (high-pass, low-pass and notch, for a time-frequency analysis) and downsampling (to 250 Hz), I wanted to do an automatic artifact rejection, in order to have exploratory information on how many of my trials would be affected by artifacts, and whether there were participants who blinked on a majority of trials, so as to determine whether I should shorten my trial length and/or conduct an ICA.

I used the ft_artifact_threshold function, in Matlab R2016b, with different FieldTrip versions (March 2017 as well as end of 2016 and end of 2015). However, the automatic artifact detection did not work – that is, it would stop rejecting artifacts after a number x of trials (usually between 90 and 140), depending on the participant. I would get an error message, but then the artifact rejection would go on, telling me all trials were ok (even if I set 1 microvolt as a threshold).
The error message I got is the following:

“(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold
threshold artifact scanning: trial 129 from 320 is ok
threshold artifact scanning: trial 130 from 320 is ok
threshold artifact scanning: trial 131 from 320 is ok
Warning: data contains NaNs, no filtering or preprocessing applied
> In ft_warning (line 184)
  In preproc (line 283)
  In ft_artifact_threshold (line 164)
  In preprocessing (line 266)
threshold artifact scanning: trial 132 from 320 is ok
threshold artifact scanning: trial 133 from 320 is ok
threshold artifact scanning: trial 134 from 320 is ok
threshold artifact scanning: trial 135 from 320 is ok
threshold artifact scanning: trial 136 from 320 is ok
threshold artifact scanning: trial 137 from 320 is ok
threshold artifact scanning: trial 138 from 320 is ok
threshold artifact scanning: trial 139 from 320 is ok
threshold artifact scanning: trial 140 from 320 is ok
threshold artifact scanning: trial 141 from 320 is ok
threshold artifact scanning: trial 142 from 320 is ok (…)”

This was however only the case if I ran the artifact detection on down-sampled data. It worked fine with just filtered data.

However, I checked the preprocessed (downsampled) data for NaNs (using the isnan MATLAB function) and there were none to be found (I also checked visually in one dataset).

Has anyone encountered this problem and found a solution?

Of course, I considered just doing the downsampling after the automatic and visual artifact rejection, but I would like to be sure that the downsampling will work correctly at any point of the preprocessing, and right now I am a little flummoxed at “what is happening” with the data in that function.

Down below you can find code excerpts for both the artifact rejection and the downsampling. Both were looped over participants but the error appears regardless of that.
Downsampling:

cfg = [];
cfg.dataset = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; % tfdata_filtfilt_ is the epoched and filtered data
cfg.resamplefs = 250;
cfg.detrend = 'no';
cfg.inputfile = ['tfdata_filtfilt_' num2str(subj(s)) '.mat'];
cfg.outputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat'];
datatfrs = ft_resampledata(cfg)

Artifact rejection:

cfg = [];
config = load(['tfcfg_' num2str(subj(s)) '.mat']);
cfg.trl = config.cfg.trl;
cfg.continuous = 'no';
cfg.artfctdef.threshold.channel = [1:28 33:63]; % exclude eye channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2'
cfg.artfctdef.threshold.max = 75;
cfg.artfctdef.threshold.min = -75;
cfg.artfctdef.threshold.bpfilter = 'no';
cfg.inputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat'];
cfg.outputfile = ['tfdata_artif_' num2str(subj(s)) '.mat'];
cfg = ft_artifact_threshold(cfg);
save(cfg.outputfile, 'cfg')

Since I am new to FieldTrip, I can imagine it to be a “simple/stupid” error having to do with the cfg.

Thank you for reading this and trying to help ☺

Best regards
Alexandrina

--
C.-N. Alexandrina Guran, M.Sc.
PhD student
Institute of Psychology I
University of Lübeck
Maria-Goeppert-Straße 9a
23562 Lübeck
Germany

Building MFC 8, 1st Floor, Room 1
Phone: +49 451 3101 3635
Fax: +49 451 3101 3604

From chuanjigao at gmail.com Mon Mar 27 14:30:44 2017
From: chuanjigao at gmail.com (Jack Gao)
Date: Mon, 27 Mar 2017 08:30:44 -0400
Subject: [FieldTrip] Post-hoc tests for cluster-based permutation tests on event-related fields
Message-ID:

Dear Community,

My name is Chuanji Gao, I'm a PhD student in the Experimental Psychology Program at the University of South Carolina. I'm analyzing EEG data to obtain event-related field results. There are three conditions (conditions 1, 2 and 3) that I want to compare, so I first used ft_timelockstatistics to run a cluster-based permutation test. The cfg is as below.
%---------
cfg = [];
...
cfg.neighbours = neighbours;
...
cfg.latency = [0.1 0.8];
cfg.avgovertime = 'no';
cfg.parameter = 'avg';
cfg.method = 'montecarlo';
cfg.statistic = 'depsamplesFmultivariate';
...
Nsub = 19;
cfg.design(1,1:3*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub) 3*ones(1,Nsub)];
cfg.design(2,1:3*Nsub) = [1:Nsub 1:Nsub 1:Nsub];
cfg.ivar = 1;
cfg.uvar = 2;
stat = ft_timelockstatistics(cfg, cond1{:}, cond2{:}, cond3{:});
%---------

The null hypothesis was rejected, and it seems the effect was most pronounced from 224 ms to 800 ms over centro-parietal regions.

The next step: I want to do pairwise comparisons of the three conditions. I'm not sure whether I should use the time window identified in the first analysis, as below:

...
cfg.latency = [0.224 0.8];
cfg.avgovertime = 'yes';
cfg.parameter = 'avg';
cfg.method = 'montecarlo';
cfg.statistic = 'ft_statfun_depsamplesT';
...
stat = ft_timelockstatistics(cfg, cond1{:}, cond2{:});

OR whether I should use the whole time window, as in the first analysis:

...
cfg.latency = [0.1 0.8];
cfg.avgovertime = 'no';
cfg.parameter = 'avg';
cfg.method = 'montecarlo';
cfg.statistic = 'ft_statfun_depsamplesT';
...
stat = ft_timelockstatistics(cfg, cond1{:}, cond2{:});

I'm inclined to use the whole time window without averaging over time, but I'm not entirely sure. Can someone give me some suggestions? Any help would be very appreciated.

Best,
Chuanji

Chuanji Gao
PhD student
Department of Psychology
University of South Carolina
E-Mail chuanji at email.sc.edu

-------------- next part -------------- An HTML attachment was scrubbed...
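For readers puzzling over the cfg.design matrices in these snippets: the two rows simply encode condition membership (cfg.ivar) and subject number (cfg.uvar) for each input average, and the pairwise follow-up uses the same layout with two conditions instead of three. A NumPy sketch of that layout (the helper name is illustrative, not part of FieldTrip):

```python
import numpy as np

def within_subject_design(n_sub, n_cond):
    # Row 1: condition membership (ivar); row 2: subject number (uvar).
    # This mirrors the cfg.design matrix built by hand in the MATLAB snippets.
    cond = np.repeat(np.arange(1, n_cond + 1), n_sub)
    subj = np.tile(np.arange(1, n_sub + 1), n_cond)
    return np.vstack([cond, subj])

design = within_subject_design(19, 2)  # 19 subjects, two conditions to compare
```

The column count must equal the number of subject averages passed to ft_timelockstatistics, in the same order.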
URL: From braingirl at gmail.com Mon Mar 27 16:19:03 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Mon, 27 Mar 2017 10:19:03 -0400 Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection In-Reply-To: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> References: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> Message-ID: I don't see anything obviously wrong with your cfg, but I don't know what is loaded into the config variable - is it possible config.cfg.trl is requesting samples that are not present in the input file? If it's based on the data before downsampling, the sample numbers could be off by a factor of 250. If that's not it, here are some more general troubleshooting tips: First, I would set Matlab to dbstop if warning so it pauses execution at that warning message. You'll need to dbup at least once to get out of ft_warning, and then you'll have access to the workspace of the preproc function. Examine the dat variable for NaNs and see if you can track back to figure out where they were added. Since dat is an input to that function, you might start by typing dbup twice to get to the workspace of ft_artifact_threshold and verify whether any NaNs are present in dat there. If neither of those help you figure out the problem, it should at least give you more info to provide in a bug report to http://bugzilla.fieldtriptoolbox.org/ Hope that helps, Teresa On Mon, Mar 27, 2017 at 5:34 AM, Alexandrina Guran < alexandrina.guran at uni-luebeck.de> wrote: > Dear FieldTrip Community, > > > > My name is Alexandrina Guran, I am a PhD Student at the University of > Lübeck and I have recently started working with FieldTrip in order to > preprocess (and later analyse) EEG data. 
I have encountered an odd problem, that > I nor people I asked in the lab could solve, also using the help function > and google: > > > > After running the epoching (trial length 5s), filtering (high-pass, > low-pass and notch, for a time-frequency analysis) and downsampling (to 250 > Hz), I wanted to do an automatic artifact rejection, in order to have > exploratory information of how many of my trials would be affected by > artifacts and if there were participants that blinked on a majority of > trials in order to determine whether I should shorten my trial length > and/or conduct an ICA. > > > > I used the ft_artifact_threshold function, in Matlab R2016b, with > different FieldTrip versions (march 2017 as well as end 2016 and end 2015). > > However, the automatic artifact detection did not work – that is, it would > stop rejecting artifacts after a number x of trials (usually between 90 and > 140 trials), depending on participant. I would get an error message but > then the artifact rejection would go on, telling me all trials were ok > (even if I set 1 microvolt as a threshold). 
> > The error message I got is the following: > > > > “(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold > > threshold artifact scanning: trial 129 from 320 is ok > > threshold artifact scanning: trial 130 from 320 is ok > > threshold artifact scanning: trial 131 from 320 is ok > > Warning: data contains NaNs, no filtering or preprocessing applied > > > In ft_warning (line 184) > > In preproc (line 283) > > In ft_artifact_threshold (line 164) > > In preprocessing (line 266) > > threshold artifact scanning: trial 132 from 320 is ok > > threshold artifact scanning: trial 133 from 320 is ok > > threshold artifact scanning: trial 134 from 320 is ok > > threshold artifact scanning: trial 135 from 320 is ok > > threshold artifact scanning: trial 136 from 320 is ok > > threshold artifact scanning: trial 137 from 320 is ok > > threshold artifact scanning: trial 138 from 320 is ok > > threshold artifact scanning: trial 139 from 320 is ok > > threshold artifact scanning: trial 140 from 320 is ok > > threshold artifact scanning: trial 141 from 320 is ok > > threshold artifact scanning: trial 142 from 320 is ok (…)” > > > > This was however only the case if I ran the artifact detection on > down-sampled data. It worked fine with just filtered data. > > > > However, I checked the preprocessed (downsampled) data for NaNs (using the > isnan-MATLAB function) and there were none to be found (I also checked > visually in one dataset). > > > > Has anyone encountered this problem and found a solution? > > > > Of course, I considered just doing the downsampling after the automatic > and visual artifact rejection, but I would like to be sure that the > downsampling will work correctly at any point of the preprocessing and > right now I am a little flummoxed at “what is happening” with the data in > that function. > > > > Down below you can find code excerpts for both the artifact rejection and > the downsampling. 
Both were looped over participants but the error appears > regardless of that. > > Downsampling: > > > > cfg = []; > > cfg.dataset = ['tfdata_filtfilt_' > num2str(subj(s)) '.mat']; %tfdata_filtfilt_ is the epoched and filtered > data > > cfg.resamplefs = 250; > > cfg.detrend = 'no'; > > cfg.inputfile = ['tfdata_filtfilt_' > num2str(subj(s)) '.mat']; > > cfg.outputfile = ['tfdata_filt_rs_' > num2str(subj(s)) '.mat']; > > datatfrs = ft_resampledata(cfg) > > > > Artifact rejection > > cfg = []; > > config = load(['tfcfg_' > num2str(subj(s)) '.mat']); > > cfg.trl = config.cfg.trl; > > cfg.continuous = 'no' ; > > cfg.artfctdef.threshold.channel = [1:28 33:63]; %exclude eye > channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2' > > cfg.artfctdef.threshold.max = 75; > > cfg.artfctdef.threshold.min = -75; > > cfg.artfctdef.threshold.bpfilter = 'no'; > > cfg.inputfile = ['tfdata_filt_rs_' > num2str(subj(s)) '.mat']; > > cfg.outputfile = ['tfdata_artif_' > num2str(subj(s)) '.mat']; > > cfg = ft_artifact_threshold(cfg); > > save (cfg.outputfile, 'cfg') > > > > Since I am new to FieldTrip, I can imagine it to be a “simple/stupid” > error having to do with the cfg. > > Thank you for reading this and trying to help J > > > > Best regards > > Alexandrina > > > > > > -- > > C.-N. Alexandrina Guran, M.Sc. > > PhD student > > Institute of Psychology I > > University of Lübeck > > Maria-Goeppert-Straße 9a > 23562 Lübeck > > Germany > > > > Building MFC 8, 1st Floor, Room 1 > > Phone: +49 451 3101 3635 <+49%20451%2031013635> > > Fax: +49 451 3101 3604 <+49%20451%2031013604> > > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. 
Madsen, PhD
Research Technical Specialist: in vivo electrophysiology & data analysis
Division of Behavioral Neuroscience and Psychiatric Disorders
Yerkes National Primate Research Center
Emory University
Rainnie Lab, NSB 5233
954 Gatewood Rd. NE
Atlanta, GA 30329
(770) 296-9119
braingirl at gmail.com
https://www.linkedin.com/in/temadsen

From efrain.torres at marquette.edu Wed Mar 29 07:54:14 2017
From: efrain.torres at marquette.edu (Torres, Efrain)
Date: Wed, 29 Mar 2017 05:54:14 +0000
Subject: [FieldTrip] Activity not changing with time, SAM Beamforming
Message-ID:

When I plot my results using ft_sourceplot, my results do not seem to change despite the changes in latency that I indicate through the configuration. Below is my code for a SAM beamformer on EEG data. I am unsure what I am doing wrong. Note that preprocessing was previously done in EEGLab and the data imported into FieldTrip.

cfg.trialdef.eventtype='trigger';
cfg.trialdef.prestim=.2;
cfg.trialdef.poststim=.8;
cfg.trialdef.ntrials=50; %%This was changed to 1 from 64
cfg.dataset=rawEEG;
cfg=ft_definetrial(cfg)
cfg.continous='yes'
cfg.trialfun='ft_trialfun_general'
cfg.method='trial' %changed from channel to trial
PU74954_PL5=ft_preprocessing(cfg)

%% timelock analysis
cfg=[];
cfg.covariance='yes';
cfg.covariancewindow='poststim';
cfg.vartrllength=2;
timelock=ft_timelockanalysis(cfg,PU74954_PL5);
plot(timelock.time, timelock.avg);

%% headmodel
Subject01='/home/etorres/Desktop/HAL_Fieldtrip/Anatomy/PU7493_1/RAW/anat+orig.BRIK';
mri=ft_read_mri(Subject01);
cfg=[];
cfg.output='brain';
seg=ft_volumesegment(cfg, mri);
cfg = [];
cfg.method = 'singlesphere';
headmodel = ft_prepare_headmodel(cfg, seg);

%% Preparing the subject specific grid
%hdr=ft_read_header(PU74954_PL5);
cfg=[];
cfg.elec=PU74954_PL5.hdr.elec;
cfg.headmodel=headmodel;
cfg.grid.resolution=1;
cfg.grid.unit='cm';
%cfg.inwardshift=-1.5;
grid=ft_prepare_sourcemodel(cfg);

%% Creating the leadfield
cfg=[];
cfg.elec=PU74954_PL5.hdr.elec;
cfg.reducerank='3';
cfg.headmodel=headmodel;
cfg.grid=grid;
cfg.normalize='yes';
lf=ft_prepare_leadfield(cfg);

%% Source Analysis
cfg=[];
cfg.method='sam';
cfg.grid=lf;
cfg.headmodel=headmodel;
%cfg.keepfilter='yes';
cfg.lcmv.fixedori='yes';
source_avg=ft_sourceanalysis(cfg,timelock);

%% Plotting Results
mri = ft_read_mri(Subject01);
mri = ft_volumereslice([], mri);
cfg=[];
cfg.parameter='avg.pow';
[interp]=ft_sourceinterpolate(cfg,source_avg,mri);
cfg=[];
cfg.method='slice';
cfg.funcolorlim=[0 10];
cfg.nslices=25;
cfg.latency=-.1;
cfg.funcolormap='jet';
cfg.funparameter='avg.pow';
ft_sourceplot(cfg, interp);

Efrain Torres

From gunnar.norrman at biling.su.se Wed Mar 29 09:10:29 2017
From: gunnar.norrman at biling.su.se (Gunnar Norrman)
Date: Wed, 29 Mar 2017 07:10:29 +0000
Subject: [FieldTrip] PhD position at Centre for Research on Bilingualism, Stockholm University
Message-ID: <1490771429408.37501@biling.su.se>

The Centre for Research on Bilingualism at Stockholm University is announcing a fully funded 4-year PhD position in bilingualism, starting fall 2017. The Centre is an interdisciplinary unit with a focus on psycholinguistic and sociolinguistic aspects of bilingualism, including bilingual cognition and second language acquisition. We offer a vibrant interdisciplinary research environment, as well as a fully equipped EEG/ERP and eye-tracking lab, and we strongly encourage students with a background in any of these methodologies to apply. Read more about the position here: http://www.su.se/english/about/vacancies/vacancies-new-list?rmpage=job&rmjob=2862&rmlang=UK Applications are submitted through the university recruitment system, and the last date for applications is April 18, 2017.
--- Gunnar Norrman Centre for Research on Bilingualism, Stockholm University +46 (0)8 16 3643 | gunnar.norrman at biling.su.se -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Thu Mar 30 12:19:46 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Thu, 30 Mar 2017 10:19:46 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' Message-ID: Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in Virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** ----------------------------------------------------- Op deze e-mail zijn de volgende voorwaarden van toepassing : The following disclaimer applies to the e-mail message : http://www.nhtv.nl/disclaimer ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elam4hcp at gmail.com Thu Mar 30 21:28:58 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Thu, 30 Mar 2017 14:28:58 -0500 Subject: [FieldTrip] HCP Course 2017: Faculty and Course Schedule Available -- Register Now! Message-ID: Faculty listings and the full schedule of covered topics are now available for the 2017 HCP Course: "Exploring the Human Connectome", to be held June 19-23 at the Djavad Mowafagian Centre for Brain Health at University of British Columbia (UBC) in Vancouver, BC, Canada! The 5-day intensive course is a great opportunity to learn directly from HCP investigators and gain practical experience with the Human Connectome Project's approach to multimodal whole brain imaging acquisition, processing, analysis, visualization, and sharing of data and results. For more info and to register visit the HCP Course 2017 website. Don't delay, registration is limited, and the course is filling up fast! Discounted on-site UBC accommodations are available through May 17, 2017 to attendees reserving through the HCP Course room block. If you have any questions, please contact us at: hcpcourse at humanconnectome.org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff Jennifer Elam, Ph.D. Scientific Outreach, Human Connectome Project Washington University School of Medicine Department of Neuroscience, Box 8108 660 South Euclid Avenue St. Louis, MO 63110 314-362-9387 elam at wustl.edu www.humanconnectome.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Fri Mar 31 09:31:23 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Fri, 31 Mar 2017 07:31:23 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' In-Reply-To: References: Message-ID: Dear list, I posted a PhD vacancy yesterday, but I included a link that for some reason does not seem to work.
Below is the full vacancy text as can be found on the website of Tilburg University. Apologies for the multiple posting. Best, Marcel PhD student on Decoding emotions from the brain (1,0 fte) PhD student on Decoding emotions from the brain, Departments of Cognitive Neuropsychology and Methodology and Statistics (1,0 fte) Project description Central aim of the PhD research is to decode / classify discrete categories of emotions, based on recordings of neural activity (EEG) and other physiological measures (HR, GSR, facial EMG). Emotion induction will be realized using Tilburg University’s advanced Virtual and Augmented Reality facilities. Emotion classification will be performed using state-of-the-art machine learning and data science techniques in order to optimize the sensitivity to identify and classify (differences in) emotional states. The PhD project will be supervised by promotors prof.dr. J. Vroomen, and co-promotors dr. Katrijn van Deun and dr. Marcel C.M. Bastiaansen. A more detailed project description is available upon request from dr. Marcel Bastiaansen. Tasks * Designing and conducting research; * Presenting findings on scientific conferences; * Reporting findings in international journals, resulting in a dissertation; * Participating in the graduate school; * Participating in the teaching program of the departments. Qualifications * Master’s degree (preferably Research Master) in cognitive neuroscience or a closely related discipline; * hands-on experience with EEG data analysis (preferably Fieldtrip); * Fluency in spoken English and excellent writing skills in English; * Programming skills (Matlab, R), and a keen interest in using advanced data analysis techniques are an important asset; * Experience with VR would be helpful; * Willingness and proven ability to work independently. Terms of Employment Tilburg University is among the top Dutch employers and has an excellent policy concerning terms of employment. 
The collective employment terms and conditions for Dutch universities will apply. The appointment is intended to lead to the completion of a PhD thesis. The PhD appointment at Tilburg University begins with a period of 12 months. Continuation of the appointment with another 36 months will be based on performance evaluation. The gross salary for the PhD position amounts to € 2.191 per month in the first year, rising to € 2.801 per month in the fourth year, based on a full-time appointment (38 hours per week). Applications and Information Additional information about the vacancy can be obtained from Dr. Marcel Bastiaansen, M.C.M.Bastiaansen at tilburguniversity.edu, tel.: +31 13 466 2408. Applicants should send their CV and a covering letter to Hans-Georg van Liempd MSc, Managing Director, Tilburg School of Social and Behavioral Sciences, only by the link mentioned below. The closing date for applications is April 9th 2017. Tilburg School of Social and Behavioral Sciences Tilburg School of Social and Behavioral Sciences (TSB) is a modern, specialized university. The teaching and research of the Tilburg School of Social and Behavioral Sciences are organized around the themes of Health, Organization, and Relations between State, Citizen, and Society. The School's inspiring working environment challenges its workers to realize their ambitions; involvement and cooperation are essential to achieve this. Tilburg School of Social and Behavioral Sciences Department of Cognitive Neuropsychology The Department of Cognitive Neuropsychology of Tilburg University consists of a vibrant mix of people interested in cognitive and clinical neuropsychology. Our department is an intellectually exciting and productive group, advancing fundamental understanding in cognitive neuroscience and clinical neuropsychology. Our research is highly recognized both nationally and internationally.
Our fundamental research is focused on the integration of information from different modalities (hearing, seeing, touch) for perceiving speech, emotions, and crossmodal synchrony in the healthy population and in patient groups with autism, schizophrenia, or developmental dyslexia. We use behavioral measures and a variety of psychophysical methods like eye tracking, EEG, and fMRI. We have access to the DAF Technology Lab for creating Virtual Reality. Department Methodology and Statistics The Department of Methodology and Statistics is an internationally renowned group, with several experts in data science methods, latent variable methods, psychometrics, meta-research, survey methodology, and other applied statistics fields. The department has a strong tradition of working with the other (substantive) research programs in our School. The department is part of the School of Social and Behavioral Sciences at Tilburg University and responsible for the teaching and the research in the area of methodology and statistics for the social and behavioral sciences, the Data Science programs (including the novel joint bachelor in Data Science together with the Eindhoven University of Technology), and the Liberal Arts and Science program of Tilburg University. The department is a member of the Interuniversity Graduate School for Psychometrics and Sociometrics (IOPS). Recruitment code Tilburg University applies the recruitment code of the Dutch Association for Personnel Management & Organization Development (NVP). Disclaimer The text of this vacancy advertisement is copyright-protected property of Tilburg University. Use, distribution and further disclosure of the advertisement without express permission from Tilburg University is not allowed, and this applies explicitly to use by recruitment and selection agencies which do not act directly on the instructions of Tilburg University. Responses resulting from recruitment by non-contractors of Tilburg University will not be handled.
*** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** From: Bastiaansen, Marcel Sent: donderdag 30 maart 2017 12:20 To: fieldtrip at science.ru.nl Cc: J.Vroomen at uvt.nl; k.vandeun at tilburguniversity.edu Subject: PhD position Tilburg University on 'decoding emotions from the brain' Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in Virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. 
Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting address: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** -------------- next part -------------- An HTML attachment was scrubbed... URL: From narendra.kumar at iitrpr.ac.in Fri Mar 31 13:34:25 2017 From: narendra.kumar at iitrpr.ac.in (narendra karna) Date: Fri, 31 Mar 2017 17:04:25 +0530 Subject: [FieldTrip] Regarding Analysis of EGI's EEG Data using fieldtrip Message-ID: Hi, I am pursuing a PhD in Linguistics. I don't know much about MATLAB. I have recently done an EEG/ERP experiment using EGI's 128-channel EEG system. I came to know that fieldtrip supports the analysis of EGI's EEG data. So, if possible, could anyone send me a script for analysing EGI's EEG data, including ICA? Thanks. Narendra Research Scholar Department of Humanities and Social Sciences Indian Institute of Technology Ropar Punjab, India - 140001 -------------- next part -------------- An HTML attachment was scrubbed...
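[Editor's note: there is no single canned script for EGI data, but a minimal, untested sketch of such a pipeline could look like the following. The filename, layout file, and rejected-component indices are placeholders, not recommendations — components must be inspected before rejection.]

```matlab
% Minimal FieldTrip pipeline for EGI data with ICA -- a sketch only.
% 'myrecording.mff' is a placeholder; FieldTrip reads EGI .mff/.raw files.
cfg = [];
cfg.dataset = 'myrecording.mff';
data = ft_preprocessing(cfg);

% decompose the data into independent components
cfg = [];
cfg.method = 'runica';
comp = ft_componentanalysis(cfg, data);

% inspect component topographies before deciding what to reject
cfg = [];
cfg.component = 1:20;
cfg.layout = 'GSN-HydroCel-128.sfp';   % assumed EGI sensor layout file
ft_topoplotIC(cfg, comp);

% remove the artifactual components (indices here are placeholders)
cfg = [];
cfg.component = [1 2];
data_clean = ft_rejectcomponent(cfg, comp, data);
```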
URL: From max-philipp.stenner at med.ovgu.de Fri Mar 31 15:13:37 2017 From: max-philipp.stenner at med.ovgu.de (Stenner, Max-Philipp) Date: Fri, 31 Mar 2017 13:13:37 +0000 Subject: [FieldTrip] PhD on human motor learning at the Leibniz Institut for Neurobiology, Magdeburg/Germany Message-ID: Dear fieldtrip community, a 3-year PhD position is available for a research project on the role of neural oscillations for motor learning in humans with Dr Max-Philipp Stenner and Prof Jens-Max Hopf at the Leibniz Institute for Neurobiology in Magdeburg, Germany (http://www.lin-magdeburg.de/en/departments/behavioral_neurology/physiology_motorlearning/index.jsp). Please find all details in the attached pdf. Best wishes Max-Philipp Stenner -------------- next part -------------- A non-text attachment was scrubbed... Name: PhD ad.pdf Type: application/pdf Size: 150597 bytes Desc: PhD ad.pdf URL: From dlozanosoldevilla at gmail.com Fri Mar 31 16:40:36 2017 From: dlozanosoldevilla at gmail.com (Diego Lozano-Soldevilla) Date: Fri, 31 Mar 2017 16:40:36 +0200 Subject: [FieldTrip] how to make the cfg.selectfeature work in ft_databrowser? Message-ID: Hi all, I'm using ft_databrowser to inspect sleep data and I want to visually mark different events (spindles, k-complexes, artifacts, and so forth) and assign them to different cfg.artfctdef.xxx.artifact substructures. Could somebody help me to mark different artifact trial types using the cfg.selectfeature option? Please find below the code and data to reproduce the error I got. I'm using the very latest fieldtrip version on windows with matlab 7.9b.
Thanks beforehand,
Diego

data = [];
data.label = {'Fpz';'F7';'F3';'Fz';'F4';'F8';'C3';'Cz';'C4';'P3';'Pz';'P4';'O1';'Oz';'O2'};
data.fsample = 250;
data.trial{1} = rand(size(data.label,1), data.fsample*30);
data.time{1} = (1:data.fsample*30)./data.fsample;

cfg = [];
cfg.length = 2;
cfg.overlap = 0;
trl = ft_redefinetrial(cfg, data);

cfg = [];
cfg.channel = 'all';
cfg.blocksize = 2;
cfg.selectfeature = {'a';'b'};
cfg.viewmode = 'vertical';
events = ft_databrowser(cfg, trl);

the input is raw data with 15 channels and 15 trials
detected 0 a artifacts
detected 0 b artifacts
??? Error using ==> plus
Matrix dimensions must agree.
Error in ==> ft_databrowser at 745
hsel = [1 2 3] + (opt.ftsel-1) .*3;
??? Reference to non-existent field 'trlvis'.
Error in ==> ft_databrowser>redraw_cb at 1639
begsample = opt.trlvis(opt.trlop, 1);
Error in ==> ft_databrowser>winresize_cb at 2250
redraw_cb(h,eventdata);
??? Error while evaluating figure ResizeFcn

-------------- next part -------------- An HTML attachment was scrubbed...
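[Editor's note: until the cell-array form of cfg.selectfeature works, one possible workaround — an untested sketch only, with hypothetical feature names — is to mark one feature type per call and carry the accumulated artifact definitions over between calls.]

```matlab
% Sketch: mark one artifact type at a time instead of passing a cell array.
cfg = [];
cfg.channel = 'all';
cfg.blocksize = 2;
cfg.viewmode = 'vertical';
cfg.selectfeature = 'spindle';     % a single feature name, not a cell array
out1 = ft_databrowser(cfg, trl);   % returns a cfg carrying out1.artfctdef

% now mark the second feature type, keeping the segments already marked
cfg.selectfeature = 'kcomplex';
cfg.artfctdef = out1.artfctdef;
out2 = ft_databrowser(cfg, trl);   % out2.artfctdef should hold both types
```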
URL: From sebastian.jobke at nexgo.de Fri Mar 31 18:07:43 2017 From: sebastian.jobke at nexgo.de (Sebastian Jobke) Date: Fri, 31 Mar 2017 18:07:43 +0200 Subject: [FieldTrip] How to compute permutation test on ITC data Message-ID: <00f501d2aa38$ee73c260$cb5b4720$@nexgo.de> Hello Fieldtrip-Community, I am writing you to ask for some help. At the moment I am analysing EEG data gained during a passive oddball paradigm. For the preprocessing I used EEGLAB, transformed the data to the FieldTrip structure and computed time-frequency analysis and ITC, for which you provided great tutorials. Now I am a little stuck, because I was wondering how to compute permutation tests on ITC data. I have several subjects and want to compare two conditions (standards and deviants). I saw that there is a function (FT_STATFUN_DIFF_ITC) for this, but I unfortunately don't know how to use it. More specifically, I was wondering how to average over subjects and whether I have to run the permutation test on every frequency band again (this I did for the time-frequency analysis, as described in your tutorial). Further, I was wondering how to use ft_freqstatistics with ITC data, as you described in the tutorial. For any advice, I would be more than grateful. Thank you very much in advance. Best, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 1 13:08:55 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 1 Mar 2017 12:08:55 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields In-Reply-To: References: Message-ID: <07BC6EA1-15AE-4528-B9CC-05BA838317F0@cfin.au.dk> Hi Nicole, Lead field normalization is a different approach than Van Veen’s method, which is often called the Neural Activity Index (NAI) and closely related to the “unit noise gain” or “weight normalization” concept you might see in some literature.
Unfortunately this procedure is not yet documented, but it’s not too tricky. (Please use a brand new version of FieldTrip if you’d like to try this, as an old bug in the NAI orientation selection was inadvertently re-introduced in FieldTrip versions between September 2016 and last week). I personally find that the NAI gives more sensible results if you are contrasting something like post-stimulus activity to a pre-stimulus baseline. If you are instead contrasting two conditions against each other rather than a baseline, then the different normalization approaches should give (almost) the same results anyway. Anyway, regarding lead field normalization: it does indeed do a voxel-by-voxel normalization since it cycles through all the voxels in a for loop ('for ii=1:Ndipoles' on the second line). It is purely based on the properties of the lead field, and as you noticed, is unlike Van Veen’s method in that it does not use the noise estimate at all. BTW, I believe that the lead field "column normalization" approach has been more popular in the literature. This normalizes the x/y/z components of the lead field independently, rather than all together. You can try this with cfg.normalize = ‘column’ and see how the results compare. Cheers, Sarang On 01 Mar 2017, at 11:43, Klink-3, N.E.C. van > wrote: Dear all, I want to do SAM beamformer source localization on single trial EEG data. I would like to normalize the leadfields to correct for depth, like mentioned in the lmcv beamformer tutorial: (http://www.fieldtriptoolbox.org/tutorial/beamformer_lcmv) cfg = []; cfg.elec = hdr.elec; % electrode distances cfg.headmodel = vol; % volume conduction headmodel cfg.grid = grid; % normalized grid positions cfg.channel = {'EEG'}; cfg.normalize = 'yes'; % to remove depth bias (Q in eq. 27 of van Veen et al, 1997). 
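[Editor's note: spelled out, that combination might look roughly as follows — a sketch only, where lf, vol, and timelock stand for the leadfield, headmodel, and ft_timelockanalysis output discussed earlier in this thread.]

```matlab
% LCMV beamformer with NAI weight normalization, keeping single trials.
cfg = [];
cfg.method = 'lcmv';
cfg.lcmv.weightnorm = 'nai';   % Van Veen's neural activity index
cfg.keeptrials = 'yes';        % keep single-trial source time courses
cfg.grid = lf;                 % leadfield from ft_prepare_leadfield
cfg.headmodel = vol;
source = ft_sourceanalysis(cfg, timelock);
% then, per voxel, average the power of the reconstructed time series in
% source.avg.mom (e.g. the mean squared moment over time) to obtain a
% measure of induced rather than evoked power, as described above
```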
lf = ft_prepare_leadfield(cfg); However when I look what happens with cfg.normalize='yes', the following script is used in ft_compute_leadfield, from line 570: case 'yes' for ii=1:Ndipoles tmplf = lf(:, (3*ii-2):(3*ii)); if normalizeparam==0.5 % normalize the leadfield by the Frobenius norm of the matrix % this is the same as below in case normalizeparam is 0.5 nrm = norm(tmplf, 'fro'); else % normalize the leadfield by sum of squares of the elements of the leadfield matrix to the power "normalizeparam" % this is the same as the Frobenius norm if normalizeparam is 0.5 nrm = sum(tmplf(:).^2)^normalizeparam; end if nrm>0 tmplf = tmplf ./ nrm; end lf(:, (3*ii-2):(3*ii)) = tmplf; end This seems to me as independent of the dipole location, and does not use an estimate of the noise spectrum as in Eq 27 of van Veen et al 1997. DICS beamformer has the option to estimate the noise spectrum with 'projectnoise', but SAM beamformer does not have that option. SAM does something with noise and a lambda, which is noise regularization I guess (beamformer_sam from line 102). I use Fieldtrip 20170212. My main question: how do I correct the leadfields for depth bias? Thanks in advance, Nicole ________________________________ De informatie opgenomen in dit bericht kan vertrouwelijk zijn en is uitsluitend bestemd voor de geadresseerde. Indien u dit bericht onterecht ontvangt, wordt u verzocht de inhoud niet te gebruiken en de afzender direct te informeren door het bericht te retourneren. Het Universitair Medisch Centrum Utrecht is een publiekrechtelijke rechtspersoon in de zin van de W.H.W. (Wet Hoger Onderwijs en Wetenschappelijk Onderzoek) en staat geregistreerd bij de Kamer van Koophandel voor Midden-Nederland onder nr. 30244197. Denk s.v.p aan het milieu voor u deze e-mail afdrukt. ________________________________ This message may contain confidential information and is intended exclusively for the addressee. 
If you receive this message unintentionally, please do not use the contents but notify the sender immediately by return e-mail. University Medical Center Utrecht is a legal person by public law and is registered at the Chamber of Commerce for Midden-Nederland under no. 30244197. Please consider the environment before printing this e-mail. _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.urai at gmail.com Wed Mar 1 19:38:44 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 1 Mar 2017 10:38:44 -0800 Subject: [FieldTrip] compiling ft_volumenormalise Message-ID: Hi FieldTrippers, I compile my code to run on the supercomputer cluster (without many matlab licenses), which usually works fine when I do something like:

addpath('~/Documents/fieldtrip');
ft_defaults;
addpath('~/Documents/fieldtrip/external/spm8');
mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
    '-R', '-nodisplay', '-R', '-singleCompThread', fname);

However, compiling the ft_volumenormalise function gives me some problems. Specifically, if source is the result of my beamformer analysis, this code

cfg = [];
cfg.parameter = 'pow';
cfg.nonlinear = 'no'; % can warp back to individual
cfg.template = '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii';
cfg.write = 'no';
cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage will bug
source = ft_volumenormalise(cfg, source);

works fine when running it within Matlab. However, when I run the executable after compiling (which completes without error), a low-level spm function throws the following error:

the input is source data with 16777216 brainordinates on a [256 256 256] grid
Warning: could not reshape "freq" to the expected dimensions
> In ft_datatype_volume (line 136)
  In ft_checkdata (line 350)
  In ft_volumenormalise (line 98)
  In B6b_sourceContrast_volNormalise (line 57)
Converting the coordinate system from ctf to spm
Undefined function 'fname' for input arguments of type 'struct'
Error in file_array (line 32)
Error in spm_create_vol>create_vol (line 77)
Error in spm_create_vol (line 16)
Error in volumewrite_spm (line 71)
Error in ft_write_mri (line 65)
Error in align_ctf2spm (line 168)
Error in ft_convert_coordsys (line 95)
Error in ft_volumenormalise (line 124)
Error in B6b_sourceContrast_volNormalise (line 57)
MATLAB:UndefinedFunction

I'd be very grateful for hints from anyone who's successfully compiled the ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer at compilation does not solve the problem. Thanks, — Anne E. Urai, MSc PhD student | Institut für Neurophysiologie und Pathophysiologie Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | Hamburg, Germany www.anneurai.net / @AnneEUrai -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Wed Mar 1 20:22:48 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 1 Mar 2017 14:22:48 -0500 Subject: [FieldTrip] error in filter_with_correction In-Reply-To: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> References: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> Message-ID: Did you also try searching the mailing list archives? The same error has come up a few times. What is your trial duration and sampling frequency? You'll need several seconds to get an accurate idea of what's going on in such low frequencies.
Have you tried just detrending and applying a 2 Hz low-pass filter? It seems like that might have essentially the same effect. Hope one of those helps, Teresa

On Fri, Feb 24, 2017 at 8:29 PM, Wong-Barnum, Mona wrote:
> Hello fellow FieldTrip'er:
>
> Can someone help me understand and hopefully fix the following runtime error message I am seeing (I searched a bit on the website documentation but didn’t find anything):
>
> Error using filter_with_correction (line 51)
> Calculated filter coefficients have poles on or outside the unit circle and will not be stable. Try a higher cutoff frequency or a different type/order of filter.
>
> Error in filter_with_correction (line 51)
> error('Calculated filter coefficients have poles on or outside the unit circle and will not be stable. Try a higher cutoff frequency or a different type/order of filter.');
>
> Error in ft_preproc_bandpassfilter (line 286)
> filt = filter_with_correction(B,A,dat,dir,usefftfilt);
>
> Error in preproc (line 324)
> if strcmp(cfg.bpfilter, 'yes'), dat = ft_preproc_bandpassfilter(dat, fsample, cfg.bpfreq, cfg.bpfiltord, cfg.bpfilttype, cfg.bpfiltdir, cfg.bpinstabilityfix, cfg.bpfiltdf, cfg.bpfiltwintype, cfg.bpfiltdev, cfg.plotfiltresp, cfg.usefftfilt); end
>
> Error in ft_preprocessing (line 592)
> [cutdat{i}, label, time{i}, cfg] = preproc(dat, hdr.label(rawindx), tim, cfg, begpadding, endpadding);
>
> Error in test (line 25)
> data = ft_preprocessing ( cfg );
>
> Error in run (line 96)
> evalin('caller', [script ';']);
>
> Here is my script:
>
> addpath /path/to/my/fieldtrip
> ft_defaults
>
> % 1. MEG
> disp ( 'Reading 1.fif...' )
> cfg = [];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> disp ( 'Getting MEG channel 1...' )
> meg_channel = ft_channelselection ( 'MEG0111', data.label );
> cfg = [];
> cfg.channel = meg_channel;
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving meg...' )
> save meg.mat meg -v7.3;
> clearvars cfg meg;
>
> % 2. Low delta MEG
> disp ( 'Low delta MEG...' )
> cfg = [];
> cfg.bpfilter = 'yes';
> cfg.bpfreq = [0.1 2];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> cfg = [];
> cfg.channel = meg_channel;
> cfg.frequency = [0.1 2];
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving low delta meg...' )
> save low_delta_meg.mat meg -v7.3;
> clearvars cfg meg;
>
> Line #25 is the last “data = ft_preprocessing ( cfg );” line.
>
> If I do cfg.bpfreq = [2 4] then there is no error but I would really like to get this low [0.1 2] range… any tips?
>
> Mona
>
> *********************************************
> Mona Wong
> Web & Mobile Application Developer
> San Diego Supercomputer Center
>
> Believing we are in control is an illusion that brings suffering.
> *********************************************
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>
-- Teresa E. Madsen, PhD Research Technical Specialist: in vivo electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Wed Mar 1 21:55:03 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 1 Mar 2017 15:55:03 -0500 Subject: [FieldTrip] marking artifacts by channel + trial Message-ID: Hello All, When performing visual artifact rejection, I want to be able to mark artifacts that occur during some specific trials and only on some specific channels. In the tutorials I see only ways to mark bad channels (i.e. across all trials) or bad trials (i.e. across all channels). Does FieldTrip handle marking artifacts restricted to some channel/trial combination?
Thanks, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris.burle at univ-amu.fr Thu Mar 2 15:19:18 2017 From: boris.burle at univ-amu.fr (Boris BURLE) Date: Thu, 2 Mar 2017 15:19:18 +0100 Subject: [FieldTrip] Post-doc position in development of cognitive control, Marseille, France Message-ID: <0ce55d3b-c424-633f-225b-2495e615a120@univ-amu.fr> Dear colleagues, Please find below a post-doc position offer that may be of interest to Fieldtrip users: B. Burle ------------------------------------- Post-doc research position in Developmental Psychology/Cognitive Neuroscience in Marseille, France We are seeking a highly motivated fellow for a 2-year (potentially renewable) post-doc position to conduct EEG and structural MRI studies from children to young adults. This position is offered within a larger project aimed at tracking the development of cognitive control from childhood to adulthood. In the first phase of the project, we have so far collected behavioral data in a large cohort of more than 400 participants (5 to 14 years old) performing conflict tasks. The second phase, in which oculometry (to extract pupil dilation and eye movements) and electromyography (to extract the so-called "partial errors") are recorded in another group of children (comparable age span), is currently being completed. Capitalizing on the results of the first two phases, the hired fellow will be mainly involved in the third phase of this project, which will study the cortical components related to executive control maturation. EEG (and EMG) will be recorded from children performing conflict tasks to track the maturation of the different electrophysiological markers of executive control. The same children will undergo a structural MRI scan to obtain precise anatomy and connectivity, along with resting-state activity. The recruited fellow will be in charge of the acquisition and processing of those data.
The evolution of the EEG markers and of performance will be related to the maturation state of the different brain areas of interest and their connectivity. Candidates should hold a PhD in cognitive/developmental psychology/neuroscience. Expertise in either EEG or structural MRI is required. Experience with children is a real plus, and if that experience is combined with one of the two techniques listed above, it is a major advantage. However, candidates with a strong background in one of those techniques but no experience with children are still encouraged to apply. Knowledge of a high-level programming language (Python, MATLAB, R...) is a real plus. The daily working language will be English, but given the extensive interaction with children, non-French-speaking applicants will need to speak a minimum amount of French (French courses can be taken on site). The project is interdisciplinary, at the crossroads of developmental psychology, the cognitive neuroscience of cognitive control, and neuroimaging. The recruited fellow will hence interact with researchers in all three domains. Moreover, the project is embedded in the vibrant, second-largest “behavioral and brain sciences” community in France. State-of-the-art methodologies are accessible (research-dedicated MRI on a latest-generation Siemens 3T Prisma scanner, MEG, robotized TMS, high-resolution EEG, etc.). Marseille is located in the south of France (Provence), on the shore of the Mediterranean Sea, and is known for its very nice weather and surroundings: it is bordered by the beautiful “Calanques”, and the Alps as well as the major cultural cities of Provence (Aix-en-Provence, Avignon, Arles...) are within a 1.5-hour drive. Salary is based on experience according to the rules of the CNRS (French National Center for Scientific Research) and will be around €2000 net per month. Applications are welcome immediately and will be considered until the position is filled. The position is available immediately.
Please send applications (and/or requests for more information) to boris.burle at univ-amu.fr with [Post-Doc Devel] in the subject line. -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Thu Mar 2 15:53:06 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Thu, 2 Mar 2017 09:53:06 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: No, not really. The only way I've found to do that is to loop through my artifact rejection process on each trial individually, then merge them back together with NaNs filling in where there are artifacts, but then that breaks every form of analysis I want to do. :-P I wonder if it would work to fill in the artifacts with 0s instead of NaNs....I might play with that. Let me know if you're interested in some example code. ~Teresa On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > Hello All, > > When performing visual artifact rejection, I want to be able to mark > artifacts that occur during some specific trials and only on some specific > channels. In the tutorials I see only ways to mark bad channels (i.e. > across all trials) or bad trials (i.e. across all channels). Does FieldTrip > handle marking artifacts restricted to some channel/trial combination? > > Thanks, > Tim > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed...
URL: From timeehan at gmail.com Thu Mar 2 15:55:14 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 2 Mar 2017 09:55:14 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Hi Teresa, Thanks for the reply. I'll take a look at your example if you don't mind sharing. Thanks! Tim On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote: > No, not really. The only way I've found to do that is to loop through my > artifact rejection process on each trial individually, then merge them back > together with NaNs filling in where there are artifacts, but then that > breaks every form of analysis I want to do. :-P > > I wonder if it would work to fill in the artifacts with 0s instead of > NaNs....I might play with that. Let me know if you're interested in some > example code. > > ~Teresa > > > On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > >> Hello All, >> >> When performing visual artifact rejection, I want to be able to mark >> artifacts that occur during some specific trials and only on some specific >> channels. In the tutorials I see only ways to mark bad channels (i.e. >> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >> handle marking artifacts restricted to some channel/trial combination? >> >> Thanks, >> Tim >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > Teresa E. Madsen, PhD > Research Technical Specialist: *in vivo *electrophysiology & data > analysis > Division of Behavioral Neuroscience and Psychiatric Disorders > Yerkes National Primate Research Center > Emory University > Rainnie Lab, NSB 5233 > 954 Gatewood Rd. 
NE > Atlanta, GA 30329 > (770) 296-9119 > braingirl at gmail.com > https://www.linkedin.com/in/temadsen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martabortoletto at yahoo.it Fri Mar 3 09:16:53 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Fri, 3 Mar 2017 08:16:53 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy In-Reply-To: <1398938405.1509799.1488528843896@mail.yahoo.com> References: <1398938405.1509799.1488528843896.ref@mail.yahoo.com> <1398938405.1509799.1488528843896@mail.yahoo.com> Message-ID: <89519094.172506.1488529013260@mail.yahoo.com> Dear all, Please find below an announcement for a post-doc position to work on a project on TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by Prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to potentially interested candidates. Cheers, Marta Bortoletto and Anna Fertonani Marta Bortoletto, PhD Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli Via Pilastroni 4, 25125 Brescia, Italy Phone number: (+39) 0303501594 E-mail: marta.bortoletto at cognitiveneuroscience.it web: http://www.cognitiveneuroscience.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Job description.pdf Type: application/pdf Size: 18043 bytes Desc: not available URL: From marc.lalancette at sickkids.ca Fri Mar 3 18:22:22 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Fri, 3 Mar 2017 17:22:22 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B964C34@SKMBXX01.sickkids.ca> Normalizing columns of the leadfield separately is not recommended. It is not a rotationally invariant operation, meaning you will get different results depending on your choice of coordinate system, which in short means that it introduces a physically meaningless bias, thus potentially amplitude and localization distortions. Note that this is also true of the unit-noise-gain normalization formula for the vector beamformer of Sekihara (which may still be used in some software, but is not in Fieldtrip). I was planning on writing a short paper on this, but unfortunately never found the time. I had a poster at Biomag 2014. Here's the link, but note that I later found errors in the computations for the "source bias and resolution figures" so it's probably best to ignore them, though the general idea that there are orientation and possibly location biases in most vector formulae is still valid http://dx.doi.org/10.6084/m9.figshare.1148970 . Maybe I'll redo the figures and post a "corrected" version at some point. Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. 
If you have received this e-mail in error, please contact the sender and delete all copies. From v.litvak at ucl.ac.uk Fri Mar 3 18:59:54 2017 From: v.litvak at ucl.ac.uk (Vladimir Litvak) Date: Fri, 3 Mar 2017 17:59:54 +0000 Subject: [FieldTrip] SPM course for MEG/EEG in London: May 8-10, 2017 Message-ID: Dear all, We are pleased to announce that our annual SPM course for MEG/EEG will take place this year from Monday May 8 to Wednesday May 10 2017. Hosted by University College London, the course will be held at Queen Square, a very central location in London (UK). The course will present instruction on the analysis of MEG and EEG data. The first two days will combine theoretical presentations with practical demonstrations of the different data analysis methods implemented in SPM. On the last day participants will have the opportunity to work on SPM tutorial data sets under the supervision of the course faculty. We also invite students to bring their own data for analysis. The course is suitable for both beginners and more advanced users. The topics that will be covered range from pre-processing and statistical analysis to source localization and dynamic causal modelling. The program is listed below. Registration is now open. For full details see http://www.fil.ion.ucl.ac.uk/spm/course/london/ where you can also register. Available places are limited so please register as early as possible if you would like to attend! ---------------------- Monday May 8th (33 Queen square, basement) 9.00 - 9.30 Registration 9.30 - 9.45 SPM introduction and resources Guillaume Flandin 9.45 - 10.30 What are we measuring with M/EEG?
Saskia Heibling 10.30 - 11.15 Data pre-processing Hayriye Cagnan Coffee 11.45 - 12.30 Data pre-processing – demo Sofie Meyer, Misun Kim 12.30 - 13.15 General linear model and classical inference Christophe Phillips Lunch 14.15 - 15.00 Multiple comparisons problem and solutions Guillaume Flandin 15.00 - 15.45 Bayesian inference Christophe Mathys Coffee 16.15 - 17.45 Group M/EEG dataset analysis - demo Jason Taylor, Martin Dietz 17.45 - 18.30 Advanced applications of the GLM Ashwani Jha, Bernadette van Wijk Tuesday May 9th (33 Queen square, basement) 9.30 - 10.15 M/EEG source analysis Gareth Barnes 10.15 - 11.15 M/EEG source analysis – demo Jose Lopez, Leonardo Duque Coffee 11.45 - 12.30 The principles of dynamic causal modelling Bernadette van Wijk 12.30 - 13.15 DCM for evoked responses Ryszard Auksztulewicz Lunch 14.15 - 15.00 DCM for steady state responses Rosalyn Moran 15.00 - 15.45 DCM - demo Richard Rosch, Tim West Coffee 16.15 - 17.00 Bayesian model selection and averaging Peter Zeidman 17.00 - 18.30 Clinic - questions & answers Karl Friston 19.00 - ... Social Event Wednesday May 10th 9.30 - 17.00 Practical hands-on session in UCL computer class rooms. Participants can either work on SPM tutorial datasets or on their own data with the help of the faculty. There will also be an opportunity to ask questions in small tutorial groups for further discussions on the topics of the lectures. -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Fri Mar 3 23:31:04 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Fri, 3 Mar 2017 17:31:04 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Here's a rough sketch of my approach, with one custom function attached. If you or others find it useful, maybe we can think about ways to incorporate it into the FieldTrip code. 
I've been working mostly with scripts, but you've inspired me to work on functionizing the rest of it so it's more shareable. So, assuming raw multichannel data has been loaded into FieldTrip structure 'data' with unique trial identifiers in data.trialinfo... for ch = 1:numel(data.label) %% pull out one channel at a time cfg = []; cfg.channel = data.label{ch}; datch{ch} = ft_selectdata(cfg, data); %% identify large z-value artifacts and/or whatever else you might want cfg = []; cfg.artfctdef.zvalue.channel = 'all'; cfg.artfctdef.zvalue.cutoff = 15; cfg.artfctdef.zvalue.trlpadding = 0; cfg.artfctdef.zvalue.fltpadding = 0; cfg.artfctdef.zvalue.artpadding = 0.1; cfg.artfctdef.zvalue.rectify = 'yes'; [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); %% replace artifacts with NaNs cfg = []; cfg.artfctdef.zvalue.artifact = artifact.zvalue; cfg.artfctdef.reject = 'nan'; datch{ch} = ft_rejectartifact(cfg,datch{ch}); end %% re-merge channels data = ft_appenddata([],datch); %% mark uniform NaNs as artifacts when they occur across all channels % and replace non-uniform NaNs (on some but not all channels) with zeroes, saving times [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, see attached %% reject artifacts by breaking into sub-trials cfg = []; cfg.artfctdef.nan2zero.artifact = artifact; cfg.artfctdef.reject = 'partial'; data = ft_rejectartifact(cfg,data); %% identify real trials trlinfo = unique(data.trialinfo,'rows','stable'); for tr = 1:size(trlinfo,1) %% calculate trial spectrogram cfg = []; cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); cfg.keeptrials = 'no'; % refers to sub-trials cfg.method = 'mtmconvol'; cfg.output = 'powandcsd'; cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz cfg.tapsmofrq = cfg.foi/10; % smooth by 10% cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W cfg.toi = '50%'; cfg.pad = 'nextpow2'; freq = ft_freqanalysis(cfg,data); %% replace powspctrm & crsspctrum values with NaNs 
% where t_ftimwin (or wavlen for wavelets) overlaps with artifact for ch = 1:numel(freq.label) badt = [times{tr,ch}]; if ~isempty(badt) && any(... badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); for t = 1:numel(freq.time) for f = 1:numel(freq.freq) mint = freq.time(t) - freq.cfg.t_ftimwin(f); maxt = freq.time(t) + freq.cfg.t_ftimwin(f); if any(badt > mint & badt < maxt) freq.powspctrm(ch,f,t) = NaN; freq.crsspctrm(ci,f,t) = NaN; end end end end end %% save corrected output save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); end On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: > Hi Teresa, > > Thanks for the reply. I'll take a look at your example if you don't mind > sharing. Thanks! > > Tim > > On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote: > >> No, not really. The only way I've found to do that is to loop through my >> artifact rejection process on each trial individually, then merge them back >> together with NaNs filling in where there are artifacts, but then that >> breaks every form of analysis I want to do. :-P >> >> I wonder if it would work to fill in the artifacts with 0s instead of >> NaNs....I might play with that. Let me know if you're interested in some >> example code. >> >> ~Teresa >> >> >> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >> >>> Hello All, >>> >>> When performing visual artifact rejection, I want to be able to mark >>> artifacts that occur during some specific trials and only on some specific >>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>> handle marking artifacts restricted to some channel/trial combination? 
>>> >>> Thanks, >>> Tim >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> >> -- >> Teresa E. Madsen, PhD >> Research Technical Specialist: *in vivo *electrophysiology & data >> analysis >> Division of Behavioral Neuroscience and Psychiatric Disorders >> Yerkes National Primate Research Center >> Emory University >> Rainnie Lab, NSB 5233 >> 954 Gatewood Rd. NE >> Atlanta, GA 30329 >> (770) 296-9119 >> braingirl at gmail.com >> https://www.linkedin.com/in/temadsen >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- function [artifact,data,times] = artifact_nan2zero_TEM(data) % ARTIFACT_NAN2ZERO_TEM marks NaNs that occur uniformly across all channels % as artifacts, taking FT format data structure & returning the same % format as ft_artifact_xxx, for input to ft_rejectartifact. Non-uniform % NaNs (those not present on all channels) are replaced with 0s to avoid % breaking analysis functions. Also returns times of replaced NaNs by % trial & channel, so they can be changed back to NaNs in freq output. % % written 3/2/17 by Teresa E. 
Madsen artifact = []; times = cell(numel(data.trial), numel(data.label)); for tr = 1:numel(data.trial) % find NaNs to mark as artifacts (present uniformly across all channels) trlnan = isnan(data.trial{tr}); % identify NaNs by channel & timepoint allnan = all(trlnan,1); % need to specify dim in case of single channel % find, save timepoints, & replace non-uniform NaNs (not on all chs) w/ 0 replacenan = trlnan & repmat(~allnan,size(trlnan,1),1); for ch = 1:numel(data.label) times{tr,ch} = data.time{tr}(replacenan(ch,:)); % ID before replacing end data.trial{tr}(replacenan) = 0; % replace these w/ 0s if any(allnan) % determine the file sample #s for this trial trsamp = data.sampleinfo(tr,1):data.sampleinfo(tr,2); while any(allnan) % start from the end so sample #s don't shift endnan = find(allnan,1,'last'); allnan = allnan(1:endnan); % remove any non-NaNs after this % find last non-NaN before the NaNs beforenan = find(~allnan,1,'last'); if isempty(beforenan) % if no more non-NaNs begnan = 1; allnan = false; % while loop ends else % still more to remove - while loop continues begnan = beforenan + 1; allnan = allnan(1:beforenan); % remove the identified NaNs end % identify file sample #s that correspond to beginning and end of % this chunk of NaNs and append to artifact artifact = [artifact; trsamp(begnan) trsamp(endnan)]; %#ok end % while any(tnan) end % if any(tnan) end % for tr = 1:numel(data.trial) end From gaur-p at email.ulster.ac.uk Mon Mar 6 13:49:16 2017 From: gaur-p at email.ulster.ac.uk (Pramod Gaur) Date: Mon, 6 Mar 2017 12:49:16 +0000 Subject: [FieldTrip] Problem in buffer connection Message-ID: <518001d29678$12cbe540$3863afc0$@email.ulster.ac.uk> Dear community, My name is Pramod Gaur and I am PhD student in the Ulster university in UK working Brain-Computer Interfaces. Currently I am trying to implement the real-time classification problem mentioned in the tutorials. We have Neuromag Elekta MEG machine. 
I tried to execute the following commands, but the call to ft_read_header hangs: strcom = 'buffer://ip-address-of-acquisition-machine:1972'; hdr = ft_read_header(strcom, 'cache', true); I executed the command ./neuromag2ft on the acquisition computer. Can anybody please suggest how this problem could be resolved? Any help would be highly appreciated. Best Regards, Pramod Gaur -------------- next part -------------- An HTML attachment was scrubbed... URL: From changa5 at mcmaster.ca Mon Mar 6 19:04:31 2017 From: changa5 at mcmaster.ca (Andrew Chang) Date: Mon, 6 Mar 2017 13:04:31 -0500 Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix? Message-ID: Dear Fieldtrip users, I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using the colin27 template) to the EEG coordinate system, and then reslicing the MRI onto a cubic grid. However, I found that ft_volumereslice rotates the MRI image, which seems weird. This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below): [image: Inline image 1] However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below): [image: Inline image 3] I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on the xyz dimensions when I use the 'headshape' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters seems not to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem. Any advice or help would be much appreciated! Thank you all in advance!
Here is the .mat file of what I have done: https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0 Here is my code %% load MRI [mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii'); %% load elec locations % I do not have the channel location or the headshape file, so I use a template cap to build the channel locations and headshape load('chanCfg') sphcoor = [Theta,Phi]'; cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coorfinates into xyz elec.elecpos = cartcoor; elec.chanpos = cartcoor; elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels elec.unit = 'cm'; shape.pos = elec.elecpos; shape.label = elec.label; shape.unit = elec.unit ; shape.coordsys = 'spm'; %% Coregister the anatomical MRI to the EEG coordinate system cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'spm'; [mri_realigned1] = ft_volumerealign(cfg, mri_orig); cfg = []; mri_realigned2 = []; cfg.method = 'headshape'; cfg.coordsys = 'spm'; cfg.headshape = shape; [mri_realigned2] = ft_volumerealign(cfg, mri_orig); % key in the following parameter for controlling the alignment % rotation: [0,0,0.5] % scale: [0.95, .8, .8] % translate: [0, 15, 0] cfg = []; cfg.resolution = 1; cfg.xrange = [-100 100]; cfg.yrange = [-110 110]; cfg.zrange = [-50 120]; mri_resliced = ft_volumereslice(cfg, mri_realigned2); Best, Andrew -- Andrew Chang, Ph.D. Candidate Vanier Canada Graduate Scholar http://changa5.wordpress.com/ Auditory Development Lab Department of Psychology, Neuroscience & Behaviour McMaster University -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: realigned.jpg Type: image/jpeg Size: 157286 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: resliced.jpg Type: image/jpeg Size: 145531 bytes Desc: not available URL: From bqrosen at ucsd.edu Tue Mar 7 03:38:22 2017 From: bqrosen at ucsd.edu (Burke Rosen) Date: Tue, 7 Mar 2017 02:38:22 +0000 Subject: [FieldTrip] units of the leadfield matrix Message-ID: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> Hello, What are the units of the leadfield matrix produced by ft_compute_leadfield for EEG, Gradiometers, and Magnetometers? In particular, when using the OpenMEEG BEM method. Thank you, Burke Rosen -------------- next part -------------- An HTML attachment was scrubbed... URL: From rikkert.hindriks at upf.edu Tue Mar 7 08:53:51 2017 From: rikkert.hindriks at upf.edu (HINDRIKS, RIKKERT) Date: Tue, 7 Mar 2017 08:53:51 +0100 Subject: [FieldTrip] units of the leadfield matrix In-Reply-To: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> References: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> Message-ID: https://mailman.science.ru.nl/pipermail/fieldtrip/2015-August/009561.html On Tue, Mar 7, 2017 at 3:38 AM, Burke Rosen wrote: > Hello, > > What are the units of the leadfield matrix produced by > ft_compute_leadfield for EEG, Gradiometers, and Magnetometers? > > In particular, when using the OpenMEEG BEM method. > > Thank you, > > Burke Rosen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Tue Mar 7 09:17:50 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Tue, 7 Mar 2017 08:17:50 +0000 Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix? In-Reply-To: References: Message-ID: <33D0E3BA-5A8B-4262-80D1-99A5AB94C268@donders.ru.nl> Hi Andrew, What’s the point in doing the second, headshape based alignment?
I suppose that the template electrode positions are defined in a different coordinate system than ‘spm’? If so, be aware that probably these template positions do not nicely match the reconstructed headsurface from the template MRI, so you need to do the headshape based alignment by hand, since the automatic icp algorithm probably will get caught in an inappropriate local minimum. As long as you don’t rotate around the z-axis, I would assume that the ‘rotation’ would go away. Note, that the rotation of the image itself (as per ft_volumereslice) is not the problem, but the fact that it is rotated probably is, because that suggest that your coregistration between anatomy and electrodes does not make sense. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 06 Mar 2017, at 19:04, Andrew Chang > wrote: Dear Fieldtrip users, I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using colin27 template) to the EEG coordinate system, and then reslicing the MRI on to a cubic grid. However, I found that the ft_volumereslice rotates the MRI image, which seems weird. This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below): However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below): I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on xyz dimensions, when I use the 'headshap' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters seems not to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem. Any advice or help would be much appreciated! 
Thank you all in advance! Here is the .mat file of what I have done: https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0 Here is my code: %% load MRI [mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii'); %% load elec locations % I do not have the channel location or the headshape file, so I use a template cap to build the channel locations and headshape load('chanCfg') sphcoor = [Theta,Phi]'; cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coordinates into xyz elec.elecpos = cartcoor; elec.chanpos = cartcoor; elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels elec.unit = 'cm'; shape.pos = elec.elecpos; shape.label = elec.label; shape.unit = elec.unit; shape.coordsys = 'spm'; %% Coregister the anatomical MRI to the EEG coordinate system cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'spm'; [mri_realigned1] = ft_volumerealign(cfg, mri_orig); cfg = []; mri_realigned2 = []; cfg.method = 'headshape'; cfg.coordsys = 'spm'; cfg.headshape = shape; [mri_realigned2] = ft_volumerealign(cfg, mri_orig); % key in the following parameters for controlling the alignment % rotation: [0,0,0.5] % scale: [0.95, .8, .8] % translate: [0, 15, 0] cfg = []; cfg.resolution = 1; cfg.xrange = [-100 100]; cfg.yrange = [-110 110]; cfg.zrange = [-50 120]; mri_resliced = ft_volumereslice(cfg, mri_realigned2); Best, Andrew -- Andrew Chang, Ph.D. Candidate Vanier Canada Graduate Scholar http://changa5.wordpress.com/ Auditory Development Lab Department of Psychology, Neuroscience & Behaviour McMaster University _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed...
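What ft_volumereslice does, in essence, is interpolate the volume onto a new axis-aligned voxel grid in the head coordinate system, so any rotation visible afterwards comes from the voxel-to-head transform produced by the preceding realignment, not from the reslicing itself. A minimal sketch of that resampling step (plain NumPy/SciPy as an illustration, not FieldTrip code; the toy volume and the 90-degree rotation are made up):

```python
import numpy as np
from scipy.ndimage import affine_transform

# Toy volume: a bar along the first voxel axis.
vol = np.zeros((20, 20, 20))
vol[5:15, 9:11, 9:11] = 1.0

# Suppose realignment produced a voxel-to-head transform whose
# rotation part is a 90-degree turn about the z-axis.
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 1.]])

# Reslicing samples the volume on an axis-aligned grid in head
# coordinates: output[o] = vol[R^-1 @ (o - c) + c], with c the
# volume centre (translations ignored for brevity).
c = (np.array(vol.shape) - 1) / 2.0
resliced = affine_transform(vol, R.T, offset=c - R.T @ c, order=1)

# The bar now runs along the second axis: the anatomy looks
# "rotated", even though the reslicing step itself added no rotation.
```

So the thing to fix is the transform coming out of ft_volumerealign (the coregistration), after which the resliced image will come out straight.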
URL: From jens.klinzing at uni-tuebingen.de Tue Mar 7 23:36:04 2017 From: jens.klinzing at uni-tuebingen.de ("Jens Klinzing, Universität Tübingen") Date: Tue, 07 Mar 2017 23:36:04 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni Message-ID: <58BF35D4.7060400@uni-tuebingen.de> Dear Fieldtrip community, when calling ft_prepare_sourcemodel to create an individual sourcemodel I get quite different 'inside' definitions for the same subject when a) providing an unsegmented MRI and warping to the template MNI (see attachment: green) b) when providing an already segmented MRI (see attachment: blue) In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel). Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? Fieldtrip does not allow providing an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment but they seem to be used only when I already provide a segmented MRI (so they can't help me here). I could just use the cfg.inwardshift option to fix the issue but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation. Thanks in advance for your help! All the best, Jens -------------- next part -------------- A non-text attachment was scrubbed...
Name: sourcemodel green_warpmni blue_nowarp_onsegmentedmri.PNG Type: image/png Size: 55645 bytes Desc: not available URL: From m.chait at ucl.ac.uk Wed Mar 8 01:04:46 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Wed, 8 Mar 2017 00:04:46 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London. Key Requirements: The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details: You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk).
Maria Chait PhD m.chait at ucl.ac.uk Reader in Auditory Cognitive Neuroscience Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/ UCL Ear Institute 332 Gray's Inn Road London WC1X 8EE -------------- next part -------------- An HTML attachment was scrubbed... URL: From ainsley.temudo at nyu.edu Wed Mar 8 07:52:03 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 10:52:03 +0400 Subject: [FieldTrip] Source Reconstruction Message-ID: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. First I read in my MRI and determine the coordinate system, which is LPS. mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') Next I realign to the CTF coordinate system by marking the NAS, LPA, RPA cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mri); I read in the sensor information and added in the coordinates for the marker positions. We have five marker positions; the three I picked were the left and right ear markers and the middle forehead marker.
grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point coordinates into the configuration, which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); When I use ft_sensorrealign I get the following errors: Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) When I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. Thanks, Ainsley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 08:26:38 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 07:26:38 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: Message-ID: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Hi Ainsley, Why would you want to use sensorrealign/electroderealign since you have MEG-data? The former functions may be needed for EEG electrodes, not for MEG sensors.
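For MEG, the usual alternative is to leave the gradiometer array itself untouched and move it as a whole with a single rigid transform (in FieldTrip, ft_transform_geometry applies a 4x4 homogeneous matrix to a grad structure). Numerically that is just the same matrix applied to every coil/sensor position; a minimal sketch with made-up numbers (NumPy illustration, not FieldTrip code):

```python
import numpy as np

def apply_transform(M, pos):
    """Apply a 4x4 homogeneous transform M to an (N, 3) array of
    positions, as a whole-sensor-array realignment would."""
    hom = np.c_[pos, np.ones(len(pos))]   # append homogeneous coordinate
    return (hom @ M.T)[:, :3]

# Made-up rigid transform: translate by (1, 2, 3) cm.
M = np.eye(4)
M[:3, 3] = [1.0, 2.0, 3.0]

pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0]])
moved = apply_transform(M, pos)   # [[1, 2, 3], [2, 3, 4]]
```

In practice the matrix would come from a fiducial-based coregistration rather than being written by hand, and a full transform also rotates the coil orientations, which this position-only sketch omits.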
Best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 07:52, Ainsley Temudo > wrote: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. first I read in my MRI and determine the coordinate system which is LPS. mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') next I realign to the CTF coordinate system by marking the NAS LPA, RPA cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mir); I read in the sensor information and added in the coordinates for the marker positions. we have five marker positions, the three I picked were the left and right ear markers and the middle forehead marker. grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point cordinates into the configuration which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); when I use ft_sensorrealign I get the following errors : Undefined function or variable 'lab'. 
Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) when I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. Thanks, Ainsley _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 08:27:30 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 07:27:30 +0000 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni References: Message-ID: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Hi Jens, What does the ‘green’ point cloud look like relative to the blue points when you switch off the non-linear step in recipe a)? 
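One way to make the "inside too large" observation quantitative before choosing between cfg.inwardshift and re-segmentation is to compare the two inside definitions directly: count the selected grid points and look at their per-axis bounding boxes. A minimal sketch with synthetic positions (NumPy illustration; the radii are made up, just to mimic a skull-sized versus brain-sized inside):

```python
import numpy as np

def inside_extent(pos, inside):
    """Summarize an 'inside' definition: number of selected grid points
    and the per-axis bounding box of their positions."""
    pts = pos[inside]
    return pts.shape[0], pts.min(axis=0), pts.max(axis=0)

# Synthetic source grid; the two masks mimic a skull-sized (a, warped
# MNI) versus brain-sized (b, segmented MRI) inside definition.
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(2000, 3))   # positions in cm, made up
r = np.linalg.norm(pos, axis=1)
inside_a = r < 9.0                           # too large (skull-sized)
inside_b = r < 7.0                           # expected (brain-sized)

n_a, lo_a, hi_a = inside_extent(pos, inside_a)
n_b, lo_b, hi_b = inside_extent(pos, inside_b)
# n_a exceeds n_b, and the a-bounding box encloses the b-bounding box.
```

With real data, pos and inside would be sourcemodel.pos and sourcemodel.inside from the two ft_prepare_sourcemodel calls; a centimetre-scale difference in the bounding boxes would confirm that the warped inside follows the skull rather than the brain.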
JM > On 07 Mar 2017, at 23:36, Jens Klinzing, Universität Tübingen wrote: > > Dear Fieldtrip community, > when calling ft_prepare_sourcemodel to create an individual sourcemodel I get quite different 'inside' definitions for the same subject when > a) providing an unsegmented MRI and warping to the template MNI (see attachment: green) > b) when providing an already segmented MRI (see attachment: blue) > > In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel). > > Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? > Fieldtrip does not allow to provide an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment but they seem to be used only when I already provide a segmented MRI (so they can't help me here). > > I could just use the cfg.inwardshift option to fix the issue but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation. > > Thanks in advance for your help! > > All the best, > Jens > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From ainsley.temudo at nyu.edu Wed Mar 8 09:03:58 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 12:03:58 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
I originally used ft_sensorrealign, but I got the error messages, one of which said 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead', so that's why I used electrode realign instead, even though it's MEG data. I've been following this page to do the realignment. http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa If I use sensorrealign, how should I deal with these error messages? Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) Is there another way to realign my anatomical with my MEG sensors without using ft_sensorrealign? Thanks, Ainsley On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M. (Jan Mathijs) < jan.schoffelen at donders.ru.nl> wrote: > Hi Ainsley, > > Why would you want to use sensorrealign/electroderealign since you have > MEG-data? The former functions may be needed for EEG electrodes, not for > MEG sensors. > > Best wishes, > Jan-Mathijs > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 <+31%2024%20361%204793> > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > On 08 Mar 2017, at 07:52, Ainsley Temudo wrote: > > Hi FieldTrip Experts, > > I am trying to perform source reconstruction, and I am having trouble with > coregistering my anatomical with the sensors. The MEG system we're using is > Yokogawa and the anatomical is a NIFTI file. I get some errors when using > ft_sensorrealign and ft_electroderealign. I will go through the steps I > took before getting to this stage, as maybe I have done something wrong. > > first I read in my MRI and determine the coordinate system which is LPS.
> > mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); > mri = ft_determine_coordsys(mriunknown, 'interactive','yes') > > next I realign to the CTF coordinate system by marking the NAS LPA, RPA > > cfg = []; > cfg.method = 'interactive'; > cfg.coordsys = 'ctf'; > > mri_ctf = ft_volumerealign(cfg, mir); > > I read in the sensor information and added in the coordinates for the > marker positions. we have five marker positions, the three I picked were > the left and right ear markers and the middle forehead marker. > > grad=ft_read_sens('srcLocTest01_FT_01.con'); > > > > > grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; > grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; > grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; > > > grad.fid.label = {'NAS' 'LPA' 'RPA'}; > > I then put the template marker point cordinates into the configuration > which were taken from the mri_ctf > > cfg = []; > cfg.method = 'fiducial'; > cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; > cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; > cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; > > > cfg.target.label = {'NAS' 'LPA' 'RPA'}; > > > > grad_aligned = ft_sensorrealign(cfg, grad); > > when I use ft_sensorrealign I get the following errors : > > Undefined function or variable 'lab'. > > Error in channelposition (line 314) > n = size(lab,2); > > Error in ft_datatype_sens (line 328) > [chanpos, chanori, lab] = channelposition(sens); > > Error in ft_sensorrealign (line 212) > elec_original = ft_datatype_sens(elec_original); % ensure up-to-date > sensor description (Oct 2011) > > > when I use ft_electroderealign I get the following errors: > > Error using ft_fetch_sens (line 192) > no electrodes or gradiometers specified. > > Error in ft_electroderealign (line 195) > elec_original = ft_fetch_sens(cfg); > > > Hope you can help me figure out why I'm getting these errors. 
> Thanks, > Ainsley > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > > > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 <+31%2024%20361%204793> > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- *Ainsley Temudo* Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 09:15:48 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 08:15:48 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Ainsley, I have never worked with ‘yokogawa’ data myself, so I can’t be of much help. The documentation you point to is already several years old, and appears not to have been actively maintained. Most likely the error you get is caused by incompatibility between the current version of the FieldTrip code, and the example code provided. Perhaps someone that has recently done coregistration between anatomical data and yokogawa data can chime in? Best wishes, Jan-Mathijs On 08 Mar 2017, at 09:03, Ainsley Temudo > wrote: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
I originally used Ft_sensoralign, but I got the error messages one of which said 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead' so thats why I used electrode realign instead, even though it's MEG data. I've been following this page to do the realignment. http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa if I use sensorrealign how should I deal with these error messages? Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) is there another way to realign my anatomical with my MEG sensors without using ft_sensorrealign? Thanks, Ainsley On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M. (Jan Mathijs) > wrote: Hi Ainsley, Why would you want to use sensorrealign/electroderealign since you have MEG-data? The former functions may be needed for EEG electrodes, not for MEG sensors. Best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 07:52, Ainsley Temudo > wrote: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. first I read in my MRI and determine the coordinate system which is LPS. 
mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') next I realign to the CTF coordinate system by marking the NAS LPA, RPA cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mir); I read in the sensor information and added in the coordinates for the marker positions. we have five marker positions, the three I picked were the left and right ear markers and the middle forehead marker. grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point cordinates into the configuration which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); when I use ft_sensorrealign I get the following errors : Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) when I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. 
Thanks, Ainsley _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jens.klinzing at uni-tuebingen.de Wed Mar 8 10:22:30 2017 From: jens.klinzing at uni-tuebingen.de ("Jens Klinzing, Uni Tübingen") Date: Wed, 08 Mar 2017 10:22:30 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Message-ID: <58BFCD56.2080508@uni-tuebingen.de> Hi Jan-Mathijs, the size difference is still there with cfg.grid.nonlinear = no. Best, Jens > Schoffelen, J.M. (Jan Mathijs) > Wednesday, 8 March 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)? > > JM > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 101843 bytes Desc: not available URL: From seymourr at aston.ac.uk Wed Mar 8 11:19:13 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Wed, 8 Mar 2017 10:19:13 +0000 Subject: [FieldTrip] Source Reconstruction Message-ID: Hi Ainsley, Good to see you're using Fieldtrip + Yokogawa data as well :D As I'm sure you're aware the issue is that "unlike other systems, the Yokogawa system software does not automatically analyse its sensor locations relative to fiducial coils". One workaround option is to do your coregistration in the Yokogawa/KIT software MEG160 and then export the sensor locations. You can then follow a more standard FT coregistration route without the need to use ft_sensorrealign. As Jan Mathijs said the http://www.fieldtriptoolbox.org/getting_started/yokogawa page is very outdated, so I will update it at some point in the future with more relevant info + updated code for sensor realignment. Many thanks, Robert Seymour -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 8 11:34:03 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 8 Mar 2017 10:34:03 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: <1488969241.5011.5.camel@cfin.au.dk> Hi Ainsley, You might consider realigning your MEG to your MRI, rather than the other way around. Our group typically does it this way to also simplify some other aspects of our pipeline, in particular, to simplify (re-)use of BEM head models and plotting the final source maps on the participant's own MRI or the MNI template. You can find examples of our scripts on github: https://github.com/meeg-cfin/nemolab Check out basics/nemo_mriproc.m -- you may need to add your particular yokogawa system to line 45. 
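Whichever direction the coregistration goes, the fiducial-based step amounts to estimating one rigid-body transform from the paired NAS/LPA/RPA coordinates in the two spaces. A standard way to compute such a transform is the least-squares (Kabsch) fit sketched below (NumPy, made-up coordinates; an illustration of the idea, not the nemolab or FieldTrip implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t such that
    R @ src_i + t ~ dst_i, for (N, 3) arrays of paired fiducials."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical NAS/LPA/RPA pairs (cm): device space -> head space,
# generated from a known rotation about z plus a translation.
src = np.array([[9.6, 0.3, -0.5], [1.1, 7.6, -7.8], [0.9, -7.5, -6.5]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true

R, t = rigid_fit(src, dst)   # recovers R_true, t_true
```

Three non-collinear fiducials determine the transform exactly; with digitized headshape points the same fit is typically refined by ICP, which is what the headshape-based methods discussed in this thread do.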
(Note that I've tested this procedure on Elekta, CTF, and 4D/BTi data, but not yet Yokogawa.) An example of how to put all the pipeline pieces together for a basic LCMV source analysis and visualization of ERF data is given in: basics/nemo_sourcelocER.m Best wishes, Sarang On Wed, 2017-03-08 at 08:15 +0000, Schoffelen, J.M. (Jan Mathijs) wrote: > Hi Ainsley, > > I have never worked with ‘yokogawa’ data myself, so I can’t be of > much help. The documentation you point to is already several years > old, and appears not to have been actively maintained. Most likely > the error you get is caused by incompatibility between the current > version of the FieldTrip code, and the example code provided. Perhaps > someone that has recently done coregistration between anatomical data > and yokogawa data can chime in? > > Best wishes, > Jan-Mathijs > > > > On 08 Mar 2017, at 09:03, Ainsley Temudo > > wrote: > > > > Hi Jan-Mathijs > > Thanks for getting back to me so quickly. I originally used > > Ft_sensoralign, but I got the error messages one of which said > > 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN > > instead' so thats why I used electrode realign instead, even though > > it's MEG data. > > > > I've been following this page to do the realignment. > > > > http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokoga > > wa > > > > if I use sensorrealign how should I deal with these error > > messages? > > > > Undefined function or variable 'lab'. > > > > Error in channelposition (line 314) > > n = size(lab,2); > > > > Error in ft_datatype_sens (line 328) > > [chanpos, chanori, lab] = channelposition(sens); > > > > Error in ft_sensorrealign (line 212) > > elec_original = ft_datatype_sens(elec_original); % ensure up-to- > > date sensor description (Oct 2011) > > > > > > is there another way to realign my anatomical with my MEG sensors > > without using ft_sensorrealign?
From ainsley.temudo at nyu.edu Wed Mar 8 10:36:22 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 13:36:22 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs, I managed to get ft_electroderealign to work after some debugging, which involved commenting out parts of the script. Could you take a look at the two images I've attached? The first shows my volume conduction model and the sensors before realignment (unaligned.fig), and the second shows them after realignment (aligned.fig).
Any idea why the MRI markers that were used as the template (green) are so far apart from the MEG marker coil positions (red)? Also, it seems that before realignment everything looks okay, but how is that possible if my volume conduction model is in CTF coordinates and my MEG sensors are not? Thanks, Ainsley On Wed, Mar 8, 2017 at 12:15 PM, Schoffelen, J.M. (Jan Mathijs) < jan.schoffelen at donders.ru.nl> wrote: > Hi Ainsley, > > I have never worked with ‘yokogawa’ data myself, so I can’t be of much > help. The documentation you point to is already several years old, and > appears not to have been actively maintained. Most likely the error you get > is caused by incompatibility between the current version of the FieldTrip > code, and the example code provided. Perhaps someone who has recently done > coregistration between anatomical data and yokogawa data can chime in? > > Best wishes, > Jan-Mathijs > > > > On 08 Mar 2017, at 09:03, Ainsley Temudo wrote: > > Hi Jan-Mathijs, > Thanks for getting back to me so quickly. I originally used > ft_sensorrealign, but I got error messages, one of which said > 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead', > so that's why I used ft_electroderealign instead, even though it's MEG data. > > I've been following this page to do the realignment. > > http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa > > If I use sensorrealign, how should I deal with these error messages? > > Undefined function or variable 'lab'. > > Error in channelposition (line 314) > n = size(lab,2); > > Error in ft_datatype_sens (line 328) > [chanpos, chanori, lab] = channelposition(sens); > > Error in ft_sensorrealign (line 212) > elec_original = ft_datatype_sens(elec_original); % ensure up-to-date > sensor description (Oct 2011) > > > Is there another way to realign my anatomical with my MEG sensors without > using ft_sensorrealign? > > Thanks, > Ainsley > > On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M.
(Jan Mathijs) < > jan.schoffelen at donders.ru.nl> wrote: > >> Hi Ainsley, >> >> Why would you want to use sensorrealign/electroderealign since you have >> MEG-data? The former functions may be needed for EEG electrodes, not for >> MEG sensors. >> >> Best wishes, >> Jan-Mathijs >> >> >> J.M.Schoffelen, MD PhD >> Senior Researcher, VIDI-fellow - PI, language in interaction >> Telephone: +31-24-3614793 >> Physical location: room 00.028 >> Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands >> >> >> On 08 Mar 2017, at 07:52, Ainsley Temudo wrote: >> >> Hi FieldTrip Experts, >> >> I am trying to perform source reconstruction, and I am having trouble >> with coregistering my anatomical with the sensors. The MEG system we're >> using is Yokogawa and the anatomical is a NIFTI file. I get some errors >> when using ft_sensorrealign and ft_electroderealign. I will go through the >> steps I took before getting to this stage, as maybe I have done something >> wrong. >> >> First I read in my MRI and determine the coordinate system, which is LPS. >> >> mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); >> mri = ft_determine_coordsys(mriunknown, 'interactive','yes') >> >> Next I realign to the CTF coordinate system by marking the NAS, LPA, RPA. >> >> cfg = []; >> cfg.method = 'interactive'; >> cfg.coordsys = 'ctf'; >> >> mri_ctf = ft_volumerealign(cfg, mri); >> >> I read in the sensor information and added in the coordinates for the >> marker positions. We have five marker positions; the three I picked were >> the left and right ear markers and the middle forehead marker. 
>> >> grad = ft_read_sens('srcLocTest01_FT_01.con'); >> >> >> grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; >> grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; >> grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; >> >> grad.fid.label = {'NAS' 'LPA' 'RPA'}; >> >> I then put the template marker point coordinates into the configuration, >> which were taken from the mri_ctf. >> >> cfg = []; >> cfg.method = 'fiducial'; >> cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; >> cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; >> cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; >> >> cfg.target.label = {'NAS' 'LPA' 'RPA'}; >> >> >> grad_aligned = ft_sensorrealign(cfg, grad); >> >> When I use ft_sensorrealign I get the following errors: >> >> Undefined function or variable 'lab'. >> >> Error in channelposition (line 314) >> n = size(lab,2); >> >> Error in ft_datatype_sens (line 328) >> [chanpos, chanori, lab] = channelposition(sens); >> >> Error in ft_sensorrealign (line 212) >> elec_original = ft_datatype_sens(elec_original); % ensure up-to-date >> sensor description (Oct 2011) >> >> >> When I use ft_electroderealign I get the following errors: >> >> Error using ft_fetch_sens (line 192) >> no electrodes or gradiometers specified. >> >> Error in ft_electroderealign (line 195) >> elec_original = ft_fetch_sens(cfg); >> >> >> Hope you can help me figure out why I'm getting these errors.
>> Thanks, >> Ainsley >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -- *Ainsley Temudo* Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- A non-text attachment was scrubbed... Name: aligned.fig Type: application/octet-stream Size: 451127 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unaligned.fig Type: application/octet-stream Size: 449086 bytes Desc: not available URL: From nick.peatfield at gmail.com Wed Mar 8 17:48:27 2017 From: nick.peatfield at gmail.com (Nicholas A.
Peatfield) Date: Wed, 8 Mar 2017 08:48:27 -0800 Subject: [FieldTrip] BTI freesurfer surface Message-ID: Hi Fieldtrippers, I want to reconstruct cortical sources using a FreeSurfer surface: rather than an equidistant grid, I will use the points from the surface. To do so I use ft_read_headshape to read the .surf file and use it as the points for the leadfield. However, the MEG data and headmodel are in 'bti' coordinates, thus the grid points are not aligned to the headmodel and sensor points. I read the minimum norm estimate tutorial from the FieldTrip webpage for transforming spm coordinates to bti, but in my case I am using a surface file in which there are only the points and triangles, and the tutorial doesn't apply. How can I convert the surface points to bti? This is HCP data and I thought I would find some help on this somewhere but couldn't. Cheers, Nick From scho at med.ovgu.de Wed Mar 8 17:52:46 2017 From: scho at med.ovgu.de (Michael Scholz) Date: Wed, 8 Mar 2017 17:52:46 +0100 (CET) Subject: [FieldTrip] different gradiometer-units from fiff-file Message-ID: Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux System. I was going to use fieldtrip to create some simulation data to test Elekta software. Therefore I read data from a fiff-file acquired by the Elekta-MEG-system including 102 magnetometer-data and 2x102 gradiometer data. Reading fiff-files with ft_read_data creates output with magnetometer-data in Tesla (T) and gradiometer-data in T/m, just as in the fiff-file. Reading the same fiff-file by ft_read_sens creates a structure with header-info including T/cm-unit-info for the gradiometer-sensors.
This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on ft_read_sens-output and ft_read_data-output, the result is unusable, since the scaling of magnetometer-data and gradiometer-data won't match. How can I prevent ft_read_sens from reading the gradiometers in different units than given in the source fiff-file? best, Michael From jan.schoffelen at donders.ru.nl Wed Mar 8 17:55:22 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 16:55:22 +0000 Subject: [FieldTrip] BTI freesurfer surface In-Reply-To: References: Message-ID: Hi Nick, Sounds like you need a transformation matrix from freesurfer space to MEG headspace, true? Is there a c_ras.mat file in your freesurfer/mri directory? This may provide you with the missing link. Best, JM J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 17:48, Nicholas A. Peatfield > wrote: Hi Fieldtrippers, I want to reconstruct cortical sources using a FreeSurfer surface: rather than an equidistant grid, I will use the points from the surface. To do so I use ft_read_headshape to read the .surf file and use it as the points for the leadfield. However, the MEG data and headmodel are in 'bti' coordinates, thus the grid points are not aligned to the headmodel and sensor points. I read the minimum norm estimate tutorial from the FieldTrip webpage for transforming spm coordinates to bti, but in my case I am using a surface file in which there are only the points and triangles, and the tutorial doesn't apply. How can I convert the surface points to bti? This is HCP data and I thought I would find some help on this somewhere but couldn't.
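The c_ras.mat hint can be turned into a coordinate conversion along these lines. This is only a sketch under assumptions: the 1x3 offset `cras` and the 4x4 transform `transform_ras2bti` (e.g. obtained from ft_volumerealign on the same anatomical) are illustrative placeholders, not values from the thread, and the exact chain of transforms depends on how the anatomical was processed.

```matlab
% Read the FreeSurfer surface; vertices come out in FreeSurfer surface-RAS (mm)
sourcespace = ft_read_headshape('lh.white');

% Assumption: adding the c_ras offset maps surface-RAS to scanner RAS,
% and transform_ras2bti maps scanner RAS into the bti headspace
sourcespace.pos = sourcespace.pos + repmat(cras(:)', size(sourcespace.pos, 1), 1);
sourcespace.pos = ft_warp_apply(transform_ras2bti, sourcespace.pos);
sourcespace.coordsys = 'bti';   % now usable as source positions for the leadfield
```

Whatever the exact transforms, the result is worth checking visually, e.g. by plotting the warped surface together with the headmodel and sensors using ft_plot_mesh.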
Cheers, Nick _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From timeehan at gmail.com Wed Mar 8 18:04:27 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 8 Mar 2017 12:04:27 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Thanks for sharing! I'm just taking a look now. It looks like you're doing mostly automated rejection. Or are you also doing visual rejection along with the z-value thresholding? Thanks again, Tim On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen wrote: > Here's a rough sketch of my approach, with one custom function attached. > If you or others find it useful, maybe we can think about ways to > incorporate it into the FieldTrip code. I've been working mostly with > scripts, but you've inspired me to work on functionizing the rest of it so > it's more shareable. > > So, assuming raw multichannel data has been loaded into FieldTrip > structure 'data' with unique trial identifiers in data.trialinfo...
>
> for ch = 1:numel(data.label)
>   %% pull out one channel at a time
>   cfg = [];
>   cfg.channel = data.label{ch};
>   datch{ch} = ft_selectdata(cfg, data);
>
>   %% identify large z-value artifacts and/or whatever else you might want
>   cfg = [];
>   cfg.artfctdef.zvalue.channel = 'all';
>   cfg.artfctdef.zvalue.cutoff = 15;
>   cfg.artfctdef.zvalue.trlpadding = 0;
>   cfg.artfctdef.zvalue.fltpadding = 0;
>   cfg.artfctdef.zvalue.artpadding = 0.1;
>   cfg.artfctdef.zvalue.rectify = 'yes';
>   [~, artifact.zvalue] = ft_artifact_zvalue(cfg, datch{ch});
>
>   %% replace artifacts with NaNs
>   cfg = [];
>   cfg.artfctdef.zvalue.artifact = artifact.zvalue;
>   cfg.artfctdef.reject = 'nan';
>   datch{ch} = ft_rejectartifact(cfg, datch{ch});
> end
>
> %% re-merge channels
> data = ft_appenddata([], datch);
>
> %% mark uniform NaNs as artifacts when they occur across all channels
> % and replace non-uniform NaNs (on some but not all channels) with zeroes, saving times
> [artifact, data, times] = artifact_nan2zero_TEM(data); % custom function, see attached
>
> %% reject artifacts by breaking into sub-trials
> cfg = [];
> cfg.artfctdef.nan2zero.artifact = artifact;
> cfg.artfctdef.reject = 'partial';
> data = ft_rejectartifact(cfg, data);
>
> %% identify real trials
> trlinfo = unique(data.trialinfo, 'rows', 'stable');
>
> for tr = 1:size(trlinfo, 1)
>   %% calculate trial spectrogram
>   cfg = [];
>   cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows');
>   cfg.keeptrials = 'no'; % refers to sub-trials
>   cfg.method = 'mtmconvol';
>   cfg.output = 'powandcsd';
>   cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz
>   cfg.tapsmofrq = cfg.foi/10;     % smooth by 10%
>   cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W
>   cfg.toi = '50%';
>   cfg.pad = 'nextpow2';
>   freq = ft_freqanalysis(cfg, data);
>
>   %% replace powspctrm & crsspctrm values with NaNs
>   % where t_ftimwin (or wavlen for wavelets) overlaps with artifact
>   for ch = 1:numel(freq.label)
>     badt = [times{tr,ch}];
>     if ~isempty(badt) && any(...
>         badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ...
>         badt < (max(freq.time) + max(freq.cfg.t_ftimwin)))
>       ci = find(any(strcmp(freq.label{ch}, freq.labelcmb)));
>       for t = 1:numel(freq.time)
>         for f = 1:numel(freq.freq)
>           mint = freq.time(t) - freq.cfg.t_ftimwin(f);
>           maxt = freq.time(t) + freq.cfg.t_ftimwin(f);
>           if any(badt > mint & badt < maxt)
>             freq.powspctrm(ch,f,t) = NaN;
>             freq.crsspctrm(ci,f,t) = NaN;
>           end
>         end
>       end
>     end
>   end
>
>   %% save corrected output
>   save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3');
> end
>
> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >> Hi Teresa, >> >> Thanks for the reply. I'll take a look at your example if you don't mind >> sharing. Thanks! >> >> Tim >> >> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >> wrote: >> >>> No, not really. The only way I've found to do that is to loop through >>> my artifact rejection process on each trial individually, then merge them >>> back together with NaNs filling in where there are artifacts, but then that >>> breaks every form of analysis I want to do. :-P >>> >>> I wonder if it would work to fill in the artifacts with 0s instead of >>> NaNs....I might play with that. Let me know if you're interested in some >>> example code. >>> >>> ~Teresa >>> >>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>> >>>> Hello All, >>>> >>>> When performing visual artifact rejection, I want to be able to mark >>>> artifacts that occur during some specific trials and only on some specific >>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>> handle marking artifacts restricted to some channel/trial combination?
>>>> >>>> Thanks, >>>> Tim >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From murphyk5 at aston.ac.uk Wed Mar 8 18:49:55 2017 From: murphyk5 at aston.ac.uk (Murphy, Kelly (Research Student)) Date: Wed, 8 Mar 2017 17:49:55 +0000 Subject: [FieldTrip] different gradiometer-units from fiff-file In-Reply-To: References: Message-ID: Hi Michael, You could try using ft_read_sens as per usual, then convert the units to the desired ones afterwards.
For example: "grad = ft_read_sens(MEG_data); % get fiducial coordinates under grad.chan sens = ft_convert_units(grad, 'mm');" Kelly ________________________________________ From: fieldtrip-bounces at science.ru.nl [fieldtrip-bounces at science.ru.nl] on behalf of Michael Scholz [scho at med.ovgu.de] Sent: 08 March 2017 16:52 To: fieldtrip at science.ru.nl Subject: [FieldTrip] different gradiometer-units from fiff-file Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux System. I was going to use fieldtrip to create some simulation data to test Elekta software. Therefore I read data from a fiff-file acquired by the Elekta-MEG-system including 102 magnetometer-data and 2x102 gradiometer data. Reading fiff-files with ft_read_data creates output with magnetometer-data in Tesla (T) and gradiometer-data in T/m, just as in the fiff-file. Reading the same fiff-file by ft_read_sens creates a structure with header-info including T/cm-unit-info for the gradiometer-sensors. This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on ft_read_sens-output and ft_read_data-output, the result is unusable, since the scaling of magnetometer-data and gradiometer-data won't match. How can I prevent ft_read_sens from reading the gradiometers in different units than given in the source fiff-file?
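One way to make that suggestion concrete, while avoiding the T/cm-vs-T/m trap, is to convert the sensor description to SI units and then inspect the per-channel units explicitly. A sketch, assuming a FieldTrip version whose sensor structure carries the unit and chanunit fields of ft_datatype_sens (the filename is a placeholder):

```matlab
grad = ft_read_sens('mydata.fif');   % geometry may come back in cm, gradiometers in T/cm
grad = ft_convert_units(grad, 'm');  % rescale positions and the tra matrix to metres

% After conversion, the channel units should be compatible with the raw data
% from ft_read_data (T for magnetometers, T/m for planar gradiometers);
% printing them makes any residual mismatch explicit before combining
disp(grad.unit)
disp(unique(grad.chanunit))
```

Whether chanunit is rescaled automatically has varied across FieldTrip versions, so checking it explicitly, as in the last line, is the point of this sketch.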
best, Michael _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From braingirl at gmail.com Wed Mar 8 21:35:12 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 8 Mar 2017 15:35:12 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: I actually do a mix of approaches: a quick look using ft_databrowser with all channels for irregular artifacts like disconnection events, then a channel-by-channel search for large z-value artifacts and clipping artifacts, then I remove all those and do one last ft_databrowser review of all channels together. I'll attach the function I was working on, but it's more complex than you originally asked for and not fully tested yet, so use at your own risk. Do you use ft_databrowser or ft_rejectvisual for visual artifact rejection? ~Teresa -- Teresa E.
Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part --------------
function [data] = AutoArtReject_TEM(cfg,data)
% AutoArtReject_TEM performs automated artifact rejection, processing each
% channel independently, removing clipping & large zvalue artifacts based
% on automated thresholds (best to run on all data from a given subject
% in one data structure, so the same threshold is applied consistently
% across conditions), and returning the data structure re-merged across
% channels, with NaNs in place of artifacts.
%
% Input cfg structure should contain:
%   interactsubj = true or false, whether to select visual artifacts per
%     subject (i.e., per call to this function) & review all channels after
%     automated detection
%   interactch = true or false, whether to preview detected artifacts
%     & select visual artifacts per channel
%   artfctdef.clip = struct as defined in ft_artifact_clip, but absdiff is
%     applied before the data is passed to that function, so it's actually
%     comparing these thresholds to the 2nd derivative of the original data
%   artfctdef.zvalue = struct as defined in ft_artifact_zvalue
%   artfctdef.minaccepttim = scalar as defined in ft_rejectartifact
%
% To facilitate data-handling and distributed computing you can use
%   cfg.inputfile = ...
%   cfg.outputfile = ...
% If you specify one of these (or both) the input data will be read from a
% *.mat file on disk and/or the output data will be written to a *.mat
% file. These *.mat files should contain only a single variable 'data',
% corresponding with ft_datatype_raw.
%
% written 3/2/17 by Teresa E. Madsen

%% load data, if needed
if nargin < 2 && isfield(cfg,'inputfile')
  load(cfg.inputfile);
end

%% preview data & mark unusual cross-channel artifacts
if cfg.interactsubj
  cfgtmp = [];
  cfgtmp = ft_databrowser(cfgtmp,data);
  visual = cfgtmp.artfctdef.visual.artifact;  % not a field of artifact
  % because this will be reused for all channels, while the rest of
  % artifact is cleared when starting each new channel
else
  visual = [];
end

%% perform artifact detection on each channel separately
excludech = false(size(data.label));
datch = cell(size(data.label));
for ch = 1:numel(data.label)
  artifact = [];

  %% divide data into channels
  cfgtmp = [];
  cfgtmp.channel = data.label{ch};
  datch{ch} = ft_selectdata(cfgtmp,data);

  %% identify large zvalue artifacts
  cfgtmp = [];
  cfgtmp.artfctdef.zvalue = cfg.artfctdef.zvalue;
  if ~isfield(cfgtmp.artfctdef.zvalue,'interactive')
    if cfg.interactch
      cfgtmp.artfctdef.zvalue.interactive = 'yes';
    else
      cfgtmp.artfctdef.zvalue.interactive = 'no';
    end
  end
  [~, artifact.zvalue] = ft_artifact_zvalue(cfgtmp,datch{ch});

  %% take 1st derivative of signal
  cfgtmp = [];
  cfgtmp.absdiff = 'yes';
  datd1 = ft_preprocessing(cfgtmp,datch{ch});

  %% define clipping artifacts
  % applies absdiff again, so it's actually working on 2nd derivative data
  cfgtmp = [];
  cfgtmp.artfctdef.clip = cfg.artfctdef.clip;
  [~, artifact.clip] = ft_artifact_clip(cfgtmp,datd1);

  %% review artifacts if needed
  cfgtmp = [];
  cfgtmp.artfctdef.clip.artifact = artifact.clip;
  cfgtmp.artfctdef.zvalue.artifact = artifact.zvalue;
  cfgtmp.artfctdef.visual.artifact = visual;
  if cfg.interactch
    % any new visual artifacts will be automatically added to cfgtmp
    cfgtmp = ft_databrowser(cfgtmp,datch{ch});
    keyboard  % dbcont when satisfied
    % excludech(ch) = true;  % exclude this channel if desired
  end
  clearvars datd1

  %% replace artifactual data with NaNs
  cfgtmp.artfctdef.reject = 'nan';
  cfgtmp.artfctdef.minaccepttim = cfg.artfctdef.minaccepttim;
  datch{ch} = ft_rejectartifact(cfgtmp,datch{ch});

  % if any trials were rejected completely, exclude this channel, or it
  % won't merge properly
  if numel(datch{ch}.trial) ~= numel(data.trial)
    excludech(ch) = true;
  end
end  % for ch = 1:numel(data.label)

%% remerge each channel file into one cleaned data file
cfgtmp = [];
if isfield(cfg,'outputfile')
  cfgtmp.outputfile = cfg.outputfile;
end
data = ft_appenddata(cfgtmp,datch(~excludech));

%% visualize result
if cfg.interactsubj
  cfgtmp = [];
  cfgtmp = ft_databrowser(cfgtmp,data);  %#ok just for debugging
  keyboard  % dbcont when satisfied
end
end
From martabortoletto at yahoo.it Thu Mar 9 11:22:30 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Thu, 9 Mar 2017 10:22:30 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy References: <1430592770.3251961.1489054950545.ref@mail.yahoo.com> Message-ID: <1430592770.3251961.1489054950545@mail.yahoo.com> Dear all, Please find below an announcement for a post-doc position to work on a project of TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to possibly interested candidates. Cheers, Marta Bortoletto and Anna Fertonani ------------------------------------------------------------- Job description The Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, led by Prof. Carlo Miniussi, is seeking to recruit a post-doctoral research fellow to work on a project of TMS-EEG coregistration. This is part of projects funded by the BIAL foundation and the FISM foundation, in collaboration with the University of Genova, ASST Spedali Civili di Brescia and the Center for Mind/Brain Sciences CIMeC of the University of Trento.
The research focus of these projects is the effects of non-invasive brain stimulation (TMS and tES) on cortical networks of the human brain during motor and perceptual tasks, and their contributions to learning. The post is available from May 2017 and is funded for one year in the first instance, with the possibility of extension for a further 2 years.

Key Requirements
· We are seeking aspiring individuals with substantial experience in TMS-EEG or EEG research and strong computational abilities.
· The applicants should also be interested in studying cortical networks and their disorders.
· Successful candidates should have a background and PhD degree in a neuroscience-related field, broadly specified, and skills for working with complex empirical data and human subjects.
· Applicants should have experience with conducting experimental research, hands-on knowledge of EEG methods, and documented skills in at least one programming language (preferably Matlab).
· Good command of the English language (written and oral), as well as skills for teamwork in a multidisciplinary research group, are required.
· Experience with advanced EEG signal processing, EEG source localization, connectivity analyses and a strong publication record are an advantage.

What we offer
· Gross salary: 25.000-28.000 euro per annum
· Excellent working environment
· Opportunity to work in a motivated, skilled, inspired and supportive group
· A chance to work in Italy – one of the most beautiful countries in the world

To apply, please send the following items, as ONE PDF FILE, via email to Dr. Anna Fertonani (anna.fertonani at cognitiveneuroscience.it), preferably by March 31st 2017. Later applications will be considered until the position is filled.
· A letter of intent including a brief description of your past and current research interests
· Curriculum vitae including the list of your publications and degrees
· Names and contact information of 2 referees.
For further information please contact Anna Fertonani, IRCCS Centro San Giovanni di Dio Fatebenefratelli, anna.fertonani at cognitiveneuroscience.it About the employer The IRCCS San Giovanni di Dio Fatebenefratelli has been operating for 120 years and has been appointed and funded as a national centre of excellence in research and care by the Italian Ministry of Health since 1996. More than 4500 patients with Alzheimer's Dementia or associated disorders and about 1700 patients with psychiatric diseases are treated each year. The research division, besides the Cognitive Neuroscience Section, includes the laboratories of Genetics, Neuropsychopharmacology, Neurobiology, Proteomics, Neuroimaging, Ethics and Epidemiology, and employs about fifty professional researchers. The Cognitive Neuroscience Unit is equipped with several state-of-the-art devices for the application of brain stimulation techniques (transcranial magnetic stimulation: TMS, rTMS; and transcranial electrical stimulation: tDCS, tACS and tRNS) and for the recording and analysis of electrophysiological signals (EEG, EMG), as well as neuropsychological testing. The simultaneous co-registration of electroencephalography and TMS application is also available, a field in which we have been pioneers in national research. Marta Bortoletto, PhD Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli Via Pilastroni 4, 25125 Brescia, Italy Phone number: (+39) 0303501594 E-mail: marta.bortoletto at cognitiveneuroscience.it web: http://www.cognitiveneuroscience.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Thu Mar 9 16:37:02 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 9 Mar 2017 10:37:02 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: So far I've just been using ft_databrowser in the same way you mention to look for artifacts that affect most or all channels.
But I think I will need to also visually check each channel for bad trials. Since I'm working with iEEG, these will be mainly those picking up any epileptic discharges. Of course the channels around the seizure foci I will throw out entirely. I'm a bit daunted by how much work it will be to do a channel x trial visual rejection since I have ~1700 trials and ~ 100 channels for our one subject so far. In fact just typing those numbers makes me think it may not be feasible. Do you find the automated rejection works satisfactorily for you? On Wed, Mar 8, 2017 at 3:35 PM, Teresa Madsen wrote: > I actually do a mix of approaches: a quick look using ft_databrowser with > all channels for irregular artifacts like disconnection events, then a > channel-by-channel search for large z-value artifacts and clipping > artifacts, then I remove all those and do one last ft_databrowser review of > all channels together. I'll attach the function I was working on, but it's > more complex than you originally asked for and not fully tested yet, so use > at your own risk. > > Do you use ft_databrowser or ft_rejectvisual for visual artifact rejection? > > ~Teresa > > > On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: > >> Thanks for sharing! I'm just taking a look now. It looks like you're >> doing mostly automated rejection. Or are you also doing visual rejection >> along with the z-value thresholding? >> >> Thanks again, >> Tim >> >> On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen >> wrote: >> >>> Here's a rough sketch of my approach, with one custom function >>> attached. If you or others find it useful, maybe we can think about ways >>> to incorporate it into the FieldTrip code. I've been working mostly with >>> scripts, but you've inspired me to work on functionizing the rest of it so >>> it's more shareable. >>> >>> So, assuming raw multichannel data has been loaded into FieldTrip >>> structure 'data' with unique trial identifiers in data.trialinfo... 
>>> >>> for ch = 1:numel(data.label) >>> %% pull out one channel at a time >>> cfg = []; >>> cfg.channel = data.label{ch}; >>> >>> datch{ch} = ft_selectdata(cfg, data); >>> >>> %% identify large z-value artifacts and/or whatever else you might >>> want >>> >>> cfg = []; >>> cfg.artfctdef.zvalue.channel = 'all'; >>> cfg.artfctdef.zvalue.cutoff = 15; >>> cfg.artfctdef.zvalue.trlpadding = 0; >>> cfg.artfctdef.zvalue.fltpadding = 0; >>> cfg.artfctdef.zvalue.artpadding = 0.1; >>> cfg.artfctdef.zvalue.rectify = 'yes'; >>> >>> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >>> >>> %% replace artifacts with NaNs >>> cfg = []; >>> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >>> cfg.artfctdef.reject = 'nan'; >>> >>> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >>> end >>> >>> %% re-merge channels >>> data = ft_appenddata([],datch); >>> >>> %% mark uniform NaNs as artifacts when they occur across all channels >>> % and replace non-uniform NaNs (on some but not all channels) with >>> zeroes, saving times >>> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >>> see attached >>> >>> %% reject artifacts by breaking into sub-trials >>> cfg = []; >>> cfg.artfctdef.nan2zero.artifact = artifact; >>> cfg.artfctdef.reject = 'partial'; >>> >>> data = ft_rejectartifact(cfg,data); >>> >>> %% identify real trials >>> trlinfo = unique(data.trialinfo,'rows','stable'); >>> >>> for tr = 1:size(trlinfo,1) >>> >>> %% calculate trial spectrogram >>> >>> cfg = []; >>> >>> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >>> cfg.keeptrials = 'no'; % refers to sub-trials >>> >>> cfg.method = 'mtmconvol'; >>> >>> cfg.output = 'powandcsd'; >>> >>> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >>> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >>> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >>> cfg.toi = '50%'; >>> cfg.pad = 'nextpow2'; >>> >>> >>> freq = ft_freqanalysis(cfg,data); >>> >>> %% replace powspctrm & 
crsspctrum values with NaNs >>> % where t_ftimwin (or wavlen for wavelets) overlaps with artifact >>> for ch = 1:numel(freq.label) >>> badt = [times{tr,ch}]; >>> if ~isempty(badt) && any(... >>> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >>> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >>> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >>> for t = 1:numel(freq.time) >>> for f = 1:numel(freq.freq) >>> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >>> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >>> if any(badt > mint & badt < maxt) >>> freq.powspctrm(ch,f,t) = NaN; >>> freq.crsspctrm(ci,f,t) = NaN; >>> end >>> end >>> end >>> end >>> end >>> >>> %% save corrected output >>> >>> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >>> end >>> >>> >>> >>> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >>> >>>> Hi Teresa, >>>> >>>> Thanks for the reply. I'll take a look at your example if you don't >>>> mind sharing. Thanks! >>>> >>>> Tim >>>> >>>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >>>> wrote: >>>> >>>>> No, not really. The only way I've found to do that is to loop through >>>>> my artifact rejection process on each trial individually, then merge them >>>>> back together with NaNs filling in where there are artifacts, but then that >>>>> breaks every form of analysis I want to do. :-P >>>>> >>>>> I wonder if it would work to fill in the artifacts with 0s instead of >>>>> NaNs....I might play with that. Let me know if you're interested in some >>>>> example code. >>>>> >>>>> ~Teresa >>>>> >>>>> >>>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>>>> >>>>>> Hello All, >>>>>> >>>>>> When performing visual artifact rejection, I want to be able to mark >>>>>> artifacts that occur during some specific trials and only on some specific >>>>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>>>> across all trials) or bad trials (i.e. across all channels). 
Does FieldTrip >>>>>> handle marking artifacts restricted to some channel/trial combination? >>>>>> >>>>>> Thanks, >>>>>> Tim >>>>>> >>>>>> _______________________________________________ >>>>>> fieldtrip mailing list >>>>>> fieldtrip at donders.ru.nl >>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Teresa E. Madsen, PhD >>>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>>> analysis >>>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>>> Yerkes National Primate Research Center >>>>> Emory University >>>>> Rainnie Lab, NSB 5233 >>>>> 954 Gatewood Rd. NE >>>>> Atlanta, GA 30329 >>>>> (770) 296-9119 >>>>> braingirl at gmail.com >>>>> https://www.linkedin.com/in/temadsen >>>>> >>>>> _______________________________________________ >>>>> fieldtrip mailing list >>>>> fieldtrip at donders.ru.nl >>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>> >>>> >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>> >>> >>> >>> >>> -- >>> Teresa E. Madsen, PhD >>> Research Technical Specialist: *in vivo *electrophysiology & data >>> analysis >>> Division of Behavioral Neuroscience and Psychiatric Disorders >>> Yerkes National Primate Research Center >>> Emory University >>> Rainnie Lab, NSB 5233 >>> 954 Gatewood Rd. NE >>> Atlanta, GA 30329 >>> (770) 296-9119 >>> braingirl at gmail.com >>> https://www.linkedin.com/in/temadsen >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > Teresa E. 
Madsen, PhD > Research Technical Specialist: *in vivo *electrophysiology & data > analysis > Division of Behavioral Neuroscience and Psychiatric Disorders > Yerkes National Primate Research Center > Emory University > Rainnie Lab, NSB 5233 > 954 Gatewood Rd. NE > Atlanta, GA 30329 > (770) 296-9119 > braingirl at gmail.com > https://www.linkedin.com/in/temadsen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Thu Mar 9 17:24:04 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Thu, 9 Mar 2017 11:24:04 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: I have spent a lot of time tweaking it for my own purposes (16-32 channels of rat LFP data with lots of motion artifact), so yes, it works reasonably well for me. I greatly prefer to have some sort of objective way of defining artifacts, and only supplement with visual marking when something irregular slips by. It's both faster and makes me more confident that I'm not inadvertently changing my standards across rats/channels/trials. Since epileptic activity tends to have (reasonably?) consistent spatio-temporal patterns, have you considered trying ICA artifact rejection, as demonstrated for EOG and ECG artifacts? That may allow you to retain more "real" neural signal, rather than invalidating whole chunks of time. Then again, maybe it makes more sense to eliminate the whole signal when epileptic activity occurs, since that region of the brain is obviously not functioning normally at that moment. That's a judgement call for you to make in consultation with experienced people in your field. 
~Teresa On Thu, Mar 9, 2017 at 10:37 AM, Tim Meehan wrote: > So far I've just been using ft_databrowser in the same way you mention to > look for artifacts that affect most or all channels. But I think I will > need to also visually check each channel for bad trials. Since I'm working > with iEEG, these will be mainly those picking up any epileptic discharges. > Of course the channels around the seizure foci I will throw out entirely. > > I'm a bit daunted by how much work it will be to do a channel x trial > visual rejection since I have ~1700 trials and ~ 100 channels for our one > subject so far. In fact just typing those numbers makes me think it may not > be feasible. Do you find the automated rejection works satisfactorily for > you? > > On Wed, Mar 8, 2017 at 3:35 PM, Teresa Madsen wrote: > >> I actually do a mix of approaches: a quick look using ft_databrowser >> with all channels for irregular artifacts like disconnection events, then a >> channel-by-channel search for large z-value artifacts and clipping >> artifacts, then I remove all those and do one last ft_databrowser review of >> all channels together. I'll attach the function I was working on, but it's >> more complex than you originally asked for and not fully tested yet, so use >> at your own risk. >> >> Do you use ft_databrowser or ft_rejectvisual for visual artifact >> rejection? >> >> ~Teresa >> >> >> On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: >> >>> Thanks for sharing! I'm just taking a look now. It looks like you're >>> doing mostly automated rejection. Or are you also doing visual rejection >>> along with the z-value thresholding? >>> >>> Thanks again, >>> Tim >>> >>> On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen >>> wrote: >>> >>>> Here's a rough sketch of my approach, with one custom function >>>> attached. If you or others find it useful, maybe we can think about ways >>>> to incorporate it into the FieldTrip code. 
I've been working mostly with >>>> scripts, but you've inspired me to work on functionizing the rest of it so >>>> it's more shareable. >>>> >>>> So, assuming raw multichannel data has been loaded into FieldTrip >>>> structure 'data' with unique trial identifiers in data.trialinfo... >>>> >>>> for ch = 1:numel(data.label) >>>> %% pull out one channel at a time >>>> cfg = []; >>>> cfg.channel = data.label{ch}; >>>> >>>> datch{ch} = ft_selectdata(cfg, data); >>>> >>>> %% identify large z-value artifacts and/or whatever else you might >>>> want >>>> >>>> cfg = []; >>>> cfg.artfctdef.zvalue.channel = 'all'; >>>> cfg.artfctdef.zvalue.cutoff = 15; >>>> cfg.artfctdef.zvalue.trlpadding = 0; >>>> cfg.artfctdef.zvalue.fltpadding = 0; >>>> cfg.artfctdef.zvalue.artpadding = 0.1; >>>> cfg.artfctdef.zvalue.rectify = 'yes'; >>>> >>>> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >>>> >>>> %% replace artifacts with NaNs >>>> cfg = []; >>>> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >>>> cfg.artfctdef.reject = 'nan'; >>>> >>>> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >>>> end >>>> >>>> %% re-merge channels >>>> data = ft_appenddata([],datch); >>>> >>>> %% mark uniform NaNs as artifacts when they occur across all channels >>>> % and replace non-uniform NaNs (on some but not all channels) with >>>> zeroes, saving times >>>> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >>>> see attached >>>> >>>> %% reject artifacts by breaking into sub-trials >>>> cfg = []; >>>> cfg.artfctdef.nan2zero.artifact = artifact; >>>> cfg.artfctdef.reject = 'partial'; >>>> >>>> data = ft_rejectartifact(cfg,data); >>>> >>>> %% identify real trials >>>> trlinfo = unique(data.trialinfo,'rows','stable'); >>>> >>>> for tr = 1:size(trlinfo,1) >>>> >>>> %% calculate trial spectrogram >>>> >>>> cfg = []; >>>> >>>> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >>>> cfg.keeptrials = 'no'; % refers to sub-trials >>>> >>>> cfg.method = 'mtmconvol'; >>>> 
>>>> cfg.output = 'powandcsd'; >>>> >>>> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >>>> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >>>> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >>>> cfg.toi = '50%'; >>>> cfg.pad = 'nextpow2'; >>>> >>>> >>>> freq = ft_freqanalysis(cfg,data); >>>> >>>> %% replace powspctrm & crsspctrum values with NaNs >>>> % where t_ftimwin (or wavlen for wavelets) overlaps with artifact >>>> for ch = 1:numel(freq.label) >>>> badt = [times{tr,ch}]; >>>> if ~isempty(badt) && any(... >>>> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >>>> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >>>> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >>>> for t = 1:numel(freq.time) >>>> for f = 1:numel(freq.freq) >>>> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >>>> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >>>> if any(badt > mint & badt < maxt) >>>> freq.powspctrm(ch,f,t) = NaN; >>>> freq.crsspctrm(ci,f,t) = NaN; >>>> end >>>> end >>>> end >>>> end >>>> end >>>> >>>> %% save corrected output >>>> >>>> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >>>> end >>>> >>>> >>>> >>>> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >>>> >>>>> Hi Teresa, >>>>> >>>>> Thanks for the reply. I'll take a look at your example if you don't >>>>> mind sharing. Thanks! >>>>> >>>>> Tim >>>>> >>>>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >>>>> wrote: >>>>> >>>>>> No, not really. The only way I've found to do that is to loop >>>>>> through my artifact rejection process on each trial individually, then >>>>>> merge them back together with NaNs filling in where there are artifacts, >>>>>> but then that breaks every form of analysis I want to do. :-P >>>>>> >>>>>> I wonder if it would work to fill in the artifacts with 0s instead of >>>>>> NaNs....I might play with that. Let me know if you're interested in some >>>>>> example code. 
>>>>>> >>>>>> ~Teresa >>>>>> >>>>>> >>>>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan >>>>>> wrote: >>>>>> >>>>>>> Hello All, >>>>>>> >>>>>>> When performing visual artifact rejection, I want to be able to mark >>>>>>> artifacts that occur during some specific trials and only on some specific >>>>>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>>>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>>>>> handle marking artifacts restricted to some channel/trial combination? >>>>>>> >>>>>>> Thanks, >>>>>>> Tim >>>>>>> >>>>>>> _______________________________________________ >>>>>>> fieldtrip mailing list >>>>>>> fieldtrip at donders.ru.nl >>>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Teresa E. Madsen, PhD >>>>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>>>> analysis >>>>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>>>> Yerkes National Primate Research Center >>>>>> Emory University >>>>>> Rainnie Lab, NSB 5233 >>>>>> 954 Gatewood Rd. NE >>>>>> Atlanta, GA 30329 >>>>>> (770) 296-9119 >>>>>> braingirl at gmail.com >>>>>> https://www.linkedin.com/in/temadsen >>>>>> >>>>>> _______________________________________________ >>>>>> fieldtrip mailing list >>>>>> fieldtrip at donders.ru.nl >>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> fieldtrip mailing list >>>>> fieldtrip at donders.ru.nl >>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>> >>>> >>>> >>>> >>>> -- >>>> Teresa E. Madsen, PhD >>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>> analysis >>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>> Yerkes National Primate Research Center >>>> Emory University >>>> Rainnie Lab, NSB 5233 >>>> 954 Gatewood Rd. 
NE >>>> Atlanta, GA 30329 >>>> (770) 296-9119 >>>> braingirl at gmail.com >>>> https://www.linkedin.com/in/temadsen >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>> >>> >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> >> -- >> Teresa E. Madsen, PhD >> Research Technical Specialist: *in vivo *electrophysiology & data >> analysis >> Division of Behavioral Neuroscience and Psychiatric Disorders >> Yerkes National Primate Research Center >> Emory University >> Rainnie Lab, NSB 5233 >> 954 Gatewood Rd. NE >> Atlanta, GA 30329 >> (770) 296-9119 >> braingirl at gmail.com >> https://www.linkedin.com/in/temadsen >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tim.curran at colorado.edu Thu Mar 9 22:39:41 2017 From: tim.curran at colorado.edu (Tim Curran) Date: Thu, 9 Mar 2017 21:39:41 +0000 Subject: [FieldTrip] Postdoc at Northwestern In-Reply-To: References: Message-ID: <4CA846C9-15B4-42D5-9305-880F4182DA16@colorado.edu> POSTDOCTORAL POSITION AVAILABLE IN PRODROME/EARLY PSYCHOSIS: NORTHWESTERN UNIVERSITY’S ADAPT PROGRAM The Northwestern University Adolescent Development and Preventive Treatment (ADAPT) program is seeking applications for a full-time postdoctoral fellow. We are looking for someone with background in cognitive/affective neuroscience or clinical psychology, with interests and experience in electrophysiological assessment and/or neuroimaging. Currently we are running a number of NIMH/Foundation/University funded multimodal protocols (structural/diffusion-tensor/functional imaging, ERP, brain stimulation, eye tracking, instrumental motor assessment) with prodromal syndrome and early psychosis populations focusing on: brain, immune, and endocrine changes in response to aerobic exercise; neurobiology of motor dysfunction; timing of affective processing dysfunction (in new collaboration with Tim Curran). Please see our website for more details: http://www.adaptprogram.com. The ideal candidate will be a person who is interested in applying a cognitive/affective neuroscience background (e.g., Cognition /Cog-Neuro or related Ph.D.) to investigate early psychosis and the psychosis prodrome. Clinical Psychology Ph.D.’s with related interests and training experiences are also highly encouraged to apply. Preference will be given to candidates with a proven track record of good productivity, as well as strong computer programming skills (e.g., MATLAB/Python). We also strongly encourage diversity, and will give preference to applicants from populations underrepresented in STEM. 
The successful applicant will join Vijay Mittal and an active research team and will be responsible for designing/running experiments, analyzing and processing data, and disseminating findings. In addition, the applicant will work on collaborative studies with Vijay Mittal and Robin Nusslock, examining shared and distinct pathophysiology underlying reward-processing abnormalities in psychosis and affective disorders. There will also be ample opportunities to take courses (Northwestern has a number of in-depth advanced training opportunities, covering a range of methodological and quantitative methods), collaborate (benefit from a number of active ADAPT collaborations as well as the vibrant Northwestern research community), help to mentor/train graduate students, and develop and follow independent research questions. Significant attention will be placed on career development (e.g., regular conference attendance/participation, training in grant writing, mentorship, teaching, presentations/job-talks, etc.) – this is an ideal position for someone interested in preparing for a tenure-track position. For questions or to submit an application, please contact Vijay Mittal (vijay.mittal at northwestern.edu). Applicants should send a C.V., a brief letter describing interests and prior experience, and two publications (that best reflect contributions of the candidate). Salary is based on the NIMH post-doctoral scale, and funding is available for up to two years (appointments are for one year, but renewable for two years, based on progress/merit). There is a flexible start date (Spring, Summer or Fall 2017), and review of applications will begin March 1st. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jean-michel.badier at univ-amu.fr Fri Mar 10 16:07:06 2017 From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier) Date: Fri, 10 Mar 2017 16:07:06 +0100 Subject: [FieldTrip] Doc and Post-doc positions in Marseille France Message-ID: <0f5e65c1-2800-8703-2eb3-c23774626895@univ-amu.fr> Dear all, Please find below the call for doc and post-doc positions in Marseille. Note that both offer access to the fMRI and MEG platforms. Best regards *Call for Applications* */3 Post-docs; 3 PhD Grants/* *Three 2-year postdoc positions* *At Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer: *Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain:
1. A full description of the research project (~5 pages):
   a. Title
   b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV with complete list of publications
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee
* Duration: 2 years (1 year, extendable for another year)
* Monthly salary: ~2000 € net (depending on experience)
* Deadline: June 11, 2017
Applications should be sent to: nadera.bureau at blri.fr For supplementary information: Johannes.Ziegler at univ-amu.fr
*Three PhD grants* *at Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. A priority is given to interdisciplinary co-directions and to projects that involve two different laboratories of the institute.
The application should contain:
1. A full description of the PhD project (~5 pages):
   a. Title
   b. Name of the PhD supervisor(s)
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV and master degree grades (if available)
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee
* Deadline for submission: June 11, 2017
* Pre-selection of candidates for audition: June 28, 2017
* Auditions: July 3-7, 2017 (international candidates might be interviewed via skype)
* Start: September 1, 2017
* Monthly salary: 1 685 € (1 368 € net) for a period of 3 years
Applications should be sent to: nadera.bureau at blri.fr For supplementary information contact: Johannes.Ziegler at univ-amu.fr ------------------------------------------------------------------------ Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21 Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44 5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr 13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/ France | http://www.blri.fr/ ------------------------------------------------------------------------ -- Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/ Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14 Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr /To respect the environment, please print this email only if necessary./ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: logo_amu.jpg Type: image/jpeg Size: 17847 bytes Desc: not available URL: From dmatthes at cbs.mpg.de Fri Mar 10 16:25:52 2017 From: dmatthes at cbs.mpg.de (Daniel Matthes) Date: Fri, 10 Mar 2017 16:25:52 +0100 Subject: [FieldTrip] Estimation of the phase locking value Message-ID: <0bcd8e53-28ba-68fa-ddcd-ab6f3e385f72@cbs.mpg.de> Hi, In our upcoming studies we want to investigate inter-brain couplings on source level. Therefore, I have at present some questions about the calculation of the phase locking value (PLV), mainly about it's implementation in field trip. Since I'm a very beginner both with analysing eeg data and with field trip, I possibly ask already answerd question, so you've my apologies at the beginnig. Based on the paper of Dumas et al. (08/2010) I initially tried to compute the PLV in field trip by using the hilbert transformation to determine the instantaneous phase (ft_preprocessing) and subsequent with ft_connectivityanalysis. I rapidly recognized that this is not possible, since only freq data is a valid input for ft_connectivityanalysis in connection with the parameter 'plv'. Thus, I realized the PLV calculation on my own with matlab from scratch and tried to get a similar results using the ft_connectivityanalysis function. I've solved this issue by now, but on this way several question came up. The first one is about the result of ft_connectivityanalysis in connection with the parameter 'plv'. The function returns the phase difference of the compared components and not the phase locking value , as defined in Lachaux et al. (1999). What's the reason for this implementation? Existing plans for closing this gap? The second question is related to my initial problem. Why is it not possible to use the instananeous phase as input data of ft_connectivityanalysis in connection with the parameter 'plv'. I think this would make the calculation less complex. 
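For reference, the PLV of Lachaux et al. (1999) is the modulus of the mean phase-difference vector. A minimal plain-MATLAB sketch of the from-scratch route described above (illustrative only; x1 and x2 are placeholder names for two band-pass filtered single-trial signals):

```matlab
% x1, x2: 1 x Nsamples band-pass filtered signals from two channels
phi1 = angle(hilbert(x1));               % instantaneous phase, channel 1
phi2 = angle(hilbert(x2));               % instantaneous phase, channel 2
plv  = abs(mean(exp(1i*(phi1 - phi2)))); % 1 = perfect phase locking, 0 = none
```

Note that Lachaux et al. average the phase difference across trials at each time point, whereas this sketch averages across time within one trial. The FieldTrip route to the same quantity is ft_freqanalysis with cfg.output = 'fourier', followed by ft_connectivityanalysis with cfg.method = 'plv'.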
Finally, I wonder why the configuration of cfg.channel and cfg.channelcmb has no effect in ft_connectivityanalysis in combination with the parameter 'plv'. It is only possible to focus on just two channels if these definitions are made during the preceding ft_freqanalysis. I would be thankful for some advice. All the best, Daniel From andrea.brovelli at univ-amu.fr Fri Mar 10 14:31:37 2017 From: andrea.brovelli at univ-amu.fr (Andrea Brovelli) Date: Fri, 10 Mar 2017 14:31:37 +0100 Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France) In-Reply-To: References: Message-ID: *Call for Applications* */3 Post-docs; 3 PhD Grants/* *Three 2-year postdoc positions* *At Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer: *Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain: 1. A full description of the research project (~5 pages): a. Title b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV with complete list of publications 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Duration: 2 years (1 year, extendable for another year) * Monthly salary: ~2000 € net (depending on experience) * Deadline: June 11, 2017 Applications should be sent to: nadera.bureau at blri.fr For supplementary information: Johannes.Ziegler at univ-amu.fr *Three PhD grants* *at Aix-Marseille/Avignon on /Language, Communication and the Brain/* The Center of Excellence on Brain and Language (BLRI, www.blri.fr) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities. The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and comprises several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. Priority is given to interdisciplinary co-supervision and to projects that involve two different laboratories of the institute.
The application should contain: 1. A full description of the PhD project (~5 pages): a. Title b. Name of the PhD supervisor(s) c. Short summary d. Scientific context/state of the art e. Objectives and hypotheses f. Methodology g. Expected results h. Brief statement about the relevance of the project for the BLRI/ILCB i. Proposed timeline 2. CV and master degree grades (if available) 3. Letter of motivation 4. One letter of recommendation or contact information of a potential referee * Deadline for submission: June 11, 2017 * Pre-selection of candidates for audition: June 28, 2017 * Auditions: July 3-7, 2017 (international candidates may be interviewed via Skype) * Start: September 1, 2017 * Monthly salary: 1 685 € (1 368 € net) for a period of 3 years Applications should be sent to: nadera.bureau at blri.fr For supplementary information contact: Johannes.Ziegler at univ-amu.fr ------------------------------------------------------------------------ Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21 Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44 5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr 13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/ France | http://www.blri.fr/ ------------------------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-michel.badier at univ-amu.fr Fri Mar 10 17:05:37 2017 From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier) Date: Fri, 10 Mar 2017 17:05:37 +0100 Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France) In-Reply-To: References: Message-ID: <76a895ff-1ab5-47a6-5339-c81754dede50@univ-amu.fr> Hello Andrea, it looks like we had the same idea! I hope you are doing well.
See you soon, JM On 10/03/2017 at 14:31, Andrea Brovelli wrote: -- Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/ Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14 Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr /To respect the environment, please print this e-mail only if necessary./ -------------- next part
-------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_amu.jpg Type: image/jpeg Size: 17847 bytes Desc: not available URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 08:49:27 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 07:49:27 +0000 Subject: [FieldTrip] Fwd: ft_volumerealign with headshape References: Message-ID: <3F59D5AD-E147-4B9A-995F-E8ADBBC72452@donders.ru.nl> Hi Ainsley, I am forwarding your message to the discussion list. Dear list, Please have a look at Ainsley's question below. Has anyone encountered this issue and found a solution? The error is a low-level MATLAB one, so apparently the input arguments to the ismember function call are not what they should be. Thanks and with best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: Ainsley Temudo Subject: ft_volumerealign with headshape Date: 13 March 2017 at 07:30:23 GMT+1 Hi Jan-Mathijs, I'm trying to realign an MRI to a headshape. I found a previous discussion mail from someone who had a similar problem a couple of years ago, and you replied saying you fixed it locally with a dirty hack.
https://mailman.science.ru.nl/pipermail/fieldtrip/2015-November/009828.html I am doing it the same way: mri = ft_read_mri('WMCP1011+22+t1mprage.nii'); cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_realigned = ft_volumerealign(cfg, mri); hs = ft_read_headshape('headscan.hsp'); cfg = []; cfg.method = 'headshape'; cfg.coordsys = 'ctf'; cfg.headshape.headshape = hs; mri_realigned2 = ft_volumerealign(cfg, mri_realigned); and I get the following errors: doing interactive realignment with headshape Error using cell/ismember (line 34) Input A of class cell and input B of class cell must be cell arrays of strings, unless one is a string. Error in ft_senstype (line 303) if (mean(ismember(ft_senslabel('ant128'), sens.label)) > 0.8) Error in ft_datatype_sens (line 138) ismeg = ft_senstype(sens, 'meg'); Error in ft_checkconfig (line 250) cfg.elec = ft_datatype_sens(cfg.elec); Error in ft_interactiverealign (line 71) cfg.template = ft_checkconfig(cfg.template, 'renamed', {'vol', 'headmodel'}); Error in ft_volumerealign (line 691) tmpcfg = ft_interactiverealign(tmpcfg); Is it the same issue as before? If this issue was fixed, any idea why I'm getting these errors? I'm using FieldTrip version 20160313 Kind Regards, Ainsley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 09:13:50 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 08:13:50 +0000 Subject: [FieldTrip] Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? References: Message-ID: <6B401176-3DCA-4858-AE3B-30DA4F0E331A@donders.ru.nl> Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip-related but could also be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45-degree tilt looks odd.
If this image was produced after reslicing the to-MNI-coregistered image, something went wrong with the realignment. If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don't know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized that I had visualized the registration of the headshape to the scalp surface, and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts, found where the MNI registration could be visualized, and discovered the 45-degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. My hcp_anatomy_egi.m with our local HCP-pipeline-produced T1 2. original hcp_anatomy.m with our local T1 3. original hcp_anatomy.m with downloaded HCP_MEG_pipeline-produced T1 All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2, which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M.
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well, it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the headsurface mesh, created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion, and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined). It could be that specifying ‘neuromag’ rather than ‘bti’ as the required coordinate system (coordsys) when specifying the fiducials in ft_volumerealign would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 that I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the FieldTrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 494751 bytes Desc: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx URL: From mpcoll at mac.com Mon Mar 13 13:47:25 2017 From: mpcoll at mac.com (MP Coll) Date: Mon, 13 Mar 2017 12:47:25 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3@mac.com> Dear FieldTrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's College London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the FieldTrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list, but somehow I can't find clear answers to these basic questions. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage)? If not, what would be the appropriate number of channels? Can you suggest a reference discussing this issue? 2) When not using a contrast to perform the beamforming, is normalisation of the lead field an adequate procedure to correct for the depth/centre bias? The FieldTrip tutorial suggests it is, but other posts on the mailing list suggest it is not. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus)? Is it correct to use a longer time window (e.g. 2 seconds) that is representative of the duration of the effect?
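On question 2: cfg.normalize = 'yes' in ft_prepare_leadfield divides each dipole's three-column leadfield by its Frobenius norm, as the ft_compute_leadfield excerpt quoted earlier in this thread shows. A minimal standalone sketch of that operation (illustrative only; lf stands for an Nchannels x 3*Ndipoles leadfield matrix):

```matlab
% lf: Nchannels x (3*Ndipoles) leadfield, three columns per dipole position
Ndipoles = size(lf, 2) / 3;
for ii = 1:Ndipoles
  cols = (3*ii-2):(3*ii);             % columns belonging to dipole ii
  nrm  = norm(lf(:, cols), 'fro');    % Frobenius norm of this dipole's leadfield
  if nrm > 0
    lf(:, cols) = lf(:, cols) / nrm;  % unit-norm ("array-gain") leadfield
  end
end
```

Because deep sources have small leadfield norms, this rescaling counteracts the tendency of unnormalized beamformer output to shrink with depth; whether it is adequate without a contrast condition is a debated point.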
I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll From n.molinaro at bcbl.eu Tue Mar 14 17:21:43 2017 From: n.molinaro at bcbl.eu (Nicola Molinaro) Date: Tue, 14 Mar 2017 17:21:43 +0100 (CET) Subject: [FieldTrip] RESEARCH FACULTY POSITIONS at the BCBL Message-ID: <1648301278.309469.1489508503037.JavaMail.zimbra@bcbl.eu> Dear Fieldtrip community I am forwarding this message from the BCBL direction Nicola ------------- RESEARCH FACULTY POSITIONS at the BCBL- Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) www.bcbl.eu (Center of excellence Severo Ochoa) The Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) together with IKERBASQUE (Basque Foundation for Science) offer 3 permanent IKERBASQUE Research Professor positions in the following areas: - Language acquisition - Any area of Language processing and/or disorders with advanced experience in MRI - Any area of Language processing and/or disorders with advanced experience in MEG The BCBL Center (recently awarded the label of excellence Severo Ochoa) promotes a vibrant research environment without substantial teaching obligations. It provides access to the most advanced behavioral and neuroimaging techniques, including 3 Tesla MRI, a whole-head MEG system, four ERP labs, a NIRS lab, a baby lab including eyetracker, EEG and NIRS, two eyetracking labs, and several well-equipped behavioral labs. There are excellent technical support staff and research personnel (PhD and postdoctoral students). The senior positions are permanent appointments. We are looking for cognitive neuroscientists or experimental psychologists with a background in psycholinguistics and/or neighboring cognitive neuroscience areas, and physicists and/or engineers with fMRI or MEG expertise. 
Individuals interested in undertaking research in the fields described in http://www.bcbl.eu/research/lines/ should apply through the BCBL web page (www.bcbl.eu/jobs). The successful candidate will be working within the research lines of the BCBL whose main aim is to develop high-risk/high gain projects at the frontiers of Cognitive Neuroscience. We expect high readiness to work with strong engagement and creativity in an interdisciplinary and international environment. Deadline June 30th We encourage immediate applications as the selection process will be ongoing and the appointment may be made before the deadline. Only senior researchers with a strong record of research experience will be considered. Women candidates are especially welcome. To submit your application please follow this link: http://www.bcbl.eu/jobs applying for Ikerbasque Research Professor 2017 and upload: Your curriculum vitae. A cover letter/statement describing your research interests (4000 characters maximum) The names of two referees who would be willing to write letters of recommendation Applicants should be fluent in English. Knowledge of Spanish and/or Basque will be considered useful but is not compulsory. For more information, please contact the Director of BCBL, Manuel Carreiras (m.carreiras at bcbl.eu). From marc.lalancette at sickkids.ca Tue Mar 14 17:46:50 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Tue, 14 Mar 2017 16:46:50 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for LCMV beamformer in general, and normalization in particular, is the book by Sekihara and Nagarajan. 
For a scalar beamformer, yes, normalizing the leadfield ("array-gain") will correct the depth bias, but I find these absolute values harder to interpret. Dividing instead by the projected noise ("unit-noise-gain") also corrects the depth bias and has better spatial resolution. For a vector beamformer, things get a bit more complicated, as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant, and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) FieldTrip does not use these normalizations by default, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 ________________________________ This e-mail may contain confidential, personal and/or health information (information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. From hamedtaheri at yahoo.com Tue Mar 14 18:26:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Tue, 14 Mar 2017 17:26:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> Message-ID: <687200850.5575467.1489512414151@mail.yahoo.com> Hello all My name is Hamed a Ph.D.
candidate from the Sapienza University of Rome. I have EEG data recorded from 64 channels in .eeg format. How can I view my data in FieldTrip? cfg = [] cfg.dataset = 'mydata........' .... Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailtome.2113 at gmail.com Wed Mar 15 07:16:28 2017 From: mailtome.2113 at gmail.com (Arti Abhishek) Date: Wed, 15 Mar 2017 17:16:28 +1100 Subject: [FieldTrip] Epoching between 1 to 30 seconds Message-ID: Dear fieldtrip community, I have EEG recorded in an auditory steady-state paradigm and I want to epoch between 1 and 30 seconds. I don't want any prestimulus time in my epoch, nor the first second of the stimulus (to remove the onset response). I was wondering how I can epoch like this in FieldTrip? Can I epoch without using the cfg.trialdef.prestim and cfg.trialdef.poststim parameters? Thanks, Arti -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni.rbaena at gmail.com Wed Mar 15 08:26:05 2017 From: toni.rbaena at gmail.com (Antonio Rodriguez) Date: Wed, 15 Mar 2017 08:26:05 +0100 Subject: [FieldTrip] Epoching between 1 to 30 seconds In-Reply-To: References: Message-ID: Hello Arti, maybe you can try setting your prestim time to a negative value (so you will start after the event), and then set the poststim to your final epoch time. Like this: cfg = []; cfg.datafile = datafile; cfg.headerfile = headerfile; cfg.trialdef.eventtype = 'Stimulus'; cfg.trialdef.eventvalue = 'S 19'; cfg.trialdef.prestim = -1; % start 1 second AFTER stim cfg.trialdef.poststim = 30; % end at second 30 AFTER stim td = ft_definetrial(cfg); % my epochs are 29 seconds long Hope this helps.
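If the trials have already been cut with a conventional prestimulus window, the same 1-30 s selection can also be made afterwards with ft_redefinetrial (a sketch; data stands for the output of an earlier ft_preprocessing call with sufficiently long trials):

```matlab
cfg        = [];
cfg.toilim = [1 30];                      % keep only 1 to 30 s after stimulus onset
data_sel   = ft_redefinetrial(cfg, data); % trials now span 1-30 s
```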
2017-03-15 7:16 GMT+01:00 Arti Abhishek : > Dear fieldtrip community, > > I have EEG recorded in an auditory steady state paradigm and I want to > epoch between 1-30 seconds. I don't want in my epoch any prestimulus time > or the first second of the stimulus (to remove the onset response). I was > wondering how I can epoch like this in fieldtrip? Can I epoch without using > cfg.trialdef.prestim and cfg.trialdef.poststim parameters? > > Thanks, > Arti > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 08:52:23 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 07:52:23 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <687200850.5575467.1489512414151@mail.yahoo.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . 
Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From hamedtaheri at yahoo.com Wed Mar 15 09:10:23 2017 From: hamedtaheri at yahoo.com (hamed taheri) Date: Wed, 15 Mar 2017 09:10:23 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: I saw this tutorial but I couldn't find the viewing code. I want to see my 64 channels. Sent from my iPhone > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: > > Hi Hamed, > > The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. > > Best, > Stan > > -- > Stan van Pelt, PhD > Donders Institute for Brain, Cognition and Behaviour > Radboud University > Montessorilaan 3, B.01.34 > 6525 HR Nijmegen, the Netherlands > tel: +31 24 3616288 > > From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri > Sent: dinsdag 14 maart 2017 18:27 > To: fieldtrip at science.ru.nl > Subject: [FieldTrip] How can i see EEG > > Hello all > My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. > I have an EEG data that recorded in 64 channel with .eeg format. > How can I see my data in Fieldtrip. > > cfg = [] > cfg.dataset = 'mydata........' > . > . > . > .
> > > > Hamed Taheri Gorji > PhD Candidate > Brain Imaging Laboratory > > DEPARTMENT OF PSYCHOLOGY > FACULTY OF MEDICINE AND PSYCHOLOGY > SAPIENZA > University of Rome > > Santa Lucia Foundation, Via > Ardeatina 306, 00179 Rome > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 09:16:46 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 08:16:46 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E95E@exprd03.hosting.ru.nl> Try http://www.fieldtriptoolbox.org/tutorial/preprocessing_erp Or the excellent walkthrough: http://www.fieldtriptoolbox.org/walkthrough From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of hamed taheri Sent: woensdag 15 maart 2017 9:10 To: FieldTrip discussion list Subject: Re: [FieldTrip] How can i see EEG I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. 
candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 09:27:29 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 08:27:29 +0000 Subject: [FieldTrip] Fwd: How can i see EEG References: <0CE090DC-AF2A-4133-83E8-F52895A64ECC@gmail.com> Message-ID: Hamed, To add to Stan’s excellent suggestions: If your question is about visualization, you could have a look at the plotting tutorial, or familiarize yourself with matlab’s basic plotting functionality, functions such as plot etc. Perhaps you could also check with colleagues in your lab who might know how to do this. Good luck Jan-Mathijs On 15 Mar 2017, at 09:10, hamed taheri > wrote: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. 
Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.hauswald at me.com Wed Mar 15 09:37:49 2017 From: anne.hauswald at me.com (anne Hauswald) Date: Wed, 15 Mar 2017 09:37:49 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Hi Hamed, as Stan pointed out, you will find information on visual data inspection at http://www.fieldtriptoolbox.org/walkthrough . Basically, you can use e.g. ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your EEG data'; ft_databrowser(cfg) For more options, see the reference documentation for this function.
best anne > Am 15.03.2017 um 09:10 schrieb hamed taheri : > > I saw this tutorial but I couldn't find viewing code > I want to see my 64 channels > > > Sent from my iPhone > > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: > >> Hi Hamed, >> >> The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started . >> >> Best, >> Stan >> >> -- >> Stan van Pelt, PhD >> Donders Institute for Brain, Cognition and Behaviour >> Radboud University >> Montessorilaan 3, B.01.34 >> 6525 HR Nijmegen, the Netherlands >> tel: +31 24 3616288 >> >> From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl ] On Behalf Of Hamed Taheri >> Sent: dinsdag 14 maart 2017 18:27 >> To: fieldtrip at science.ru.nl >> Subject: [FieldTrip] How can i see EEG >> >> Hello all >> My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. >> I have an EEG data that recorded in 64 channel with .eeg format. >> How can I see my data in Fieldtrip. >> >> cfg = [] >> cfg.dataset = 'mydata........' >> . >> . >> . >> . >> >> >> >> Hamed Taheri Gorji >> PhD Candidate >> Brain Imaging Laboratory >> >> DEPARTMENT OF PSYCHOLOGY >> FACULTY OF MEDICINE AND PSYCHOLOGY >> SAPIENZA >> University of Rome >> >> Santa Lucia Foundation, Via >> Ardeatina 306, 00179 Rome >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Herring at donders.ru.nl Wed Mar 15 09:44:56 2017 From: J.Herring at donders.ru.nl (Herring, J.D. 
(Jim)) Date: Wed, 15 Mar 2017 08:44:56 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation In-Reply-To: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> References: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Message-ID: <6F9804CE79B042468FDC7E8C86CF4CBC500CF390@exprd04.hosting.ru.nl> Dear Michel-Pierre, Allow me to add some additional (unfortunately non-referenced) advice. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage) ? If not, what would be the appropriate number of channels ? Can you suggest a reference discussing this issue ? First, make sure your data are referenced to the common-average as the forward model assumes this. Then, the appropriate number of channels depends on the required spatial resolution; If you wish to source localize posterior alpha activity 60 channels is fine. If you wish to parcellate your brain into 100 regions and do whole-brain connectivity, 60 channels is not fine and you might want to consider switching to MEG as well. 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias ? The Fieldtrip tutorial suggest it is but other posts on the mailing list suggest that it is not. You say that you are looking at a change of alpha in response to a visual stimulus? It seems like you do have a contrast. You can compare to the baseline. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus) ? Is it correct to use a longer time-window (e.g. 2 seconds) that is representative of the duration of the effect ? Together with the previous point, you can compare your time window of interest to your baseline. 
Here it is important that you take the same window length from the baseline period as from the activation period, to prevent a bias towards the window with more data when calculating the common filter. However, according to http://www.fieldtriptoolbox.org/example/common_filters_in_beamforming it is fine to have an unequal number of trials in each condition, so if your baseline period is only 1 second, you could cut your 'active' period into 1 s segments using ft_redefinetrial so you can still use all of the data. Best, Jim -----Original Message----- From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Marc Lalancette Sent: Tuesday, March 14, 2017 5:47 PM To: fieldtrip at science.ru.nl Cc: mpcoll at mac.com Subject: Re: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for the LCMV beamformer in general, and normalization in particular, is the book by Sekihara and Nagarajan. For a scalar beamformer, yes, normalizing the leadfield ("array-gain") will correct depth bias, but I find these absolute values harder to interpret. Dividing instead by projected noise ("unit-noise-gain") also corrects depth bias, and has better spatial resolution. For a vector beamformer, things get a bit more complicated, as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) FieldTrip does not by default use these normalizations, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 -----Original Message----- Date: Mon, 13 Mar 2017 12:47:25 +0000 From: MP Coll > To: fieldtrip at science.ru.nl Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3 at mac.com> Content-Type: text/plain; charset=utf-8; format=flowed Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's College London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the FieldTrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list but somehow I can't find clear answers to these basic questions. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage)? If not, what would be the appropriate number of channels? Can you suggest a reference discussing this issue? 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias? The FieldTrip tutorial suggests it is, but other posts on the mailing list suggest that it is not. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus)? Is it correct to use a longer time-window (e.g.
2 seconds) that is representative of the duration of the effect ? I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll ------------------------------ _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip End of fieldtrip Digest, Vol 76, Issue 14 ***************************************** ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:11:39 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 09:11:39 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> tja, een beetje broekafzakkerig. T On 15 Mar 2017, at 09:37, anne Hauswald > wrote: Hi Hamed, as Stan pointed you to, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. 
ft_databrowser to view your data. for example cfg=[]; cfg.dataset='path to your eeg data‘; ft_databrowser(cfg) for more options see the references for this function. best anne Am 15.03.2017 um 09:10 schrieb hamed taheri >: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:16:47 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. 
(Jan Mathijs)) Date: Wed, 15 Mar 2017 09:16:47 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> Message-ID: <01F7D3C2-7726-46A0-93FC-6025F919319E@donders.ru.nl> Hi all, Apologies to all. I replied incorrectly to this e-mail, so please ignore it. It's out of context (and impossible to understand without context). Best wishes, Jan-Mathijs > On 15 Mar 2017, at 10:11, Schoffelen, J.M. (Jan Mathijs) wrote: > > tja, een beetje broekafzakkerig. > T From jens.klinzing at uni-tuebingen.de Wed Mar 15 11:31:19 2017 From: jens.klinzing at uni-tuebingen.de (Jens Klinzing, Uni Tübingen) Date: Wed, 15 Mar 2017 11:31:19 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <58BFCD56.2080508@uni-tuebingen.de> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> <58BFCD56.2080508@uni-tuebingen.de> Message-ID: <58C917F7.5040009@uni-tuebingen.de> I realized the problem also occurs when processing the FieldTrip example brain and filed it as bug 3271. Best, Jens > Jens Klinzing, Uni Tübingen > Mittwoch, 8. März 2017 10:22 > Hi Jan-Mathijs, > the size difference is still there with cfg.grid.nonlinear = no. > > > > > Best, > Jens > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > Schoffelen, J.M. (Jan Mathijs) > Mittwoch, 8. März 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)?
> > JM > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 111863 bytes Desc: not available URL: From kirsten.petras at uclouvain.be Wed Mar 15 13:04:57 2017 From: kirsten.petras at uclouvain.be (Kirsten Petras) Date: Wed, 15 Mar 2017 12:04:57 +0000 Subject: [FieldTrip] ft_electroderealign Reference to non-existent field error Message-ID: <2194f3f6bdc14ab6be094db21ad3487f@ucl-mbx02.OASIS.UCLOUVAIN.BE> Dear Fieldtrippers, I am a PhD student at UC Louvain and am currently working on source-space analysis of 256-channel EEG data. I am having trouble using ft_electroderealign as follows to project my electrodes onto the surface of the scalp. cfg = []; cfg.method = 'headshape'; cfg.elec = elec_prealigned; cfg.warp = 'rigidbody'; cfg.headshape = mesh; elec_aligned = ft_electroderealign(cfg); This fails with the following error message: Reference to non-existent field 'pos'. Error in ft_warp_error (line 57) el = project_elec(input, target.pos, target.tri); Error in fminunc (line 253) f = feval(funfcn{3},x,varargin{:}); Error in ft_warp_optim (line 129) rf = optimfun(errorfun, ri, options, pos1, pos2, 'rigidbody'); Error in ft_electroderealign (line 361) [norm.chanpos, norm.m] = ft_warp_optim(elec.chanpos, headshape, cfg.warp); Caused by: Failure in initial user-supplied objective function evaluation. FMINUNC cannot continue. The electrode positions come in the format of the EGI template; however, the coordinates have been exchanged for the actual coordinates on the individual participant (done manually from the MRI). The Fid positions have been removed.
So the struct looks like this: >> disp (elec_prealigned) chanpos: [256x3 double] elecpos: [256x3 double] homogeneous: [4x4 double] label: {256x1 cell} type: 'egi256' unit: 'cm' cfg: [1x1 struct] Mesh is the following structure: hex: [4794932x8 double] pnt: [4940731x3 double] labels: [4794932x1 double] tissue: [4794932x1 double] tissuelabel: {'air' 'csf' 'gray' 'scalp' 'skull' 'white'} unit: 'mm' cfg: [1x1 struct] At the point where it crashes, 'target' looks like this: disp(target) pnt: [4940731x3 double] poly: [291320x4 double] unit: 'cm' It looks like the headshape created in ft_electroderealign lines 249-259 ( if isstruct(cfg.headshape) && isfield(cfg.headshape, 'hex') cfg.headshape = fixpos(cfg.headshape); headshape = mesh2edge(cfg.headshape); ) is used as the 'target' input for ft_warp_optim(elec.chanpos, headshape, cfg.warp); in line 361. I tried replacing the input by cfg.headshape, but then the .tri field is still missing... Would anyone have a suggestion as to what I am doing wrong here? Thanks a lot! Kirsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.urai at gmail.com Wed Mar 15 13:29:25 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 15 Mar 2017 13:29:25 +0100 Subject: [FieldTrip] compiling ft_volumenormalise In-Reply-To: References: Message-ID: If anyone encounters the same problem, compilation works if I manually add a bunch of spm functions (which are not recognised by mcc, probably because they are in a class definition folder). Specifically, including '-a', '~/Documents/fieldtrip/external/spm8/spm.m', ... '-a', '~/Documents/fieldtrip/external/spm8/templates/T1.nii', ... '-a', '~/Documents/fieldtrip/external/freesurfer/MRIread', ... '-a', '~/code/Tools/spmbug/dim.m', ... '-a', '~/code/Tools/spmbug/dtype.m', ... '-a', '~/code/Tools/spmbug/fname.m', ... '-a', '~/code/Tools/spmbug/offset.m', ... '-a', '~/code/Tools/spmbug/scl_slope.m', ... '-a', '~/code/Tools/spmbug/scl_inter.m', ... 
'-a', '~/code/Tools/spmbug/permission.m', ... '-a', '~/code/Tools/spmbug/niftistruc.m', ... '-a', '~/code/Tools/spmbug/read_hdr.m', ... '-a', '~/code/Tools/spmbug/getdict.m', ... '-a', '~/code/Tools/spmbug/read_extras.m', ... '-a', '~/code/Tools/spmbug/read_hdr_raw.m', ... does the trick. Happy compiling, Anne On 1 March 2017 at 19:38, Anne Urai wrote: > Hi FieldTrippers, > > I compile my code to run on the supercomputer cluster (without many matlab > licenses), which usually works fine when I do something like: > > addpath('~/Documents/fieldtrip'); > ft_defaults; > addpath('~/Documents/fieldtrip/external/spm8'); > mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ... > '-R', '-nodisplay', '-R', '-singleCompThread', fname); > > However, compiling the ft_volumenormalise function gives me some problems. > Specifically, if source is the result of my beamformer analysis, this code > > cfg = []; > cfg.parameter = 'pow'; > cfg.nonlinear = 'no'; % can warp back to individual > cfg.template = > '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii'; > cfg.write = 'no'; > cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage > will bug > source = ft_volumenormalise(cfg, source); > > works fine when running it within Matlab. However, when I run the > executable after compiling (which completes without error), a low-level spm > function throws the following error: > > the input is source data with 16777216 brainordinates on a [256 256 256] > grid > Warning: could not reshape "freq" to the expected dimensions > > In ft_datatype_volume (line 136) > In ft_checkdata (line 350) > In ft_volumenormalise (line 98) > In B6b_sourceContrast_volNormalise (line 57) > Converting the coordinate system from ctf to spm > Undefined function 'fname' for input arguments of type 'struct' > Error in file_array (line 32) > Error in spm_create_vol>create_vol (line 77) > Error in spm_create_vol (line 16) > Error in volumewrite_spm (line 71) > Error in ft_write_mri (line 65) > Error in align_ctf2spm (line 168) > Error in ft_convert_coordsys (line 95) > Error in ft_volumenormalise (line 124) > Error in B6b_sourceContrast_volNormalise (line 57) > MATLAB:UndefinedFunction > > I'd be very grateful for hints from anyone who's successfully compiled the > ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer > at compilation does not solve the problem. > Thanks, > > — > Anne E. Urai, MSc > PhD student | Institut für Neurophysiologie und Pathophysiologie > Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | > Hamburg, Germany > www.anneurai.net / @AnneEUrai > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hamedtaheri at yahoo.com Wed Mar 15 14:33:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Wed, 15 Mar 2017 13:33:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <174832299.566947.1489584834648@mail.yahoo.com> Thanks, dear Anne. With ft_databrowser(cfg) I saw my signal, but it's not as good as EEGLAB. Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome On Wednesday, March 15, 2017 9:43 AM, anne Hauswald wrote: Hi Hamed, as Stan pointed out, you will find information on visual data inspection at http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your EEG data'; ft_databrowser(cfg) For more options, see the reference documentation for this function. best anne Am 15.03.2017 um 09:10 schrieb hamed taheri: I saw this tutorial but I couldn't find the viewing code. I want to see my 64 channels. Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have EEG data recorded with 64 channels in .eeg format. How can I see my data in FieldTrip? cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From elam4hcp at gmail.com Wed Mar 15 17:41:03 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Wed, 15 Mar 2017 11:41:03 -0500 Subject: [FieldTrip] HCP Course 2017: Accommodations Reservations now available Message-ID: Spaces are beginning to fill for the 2017 HCP Course: "Exploring the Human Connectome", to be held June 19-23 at the Djavad Mowafaghian Centre for Brain Health at the University of British Columbia (UBC) in Vancouver, BC, Canada! Reservations for on-site accommodations for those attending the course are now available. The 5-day intensive HCP course will provide training in acquisition, processing, analysis and visualization of whole-brain imaging and behavioral data using methods and software tools developed by the WU-Minn-Oxford Human Connectome Project (HCP) consortium. The HCP Course is the best place to learn directly from HCP investigators and to explore HCP data and methods.
This year's course will cover where HCP is heading with the advent of the Lifespan HCP development (ages 5-18) and aging (ages 35-90+) projects, and will provide hands-on experience in working with the multi-modal human cortical parcellation (Glasser et al. 2016, Nature) and with the "HCP-Style" paradigm for data acquisition, analysis, and sharing (Glasser et al. 2016, Nature Neuroscience). For more info and to register, visit the HCP Course 2017 website. If you have any questions, please contact us at: hcpcourse at humanconnectome.org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From eriksenj at ohsu.edu Wed Mar 15 23:56:21 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Wed, 15 Mar 2017 22:56:21 +0000 Subject: [FieldTrip] why realignment tilted in hcp_anatomy? Message-ID: All HCP_MEG users: In the hope of getting some responses, let me simplify this to the bare minimum. By setting a flag in [hcp_anatomy.m] to allow visualization of the realignment result, I have discovered something that appears wrong. The coronal view in the attached "realignment result" is tilted at a 45-degree angle. My first question is simply: is this what I should see? If so, why is it tilted? [cid:image001.png at 01D29D9F.396AFF00] I have not modified the script except to turn on this visualization. The input file (T1w_acpc_dc_restore.nii) is from one of the publicly available HCP_MEG subjects (177746) that I downloaded. So there can be no "user error" at this point on my account, unless it is using [hcp_anatomy] outside the context of the whole HCP_MEG pipeline. The above plot occurs on line 156 of [hcp_anatomy.m]. Thanks, -Jeff PS.
Just in case I am marking the ac, pc, zx, and r landmark points wrong, here is what I marked: [cid:image002.png at 01D29DA0.14D6DF00] And here is all the console output up to the point of drawing the realignment result: dicomfile = A:\HCP_MEG_subs\HCP-MEG-177746\MEG\anatomy\T1w_acpc_dc_restore.nii executing the anatomy pipeline for subject 177746 not using the high quality structural preprocessing results ------------------------------------------------------------------------- Running the interactive part of the anatomy pipeline Rescaling NIFTI: slope = 1, intercept = 0 Please identify the Anterior Commissure, Posterior Commissure, a point on the positive Z and X axes, and a point on the right part of the head the input is volume data with dimensions [260 311 260] 1. To change the slice viewed in one plane, either: a. click (left mouse) in the image on a different plane. Eg, to view a more superior slice in the horizontal plane, click on a superior position in the coronal plane, or b. use the arrow keys to increase or decrease the slice number by one 2. To mark a fiducial position or anatomical landmark, do BOTH: a. select the position by clicking on it in any slice with the left mouse button b. identify it by pressing the letter corresponding to the fiducial/landmark: press a for ac, p for pc, z for xzpoint press r for an extra control point that should be on the right side You can mark the fiducials multiple times, until you are satisfied with the positions. 3. To change the display: a. press c on keyboard to toggle crosshair visibility b. press f on keyboard to toggle fiducial visibility c. press + or - on (numeric) keyboard to change the color range's upper limit 4. 
To finalize markers and quit interactive mode, press q on keyboard ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected ac ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected pc ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm 
================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected xzpoint ================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected right ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 
================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 615 cfg.fiducial = opt.fiducial; Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 120) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 134) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 148) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) K>> From: Schoffelen, J.M. (Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Monday, March 13, 2017 1:14 AM To: FieldTrip discussion list; K Jeffrey Eriksen Subject: Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip related, but also could be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45 degrees tilt looks odd. If this image was produced after reslicing the to-MNI-coregistered-image something went wrong with the realignment. 
If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don’t know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen > Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" > Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized that I had visualized the registration of the headshape to the scalp surface and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts and found where the MNI registration could be visualized, and discovered that the 45-degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. My hcp_anatomy_egi.m with our local HCP-pipeline produced T1 2. original hcp_anatomy.m with our local T1 3. original hcp_anatomy.m with downloaded HCP_MEG_pipeline produced T1 All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2 which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M. 
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version, substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the head surface mesh, created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion, and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined) It could be that changing the required coordinate system (coordsys) to ‘neuromag’ while specifying the fiducials in ft_volumerealign (rather than ‘bti’) would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 that I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the fieldtrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 92989 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 122781 bytes Desc: image002.png URL: From eriksenj at ohsu.edu Thu Mar 16 01:59:01 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Thu, 16 Mar 2017 00:59:01 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? Message-ID: I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many points are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From da434 at cam.ac.uk Thu Mar 16 17:16:35 2017 From: da434 at cam.ac.uk (D.Akarca) Date: Thu, 16 Mar 2017 16:16:35 +0000 Subject: [FieldTrip] Neighbouring issue with ft_timelockstatistics Message-ID: Dear all, My name is Danyal Akarca, I’m a Master’s student at Cambridge University, working at the MRC Cognition and Brain Sciences Unit. I’m currently working on some MEG data analysis, using ft_timelockstatistics and ft_clusterplot to determine clustering of neuromag magnetometers for task-related data. My neighbouring function is defined as follows: cfg = []; cfg.method = 'distance'; cfg.neighbourdist = 0.13; cfg.template = 'neuromag306mag_neighb'; cfg.layout = 'NM306mag.lay'; cfg.channel = 'all'; neighbours = ft_prepare_neighbours(cfg, MagGM_Control_Deviant); % The input data here is one of the grand means computed with ft_timelockgrandaverage This provides me with an average of 5.5 neighbours per channel, and upon inspection with ft_neighbourplot, it looks very reasonable. 
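[Editorial note: the average-neighbour figure mentioned above can be recomputed directly from the output structure. A minimal sketch, not part of the original message; it assumes `neighbours` is the struct array returned by the ft_prepare_neighbours call above, with one `.neighblabel` cell array per channel.]

```matlab
% Sketch: count neighbours per channel to sanity-check cfg.neighbourdist.
% Assumes `neighbours` comes from the ft_prepare_neighbours call above.
nnb = cellfun(@numel, {neighbours.neighblabel});  % neighbours per channel
fprintf('average number of neighbours per channel: %.1f\n', mean(nnb));
fprintf('channels with fewer than 2 neighbours: %d\n', sum(nnb < 2));
```

A channel with fewer than cfg.minnbchan neighbours can never satisfy the minnbchan criterion, so a check like this may help explain why clusters disappear when that option is added.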
I then went on to compute statistics, using ft_timelockstatistics as follows: cfg = []; cfg.channel = 'all'; cfg.neighbours = neighbours; cfg.latency = [0.1 0.54]; cfg.method = 'montecarlo'; cfg.numrandomization = 1000; cfg.correctm = 'cluster'; cfg.correcttail = 'prob'; cfg.ivar = 1; cfg.uvar = 2; cfg.statistic = 'ft_statfun_depsamplesT'; Nsub = 14; cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; stat = ft_timelockstatistics(cfg, cw{:}, cw1{:}); % cw and cw1 are cells containing my files When I run this, I obtain 52 positive clusters and 100 negative clusters, of which 6 negative clusters are significant. However, I have realised that this assumed that each channel is an independent cluster? These 6 ‘clusters’ are very close to each other when plotted using ft_clusterplot, so I thought that actually this should be 1 big cluster rather than 6 independent clusters very close to each other. So I therefore added cfg.minnbchan = 2; However, when I do this, it says there are 0 clusters generated at all. This occurs no matter how large I make cfg.neighbourdist (even when I make it so that each magnetometer is neighbours with every other neighbour, I still get no clusters forming). I was wondering if anyone had any thoughts, or could help me with this? I am still new to FieldTrip so any help would be very much appreciated. I hope that I’ve included all the relevant information required. All the best, Danyal Akarca MPhil Neuroscience, Cambridge University MRC Cognition and Brain Sciences Unit From SXM1085 at student.bham.ac.uk Thu Mar 16 17:30:55 2017 From: SXM1085 at student.bham.ac.uk (Sebastian Michelmann) Date: Thu, 16 Mar 2017 16:30:55 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? 
In-Reply-To: References: Message-ID: <2D9C9145AF1E4D4799ADDB2C0F996AE8019EF96FF9@EX13.adf.bham.ac.uk> Hi Jeff, we are currently taking >500 points. Best, Sebastian From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of K Jeffrey Eriksen Sent: 16 March 2017 00:59 To: hcp-users at humanconnectome.org; fieldtrip at science.ru.nl Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many points are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From seymourr at aston.ac.uk Thu Mar 16 18:58:36 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Thu, 16 Mar 2017 17:58:36 +0000 Subject: [FieldTrip] Granger Causality & ft_timelockstatistics Message-ID: Hi all, I'm currently using ft_timelockstatistics to compute the group-level statistical difference between 2 Granger causality spectra (I'm substituting freq for time data). My question is whether my current cfg settings for ft_timelockstatistics (see code below) will cluster my data over time? I assume by selecting cfg.avgovertime = 'no' FT_STATISTICS_MONTECARLO will cluster over time rather than space... but I just wanted to double check... 
Many thanks, Robert Seymour (Aston Brain Centre) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% cfg = []; cfg.avgovertime = 'no'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'ft_statfun_depsamplesT'; cfg.alpha = 0.05; cfg.clusteralpha = 0.05; cfg.correctm = 'cluster'; cfg.numrandomization = 1000; Nsub = numel(grandavgA); cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; cfg.ivar = 1; % the 1st row in cfg.design contains the independent variable cfg.uvar = 2; % the 2nd row in cfg.design contains the subject number stat = ft_timelockstatistics(cfg,grandavgB{:},grandavgA{:}); figure; plot(stat.stat); xlabel('Freq (Hz)'); ylabel('t-value'); figure; plot(stat.prob);xlabel('Freq (Hz)'); ylabel('p-value'); %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -------------- next part -------------- An HTML attachment was scrubbed... URL: From Umla-Runge at cardiff.ac.uk Thu Mar 16 19:15:39 2017 From: Umla-Runge at cardiff.ac.uk (Katja Umla-Runge) Date: Thu, 16 Mar 2017 18:15:39 +0000 Subject: [FieldTrip] PhD studentship at Cardiff University Message-ID: Applications are invited for a PhD studentship on functional and structural properties of spatial processing networks in the brain at Cardiff University starting from July 2017. 
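[Editorial note: with no channel dimension to cluster over, clustering in the code above runs along the remaining (here frequency-as-time) axis, and the resulting mask can be inspected directly. A hedged sketch, not part of the original message; it assumes `stat` is the output of the ft_timelockstatistics call above, that stat.mask was returned, and that stat.stat is a vector over the frequency-as-time axis.]

```matlab
% Sketch: mark which samples fall inside significant clusters.
% Assumes `stat` is the output of the ft_timelockstatistics call above.
figure; plot(stat.stat); hold on
sig = find(logical(stat.mask));            % samples in significant clusters
plot(sig, stat.stat(sig), 'r.', 'MarkerSize', 12);
xlabel('Freq (Hz)'); ylabel('t-value'); legend('t-values', 'significant');
```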
Please see here for more details on the project and do contact me if you would like to know more: https://www.findaphd.com/search/ProjectDetails.aspx?PJID=82152 http://psych.cf.ac.uk/degreeprogrammes/postgraduate/research/ Regards, Katja Katja Umla-Runge Lecturer CUBRIC, School of Psychology (College of Biomedical and Life Sciences) Cardiff University Maindy Road Cardiff, CF24 4HQ Tel: +44 (0)29 2087 0715 Email: Umla-Runge at cardiff.ac.uk Katja Umla-Runge Darlithydd CUBRIC, Yr Ysgol Seicoleg (Coleg y Gwyddorau Biofeddygol a Bywyd) Prifysgol Caerdydd Maindy Road Caerdydd, CF24 4HQ Ffôn : +44 (0)29 2087 0715 E-bost: Umla-Runge at caerdydd.ac.uk Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmatthes at cbs.mpg.de Fri Mar 17 13:35:10 2017 From: dmatthes at cbs.mpg.de (Daniel Matthes) Date: Fri, 17 Mar 2017 13:35:10 +0100 Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m Message-ID: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> Hi fieldtrip developers, I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' markers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is the absence of the trialinfo variable. In detail, if no 'Stimulus' marker is defined, the numstim variable becomes 0 (line 99), yet the query 'if all(numstim==numstim(1))' in line 100 still evaluates to true. I would recommend changing line 100 to: if ((numstim > 0) && (all(numstim==numstim(1)))) This way, the else branch will be executed if numstim = 0. Furthermore, the mentioned function also crashes if the stimulus markers in the marker file either have no value or a value with wrong letters. These cases should be caught with a more obvious error message. All the best, Daniel From jan.schoffelen at donders.ru.nl Fri Mar 17 13:45:16 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. 
(Jan Mathijs)) Date: Fri, 17 Mar 2017 12:45:16 +0000 Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m In-Reply-To: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> References: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> Message-ID: <68BDEFCD-EA25-459F-8465-B3D850838B67@donders.ru.nl> Thanks for your input, Daniel. May I suggest you follow this up through GitHub? http://www.fieldtriptoolbox.org/development/git The best thing for you to do would be to make a Pull Request with the suggested changes. Thanks, and keep up the good work, Jan-Mathijs > On 17 Mar 2017, at 13:35, Daniel Matthes wrote: > > Hi fieldtrip developers, > > I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' markers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is the absence of the trialinfo variable. > > In detail, if no 'Stimulus' marker is defined, the numstim variable becomes 0 (line 99), yet the query 'if all(numstim==numstim(1))' in line 100 still evaluates to true. > > I would recommend changing line 100 to: > > if ((numstim > 0) && (all(numstim==numstim(1)))) > > This way, the else branch will be executed if numstim = 0. > > Furthermore, the mentioned function also crashes if the stimulus markers in the marker file either have no value or a value with wrong letters. These cases should be caught with a more obvious error message. 
> > All the best, > > Daniel > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From r.oostenveld at donders.ru.nl Tue Mar 21 12:01:50 2017 From: r.oostenveld at donders.ru.nl (Robert Oostenveld) Date: Tue, 21 Mar 2017 12:01:50 +0100 Subject: [FieldTrip] Donders training courses: "Tool-kits" of Cognitive Neuroscience Message-ID: <05323DBF-FF87-4F0E-AE08-1E59B82EBEA3@donders.ru.nl> > Begin forwarded message: > > From: "Stijns, M.H. (Tildie)" > Subject: Announcing Donders Tool-kits 2017 > Date: 15 March 2017 at 14:42:02 GMT+1 > > > Are you interested in learning neuroimaging techniques directly from the experts? > Do you like courses that take a practical hands-on approach to training? > To help you become proficient in modern neuroimaging methods, the Donders Institute offers “Tool-kits” of Cognitive Neuroscience, held annually at Radboud University, Nijmegen, the Netherlands. > Donders Tool-kits in 2017 : > Advanced MEG/EEG : (3-7 April 2017) - NOTE: registration is closed > Advanced (f)MRI: (15-18 May 2017) > Brain Stimulation : (30 May-2 June 2017) > Neuroimaging : (28 August-1 September 2017) > > Bests, Tildie -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Verhoef at donders.ru.nl Tue Mar 21 12:51:46 2017 From: J.Verhoef at donders.ru.nl (Verhoef, J.P. 
(Julia)) Date: Tue, 21 Mar 2017 11:51:46 +0000 Subject: [FieldTrip] Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' Message-ID: <11E9E0B371DBAE4EB859A9CC30606A04023E199F@exprd04.hosting.ru.nl> Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' (1.0 FTE) Dutch Research Consortium 'Language in Interaction' Maximum salary: € 4,691 gross/month Vacancy number: 30.01.17 Application deadline: 17 April 2017 Responsibilities The Language in Interaction research consortium invites applications for a senior postdoctoral position. You will contribute to the integration of empirical research in our consortium. You will act in close collaboration with Peter Hagoort, programme director of the consortium. This position provides the opportunity for conducting world-class research as a member of an interdisciplinary team. Moreover, it will provide the opportunity to contribute to developing a theoretical framework for our understanding of the human language faculty. Work environment The Netherlands has an outstanding track record in the language sciences. The Language in Interaction research consortium, which is sponsored by a large grant from the Netherlands Organization for Scientific Research (NWO), brings together many of the excellent research groups in the Netherlands in a research programme on the foundations of language. In addition to excellence in the domain of language and related relevant fields of cognition, our consortium provides state-of-the-art research facilities and a research team with ample experience in the complex research methods that will be invoked to address the scientific questions at the highest level of methodological sophistication. These include methods from genetics, neuroimaging, computational modelling, and patient-related research. This consortium realises both quality and critical mass for studying human language at a scale not easily found anywhere else. 
We have identified five Big Questions (BQ) that are central to our understanding of the human language faculty. These questions are interrelated at multiple levels. Teams of researchers will collaborate to collectively address these key questions of our field. Our five Big Questions are: BQ1: The nature of the mental lexicon: How to bridge neurobiology and psycholinguistic theory by computational modelling? BQ2: What are the characteristics and consequences of internal brain organization for language? BQ3: Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people? BQ4: Variability in language processing and in language learning: Why does the ability to learn language change with age? How can we characterise and map individual language skills in relation to the population distribution? BQ5: How are other cognitive systems shaped by the presence of a language system in humans? You will be appointed at the Donders Institute, Centre for Cognitive Neuroimaging (Radboud University, Nijmegen). The research is conducted in an international setting at all participating institutions. English is the lingua franca. What we expect from you We are looking for a highly motivated, creative and talented candidate to enrich a unique consortium of researchers that aims to unravel the neurocognitive mechanisms of language at multiple levels. The goal is to understand both the universality and the variability of the human language faculty from genes to behaviour. The selection criteria include: · a PhD in an area related to the neurobiology of language and/or language sciences; · expertise/interest in theoretical neuroscience and language; · an integrative mindset; · a theory-driven approach; · good communication skills; · excellent proficiency in written and spoken English. 
What we have to offer · employment: 1.0 FTE; · a maximum gross monthly salary of € 4,691 based on a 38-hour working week (salary scale 11); · in addition to the salary: an 8% holiday allowance and an 8.3% end-of-year bonus; · you will be appointed for an initial period of 18 months, after which your performance will be evaluated. If the evaluation is positive, the contract will be extended by 30 months; · the Collective Labour Agreement (CAO) of Dutch Universities is applicable; · you will be classified as Researcher, level 3 in the Dutch university job-ranking system (UFO); · the Dutch universities and institutes involved have a number of regulations that enable employees to create a good work-life balance. Are you interested in our excellent employment conditions? Other Information The institute involved is an equal opportunity employer, committed to building a culturally diverse intellectual community, and as such encourages applications from women and minorities. Would you like to know more? Further information on: Language in Interaction Further information on: Donders Institute for Brain, Cognition and Behaviour For more information about this vacancy, please contact: Prof. dr. Peter Hagoort, programme director Language in Interaction and director of DCCN Telephone: +31 24 3610648, +31 24 3521301 E-mail: p.hagoort at donders.ru.nl Are you interested? You should upload your application (attn. of Prof. dr. P. Hagoort) exclusively using the button 'Apply' below. Your application should include (and be limited to) the following attachment(s): · a cover letter · your curriculum vitae, including a list of publications and the names of at least two people who can provide references Please apply before 17 April 2017, 23:59 CET. [Apply] No commercial propositions please. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 2461 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 40202 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3660 bytes Desc: image003.jpg URL: From mailtome.2113 at gmail.com Thu Mar 23 07:09:07 2017 From: mailtome.2113 at gmail.com (Arti Abhishek) Date: Thu, 23 Mar 2017 17:09:07 +1100 Subject: [FieldTrip] Channel order after interpolation Message-ID: Dear list, I am working with 128-channel EEG data recorded from infants and young children. As there were a few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can add the interpolated channels in the original order? I want to run some scripts outside FieldTrip on the data and the channel order is important. Any help would be greatly appreciated. Thanks, Arti cfg = []; cfg.layout = 'GSN-Hydrocel-129.sfp'; lay = ft_prepare_layout(cfg); cfg_neighb = []; cfg_neighb.layout = lay; cfg_neighb.method = 'triangulation'; cfg_neighb.feedback = 'yes'; EEG_neighbours = ft_prepare_neighbours(cfg_neighb); load('NJ_24_ica_artrej.mat') badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label); cfg = []; cfg.layout = lay; cfg.neighbours = EEG_neighbours; cfg.badchannel = badchannels; cfg.method = 'spline'; cfg.senstype = 'EEG'; NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej); -------------- next part -------------- An HTML attachment was scrubbed... 
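[Editorial note: one way to restore the original order is to build the permutation from the channel labels rather than by hand. A sketch, not part of the original message; it assumes `lay` and `NJ_24_ica_interp` as defined above, and that every layout channel is present in the data after interpolation.]

```matlab
% Sketch: permute the repaired data back into the layout's channel order.
% For each desired label in lay.label, find its row index in the data.
[tf, neworder] = ismember(lay.label(1:129), NJ_24_ica_interp.label);
neworder = neworder(tf);                 % keep only labels found in the data
NJ_24_ica_interp.label = NJ_24_ica_interp.label(neworder);
for k = 1:numel(NJ_24_ica_interp.trial)  % reorder the rows of every trial
  NJ_24_ica_interp.trial{k} = NJ_24_ica_interp.trial{k}(neworder, :);
end
```

Building `neworder` from labels avoids hard-coding indices and keeps working if the set of interpolated channels changes between subjects.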
URL: From julian.keil at gmail.com Thu Mar 23 09:44:38 2017 From: julian.keil at gmail.com (Julian Keil) Date: Thu, 23 Mar 2017 09:44:38 +0100 Subject: [FieldTrip] Channel order after interpolation In-Reply-To: References: Message-ID: <96928AB6-212E-4AD8-B30E-184B252A7465@gmail.com> Dear Arti, if you know exactly where your channels are, and where they ought to be, you can simply build a vector with the index of the current channel at the position where it should be, and assign this vector as a new matrix index. So for example, if you have channels A, B and C, but they should be ordered B-C-A, you can use something like this: neworder = [2 3 1]; % Element 2, should now be at the beginning, then the third element, and then the first; data.avg = data.avg(neworder,:); % Assign neworder to the 2D-Matrix of - for example - trial averaged data Hope this helps, Julian Am 23.03.2017 um 07:09 schrieb Arti Abhishek: > Dear list, > > I am working with 128 channel EEG data recorded from infants and young children. As they had few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can add the interpolated channels in the original order? I want to run some scripts outside fieldtrip on the data and the channel order is important. Any help would be greatly appreciated. 
> > Thanks, > Arti > > cfg =[]; > cfg.layout = 'GSN-Hydrocel-129.sfp'; > lay = ft_prepare_layout(cfg); > cfg = []; > cfg_neighb.layout = lay; > cfg_neighb.method = 'triangulation'; > cfg.feedback = 'yes'; > EEG_neighbours = ft_prepare_neighbours(cfg_neighb); > > load('NJ_24_ica_artrej.mat') > badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label); > > > cfg = []; > cfg.layout = lay; > cfg.neighbours = EEG_neighbours; > cfg.badchannel = badchannels; > cfg.method ='spline'; > cfg.senstype = 'EEG'; > NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej); > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From elinor.tzvi at neuro.uni-luebeck.de Thu Mar 23 11:37:38 2017 From: elinor.tzvi at neuro.uni-luebeck.de (Elinor Tzvi-Minker) Date: Thu, 23 Mar 2017 10:37:38 +0000 Subject: [FieldTrip] =?utf-8?q?OPEN_PHD_POSITION_University_of_L=C3=BCbeck?= =?utf-8?q?=2C_GERMANY?= Message-ID: <00444adb22804d07814214e640867971@hermes.neuro.uni-luebeck.de> The Cognitive Neuroscience Group at the Neurology department of the University of Lübeck offers a PhD position (65% E13 TV-L) starting immediately. The candidate will be working on a project that develops and implements neuromodulation techniques (tDCS) in combination with fMRI and then translates these methods to social neuroscience paradigms. ​ We offer The department of Neurology is part of the Center for Brain, Behavior and Metabolism (CBBM), which offers an excellent and state-of the-art research environment. The research group “Cognitive Neuroscience” (headed by Prof. 
Ulrike Krämer) is working on different topics related to cognitive and affective control (anger and aggression, response inhibition, regulation of eating behavior) and motor control. Our researchers use diverse and complex methods to analyze brain-behavior relationships. At the CBBM, a 3T Skyra MRI research scanner, several EEG labs, fNIRS, TMS and tDCS are available. Thus, we offer an excellent environment for interdisciplinary research. We require The successful candidate will hold an MSc/MA/Dipl. in Psychology or related fields (cognitive science, neuroscience or other). Experience in acquisition and analysis of human neuroimaging data (fMRI, EEG, MEG or NIRS) and programming skills in Matlab (or equivalent) are preferred. Interest and/or experience in the field of cognitive neuroscience are obligatory. We are looking for a motivated, analytic and problem-solving oriented candidate who enjoys interdisciplinary challenges. The candidate will work in the “Cognitive Neuroscience Group” under co-supervision of Dr. Elinor Tzvi-Minker and Prof. Ulrike M. Krämer. Applicants with disabilities are preferred if qualification is equal. The University of Lübeck is an equal opportunity employer, aiming to increase the proportion of women in science. Applications by women are particularly welcome. For questions about the details of the assignment please contact Dr. Elinor Tzvi-Minker (elinor.tzvi at neuro.uni-luebeck.de). Please send your application (letter of motivation, CV, contact information of two references, relevant certificates) as one single complete PDF file to the email address mentioned above. Applications will be considered until the position has been filled. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.chait at ucl.ac.uk Thu Mar 23 23:20:03 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Thu, 23 Mar 2017 22:20:03 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward; deadline next week) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London Key Requirements The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk). 
Maria Chait PhD m.chait at ucl.ac.uk Reader in Auditory Cognitive Neuroscience Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/ UCL Ear Institute 332 Gray's Inn Road London WC1X 8EE -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandrina.guran at uni-luebeck.de Mon Mar 27 11:34:19 2017 From: alexandrina.guran at uni-luebeck.de (Alexandrina Guran) Date: Mon, 27 Mar 2017 09:34:19 +0000 Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection Message-ID: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> Dear FieldTrip Community, My name is Alexandrina Guran, and I am a PhD student at the University of Lübeck who has recently started working with FieldTrip in order to preprocess (and later analyse) EEG data. I have encountered an odd problem that neither I nor the people I asked in the lab could solve, even using the help function and Google: After running the epoching (trial length 5s), filtering (high-pass, low-pass and notch, for a time-frequency analysis) and downsampling (to 250 Hz), I wanted to do an automatic artifact rejection, in order to have exploratory information on how many of my trials would be affected by artifacts and whether there were participants that blinked on a majority of trials, in order to determine whether I should shorten my trial length and/or conduct an ICA. I used the ft_artifact_threshold function, in Matlab R2016b, with different FieldTrip versions (March 2017 as well as late 2016 and late 2015). However, the automatic artifact detection did not work – that is, it would stop rejecting artifacts after some number x of trials (usually between 90 and 140 trials), depending on the participant. I would get an error message, but then the artifact rejection would go on, telling me all trials were ok (even if I set 1 microvolt as a threshold). 
The error message I got is the following: “(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold threshold artifact scanning: trial 129 from 320 is ok threshold artifact scanning: trial 130 from 320 is ok threshold artifact scanning: trial 131 from 320 is ok Warning: data contains NaNs, no filtering or preprocessing applied > In ft_warning (line 184) In preproc (line 283) In ft_artifact_threshold (line 164) In preprocessing (line 266) threshold artifact scanning: trial 132 from 320 is ok threshold artifact scanning: trial 133 from 320 is ok threshold artifact scanning: trial 134 from 320 is ok threshold artifact scanning: trial 135 from 320 is ok threshold artifact scanning: trial 136 from 320 is ok threshold artifact scanning: trial 137 from 320 is ok threshold artifact scanning: trial 138 from 320 is ok threshold artifact scanning: trial 139 from 320 is ok threshold artifact scanning: trial 140 from 320 is ok threshold artifact scanning: trial 141 from 320 is ok threshold artifact scanning: trial 142 from 320 is ok (…)” This was however only the case if I ran the artifact detection on down-sampled data. It worked fine with just filtered data. However, I checked the preprocessed (downsampled) data for NaNs (using the isnan-MATLAB function) and there were none to be found (I also checked visually in one dataset). Has anyone encountered this problem and found a solution? Of course, I considered just doing the downsampling after the automatic and visual artifact rejection, but I would like to be sure that the downsampling will work correctly at any point of the preprocessing and right now I am a little flummoxed at “what is happening” with the data in that function. Down below you can find code excerpts for both the artifact rejection and the downsampling. Both were looped over participants but the error appears regardless of that. 
Downsampling: cfg = []; cfg.dataset = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; %tfdata_filtfilt_ is the epoched and filtered data cfg.resamplefs = 250; cfg.detrend = 'no'; cfg.inputfile = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; cfg.outputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat']; datatfrs = ft_resampledata(cfg) Artifact rejection cfg = []; config = load(['tfcfg_' num2str(subj(s)) '.mat']); cfg.trl = config.cfg.trl; cfg.continuous = 'no' ; cfg.artfctdef.threshold.channel = [1:28 33:63]; %exclude eye channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2' cfg.artfctdef.threshold.max = 75; cfg.artfctdef.threshold.min = -75; cfg.artfctdef.threshold.bpfilter = 'no'; cfg.inputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat']; cfg.outputfile = ['tfdata_artif_' num2str(subj(s)) '.mat']; cfg = ft_artifact_threshold(cfg); save (cfg.outputfile, 'cfg') Since I am new to FieldTrip, I can imagine it to be a “simple/stupid” error having to do with the cfg. Thank you for reading this and trying to help ☺ Best regards Alexandrina -- C.-N. Alexandrina Guran, M.Sc. PhD student Institute of Psychology I University of Lübeck Maria-Goeppert-Straße 9a 23562 Lübeck Germany Building MFC 8, 1st Floor, Room 1 Phone: +49 451 3101 3635 Fax: +49 451 3101 3604 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chuanjigao at gmail.com Mon Mar 27 14:30:44 2017 From: chuanjigao at gmail.com (Jack Gao) Date: Mon, 27 Mar 2017 08:30:44 -0400 Subject: [FieldTrip] Post-hoc tests for cluster-based permutation tests on event-related fields Message-ID: Dear Community, My name is Chuanji Gao, I'm a PhD student in Experimental Psychology Program at University of South Carolina. I'm now analyzing EEG data to get some event-related fields results. There are three conditions (condition1, 2 and 3) that I want to compare. So I used ft_timelockstatistics to run the cluster-based permutation test firstly. The cfg are as below. 
%--------- cfg = [];...cfg.neighbours = neighbours;... cfg.latency = [0.1 0.8];cfg.avgovertime = 'no';cfg.parameter = 'avg';cfg.method = 'montecarlo';cfg.statistic = 'depsamplesFmultivariate'; ...Nsub = 19;cfg.design(1,1:3*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub) 3*ones(1,Nsub)]; cfg.design(2,1:3*Nsub) = [1:Nsub 1:Nsub 1:Nsub];cfg.ivar = 1; cfg.uvar = 2; stat = ft_timelockstatistics(cfg,cond1{:},cond2{:}, cond3{:}); %--------- The null hypothesis was rejected, and it seems the effect was most pronounced from 224ms to 800ms at centro-parietal regions. The next step: I want to do pairwise comparisons of the three conditions. I'm not sure if I should use the time window identified from the last analyses as below: ...cfg.latency = [0.224 0.8];cfg.avgovertime = 'yes';cfg.parameter = 'avg';cfg.method = 'montecarlo';cfg.statistic = 'ft_statfun_depsamplesT'; ...stat = ft_timelockstatistics(cfg,cond1{:},cond2{:}); OR should I use the whole time window as I used in the first analyses as below: ...cfg.latency = [0.1 0.8];cfg.avgovertime = 'no';cfg.parameter = 'avg';cfg.method = 'montecarlo';cfg.statistic = 'ft_statfun_depsamplesT'; ...stat = ft_timelockstatistics(cfg,cond1{:},cond2{:}); I'm inclined to use the whole time window and "non-average over time", but not entirely sure. Can someone give me some suggestions on it? Any help would be very appreciated. Best, Chuanji Chuanji Gao PhD student Department of Psychology University of South Carolina E-Mail chuanji at email.sc.edu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From braingirl at gmail.com Mon Mar 27 16:19:03 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Mon, 27 Mar 2017 10:19:03 -0400 Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection In-Reply-To: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> References: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> Message-ID: I don't see anything obviously wrong with your cfg, but I don't know what is loaded into the config variable - is it possible config.cfg.trl is requesting samples that are not present in the input file? If it's based on the data before downsampling, the sample numbers could be off by a factor of 250. If that's not it, here are some more general troubleshooting tips: First, I would set Matlab to dbstop if warning so it pauses execution at that warning message. You'll need to dbup at least once to get out of ft_warning, and then you'll have access to the workspace of the preproc function. Examine the dat variable for NaNs and see if you can track back to figure out where they were added. Since dat is an input to that function, you might start by typing dbup twice to get to the workspace of ft_artifact_threshold and verify whether any NaNs are present in dat there. If neither of those help you figure out the problem, it should at least give you more info to provide in a bug report to http://bugzilla.fieldtriptoolbox.org/ Hope that helps, Teresa On Mon, Mar 27, 2017 at 5:34 AM, Alexandrina Guran < alexandrina.guran at uni-luebeck.de> wrote: > Dear FieldTrip Community, > > > > My name is Alexandrina Guran, I am a PhD Student at the University of > Lübeck and I have recently started working with FieldTrip in order to > preprocess (and later analyse) EEG data. 
I have encountered an odd problem, that > I nor people I asked in the lab could solve, also using the help function > and google: > > > > After running the epoching (trial length 5s), filtering (high-pass, > low-pass and notch, for a time-frequency analysis) and downsampling (to 250 > Hz), I wanted to do an automatic artifact rejection, in order to have > exploratory information of how many of my trials would be affected by > artifacts and if there were participants that blinked on a majority of > trials in order to determine whether I should shorten my trial length > and/or conduct an ICA. > > > > I used the ft_artifact_threshold function, in Matlab R2016b, with > different FieldTrip versions (march 2017 as well as end 2016 and end 2015). > > However, the automatic artifact detection did not work – that is, it would > stop rejecting artifacts after a number x of trials (usually between 90 and > 140 trials), depending on participant. I would get an error message but > then the artifact rejection would go on, telling me all trials were ok > (even if I set 1 microvolt as a threshold). 
> > The error message I got is the following: > > > > “(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold > > threshold artifact scanning: trial 129 from 320 is ok > > threshold artifact scanning: trial 130 from 320 is ok > > threshold artifact scanning: trial 131 from 320 is ok > > Warning: data contains NaNs, no filtering or preprocessing applied > > > In ft_warning (line 184) > > In preproc (line 283) > > In ft_artifact_threshold (line 164) > > In preprocessing (line 266) > > threshold artifact scanning: trial 132 from 320 is ok > > threshold artifact scanning: trial 133 from 320 is ok > > threshold artifact scanning: trial 134 from 320 is ok > > threshold artifact scanning: trial 135 from 320 is ok > > threshold artifact scanning: trial 136 from 320 is ok > > threshold artifact scanning: trial 137 from 320 is ok > > threshold artifact scanning: trial 138 from 320 is ok > > threshold artifact scanning: trial 139 from 320 is ok > > threshold artifact scanning: trial 140 from 320 is ok > > threshold artifact scanning: trial 141 from 320 is ok > > threshold artifact scanning: trial 142 from 320 is ok (…)” > > > > This was however only the case if I ran the artifact detection on > down-sampled data. It worked fine with just filtered data. > > > > However, I checked the preprocessed (downsampled) data for NaNs (using the > isnan-MATLAB function) and there were none to be found (I also checked > visually in one dataset). > > > > Has anyone encountered this problem and found a solution? > > > > Of course, I considered just doing the downsampling after the automatic > and visual artifact rejection, but I would like to be sure that the > downsampling will work correctly at any point of the preprocessing and > right now I am a little flummoxed at “what is happening” with the data in > that function. > > > > Down below you can find code excerpts for both the artifact rejection and > the downsampling. 
Both were looped over participants but the error appears > regardless of that. > > Downsampling: > > > > cfg = []; > > cfg.dataset = ['tfdata_filtfilt_' > num2str(subj(s)) '.mat']; %tfdata_filtfilt_ is the epoched and filtered > data > > cfg.resamplefs = 250; > > cfg.detrend = 'no'; > > cfg.inputfile = ['tfdata_filtfilt_' > num2str(subj(s)) '.mat']; > > cfg.outputfile = ['tfdata_filt_rs_' > num2str(subj(s)) '.mat']; > > datatfrs = ft_resampledata(cfg) > > > > Artifact rejection > > cfg = []; > > config = load(['tfcfg_' > num2str(subj(s)) '.mat']); > > cfg.trl = config.cfg.trl; > > cfg.continuous = 'no' ; > > cfg.artfctdef.threshold.channel = [1:28 33:63]; %exclude eye > channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2' > > cfg.artfctdef.threshold.max = 75; > > cfg.artfctdef.threshold.min = -75; > > cfg.artfctdef.threshold.bpfilter = 'no'; > > cfg.inputfile = ['tfdata_filt_rs_' > num2str(subj(s)) '.mat']; > > cfg.outputfile = ['tfdata_artif_' > num2str(subj(s)) '.mat']; > > cfg = ft_artifact_threshold(cfg); > > save (cfg.outputfile, 'cfg') > > > > Since I am new to FieldTrip, I can imagine it to be a “simple/stupid” > error having to do with the cfg. > > Thank you for reading this and trying to help J > > > > Best regards > > Alexandrina > > > > > > -- > > C.-N. Alexandrina Guran, M.Sc. > > PhD student > > Institute of Psychology I > > University of Lübeck > > Maria-Goeppert-Straße 9a > 23562 Lübeck > > Germany > > > > Building MFC 8, 1st Floor, Room 1 > > Phone: +49 451 3101 3635 <+49%20451%2031013635> > > Fax: +49 451 3101 3604 <+49%20451%2031013604> > > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. 
Madsen, PhD Research Technical Specialist: in vivo electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: From efrain.torres at marquette.edu Wed Mar 29 07:54:14 2017 From: efrain.torres at marquette.edu (Torres, Efrain) Date: Wed, 29 Mar 2017 05:54:14 +0000 Subject: [FieldTrip] Activity not changing with time, SAM Beamforming Message-ID: When I plot my results using ft_sourceplot, they do not seem to change despite changes in the latency that I indicate through the configuration. Below is my code for SAM beamforming of EEG data. I am unsure what I am doing wrong. Note that the preprocessing was previously done in EEGLab and imported into fieldtrip. cfg.trialdef.eventtype='trigger'; cfg.trialdef.prestim=.2; cfg.trialdef.poststim=.8; cfg.trialdef.ntrials=50; %%This was changed to 1 from 64 to cfg.dataset=rawEEG; cfg=ft_definetrial(cfg) cfg.continuous='yes' cfg.trialfun='ft_trialfun_general' cfg.method='trial' %changed from channel to trial PU74954_PL5=ft_preprocessing(cfg) %% timelock analysis cfg=[]; cfg.covariance='yes'; cfg.covariancewindow='poststim'; cfg.vartrllength=2; timelock=ft_timelockanalysis(cfg,PU74954_PL5); plot(timelock.time, timelock.avg); %% headmodel Subject01='/home/etorres/Desktop/HAL_Fieldtrip/Anatomy/PU7493_1/RAW/anat+orig.BRIK'; mri=ft_read_mri(Subject01); cfg=[]; cfg.output='brain'; seg=ft_volumesegment(cfg, mri); cfg = []; cfg.method = 'singlesphere'; headmodel = ft_prepare_headmodel(cfg, seg); %% Preparing the subject specific grid %hdr=ft_read_header(PU74954_PL5); cfg=[]; cfg.elec=PU74954_PL5.hdr.elec; cfg.headmodel=headmodel; cfg.grid.resolution=1; cfg.grid.unit='cm'; %cfg.inwardshift=-1.5; grid=ft_prepare_sourcemodel(cfg); 
%% Creating the leadfield cfg=[]; cfg.elec=PU74954_PL5.hdr.elec; cfg.reducerank=3; cfg.headmodel=headmodel; cfg.grid=grid; cfg.normalize='yes'; lf=ft_prepare_leadfield(cfg); %% Source Analysis cfg=[]; cfg.method='sam'; cfg.grid=lf; cfg.headmodel=headmodel; %cfg.keepfilter='yes'; cfg.lcmv.fixedori='yes'; source_avg=ft_sourceanalysis(cfg,timelock); %% Plotting Results mri = ft_read_mri(Subject01); mri = ft_volumereslice([], mri); cfg=[]; cfg.parameter='avg.pow'; [interp]=ft_sourceinterpolate(cfg,source_avg,mri); cfg=[]; cfg.method='slice'; cfg.funcolorlim=[0 10]; cfg.nslices=25; cfg.latency=-.1; cfg.funcolormap='jet'; cfg.funparameter='avg.pow'; ft_sourceplot(cfg, interp); Efrain Torres -------------- next part -------------- An HTML attachment was scrubbed... URL: From gunnar.norrman at biling.su.se Wed Mar 29 09:10:29 2017 From: gunnar.norrman at biling.su.se (Gunnar Norrman) Date: Wed, 29 Mar 2017 07:10:29 +0000 Subject: [FieldTrip] PhD position at Centre for Research on Bilingualism, Stockholm University Message-ID: <1490771429408.37501@biling.su.se> The Centre for Research on Bilingualism at Stockholm University is announcing a fully funded 4-year PhD position in bilingualism, starting fall 2017. The Centre is an interdisciplinary unit with focus on psycholinguistic and sociolinguistic aspects of bilingualism, including bilingual cognition and second language acquisition. We offer a vibrant interdisciplinary research environment, as well as a fully equipped EEG/ERP and Eye Tracking lab, and we strongly encourage students with a background in any of these methodologies to apply. Read more about the position here: http://www.su.se/english/about/vacancies/vacancies-new-list?rmpage=job&rmjob=2862&rmlang=UK Applications are submitted through the university recruitment system, and the last date for applications is April 18, 2017. 
--- Gunnar Norrman Centre for Research on Bilingualism, Stockholm University +46 (0)8 16 3643 | gunnar.norrman at biling.su.se -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Thu Mar 30 12:19:46 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Thu, 30 Mar 2017 10:19:46 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' Message-ID: Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in Virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** ----------------------------------------------------- Op deze e-mail zijn de volgende voorwaarden van toepassing : The following disclaimer applies to the e-mail message : http://www.nhtv.nl/disclaimer ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elam4hcp at gmail.com Thu Mar 30 21:28:58 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Thu, 30 Mar 2017 14:28:58 -0500 Subject: [FieldTrip] HCP Course 2017: Faculty and Course Schedule Available -- Register Now! Message-ID: Faculty listings and the full schedule of covered topics are now available for the 2017 HCP Course: "Exploring the Human Connectome" , to be held June 19-23 at the Djavad Mowafagian Centre for Brain Health at University of British Columbia (UBC) in Vancouver, BC, Canada! The 5-day intensive course is a great opportunity to learn directly from HCP investigators and gain practical experience with the Human Connectome Project's approach to multimodal whole brain imaging acquisition, processing, analysis, visualization, and sharing of data and results. For more info and to register visit the HCP Course 2017 website . Don't delay, registration is limited, and the course is filling up fast! Discounted on-site UBC accommodations are available through May 17, 2017 to attendees reserving through the HCP Course room block . If you have any questions, please contact us at: hcpcourse at humanconnectome. org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff Jennifer Elam, Ph.D. Scientific Outreach, Human Connectome Project Washington University School of Medicine Department of Neuroscience, Box 8108 660 South Euclid Avenue St. Louis, MO 63110 314-362-9387 elam at wustl.edu www.humanconnectome.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Fri Mar 31 09:31:23 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Fri, 31 Mar 2017 07:31:23 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' In-Reply-To: References: Message-ID: Dear list, I posted a PhD vacancy yesterday, but I included a link that for some reason does not seem to work. 
Below is the full vacancy text as can be found on the website of Tilburg University. Apologies for the multiple posting. Best, Marcel PhD student on Decoding emotions from the brain (1,0 fte) PhD student on Decoding emotions from the brain, Departments of Cognitive Neuropsychology and Methodology and Statistics (1,0 fte) Project description Central aim of the PhD research is to decode / classify discrete categories of emotions, based on recordings of neural activity (EEG) and other physiological measures (HR, GSR, facial EMG). Emotion induction will be realized using Tilburg University’s advanced Virtual and Augmented Reality facilities. Emotion classification will be performed using state-of-the-art machine learning and data science techniques in order to optimize the sensitivity to identify and classify (differences in) emotional states. The PhD project will be supervised by promotors prof.dr. J. Vroomen, and co-promotors dr. Katrijn van Deun and dr. Marcel C.M. Bastiaansen. A more detailed project description is available upon request from dr. Marcel Bastiaansen. Tasks * Designing and conducting research; * Presenting findings on scientific conferences; * Reporting findings in international journals, resulting in a dissertation; * Participating in the graduate school; * Participating in the teaching program of the departments. Qualifications * Master’s degree (preferably Research Master) in cognitive neuroscience or a closely related discipline; * hands-on experience with EEG data analysis (preferably Fieldtrip); * Fluency in spoken English and excellent writing skills in English; * Programming skills (Matlab, R), and a keen interest in using advanced data analysis techniques are an important asset; * Experience with VR would be helpful; * Willingness and proven ability to work independently. Terms of Employment Tilburg University is among the top Dutch employers and has an excellent policy concerning terms of employment. 
The collective employment terms and conditions for Dutch universities will apply. The appointment is intended to lead to the completion of a PhD thesis. The PhD appointment at Tilburg University begins with a period of 12 months. Continuation of the appointment with another 36 months will be based on performance evaluation. The gross salary for the PhD position amounts to € 2.191 per month in the first year, rising to € 2.801 per month in the fourth year, based on a full-time appointment (38 hours per week). Applications and Information Additional information about the vacancy can be obtained from Dr. Marcel Bastiaansen, M.C.M.Bastiaansen at tilburguniversity.edu, tel.: +31 13 466 2408. Applicants should send their CV and a covering letter to Hans-Georg van Liempd MSc, Managing Director, Tilburg School of Social and Behavioral Sciences, only by the link mentioned below. The closing date for applications is April 9th 2017. Tilburg School of Social and Behavioral Sciences Tilburg School of Social and Behavioral Sciences (TSB) is a modern, specialized university. The teaching and research of the Tilburg School of Social and Behavioral Sciences are organized around the themes of Health, Organization, and Relations between State, Citizen, and Society. The School's inspiring working environment challenges its workers to realize their ambitions; involvement and cooperation are essential to achieve this. Tilburg School of Social and Behavioral Sciences Department of Cognitive Neuropsychology The Department of Cognitive Neuropsychology of Tilburg University consists of a vibrant mix of people interested in cognitive and clinical neuropsychology. Our department is an intellectually exciting and productive group, advancing fundamental understanding in cognitive neuroscience and clinical neuropsychology. Our research is highly recognized both nationally and internationally. 
Our fundamental research is focused on the integration of information from different modalities (hearing, seeing, touch) for perceiving speech, emotions, and crossmodal synchrony in the healthy population and in patient groups with autism, schizophrenia, or developmental dyslexia. We use behavioral measures and a variety of psychophysical methods like eye tracking, EEG, and fMRI. We have access to the DAF Technology Lab for creating Virtual Reality. Department Methodology and Statistics The Department of Methodology and Statistics is an internationally renowned group, holding several experts in data science methods, latent variable methods, psychometrics, meta-research, survey methodology, and other applied statistics fields. The department has a strong tradition of working with the other (substantive) research programs in our School. The department is part of the School of Social and Behavioral Sciences at Tilburg University and responsible for the teaching and the research in the area of methodology and statistics for the social and behavioral sciences, the Data Science programs (including the novel joint bachelor in Data Science together with the Technical University of Eindhoven), and the Liberal Arts and Science program of Tilburg University. The department is a member of the Interuniversity Graduate School for Psychometrics and Sociometrics (IOPS). Recruitment code Tilburg University applies the recruitment code of the Dutch Association for Personnel Management & Organization Development (NVP). Disclaimer The text of this vacancy advertisement is copyright-protected property of Tilburg University. Use, distribution and further disclosure of the advertisement without express permission from Tilburg University is not allowed, and this applies explicitly to use by recruitment and selection agencies which do not act directly on the instructions of Tilburg University. Responses resulting from recruitment by non-contractors of Tilburg University will not be handled. 
*** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** From: Bastiaansen, Marcel Sent: donderdag 30 maart 2017 12:20 To: fieldtrip at science.ru.nl Cc: J.Vroomen at uvt.nl; k.vandeun at tilburguniversity.edu Subject: PhD position Tilburg University on 'decoding emotions from the brain' Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in Virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. 
Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting address: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** ----------------------------------------------------- Op deze e-mail zijn de volgende voorwaarden van toepassing : The following disclaimer applies to the e-mail message : http://www.nhtv.nl/disclaimer ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From narendra.kumar at iitrpr.ac.in Fri Mar 31 13:34:25 2017 From: narendra.kumar at iitrpr.ac.in (narendra karna) Date: Fri, 31 Mar 2017 17:04:25 +0530 Subject: [FieldTrip] Regarding Analysis of EGI's EEG Data using fieldtrip Message-ID: Hi, I am pursuing a PhD in Linguistics. I don't know much about MATLAB. I have recently done an EEG/ERP experiment using EGI's 128-channel EEG system. I came to know that FieldTrip supports the analysis of EGI's EEG data. So, if possible, could anyone send me a script for analysing EGI's EEG data, including ICA? Thanks. Narendra Research Scholar Department of Humanities and Social Sciences Indian Institute of Technology Ropar Punjab, India - 140001 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From max-philipp.stenner at med.ovgu.de Fri Mar 31 15:13:37 2017 From: max-philipp.stenner at med.ovgu.de (Stenner, Max-Philipp) Date: Fri, 31 Mar 2017 13:13:37 +0000 Subject: [FieldTrip] PhD on human motor learning at the Leibniz Institute for Neurobiology, Magdeburg/Germany Message-ID: Dear fieldtrip community, a 3-year PhD position is available for a research project on the role of neural oscillations in motor learning in humans, with Dr Max-Philipp Stenner and Prof Jens-Max Hopf at the Leibniz Institute for Neurobiology in Magdeburg, Germany (http://www.lin-magdeburg.de/en/departments/behavioral_neurology/physiology_motorlearning/index.jsp). Please find all details in the attached pdf. Best wishes, Max-Philipp Stenner -------------- next part -------------- A non-text attachment was scrubbed... Name: PhD ad.pdf Type: application/pdf Size: 150597 bytes Desc: PhD ad.pdf URL: From dlozanosoldevilla at gmail.com Fri Mar 31 16:40:36 2017 From: dlozanosoldevilla at gmail.com (Diego Lozano-Soldevilla) Date: Fri, 31 Mar 2017 16:40:36 +0200 Subject: [FieldTrip] how to make the cfg.selectfeature work in ft_databrowser? Message-ID: Hi all, I'm using ft_databrowser to inspect sleep data and I want to visually mark different events (spindles, K-complexes, artifacts, and so forth) and assign them to different cfg.artfctdef.xxx.artifact substructures. Could somebody help me to mark different artifact trial types using the cfg.selectfeature option? Please find below the code and data to reproduce the error I got. I'm using the very latest FieldTrip version on Windows with MATLAB 7.9b.
Thanks beforehand, Diego

data = [];
data.label = {'Fpz';'F7';'F3';'Fz';'F4';'F8';'C3';'Cz';'C4';'P3';'Pz';'P4';'O1';'Oz';'O2'};
data.fsample = 250;
data.trial{1} = rand(size(data.label,1),data.fsample*30);
data.time{1} = (1:data.fsample*30)./data.fsample;

cfg = [];
cfg.length = 2;
cfg.overlap = 0;
trl = ft_redefinetrial(cfg,data);

cfg = [];
cfg.channel = 'all';
cfg.blocksize = 2;
cfg.selectfeature = {'a';'b'};
cfg.viewmode = 'vertical';
events = ft_databrowser(cfg,trl);

the input is raw data with 15 channels and 15 trials
detected 0 a artifacts
detected 0 b artifacts
??? Error using ==> plus
Matrix dimensions must agree.
Error in ==> ft_databrowser at 745
hsel = [1 2 3] + (opt.ftsel-1) .*3;
??? Reference to non-existent field 'trlvis'.
Error in ==> ft_databrowser>redraw_cb at 1639
begsample = opt.trlvis(opt.trlop, 1);
Error in ==> ft_databrowser>winresize_cb at 2250
redraw_cb(h,eventdata);
??? Error while evaluating figure ResizeFcn

-------------- next part -------------- An HTML attachment was scrubbed... URL: From sunsunruirui1111 at gmail.com Fri Mar 31 17:40:38 2017 From: sunsunruirui1111 at gmail.com (Rachel S) Date: Fri, 31 Mar 2017 11:40:38 -0400 Subject: [FieldTrip] Fwd: OpenMEEG binaries are not correctly installed In-Reply-To: References: Message-ID: Hello fieldtrip community, My name is Rachel and I am a Master's student working on an ECoG project. I am trying to use ft_prepare_headmodel with cfg.method = 'openmeeg' and I get the error "OpenMEEG binaries are not correctly installed". I use a Windows machine and I have already added the OpenMEEG install folder to 'PATH'. When I ran system('om_assemble'), the output was:

om_assemble version 2.1.0 (799) compiled at Aug 17 2011 19:50:41
Not enough arguments
Please try "om_assemble -h" or "om_assemble --help"
ans = 0

Any suggestions? Thanks in advance. Best wishes, Rachel -------------- next part -------------- An HTML attachment was scrubbed...
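Rachel's system('om_assemble') output shows the binary resolves in a plain shell, so the check FieldTrip performs fails only when the PATH that MATLAB inherited differs from the interactive one. The lookup itself is simple to sketch in Python (a hedged illustration, not FieldTrip code; om_assemble is the real binary name, everything else here is made up):

```python
import shutil

def find_binary(name):
    """Return the full path of an executable on the current PATH, or None.

    shutil.which performs the same lookup that a shell (or MATLAB's
    system() call) does: scan each PATH entry for an executable file
    with the given name.
    """
    return shutil.which(name)

# A shell that exists on virtually every machine resolves to a path...
print(find_binary('sh') is not None or find_binary('cmd') is not None)
# ...while a name missing from PATH comes back as None, which is the
# situation FieldTrip reports as "binaries are not correctly installed".
print(find_binary('no-such-openmeeg-binary') is None)
```

Since om_assemble runs fine in the shell, one hedged guess is that MATLAB was started before 'PATH' was edited and still holds the old environment; restarting MATLAB, or inspecting getenv('PATH') inside it, would confirm or rule that out.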
URL: From sebastian.jobke at nexgo.de Fri Mar 31 18:07:43 2017 From: sebastian.jobke at nexgo.de (Sebastian Jobke) Date: Fri, 31 Mar 2017 18:07:43 +0200 Subject: [FieldTrip] How to compute permutation test on ITC data Message-ID: <00f501d2aa38$ee73c260$cb5b4720$@nexgo.de> Hello Fieldtrip-Community, I am writing you to ask for some help. At the moment I am analysing EEG data gained during a passive oddball paradigm. For the preprocessing I used EEGLAB, transformed the data to the FieldTrip structure and computed time-frequency analysis and ITC, for which you provided great tutorials. Now I am a little stuck, because I was wondering how to compute permutation tests on ITC data. I have several subjects and want to compare two conditions (standards and deviants). I saw that there is a function (FT_STATFUN_DIFF_ITC) for this, but I unfortunately don't know how to use it. More specifically, I was wondering how to average over subjects and whether I have to do the permutation test on every frequency band again (this I did for the time-frequency analysis, as described in your tutorial). Further, I was wondering how to use ft_freqstatistics with ITC data, as you described in the tutorial. For any advice, I would be more than grateful. Thank you very much in advance. Best, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 1 13:08:55 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 1 Mar 2017 12:08:55 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields In-Reply-To: References: Message-ID: <07BC6EA1-15AE-4528-B9CC-05BA838317F0@cfin.au.dk> Hi Nicole, Lead field normalization is a different approach from Van Veen's method, which is often called the Neural Activity Index (NAI) and closely related to the "unit noise gain" or "weight normalization" concept you might see in some literature. I have implemented the NAI in beamformer_lcmv.m, which you can run with:

cfg.method = 'lcmv';
cfg.lcmv.weightnorm = 'nai';

However, the equivalent has not been implemented in the other beamformer variants yet (SAM, DICS). You can still get output equivalent to SAM using the LCMV method if you use cfg.keeptrials = 'yes' and average the power of the resulting time series (in source.avg.mom). This would give you a measure of induced power changes (rather than evoked), like the SAM procedure would.
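The ft_compute_leadfield snippet quoted earlier in this thread can be checked outside MATLAB. Below is a hedged NumPy re-implementation (Python, not FieldTrip code; the 64-channel, 5-dipole leadfield is made up for illustration), confirming the point that each dipole's three columns are scaled by their own Frobenius norm, with no reference to a noise estimate:

```python
import numpy as np

def normalize_leadfield(lf, normalizeparam=0.5):
    """Per-dipole leadfield normalization, mirroring the quoted
    ft_compute_leadfield loop: each Nchan-by-3 block is divided by
    sum(block**2) ** normalizeparam, the Frobenius norm when 0.5."""
    lf = lf.copy()
    n_dipoles = lf.shape[1] // 3
    for ii in range(n_dipoles):
        block = lf[:, 3*ii:3*ii+3]
        nrm = np.sum(block**2) ** normalizeparam
        if nrm > 0:
            lf[:, 3*ii:3*ii+3] = block / nrm
    return lf

rng = np.random.default_rng(1)
lf = rng.standard_normal((64, 3 * 5))   # 64 channels, 5 dipoles
lf_norm = normalize_leadfield(lf)

# After normalization every 3-column block has unit Frobenius norm,
# regardless of how deep (i.e. how weak) the original dipole was.
print(all(np.isclose(np.linalg.norm(lf_norm[:, 3*i:3*i+3], 'fro'), 1.0)
          for i in range(5)))
```

This depends only on the leadfield itself; Van Veen's Eq. 27 instead divides the source estimate by a noise projection, which this normalization never touches.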
Unfortunately this procedure is not yet documented, but it’s not too tricky. (Please use a brand new version of FieldTrip if you’d like to try this, as an old bug in the NAI orientation selection was inadvertently re-introduced in FieldTrip versions between September 2016 and last week). I personally find that the NAI gives more sensible results if you are contrasting something like post-stimulus activity to a pre-stimulus baseline. If you are instead contrasting two conditions against each other rather than a baseline, then the different normalization approaches should give (almost) the same results anyway. Anyway, regarding lead field normalization: it does indeed do a voxel-by-voxel normalization since it cycles through all the voxels in a for loop ('for ii=1:Ndipoles' on the second line). It is purely based on the properties of the lead field, and as you noticed, is unlike Van Veen’s method in that it does not use the noise estimate at all. BTW, I believe that the lead field "column normalization" approach has been more popular in the literature. This normalizes the x/y/z components of the lead field independently, rather than all together. You can try this with cfg.normalize = ‘column’ and see how the results compare. Cheers, Sarang On 01 Mar 2017, at 11:43, Klink-3, N.E.C. van > wrote: Dear all, I want to do SAM beamformer source localization on single trial EEG data. I would like to normalize the leadfields to correct for depth, like mentioned in the lmcv beamformer tutorial: (http://www.fieldtriptoolbox.org/tutorial/beamformer_lcmv) cfg = []; cfg.elec = hdr.elec; % electrode distances cfg.headmodel = vol; % volume conduction headmodel cfg.grid = grid; % normalized grid positions cfg.channel = {'EEG'}; cfg.normalize = 'yes'; % to remove depth bias (Q in eq. 27 of van Veen et al, 1997). 
lf = ft_prepare_leadfield(cfg); However when I look what happens with cfg.normalize='yes', the following script is used in ft_compute_leadfield, from line 570: case 'yes' for ii=1:Ndipoles tmplf = lf(:, (3*ii-2):(3*ii)); if normalizeparam==0.5 % normalize the leadfield by the Frobenius norm of the matrix % this is the same as below in case normalizeparam is 0.5 nrm = norm(tmplf, 'fro'); else % normalize the leadfield by sum of squares of the elements of the leadfield matrix to the power "normalizeparam" % this is the same as the Frobenius norm if normalizeparam is 0.5 nrm = sum(tmplf(:).^2)^normalizeparam; end if nrm>0 tmplf = tmplf ./ nrm; end lf(:, (3*ii-2):(3*ii)) = tmplf; end This seems to me as independent of the dipole location, and does not use an estimate of the noise spectrum as in Eq 27 of van Veen et al 1997. DICS beamformer has the option to estimate the noise spectrum with 'projectnoise', but SAM beamformer does not have that option. SAM does something with noise and a lambda, which is noise regularization I guess (beamformer_sam from line 102). I use Fieldtrip 20170212. My main question: how do I correct the leadfields for depth bias? Thanks in advance, Nicole ________________________________ De informatie opgenomen in dit bericht kan vertrouwelijk zijn en is uitsluitend bestemd voor de geadresseerde. Indien u dit bericht onterecht ontvangt, wordt u verzocht de inhoud niet te gebruiken en de afzender direct te informeren door het bericht te retourneren. Het Universitair Medisch Centrum Utrecht is een publiekrechtelijke rechtspersoon in de zin van de W.H.W. (Wet Hoger Onderwijs en Wetenschappelijk Onderzoek) en staat geregistreerd bij de Kamer van Koophandel voor Midden-Nederland onder nr. 30244197. Denk s.v.p aan het milieu voor u deze e-mail afdrukt. ________________________________ This message may contain confidential information and is intended exclusively for the addressee. 
If you receive this message unintentionally, please do not use the contents but notify the sender immediately by return e-mail. University Medical Center Utrecht is a legal person by public law and is registered at the Chamber of Commerce for Midden-Nederland under no. 30244197. Please consider the environment before printing this e-mail. _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.urai at gmail.com Wed Mar 1 19:38:44 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 1 Mar 2017 10:38:44 -0800 Subject: [FieldTrip] compiling ft_volumenormalise Message-ID: Hi FieldTrippers, I compile my code to run on the supercomputer cluster (without many matlab licenses), which usually works fine when I do something like:

addpath('~/Documents/fieldtrip');
ft_defaults;
addpath('~/Documents/fieldtrip/external/spm8');
mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...
    '-R', '-nodisplay', '-R', '-singleCompThread', fname);

However, compiling the ft_volumenormalise function gives me some problems. Specifically, if source is the result of my beamformer analysis, this code

cfg = [];
cfg.parameter = 'pow';
cfg.nonlinear = 'no'; % can warp back to individual
cfg.template = '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii';
cfg.write = 'no';
cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage will bug
source = ft_volumenormalise(cfg, source);

works fine when running it within Matlab.
However, when I run the executable after compiling (which completes without error), a low-level SPM function throws the following error:

the input is source data with 16777216 brainordinates on a [256 256 256] grid
Warning: could not reshape "freq" to the expected dimensions
> In ft_datatype_volume (line 136)
In ft_checkdata (line 350)
In ft_volumenormalise (line 98)
In B6b_sourceContrast_volNormalise (line 57)
Converting the coordinate system from ctf to spm
Undefined function 'fname' for input arguments of type 'struct'
Error in file_array (line 32)
Error in spm_create_vol>create_vol (line 77)
Error in spm_create_vol (line 16)
Error in volumewrite_spm (line 71)
Error in ft_write_mri (line 65)
Error in align_ctf2spm (line 168)
Error in ft_convert_coordsys (line 95)
Error in ft_volumenormalise (line 124)
Error in B6b_sourceContrast_volNormalise (line 57)
MATLAB:UndefinedFunction

I'd be very grateful for hints from anyone who's successfully compiled the ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer at compilation does not solve the problem. Thanks, — Anne E. Urai, MSc PhD student | Institut für Neurophysiologie und Pathophysiologie Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | Hamburg, Germany www.anneurai.net / @AnneEUrai -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Wed Mar 1 20:22:48 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 1 Mar 2017 14:22:48 -0500 Subject: [FieldTrip] error in filter_with_correction In-Reply-To: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> References: <7AC17F80-7F8D-4EC9-87F5-1B3279AC8DE1@mail.ucsd.edu> Message-ID: Did you also try searching the mailing list archives? The same error has come up a few times. What is your trial duration and sampling frequency? You'll need several seconds to get an accurate idea of what's going on in such low frequencies.
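The instability behind this error is reproducible outside FieldTrip: a band-pass with cutoffs this far below Nyquist puts the IIR poles almost on the unit circle, and the transfer-function (b, a) realisation can tip them over numerically. A hedged SciPy sketch (Python, not FieldTrip code; the 1000 Hz sampling rate is an assumption, since the actual rate is unknown) shows that a second-order-sections design of the same 0.1-2 Hz filter stays stable:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# A 0.1-2 Hz band-pass at a typical MEG sampling rate places the poles
# extremely close to z = 1; in (b, a) form, rounding can push them onto
# or outside the unit circle, the exact error filter_with_correction
# reports.  Second-order sections keep the design numerically stable.
fs = 1000.0
sos = butter(4, [0.1, 2.0], btype='bandpass', fs=fs, output='sos')

# Every section's denominator roots stay strictly inside the unit circle.
pole_mags = np.abs(np.concatenate([np.roots(sec[3:]) for sec in sos]))
print(pole_mags.max() < 1.0)

# Zero-phase filtering of a 10 s noise trace then works without blowing up.
x = np.random.default_rng(0).standard_normal(int(10 * fs))
y = sosfiltfilt(sos, x)
print(np.isfinite(y).all())
```

FieldTrip's own remedies go in the same direction: a higher cutoff, a lower filter order, or (as suggested in this thread) detrending plus a plain low-pass instead of an extreme band-pass.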
Have you tried just detrending and applying a 2 Hz low-pass filter? It seems like that might have essentially the same effect. Hope one of those helps, Teresa

On Fri, Feb 24, 2017 at 8:29 PM, Wong-Barnum, Mona wrote:
> Hello fellow FieldTrip'er:
>
> Can someone help me understand and hopefully fix the following runtime error message I am seeing (I searched a bit on the website documentation but didn't find anything):
>
> Error using filter_with_correction (line 51)
> Calculated filter coefficients have poles on or outside the unit circle and will not be stable. Try a higher cutoff frequency or a different type/order of filter.
>
> Error in filter_with_correction (line 51)
> error('Calculated filter coefficients have poles on or outside the unit circle and will not be stable. Try a higher cutoff frequency or a different type/order of filter.');
>
> Error in ft_preproc_bandpassfilter (line 286)
> filt = filter_with_correction(B,A,dat,dir,usefftfilt);
>
> Error in preproc (line 324)
> if strcmp(cfg.bpfilter, 'yes'), dat = ft_preproc_bandpassfilter(dat, fsample, cfg.bpfreq, cfg.bpfiltord, cfg.bpfilttype, cfg.bpfiltdir, cfg.bpinstabilityfix, cfg.bpfiltdf, cfg.bpfiltwintype, cfg.bpfiltdev, cfg.plotfiltresp, cfg.usefftfilt); end
>
> Error in ft_preprocessing (line 592)
> [cutdat{i}, label, time{i}, cfg] = preproc(dat, hdr.label(rawindx), tim, cfg, begpadding, endpadding);
>
> Error in test (line 25)
> data = ft_preprocessing ( cfg );
>
> Error in run (line 96)
> evalin('caller', [script ';']);
>
> Here is my script:
>
> addpath /path/to/my/fieldtrip
> ft_defaults
>
> % 1. MEG
> disp ( 'Reading 1.fif...' )
> cfg = [];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> disp ( 'Getting MEG channel 1...' )
> meg_channel = ft_channelselection ( 'MEG0111', data.label );
> cfg = [];
> cfg.channel = meg_channel;
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving meg...' )
> save meg.mat meg -v7.3;
> clearvars cfg meg;
>
> % 2. Low delta MEG
> disp ( 'Low delta MEG...' )
> cfg = [];
> cfg.bpfilter = 'yes';
> cfg.bpfreq = [0.1 2];
> cfg.dataset = '1.fif';
> data = ft_preprocessing ( cfg );
>
> cfg = [];
> cfg.channel = meg_channel;
> cfg.frequency = [0.1 2];
> meg = ft_selectdata ( cfg, data );
> disp ( 'Saving low delta meg...' )
> save low_delta_meg.mat meg -v7.3;
> clearvars cfg meg;
>
> Line #25 is the last "data = ft_preprocessing ( cfg );" line.
>
> If I do cfg.bpfreq = [2 4] then there is no error but I really like to get this low [0.1 2] range…any tips?
>
> Mona
>
> *********************************************
> Mona Wong
> Web & Mobile Application Developer
> San Diego Supercomputer Center
>
> Believing we are in control is an
> illusion that brings suffering.
> *********************************************
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip

-- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo* electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Wed Mar 1 21:55:03 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 1 Mar 2017 15:55:03 -0500 Subject: [FieldTrip] marking artifacts by channel + trial Message-ID: Hello All, When performing visual artifact rejection, I want to be able to mark artifacts that occur during some specific trials and only on some specific channels. In the tutorials I see only ways to mark bad channels (i.e. across all trials) or bad trials (i.e. across all channels). Does FieldTrip handle marking artifacts restricted to some channel/trial combination?
Thanks, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From boris.burle at univ-amu.fr Thu Mar 2 15:19:18 2017 From: boris.burle at univ-amu.fr (Boris BURLE) Date: Thu, 2 Mar 2017 15:19:18 +0100 Subject: [FieldTrip] Post-doc position in development of cognitive control, Marseille, France Message-ID: <0ce55d3b-c424-633f-225b-2495e615a120@univ-amu.fr> Dear colleagues, Please find below a post-doc position offer that may be of interest to Fieldtrip users: B. Burle ------------------------------------- Post-doc research position in Developmental Psychology/Cognitive Neuroscience in Marseille, France We are seeking a highly motivated fellow for a 2-year (potentially renewable) post-doc position to conduct EEG and structural MRI studies from children to young adults. This position is open within a larger project aiming at tracking the development of cognitive control from childhood to adulthood. In the first phase of the project so far, we have collected behavioral data in a large cohort of more than 400 participants (from 5 to 14 years old) performing conflict tasks. The second phase, in which oculometry (to extract pupil dilation and eye movements) and electromyography (to extract the so-called "partial errors") are recorded in another group of children (comparable age span), is currently being completed. Capitalizing on the results of the first two phases, the hired fellow will be mainly involved in the third phase of this project, which will study the cortical components related to executive control maturation. EEG (and EMG) will be recorded on children performing conflict tasks to track the maturation of the different electrophysiological markers of executive control. The same children will undergo a structural MRI scan to get precise anatomy and connectivity, along with resting-state activity. The recruited fellow will be in charge of the acquisition and processing of those data.
The evolution of the EEG markers and of performance will be related to the maturation state of the different brain areas of interest and their connectivity. Candidates should hold a PhD in cognitive/developmental psychology/neuroscience. Expertise in either EEG or structural MRI is required. Having experience with children is a real plus, and if this experience is in association with one of the two techniques listed above, that is a major advantage. However, candidates having a strong background in one of those techniques but no experience with children are still encouraged to apply. Knowledge of a high-level programming language (python, matlab, R...) is a real plus. The daily work language will be English, but given the extensive interactions with children, non-French-speaking applicants would have to learn a minimum amount of French (French courses can be attended on site). The project is interdisciplinary, at the crossroads of developmental psychology, the cognitive neuroscience of cognitive control, and neuroimaging. The recruited fellow will hence interact with researchers in all three domains. Besides, the project is embedded in the vibrant, second-biggest "behavioral and brain sciences" community in France. State-of-the-art methodologies are accessible (research-dedicated MRI: a latest-generation Siemens 3T Prisma scanner, MEG, robotized TMS, high-resolution EEG, etc.). Marseille is located in the south of France (Provence), on the shore of the Mediterranean Sea, and is known for its very nice weather and surroundings: it is bordered by the beautiful "Calanques", the Alps are within a 1h30 ride, and so are the major cultural cities of Provence (Aix-en-Provence, Avignon, Arles...). Salary is based on experience according to the CNRS (French National Center for Scientific Research) rules and will be around €2000 net. Applications are encouraged immediately, and will remain open until the position is filled. The position is available immediately.
Please send applications (and/or requests for more information) to boris.burle at univ-amu.fr with [Post-Doc Devel] in the subject of the mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Thu Mar 2 15:53:06 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Thu, 2 Mar 2017 09:53:06 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: No, not really. The only way I've found to do that is to loop through my artifact rejection process on each trial individually, then merge them back together with NaNs filling in where there are artifacts, but then that breaks every form of analysis I want to do. :-P I wonder if it would work to fill in the artifacts with 0s instead of NaNs....I might play with that. Let me know if you're interested in some example code. ~Teresa On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > Hello All, > > When performing visual artifact rejection, I want to be able to mark > artifacts that occur during some specific trials and only on some specific > channels. In the tutorials I see only ways to mark bad channels (i.e. > across all trials) or bad trials (i.e. across all channels). Does FieldTrip > handle marking artifacts restricted to some channel/trial combination? > > Thanks, > Tim > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo* electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed...
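The trade-off described above, NaN-marking (which breaks standard reductions) versus zero-filling (which silently biases them), shows up clearly in a toy average. A hedged NumPy illustration (Python, not FieldTrip code; the numbers are made up):

```python
import numpy as np

# One channel, 10 samples, with an "artifact" in samples 3-5.
trial = np.arange(10, dtype=float)    # clean values 0..9, true mean 4.5
masked = trial.copy()
masked[3:6] = np.nan                  # NaN-marking the artifact span

print(np.mean(masked))                # nan -- a plain mean is broken by NaNs
print(round(np.nanmean(masked), 3))   # 4.714 -- mean of the surviving samples
zeroed = np.where(np.isnan(masked), 0.0, masked)
print(np.mean(zeroed))                # 3.3 -- zero-filling drags the mean toward 0
```

NaN-aware reductions such as np.nanmean (or MATLAB's nanmean / the 'omitnan' flag) keep per-sample averages honest over the clean data, whereas zero-filling quietly biases every estimate toward zero, which is the concern with the 0s idea in the message above.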
URL: From timeehan at gmail.com Thu Mar 2 15:55:14 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 2 Mar 2017 09:55:14 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Hi Teresa, Thanks for the reply. I'll take a look at your example if you don't mind sharing. Thanks! Tim On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote: > No, not really. The only way I've found to do that is to loop through my > artifact rejection process on each trial individually, then merge them back > together with NaNs filling in where there are artifacts, but then that > breaks every form of analysis I want to do. :-P > > I wonder if it would work to fill in the artifacts with 0s instead of > NaNs....I might play with that. Let me know if you're interested in some > example code. > > ~Teresa > > > On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: > >> Hello All, >> >> When performing visual artifact rejection, I want to be able to mark >> artifacts that occur during some specific trials and only on some specific >> channels. In the tutorials I see only ways to mark bad channels (i.e. >> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >> handle marking artifacts restricted to some channel/trial combination? >> >> Thanks, >> Tim >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > Teresa E. Madsen, PhD > Research Technical Specialist: *in vivo *electrophysiology & data > analysis > Division of Behavioral Neuroscience and Psychiatric Disorders > Yerkes National Primate Research Center > Emory University > Rainnie Lab, NSB 5233 > 954 Gatewood Rd. 
NE > Atlanta, GA 30329 > (770) 296-9119 > braingirl at gmail.com > https://www.linkedin.com/in/temadsen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martabortoletto at yahoo.it Fri Mar 3 09:16:53 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Fri, 3 Mar 2017 08:16:53 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy In-Reply-To: <1398938405.1509799.1488528843896@mail.yahoo.com> References: <1398938405.1509799.1488528843896.ref@mail.yahoo.com> <1398938405.1509799.1488528843896@mail.yahoo.com> Message-ID: <89519094.172506.1488529013260@mail.yahoo.com> Dear all, Please find below an announcement for a post-doc position to work on a project of TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to possibly interested candidates. Cheers, Marta Bortoletto and Anna Fertonani Marta Bortoletto, PhD Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli Via Pilastroni 4, 25125 Brescia, Italy Phone number: (+39) 0303501594 E-mail: marta.bortoletto at cognitiveneuroscience.it web: http://www.cognitiveneuroscience.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Job description.pdf Type: application/pdf Size: 18043 bytes Desc: not available URL: From marc.lalancette at sickkids.ca Fri Mar 3 18:22:22 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Fri, 3 Mar 2017 17:22:22 +0000 Subject: [FieldTrip] Normalization of beamformer leadfields Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B964C34@SKMBXX01.sickkids.ca> Normalizing columns of the leadfield separately is not recommended. It is not a rotationally invariant operation, meaning you will get different results depending on your choice of coordinate system, which in short means that it introduces a physically meaningless bias, thus potentially amplitude and localization distortions. Note that this is also true of the unit-noise-gain normalization formula for the vector beamformer of Sekihara (which may still be used in some software, but is not in Fieldtrip). I was planning on writing a short paper on this, but unfortunately never found the time. I had a poster at Biomag 2014. Here's the link, but note that I later found errors in the computations for the "source bias and resolution figures" so it's probably best to ignore them, though the general idea that there are orientation and possibly location biases in most vector formulae is still valid http://dx.doi.org/10.6084/m9.figshare.1148970 . Maybe I'll redo the figures and post a "corrected" version at some point. Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 ________________________________ This e-mail may contain confidential, personal and/or health information(information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. 
If you have received this e-mail in error, please contact the sender and delete all copies. From v.litvak at ucl.ac.uk Fri Mar 3 18:59:54 2017 From: v.litvak at ucl.ac.uk (Vladimir Litvak) Date: Fri, 3 Mar 2017 17:59:54 +0000 Subject: [FieldTrip] SPM course for MEG/EEG in London: May 8-10, 2017 Message-ID: Dear all, We are pleased to announce that our annual SPM course for MEG/EEG will take place this year from Monday May 8 to Wednesday May 10, 2017. Hosted by University College London, the course will be held at Queen Square, a very central location in London (UK). The course will present instruction on the analysis of MEG and EEG data. The first two days will combine theoretical presentations with practical demonstrations of the different data analysis methods implemented in SPM. On the last day participants will have the opportunity to work on SPM tutorial data sets under the supervision of the course faculty. We also invite students to bring their own data for analysis. The course is suitable for both beginners and more advanced users. The topics that will be covered range from pre-processing and statistical analysis to source localization and dynamic causal modelling. The program is listed below. Registration is now open. For full details see http://www.fil.ion.ucl.ac.uk/spm/course/london/ where you can also register. Available places are limited so please register as early as possible if you would like to attend! ---------------------- Monday May 8th (33 Queen Square, basement) 9.00 - 9.30 Registration 9.30 - 9.45 SPM introduction and resources Guillaume Flandin 9.45 - 10.30 What are we measuring with M/EEG?
Saskia Heibling 10.30 - 11.15 Data pre-processing Hayriye Cagnan Coffee 11.45 - 12.30 Data pre-processing – demo Sofie Meyer, Misun Kim 12.30 - 13.15 General linear model and classical inference Christophe Phillips Lunch 14.15 - 15.00 Multiple comparisons problem and solutions Guillaume Flandin 15.00 - 15.45 Bayesian inference Christophe Mathys Coffee 16.15 - 17.45 Group M/EEG dataset analysis - demo Jason Taylor, Martin Dietz 17.45 - 18.30 Advanced applications of the GLM Ashwani Jha, Bernadette van Wijk Tuesday May 9th (33 Queen square, basement) 9.30 - 10.15 M/EEG source analysis Gareth Barnes 10.15 - 11.15 M/EEG source analysis – demo Jose Lopez, Leonardo Duque Coffee 11.45 - 12.30 The principles of dynamic causal modelling Bernadette van Wijk 12.30 - 13.15 DCM for evoked responses Ryszard Auksztulewicz Lunch 14.15 - 15.00 DCM for steady state responses Rosalyn Moran 15.00 - 15.45 DCM - demo Richard Rosch, Tim West Coffee 16.15 - 17.00 Bayesian model selection and averaging Peter Zeidman 17.00 - 18.30 Clinic - questions & answers Karl Friston 19.00 - ... Social Event Wednesday May 10th 9.30 - 17.00 Practical hands-on session in UCL computer class rooms. Participants can either work on SPM tutorial datasets or on their own data with the help of the faculty. There will also be an opportunity to ask questions in small tutorial groups for further discussions on the topics of the lectures. -------------- next part -------------- An HTML attachment was scrubbed... URL: From braingirl at gmail.com Fri Mar 3 23:31:04 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Fri, 3 Mar 2017 17:31:04 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Here's a rough sketch of my approach, with one custom function attached. If you or others find it useful, maybe we can think about ways to incorporate it into the FieldTrip code. 
I've been working mostly with scripts, but you've inspired me to work on functionizing the rest of it so it's more shareable. So, assuming raw multichannel data has been loaded into FieldTrip structure 'data' with unique trial identifiers in data.trialinfo... for ch = 1:numel(data.label) %% pull out one channel at a time cfg = []; cfg.channel = data.label{ch}; datch{ch} = ft_selectdata(cfg, data); %% identify large z-value artifacts and/or whatever else you might want cfg = []; cfg.artfctdef.zvalue.channel = 'all'; cfg.artfctdef.zvalue.cutoff = 15; cfg.artfctdef.zvalue.trlpadding = 0; cfg.artfctdef.zvalue.fltpadding = 0; cfg.artfctdef.zvalue.artpadding = 0.1; cfg.artfctdef.zvalue.rectify = 'yes'; [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); %% replace artifacts with NaNs cfg = []; cfg.artfctdef.zvalue.artifact = artifact.zvalue; cfg.artfctdef.reject = 'nan'; datch{ch} = ft_rejectartifact(cfg,datch{ch}); end %% re-merge channels data = ft_appenddata([],datch); %% mark uniform NaNs as artifacts when they occur across all channels % and replace non-uniform NaNs (on some but not all channels) with zeroes, saving times [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, see attached %% reject artifacts by breaking into sub-trials cfg = []; cfg.artfctdef.nan2zero.artifact = artifact; cfg.artfctdef.reject = 'partial'; data = ft_rejectartifact(cfg,data); %% identify real trials trlinfo = unique(data.trialinfo,'rows','stable'); for tr = 1:size(trlinfo,1) %% calculate trial spectrogram cfg = []; cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); cfg.keeptrials = 'no'; % refers to sub-trials cfg.method = 'mtmconvol'; cfg.output = 'powandcsd'; cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz cfg.tapsmofrq = cfg.foi/10; % smooth by 10% cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W cfg.toi = '50%'; cfg.pad = 'nextpow2'; freq = ft_freqanalysis(cfg,data); %% replace powspctrm & crsspctrum values with NaNs 
% where t_ftimwin (or wavlen for wavelets) overlaps with artifact for ch = 1:numel(freq.label) badt = [times{tr,ch}]; if ~isempty(badt) && any(... badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); for t = 1:numel(freq.time) for f = 1:numel(freq.freq) mint = freq.time(t) - freq.cfg.t_ftimwin(f); maxt = freq.time(t) + freq.cfg.t_ftimwin(f); if any(badt > mint & badt < maxt) freq.powspctrm(ch,f,t) = NaN; freq.crsspctrm(ci,f,t) = NaN; end end end end end %% save corrected output save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); end On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: > Hi Teresa, > > Thanks for the reply. I'll take a look at your example if you don't mind > sharing. Thanks! > > Tim > > On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote: > >> No, not really. The only way I've found to do that is to loop through my >> artifact rejection process on each trial individually, then merge them back >> together with NaNs filling in where there are artifacts, but then that >> breaks every form of analysis I want to do. :-P >> >> I wonder if it would work to fill in the artifacts with 0s instead of >> NaNs....I might play with that. Let me know if you're interested in some >> example code. >> >> ~Teresa >> >> >> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >> >>> Hello All, >>> >>> When performing visual artifact rejection, I want to be able to mark >>> artifacts that occur during some specific trials and only on some specific >>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>> handle marking artifacts restricted to some channel/trial combination? 
>>> >>> Thanks, >>> Tim >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> >> -- >> Teresa E. Madsen, PhD >> Research Technical Specialist: *in vivo *electrophysiology & data >> analysis >> Division of Behavioral Neuroscience and Psychiatric Disorders >> Yerkes National Primate Research Center >> Emory University >> Rainnie Lab, NSB 5233 >> 954 Gatewood Rd. NE >> Atlanta, GA 30329 >> (770) 296-9119 >> braingirl at gmail.com >> https://www.linkedin.com/in/temadsen >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- function [artifact,data,times] = artifact_nan2zero_TEM(data) % ARTIFACT_NAN2ZERO_TEM marks NaNs that occur uniformly across all channels % as artifacts, taking FT format data structure & returning the same % format as ft_artifact_xxx, for input to ft_rejectartifact. Non-uniform % NaNs (those not present on all channels) are replaced with 0s to avoid % breaking analysis functions. Also returns times of replaced NaNs by % trial & channel, so they can be changed back to NaNs in freq output. % % written 3/2/17 by Teresa E. 
Madsen artifact = []; times = cell(numel(data.trial), numel(data.label)); for tr = 1:numel(data.trial) % find NaNs to mark as artifacts (present uniformly across all channels) trlnan = isnan(data.trial{tr}); % identify NaNs by channel & timepoint allnan = all(trlnan,1); % need to specify dim in case of single channel % find, save timepoints, & replace non-uniform NaNs (not on all chs) w/ 0 replacenan = trlnan & repmat(~allnan,size(trlnan,1),1); for ch = 1:numel(data.label) times{tr,ch} = data.time{tr}(replacenan(ch,:)); % ID before replacing end data.trial{tr}(replacenan) = 0; % replace these w/ 0s if any(allnan) % determine the file sample #s for this trial trsamp = data.sampleinfo(tr,1):data.sampleinfo(tr,2); while any(allnan) % start from the end so sample #s don't shift endnan = find(allnan,1,'last'); allnan = allnan(1:endnan); % remove any non-NaNs after this % find last non-NaN before the NaNs beforenan = find(~allnan,1,'last'); if isempty(beforenan) % if no more non-NaNs begnan = 1; allnan = false; % while loop ends else % still more to remove - while loop continues begnan = beforenan + 1; allnan = allnan(1:beforenan); % remove the identified NaNs end % identify file sample #s that correspond to beginning and end of % this chunk of NaNs and append to artifact artifact = [artifact; trsamp(begnan) trsamp(endnan)]; %#ok end % while any(allnan) end % if any(allnan) end % for tr = 1:numel(data.trial) end From gaur-p at email.ulster.ac.uk Mon Mar 6 13:49:16 2017 From: gaur-p at email.ulster.ac.uk (Pramod Gaur) Date: Mon, 6 Mar 2017 12:49:16 +0000 Subject: [FieldTrip] Problem in buffer connection Message-ID: <518001d29678$12cbe540$3863afc0$@email.ulster.ac.uk> Dear community, My name is Pramod Gaur and I am a PhD student at Ulster University in the UK working on Brain-Computer Interfaces. Currently I am trying to implement the real-time classification problem mentioned in the tutorials. We have a Neuromag Elekta MEG machine.
I tried to execute the following commands, but it hangs. strcom = 'buffer://ip-address-of-acquisition-machine:1972'; hdr = ft_read_header(strcom, 'cache', true); I executed the command ./neuromag2ft on the acquisition computer. Can anybody please suggest how this problem could be resolved? Any help would be highly appreciated. Best Regards, Pramod Gaur -------------- next part -------------- An HTML attachment was scrubbed... URL: From changa5 at mcmaster.ca Mon Mar 6 19:04:31 2017 From: changa5 at mcmaster.ca (Andrew Chang) Date: Mon, 6 Mar 2017 13:04:31 -0500 Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix? Message-ID: Dear Fieldtrip users, I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using the colin27 template) to the EEG coordinate system, and then reslicing the MRI onto a cubic grid. However, I found that ft_volumereslice rotates the MRI image, which seems weird. This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below): [image: Inline image 1] However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below): [image: Inline image 3] I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on the xyz dimensions, when I use the 'headshape' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters does not seem to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem. Any advice or help would be much appreciated! Thank you all in advance!
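One way to make the observed rotation concrete is to inspect the voxel-to-head transform of the realigned MRI. This is only a diagnostic sketch (it assumes the variable name mri_realigned2 from the script below): ft_volumereslice interpolates onto an axis-aligned grid, so any rotation remaining in this transform is rendered as a visibly rotated image.

```matlab
% Diagnostic sketch (assumes Andrew's variable mri_realigned2): look at the
% rotation component of the 4x4 voxel-to-head transform.
T = mri_realigned2.transform;                  % 4x4 homogeneous transform
R = T(1:3, 1:3);                               % linear part: rotation * scaling
R = R ./ repmat(sqrt(sum(R.^2, 1)), 3, 1);     % divide out per-column scaling
disp(R)   % the further from the identity, the stronger the apparent rotation
```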
Here is the .mat file of what I have done: https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0 Here is my code: %% load MRI [mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii'); %% load elec locations % I do not have the channel location or the headshape file, so I use a template cap to build the channel locations and headshape load('chanCfg') sphcoor = [Theta,Phi]'; cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coordinates into xyz elec.elecpos = cartcoor; elec.chanpos = cartcoor; elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels elec.unit = 'cm'; shape.pos = elec.elecpos; shape.label = elec.label; shape.unit = elec.unit; shape.coordsys = 'spm'; %% Coregister the anatomical MRI to the EEG coordinate system cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'spm'; [mri_realigned1] = ft_volumerealign(cfg, mri_orig); cfg = []; mri_realigned2 = []; cfg.method = 'headshape'; cfg.coordsys = 'spm'; cfg.headshape = shape; [mri_realigned2] = ft_volumerealign(cfg, mri_orig); % key in the following parameters for controlling the alignment % rotation: [0,0,0.5] % scale: [0.95, .8, .8] % translate: [0, 15, 0] cfg = []; cfg.resolution = 1; cfg.xrange = [-100 100]; cfg.yrange = [-110 110]; cfg.zrange = [-50 120]; mri_resliced = ft_volumereslice(cfg, mri_realigned2); Best, Andrew -- Andrew Chang, Ph.D. Candidate Vanier Canada Graduate Scholar http://changa5.wordpress.com/ Auditory Development Lab Department of Psychology, Neuroscience & Behaviour McMaster University -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: realigned.jpg Type: image/jpeg Size: 157286 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: resliced.jpg Type: image/jpeg Size: 145531 bytes Desc: not available URL: From bqrosen at ucsd.edu Tue Mar 7 03:38:22 2017 From: bqrosen at ucsd.edu (Burke Rosen) Date: Tue, 7 Mar 2017 02:38:22 +0000 Subject: [FieldTrip] units of the leadfield matrix Message-ID: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> Hello, What are the units of the leadfield matrix produced by ft_compute_leadfield for EEG, gradiometers, and magnetometers? In particular, when using the OpenMEEG BEM method. Thank you, Burke Rosen -------------- next part -------------- An HTML attachment was scrubbed... URL: From rikkert.hindriks at upf.edu Tue Mar 7 08:53:51 2017 From: rikkert.hindriks at upf.edu (HINDRIKS, RIKKERT) Date: Tue, 7 Mar 2017 08:53:51 +0100 Subject: [FieldTrip] units of the leadfield matrix In-Reply-To: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> References: <2E1FF091B841104E961728E963EE40D5BA6554F2@XMAIL-MBX-AT1.AD.UCSD.EDU> Message-ID: https://mailman.science.ru.nl/pipermail/fieldtrip/2015-August/009561.html On Tue, Mar 7, 2017 at 3:38 AM, Burke Rosen wrote: > Hello, > > What are the units of the leadfield matrix produced by > ft_compute_leadfield for EEG, gradiometers, and magnetometers? > > In particular, when using the OpenMEEG BEM method. > > Thank you, > > Burke Rosen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Tue Mar 7 09:17:50 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Tue, 7 Mar 2017 08:17:50 +0000 Subject: [FieldTrip] ft_volumereslice rotates the brain, how to fix? In-Reply-To: References: Message-ID: <33D0E3BA-5A8B-4262-80D1-99A5AB94C268@donders.ru.nl> Hi Andrew, What’s the point in doing the second, headshape-based alignment?
I suppose that the template electrode positions are defined in a different coordinate system than ‘spm’? If so, be aware that these template positions probably do not nicely match the reconstructed headsurface from the template MRI, so you need to do the headshape-based alignment by hand, since the automatic ICP algorithm will probably get caught in an inappropriate local minimum. As long as you don’t rotate around the z-axis, I would assume that the ‘rotation’ would go away. Note that the rotation of the image itself (as per ft_volumereslice) is not the problem, but the fact that it is rotated probably is, because that suggests that your coregistration between anatomy and electrodes does not make sense. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 06 Mar 2017, at 19:04, Andrew Chang > wrote: Dear Fieldtrip users, I am following the tutorial (http://www.fieldtriptoolbox.org/tutorial/natmeg/dipolefitting) to work on coregistering the anatomical MRI (using the colin27 template) to the EEG coordinate system, and then reslicing the MRI onto a cubic grid. However, I found that ft_volumereslice rotates the MRI image, which seems weird. This is the sourceplot of the realigned MRI (from the 'mri_realigned2' variable, see the code below): However, this is the sourceplot of the resliced MRI, which was rotated in 3 dimensions (from the 'mri_resliced' variable, see the code below): I found that this rotation effect can be modulated by adjusting the parameters [rotation, scale, translate] on the xyz dimensions, when I use the 'headshape' method for ft_volumerealign (see the code below). However, the effect of adjusting these parameters does not seem to be linear or intuitive at all, and I cannot find the best combination to fix the rotation problem. Any advice or help would be much appreciated!
Thank you all in advance! Here is the .mat file of what I have done: https://www.dropbox.com/s/viazz1vaq8gjyqb/fixingRotationMRI.mat?dl=0 Here is my code %% load MRI [mri_orig] = ft_read_mri('colin27_t1_tal_lin.nii'); %% load elec locations % I do not have the channel location or the headshape file, so I use a template cap to build the channel locations and headshape load('chanCfg') sphcoor = [Theta,Phi]'; cartcoor = elp2coor(sphcoor,10)'; % converting theta/phi coorfinates into xyz elec.elecpos = cartcoor; elec.chanpos = cartcoor; elec.label = ChannelName; % 'ChannelName' is a cell array of channel labels elec.unit = 'cm'; shape.pos = elec.elecpos; shape.label = elec.label; shape.unit = elec.unit ; shape.coordsys = 'spm'; %% Coregister the anatomical MRI to the EEG coordinate system cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'spm'; [mri_realigned1] = ft_volumerealign(cfg, mri_orig); cfg = []; mri_realigned2 = []; cfg.method = 'headshape'; cfg.coordsys = 'spm'; cfg.headshape = shape; [mri_realigned2] = ft_volumerealign(cfg, mri_orig); % key in the following parameter for controlling the alignment % rotation: [0,0,0.5] % scale: [0.95, .8, .8] % translate: [0, 15, 0] cfg = []; cfg.resolution = 1; cfg.xrange = [-100 100]; cfg.yrange = [-110 110]; cfg.zrange = [-50 120]; mri_resliced = ft_volumereslice(cfg, mri_realigned2); Best, Andrew -- Andrew Chang, Ph.D. Candidate Vanier Canada Graduate Scholar http://changa5.wordpress.com/ Auditory Development Lab Department of Psychology, Neuroscience & Behaviour McMaster University _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jens.klinzing at uni-tuebingen.de Tue Mar 7 23:36:04 2017 From: jens.klinzing at uni-tuebingen.de (=?ISO-8859-1?Q?=22Jens_Klinzing=2C_Universit=E4t_T=FCbingen=22?=) Date: Tue, 07 Mar 2017 23:36:04 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni Message-ID: <58BF35D4.7060400@uni-tuebingen.de> Dear Fieldtrip community, when calling ft_prepare_sourcemodel to create an individual sourcemodel I get quite different 'inside' definitions for the same subject when a) providing an unsegmented MRI and warping to the template MNI (see attachment: green) b) when providing an already segmented MRI (see attachment: blue) In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel). Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? Fieldtrip does not allow to provide an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment but they seem to be used only when I already provide a segmented MRI (so they can't help me here). I could just use the cfg.inwardshift option to fix the issue but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation. Thanks in advance for your help! All the best, Jens -------------- next part -------------- A non-text attachment was scrubbed... 
Name: sourcemodel green_warpmni blue_nowarp_onsegmentedmri.PNG Type: image/png Size: 55645 bytes Desc: not available URL: From m.chait at ucl.ac.uk Wed Mar 8 01:04:46 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Wed, 8 Mar 2017 00:04:46 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London Key Requirements The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk). 
Maria Chait PhD m.chait at ucl.ac.uk Reader in Auditory Cognitive Neuroscience Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/ UCL Ear Institute 332 Gray's Inn Road London WC1X 8EE -------------- next part -------------- An HTML attachment was scrubbed... URL: From ainsley.temudo at nyu.edu Wed Mar 8 07:52:03 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 10:52:03 +0400 Subject: [FieldTrip] Source Reconstruction Message-ID: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. First I read in my MRI and determine the coordinate system, which is LPS. mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') Next I realign to the CTF coordinate system by marking the NAS, LPA, and RPA. cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mri); I read in the sensor information and added in the coordinates for the marker positions. We have five marker positions; the three I picked were the left and right ear markers and the middle forehead marker.
grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point cordinates into the configuration which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); when I use ft_sensorrealign I get the following errors : Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) when I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. Thanks, Ainsley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 08:26:38 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 07:26:38 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: Message-ID: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Hi Ainsley, Why would you want to use sensorrealign/electroderealign since you have MEG-data? The former functions may be needed for EEG electrodes, not for MEG sensors. 
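For MEG the usual route, sketched below, is to leave the gradiometer description untouched and instead realign the anatomical MRI into the coordinate system the sensors already live in (file names are taken from this thread; the fiducial choices are up to the user, so treat this as a starting point rather than a recipe):

```matlab
% Sketch: coregister the anatomy to the MEG coordinate system without
% touching the sensors (file names as used earlier in this thread).
mri = ft_read_mri('WMCP1011+22+t1mprage.nii');

cfg          = [];
cfg.method   = 'interactive';    % mark NAS/LPA/RPA on the anatomy
cfg.coordsys = 'ctf';
mri_aligned  = ft_volumerealign(cfg, mri);

% the gradiometer array read with ft_read_sens stays as-is; the headmodel
% and sourcemodel are then derived from mri_aligned, so sensors, headmodel
% and sourcemodel all end up in one coordinate system.
grad = ft_read_sens('srcLocTest01_FT_01.con');
```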
Best wishes, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 07:52, Ainsley Temudo > wrote: Hi FieldTrip Experts, I am trying to perform source reconstruction, and I am having trouble with coregistering my anatomical with the sensors. The MEG system we're using is Yokogawa and the anatomical is a NIFTI file. I get some errors when using ft_sensorrealign and ft_electroderealign. I will go through the steps I took before getting to this stage, as maybe I have done something wrong. first I read in my MRI and determine the coordinate system which is LPS. mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); mri = ft_determine_coordsys(mriunknown, 'interactive','yes') next I realign to the CTF coordinate system by marking the NAS LPA, RPA cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_ctf = ft_volumerealign(cfg, mir); I read in the sensor information and added in the coordinates for the marker positions. we have five marker positions, the three I picked were the left and right ear markers and the middle forehead marker. grad=ft_read_sens('srcLocTest01_FT_01.con'); grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; grad.fid.label = {'NAS' 'LPA' 'RPA'}; I then put the template marker point cordinates into the configuration which were taken from the mri_ctf cfg = []; cfg.method = 'fiducial'; cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; cfg.target.label = {'NAS' 'LPA' 'RPA'}; grad_aligned = ft_sensorrealign(cfg, grad); when I use ft_sensorrealign I get the following errors : Undefined function or variable 'lab'. 
Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) when I use ft_electroderealign I get the following errors: Error using ft_fetch_sens (line 192) no electrodes or gradiometers specified. Error in ft_electroderealign (line 195) elec_original = ft_fetch_sens(cfg); Hope you can help me figure out why I'm getting these errors. Thanks, Ainsley _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 08:27:30 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 07:27:30 +0000 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni References: Message-ID: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Hi Jens, What does the ‘green’ point cloud look like relative to the blue points when you switch off the non-linear step in recipe a)? 
JM > On 07 Mar 2017, at 23:36, Jens Klinzing, Universität Tübingen wrote: > > Dear Fieldtrip community, > when calling ft_prepare_sourcemodel to create an individual sourcemodel I get quite different 'inside' definitions for the same subject when > a) providing an unsegmented MRI and warping to the template MNI (see attachment: green) > b) when providing an already segmented MRI (see attachment: blue) > > In fact, the extent of the inside in scenario a) is pretty similar to when I create a sourcemodel based on the skull instead of the brain. So maybe the segmentation during the warping process is the problem (for warped sourcemodels the inside field is just copied from the template sourcemodel). > > Is there a way to influence the segmentation performed by ft_prepare_sourcemodel when warping to the template MNI? > Fieldtrip does not allow to provide an already segmented MRI in this case (error: missing anatomy). I expected the options cfg.threshold and cfg.smooth to be analogous to the threshold and smooth options for ft_volumesegment but they seem to be used only when I already provide a segmented MRI (so they can't help me here). > > I could just use the cfg.inwardshift option to fix the issue but I'm afraid that the MNI-warping itself may be affected in case the problem actually results from a flawed segmentation. > > Thanks in advance for your help! > > All the best, > Jens > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From ainsley.temudo at nyu.edu Wed Mar 8 09:03:58 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 12:03:58 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
I originally used ft_sensorrealign, but I got error messages, one of which said 'FT_SENSORREALIGN is deprecated, please use FT_ELECTRODEREALIGN instead', so that's why I used ft_electroderealign instead, even though it's MEG data. I've been following this page to do the realignment. http://www.fieldtriptoolbox.org/getting_started/yokogawa?s[]=yokogawa If I use ft_sensorrealign, how should I deal with these error messages? Undefined function or variable 'lab'. Error in channelposition (line 314) n = size(lab,2); Error in ft_datatype_sens (line 328) [chanpos, chanori, lab] = channelposition(sens); Error in ft_sensorrealign (line 212) elec_original = ft_datatype_sens(elec_original); % ensure up-to-date sensor description (Oct 2011) Is there another way to realign my anatomical with my MEG sensors without using ft_sensorrealign? Thanks, Ainsley On Wed, Mar 8, 2017 at 11:26 AM, Schoffelen, J.M. (Jan Mathijs) < jan.schoffelen at donders.ru.nl> wrote: > Hi Ainsley, > > Why would you want to use sensorrealign/electroderealign since you have > MEG-data? The former functions may be needed for EEG electrodes, not for > MEG sensors. > > Best wishes, > Jan-Mathijs > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > On 08 Mar 2017, at 07:52, Ainsley Temudo wrote: > > Hi FieldTrip Experts, > > I am trying to perform source reconstruction, and I am having trouble with > coregistering my anatomical with the sensors. The MEG system we're using is > Yokogawa and the anatomical is a NIFTI file. I get some errors when using > ft_sensorrealign and ft_electroderealign. I will go through the steps I > took before getting to this stage, as maybe I have done something wrong. > > First I read in my MRI and determine the coordinate system, which is LPS.
> > mriunknown = ft_read_mri('WMCP1011+22+t1mprage.nii'); > mri = ft_determine_coordsys(mriunknown, 'interactive','yes') > > Next I realign to the CTF coordinate system by marking the NAS, LPA, RPA > > cfg = []; > cfg.method = 'interactive'; > cfg.coordsys = 'ctf'; > > mri_ctf = ft_volumerealign(cfg, mri); > > I read in the sensor information and added in the coordinates for the > marker positions. We have five marker positions; the three I picked were > the left and right ear markers and the middle forehead marker. > > grad=ft_read_sens('srcLocTest01_FT_01.con'); > > grad.fid.pnt(1,:) = [96.07 3.11 -5.32]./10; > grad.fid.pnt(2,:) = [11.13 75.50 -78.23]./10; > grad.fid.pnt(3,:) = [8.50 -75.09 -64.60]./10; > > grad.fid.label = {'NAS' 'LPA' 'RPA'}; > > I then put the template marker point coordinates into the configuration, > which were taken from the mri_ctf > > cfg = []; > cfg.method = 'fiducial'; > cfg.target.pnt(1,:) = [91.1 3.0 49.2]./10; > cfg.target.pnt(2,:) = [-0.1 70.5 0.0]./10; > cfg.target.pnt(3,:) = [0.1 -70.5 0.0]./10; > > cfg.target.label = {'NAS' 'LPA' 'RPA'}; > > grad_aligned = ft_sensorrealign(cfg, grad); > > When I use ft_sensorrealign I get the following errors: > > Undefined function or variable 'lab'. > > Error in channelposition (line 314) > n = size(lab,2); > > Error in ft_datatype_sens (line 328) > [chanpos, chanori, lab] = channelposition(sens); > > Error in ft_sensorrealign (line 212) > elec_original = ft_datatype_sens(elec_original); % ensure up-to-date > sensor description (Oct 2011) > > When I use ft_electroderealign I get the following errors: > > Error using ft_fetch_sens (line 192) > no electrodes or gradiometers specified. > > Error in ft_electroderealign (line 195) > elec_original = ft_fetch_sens(cfg); > > Hope you can help me figure out why I'm getting these errors.
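The 'fiducial' realignment being configured above reduces to a rigid-body fit (rotation plus translation) that maps the three measured markers onto the three template markers. FieldTrip implements this in MATLAB; the sketch below re-derives the same fit in numpy purely for illustration, using the coordinates from the email. The helper name `rigid_fit` and the Kabsch-style SVD solution are illustrative, not FieldTrip's actual code.

```python
import numpy as np

def rigid_fit(src, tgt):
    """Least-squares R, t with R @ src_i + t ~= tgt_i (Kabsch algorithm)."""
    sc, tc = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - sc).T @ (tgt - tc)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution (det = -1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, tc - R @ sc

# measured marker positions (cm), taken from grad.fid.pnt above
meas = np.array([[96.07,   3.11,  -5.32],
                 [11.13,  75.50, -78.23],
                 [ 8.50, -75.09, -64.60]]) / 10.0
# template positions from the realigned MRI (cm), as in cfg.target.pnt above
tmpl = np.array([[91.1,   3.0, 49.2],
                 [-0.1,  70.5,  0.0],
                 [ 0.1, -70.5,  0.0]]) / 10.0

R, t = rigid_fit(meas, tmpl)
aligned = meas @ R.T + t                            # same transform for all sensors
residual = np.linalg.norm(aligned - tmpl, axis=1)   # per-fiducial misfit (cm)
```

A large residual here is a quick sanity check for swapped or mislabeled fiducials, which is worth ruling out before blaming the realignment function itself.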
> Thanks, > Ainsley > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > > > > > J.M.Schoffelen, MD PhD > Senior Researcher, VIDI-fellow - PI, language in interaction > Telephone: +31-24-3614793 <+31%2024%20361%204793> > Physical location: room 00.028 > Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- *Ainsley Temudo* Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 8 09:15:48 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 08:15:48 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Ainsley, I have never worked with ‘yokogawa’ data myself, so I can’t be of much help. The documentation you point to is already several years old, and appears not to have been actively maintained. Most likely the error you get is caused by incompatibility between the current version of the FieldTrip code, and the example code provided. Perhaps someone that has recently done coregistration between anatomical data and yokogawa data can chime in? Best wishes, Jan-Mathijs On 08 Mar 2017, at 09:03, Ainsley Temudo > wrote: Hi Jan-Mathijs Thanks for getting back to me so quickly. 
Thanks, Ainsley _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands -- Ainsley Temudo Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- An HTML attachment was scrubbed... URL: From jens.klinzing at uni-tuebingen.de Wed Mar 8 10:22:30 2017 From: jens.klinzing at uni-tuebingen.de ("Jens Klinzing, Uni Tübingen") Date: Wed, 08 Mar 2017 10:22:30 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> Message-ID: <58BFCD56.2080508@uni-tuebingen.de> Hi Jan-Mathijs, the size difference is still there with cfg.grid.nonlinear = no. Best, Jens > Schoffelen, J.M. (Jan Mathijs) > Wednesday, 8 March 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)? > > JM > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 101843 bytes Desc: not available URL: From seymourr at aston.ac.uk Wed Mar 8 11:19:13 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Wed, 8 Mar 2017 10:19:13 +0000 Subject: [FieldTrip] Source Reconstruction Message-ID: Hi Ainsley, Good to see you're using Fieldtrip + Yokogawa data as well :D As I'm sure you're aware the issue is that "unlike other systems, the Yokogawa system software does not automatically analyse its sensor locations relative to fiducial coils". One workaround option is to do your coregistration in the Yokogawa/KIT software MEG160 and then export the sensor locations. You can then follow a more standard FT coregistration route without the need to use ft_sensorrealign. As Jan Mathijs said the http://www.fieldtriptoolbox.org/getting_started/yokogawa page is very outdated, so I will update it at some point in the future with more relevant info + updated code for sensor realignment. Many thanks, Robert Seymour -------------- next part -------------- An HTML attachment was scrubbed... URL: From sarang at cfin.au.dk Wed Mar 8 11:34:03 2017 From: sarang at cfin.au.dk (Sarang S. Dalal) Date: Wed, 8 Mar 2017 10:34:03 +0000 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: <1488969241.5011.5.camel@cfin.au.dk> Hi Ainsley, You might consider realigning your MEG to your MRI, rather than the other way around. Our group typically does it this way to also simplify some other aspects of our pipeline, in particular, to simplify (re-)use of BEM head models and plotting the final source maps on the participant's own MRI or the MNI template. You can find examples of our scripts on github: https://github.com/meeg-cfin/nemolab Check out basics/nemo_mriproc.m -- you may need to add your particular yokogawa system to line 45. 
(Note that I've tested this procedure on Elekta, CTF, and 4D/BTi data, but not yet Yokogawa.) An example of how to put all the pipeline pieces together for a basic LCMV source analysis and visualization of ERF data is given in: basics/nemo_sourcelocER.m Best wishes, Sarang On Wed, 2017-03-08 at 08:15 +0000, Schoffelen, J.M. (Jan Mathijs) wrote: > Hi Ainsley, > > I have never worked with ‘yokogawa’ data myself, so I can’t be of > much help. The documentation you point to is already several years > old, and appears not to have been actively maintained. Most likely > the error you get is caused by incompatibility between the current > version of the FieldTrip code, and the example code provided. Perhaps > someone that has recently done coregistration between anatomical data > and yokogawa data can chime in? > > Best wishes, > Jan-Mathijs
> > > > Thanks, > > > > Ainsley From ainsley.temudo at nyu.edu Wed Mar 8 10:36:22 2017 From: ainsley.temudo at nyu.edu (Ainsley Temudo) Date: Wed, 8 Mar 2017 13:36:22 +0400 Subject: [FieldTrip] Source Reconstruction In-Reply-To: References: <20B3451F-DE4F-4BF7-A42E-E1A17A766EDC@donders.ru.nl> Message-ID: Hi Jan-Mathijs, I managed to get ft_electroderealign to work after some debugging, which involved commenting out parts of the script. Could you take a look at the two images I've attached? The first is my volume conduction model and the sensors before realignment (unaligned.fig) and the second is after realignment (aligned.fig).
Any idea why the MRI markers which were used as the template (green) are so far apart from the MEG marker coil positions (red)? Also, it seems that before realignment everything looks okay, but how is that possible if my volume conduction model is in CTF coordinates and my MEG sensors are not? Thanks, Ainsley On Wed, Mar 8, 2017 at 12:15 PM, Schoffelen, J.M. (Jan Mathijs) < jan.schoffelen at donders.ru.nl> wrote: > Hi Ainsley, > > I have never worked with ‘yokogawa’ data myself, so I can’t be of much > help. The documentation you point to is already several years old, and > appears not to have been actively maintained. Most likely the error you get > is caused by incompatibility between the current version of the FieldTrip > code, and the example code provided. Perhaps someone that has recently done > coregistration between anatomical data and yokogawa data can chime in? > > Best wishes, > Jan-Mathijs
>> Thanks, >> Ainsley >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> >> >> >> >> J.M.Schoffelen, MD PhD >> Senior Researcher, VIDI-fellow - PI, language in interaction >> Telephone: +31-24-3614793 <+31%2024%20361%204793> >> Physical location: room 00.028 >> Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands >> >> >> >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > *Ainsley Temudo* > Research Assistant > Sreenivasan Lab > NYU Abu Dhabi > Office Tel (UAE): +971 2 628 4764 <+971%202%20628%204764> > Mobile (UAE): +971 56 664 6952 <+971%2056%20664%206952> > > NYU Abu Dhabi, Saadiyat Campus > P.O. Box 129188 > Abu Dhabi, United Arab Emirates > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- *Ainsley Temudo* Research Assistant Sreenivasan Lab NYU Abu Dhabi Office Tel (UAE): +971 2 628 4764 Mobile (UAE): +971 56 664 6952 NYU Abu Dhabi, Saadiyat Campus P.O. Box 129188 Abu Dhabi, United Arab Emirates -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: aligned.fig Type: application/octet-stream Size: 451127 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: unaligned.fig Type: application/octet-stream Size: 449086 bytes Desc: not available URL: From nick.peatfield at gmail.com Wed Mar 8 17:48:27 2017 From: nick.peatfield at gmail.com (Nicholas A. 
Peatfield) Date: Wed, 8 Mar 2017 08:48:27 -0800 Subject: [FieldTrip] BTI freesurfer surface Message-ID: Hi Fieldtrippers, I want to reconstruct cortical sources using a freesurfer surface; rather than an equidistant grid, I will use the points from the surface. To do so I use ft_read_headshape to read the .surf file and use it as the points for the leadfield. However, the MEG data and headmodel are in 'bti' coordinates, so the grid points are not aligned to the headmodel and sensor points. I read the minimum norm estimate tutorial on the FieldTrip webpage for transforming spm coordinates to bti, but in my case I am using a surface file in which there are only the points and triangles, and the tutorial doesn't apply. How can I convert the surface points to bti? This is HCP data and I thought I would find some help on this somewhere but couldn't. Cheers, Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From scho at med.ovgu.de Wed Mar 8 17:52:46 2017 From: scho at med.ovgu.de (Michael Scholz) Date: Wed, 8 Mar 2017 17:52:46 +0100 (CET) Subject: [FieldTrip] different gradiometer-units from fiff-file Message-ID: Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux system. I was going to use FieldTrip to create some simulation data to test Elekta software. Therefore I read data from a fiff-file acquired by the Elekta MEG system, including 102 magnetometer channels and 2x102 gradiometer channels. Reading fiff-files with ft_read_data creates output with magnetometer data in tesla (T) and gradiometer data in T/m, just as in the fiff-file. Reading the same fiff-file with ft_read_sens creates a structure with header info giving T/cm as the unit for the gradiometer sensors.
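The discrepancy just described is a plain factor of 100 (1 T/cm = 100 T/m), so combining data read under mismatched unit conventions silently rescales every gradiometer channel by 100 relative to the magnetometers. The bookkeeping is language-independent; a small numpy sketch, where the `to_si` table, the `harmonize` helper, and the channel values are all invented for illustration (nothing here is read from a real fiff-file):

```python
import numpy as np

# scale factors to one common convention (T for magnetometers, T/m for
# gradiometers); illustrative table, not taken from any FieldTrip header
to_si = {'T': 1.0, 'T/m': 1.0, 'T/cm': 100.0}   # 1 T/cm = 100 T/m

def harmonize(data, units):
    """Rescale each channel (row) according to its declared unit."""
    scale = np.array([to_si[u] for u in units])
    return data * scale[:, None]

data = np.array([[1e-13, 2e-13],     # magnetometer channel, in T
                 [3e-13, 4e-13]])    # gradiometer channel, stored in T/cm
fixed = harmonize(data, ['T', 'T/cm'])
# the gradiometer row is now ~3e-11, 4e-11 (T/m), comparable with T/m data
```

The point is simply that the declared unit must travel with the numbers: whichever structure you trust (data reader or sensor description), rescale the other to match before combining channel types.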
This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on ft_read_sens output and ft_read_data output, the result is unusable, since the scaling of magnetometer data and gradiometer data won't match. How can I prevent ft_read_sens from returning gradiometer data in units different from those in the source fiff-file? Best, Michael From jan.schoffelen at donders.ru.nl Wed Mar 8 17:55:22 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 8 Mar 2017 16:55:22 +0000 Subject: [FieldTrip] BTI freesurfer surface In-Reply-To: References: Message-ID: Hi Nick, Sounds like you need a transformation matrix from freesurfer space to MEG headspace, true? Is there a c_ras.mat file in your freesurfer/mri directory? This may provide you with the missing link. Best, JM J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands On 08 Mar 2017, at 17:48, Nicholas A. Peatfield > wrote: Hi Fieldtrippers I want to reconstruct cortical sources using a freesurfer surface, rather than an equidistant grid I will use the points from the surface. To do so I use ft_read_headshape to read the .surf file and use it as the points for the leadfield. However, the MEG data and headmodel are in 'bti' coordinates thus the grid points are not aligned to the headmodel and sensor points. I read the minimum norm estimate tutorial from fieldtrip webpage for transforming spm coordinates to bti, but in my case I am using a surface file in which there are only the points and triangles and the tutorial doesn't apply. How can I convert the surface points to bti? This is HCP data and I thought I would find some help on this somewhere but couldn't.
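However the freesurfer-to-headspace matrix is eventually assembled (for instance from the c_ras offset combined with a coregistration transform), applying it to the surface vertices is just a 4x4 homogeneous affine acting on an Nx3 array. A minimal numpy sketch of the mechanics; the matrix values and the `apply_affine` helper are made up for illustration, and in FieldTrip one would apply the corresponding transform to the surface positions in MATLAB instead:

```python
import numpy as np

def apply_affine(T, pts):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points."""
    homog = np.c_[pts, np.ones(len(pts))]    # append homogeneous coordinate -> (N, 4)
    return (homog @ T.T)[:, :3]              # transform, then drop back to (N, 3)

# made-up example: scale mm -> cm and translate by a c_ras-like offset
T = np.array([[0.1, 0.0, 0.0,  1.2],
              [0.0, 0.1, 0.0, -0.5],
              [0.0, 0.0, 0.1,  3.0],
              [0.0, 0.0, 0.0,  1.0]])

vertices = np.array([[10.0, 20.0, 30.0],
                     [ 0.0,  0.0,  0.0]])
moved = apply_affine(T, vertices)   # -> [[2.2, 1.5, 6.0], [1.2, -0.5, 3.0]]
```

The triangle list of the surface never changes under such a transform; only the vertex coordinates move, which is why carrying the points through the affine is sufficient.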
Cheers, Nick _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Wed Mar 8 18:04:27 2017 From: timeehan at gmail.com (Tim Meehan) Date: Wed, 8 Mar 2017 12:04:27 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: Thanks for sharing! I'm just taking a look now. It looks like you're doing mostly automated rejection. Or are you also doing visual rejection along with the z-value thresholding? Thanks again, Tim On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen wrote: > Here's a rough sketch of my approach, with one custom function attached. > If you or others find it useful, maybe we can think about ways to > incorporate it into the FieldTrip code. I've been working mostly with > scripts, but you've inspired me to work on functionizing the rest of it so > it's more shareable. > > So, assuming raw multichannel data has been loaded into FieldTrip > structure 'data' with unique trial identifiers in data.trialinfo... 
> > for ch = 1:numel(data.label) > %% pull out one channel at a time > cfg = []; > cfg.channel = data.label{ch}; > > datch{ch} = ft_selectdata(cfg, data); > > %% identify large z-value artifacts and/or whatever else you might want > > cfg = []; > cfg.artfctdef.zvalue.channel = 'all'; > cfg.artfctdef.zvalue.cutoff = 15; > cfg.artfctdef.zvalue.trlpadding = 0; > cfg.artfctdef.zvalue.fltpadding = 0; > cfg.artfctdef.zvalue.artpadding = 0.1; > cfg.artfctdef.zvalue.rectify = 'yes'; > > [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); > > %% replace artifacts with NaNs > cfg = []; > cfg.artfctdef.zvalue.artifact = artifact.zvalue; > cfg.artfctdef.reject = 'nan'; > > datch{ch} = ft_rejectartifact(cfg,datch{ch}); > end > > %% re-merge channels > data = ft_appenddata([],datch); > > %% mark uniform NaNs as artifacts when they occur across all channels > % and replace non-uniform NaNs (on some but not all channels) with zeroes, > saving times > [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, see > attached > > %% reject artifacts by breaking into sub-trials > cfg = []; > cfg.artfctdef.nan2zero.artifact = artifact; > cfg.artfctdef.reject = 'partial'; > > data = ft_rejectartifact(cfg,data); > > %% identify real trials > trlinfo = unique(data.trialinfo,'rows','stable'); > > for tr = 1:size(trlinfo,1) > > %% calculate trial spectrogram > > cfg = []; > > cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); > cfg.keeptrials = 'no'; % refers to sub-trials > > cfg.method = 'mtmconvol'; > > cfg.output = 'powandcsd'; > > cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz > cfg.tapsmofrq = cfg.foi/10; % smooth by 10% > cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W > cfg.toi = '50%'; > cfg.pad = 'nextpow2'; > > > freq = ft_freqanalysis(cfg,data); > > %% replace powspctrm & crsspctrum values with NaNs > % where t_ftimwin (or wavlen for wavelets) overlaps with artifact > for ch = 1:numel(freq.label) > badt = 
[times{tr,ch}]; > if ~isempty(badt) && any(... > badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... > badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) > ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); > for t = 1:numel(freq.time) > for f = 1:numel(freq.freq) > mint = freq.time(t) - freq.cfg.t_ftimwin(f); > maxt = freq.time(t) + freq.cfg.t_ftimwin(f); > if any(badt > mint & badt < maxt) > freq.powspctrm(ch,f,t) = NaN; > freq.crsspctrm(ci,f,t) = NaN; > end > end > end > end > end > > %% save corrected output > > save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); > end > > > > On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: > >> Hi Teresa, >> >> Thanks for the reply. I'll take a look at your example if you don't mind >> sharing. Thanks! >> >> Tim >> >> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >> wrote: >> >>> No, not really. The only way I've found to do that is to loop through >>> my artifact rejection process on each trial individually, then merge them >>> back together with NaNs filling in where there are artifacts, but then that >>> breaks every form of analysis I want to do. :-P >>> >>> I wonder if it would work to fill in the artifacts with 0s instead of >>> NaNs....I might play with that. Let me know if you're interested in some >>> example code. >>> >>> ~Teresa >>> >>> >>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>> >>>> Hello All, >>>> >>>> When performing visual artifact rejection, I want to be able to mark >>>> artifacts that occur during some specific trials and only on some specific >>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>> handle marking artifacts restricted to some channel/trial combination? 
>>>> >>>> Thanks, >>>> Tim >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>> >>> >>> >>> >>> -- >>> Teresa E. Madsen, PhD >>> Research Technical Specialist: *in vivo *electrophysiology & data >>> analysis >>> Division of Behavioral Neuroscience and Psychiatric Disorders >>> Yerkes National Primate Research Center >>> Emory University >>> Rainnie Lab, NSB 5233 >>> 954 Gatewood Rd. NE >>> Atlanta, GA 30329 >>> (770) 296-9119 >>> braingirl at gmail.com >>> https://www.linkedin.com/in/temadsen >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > > -- > Teresa E. Madsen, PhD > Research Technical Specialist: *in vivo *electrophysiology & data > analysis > Division of Behavioral Neuroscience and Psychiatric Disorders > Yerkes National Primate Research Center > Emory University > Rainnie Lab, NSB 5233 > 954 Gatewood Rd. NE > Atlanta, GA 30329 > (770) 296-9119 > braingirl at gmail.com > https://www.linkedin.com/in/temadsen > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From murphyk5 at aston.ac.uk Wed Mar 8 18:49:55 2017 From: murphyk5 at aston.ac.uk (Murphy, Kelly (Research Student)) Date: Wed, 8 Mar 2017 17:49:55 +0000 Subject: [FieldTrip] different gradiometer-units from fiff-file In-Reply-To: References: Message-ID: Hi Michael, You could try using ft_read_data as per usual, then convert the units to the desired ones after. 
For example: "grad = ft_read_sens(MEG_data); %get fiducial coordinates under grad.chan sens = ft_convert_units(grad, 'mm');" (note: ft_convert_units takes the object first, then the target unit) Kelly ________________________________________ From: fieldtrip-bounces at science.ru.nl [fieldtrip-bounces at science.ru.nl] on behalf of Michael Scholz [scho at med.ovgu.de] Sent: 08 March 2017 16:52 To: fieldtrip at science.ru.nl Subject: [FieldTrip] different gradiometer-units from fiff-file Dear community, My name is Michael Scholz and I am working in Magdeburg (Germany) in the Department of Neurology. We just started using our Elekta Neuromag Triux system. I was going to use FieldTrip to create some simulation data to test Elekta software. Therefore I read data from a fiff-file acquired by the Elekta MEG system, including 102 magnetometer channels and 2x102 gradiometer channels. Reading fiff-files with ft_read_data creates output with magnetometer data in Tesla (T) and gradiometer data in T/m, just as in the fiff-file. Reading the same fiff-file with ft_read_sens creates a structure with header info including T/cm unit info for the gradiometer sensors. This was not expected and was misleading; if one doesn't recognize these different units for the gradiometers and combines data based on ft_read_sens output and ft_read_data output, the result is unusable, since the scaling of magnetometer data and gradiometer data won't match. How can I prevent ft_read_sens from reading the gradiometers in units different from those given in the source fiff-file?
best, Michael _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From braingirl at gmail.com Wed Mar 8 21:35:12 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Wed, 8 Mar 2017 15:35:12 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: I actually do a mix of approaches: a quick look using ft_databrowser with all channels for irregular artifacts like disconnection events, then a channel-by-channel search for large z-value artifacts and clipping artifacts, then I remove all those and do one last ft_databrowser review of all channels together. I'll attach the function I was working on, but it's more complex than you originally asked for and not fully tested yet, so use at your own risk. Do you use ft_databrowser or ft_rejectvisual for visual artifact rejection? ~Teresa On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: > Thanks for sharing! I'm just taking a look now. It looks like you're doing > mostly automated rejection. Or are you also doing visual rejection along > with the z-value thresholding? > > Thanks again, > Tim > > On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen wrote: > >> Here's a rough sketch of my approach, with one custom function attached. >> If you or others find it useful, maybe we can think about ways to >> incorporate it into the FieldTrip code. I've been working mostly with >> scripts, but you've inspired me to work on functionizing the rest of it so >> it's more shareable. >> >> So, assuming raw multichannel data has been loaded into FieldTrip >> structure 'data' with unique trial identifiers in data.trialinfo... 
>> >> for ch = 1:numel(data.label) >> %% pull out one channel at a time >> cfg = []; >> cfg.channel = data.label{ch}; >> >> datch{ch} = ft_selectdata(cfg, data); >> >> %% identify large z-value artifacts and/or whatever else you might want >> >> cfg = []; >> cfg.artfctdef.zvalue.channel = 'all'; >> cfg.artfctdef.zvalue.cutoff = 15; >> cfg.artfctdef.zvalue.trlpadding = 0; >> cfg.artfctdef.zvalue.fltpadding = 0; >> cfg.artfctdef.zvalue.artpadding = 0.1; >> cfg.artfctdef.zvalue.rectify = 'yes'; >> >> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >> >> %% replace artifacts with NaNs >> cfg = []; >> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >> cfg.artfctdef.reject = 'nan'; >> >> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >> end >> >> %% re-merge channels >> data = ft_appenddata([],datch); >> >> %% mark uniform NaNs as artifacts when they occur across all channels >> % and replace non-uniform NaNs (on some but not all channels) with >> zeroes, saving times >> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >> see attached >> >> %% reject artifacts by breaking into sub-trials >> cfg = []; >> cfg.artfctdef.nan2zero.artifact = artifact; >> cfg.artfctdef.reject = 'partial'; >> >> data = ft_rejectartifact(cfg,data); >> >> %% identify real trials >> trlinfo = unique(data.trialinfo,'rows','stable'); >> >> for tr = 1:size(trlinfo,1) >> >> %% calculate trial spectrogram >> >> cfg = []; >> >> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >> cfg.keeptrials = 'no'; % refers to sub-trials >> >> cfg.method = 'mtmconvol'; >> >> cfg.output = 'powandcsd'; >> >> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >> cfg.toi = '50%'; >> cfg.pad = 'nextpow2'; >> >> >> freq = ft_freqanalysis(cfg,data); >> >> %% replace powspctrm & crsspctrum values with NaNs >> % where t_ftimwin (or wavlen for wavelets) 
overlaps with artifact >> for ch = 1:numel(freq.label) >> badt = [times{tr,ch}]; >> if ~isempty(badt) && any(... >> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >> for t = 1:numel(freq.time) >> for f = 1:numel(freq.freq) >> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >> if any(badt > mint & badt < maxt) >> freq.powspctrm(ch,f,t) = NaN; >> freq.crsspctrm(ci,f,t) = NaN; >> end >> end >> end >> end >> end >> >> %% save corrected output >> >> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >> end >> >> >> >> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >> >>> Hi Teresa, >>> >>> Thanks for the reply. I'll take a look at your example if you don't mind >>> sharing. Thanks! >>> >>> Tim >>> >>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >>> wrote: >>> >>>> No, not really. The only way I've found to do that is to loop through >>>> my artifact rejection process on each trial individually, then merge them >>>> back together with NaNs filling in where there are artifacts, but then that >>>> breaks every form of analysis I want to do. :-P >>>> >>>> I wonder if it would work to fill in the artifacts with 0s instead of >>>> NaNs....I might play with that. Let me know if you're interested in some >>>> example code. >>>> >>>> ~Teresa >>>> >>>> >>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>>> >>>>> Hello All, >>>>> >>>>> When performing visual artifact rejection, I want to be able to mark >>>>> artifacts that occur during some specific trials and only on some specific >>>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>>> across all trials) or bad trials (i.e. across all channels). Does FieldTrip >>>>> handle marking artifacts restricted to some channel/trial combination? 
>>>>> >>>>> Thanks, >>>>> Tim >>>>> >>>>> _______________________________________________ >>>>> fieldtrip mailing list >>>>> fieldtrip at donders.ru.nl >>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>>> >>>> >>>> >>>> >>>> -- >>>> Teresa E. Madsen, PhD >>>> Research Technical Specialist: *in vivo *electrophysiology & data >>>> analysis >>>> Division of Behavioral Neuroscience and Psychiatric Disorders >>>> Yerkes National Primate Research Center >>>> Emory University >>>> Rainnie Lab, NSB 5233 >>>> 954 Gatewood Rd. NE >>>> Atlanta, GA 30329 >>>> (770) 296-9119 >>>> braingirl at gmail.com >>>> https://www.linkedin.com/in/temadsen >>>> >>>> _______________________________________________ >>>> fieldtrip mailing list >>>> fieldtrip at donders.ru.nl >>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>>> >>> >>> >>> _______________________________________________ >>> fieldtrip mailing list >>> fieldtrip at donders.ru.nl >>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >>> >> >> >> >> -- >> Teresa E. Madsen, PhD >> Research Technical Specialist: *in vivo *electrophysiology & data >> analysis >> Division of Behavioral Neuroscience and Psychiatric Disorders >> Yerkes National Primate Research Center >> Emory University >> Rainnie Lab, NSB 5233 >> 954 Gatewood Rd. NE >> Atlanta, GA 30329 >> (770) 296-9119 >> braingirl at gmail.com >> https://www.linkedin.com/in/temadsen >> >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip >> > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -- Teresa E. 
Madsen, PhD Research Technical Specialist: *in vivo *electrophysiology & data analysis Division of Behavioral Neuroscience and Psychiatric Disorders Yerkes National Primate Research Center Emory University Rainnie Lab, NSB 5233 954 Gatewood Rd. NE Atlanta, GA 30329 (770) 296-9119 braingirl at gmail.com https://www.linkedin.com/in/temadsen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- function [data] = AutoArtReject_TEM(cfg,data) % AutoArtReject_TEM performs automated artifact rejection, processing each % channel independently, removing clipping & large zvalue artifacts based % on automated thresholds (best to run on all data from a given subject % in one data structure, so the same threshold is applied consistently % across conditions), and returning the data structure re-merged across % channels, with NaNs in place of artifacts. % % Input cfg structure should contain: % interactsubj = true or false, whether to select visual artifacts per % subject (i.e., per call to this function) & review all channels after % automated detection % interactch = true or false, whether to preview detected artifacts % & select visual artifacts per channel % artfctdef.clip = struct as defined in ft_artifact_clip, but absdiff is % applied before the data is passed to that function, so it's actually % comparing these thresholds to the 2nd derivative of the original data % artfctdef.zvalue = struct as defined in ft_artifact_zvalue % artfctdef.minaccepttim = scalar as defined in ft_rejectartifact % % To facilitate data-handling and distributed computing you can use % cfg.inputfile = ... % cfg.outputfile = ... % If you specify one of these (or both) the input data will be read from a % *.mat file on disk and/or the output data will be written to a *.mat % file. These *.mat files should contain only a single variable 'data', % corresponding with ft_datatype_raw. % % written 3/2/17 by Teresa E. 
Madsen %% load data, if needed if nargin < 2 && isfield(cfg,'inputfile') load(cfg.inputfile); end %% preview data & mark unusual cross-channel artifacts if cfg.interactsubj cfgtmp = []; cfgtmp = ft_databrowser(cfgtmp,data); visual = cfgtmp.artfctdef.visual.artifact; % not a field of artifact % because this will be reused for all channels, while the rest of % artifact is cleared when starting each new channel else visual = []; end %% perform artifact detection on each channel separately excludech = false(size(data.label)); datch = cell(size(data.label)); for ch = 1:numel(data.label) artifact = []; %% divide data into channels cfgtmp = []; cfgtmp.channel = data.label{ch}; datch{ch} = ft_selectdata(cfgtmp,data); %% identify large zvalue artifacts cfgtmp = []; cfgtmp.artfctdef.zvalue = cfg.artfctdef.zvalue; if ~isfield(cfgtmp.artfctdef.zvalue,'interactive') if cfg.interactch cfgtmp.artfctdef.zvalue.interactive = 'yes'; else cfgtmp.artfctdef.zvalue.interactive = 'no'; end end [~, artifact.zvalue] = ft_artifact_zvalue(cfgtmp,datch{ch}); %% take 1st derivative of signal cfgtmp = []; cfgtmp.absdiff = 'yes'; datd1 = ft_preprocessing(cfgtmp,datch{ch}); %% define clipping artifacts % applies absdiff again, so it's actually working on 2nd derivative data cfgtmp = []; cfgtmp.artfctdef.clip = cfg.artfctdef.clip; [~, artifact.clip] = ft_artifact_clip(cfgtmp,datd1); %% review artifacts if needed cfgtmp = []; cfgtmp.artfctdef.clip.artifact = artifact.clip; cfgtmp.artfctdef.zvalue.artifact = artifact.zvalue; cfgtmp.artfctdef.visual.artifact = visual; if cfg.interactch % any new visual artifacts will be automatically added to cfgtmp cfgtmp = ft_databrowser(cfgtmp,datch{ch}); keyboard % dbcont when satisfied % excludech(ch) = true; % exclude this channel if desired end clearvars datd1 %% replace artifactual data with NaNs cfgtmp.artfctdef.reject = 'nan'; cfgtmp.artfctdef.minaccepttim = cfg.artfctdef.minaccepttim; datch{ch} = ft_rejectartifact(cfgtmp,datch{ch}); % if any trials were
rejected completely, exclude this channel, or it % won't merge properly if numel(datch{ch}.trial) ~= numel(data.trial) excludech(ch) = true; end end % for ch = 1:numel(data.label) %% remerge each channel file into one cleaned data file cfgtmp = []; if isfield(cfg,'outputfile') cfgtmp.outputfile = cfg.outputfile; end data = ft_appenddata(cfgtmp,datch(~excludech)); %% visualize result if cfg.interactsubj cfgtmp = []; cfgtmp = ft_databrowser(cfgtmp,data); %#ok just for debugging keyboard % dbcont when satisfied end end From martabortoletto at yahoo.it Thu Mar 9 11:22:30 2017 From: martabortoletto at yahoo.it (Marta Bortoletto) Date: Thu, 9 Mar 2017 10:22:30 +0000 (UTC) Subject: [FieldTrip] Post-doc position in TMS-EEG coregistration in Brescia, Italy References: <1430592770.3251961.1489054950545.ref@mail.yahoo.com> Message-ID: <1430592770.3251961.1489054950545@mail.yahoo.com> Dear all, Please find below an announcement for a post-doc position to work on a project of TMS-EEG coregistration, located at the Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, Brescia (Italy), led by Prof. Carlo Miniussi. We would be most grateful if you could circulate this notice to possibly interested candidates. Cheers, Marta Bortoletto and Anna Fertonani ------------------------------------------------------------- Job description The Cognitive Neuroscience Unit, IRCCS Centro San Giovanni di Dio Fatebenefratelli, led by Prof. Carlo Miniussi, is seeking to recruit a post-doctoral research fellow to work on a project of TMS-EEG coregistration. This is part of projects funded by the BIAL foundation and the FISM foundation, in collaboration with the University of Genova, ASST Spedali Civili di Brescia and the Center for Mind/Brain Sciences (CIMeC) of the University of Trento.
The research focus of these projects is the effects of non-invasive brain stimulation (TMS and tES) on cortical networks of the human brain during motor and perceptual tasks and their contributions to learning. The post is available from May 2017 and is funded for one year in the first instance, with the possibility of extension for a further 2 years. Key Requirements · We are seeking aspiring individuals with substantial experience in TMS-EEG or EEG research and strong computational abilities. · The applicants should also be interested in studying cortical networks and their disorders. · Successful candidates should have a background and PhD degree in a neuroscience-related field, broadly specified, and skills for working with complex empirical data and human subjects. · Applicants should have experience with conducting experimental research, hands-on knowledge of EEG methods, and documented skills in at least one programming language (preferably Matlab). · Good command of the English language (written and oral), as well as skills for teamwork in a multidisciplinary research group, are required. · Experience with advanced EEG signal processing, EEG source localization, connectivity analyses and a strong publication record are an advantage. What we offer · Gross salary: 25,000-28,000 euro per annum · Excellent working environment · Opportunity to work in a motivated, skilled, inspired and supportive group · A chance to work in Italy – one of the most beautiful countries in the world. To apply, please send the following items, as ONE PDF FILE and via email to Dr. Anna Fertonani (anna.fertonani at cognitiveneuroscience.it), preferably by March 31st 2017. Later applications will be considered until the position is filled. · A letter of intent including a brief description of your past and current research interests · Curriculum vitae including the list of your publications and degrees · Names and contact information of 2 referees.
For further information please contact Anna Fertonani, IRCCS Centro San Giovanni di Dio Fatebenefratelli, anna.fertonani at cognitiveneuroscience.it About the employer The IRCCS San Giovanni di Dio Fatebenefratelli has been operating for 120 years and has been appointed and funded as a national centre of excellence in research and care by the Italian Ministry of Health since 1996. More than 4500 patients with Alzheimer's dementia or associated disorders and about 1700 patients with psychiatric diseases are treated each year. The research division, besides the Cognitive Neuroscience Section, includes the laboratories of Genetics, Neuropsychopharmacology, Neurobiology, Proteomics, Neuroimaging, Ethics and Epidemiology, and employs about fifty professional researchers. The Cognitive Neuroscience Unit is equipped with several state-of-the-art devices necessary for the application of brain stimulation techniques (transcranial magnetic stimulation: TMS, rTMS; and transcranial electrical stimulation: tDCS, tACS and tRNS) and for the recording and analysis of electrophysiological signals (EEG, EMG), as well as neuropsychological testing. The simultaneous co-registration of electroencephalography and TMS application is also available, a field in which we have been pioneers in national research. Marta Bortoletto, PhD Cognitive Neuroscience Section, IRCCS Centro San Giovanni di Dio Fatebenefratelli Via Pilastroni 4, 25125 Brescia, Italy Phone number: (+39) 0303501594 E-mail: marta.bortoletto at cognitiveneuroscience.it web: http://www.cognitiveneuroscience.it/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From timeehan at gmail.com Thu Mar 9 16:37:02 2017 From: timeehan at gmail.com (Tim Meehan) Date: Thu, 9 Mar 2017 10:37:02 -0500 Subject: [FieldTrip] marking artifacts by channel + trial In-Reply-To: References: Message-ID: So far I've just been using ft_databrowser in the same way you mention to look for artifacts that affect most or all channels.
But I think I will need to also visually check each channel for bad trials. Since I'm working with iEEG, these will be mainly those picking up any epileptic discharges. Of course the channels around the seizure foci I will throw out entirely. I'm a bit daunted by how much work it will be to do a channel x trial visual rejection since I have ~1700 trials and ~ 100 channels for our one subject so far. In fact just typing those numbers makes me think it may not be feasible. Do you find the automated rejection works satisfactorily for you? On Wed, Mar 8, 2017 at 3:35 PM, Teresa Madsen wrote: > I actually do a mix of approaches: a quick look using ft_databrowser with > all channels for irregular artifacts like disconnection events, then a > channel-by-channel search for large z-value artifacts and clipping > artifacts, then I remove all those and do one last ft_databrowser review of > all channels together. I'll attach the function I was working on, but it's > more complex than you originally asked for and not fully tested yet, so use > at your own risk. > > Do you use ft_databrowser or ft_rejectvisual for visual artifact rejection? > > ~Teresa > > > On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: > >> Thanks for sharing! I'm just taking a look now. It looks like you're >> doing mostly automated rejection. Or are you also doing visual rejection >> along with the z-value thresholding? >> >> Thanks again, >> Tim >> >> On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen >> wrote: >> >>> Here's a rough sketch of my approach, with one custom function >>> attached. If you or others find it useful, maybe we can think about ways >>> to incorporate it into the FieldTrip code. I've been working mostly with >>> scripts, but you've inspired me to work on functionizing the rest of it so >>> it's more shareable. >>> >>> So, assuming raw multichannel data has been loaded into FieldTrip >>> structure 'data' with unique trial identifiers in data.trialinfo... 
>>> >>> for ch = 1:numel(data.label) >>> %% pull out one channel at a time >>> cfg = []; >>> cfg.channel = data.label{ch}; >>> >>> datch{ch} = ft_selectdata(cfg, data); >>> >>> %% identify large z-value artifacts and/or whatever else you might >>> want >>> >>> cfg = []; >>> cfg.artfctdef.zvalue.channel = 'all'; >>> cfg.artfctdef.zvalue.cutoff = 15; >>> cfg.artfctdef.zvalue.trlpadding = 0; >>> cfg.artfctdef.zvalue.fltpadding = 0; >>> cfg.artfctdef.zvalue.artpadding = 0.1; >>> cfg.artfctdef.zvalue.rectify = 'yes'; >>> >>> [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch}); >>> >>> %% replace artifacts with NaNs >>> cfg = []; >>> cfg.artfctdef.zvalue.artifact = artifact.zvalue; >>> cfg.artfctdef.reject = 'nan'; >>> >>> datch{ch} = ft_rejectartifact(cfg,datch{ch}); >>> end >>> >>> %% re-merge channels >>> data = ft_appenddata([],datch); >>> >>> %% mark uniform NaNs as artifacts when they occur across all channels >>> % and replace non-uniform NaNs (on some but not all channels) with >>> zeroes, saving times >>> [artifact,data,times] = artifact_nan2zero_TEM(data) % custom function, >>> see attached >>> >>> %% reject artifacts by breaking into sub-trials >>> cfg = []; >>> cfg.artfctdef.nan2zero.artifact = artifact; >>> cfg.artfctdef.reject = 'partial'; >>> >>> data = ft_rejectartifact(cfg,data); >>> >>> %% identify real trials >>> trlinfo = unique(data.trialinfo,'rows','stable'); >>> >>> for tr = 1:size(trlinfo,1) >>> >>> %% calculate trial spectrogram >>> >>> cfg = []; >>> >>> cfg.trials = ismember(data.trialinfo, trlinfo(tr,:), 'rows'); >>> cfg.keeptrials = 'no'; % refers to sub-trials >>> >>> cfg.method = 'mtmconvol'; >>> >>> cfg.output = 'powandcsd'; >>> >>> cfg.foi = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz >>> cfg.tapsmofrq = cfg.foi/10; % smooth by 10% >>> cfg.t_ftimwin = 2./cfg.tapsmofrq; % for 3 tapers (K=3), T=2/W >>> cfg.toi = '50%'; >>> cfg.pad = 'nextpow2'; >>> >>> >>> freq = ft_freqanalysis(cfg,data); >>> >>> %% replace powspctrm & 
crsspctrum values with NaNs >>> % where t_ftimwin (or wavlen for wavelets) overlaps with artifact >>> for ch = 1:numel(freq.label) >>> badt = [times{tr,ch}]; >>> if ~isempty(badt) && any(... >>> badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ... >>> badt < (max(freq.time) + max(freq.cfg.t_ftimwin))) >>> ci = find(any(strcmp(freq.label{ch}, freq.labelcmb))); >>> for t = 1:numel(freq.time) >>> for f = 1:numel(freq.freq) >>> mint = freq.time(t) - freq.cfg.t_ftimwin(f); >>> maxt = freq.time(t) + freq.cfg.t_ftimwin(f); >>> if any(badt > mint & badt < maxt) >>> freq.powspctrm(ch,f,t) = NaN; >>> freq.crsspctrm(ci,f,t) = NaN; >>> end >>> end >>> end >>> end >>> end >>> >>> %% save corrected output >>> >>> save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3'); >>> end >>> >>> >>> >>> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote: >>> >>>> Hi Teresa, >>>> >>>> Thanks for the reply. I'll take a look at your example if you don't >>>> mind sharing. Thanks! >>>> >>>> Tim >>>> >>>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen >>>> wrote: >>>> >>>>> No, not really. The only way I've found to do that is to loop through >>>>> my artifact rejection process on each trial individually, then merge them >>>>> back together with NaNs filling in where there are artifacts, but then that >>>>> breaks every form of analysis I want to do. :-P >>>>> >>>>> I wonder if it would work to fill in the artifacts with 0s instead of >>>>> NaNs....I might play with that. Let me know if you're interested in some >>>>> example code. >>>>> >>>>> ~Teresa >>>>> >>>>> >>>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote: >>>>> >>>>>> Hello All, >>>>>> >>>>>> When performing visual artifact rejection, I want to be able to mark >>>>>> artifacts that occur during some specific trials and only on some specific >>>>>> channels. In the tutorials I see only ways to mark bad channels (i.e. >>>>>> across all trials) or bad trials (i.e. across all channels). 
>>>>>> Does FieldTrip handle marking artifacts restricted to some channel/trial combination?
>>>>>>
>>>>>> Thanks,
>>>>>> Tim
>>>>>>
>>>>>> _______________________________________________
>>>>>> fieldtrip mailing list
>>>>>> fieldtrip at donders.ru.nl
>>>>>> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>>>>>
>>>>> --
>>>>> Teresa E. Madsen, PhD
>>>>> Research Technical Specialist: *in vivo *electrophysiology & data analysis
>>>>> Division of Behavioral Neuroscience and Psychiatric Disorders
>>>>> Yerkes National Primate Research Center
>>>>> Emory University
>>>>> Rainnie Lab, NSB 5233
>>>>> 954 Gatewood Rd. NE
>>>>> Atlanta, GA 30329
>>>>> (770) 296-9119
>>>>> braingirl at gmail.com
>>>>> https://www.linkedin.com/in/temadsen

From braingirl at gmail.com Thu Mar 9 17:24:04 2017
From: braingirl at gmail.com (Teresa Madsen)
Date: Thu, 9 Mar 2017 11:24:04 -0500
Subject: [FieldTrip] marking artifacts by channel + trial
In-Reply-To: References: Message-ID:

I have spent a lot of time tweaking it for my own purposes (16-32 channels of rat LFP data with lots of motion artifact), so yes, it works reasonably well for me. I greatly prefer to have some sort of objective way of defining artifacts, and only supplement with visual marking when something irregular slips by. It's both faster and makes me more confident that I'm not inadvertently changing my standards across rats/channels/trials.

Since epileptic activity tends to have (reasonably?) consistent spatio-temporal patterns, have you considered trying ICA artifact rejection, as demonstrated for EOG and ECG artifacts? That may allow you to retain more "real" neural signal, rather than invalidating whole chunks of time. Then again, maybe it makes more sense to eliminate the whole signal when epileptic activity occurs, since that region of the brain is obviously not functioning normally at that moment. That's a judgement call for you to make in consultation with experienced people in your field.
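[Editorial note: the ICA route mentioned above can be sketched in FieldTrip roughly as follows. This is a minimal illustration, not part of the original thread; the decomposition method string and the component indices are assumptions — which components to reject must come from inspecting your own data.]

```matlab
% Decompose the channel data into independent components
cfg        = [];
cfg.method = 'runica';            % infomax ICA, as in the FieldTrip ICA tutorials
comp       = ft_componentanalysis(cfg, data);

% Inspect the component time courses to find artifactual components
% (a cfg.layout is additionally required if you want topographies)
cfg          = [];
cfg.viewmode = 'component';
ft_databrowser(cfg, comp);

% Back-project to the channel level without the artifactual components
cfg           = [];
cfg.component = [1 4];            % hypothetical indices chosen from inspection
data_clean    = ft_rejectcomponent(cfg, comp, data);
```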
~Teresa On Thu, Mar 9, 2017 at 10:37 AM, Tim Meehan wrote: > So far I've just been using ft_databrowser in the same way you mention to > look for artifacts that affect most or all channels. But I think I will > need to also visually check each channel for bad trials. Since I'm working > with iEEG, these will be mainly those picking up any epileptic discharges. > Of course the channels around the seizure foci I will throw out entirely. > > I'm a bit daunted by how much work it will be to do a channel x trial > visual rejection since I have ~1700 trials and ~ 100 channels for our one > subject so far. In fact just typing those numbers makes me think it may not > be feasible. Do you find the automated rejection works satisfactorily for > you? > > On Wed, Mar 8, 2017 at 3:35 PM, Teresa Madsen wrote: > >> I actually do a mix of approaches: a quick look using ft_databrowser >> with all channels for irregular artifacts like disconnection events, then a >> channel-by-channel search for large z-value artifacts and clipping >> artifacts, then I remove all those and do one last ft_databrowser review of >> all channels together. I'll attach the function I was working on, but it's >> more complex than you originally asked for and not fully tested yet, so use >> at your own risk. >> >> Do you use ft_databrowser or ft_rejectvisual for visual artifact >> rejection? >> >> ~Teresa >> >> >> On Wed, Mar 8, 2017 at 12:04 PM, Tim Meehan wrote: >> >>> Thanks for sharing! I'm just taking a look now. It looks like you're >>> doing mostly automated rejection. Or are you also doing visual rejection >>> along with the z-value thresholding? >>> >>> Thanks again, >>> Tim >>> >>> On Fri, Mar 3, 2017 at 5:31 PM, Teresa Madsen >>> wrote: >>> >>>> Here's a rough sketch of my approach, with one custom function >>>> attached. If you or others find it useful, maybe we can think about ways >>>> to incorporate it into the FieldTrip code. 
>>>> I've been working mostly with scripts, but you've inspired me to work on functionizing the rest of it so it's more shareable.
>>>>
>>>> So, assuming raw multichannel data has been loaded into FieldTrip structure 'data' with unique trial identifiers in data.trialinfo...
>>>>
>>>> for ch = 1:numel(data.label)
>>>>   %% pull out one channel at a time
>>>>   cfg         = [];
>>>>   cfg.channel = data.label{ch};
>>>>
>>>>   datch{ch} = ft_selectdata(cfg, data);
>>>>
>>>>   %% identify large z-value artifacts and/or whatever else you might want
>>>>   cfg = [];
>>>>   cfg.artfctdef.zvalue.channel    = 'all';
>>>>   cfg.artfctdef.zvalue.cutoff     = 15;
>>>>   cfg.artfctdef.zvalue.trlpadding = 0;
>>>>   cfg.artfctdef.zvalue.fltpadding = 0;
>>>>   cfg.artfctdef.zvalue.artpadding = 0.1;
>>>>   cfg.artfctdef.zvalue.rectify    = 'yes';
>>>>
>>>>   [~, artifact.zvalue] = ft_artifact_zvalue([], datch{ch});
>>>>
>>>>   %% replace artifacts with NaNs
>>>>   cfg = [];
>>>>   cfg.artfctdef.zvalue.artifact = artifact.zvalue;
>>>>   cfg.artfctdef.reject          = 'nan';
>>>>
>>>>   datch{ch} = ft_rejectartifact(cfg, datch{ch});
>>>> end
>>>>
>>>> %% re-merge channels
>>>> data = ft_appenddata([], datch);
>>>>
>>>> %% mark uniform NaNs as artifacts when they occur across all channels
>>>> % and replace non-uniform NaNs (on some but not all channels) with zeroes, saving times
>>>> [artifact, data, times] = artifact_nan2zero_TEM(data) % custom function, see attached
>>>>
>>>> %% reject artifacts by breaking into sub-trials
>>>> cfg = [];
>>>> cfg.artfctdef.nan2zero.artifact = artifact;
>>>> cfg.artfctdef.reject            = 'partial';
>>>>
>>>> data = ft_rejectartifact(cfg, data);
>>>>
>>>> %% identify real trials
>>>> trlinfo = unique(data.trialinfo, 'rows', 'stable');
>>>>
>>>> for tr = 1:size(trlinfo, 1)
>>>>   %% calculate trial spectrogram
>>>>   cfg            = [];
>>>>   cfg.trials     = ismember(data.trialinfo, trlinfo(tr,:), 'rows');
>>>>   cfg.keeptrials = 'no';                 % refers to sub-trials
>>>>   cfg.method     = 'mtmconvol';
>>>>   cfg.output     = 'powandcsd';
>>>>   cfg.foi        = 2.^(0:0.1:log2(300)); % 83 freqs, log2 spaced, 1-300 Hz
>>>>   cfg.tapsmofrq  = cfg.foi/10;           % smooth by 10%
>>>>   cfg.t_ftimwin  = 2./cfg.tapsmofrq;     % for 3 tapers (K=3), T=2/W
>>>>   cfg.toi        = '50%';
>>>>   cfg.pad        = 'nextpow2';
>>>>
>>>>   freq = ft_freqanalysis(cfg, data);
>>>>
>>>>   %% replace powspctrm & crsspctrum values with NaNs
>>>>   % where t_ftimwin (or wavlen for wavelets) overlaps with artifact
>>>>   for ch = 1:numel(freq.label)
>>>>     badt = [times{tr,ch}];
>>>>     if ~isempty(badt) && any(...
>>>>         badt > (min(freq.time) - max(freq.cfg.t_ftimwin)) & ...
>>>>         badt < (max(freq.time) + max(freq.cfg.t_ftimwin)))
>>>>       ci = find(any(strcmp(freq.label{ch}, freq.labelcmb)));
>>>>       for t = 1:numel(freq.time)
>>>>         for f = 1:numel(freq.freq)
>>>>           mint = freq.time(t) - freq.cfg.t_ftimwin(f);
>>>>           maxt = freq.time(t) + freq.cfg.t_ftimwin(f);
>>>>           if any(badt > mint & badt < maxt)
>>>>             freq.powspctrm(ch,f,t) = NaN;
>>>>             freq.crsspctrm(ci,f,t) = NaN;
>>>>           end
>>>>         end
>>>>       end
>>>>     end
>>>>   end
>>>>
>>>>   %% save corrected output
>>>>   save(['trial' num2str(tr) 'mtmconvolTFA.mat'], 'freq', '-v7.3');
>>>> end
>>>>
>>>> On Thu, Mar 2, 2017 at 9:55 AM, Tim Meehan wrote:
>>>>
>>>>> Hi Teresa,
>>>>>
>>>>> Thanks for the reply. I'll take a look at your example if you don't mind sharing. Thanks!
>>>>>
>>>>> Tim
>>>>>
>>>>> On Thu, Mar 2, 2017 at 9:53 AM, Teresa Madsen wrote:
>>>>>
>>>>>> No, not really. The only way I've found to do that is to loop through my artifact rejection process on each trial individually, then merge them back together with NaNs filling in where there are artifacts, but then that breaks every form of analysis I want to do. :-P
>>>>>>
>>>>>> I wonder if it would work to fill in the artifacts with 0s instead of NaNs....I might play with that. Let me know if you're interested in some example code.
>>>>>>
>>>>>> ~Teresa
>>>>>>
>>>>>> On Wed, Mar 1, 2017 at 3:55 PM, Tim Meehan wrote:
>>>>>>
>>>>>>> Hello All,
>>>>>>>
>>>>>>> When performing visual artifact rejection, I want to be able to mark artifacts that occur during some specific trials and only on some specific channels. In the tutorials I see only ways to mark bad channels (i.e. across all trials) or bad trials (i.e. across all channels). Does FieldTrip handle marking artifacts restricted to some channel/trial combination?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Tim

--
Teresa E. Madsen, PhD
Research Technical Specialist: *in vivo *electrophysiology & data analysis
Division of Behavioral Neuroscience and Psychiatric Disorders
Yerkes National Primate Research Center
Emory University
Rainnie Lab, NSB 5233
954 Gatewood Rd. NE
Atlanta, GA 30329
(770) 296-9119
braingirl at gmail.com
https://www.linkedin.com/in/temadsen

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From tim.curran at colorado.edu Thu Mar 9 22:39:41 2017 From: tim.curran at colorado.edu (Tim Curran) Date: Thu, 9 Mar 2017 21:39:41 +0000 Subject: [FieldTrip] Postdoc at Northwestern In-Reply-To: References: Message-ID: <4CA846C9-15B4-42D5-9305-880F4182DA16@colorado.edu> POSTDOCTORAL POSITION AVAILABLE IN PRODROME/EARLY PSYCHOSIS: NORTHWESTERN UNIVERSITY’S ADAPT PROGRAM The Northwestern University Adolescent Development and Preventive Treatment (ADAPT) program is seeking applications for a full-time postdoctoral fellow. We are looking for someone with background in cognitive/affective neuroscience or clinical psychology, with interests and experience in electrophysiological assessment and/or neuroimaging. Currently we are running a number of NIMH/Foundation/University funded multimodal protocols (structural/diffusion-tensor/functional imaging, ERP, brain stimulation, eye tracking, instrumental motor assessment) with prodromal syndrome and early psychosis populations focusing on: brain, immune, and endocrine changes in response to aerobic exercise; neurobiology of motor dysfunction; timing of affective processing dysfunction (in new collaboration with Tim Curran). Please see our website for more details: http://www.adaptprogram.com. The ideal candidate will be a person who is interested in applying a cognitive/affective neuroscience background (e.g., Cognition /Cog-Neuro or related Ph.D.) to investigate early psychosis and the psychosis prodrome. Clinical Psychology Ph.D.’s with related interests and training experiences are also highly encouraged to apply. Preference will be given to candidates with a proven track record of good productivity, as well as strong computer programming skills (e.g., MATLAB/Python). We also strongly encourage diversity, and will give preference to applicants from populations underrepresented in STEM. 
The successful applicant will join Vijay Mittal and an active research team and will be responsible for designing/running experiments, analyzing and processing data, and disseminating findings. In addition, the applicant will work on collaborative studies with Vijay Mittal and Robin Nusslock, examining shared and distinct pathophysiology underlying reward-processing abnormalities in psychosis and affective disorders.

There will also be ample opportunities to take courses (Northwestern has a number of in-depth advanced training opportunities, covering a range of methodological and quantitative methods), collaborate (benefit from a number of active ADAPT collaborations as well as the vibrant Northwestern research community), help to mentor/train graduate students, and develop and follow independent research questions. Significant attention will be placed on career development (e.g., regular conference attendance/participation, training in grant writing, mentorship, teaching, presentations/job-talks, etc.) – this is an ideal position for someone interested in preparing for a tenure-track position.

For questions or to submit an application, please contact Vijay Mittal (vijay.mittal at northwestern.edu). Applicants should send a C.V., a brief letter describing interests and prior experience, and two publications (that best reflect contributions of the candidate). Salary is based on the NIMH postdoctoral scale, and funding is available for up to two years (appointments are for one year, but renewable for two years, based on progress/merit). There is a flexible start date (Spring, Summer, or Fall 2017), and review of applications will begin March 1st.

-------------- next part --------------
An HTML attachment was scrubbed...
From jean-michel.badier at univ-amu.fr Fri Mar 10 16:07:06 2017
From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier)
Date: Fri, 10 Mar 2017 16:07:06 +0100
Subject: [FieldTrip] Doc and Post-doc positions in Marseille France
Message-ID: <0f5e65c1-2800-8703-2eb3-c23774626895@univ-amu.fr>

Dear all,

Please find below the calls for doc and post-doc positions in Marseille. Note that both offer access to the fMRI and MEG platforms.

Best regards

*Call for Applications*

*/3 Post-docs; 3 PhD Grants/*

*Three 2-year postdoc positions*
*At Aix-Marseille/Avignon on /Language, Communication and the Brain/*

The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer:

*Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities.

The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics.

The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain:

1. A full description of the research project (~5 pages):
   a. Title
   b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV with complete list of publications
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee

* Duration: 2 years (1 year, extendable for another year)
* Monthly salary: ~2000 € net (depending on experience)
* Deadline: June 11, 2017

Applications should be sent to: nadera.bureau at blri.fr
For supplementary information: Johannes.Ziegler at univ-amu.fr

*Three PhD grants*
*at Aix-Marseille/Avignon on /Language, Communication and the Brain/*

The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities.

The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics.

Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. A priority is given to interdisciplinary co-directions and to projects that involve two different laboratories of the institute.

The application should contain:

1. A full description of the PhD project (~5 pages):
   a. Title
   b. Name of the PhD supervisor(s)
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV and master degree grades (if available)
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee

* Deadline for submission: June 11, 2017
* Pre-selection of candidates for audition: June 28, 2017
* Auditions: July 3-7, 2017 (international candidates might be interviewed via Skype)
* Start: September 1, 2017
* Monthly salary: 1 685 € (1 368 € net) for a period of 3 years

Applications should be sent to: nadera.bureau at blri.fr
For supplementary information contact: Johannes.Ziegler at univ-amu.fr

------------------------------------------------------------------------
Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21
Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44
5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr
13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/
France | http://www.blri.fr/
------------------------------------------------------------------------

--
Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/
Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille
Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14
Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr
/To help protect the environment, please print this e-mail only if necessary./

-------------- next part --------------
A non-text attachment was scrubbed...
From dmatthes at cbs.mpg.de Fri Mar 10 16:25:52 2017
From: dmatthes at cbs.mpg.de (Daniel Matthes)
Date: Fri, 10 Mar 2017 16:25:52 +0100
Subject: [FieldTrip] Estimation of the phase locking value
Message-ID: <0bcd8e53-28ba-68fa-ddcd-ab6f3e385f72@cbs.mpg.de>

Hi,

In our upcoming studies we want to investigate inter-brain couplings at the source level. I therefore have some questions about the calculation of the phase locking value (PLV), mainly about its implementation in FieldTrip. Since I'm a beginner both at analysing EEG data and at FieldTrip, I may be asking an already answered question, for which you have my apologies.

Based on the paper of Dumas et al. (08/2010), I initially tried to compute the PLV in FieldTrip by using the Hilbert transformation to determine the instantaneous phase (ft_preprocessing) and subsequently ft_connectivityanalysis. I quickly recognized that this is not possible, since only freq data is a valid input for ft_connectivityanalysis in connection with the parameter 'plv'. Thus, I implemented the PLV calculation myself in MATLAB from scratch and tried to get similar results using the ft_connectivityanalysis function. I've solved this issue by now, but along the way several questions came up.

The first one is about the result of ft_connectivityanalysis in connection with the parameter 'plv'. The function returns the phase difference of the compared components and not the phase locking value as defined in Lachaux et al. (1999). What's the reason for this implementation? Are there plans for closing this gap?

The second question is related to my initial problem. Why is it not possible to use the instantaneous phase as input data for ft_connectivityanalysis in connection with the parameter 'plv'? I think this would make the calculation less complex.
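[Editorial note: for reference, the PLV of Lachaux et al. (1999) can be computed directly from instantaneous phases. The sketch below is a minimal illustration outside FieldTrip, not the ft_connectivityanalysis implementation; the data layout is an assumption — a FieldTrip-style raw structure with band-pass filtered, equal-length trials and the two channels of interest in rows 1 and 2.]

```matlab
% Minimal across-trial PLV, following Lachaux et al. (1999):
% PLV(t) = | (1/N) * sum_n exp(1i * (phi1(n,t) - phi2(n,t))) |
nTrials   = numel(data.trial);
nSamples  = size(data.trial{1}, 2);
phasediff = zeros(nTrials, nSamples);
for n = 1:nTrials
  phi1 = angle(hilbert(data.trial{n}(1,:)));   % instantaneous phase, channel 1
  phi2 = angle(hilbert(data.trial{n}(2,:)));   % instantaneous phase, channel 2
  phasediff(n,:) = phi1 - phi2;
end
% mean resultant length of the phase difference across trials, in [0,1]
plv = abs(mean(exp(1i * phasediff), 1));       % 1 x time
```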
Lastly, I wonder why the configuration of cfg.channel and cfg.channelcmb has no effect in ft_connectivityanalysis in connection with the parameter 'plv'. It is only possible to focus on just two channels if these selections are made during the preceding ft_freqanalysis.

I would be thankful for some advice.

All the best,
Daniel

From andrea.brovelli at univ-amu.fr Fri Mar 10 14:31:37 2017
From: andrea.brovelli at univ-amu.fr (Andrea Brovelli)
Date: Fri, 10 Mar 2017 14:31:37 +0100
Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France)
In-Reply-To: References: Message-ID:

*Call for Applications*

*/3 Post-docs; 3 PhD Grants/*

*Three 2-year postdoc positions*
*At Aix-Marseille/Avignon on /Language, Communication and the Brain/*

The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr) offer:

*Three 2-year postdoc positions* on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI, MEG and EEG facilities.

The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics.

The scientific project, ideally interdisciplinary, should be supervised by at least one member of the BLRI/ILCB (see http://www.blri.fr/members.html) and should, if possible, involve two different laboratories of the institute.
A complete application should contain:

1. A full description of the research project (~5 pages):
   a. Title
   b. Name of the collaborator/supervisor/sponsor within the BLRI-ILCB
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV with complete list of publications
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee

* Duration: 2 years (1 year, extendable for another year)
* Monthly salary: ~2000 € net (depending on experience)
* Deadline: June 11, 2017

Applications should be sent to: nadera.bureau at blri.fr
For supplementary information: Johannes.Ziegler at univ-amu.fr

*Three PhD grants*
*at Aix-Marseille/Avignon on /Language, Communication and the Brain/*

The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/) award 3 PhD grants (3 years) on any topic that falls within the area of language, communication, brain and modelling. The institute provides privileged and free access to fMRI and MEG facilities.

The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics.

Interested candidates need to find one or more PhD supervisors amongst the members of the BLRI-ILCB (http://www.blri.fr/members.html). Together with the supervisor(s), they would then need to write a 3-year PhD project. A priority is given to interdisciplinary co-directions and to projects that involve two different laboratories of the institute.
The application should contain:

1. A full description of the PhD project (~5 pages):
   a. Title
   b. Name of the PhD supervisor(s)
   c. Short summary
   d. Scientific context / state of the art
   e. Objectives and hypotheses
   f. Methodology
   g. Expected results
   h. Brief statement about the relevance of the project for the BLRI/ILCB
   i. Proposed timeline
2. CV and master degree grades (if available)
3. Letter of motivation
4. One letter of recommendation or contact information of a potential referee

* Deadline for submission: June 11, 2017
* Pre-selection of candidates for audition: June 28, 2017
* Auditions: July 3-7, 2017 (international candidates might be interviewed via Skype)
* Start: September 1, 2017
* Monthly salary: 1 685 € (1 368 € net) for a period of 3 years

Applications should be sent to: nadera.bureau at blri.fr
For supplementary information contact: Johannes.Ziegler at univ-amu.fr

------------------------------------------------------------------------
Philippe Blache | LPL - CNRS & Universite d'Aix-Marseille | tel: +33 (0)4.13.55.27.21
Brain & Language Research Institute | fax: +33 (0)4.42.95.37.44
5 Avenue Pasteur, BP 80975 | email: blache at lpl-aix.fr
13604 Aix-en-Provence Cedex 1 | http://www.lpl-aix.fr/~blache/
France | http://www.blri.fr/
------------------------------------------------------------------------

From jean-michel.badier at univ-amu.fr Fri Mar 10 17:05:37 2017
From: jean-michel.badier at univ-amu.fr (Jean-Michel Badier)
Date: Fri, 10 Mar 2017 17:05:37 +0100
Subject: [FieldTrip] 3 PhD grants and 3 Post-Doc positions at Aix-Marseille University (France)
In-Reply-To: References: Message-ID: <76a895ff-1ab5-47a6-5339-c81754dede50@univ-amu.fr>

Hello Andrea,

It looks like we had the same idea! I hope you are doing well.
See you soon,
JM

On 10/03/2017 at 14:31, Andrea Brovelli wrote:
> *Call for Applications*
> */3 Post-docs; 3 Phd Grants/*
> [...]
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip

--
Jean michel Badier /- UMR S 1106 Institut de Neurosciences des Systèmes/
Aix-Marseille Université - Laboratoire MEG - TIMONE - 27 Boulevard Jean Moulin - 13005 Marseille
Tél: +33(0)4 91 38 55 62 - Fax : +33(0)4 91 78 99 14
Site : http://www.univ-amu.fr - Email : jean-michel.badier at univ-amu.fr
/To help protect the environment, please print this e-mail only if necessary./
-------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 08:49:27 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 07:49:27 +0000 Subject: [FieldTrip] Fwd: ft_volumerealign with headshape References: Message-ID: <3F59D5AD-E147-4B9A-995F-E8ADBBC72452@donders.ru.nl> Hi Ainsley, I am forwarding your message to the discussion list. Dear list, Please have a look at Ainsley's question below. Has anyone encountered this issue and found a solution? The error is a low-level MATLAB one, so apparently the input arguments to the ismember function call are not what they should be. Thanks and with best wishes, Jan-Mathijs J.M. Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: Ainsley Temudo > Subject: ft_volumerealign with headshape Date: 13 March 2017 at 07:30:23 GMT+1 To: > Hi Jan-Mathijs, I'm trying to realign an MRI to a headshape. I found a previous discussion mail from someone who had a similar problem a couple of years ago, and you replied saying you fixed it locally with a dirty hack.
https://mailman.science.ru.nl/pipermail/fieldtrip/2015-November/009828.html I am doing it the same way: mri = ft_read_mri('WMCP1011+22+t1mprage.nii'); cfg = []; cfg.method = 'interactive'; cfg.coordsys = 'ctf'; mri_realigned = ft_volumerealign(cfg, mri); hs = ft_read_headshape('headscan.hsp'); cfg = []; cfg.method = 'headshape'; cfg.coordsys = 'ctf'; cfg.headshape.headshape = hs; mri_realigned2 = ft_volumerealign(cfg, mri_realigned); and I get the following errors: doing interactive realignment with headshape Error using cell/ismember (line 34) Input A of class cell and input B of class cell must be cell arrays of strings, unless one is a string. Error in ft_senstype (line 303) if (mean(ismember(ft_senslabel('ant128'), sens.label)) > 0.8) Error in ft_datatype_sens (line 138) ismeg = ft_senstype(sens, 'meg'); Error in ft_checkconfig (line 250) cfg.elec = ft_datatype_sens(cfg.elec); Error in ft_interactiverealign (line 71) cfg.template = ft_checkconfig(cfg.template, 'renamed', {'vol', 'headmodel'}); Error in ft_volumerealign (line 691) tmpcfg = ft_interactiverealign(tmpcfg); Is it the same issue as before? If this issue was fixed, any idea why I'm getting these errors? I'm using FieldTrip version 20160313. Kind Regards, Ainsley -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Mon Mar 13 09:13:50 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Mon, 13 Mar 2017 08:13:50 +0000 Subject: [FieldTrip] Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? References: Message-ID: <6B401176-3DCA-4858-AE3B-30DA4F0E331A@donders.ru.nl> Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip-related, but could also be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45 degrees tilt looks odd.
If this image was produced after reslicing the to-MNI-coregistered image, something went wrong with the realignment. If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don’t know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M.Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen > Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" > Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized that I had visualized the registration of the headshape to the scalp surface and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts and found where the MNI registration could be visualized, and discovered that the 45 degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. My hcp_anatomy_egi.m with our local HCP-pipeline-produced T1 2. original hcp_anatomy.m with our local T1 3. original hcp_anatomy.m with the downloaded HCP_MEG_pipeline-produced T1 All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2, which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M.
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version, substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well, it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the head-surface mesh created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion, and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI system has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined). It could be that specifying ‘neuromag’ (rather than ‘bti’) as the required coordinate system (coordsys) when specifying the fiducials in ft_volumerealign would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the fieldtrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 494751 bytes Desc: 45 degree rotqation of MNI registration HCP_MEG_anatomy.docx URL: From mpcoll at mac.com Mon Mar 13 13:47:25 2017 From: mpcoll at mac.com (MP Coll) Date: Mon, 13 Mar 2017 12:47:25 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3@mac.com> Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's College London. A reviewer recently suggested we perform DICS beamforming to source localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the FieldTrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list, but somehow I can't find clear answers to these basic questions. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage)? If not, what would be the appropriate number of channels? Can you suggest a reference discussing this issue? 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias? The FieldTrip tutorial suggests it is, but other posts on the mailing list suggest that it is not. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus)? Is it correct to use a longer time window (e.g. 2 seconds) that is representative of the duration of the effect?
I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll From n.molinaro at bcbl.eu Tue Mar 14 17:21:43 2017 From: n.molinaro at bcbl.eu (Nicola Molinaro) Date: Tue, 14 Mar 2017 17:21:43 +0100 (CET) Subject: [FieldTrip] RESEARCH FACULTY POSITIONS at the BCBL Message-ID: <1648301278.309469.1489508503037.JavaMail.zimbra@bcbl.eu> Dear Fieldtrip community I am forwarding this message from the BCBL direction Nicola ------------- RESEARCH FACULTY POSITIONS at the BCBL- Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) www.bcbl.eu (Center of excellence Severo Ochoa) The Basque Center on Cognition Brain and Language (San Sebastián, Basque Country, Spain) together with IKERBASQUE (Basque Foundation for Science) offer 3 permanent IKERBASQUE Research Professor positions in the following areas: - Language acquisition - Any area of Language processing and/or disorders with advanced experience in MRI - Any area of Language processing and/or disorders with advanced experience in MEG The BCBL Center (recently awarded the label of excellence Severo Ochoa) promotes a vibrant research environment without substantial teaching obligations. It provides access to the most advanced behavioral and neuroimaging techniques, including 3 Tesla MRI, a whole-head MEG system, four ERP labs, a NIRS lab, a baby lab including eyetracker, EEG and NIRS, two eyetracking labs, and several well-equipped behavioral labs. There are excellent technical support staff and research personnel (PhD and postdoctoral students). The senior positions are permanent appointments. We are looking for cognitive neuroscientists or experimental psychologists with a background in psycholinguistics and/or neighboring cognitive neuroscience areas, and physicists and/or engineers with fMRI or MEG expertise. 
Individuals interested in undertaking research in the fields described in http://www.bcbl.eu/research/lines/ should apply through the BCBL web page (www.bcbl.eu/jobs). The successful candidate will be working within the research lines of the BCBL whose main aim is to develop high-risk/high gain projects at the frontiers of Cognitive Neuroscience. We expect high readiness to work with strong engagement and creativity in an interdisciplinary and international environment. Deadline June 30th We encourage immediate applications as the selection process will be ongoing and the appointment may be made before the deadline. Only senior researchers with a strong record of research experience will be considered. Women candidates are especially welcome. To submit your application please follow this link: http://www.bcbl.eu/jobs applying for Ikerbasque Research Professor 2017 and upload: Your curriculum vitae. A cover letter/statement describing your research interests (4000 characters maximum) The names of two referees who would be willing to write letters of recommendation Applicants should be fluent in English. Knowledge of Spanish and/or Basque will be considered useful but is not compulsory. For more information, please contact the Director of BCBL, Manuel Carreiras (m.carreiras at bcbl.eu). From marc.lalancette at sickkids.ca Tue Mar 14 17:46:50 2017 From: marc.lalancette at sickkids.ca (Marc Lalancette) Date: Tue, 14 Mar 2017 16:46:50 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Message-ID: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for LCMV beamformer in general, and normalization in particular, is the book by Sekihara and Nagarajan. 
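To make the two scalar normalizations concrete, here is a minimal plain-MATLAB sketch (synthetic numbers, not FieldTrip internals; the formulations follow the standard scalar-beamformer conventions in that book, and `C`/`l` are invented stand-ins for a sensor covariance and a single-orientation leadfield):

```matlab
rng(0);
A  = randn(60);
C  = A*A' + 10*eye(60);   % synthetic, well-conditioned sensor covariance (60 channels)
l  = randn(60, 1);        % synthetic leadfield column for one source orientation
Ci = inv(C);

w     = (Ci*l) / (l'*Ci*l);           % plain LCMV weights, unit gain: w'*l = 1
w_ag  = w * norm(l);                  % "array-gain": equivalent to using l/norm(l) above
w_ung = (Ci*l) / sqrt(l'*(Ci*Ci)*l);  % "unit-noise-gain": scaled so that w_ung'*w_ung = 1
```

The point of the rescaling: a deep source has a small norm(l), which inflates the plain weights and hence the estimated source power; dividing by the leadfield norm (or by the projected noise) largely cancels that depth dependence.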
For a scalar beamformer, yes normalizing the leadfield ("array-gain") will correct depth bias, but I find these absolute values harder to interpret. Dividing instead by projected noise ("unit-noise-gain") also corrects depth bias, and has better spatial resolution. For a vector beamformer, things get a bit more complicated, as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) Fieldtrip does not by default use these normalizations, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 ------------------------------ _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip End of fieldtrip Digest, Vol 76, Issue 14 ***************************************** ________________________________ This e-mail may contain confidential, personal and/or health information (information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. From hamedtaheri at yahoo.com Tue Mar 14 18:26:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Tue, 14 Mar 2017 17:26:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> Message-ID: <687200850.5575467.1489512414151@mail.yahoo.com> Hello all, My name is Hamed, a Ph.D.
candidate from the Sapienza University of Rome. I have EEG data recorded with 64 channels in .eeg format. How can I see my data in FieldTrip? cfg = []; cfg.dataset = 'mydata........' ... Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailtome.2113 at gmail.com Wed Mar 15 07:16:28 2017 From: mailtome.2113 at gmail.com (Arti Abhishek) Date: Wed, 15 Mar 2017 17:16:28 +1100 Subject: [FieldTrip] Epoching between 1 to 30 seconds Message-ID: Dear fieldtrip community, I have EEG recorded in an auditory steady-state paradigm and I want to epoch between 1-30 seconds. I don't want in my epoch any prestimulus time or the first second of the stimulus (to remove the onset response). I was wondering how I can epoch like this in fieldtrip? Can I epoch without using the cfg.trialdef.prestim and cfg.trialdef.poststim parameters? Thanks, Arti -------------- next part -------------- An HTML attachment was scrubbed... URL: From toni.rbaena at gmail.com Wed Mar 15 08:26:05 2017 From: toni.rbaena at gmail.com (Antonio Rodriguez) Date: Wed, 15 Mar 2017 08:26:05 +0100 Subject: [FieldTrip] Epoching between 1 to 30 seconds In-Reply-To: References: Message-ID: Hello Arti, maybe you can try to set your prestim time to a negative value (so you will start after the event), and then set the poststim to your final epoch time. Like this: cfg = []; cfg.datafile = datafile; cfg.headerfile = headerfile; cfg.trialdef.eventtype = 'Stimulus'; cfg.trialdef.eventvalue = 'S 19'; cfg.trialdef.prestim = -1; % start 1 second AFTER stim cfg.trialdef.poststim = 30; % end 30 seconds AFTER stim td = ft_definetrial(cfg); % my epochs are 29 seconds long Hope this helps.
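If the trials have instead already been defined and preprocessed with a conventional window, a hedged alternative is to cut out the 1-30 s window afterwards with ft_redefinetrial; here `data` stands for the epoched output of ft_preprocessing (an assumption, not code from this thread):

```matlab
% Sketch: keep only the 1-30 s window of each existing epoch.
cfg        = [];
cfg.toilim = [1 30];                      % time window to retain, in seconds after onset
data_sel   = ft_redefinetrial(cfg, data); % `data` = epoched ft_preprocessing output (illustrative)
```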
2017-03-15 7:16 GMT+01:00 Arti Abhishek : > Dear fieldtrip community, > > I have EEG recorded in an auditory steady state paradigm and I want to > epoch between 1-30 seconds. I don't want in my epoch any prestimulus time > or the first second of the stimulus (to remove the onset response). I was > wondering how I can epoch like this in fieldtrip? Can I epoch without using > cfg.trialdef.prestim and cfg.trialdef.poststim parameters? > > Thanks, > Arti > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 08:52:23 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 07:52:23 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <687200850.5575467.1489512414151@mail.yahoo.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . 
Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome -------------- next part -------------- An HTML attachment was scrubbed... URL: From hamedtaheri at yahoo.com Wed Mar 15 09:10:23 2017 From: hamedtaheri at yahoo.com (hamed taheri) Date: Wed, 15 Mar 2017 09:10:23 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: I saw this tutorial, but I couldn't find the viewing code. I want to see my 64 channels. Sent from my iPhone > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: > > Hi Hamed, > > The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. > > Best, > Stan > > -- > Stan van Pelt, PhD > Donders Institute for Brain, Cognition and Behaviour > Radboud University > Montessorilaan 3, B.01.34 > 6525 HR Nijmegen, the Netherlands > tel: +31 24 3616288 > > From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri > Sent: dinsdag 14 maart 2017 18:27 > To: fieldtrip at science.ru.nl > Subject: [FieldTrip] How can i see EEG > > Hello all > My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. > I have an EEG data that recorded in 64 channel with .eeg format. > How can I see my data in Fieldtrip. > > cfg = [] > cfg.dataset = 'mydata........' > . > . > . > .
> > > > Hamed Taheri Gorji > PhD Candidate > Brain Imaging Laboratory > > DEPARTMENT OF PSYCHOLOGY > FACULTY OF MEDICINE AND PSYCHOLOGY > SAPIENZA > University of Rome > > Santa Lucia Foundation, Via > Ardeatina 306, 00179 Rome > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From stan.vanpelt at donders.ru.nl Wed Mar 15 09:16:46 2017 From: stan.vanpelt at donders.ru.nl (Pelt, S. van (Stan)) Date: Wed, 15 Mar 2017 08:16:46 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <7CCA2706D7A4DA45931A892DF3C2894C58F0E95E@exprd03.hosting.ru.nl> Try http://www.fieldtriptoolbox.org/tutorial/preprocessing_erp Or the excellent walkthrough: http://www.fieldtriptoolbox.org/walkthrough From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of hamed taheri Sent: woensdag 15 maart 2017 9:10 To: FieldTrip discussion list Subject: Re: [FieldTrip] How can i see EEG I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. 
candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 09:27:29 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 08:27:29 +0000 Subject: [FieldTrip] Fwd: How can i see EEG References: <0CE090DC-AF2A-4133-83E8-F52895A64ECC@gmail.com> Message-ID: Hamed, To add to Stan’s excellent suggestions: If your question is about visualization, you could have a look at the plotting tutorial, or familiarize yourself with matlab’s basic plotting functionality, functions such as plot etc. Perhaps you could also check with colleagues in your lab who might know how to do this. Good luck Jan-Mathijs On 15 Mar 2017, at 09:10, hamed taheri > wrote: I saw this tutorial but I couldn't find viewing code I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. 
Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: dinsdag 14 maart 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. I have an EEG data that recorded in 64 channel with .eeg format. How can I see my data in Fieldtrip. cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.hauswald at me.com Wed Mar 15 09:37:49 2017 From: anne.hauswald at me.com (anne Hauswald) Date: Wed, 15 Mar 2017 09:37:49 +0100 Subject: [FieldTrip] How can i see EEG In-Reply-To: References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> Message-ID: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Hi Hamed, as Stan pointed you to, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough . Basically, you can use e.g. ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your eeg data'; ft_databrowser(cfg). For more options, see the reference documentation for this function.
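A slightly fuller sketch with a few commonly used display options (the filename is illustrative, and the options are worth checking against the ft_databrowser reference for your FieldTrip version):

```matlab
% Hedged example of ft_databrowser with some display options.
cfg           = [];
cfg.dataset   = 'mydata.eeg';  % illustrative name; BrainVision .vhdr/.vmrk expected alongside
cfg.viewmode  = 'vertical';    % stack all 64 channels vertically
cfg.blocksize = 10;            % seconds of data shown per page
cfg.channel   = 'all';         % or a subset, e.g. {'Fz', 'Cz', 'Pz'}
ft_databrowser(cfg);
```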
best anne > Am 15.03.2017 um 09:10 schrieb hamed taheri : > > I saw this tutorial but I couldn't find viewing code > I want to see my 64 channels > > > Sent from my iPhone > > On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: > >> Hi Hamed, >> >> The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started . >> >> Best, >> Stan >> >> -- >> Stan van Pelt, PhD >> Donders Institute for Brain, Cognition and Behaviour >> Radboud University >> Montessorilaan 3, B.01.34 >> 6525 HR Nijmegen, the Netherlands >> tel: +31 24 3616288 >> >> From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl ] On Behalf Of Hamed Taheri >> Sent: dinsdag 14 maart 2017 18:27 >> To: fieldtrip at science.ru.nl >> Subject: [FieldTrip] How can i see EEG >> >> Hello all >> My name is Hamed a Ph.D. candidate from the Sapienza University of Rome. >> I have an EEG data that recorded in 64 channel with .eeg format. >> How can I see my data in Fieldtrip. >> >> cfg = [] >> cfg.dataset = 'mydata........' >> . >> . >> . >> . >> >> >> >> Hamed Taheri Gorji >> PhD Candidate >> Brain Imaging Laboratory >> >> DEPARTMENT OF PSYCHOLOGY >> FACULTY OF MEDICINE AND PSYCHOLOGY >> SAPIENZA >> University of Rome >> >> Santa Lucia Foundation, Via >> Ardeatina 306, 00179 Rome >> _______________________________________________ >> fieldtrip mailing list >> fieldtrip at donders.ru.nl >> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Herring at donders.ru.nl Wed Mar 15 09:44:56 2017 From: J.Herring at donders.ru.nl (Herring, J.D. 
(Jim)) Date: Wed, 15 Mar 2017 08:44:56 +0000 Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation In-Reply-To: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> References: <2A2B6A5B8C4C174CBCCE0B45E548DEB23B96F1D4@SKMBXX01.sickkids.ca> Message-ID: <6F9804CE79B042468FDC7E8C86CF4CBC500CF390@exprd04.hosting.ru.nl> Dear Michel-Pierre, Allow me to add some additional (unfortunately non-referenced) advice. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage) ? If not, what would be the appropriate number of channels ? Can you suggest a reference discussing this issue ? First, make sure your data are referenced to the common-average as the forward model assumes this. Then, the appropriate number of channels depends on the required spatial resolution; If you wish to source localize posterior alpha activity 60 channels is fine. If you wish to parcellate your brain into 100 regions and do whole-brain connectivity, 60 channels is not fine and you might want to consider switching to MEG as well. 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias ? The Fieldtrip tutorial suggest it is but other posts on the mailing list suggest that it is not. You say that you are looking at a change of alpha in response to a visual stimulus? It seems like you do have a contrast. You can compare to the baseline. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus) ? Is it correct to use a longer time-window (e.g. 2 seconds) that is representative of the duration of the effect ? Together with the previous point, you can compare your time window of interest to your baseline. 
Here it is important that you take the same window length from the baseline period as you take during the activation period, to prevent a bias towards the window with more data when calculating the common filter. However, according to http://www.fieldtriptoolbox.org/example/common_filters_in_beamforming it is fine to have an unequal number of trials in each condition, so if your baseline period is only 1 second, you could cut your 'active' period into 1-s segments using ft_redefinetrial so you can still use all of the data. Best, Jim -----Original Message----- From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Marc Lalancette Sent: Tuesday, March 14, 2017 5:47 PM To: fieldtrip at science.ru.nl Cc: mpcoll at mac.com Subject: Re: [FieldTrip] Precisions on DICS beamforming on EEG data and normalisation Hi Michel-Pierre, Regarding question 2, I'm mostly familiar with LCMV, and I can't remember exactly how DICS works, but I would guess normalization approaches have the same properties in both. (Please someone correct me on this if I'm wrong.) One great reference for LCMV beamformers in general, and normalization in particular, is the book by Sekihara and Nagarajan. For a scalar beamformer, yes, normalizing the leadfield ("array-gain") will correct depth bias, but I find these absolute values harder to interpret. Dividing instead by projected noise ("unit-noise-gain") also corrects depth bias, and has better spatial resolution. For a vector beamformer, things get a bit more complicated, as the "array-gain" and "unit-noise-gain" vector formulae in that book are not rotationally invariant and I would not recommend using them. (See my recent post: https://mailman.science.ru.nl/pipermail/fieldtrip/2017-March/011390.html) Fieldtrip does not by default use these normalizations, but I also haven't seen an analysis of (or had time to investigate much) how its vector beamformer normalization approach fares in terms of bias and resolution compared to others. Maybe it exists somewhere? Sorry if it's not a very practical answer... Cheers, Marc Lalancette Lab Research Project Manager, Research MEG The Hospital for Sick Children, Diagnostic Imaging, Room S742 555 University Avenue, Toronto, ON, M5G 1X8 416-813-7654 x201535 -----Original Message----- Date: Mon, 13 Mar 2017 12:47:25 +0000 From: MP Coll > To: fieldtrip at science.ru.nl Subject: [FieldTrip] Precisions on DICS beamforming on EEG data and using normalisation Message-ID: <28ea0ca0-fed2-3cba-aa11-a79cc8c7c1a3 at mac.com> Content-Type: text/plain; charset=utf-8; format=flowed Dear Fieldtrip Community, My name is Michel-Pierre Coll and I am a postdoctoral researcher at King's College London. A reviewer recently suggested we perform DICS beamforming to source-localise EEG effects in the frequency domain during an action observation/execution paradigm. I was able to perform these analyses using the very good tutorial on the FieldTrip website. However, I have some questions regarding these analyses. I have searched the literature and the mailing list but somehow I can't find clear answers to these basic questions. 1) Is it appropriate to perform DICS beamforming using EEG with 60 channels (standard montage) ? If not, what would be the appropriate number of channels ? Can you suggest a reference discussing this issue ? 2) When not using a contrast to perform the beamforming, is the normalisation of the lead field an adequate procedure to correct for the depth/centre bias ? The Fieldtrip tutorial suggests it is but other posts on the mailing list suggest that it is not. 3) How does one choose an optimal time window for DICS beamforming when the duration of the effect is quite long (e.g. several seconds of alpha changes in response to a visual stimulus) ? Is it correct to use a longer time-window (e.g.
2 seconds) that is representative of the duration of the effect ? I would greatly appreciate any hints on these questions or if you could point me towards relevant texts discussing these issues. Best, MP Coll ------------------------------ _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip End of fieldtrip Digest, Vol 76, Issue 14 ***************************************** ________________________________ This e-mail may contain confidential, personal and/or health information (information which may be subject to legal restrictions on use, retention and/or disclosure) for the sole use of the intended recipient. Any review or distribution by anyone other than the person for whom it was originally intended is strictly prohibited. If you have received this e-mail in error, please contact the sender and delete all copies. _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:11:39 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. (Jan Mathijs)) Date: Wed, 15 Mar 2017 09:11:39 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> Well, a bit cringeworthy. T On 15 Mar 2017, at 09:37, anne Hauswald > wrote: Hi Hamed, as Stan pointed you to, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g.
ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your eeg data'; ft_databrowser(cfg) For more options see the references for this function. best anne On 15.03.2017 at 09:10, hamed taheri > wrote: I saw this tutorial but I couldn't find viewing code. I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) > wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: Tuesday, 14 March 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all, My name is Hamed, a Ph.D. candidate from the Sapienza University of Rome. I have EEG data recorded with 64 channels in .eeg format. How can I see my data in FieldTrip? cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.schoffelen at donders.ru.nl Wed Mar 15 10:16:47 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M.
(Jan Mathijs)) Date: Wed, 15 Mar 2017 09:16:47 +0000 Subject: [FieldTrip] How can i see EEG In-Reply-To: <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> <2AF1219A-FE85-4EFC-A8E1-2D25DB95F593@donders.ru.nl> Message-ID: <01F7D3C2-7726-46A0-93FC-6025F919319E@donders.ru.nl> Hi all, Apologies to all. I replied incorrectly to this e-mail, so please ignore it. It’s out of context (and impossible to understand it without context). Best wishes, Jan-Mathijs > On 15 Mar 2017, at 10:11, Schoffelen, J.M. (Jan Mathijs) wrote: > > Well, a bit cringeworthy. > T From jens.klinzing at uni-tuebingen.de Wed Mar 15 11:31:19 2017 From: jens.klinzing at uni-tuebingen.de ("Jens Klinzing, Uni Tübingen") Date: Wed, 15 Mar 2017 11:31:19 +0100 Subject: [FieldTrip] Sourcemodel inside definition too large when using warpmni In-Reply-To: <58BFCD56.2080508@uni-tuebingen.de> References: <08748577-A2CA-4D37-8B9E-BA75BD7BA5CD@donders.ru.nl> <58BFCD56.2080508@uni-tuebingen.de> Message-ID: <58C917F7.5040009@uni-tuebingen.de> I realized the problem also occurs when processing the FieldTrip example brain and filed it as bug 3271. Best, Jens > Jens Klinzing, Uni Tübingen > Wednesday, 8 March 2017 10:22 > Hi Jan-Mathijs, > the size difference is still there with cfg.grid.nonlinear = 'no'. > > > > > Best, > Jens > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip > Schoffelen, J.M. (Jan Mathijs) > Wednesday, 8 March 2017 08:27 > Hi Jens, > > What does the ‘green’ point cloud look like relative to the blue > points when you switch off the non-linear step in recipe a)?
> > JM > > > > > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 111863 bytes Desc: not available URL: From kirsten.petras at uclouvain.be Wed Mar 15 13:04:57 2017 From: kirsten.petras at uclouvain.be (Kirsten Petras) Date: Wed, 15 Mar 2017 12:04:57 +0000 Subject: [FieldTrip] ft_electroderealign Reference to non-existent field error Message-ID: <2194f3f6bdc14ab6be094db21ad3487f@ucl-mbx02.OASIS.UCLOUVAIN.BE> Dear Fieldtrippers, I am a PhD student at UC Louvain and am currently working on source-space analysis of 256-channel EEG data. I am having trouble using ft_electroderealign as follows to project my electrodes onto the surface of the scalp. cfg = []; cfg.method = 'headshape'; cfg.elec = elec_prealigned; cfg.warp = 'rigidbody'; cfg.headshape = mesh; elec_aligned = ft_electroderealign(cfg); This fails with the following error message: Reference to non-existent field 'pos'. Error in ft_warp_error (line 57) el = project_elec(input, target.pos, target.tri); Error in fminunc (line 253) f = feval(funfcn{3},x,varargin{:}); Error in ft_warp_optim (line 129) rf = optimfun(errorfun, ri, options, pos1, pos2, 'rigidbody'); Error in ft_electroderealign (line 361) [norm.chanpos, norm.m] = ft_warp_optim(elec.chanpos, headshape, cfg.warp); Caused by: Failure in initial user-supplied objective function evaluation. FMINUNC cannot continue. The electrode positions come in the format of the EGI template; however, the coordinates have been exchanged for the actual coordinates on the individual participant (done manually from the MRI). The Fid positions have been removed.
So the struct looks like this: >> disp (elec_prealigned) chanpos: [256x3 double] elecpos: [256x3 double] homogeneous: [4x4 double] label: {256x1 cell} type: 'egi256' unit: 'cm' cfg: [1x1 struct] Mesh is the following structure: hex: [4794932x8 double] pnt: [4940731x3 double] labels: [4794932x1 double] tissue: [4794932x1 double] tissuelabel: {'air' 'csf' 'gray' 'scalp' 'skull' 'white'} unit: 'mm' cfg: [1x1 struct] At the point where it crashes, 'target' looks like this: disp(target) pnt: [4940731x3 double] poly: [291320x4 double] unit: 'cm' It looks like the headshape created in ft_electroderealign lines 249-259 ( if isstruct(cfg.headshape) && isfield(cfg.headshape, 'hex') cfg.headshape = fixpos(cfg.headshape); headshape = mesh2edge(cfg.headshape); ) is used as the 'target' input for ft_warp_optim(elec.chanpos, headshape, cfg.warp); in line 361. I tried replacing the input by cfg.headshape, but then the .tri field is still missing... Would anyone have a suggestion as to what I am doing wrong here? Thanks a lot! Kirsten -------------- next part -------------- An HTML attachment was scrubbed... URL: From anne.urai at gmail.com Wed Mar 15 13:29:25 2017 From: anne.urai at gmail.com (Anne Urai) Date: Wed, 15 Mar 2017 13:29:25 +0100 Subject: [FieldTrip] compiling ft_volumenormalise In-Reply-To: References: Message-ID: If anyone encounters the same problem, compilation works if I manually add a bunch of spm functions (which are not recognised by mcc, probably because they are in a class definition folder). Specifically, including '-a', '~/Documents/fieldtrip/external/spm8/spm.m', ... '-a', '~/Documents/fieldtrip/external/spm8/templates/T1.nii', ... '-a', '~/Documents/fieldtrip/external/freesurfer/MRIread', ... '-a', '~/code/Tools/spmbug/dim.m', ... '-a', '~/code/Tools/spmbug/dtype.m', ... '-a', '~/code/Tools/spmbug/fname.m', ... '-a', '~/code/Tools/spmbug/offset.m', ... '-a', '~/code/Tools/spmbug/scl_slope.m', ... '-a', '~/code/Tools/spmbug/scl_inter.m', ... 
'-a', '~/code/Tools/spmbug/permission.m', ... '-a', '~/code/Tools/spmbug/niftistruc.m', ... '-a', '~/code/Tools/spmbug/read_hdr.m', ... '-a', '~/code/Tools/spmbug/getdict.m', ... '-a', '~/code/Tools/spmbug/read_extras.m', ... '-a', '~/code/Tools/spmbug/read_hdr_raw.m', ... does the trick. Happy compiling, Anne On 1 March 2017 at 19:38, Anne Urai wrote: > Hi FieldTrippers, > > I compile my code to run on the supercomputer cluster (without many matlab > licenses), which usually works fine when I do something like: > > *addpath('~/Documents/fieldtrip');* > *ft_defaults; * > *addpath('~/Documents/fieldtrip/external/spm8');* > *mcc('-mv', '-N', '-p', 'stats', '-p', 'images', '-p', 'signal', ...* > * '-R', '-nodisplay', '-R', '-singleCompThread', fname);* > > However, compiling the ft_volumenormalise function gives me some problems. > Specifically, if source is the result of my beamformer analysis, this code > > * cfg = [];* > * cfg.parameter = 'pow';* > * cfg.nonlinear = 'no'; % can warp back to individual* > * cfg.template = > '/home/aeurai/Documents/fieldtrip/external/spm8/templates/T1.nii';* > * cfg.write = 'no';* > * cfg.keepinside = 'no'; % otherwise, ft_sourcegrandaverage > will bug* > * source = ft_volumenormalise(cfg, source);* > > works fine when running it within Matlab. 
However, when I run the > executable after compiling (which completes without error), a low-level spm > function throws the following error: > > *the input is source data with 16777216 brainordinates on a [256 256 256] > grid* *Warning: could not reshape "freq" to the expected dimensions* *> In ft_datatype_volume (line 136)* *In ft_checkdata (line 350)* *In ft_volumenormalise (line 98)* *In B6b_sourceContrast_volNormalise (line 57)* *Converting the coordinate system from ctf to spm* *Undefined function 'fname' for input arguments of type 'struct'* *Error in file_array (line 32)* *Error in spm_create_vol>create_vol (line 77)* *Error in spm_create_vol (line 16)* *Error in volumewrite_spm (line 71)* *Error in ft_write_mri (line 65)* *Error in align_ctf2spm (line 168)* *Error in ft_convert_coordsys (line 95)* *Error in ft_volumenormalise (line 124)* *Error in B6b_sourceContrast_volNormalise (line 57)* *MATLAB:UndefinedFunction* > > I'd be very grateful for hints from anyone who's successfully compiled the > ft_volumenormalise function! Adding the template T1.nii file, spm8 or freesurfer > at compilation does not solve the problem. > Thanks, > > — > Anne E. Urai, MSc > PhD student | Institut für Neurophysiologie und Pathophysiologie > Universitätsklinikum Hamburg-Eppendorf | Martinistrasse 52, 20246 | > Hamburg, Germany > www.anneurai.net / @AnneEUrai > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hamedtaheri at yahoo.com Wed Mar 15 14:33:54 2017 From: hamedtaheri at yahoo.com (Hamed Taheri) Date: Wed, 15 Mar 2017 13:33:54 +0000 (UTC) Subject: [FieldTrip] How can i see EEG In-Reply-To: <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> References: <687200850.5575467.1489512414151.ref@mail.yahoo.com> <687200850.5575467.1489512414151@mail.yahoo.com> <7CCA2706D7A4DA45931A892DF3C2894C58F0E941@exprd03.hosting.ru.nl> <9C222EA7-DD8C-45F6-B3DE-BBA5B40276FD@me.com> Message-ID: <174832299.566947.1489584834648@mail.yahoo.com> Thanks, Dear Anne. With ft_databrowser(cfg); I saw my signal, but it's not as good as EEGLAB. Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome On Wednesday, March 15, 2017 9:43 AM, anne Hauswald wrote: Hi Hamed, as Stan pointed you to, you will find information on visual data inspection in http://www.fieldtriptoolbox.org/walkthrough. Basically, you can use e.g. ft_databrowser to view your data. For example: cfg = []; cfg.dataset = 'path to your eeg data'; ft_databrowser(cfg) For more options see the references for this function. best anne On 15.03.2017 at 09:10, hamed taheri wrote: I saw this tutorial but I couldn't find viewing code. I want to see my 64 channels Sent from my iPhone On Mar 15, 2017, at 8:52 AM, Pelt, S. van (Stan) wrote: Hi Hamed, The FieldTrip tutorials are your friend here. See http://www.fieldtriptoolbox.org/getting_started. Best, Stan -- Stan van Pelt, PhD Donders Institute for Brain, Cognition and Behaviour Radboud University Montessorilaan 3, B.01.34 6525 HR Nijmegen, the Netherlands tel: +31 24 3616288 From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of Hamed Taheri Sent: Tuesday, 14 March 2017 18:27 To: fieldtrip at science.ru.nl Subject: [FieldTrip] How can i see EEG Hello all, My name is Hamed, a Ph.D.
candidate from the Sapienza University of Rome. I have EEG data recorded with 64 channels in .eeg format. How can I see my data in FieldTrip? cfg = [] cfg.dataset = 'mydata........' . . . . Hamed Taheri Gorji PhD Candidate Brain Imaging Laboratory DEPARTMENT OF PSYCHOLOGY FACULTY OF MEDICINE AND PSYCHOLOGY SAPIENZA University of Rome Santa Lucia Foundation, Via Ardeatina 306, 00179 Rome _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip _______________________________________________ fieldtrip mailing list fieldtrip at donders.ru.nl https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- An HTML attachment was scrubbed... URL: From elam4hcp at gmail.com Wed Mar 15 17:41:03 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Wed, 15 Mar 2017 11:41:03 -0500 Subject: [FieldTrip] HCP Course 2017: Accommodations Reservations now available Message-ID: Spaces are beginning to fill for the 2017 HCP Course: "Exploring the Human Connectome", to be held June 19-23 at the Djavad Mowafaghian Centre for Brain Health at the University of British Columbia (UBC) in Vancouver, BC, Canada! Reservations for on-site accommodations for those attending the course are now available. The 5-day intensive HCP course will provide training in acquisition, processing, analysis and visualization of whole-brain imaging and behavioral data using methods and software tools developed by the WU-Minn-Oxford Human Connectome Project (HCP) consortium. The HCP Course is the best place to learn directly from HCP investigators and to explore HCP data and methods.
This year's course will cover where HCP is heading with the advent of the Lifespan HCP development (ages 5-18) and aging (ages 35-90+) projects and will provide hands-on experience in working with the multi-modal human cortical parcellation (Glasser *et al.* 2016, Nature) and with the “HCP-Style” paradigm for data acquisition, analysis, and sharing (Glasser *et al.* 2016, Nature Neuroscience). For more info and to register visit the HCP Course 2017 website. If you have any questions, please contact us at: hcpcourse at humanconnectome.org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From eriksenj at ohsu.edu Wed Mar 15 23:56:21 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Wed, 15 Mar 2017 22:56:21 +0000 Subject: [FieldTrip] why realignment tilted in hcp_anatomy? Message-ID: All HCP_MEG users: In the hope of getting some responses, let me simplify this to the bare minimum. By setting a flag in [hcp_anatomy.m] to allow visualization of the realignment result, I have discovered something that appears wrong. The coronal view in the attached “realignment result” is tilted at a 45-degree angle. My first question is simply: is this what I should see? If so, why is it tilted? [cid:image001.png at 01D29D9F.396AFF00] I have not modified the script except to turn on this visualization. The input file (T1w_acpc_dc_restore.nii) is from one of the publicly available HCP_MEG subjects (177746) that I downloaded. So there can be no “user error” at this point on my account, unless it is using [hcp_anatomy] outside the context of the whole HCP_MEG pipeline. The above plot occurs on line 156 of [hcp_anatomy.m]. Thanks, -Jeff PS.
Just in case I am marking the ac, pc, zx, and r landmark points wrong, here is what I marked: [cid:image002.png at 01D29DA0.14D6DF00] And here is all the console output up to the point of drawing the realignment result: dicomfile = A:\HCP_MEG_subs\HCP-MEG-177746\MEG\anatomy\T1w_acpc_dc_restore.nii executing the anatomy pipeline for subject 177746 not using the high quality structural preprocessing results ------------------------------------------------------------------------- Running the interactive part of the anatomy pipeline Rescaling NIFTI: slope = 1, intercept = 0 Please identify the Anterior Commissure, Posterior Commissure, a point on the positive Z and X axes, and a point on the right part of the head the input is volume data with dimensions [260 311 260] 1. To change the slice viewed in one plane, either: a. click (left mouse) in the image on a different plane. Eg, to view a more superior slice in the horizontal plane, click on a superior position in the coronal plane, or b. use the arrow keys to increase or decrease the slice number by one 2. To mark a fiducial position or anatomical landmark, do BOTH: a. select the position by clicking on it in any slice with the left mouse button b. identify it by pressing the letter corresponding to the fiducial/landmark: press a for ac, p for pc, z for xzpoint press r for an extra control point that should be on the right side You can mark the fiducials multiple times, until you are satisfied with the positions. 3. To change the display: a. press c on keyboard to toggle crosshair visibility b. press f on keyboard to toggle fiducial visibility c. press + or - on (numeric) keyboard to change the color range's upper limit 4. 
To finalize markers and quit interactive mode, press q on keyboard ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected ac ================================================================================== crosshair: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected pc ================================================================================== crosshair: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm 
================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected xzpoint ================================================================================== crosshair: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel NaN, index = [NaN NaN NaN], head = [NaN NaN NaN] mm ================================================================================== selected right ================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 
================================================================================== crosshair: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm ac: voxel 7405710, index = [130 183 92], head = [-0.3 1.4 -8.3] mm pc: voxel 9014850, index = [130 152 112], head = [-0.3 -20.3 5.7] mm xzpoint: voxel 14108972, index = [ 72 152 175], head = [40.3 -20.3 49.8] mm right: voxel 6508124, index = [ 64 152 81], head = [45.9 -20.3 -16.0] mm 615 cfg.fiducial = opt.fiducial; Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 120) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 134) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) Warning: assuming that the units are "mm" > In ft_estimate_units (line 49) In ft_plot_slice (line 197) In ft_plot_ortho (line 148) In ft_volumerealign>cb_redraw (line 1446) In ft_volumerealign (line 1293) In hcp_anatomy (line 156) In test_hcp_meg_anat (line 27) K>> From: Schoffelen, J.M. (Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Monday, March 13, 2017 1:14 AM To: FieldTrip discussion list; K Jeffrey Eriksen Subject: Fwd: [HCP-Users] hcp_anatomy.m needs an hsfile? Dear Jeff, Let me forward your question to the discussion list. Dear list, Jeff is encountering some coregistration problems, which may be FieldTrip related, but also could be a user error. Perhaps somebody has encountered them before. Let us know if you have a solution. The 45 degrees tilt looks odd. If this image was produced after reslicing the to-MNI-coregistered-image something went wrong with the realignment. 
If this image was produced prior to the reslicing, something funky has gone wrong with the acquisition sequence. I don’t know anything about the specifics of Brainstorm, so I am afraid I cannot help there. Best, Jan-Mathijs J.M. Schoffelen, MD PhD Senior Researcher, VIDI-fellow - PI, language in interaction Telephone: +31-24-3614793 Physical location: room 00.028 Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands Begin forwarded message: From: K Jeffrey Eriksen > Subject: RE: [HCP-Users] hcp_anatomy.m needs an hsfile? Date: 11 March 2017 at 02:47:33 GMT+1 To: "Schoffelen, J.M. (Jan Mathijs)" > Hello again, I encountered a problem when I tried to import into Brainstorm, even though I thought I had the transform text file correct. After importing the anatomy in Brainstorm, it was displayed with the brain rotated by 45 degrees in all axes. I then realized that I had visualized the registration of the headshape to the scalp surface and that looked good, but I had never visualized the MNI registration. I went back into the HCP scripts and found where the MNI registration could be visualized and discovered the 45-degree rotation seemed to occur there. So I thought maybe our local HCP pipeline did something unusual. To test this I ran these three conditions: 1. my hcp_anatomy_egi.m with our local HCP-pipeline-produced T1, 2. the original hcp_anatomy.m with our local T1, 3. the original hcp_anatomy.m with the downloaded HCP_MEG-pipeline-produced T1. All three had the same apparent problem, shown on the attached images. I am quite puzzled by this since they are all the same, yet Brainstorm only imports #3 correctly (not counting #2 which is mixed). I put all three cases in the attached Word doc, with the Brainstorm registration display and the HCP headshape registration display. -Jeff From: Schoffelen, J.M.
(Jan Mathijs) [mailto:jan.schoffelen at donders.ru.nl] Sent: Wednesday, March 08, 2017 8:52 AM To: K Jeffrey Eriksen Subject: Re: [HCP-Users] hcp_anatomy.m needs an hsfile? Hi Jeff, I made it all the way through hcp_anatomy_EGI.m (my version substituting ‘egi’ for ‘bti’). Amazing! I could not figure out how to do the interactive fine registration of the EGI electrode “headshape” to the scalp surface – where is that documented? Well, it’s not extensively documented, but in the crude GUI you can fiddle around with translation and rotation parameters to move the electrode point cloud closer to the head-surface mesh, created from the MRI segmentation. The main remaining problem is that the BTI coordinate system has the X-axis toward the nasion, and the Y-axis toward the LPA. The EGI coordinate system has the X-axis toward the RPA and the Y-axis toward the nasion. Can you suggest the best way to change hcp_anatomy_EGI.m to reflect this? Well, it sounds as if the EGI has an RAS convention, which may be similar to the ‘neuromag’ convention (as per http://www.fieldtriptoolbox.org/faq/how_are_the_different_head_and_mri_coordinate_systems_defined). It could be that changing the required coordinate system (coordsys) to ‘neuromag’ while specifying the fiducials in ft_volumerealign (rather than ‘bti’) would do the trick. Each of the supported coordinate systems must have some kind of master definition somewhere in the code, and that would be the best place to define the EGI system. I think it is similar to the BESA system. The code that has the ‘intelligence’ to map the specification of defined fiducial/landmark locations is in https://github.com/fieldtrip/fieldtrip/blob/master/utilities/ft_headcoordinates.m with a typo in lines 48/49 I noticed just now. Feel free to suggest a new coordinate system if needed. Perhaps this is best done through the fieldtrip discussion list. Best, Jan-Mathijs -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 92989 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 122781 bytes Desc: image002.png URL: From eriksenj at ohsu.edu Thu Mar 16 01:59:01 2017 From: eriksenj at ohsu.edu (K Jeffrey Eriksen) Date: Thu, 16 Mar 2017 00:59:01 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? Message-ID: I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many points are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From da434 at cam.ac.uk Thu Mar 16 17:16:35 2017 From: da434 at cam.ac.uk (D.Akarca) Date: Thu, 16 Mar 2017 16:16:35 +0000 Subject: [FieldTrip] Neighbouring issue with ft_timelockstatistics Message-ID: Dear all, My name is Danyal Akarca; I’m a Master’s student at Cambridge University, working at the MRC Cognition and Brain Sciences Unit. I’m currently working on some MEG data analysis, using ft_timelockstatistics and ft_clusterplot to determine clustering of neuromag magnetometers for task-related data. My neighbouring function is defined as follows: cfg = []; cfg.method = 'distance'; cfg.neighbourdist = 0.13; cfg.template = 'neuromag306mag_neighb'; cfg.layout = 'NM306mag.lay'; cfg.channel = 'all'; neighbours = ft_prepare_neighbours(cfg, MagGM_Control_Deviant); % The input data here is one of the grand means computed with ft_timelockgrandaverage This provides me with an average of 5.5 neighbours per channel, and upon inspection with ft_neighbourplot, it looks very reasonable.
I then went on to compute statistics, using ft_timelockstatistics as follows: cfg = []; cfg.channel = 'all'; cfg.neighbours = neighbours; cfg.latency = [0.1 0.54]; cfg.method = 'montecarlo'; cfg.numrandomization = 1000; cfg.correctm = 'cluster'; cfg.correcttail = 'prob'; cfg.ivar = 1; cfg.uvar = 2; cfg.statistic = 'ft_statfun_depsamplesT'; Nsub = 14; cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; stat = ft_timelockstatistics(cfg, cw{:}, cw1{:}) % cw and cw1 are cells containing my files When I run this, I obtain 52 positive clusters and 100 negative clusters, of which 6 negative clusters are significant. However, I have realised that this assumed that each channel is an independent cluster? These 6 'clusters' are very close to each other when plotted using ft_clusterplot, so I thought that actually this should be 1 big cluster rather than 6 independent clusters very close to each other. So I therefore added cfg.minnbchan = 2; However, when I do this, it says there are 0 clusters generated at all. This occurs no matter how large I make cfg.neighbourdist (even when I make it so that each magnetometer is neighbours with every other magnetometer, I still get no clusters forming). I was wondering if anyone had any thoughts, or could help me with this? I am still new to FieldTrip so any help would be very much appreciated. I hope that I’ve included all the relevant information above required. All the best, Danyal Akarca MPhil Neuroscience, Cambridge University MRC Cognition and Brain Sciences Unit From SXM1085 at student.bham.ac.uk Thu Mar 16 17:30:55 2017 From: SXM1085 at student.bham.ac.uk (Sebastian Michelmann) Date: Thu, 16 Mar 2017 16:30:55 +0000 Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? 
In-Reply-To: References: Message-ID: <2D9C9145AF1E4D4799ADDB2C0F996AE8019EF96FF9@EX13.adf.bham.ac.uk> Hi Jeff, we are currently taking >500 points Best, Sebastian From: fieldtrip-bounces at science.ru.nl [mailto:fieldtrip-bounces at science.ru.nl] On Behalf Of K Jeffrey Eriksen Sent: 16 March 2017 00:59 To: hcp-users at humanconnectome.org; fieldtrip at science.ru.nl Subject: [FieldTrip] how many points in Polhemus headshape file for HCP_MEG pipeline use? I am trying to simulate the HCP_MEG pipeline (specifically hcp_anatomy) and thus have to create my own simulated hs_file, as if I had the non-anonymized T1 and a Polhemus headshape file. Can someone tell me how many point are usually captured in these files? -Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From seymourr at aston.ac.uk Thu Mar 16 18:58:36 2017 From: seymourr at aston.ac.uk (Seymour, Robert (Research Student)) Date: Thu, 16 Mar 2017 17:58:36 +0000 Subject: [FieldTrip] Granger Causality & ft_timelockstatistics Message-ID: Hi all, I'm currently using ft_timelockstatistics to compute the group-level statistical difference between 2 granger causality spectra (I'm substituting freq for time data). My question is whether my current cfg settings for ft_timelockstatistics (see code below) will cluster my data over time? I assume by selecting cfg.avgovertime = 'no' FT_STATISTICS_MONTECARLO will cluster over time rather than space.. but I just wanted to double check... 
Many thanks, Robert Seymour (Aston Brain Centre) %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% cfg = []; cfg.avgovertime = 'no'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'ft_statfun_depsamplesT'; cfg.alpha = 0.05; cfg.clusteralpha = 0.05; cfg.correctm = 'cluster'; cfg.numrandomization = 1000; Nsub = numel(grandavgA); cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)]; cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub]; cfg.ivar = 1; % the 1st row in cfg.design contains the independent variable cfg.uvar = 2; % the 2nd row in cfg.design contains the subject number stat = ft_timelockstatistics(cfg,grandavgB{:},grandavgA{:}); figure; plot(stat.stat); xlabel('Freq (Hz)'); ylabel('t-value'); figure; plot(stat.prob);xlabel('Freq (Hz)'); ylabel('p-value'); %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -------------- next part -------------- An HTML attachment was scrubbed... URL: From Umla-Runge at cardiff.ac.uk Thu Mar 16 19:15:39 2017 From: Umla-Runge at cardiff.ac.uk (Katja Umla-Runge) Date: Thu, 16 Mar 2017 18:15:39 +0000 Subject: [FieldTrip] PhD studentship at Cardiff University Message-ID: Applications are invited for a PhD studentship on functional and structural properties of spatial processing networks in the brain at Cardiff University starting from July 2017. 
Please see here for more details on the project and do contact me if you would like to know more: https://www.findaphd.com/search/ProjectDetails.aspx?PJID=82152 http://psych.cf.ac.uk/degreeprogrammes/postgraduate/research/ Regards, Katja Katja Umla-Runge Lecturer CUBRIC, School of Psychology (College of Biomedical and Life Sciences) Cardiff University Maindy Road Cardiff, CF24 4HQ Tel: +44 (0)29 2087 0715 Email: Umla-Runge at cardiff.ac.uk Katja Umla-Runge Darlithydd CUBRIC, Yr Ysgol Seicoleg (Coleg y Gwyddorau Biofeddygol a Bywyd) Prifysgol Caerdydd Maindy Road Caerdydd, CF24 4HQ Ffôn : +44 (0)29 2087 0715 E-bost: Umla-Runge at caerdydd.ac.uk Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmatthes at cbs.mpg.de Fri Mar 17 13:35:10 2017 From: dmatthes at cbs.mpg.de (Daniel Matthes) Date: Fri, 17 Mar 2017 13:35:10 +0100 Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m Message-ID: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> Hi fieldtrip developers, I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' markers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is the absence of the trialinfo variable. In detail, if no 'Stimulus' is defined the numstim variable gets 0 (line 99), and consequently the query 'if all(numstim==numstim(1))' in line 100 wrongly evaluates to true. I would recommend changing line 100 to: if ((numstim > 0 ) && (all(numstim==numstim(1)))) This way the else branch will be executed if numstim = 0. Furthermore, the mentioned function also crashes if the stimulus markers in the marker file either have no value or a malformed value. These cases should be caught with a more obvious error message. All the best, Daniel From jan.schoffelen at donders.ru.nl Fri Mar 17 13:45:16 2017 From: jan.schoffelen at donders.ru.nl (Schoffelen, J.M. 
(Jan Mathijs)) Date: Fri, 17 Mar 2017 12:45:16 +0000 Subject: [FieldTrip] Bug in ft_trialfun_brainvision_segmented.m In-Reply-To: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> References: <24173161-9793-8db6-04bf-220d7b031ef3@cbs.mpg.de> Message-ID: <68BDEFCD-EA25-459F-8465-B3D850838B67@donders.ru.nl> Thanks for your input, Daniel. May I suggest you to follow this up through github? http://www.fieldtriptoolbox.org/development/git The best thing for you to do would be to make a Pull Request with the suggested changes. Thanks, and keep up the good work, Jan_Mathijs > On 17 Mar 2017, at 13:35, Daniel Matthes wrote: > > Hi fieldtrip developers, > > I found a bug in fieldtrip/trialfun/ft_trialfun_brainvision_segmented.m. If the Brain Vision marker file *.vmrk includes no 'Stimulus' makers, the ft_trialfun_brainvision_segmented function crashes in line 116. The reason for this crash is absence of the trialinfo variable. > > In detail, if no 'Stimulus' is defined the numstim variable gets 0 (line 99), otherwise the query 'if all(numstim==numstim(1))' in line 100 results in true. > > I would recommend to change line 100 to: > > if ((numstim > 0 ) && (all(numstim==numstim(1)))) > > Hereby the else branch will be executed, if numstim = 0. > > Furthermore, the mentioned function also crashes, if the stimulus markers in the marker file either have no value or a value with wrong letters. This cases should be captured with a more obvious error message. 
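[Editorial sketch of the guard Daniel proposes above, using the variable name from his description; this is not the actual FieldTrip source.]

```matlab
% numstim: per-stimulus-type counts collected from the .vmrk file. With no
% 'Stimulus' markers it ends up as 0 (or empty), and the unguarded test
% all(numstim==numstim(1)) then passes spuriously, since all(0==0) is true.
if ~isempty(numstim) && numstim(1) > 0 && all(numstim==numstim(1))
  % consistent number of stimuli per trial: safe to build trialinfo
else
  error('no consistent ''Stimulus'' markers found in the marker file');
end
```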
> > All the best, > > Daniel > > > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip From r.oostenveld at donders.ru.nl Tue Mar 21 12:01:50 2017 From: r.oostenveld at donders.ru.nl (Robert Oostenveld) Date: Tue, 21 Mar 2017 12:01:50 +0100 Subject: [FieldTrip] Donders training courses: "Tool-kits" of Cognitive Neuroscience Message-ID: <05323DBF-FF87-4F0E-AE08-1E59B82EBEA3@donders.ru.nl> > Begin forwarded message: > > From: "Stijns, M.H. (Tildie)" > Subject: Announcing Donders Tool-kits 2017 > Date: 15 March 2017 at 14:42:02 GMT+1 > > > Are you interested in learning neuroimaging techniques directly from the experts? > Do you like courses that take a practical hands-on approach to training? > To help you become proficient in modern neuroimaging methods, the Donders Institute offers “Tool-kits” of Cognitive Neuroscience, held annually at Radboud University, Nijmegen, the Netherlands. > Donders Tool-kits in 2017 : > Advanced MEG/EEG : (3-7 April 2017) - NOTE: registration is closed > Advanced (f)MRI: (15-18 May 2017) > Brain Stimulation : (30 May-2 June 2017) > Neuroimaging : (28 August-1 September 2017) > > Bests, Tildie -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Verhoef at donders.ru.nl Tue Mar 21 12:51:46 2017 From: J.Verhoef at donders.ru.nl (Verhoef, J.P. 
(Julia)) Date: Tue, 21 Mar 2017 11:51:46 +0000 Subject: [FieldTrip] Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' Message-ID: <11E9E0B371DBAE4EB859A9CC30606A04023E199F@exprd04.hosting.ru.nl> Senior Postdoc for the Dutch Research Consortium 'Language in Interaction' (1.0 FTE) Dutch Research Consortium 'Language in Interaction' Maximum salary: € 4,691 gross/month Vacancy number: 30.01.17 Application deadline: 17 April 2017 [Logo NWO] [Logo] Responsibilities The Language in Interaction research consortium invites applications for a senior postdoctoral position. You will contribute to the integration of empirical research in our consortium. You will act in close collaboration with Peter Hagoort, programme director of the consortium. This position provides the opportunity for conducting world-class research as a member of an interdisciplinary team. Moreover, it will provide the opportunity to contribute to developing a theoretical framework for our understanding of the human language faculty. Work environment The Netherlands has an outstanding track record in the language sciences. The Language in Interaction research consortium, which is sponsored by a large grant from the Netherlands Organization for Scientific research (NWO), brings together many of the excellent research groups in the Netherlands in a research programme on the foundations of language. In addition to excellence in the domain of language and related relevant fields of cognition, our consortium provides state-of-the-art research facilities and a research team with ample experience in the complex research methods that will be invoked to address the scientific questions at the highest level of methodological sophistication. These include methods from genetics, neuroimaging, computational modelling, and patient-related research. This consortium realises both quality and critical mass for studying human language at a scale not easily found anywhere else. 
We have identified five Big Questions (BQ) that are central to our understanding of the human language faculty. These questions are interrelated at multiple levels. Teams of researchers will collaborate to collectively address these key questions of our field. Our five Big Questions are: BQ1: The nature of the mental lexicon: How to bridge neurobiology and psycholinguistic theory by computational modelling? BQ2: What are the characteristics and consequences of internal brain organization for language? BQ3: Creating a shared cognitive space: How is language grounded in and shaped by communicative settings of interacting people? BQ4: Variability in language processing and in language learning: Why does the ability to learn language change with age? How can we characterise and map individual language skills in relation to the population distribution? BQ5: How are other cognitive systems shaped by the presence of a language system in humans? You will be appointed at the Donders Institute, Centre for Cognitive Neuroimaging (Radboud University, Nijmegen). The research is conducted in an international setting at all participating institutions. English is the lingua franca. What we expect from you We are looking for a highly motivated, creative and talented candidate to enrich a unique consortium of researchers that aims to unravel the neurocognitive mechanisms of language at multiple levels. The goal is to understand both the universality and the variability of the human language faculty from genes to behaviour. The selection criteria include: · a PhD in an area related to the neurobiology of language and/or language sciences; · expertise/interest in theoretical neuroscience and language; · an integrative mindset; · a theory-driven approach; · good communication skills; · excellent proficiency in written and spoken English. 
What we have to offer · employment: 1.0 FTE; · a maximum gross monthly salary of € 4,691 based on a 38-hour working week (salary scale 11); · in addition to the salary: an 8% holiday allowance and an 8.3% end-of-year bonus; · you will be appointed for an initial period of 18 months, after which your performance will be evaluated. If the evaluation is positive, the contract will be extended by 30 months; · the Collective Labour Agreement (CAO) of Dutch Universities is applicable; · you will be classified as Researcher, level 3 in the Dutch university job-ranking system (UFO); · the Dutch universities and institutes involved have a number of regulations that enable employees to create a good work-life balance. Are you interested in our excellent employment conditions? Other Information The institute involved is an equal opportunity employer, committed to building a culturally diverse intellectual community, and as such encourages applications from women and minorities. Would you like to know more? Further information on: Language in Interaction Further information on: Donders Institute for Brain, Cognition and Behaviour For more information about this vacancy, please contact: Prof. dr. Peter Hagoort, programme director Language in Interaction and director of DCCN Telephone: +31 24 3610648, +31 24 3521301 E-mail: p.hagoort at donders.ru.nl Are you interested? You should upload your application (attn. of Prof. dr. P. Hagoort) exclusively using the button 'Apply' below. Your application should include (and be limited to) the following attachment(s): · a cover letter · your curriculum vitae, including a list of publications and the names of at least two people who can provide references Please apply before 17 April 2017, 23:59 CET. [Apply] No commercial propositions please. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 2461 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 40202 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3660 bytes Desc: image003.jpg URL: From mailtome.2113 at gmail.com Thu Mar 23 07:09:07 2017 From: mailtome.2113 at gmail.com (Arti Abhishek) Date: Thu, 23 Mar 2017 17:09:07 +1100 Subject: [FieldTrip] Channel order after interpolation Message-ID: Dear list, I am working with 128 channel EEG data recorded from infants and young children. As they had few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can add the interpolated channels in the original order? I want to run some scripts outside fieldtrip on the data and the channel order is important. Any help would be greatly appreciated. Thanks, Arti cfg =[]; cfg.layout = 'GSN-Hydrocel-129.sfp'; lay = ft_prepare_layout(cfg); cfg = []; cfg_neighb.layout = lay; cfg_neighb.method = 'triangulation'; cfg.feedback = 'yes'; EEG_neighbours = ft_prepare_neighbours(cfg_neighb); load('NJ_24_ica_artrej.mat') badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label); cfg = []; cfg.layout = lay; cfg.neighbours = EEG_neighbours; cfg.badchannel = badchannels; cfg.method ='spline'; cfg.senstype = 'EEG'; NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej); -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From julian.keil at gmail.com Thu Mar 23 09:44:38 2017 From: julian.keil at gmail.com (Julian Keil) Date: Thu, 23 Mar 2017 09:44:38 +0100 Subject: [FieldTrip] Channel order after interpolation In-Reply-To: References: Message-ID: <96928AB6-212E-4AD8-B30E-184B252A7465@gmail.com> Dear Arti, if you know exactly where your channels are, and where they ought to be, you can simply build a vector with the index of the current channel at the position where it should be, and assign this vector as a new matrix index. So for example, if you have channels A, B and C, but they should be ordered B-C-A, you can use something like this: neworder = [2 3 1]; % Element 2, should now be at the beginning, then the third element, and then the first; data.avg = data.avg(neworder,:); % Assign neworder to the 2D-Matrix of - for example - trial averaged data Hope this helps, Julian Am 23.03.2017 um 07:09 schrieb Arti Abhishek: > Dear list, > > I am working with 128 channel EEG data recorded from infants and young children. As they had few bad channels, I removed them, computed ICA, removed eye-blink components and then interpolated the removed channels. The interpolated channels were appended at the end, not at their original positions. Is there a way I can add the interpolated channels in the original order? I want to run some scripts outside fieldtrip on the data and the channel order is important. Any help would be greatly appreciated. 
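[Editorial sketch extending Julian's suggestion: the permutation can be derived from the channel labels instead of typed by hand, and applied to the labels and the data together. Variable names are taken from this thread; it assumes raw data with a trial cell array, and is untested.]

```matlab
% Find, for each label in the desired layout order, its index in the
% repaired data; 'found' skips layout entries absent from the data.
[found, neworder] = ismember(lay.label(1:129), NJ_24_ica_interp.label);
neworder = neworder(found);                     % indices into current order
% Reorder the labels and every trial consistently.
NJ_24_ica_interp.label = NJ_24_ica_interp.label(neworder);
for k = 1:numel(NJ_24_ica_interp.trial)
  NJ_24_ica_interp.trial{k} = NJ_24_ica_interp.trial{k}(neworder, :);
end
```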
> > Thanks, > Arti > > cfg =[]; > cfg.layout = 'GSN-Hydrocel-129.sfp'; > lay = ft_prepare_layout(cfg); > cfg = []; > cfg_neighb.layout = lay; > cfg_neighb.method = 'triangulation'; > cfg.feedback = 'yes'; > EEG_neighbours = ft_prepare_neighbours(cfg_neighb); > > load('NJ_24_ica_artrej.mat') > badchannels = setdiff(lay.label(1:129), NJ_24_ica_artrej.label); > > > cfg = []; > cfg.layout = lay; > cfg.neighbours = EEG_neighbours; > cfg.badchannel = badchannels; > cfg.method ='spline'; > cfg.senstype = 'EEG'; > NJ_24_ica_interp = ft_channelrepair(cfg, NJ_24_ica_artrej); > _______________________________________________ > fieldtrip mailing list > fieldtrip at donders.ru.nl > https://mailman.science.ru.nl/mailman/listinfo/fieldtrip -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From elinor.tzvi at neuro.uni-luebeck.de Thu Mar 23 11:37:38 2017 From: elinor.tzvi at neuro.uni-luebeck.de (Elinor Tzvi-Minker) Date: Thu, 23 Mar 2017 10:37:38 +0000 Subject: [FieldTrip] =?utf-8?q?OPEN_PHD_POSITION_University_of_L=C3=BCbeck?= =?utf-8?q?=2C_GERMANY?= Message-ID: <00444adb22804d07814214e640867971@hermes.neuro.uni-luebeck.de> The Cognitive Neuroscience Group at the Neurology department of the University of Lübeck offers a PhD position (65% E13 TV-L) starting immediately. The candidate will be working on a project that develops and implements neuromodulation techniques (tDCS) in combination with fMRI and then translates these methods to social neuroscience paradigms. ​ We offer The department of Neurology is part of the Center for Brain, Behavior and Metabolism (CBBM), which offers an excellent and state-of the-art research environment. The research group “Cognitive Neuroscience” (headed by Prof. 
Ulrike Krämer) is working on different topics related to cognitive and affective control (anger and aggression, response inhibition, regulation of eating behavior) and motor control. Our researchers use diverse and complex methods to analyze brain-behavior relationships. At the CBBM, a 3T Siemens Skyra MRI research scanner, several EEG labs, fNIRS, TMS and tDCS are available. Thus, we offer an excellent environment for interdisciplinary research. We require The successful candidate will hold an MSc/MA/Dipl. in Psychology or related fields (cognitive science, neuroscience or other). Experience in acquisition and analysis of human neuroimaging data (fMRI, EEG, MEG or NIRS) and programming skills in Matlab (or equivalent) are preferred. Interest and/or experience in the field of cognitive neuroscience are obligatory. We are looking for a motivated, analytic and problem-solving oriented candidate who enjoys interdisciplinary challenges. The candidate will work in the “Cognitive Neuroscience Group” under co-supervision of Dr. Elinor Tzvi-Minker and Prof. Ulrike M. Krämer. Applicants with disabilities are preferred if qualification is equal. The University of Lübeck is an equal opportunity employer, aiming to increase the proportion of women in science. Applications by women are particularly welcome. For questions about the details of the assignment please contact Dr. Elinor Tzvi-Minker (elinor.tzvi at neuro.uni-luebeck.de). Please send your application (Letter of motivation, CV, contact information of two references, relevant certificates) as one single complete PDF file to the Email-address mentioned above. Applications will be considered until the position has been filled. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.chait at ucl.ac.uk Thu Mar 23 23:20:03 2017 From: m.chait at ucl.ac.uk (Chait, Maria) Date: Thu, 23 Mar 2017 22:20:03 +0000 Subject: [FieldTrip] Post-Doc position on Auditory Attention [DEADLINE March 31] Message-ID: (please forward; deadline next week) A postdoctoral research associate position is available at the UCL Ear Institute's 'Auditory Cognitive Neuroscience Lab' to work on an EC-funded project that will use psychophysics, eye tracking and EEG to investigate auditory attention in humans. The post is funded for 20 months in the first instance. For more information about the post please see the lab website: http://www.ucl.ac.uk/ear/research/chaitlab/vacancies The Ear Institute is a leading interdisciplinary centre for hearing research in Europe, situated within one of the strongest neuroscience communities in the world at University College London Key Requirements The successful applicant will have a PhD in neuroscience or a neuroscience-related discipline and proven ability to conduct high-quality original research and prepare results for publication. Essential skills include excellent time-management and organizational ability; proficiency in computer programming and good interpersonal, oral and written communication skills. Previous experience with functional brain imaging, neural data analysis, psychophysical assessment, and/or auditory science or acoustics would be desirable. Further Details You should apply for this post (Ref #: 1631454) through UCL's online recruitment website, www.ucl.ac.uk/hr/jobs, where you can download a job description and person specifications. Closing Date for applications is: 31 March 2017 For an informal discussion please contact Dr. Maria Chait (m.chait at ucl.ac.uk). 
Maria Chait PhD m.chait at ucl.ac.uk Reader in Auditory Cognitive Neuroscience Lab site: http://www.ucl.ac.uk/ear/research/chaitlab/ UCL Ear Institute 332 Gray's Inn Road London WC1X 8EE -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandrina.guran at uni-luebeck.de Mon Mar 27 11:34:19 2017 From: alexandrina.guran at uni-luebeck.de (Alexandrina Guran) Date: Mon, 27 Mar 2017 09:34:19 +0000 Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection Message-ID: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> Dear FieldTrip Community, My name is Alexandrina Guran, I am a PhD Student at the University of Lübeck and I have recently started working with FieldTrip in order to preprocess (and later analyse) EEG data. I have encountered an odd problem that neither I nor the people I asked in the lab could solve, even using the help function and Google: After running the epoching (trial length 5s), filtering (high-pass, low-pass and notch, for a time-frequency analysis) and downsampling (to 250 Hz), I wanted to do an automatic artifact rejection, in order to have exploratory information on how many of my trials would be affected by artifacts and whether there were participants that blinked on a majority of trials, in order to determine whether I should shorten my trial length and/or conduct an ICA. I used the ft_artifact_threshold function, in Matlab R2016b, with different FieldTrip versions (March 2017 as well as end 2016 and end 2015). However, the automatic artifact detection did not work – that is, it would stop rejecting artifacts after a number x of trials (usually between 90 and 140 trials), depending on the participant. I would get an error message but then the artifact rejection would go on, telling me all trials were ok (even if I set 1 microvolt as a threshold). 
The error message I got is the following: “(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold threshold artifact scanning: trial 129 from 320 is ok threshold artifact scanning: trial 130 from 320 is ok threshold artifact scanning: trial 131 from 320 is ok Warning: data contains NaNs, no filtering or preprocessing applied > In ft_warning (line 184) In preproc (line 283) In ft_artifact_threshold (line 164) In preprocessing (line 266) threshold artifact scanning: trial 132 from 320 is ok threshold artifact scanning: trial 133 from 320 is ok threshold artifact scanning: trial 134 from 320 is ok threshold artifact scanning: trial 135 from 320 is ok threshold artifact scanning: trial 136 from 320 is ok threshold artifact scanning: trial 137 from 320 is ok threshold artifact scanning: trial 138 from 320 is ok threshold artifact scanning: trial 139 from 320 is ok threshold artifact scanning: trial 140 from 320 is ok threshold artifact scanning: trial 141 from 320 is ok threshold artifact scanning: trial 142 from 320 is ok (…)” This was however only the case if I ran the artifact detection on down-sampled data. It worked fine with just filtered data. However, I checked the preprocessed (downsampled) data for NaNs (using the isnan-MATLAB function) and there were none to be found (I also checked visually in one dataset). Has anyone encountered this problem and found a solution? Of course, I considered just doing the downsampling after the automatic and visual artifact rejection, but I would like to be sure that the downsampling will work correctly at any point of the preprocessing and right now I am a little flummoxed at “what is happening” with the data in that function. Down below you can find code excerpts for both the artifact rejection and the downsampling. Both were looped over participants but the error appears regardless of that. 
Downsampling: cfg = []; cfg.dataset = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; %tfdata_filtfilt_ is the epoched and filtered data cfg.resamplefs = 250; cfg.detrend = 'no'; cfg.inputfile = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; cfg.outputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat']; datatfrs = ft_resampledata(cfg) Artifact rejection cfg = []; config = load(['tfcfg_' num2str(subj(s)) '.mat']); cfg.trl = config.cfg.trl; cfg.continuous = 'no' ; cfg.artfctdef.threshold.channel = [1:28 33:63]; %exclude eye channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2' cfg.artfctdef.threshold.max = 75; cfg.artfctdef.threshold.min = -75; cfg.artfctdef.threshold.bpfilter = 'no'; cfg.inputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat']; cfg.outputfile = ['tfdata_artif_' num2str(subj(s)) '.mat']; cfg = ft_artifact_threshold(cfg); save (cfg.outputfile, 'cfg') Since I am new to FieldTrip, I can imagine it to be a “simple/stupid” error having to do with the cfg. Thank you for reading this and trying to help ☺ Best regards Alexandrina -- C.-N. Alexandrina Guran, M.Sc. PhD student Institute of Psychology I University of Lübeck Maria-Goeppert-Straße 9a 23562 Lübeck Germany Building MFC 8, 1st Floor, Room 1 Phone: +49 451 3101 3635 Fax: +49 451 3101 3604 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chuanjigao at gmail.com Mon Mar 27 14:30:44 2017 From: chuanjigao at gmail.com (Jack Gao) Date: Mon, 27 Mar 2017 08:30:44 -0400 Subject: [FieldTrip] Post-hoc tests for cluster-based permutation tests on event-related fields Message-ID: Dear Community, My name is Chuanji Gao, I'm a PhD student in Experimental Psychology Program at University of South Carolina. I'm now analyzing EEG data to get some event-related fields results. There are three conditions (condition1, 2 and 3) that I want to compare. So I used ft_timelockstatistics to run the cluster-based permutation test firstly. The cfg are as below. 
%--------- cfg = []; ... cfg.neighbours = neighbours; ... cfg.latency = [0.1 0.8]; cfg.avgovertime = 'no'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'depsamplesFmultivariate'; ... Nsub = 19; cfg.design(1,1:3*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub) 3*ones(1,Nsub)]; cfg.design(2,1:3*Nsub) = [1:Nsub 1:Nsub 1:Nsub]; cfg.ivar = 1; cfg.uvar = 2; stat = ft_timelockstatistics(cfg,cond1{:},cond2{:},cond3{:}); %--------- The null hypothesis was rejected, and it seems the effect was most pronounced from 224 ms to 800 ms at centro-parietal regions. The next step: I want to do pairwise comparisons of the three conditions. I'm not sure if I should use the time window identified from the last analyses, as below: ... cfg.latency = [0.224 0.8]; cfg.avgovertime = 'yes'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'ft_statfun_depsamplesT'; ... stat = ft_timelockstatistics(cfg,cond1{:},cond2{:}); OR should I use the whole time window as I used in the first analyses, as below: ... cfg.latency = [0.1 0.8]; cfg.avgovertime = 'no'; cfg.parameter = 'avg'; cfg.method = 'montecarlo'; cfg.statistic = 'ft_statfun_depsamplesT'; ... stat = ft_timelockstatistics(cfg,cond1{:},cond2{:}); I'm inclined to use the whole time window and "non-average over time", but not entirely sure. Can someone give me some suggestions on it? Any help would be very appreciated. Best, Chuanji Chuanji Gao PhD student Department of Psychology University of South Carolina E-Mail chuanji at email.sc.edu -------------- next part -------------- An HTML attachment was scrubbed... 
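[Editorial sketch of one pairwise follow-up along the "whole time window, no averaging" line discussed above. The division of alpha by 3 to correct for the three pairwise tests is an assumption of this sketch, not something prescribed by FieldTrip; cond1/cond2/neighbours are the names from the question.]

```matlab
% One of the three pairwise cluster tests, over the full latency window.
cfg = [];
cfg.neighbours       = neighbours;
cfg.latency          = [0.1 0.8];
cfg.avgovertime      = 'no';
cfg.parameter        = 'avg';
cfg.method           = 'montecarlo';
cfg.statistic        = 'ft_statfun_depsamplesT';
cfg.correctm         = 'cluster';
cfg.alpha            = 0.05/3;    % Bonferroni over the three pairwise tests
cfg.numrandomization = 1000;
Nsub = 19;
cfg.design(1,1:2*Nsub) = [ones(1,Nsub) 2*ones(1,Nsub)];
cfg.design(2,1:2*Nsub) = [1:Nsub 1:Nsub];
cfg.ivar = 1;   % condition
cfg.uvar = 2;   % subject
stat12 = ft_timelockstatistics(cfg, cond1{:}, cond2{:});
```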
URL: From braingirl at gmail.com Mon Mar 27 16:19:03 2017 From: braingirl at gmail.com (Teresa Madsen) Date: Mon, 27 Mar 2017 10:19:03 -0400 Subject: [FieldTrip] Problem with downsampling / automatic artifact rejection In-Reply-To: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> References: <815CD0C7-3692-488F-8012-3A23529C52E0@uni-luebeck.de> Message-ID: I don't see anything obviously wrong with your cfg, but I don't know what is loaded into the config variable - is it possible config.cfg.trl is requesting samples that are not present in the input file? If it's based on the data before downsampling, the sample numbers could be off by a factor of 250. If that's not it, here are some more general troubleshooting tips: First, I would set Matlab to dbstop if warning so it pauses execution at that warning message. You'll need to dbup at least once to get out of ft_warning, and then you'll have access to the workspace of the preproc function. Examine the dat variable for NaNs and see if you can track back to figure out where they were added. Since dat is an input to that function, you might start by typing dbup twice to get to the workspace of ft_artifact_threshold and verify whether any NaNs are present in dat there. If neither of those help you figure out the problem, it should at least give you more info to provide in a bug report to http://bugzilla.fieldtriptoolbox.org/ Hope that helps, Teresa On Mon, Mar 27, 2017 at 5:34 AM, Alexandrina Guran < alexandrina.guran at uni-luebeck.de> wrote: > Dear FieldTrip Community, > > > > My name is Alexandrina Guran, I am a PhD Student at the University of > Lübeck and I have recently started working with FieldTrip in order to > preprocess (and later analyse) EEG data. 
> I have encountered an odd problem that neither I nor the people I asked in
> the lab could solve, even with the help function and Google:
>
> After running the epoching (trial length 5 s), filtering (high-pass,
> low-pass and notch, for a time-frequency analysis) and downsampling (to 250
> Hz), I wanted to run an automatic artifact rejection, in order to get
> exploratory information on how many of my trials would be affected by
> artifacts and whether there were participants who blinked on a majority of
> trials, so as to determine whether I should shorten my trial length
> and/or run an ICA.
>
> I used the ft_artifact_threshold function, in Matlab R2016b, with
> different FieldTrip versions (March 2017 as well as late 2016 and late 2015).
>
> However, the automatic artifact detection did not work - that is, it would
> stop rejecting artifacts after some number x of trials (usually between 90
> and 140), depending on the participant. I would get a warning message, but
> then the artifact rejection would go on, declaring all remaining trials ok
> (even if I set 1 microvolt as the threshold).
>
> The error message I got is the following:
>
> "(…) threshold artifact scanning: trial 128 from 320 exceeds max-threshold
> threshold artifact scanning: trial 129 from 320 is ok
> threshold artifact scanning: trial 130 from 320 is ok
> threshold artifact scanning: trial 131 from 320 is ok
> Warning: data contains NaNs, no filtering or preprocessing applied
>   In ft_warning (line 184)
>   In preproc (line 283)
>   In ft_artifact_threshold (line 164)
>   In preprocessing (line 266)
> threshold artifact scanning: trial 132 from 320 is ok
> threshold artifact scanning: trial 133 from 320 is ok
> threshold artifact scanning: trial 134 from 320 is ok
> threshold artifact scanning: trial 135 from 320 is ok
> threshold artifact scanning: trial 136 from 320 is ok
> threshold artifact scanning: trial 137 from 320 is ok
> threshold artifact scanning: trial 138 from 320 is ok
> threshold artifact scanning: trial 139 from 320 is ok
> threshold artifact scanning: trial 140 from 320 is ok
> threshold artifact scanning: trial 141 from 320 is ok
> threshold artifact scanning: trial 142 from 320 is ok (…)"
>
> This was, however, only the case if I ran the artifact detection on
> down-sampled data. It worked fine with just filtered data.
>
> I also checked the preprocessed (downsampled) data for NaNs (using the
> MATLAB isnan function) and there were none to be found (I additionally
> checked visually in one dataset).
>
> Has anyone encountered this problem and found a solution?
>
> Of course, I considered just doing the downsampling after the automatic
> and visual artifact rejection, but I would like to be sure that the
> downsampling works correctly at any point of the preprocessing, and
> right now I am a little flummoxed about what is happening with the data in
> that function.
>
> Below you can find code excerpts for both the artifact rejection and
> the downsampling.
> Both were looped over participants, but the error appears regardless of that.
>
> Downsampling:
>
> cfg = [];
> cfg.dataset = ['tfdata_filtfilt_' num2str(subj(s)) '.mat']; % the epoched and filtered data
> cfg.resamplefs = 250;
> cfg.detrend = 'no';
> cfg.inputfile = ['tfdata_filtfilt_' num2str(subj(s)) '.mat'];
> cfg.outputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat'];
> datatfrs = ft_resampledata(cfg)
>
> Artifact rejection:
>
> cfg = [];
> config = load(['tfcfg_' num2str(subj(s)) '.mat']);
> cfg.trl = config.cfg.trl;
> cfg.continuous = 'no';
> cfg.artfctdef.threshold.channel = [1:28 33:63]; % exclude eye channels 'VEOG1' 'VEOG2' 'HEOG1' 'HEOG2'
> cfg.artfctdef.threshold.max = 75;
> cfg.artfctdef.threshold.min = -75;
> cfg.artfctdef.threshold.bpfilter = 'no';
> cfg.inputfile = ['tfdata_filt_rs_' num2str(subj(s)) '.mat'];
> cfg.outputfile = ['tfdata_artif_' num2str(subj(s)) '.mat'];
> cfg = ft_artifact_threshold(cfg);
> save(cfg.outputfile, 'cfg')
>
> Since I am new to FieldTrip, I can imagine it to be a "simple/stupid"
> error having to do with the cfg.
>
> Thank you for reading this and trying to help :-)
>
> Best regards
> Alexandrina
>
> --
> C.-N. Alexandrina Guran, M.Sc.
> PhD student
> Institute of Psychology I
> University of Lübeck
> Maria-Goeppert-Straße 9a
> 23562 Lübeck
> Germany
>
> Building MFC 8, 1st Floor, Room 1
> Phone: +49 451 3101 3635
> Fax: +49 451 3101 3604
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
-- Teresa E.
Madsen, PhD
Research Technical Specialist: *in vivo* electrophysiology & data analysis
Division of Behavioral Neuroscience and Psychiatric Disorders
Yerkes National Primate Research Center
Emory University
Rainnie Lab, NSB 5233
954 Gatewood Rd. NE
Atlanta, GA 30329
(770) 296-9119
braingirl at gmail.com
https://www.linkedin.com/in/temadsen
-------------- next part --------------
An HTML attachment was scrubbed... URL: From efrain.torres at marquette.edu Wed Mar 29 07:54:14 2017 From: efrain.torres at marquette.edu (Torres, Efrain) Date: Wed, 29 Mar 2017 05:54:14 +0000 Subject: [FieldTrip] Activity not changing with time, SAM Beamforming Message-ID:

When I plot my results using ft_sourceplot, they do not seem to change despite the changes in latency that I indicate through the configuration. Below is my code for a SAM beamformer analysis of EEG data. I am unsure what I am doing wrong. Note that preprocessing was previously done in EEGLab and the data imported into FieldTrip.

cfg.trialdef.eventtype = 'trigger';
cfg.trialdef.prestim = .2;
cfg.trialdef.poststim = .8;
cfg.trialdef.ntrials = 50; % this was changed to 1 from 64
cfg.dataset = rawEEG;
cfg = ft_definetrial(cfg);
cfg.continuous = 'yes';
cfg.trialfun = 'ft_trialfun_general';
cfg.method = 'trial'; % changed from channel to trial
PU74954_PL5 = ft_preprocessing(cfg);

%% timelock analysis
cfg = [];
cfg.covariance = 'yes';
cfg.covariancewindow = 'poststim';
cfg.vartrllength = 2;
timelock = ft_timelockanalysis(cfg, PU74954_PL5);
plot(timelock.time, timelock.avg);

%% headmodel
Subject01 = '/home/etorres/Desktop/HAL_Fieldtrip/Anatomy/PU7493_1/RAW/anat+orig.BRIK';
mri = ft_read_mri(Subject01);
cfg = [];
cfg.output = 'brain';
seg = ft_volumesegment(cfg, mri);
cfg = [];
cfg.method = 'singlesphere';
headmodel = ft_prepare_headmodel(cfg, seg);

%% Preparing the subject-specific grid
%hdr = ft_read_header(PU74954_PL5);
cfg = [];
cfg.elec = PU74954_PL5.hdr.elec;
cfg.headmodel = headmodel;
cfg.grid.resolution = 1;
cfg.grid.unit = 'cm';
%cfg.inwardshift = -1.5;
grid = ft_prepare_sourcemodel(cfg);
%% Creating the leadfield
cfg = [];
cfg.elec = PU74954_PL5.hdr.elec;
cfg.reducerank = 3;
cfg.headmodel = headmodel;
cfg.grid = grid;
cfg.normalize = 'yes';
lf = ft_prepare_leadfield(cfg);

%% Source Analysis
cfg = [];
cfg.method = 'sam';
cfg.grid = lf;
cfg.headmodel = headmodel;
%cfg.keepfilter = 'yes';
cfg.lcmv.fixedori = 'yes';
source_avg = ft_sourceanalysis(cfg, timelock);

%% Plotting Results
mri = ft_read_mri(Subject01);
mri = ft_volumereslice([], mri);
cfg = [];
cfg.parameter = 'avg.pow';
[interp] = ft_sourceinterpolate(cfg, source_avg, mri);
cfg = [];
cfg.method = 'slice';
cfg.funcolorlim = [0 10];
cfg.nslices = 25;
cfg.latency = -.1;
cfg.funcolormap = 'jet';
cfg.funparameter = 'avg.pow';
ft_sourceplot(cfg, interp);

Efrain Torres
-------------- next part --------------
An HTML attachment was scrubbed... URL: From gunnar.norrman at biling.su.se Wed Mar 29 09:10:29 2017 From: gunnar.norrman at biling.su.se (Gunnar Norrman) Date: Wed, 29 Mar 2017 07:10:29 +0000 Subject: [FieldTrip] PhD position at Centre for Research on Bilingualism, Stockholm University Message-ID: <1490771429408.37501@biling.su.se> The Centre for Research on Bilingualism at Stockholm University is announcing a fully funded 4-year PhD position in bilingualism, starting fall 2017. The Centre is an interdisciplinary unit with a focus on psycholinguistic and sociolinguistic aspects of bilingualism, including bilingual cognition and second language acquisition. We offer a vibrant interdisciplinary research environment, as well as a fully equipped EEG/ERP and eye tracking lab, and we strongly encourage students with a background in any of these methodologies to apply. Read more about the position here: http://www.su.se/english/about/vacancies/vacancies-new-list?rmpage=job&rmjob=2862&rmlang=UK Applications are submitted through the university recruitment system, and the last date for applications is April 18, 2017.
--- Gunnar Norrman Centre for Research on Bilingualism, Stockholm University +46 (0)8 16 3643 | gunnar.norrman at biling.su.se -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Thu Mar 30 12:19:46 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Thu, 30 Mar 2017 10:19:46 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' Message-ID: Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting address: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** ----------------------------------------------------- Op deze e-mail zijn de volgende voorwaarden van toepassing : The following disclaimer applies to the e-mail message : http://www.nhtv.nl/disclaimer ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed...
URL: From elam4hcp at gmail.com Thu Mar 30 21:28:58 2017 From: elam4hcp at gmail.com (Jennifer Elam) Date: Thu, 30 Mar 2017 14:28:58 -0500 Subject: [FieldTrip] HCP Course 2017: Faculty and Course Schedule Available -- Register Now! Message-ID: Faculty listings and the full schedule of covered topics are now available for the 2017 HCP Course: "Exploring the Human Connectome", to be held June 19-23 at the Djavad Mowafagian Centre for Brain Health at University of British Columbia (UBC) in Vancouver, BC, Canada! The 5-day intensive course is a great opportunity to learn directly from HCP investigators and gain practical experience with the Human Connectome Project's approach to multimodal whole brain imaging acquisition, processing, analysis, visualization, and sharing of data and results. For more info and to register visit the HCP Course 2017 website. Don't delay, registration is limited, and the course is filling up fast! Discounted on-site UBC accommodations are available through May 17, 2017 to attendees reserving through the HCP Course room block. If you have any questions, please contact us at: hcpcourse at humanconnectome.org We look forward to seeing you in Vancouver! Best, 2017 HCP Course Staff Jennifer Elam, Ph.D. Scientific Outreach, Human Connectome Project Washington University School of Medicine Department of Neuroscience, Box 8108 660 South Euclid Avenue St. Louis, MO 63110 314-362-9387 elam at wustl.edu www.humanconnectome.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bastiaansen4.M at nhtv.nl Fri Mar 31 09:31:23 2017 From: Bastiaansen4.M at nhtv.nl (Bastiaansen, Marcel) Date: Fri, 31 Mar 2017 07:31:23 +0000 Subject: [FieldTrip] PhD position Tilburg University on 'decoding emotions from the brain' In-Reply-To: References: Message-ID: Dear list, I posted a PhD vacancy yesterday, but I included a link that for some reason does not seem to work.
Below is the full vacancy text as it can be found on the website of Tilburg University. Apologies for the multiple posting. Best, Marcel PhD student on Decoding emotions from the brain (1,0 fte) PhD student on Decoding emotions from the brain, Departments of Cognitive Neuropsychology and Methodology and Statistics (1,0 fte) Project description The central aim of the PhD research is to decode/classify discrete categories of emotions, based on recordings of neural activity (EEG) and other physiological measures (HR, GSR, facial EMG). Emotion induction will be realized using Tilburg University's advanced Virtual and Augmented Reality facilities. Emotion classification will be performed using state-of-the-art machine learning and data science techniques in order to optimize the sensitivity to identify and classify (differences in) emotional states. The PhD project will be supervised by promotor prof.dr. J. Vroomen and co-promotors dr. Katrijn van Deun and dr. Marcel C.M. Bastiaansen. A more detailed project description is available upon request from dr. Marcel Bastiaansen. Tasks * Designing and conducting research; * Presenting findings at scientific conferences; * Reporting findings in international journals, resulting in a dissertation; * Participating in the graduate school; * Participating in the teaching program of the departments. Qualifications * Master's degree (preferably Research Master) in cognitive neuroscience or a closely related discipline; * Hands-on experience with EEG data analysis (preferably FieldTrip); * Fluency in spoken English and excellent writing skills in English; * Programming skills (Matlab, R), and a keen interest in using advanced data analysis techniques are an important asset; * Experience with VR would be helpful; * Willingness and proven ability to work independently. Terms of Employment Tilburg University is among the top Dutch employers and has an excellent policy concerning terms of employment.
The collective employment terms and conditions for Dutch universities will apply. The appointment is intended to lead to the completion of a PhD thesis. The PhD appointment at Tilburg University begins with a period of 12 months. Continuation of the appointment with another 36 months will be based on performance evaluation. The gross salary for the PhD position amounts to €2,191 per month in the first year, rising to €2,801 per month in the fourth year, based on a full-time appointment (38 hours per week). Applications and Information Additional information about the vacancy can be obtained from Dr. Marcel Bastiaansen, M.C.M.Bastiaansen at tilburguniversity.edu, tel.: +31 13 466 2408. Applicants should send their CV and a covering letter to Hans-Georg van Liempd MSc, Managing Director, Tilburg School of Social and Behavioral Sciences, only via the link mentioned below. The closing date for applications is April 9th 2017. Tilburg School of Social and Behavioral Sciences Tilburg School of Social and Behavioral Sciences (TSB) is a modern, specialized university. The teaching and research of the Tilburg School of Social and Behavioral Sciences are organized around the themes of Health, Organization, and Relations between State, Citizen, and Society. The School's inspiring working environment challenges its workers to realize their ambitions; involvement and cooperation are essential to achieve this. Tilburg School of Social and Behavioral Sciences Department of Cognitive Neuropsychology The Department of Cognitive Neuropsychology of Tilburg University consists of a vibrant mix of people interested in cognitive and clinical neuropsychology. Our department is an intellectually exciting and productive group, advancing fundamental understanding in cognitive neuroscience and clinical neuropsychology. Our research is highly recognized both nationally and internationally.
Our fundamental research is focused on the integration of information from different modalities (hearing, seeing, touch) for perceiving speech, emotions, and crossmodal synchrony in the healthy population and in patient groups with autism, schizophrenia, or developmental dyslexia. We use behavioral measures and a variety of psychophysical methods, such as eye tracking, EEG, and fMRI. We have access to the DAF Technology Lab for creating Virtual Reality. Department Methodology and Statistics The Department of Methodology and Statistics is an internationally renowned group, holding several experts in data science methods, latent variable methods, psychometrics, meta-research, survey methodology, and other applied statistics fields. The department has a strong tradition of working with the other (substantive) research programs in our School. The department is part of the School of Social and Behavioral Sciences at Tilburg University and responsible for the teaching and the research in the area of methodology and statistics for the social and behavioral sciences, the Data Science programs (including the novel joint bachelor in Data Science together with the Technical University of Eindhoven), and the Liberal Arts and Science program of Tilburg University. The department is a member of the Interuniversity Graduate School for Psychometrics and Sociometrics (IOPS). Recruitment code Tilburg University applies the recruitment code of the Dutch Association for Personnel Management & Organization Development (NVP). Disclaimer The text of this vacancy advertisement is copyright-protected property of Tilburg University. Use, distribution and further disclosure of the advertisement without express permission from Tilburg University is not allowed, and this applies explicitly to use by recruitment and selection agencies which do not act directly on the instructions of Tilburg University. Responses resulting from recruitment by non-contractors of Tilburg University will not be handled.
*** Dr Marcel C.M. Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** From: Bastiaansen, Marcel Sent: donderdag 30 maart 2017 12:20 To: fieldtrip at science.ru.nl Cc: J.Vroomen at uvt.nl; k.vandeun at tilburguniversity.edu Subject: PhD position Tilburg University on 'decoding emotions from the brain' Dear Fieldtrippers, The departments of Cognitive Neuropsychology and Methodology and Statistics have a vacancy for a 4-year, fully funded PhD position to work on decoding emotions induced in Virtual reality environments from EEG signals. Deadline for applications is April 9th, 2017. Additional information about the position can be found through the link below. Inquiries about the position can be addressed directly to me. https://career012.successfactors.eu/career?_s.crb=%252bZoJOFM7vsQ4kHTupKwp7t2BWvc%253d best, Marcel *** Dr Marcel C.M. 
Bastiaansen Senior lecturer and researcher in quantitative research methods Academy for Leisure & Academy for Tourism NHTV Breda University of Applied Sciences Visiting adress: Room C1.011, Academy for Leisure Archimedesstraat 17, 4816 BA, Breda Phone: +31 76 533 2869 Email: bastiaansen4.m at nhtv.nl And Department of Cognitive Neuropsychology Tilburg School of Social and Behavioral Sciences Tilburg University Visiting address: Room S217, Simon building Warandelaan 2 5000 LE Tilburg Email: M.C.M.Bastiaansen at uvt.nl publications linked-in *** ----------------------------------------------------- Op deze e-mail zijn de volgende voorwaarden van toepassing : The following disclaimer applies to the e-mail message : http://www.nhtv.nl/disclaimer ----------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From narendra.kumar at iitrpr.ac.in Fri Mar 31 13:34:25 2017 From: narendra.kumar at iitrpr.ac.in (narendra karna) Date: Fri, 31 Mar 2017 17:04:25 +0530 Subject: [FieldTrip] Regarding Analysis of EGI's EEG Data using fieldtrip Message-ID: ​Hi, I am pursuing PhD in Linguistics. I don't know much about MATLAB. I have recently done one EEG/ERP experiment using EGI's 128 channel EEG system. I came to know that fieldtrip supports the analysis of EGI's EEG data. So, if be possible can anyone send me the script for analysing EGI's EEG data with ICA analysis. ​Thanks. Narendra​ Research Scholar Department of Humanities and Social Sciences Indian Institute of Technology Ropar Punjab, India - 140001 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max-philipp.stenner at med.ovgu.de Fri Mar 31 15:13:37 2017 From: max-philipp.stenner at med.ovgu.de (Stenner, Max-Philipp) Date: Fri, 31 Mar 2017 13:13:37 +0000 Subject: [FieldTrip] PhD on human motor learning at the Leibniz Institute for Neurobiology, Magdeburg/Germany Message-ID: Dear fieldtrip community, a 3-year PhD position is available for a research project on the role of neural oscillations in motor learning in humans, with Dr Max-Philipp Stenner and Prof Jens-Max Hopf at the Leibniz Institute for Neurobiology in Magdeburg, Germany (http://www.lin-magdeburg.de/en/departments/behavioral_neurology/physiology_motorlearning/index.jsp). Please find all details in the attached pdf. Best wishes Max-Philipp Stenner -------------- next part -------------- A non-text attachment was scrubbed... Name: PhD ad.pdf Type: application/pdf Size: 150597 bytes Desc: PhD ad.pdf URL: From dlozanosoldevilla at gmail.com Fri Mar 31 16:40:36 2017 From: dlozanosoldevilla at gmail.com (Diego Lozano-Soldevilla) Date: Fri, 31 Mar 2017 16:40:36 +0200 Subject: [FieldTrip] how to make the cfg.selectfeature work in ft_databrowser? Message-ID: Hi all, I'm using ft_databrowser to inspect sleep data, and I want to visually mark different events (spindles, K-complexes, artifacts, and so forth) and assign them to different cfg.artfctdef.xxx.artifact substructures. Could somebody help me mark different artifact trial types using the cfg.selectfeature option? Please find below the code and data to reproduce the error I got. I'm using the latest FieldTrip version on Windows with Matlab 7.9b.
Thanks beforehand,
Diego

data = [];
data.label = {'Fpz';'F7';'F3';'Fz';'F4';'F8';'C3';'Cz';'C4';'P3';'Pz';'P4';'O1';'Oz';'O2'};
data.fsample = 250;
data.trial{1} = rand(size(data.label,1), data.fsample*30);
data.time{1} = (1:data.fsample*30)./data.fsample;

cfg = [];
cfg.length = 2;
cfg.overlap = 0;
trl = ft_redefinetrial(cfg, data);

cfg = [];
cfg.channel = 'all';
cfg.blocksize = 2;
cfg.selectfeature = {'a';'b'};
cfg.viewmode = 'vertical';
events = ft_databrowser(cfg, trl);

the input is raw data with 15 channels and 15 trials
detected 0 a artifacts
detected 0 b artifacts
??? Error using ==> plus
Matrix dimensions must agree.
Error in ==> ft_databrowser at 745
hsel = [1 2 3] + (opt.ftsel-1) .*3;
??? Reference to non-existent field 'trlvis'.
Error in ==> ft_databrowser>redraw_cb at 1639
begsample = opt.trlvis(opt.trlop, 1);
Error in ==> ft_databrowser>winresize_cb at 2250
redraw_cb(h,eventdata);
??? Error while evaluating figure ResizeFcn

-------------- next part --------------
An HTML attachment was scrubbed... URL: From sunsunruirui1111 at gmail.com Fri Mar 31 17:40:38 2017 From: sunsunruirui1111 at gmail.com (Rachel S) Date: Fri, 31 Mar 2017 11:40:38 -0400 Subject: [FieldTrip] Fwd: OpenMEEG binaries are not correctly installed In-Reply-To: References: Message-ID: Hello fieldtrip community, My name is Rachel and I am a Master's student working on a project on ECoG. I am trying to use ft_prepare_headmodel with cfg.method = 'openmeeg' and I get the error "OpenMEEG binaries are not correctly installed". I use a Windows machine and I have already added the OpenMEEG install folder to PATH. When I run system('om_assemble'), the output is:

om_assemble version 2.1.0 (799) compiled at Aug 17 2011 19:50:41
Not enough arguments
Please try "om_assemble -h" or "om_assemble --help"
ans = 0

Any suggestions? Thanks in advance. Best wishes, Rachel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sebastian.jobke at nexgo.de Fri Mar 31 18:07:43 2017 From: sebastian.jobke at nexgo.de (Sebastian Jobke) Date: Fri, 31 Mar 2017 18:07:43 +0200 Subject: [FieldTrip] How to compute permutation test on ITC data Message-ID: <00f501d2aa38$ee73c260$cb5b4720$@nexgo.de> Hello Fieldtrip-Community, I am writing to ask for some help. At the moment I am analysing EEG data recorded during a passive oddball paradigm. For the preprocessing I used EEGLAB, transformed the data to the FieldTrip structure, and computed the time-frequency analysis and ITC, for which you provide great tutorials. Now I am a little stuck, because I am wondering how to compute permutation tests on ITC data. I have several subjects and want to compare two conditions (standards and deviants). I saw that there is a function (FT_STATFUN_DIFF_ITC) for this, but unfortunately I don't know how to use it. More specifically, I am wondering how to average over subjects, and whether I have to run the permutation test for every frequency band separately (this is what I did for the time-frequency analysis, as described in your tutorial). Further, I am wondering how to use ft_freqstatistics with ITC data, as you describe it in the tutorial. I would be more than grateful for any advice. Thank you very much in advance. Best, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL:
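As a rough starting point, ITC for one condition can be computed from the complex single-trial Fourier output of ft_freqanalysis. The sketch below is illustrative only: the variable name data_std and the wavelet settings (foi, toi) are assumptions, not taken from the thread.

```matlab
% Sketch: inter-trial coherence (ITC) from complex Fourier spectra.
% Assumes data_std holds preprocessed single-trial data for one condition.
cfg            = [];
cfg.method     = 'wavelet';
cfg.output     = 'fourier';     % keep the complex single-trial spectra
cfg.keeptrials = 'yes';
cfg.foi        = 2:2:30;        % illustrative frequency range
cfg.toi        = -0.2:0.02:0.6; % illustrative time axis
freq = ft_freqanalysis(cfg, data_std);

F   = freq.fourierspctrm;          % trials x channels x freq x time
itc = abs(mean(F ./ abs(F), 1));   % unit-normalize phases, average over trials, take magnitude
itc = squeeze(itc);                % channels x freq x time
```

For the statistics, ft_statfun_diff_itc is designed to work on the complex fourierspctrm rather than on power, so the 'fourier' output (not the ITC values themselves) would be passed into ft_freqstatistics; check the help text of ft_statfun_diff_itc for the exact cfg it expects.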