[FieldTrip] Problem Loading .fif Data Files
Murphy, Nicholas R
Nicholas.R.Murphy at uth.tmc.edu
Mon Dec 19 20:15:46 CET 2016
Dear all,
My name is Nik Murphy and I work in the department of psychiatry at UTHealth Houston. I'm currently experiencing some problems with the design of my MEG pipeline, which seem to stem from how the data are loaded.
When I run the read-header and read-data commands, FieldTrip begins reading at what appears to be an arbitrary point in the data. I have a total of 452,999 samples (1000 Hz sampling rate) per run and a total of 54 trials. When I load the data using Brainstorm, the total number of samples and events identified is correct; however, when I load the data using FieldTrip I consistently lose up to 100,000 samples from the beginning of my data.
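As a minimal sanity check (a sketch only, assuming `file` holds the path to one .fif run), I've been comparing the header's reported sample count against the expected run length:

```matlab
% Sketch: compare FieldTrip's reported sample count against the
% expected run length (hdr.Fs and hdr.nSamples are standard fields
% of the structure returned by ft_read_header).
hdr = ft_read_header(file);   % file = path to the .fif run (assumed)
fprintf('Fs = %g Hz, nSamples = %d\n', hdr.Fs, hdr.nSamples);
% Expected: nSamples approximately 452999; FieldTrip reports far
% fewer, as if the recording starts later than in Brainstorm.
```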
My pipeline currently looks like this:
hdr = ft_read_header(file);
data = ft_read_data(file, 'header', hdr);
events = ft_read_event(file);
cfg = [];
cfg.dataset = file;
cfg.channel = 'MEG';
cfg.padding = 0;
cfg.padtype = 'data';
cfg.bpfilter = 'yes';
cfg.bpfreq = [0.1 100];
cfg.bsfilter = 'yes';
cfg.bsfreq = [59 61];
cfg.event = events;
cfg.trialdef.eventtype = 'STI101';
cfg.trialdef.prestim = .5;
cfg.trialdef.poststim= 6;
cfg.baselinewindow = [-.2 0];
cfg.demean = 'yes';
cfg = ft_definetrial(cfg);
[data_new] = ft_preprocessing(cfg);
Any help with this would be greatly appreciated.
Many thanks
Regards
Nik
------------------
Dr Nicholas Murphy MSc BSc(HONS)
Nicholas.R.Murphy at uth.tmc.edu