[FieldTrip] Standardization of EEG recordings collected across days in a within-subject analysis

Nikolaos Vardalakis nikos.chania at gmail.com
Thu Nov 27 16:59:36 CET 2025


Hello to the community,

Apologies if this ends up cross-posted; I may have messed up the first time I
shared my question. To introduce myself, I am a postdoctoral researcher working on
neuropsychiatric disorders and DBS. The datasets we collect include SEEG
recordings in EMU settings, iEEG recordings from fully-implanted devices,
and scalp EEG for longitudinal monitoring of our patients. I'm interested
in the effects of limbic DBS on cognition, which is why I run cognitive
tasks in our (arguably very small) patient cohort.

My main concern (and what I would like to know more about from everyone who
has dealt with this) involves the pooling of single-patient recordings from
multiple days. My particular case involves SEEG data that I preprocess,
epoch, and calculate time-frequency power per trial. All of these steps are
performed separately for each session. To statistically compare two
conditions (A vs. B) within a single session, I log-transform the trial-level
power spectra with ft_math and run permutation tests with custom code (it
is unintuitive how to do this with ft_freqstatistics for SEEG data, which
has no channel neighbours!). For visualization, I apply standard dB
normalization to the raw-power means of conditions A and B against their
common baseline and plot the difference map.
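For concreteness, the single-session steps look roughly like this in FieldTrip
(a sketch only: freqA/freqB are placeholder names for trial-level
ft_freqanalysis outputs computed with keeptrials = 'yes', and the baseline
window is illustrative):

```matlab
% log-transform trial-level power (freqA/freqB are placeholder names)
cfg           = [];
cfg.operation = 'log10';
cfg.parameter = 'powspctrm';
freqA_log = ft_math(cfg, freqA);
freqB_log = ft_math(cfg, freqB);

% ft_freqstatistics can also run without spatial neighbours: with
% cfg.neighbours = [], clusters are formed over time/frequency only,
% separately per (SEEG) channel
cfg                  = [];
cfg.method           = 'montecarlo';
cfg.statistic        = 'ft_statfun_indepsamplesT';
cfg.correctm         = 'cluster';
cfg.neighbours       = [];        % no channel neighbours for SEEG
cfg.numrandomization = 1000;
nA = size(freqA_log.powspctrm, 1);
nB = size(freqB_log.powspctrm, 1);
cfg.design = [ones(1, nA), 2*ones(1, nB)];
cfg.ivar   = 1;
stat = ft_freqstatistics(cfg, freqA_log, freqB_log);

% dB baseline normalization of the trial-averaged conditions, for plotting
cfg = [];
freqA_avg = ft_freqdescriptives(cfg, freqA);
freqB_avg = ft_freqdescriptives(cfg, freqB);
cfg              = [];
cfg.baseline     = [-0.5 0];      % illustrative baseline window
cfg.baselinetype = 'db';
freqA_db = ft_freqbaseline(cfg, freqA_avg);
freqB_db = ft_freqbaseline(cfg, freqB_avg);
```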

My sticking point: data collected across two different sessions/days vary
in power, impedance, signal quality, and so on. My question is this: for a
single subject, how do you handle data standardization prior to pooling,
and what statistical tests do you run? I could compute the mean
baseline-normalized spectrum per session, average those, and end up with
per-patient, per-condition averages, but that is the approach for a
group-level statistical analysis across participants; I would lose the
individual trial structure and therefore couldn't run trial-level
permutation tests. I also considered z-transforming all power values per
session and then running ft_appendfreq, ending up with z-scored
time-frequency power per trial pooled across sessions. This approach seems
unsavory because it forces every session to the same mean and variance, and
I wouldn't know what statistical tests are valid on such a dataset. What
does the community suggest?
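The per-session z-scoring option I have in mind would look something like
this (again a sketch: session1/session2 are placeholder names for
trial-level freq structures from two recording days):

```matlab
% z-score all trial-level power values within one session; note this maps
% every session onto zero mean and unit variance, which is exactly my worry
zscore_pow = @(freq) setfield(freq, 'powspctrm', ...
    (freq.powspctrm - mean(freq.powspctrm(:), 'omitnan')) ...
    ./ std(freq.powspctrm(:), 0, 'omitnan'));

s1z = zscore_pow(session1);
s2z = zscore_pow(session2);

% pool trials across sessions along the repetition dimension
cfg           = [];
cfg.appenddim = 'rpt';
pooled = ft_appendfreq(cfg, s1z, s2z);
```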

Apologies for a (very) long first/second message! Looking forward to
reading everyone's two cents on this; neural data analysis is a form of
art, after all, and everyone has their own pipeline.

Best,
Nikos