[FieldTrip] Comparing ERPs/ERFs with measurement errors factored in

Sebastian.Neudek at med.uni-duesseldorf.de Sebastian.Neudek at med.uni-duesseldorf.de
Wed Oct 5 13:40:34 CEST 2022


Hi all,

I spent some time thinking about the error calculation for an ERF/ERP, which is probably neglected by many. The method I thought of may not be 100 percent correct, so hopefully you can help me perfect it, but I think it is at least going in the right direction.

tl;dr: first calculate the measurement error (uncertainty) of your device. This measurement error propagates when averaging over trials. The measurement error and the error of the subject's mean propagate to the group's mean when averaging over subjects.


First, let's start with a single subject. You have multiple trials, which are hopefully artifact-free because you applied some preprocessing and filtering. You also have an empty-room recording. Maybe you haven't thought about it, but each value of your ERP/ERF trials carries an uncertainty, which results from random or systematic errors.
What uncertainties? Most prominently there are errors from your measuring device (it can't measure infinitely accurately), your preprocessing method, and some other factors.
How can I calculate these uncertainties? Let's take the empty-room recording. Split it into trials of the same length as your other trials (if the length varies, take the length of the longest trial) and apply the same preprocessing as for your subject. Now we can average these empty-room trials and calculate the standard deviation. What do we expect? In a perfect condition, we expect the average to be 0 across the whole trial (if you de-meaned the data). But it can happen that, because of the preprocessing or systematic errors, the average isn't 0 everywhere. That means: if you measure your subject, all of the trials are affected by this systematic error. To correct it, you can subtract this systematic offset from each of your subject's trials.
The standard deviation (not the standard error!), on the other hand, estimates the random uncertainty of your measuring method at each timepoint. If you detrended your data, the standard deviation should be approximately the same at each timepoint. If not, your uncertainty will grow over time as some sensors drift.
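To make this concrete, here is a minimal sketch in plain MATLAB (not a FieldTrip function). It assumes the empty-room recording and the subject's trials have already been segmented and preprocessed into hypothetical arrays noise_trials and subj_trials of size [ntrials x nchan x ntime]:

  % Minimal sketch, plain MATLAB; 'noise_trials' and 'subj_trials' are assumed
  % to be [ntrials x nchan x ntime] arrays of preprocessed empty-room and
  % subject trials (hypothetical variable names).

  % systematic error: mean of the empty-room trials per channel and timepoint
  sys_err  = squeeze(mean(noise_trials, 1));      % [nchan x ntime]

  % random measurement uncertainty: std (not SEM!) per channel and timepoint
  meas_unc = squeeze(std(noise_trials, 0, 1));    % [nchan x ntime]

  % subtract the systematic error from every subject trial
  % (implicit expansion, MATLAB R2016b or newer)
  subj_trials = subj_trials - reshape(sys_err, [1, size(sys_err)]);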

Back to our subject's data. So far, we have calculated the random uncertainty for each timepoint and applied the systematic correction (also at each timepoint) to our trials. We need to keep in mind that the random uncertainty applies to each trial. What happens if we want to average over our trials?
First we average over all trials as we did before. What we get is the subject's mean and the subject's variance. From these we can calculate the standard error of the subject's mean (this error only reflects the subject's variance and does not factor in the uncertainty of the measurements). The tricky part is to factor in that all of the previously used values aren't measured infinitely accurately, but are subject to the measurement uncertainty. This is done using 'propagation of uncertainty' (also called 'propagation of error').
The uncertainties of two (and all other) trials are independent. If we add two trials, the resulting uncertainty is dz = sqrt(dx^2 + dy^2) (dz is the total uncertainty of both trials added, dx is the uncertainty of trial 1 and dy is the uncertainty of trial 2). Because we want the average of both measurements, the uncertainty is divided by the number of measurements.
(Source: https://www.physics.upenn.edu/sites/default/files/Managing%20Errors%20and%20Uncertainty.pdf)
Therefore, if we average over all trials, the uncertainty is as follows:
d_avg = 1/n * sqrt(d_1^2 + d_2^2 + ... + d_n^2). In the case of a single subject the uncertainties are all the same, and therefore d_avg = sqrt(n)/n * d = d/sqrt(n) (d is the uncertainty of one trial).
What did we calculate with this formula? We calculated the uncertainty of our average due to our measuring method and measuring device. Our previously calculated subject mean is therefore not a perfect measurement, but 'smeared' by our measurement uncertainty. In other words: we couldn't calculate it more precisely, because of our device and methods.
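As a sketch of this averaging and propagation step, continuing with the hypothetical subj_trials and meas_unc arrays from above:

  % Minimal sketch, plain MATLAB, continuing with the hypothetical arrays above.
  ntrials = size(subj_trials, 1);

  % subject ERP/ERF and the standard error of its mean (subject variability only)
  subj_avg = squeeze(mean(subj_trials, 1));                     % [nchan x ntime]
  eos      = squeeze(std(subj_trials, 0, 1)) ./ sqrt(ntrials);  % error of subject's mean

  % measurement uncertainty of the average:
  % d_avg = 1/n * sqrt(d_1^2 + ... + d_n^2), which reduces to d/sqrt(n)
  % when every trial has the same uncertainty d
  eom = meas_unc ./ sqrt(ntrials);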
We now have two errors (uncertainties are also errors): the error of our subject's mean (EOS), due to the subject's variability, and the error of our measurement (EOM). How do these add together?
Because both errors are independent: error_total = sqrt(EOS^2 + EOM^2).
This is now the error of the mean of the ERP/ERF (or total error, if you want to call it that).
To get the standard deviation of the ERP/ERF you need to multiply this error by the square root of the number of trials, and by squaring this standard deviation you get the variance of the ERP/ERF (your actually measured variance), which factors in your measurement errors.
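Put together, still under the same assumptions:

  % Minimal sketch: combine the two independent errors and convert back to a
  % per-timepoint standard deviation and variance of the ERP/ERF.
  err_total = sqrt(eos.^2 + eom.^2);          % total error of the subject's mean
  std_total = err_total .* sqrt(ntrials);     % "measured" standard deviation
  var_total = std_total.^2;                   % "measured" variance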

What do we need to do to compare the ERP/ERF of two subjects?
The total variance calculated above for subject 1 and subject 2 factors in both the measurement error and the subjects' variance. The comparison is therefore based on the variances we actually measured and know for certain. As a result, these variances are always bigger than the subjects' variances alone, so you will get worse p-values.
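Just as an illustration (my own sketch, not a FieldTrip routine), such a per-timepoint comparison could be a Welch-type t statistic with the inflated variances plugged in; subj_avg_1, var_total_1, n1 and so on are assumed to come from the per-subject calculation above:

  % Hypothetical sketch: per-timepoint t-type statistic for two subjects, using
  % the total ("measured") variances instead of the plain trial variances.
  t = (subj_avg_1 - subj_avg_2) ./ sqrt(var_total_1 ./ n1 + var_total_2 ./ n2);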

Last but not least: now the variances and means are calculated for all subjects. How do we calculate the variance of the group?
Again you start as usual by calculating the group's average and variance from the subjects' average values. But again, each of these subject averages carries an uncertainty. We already calculated this uncertainty as the total error of each subject's mean. And again the error propagates when averaging:
The measurement error of the group's average is again err_measurement = 1/n * sqrt(err_total_1^2 + err_total_2^2 + ... + err_total_n^2), because the errors of all subjects are independent. You again need the standard error of the mean across subjects. Your total error of the group is then error_total_group = sqrt(EOS^2 + EOM^2). Converting this back to a variance, you get your measured variance for the group, on which you can do your statistics.
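A sketch of this group-level step, assuming the subject averages and their total errors have been stacked into hypothetical arrays subj_avgs and err_total_subj of size [nsubj x nchan x ntime]:

  % Minimal sketch: propagate each subject's total error to the group average.
  nsubj = size(subj_avgs, 1);

  grp_avg = squeeze(mean(subj_avgs, 1));                         % group ERP/ERF
  grp_eos = squeeze(std(subj_avgs, 0, 1)) ./ sqrt(nsubj);        % SEM across subjects (EOS)
  grp_eom = squeeze(sqrt(sum(err_total_subj.^2, 1))) ./ nsubj;   % propagated measurement error (EOM)
  grp_err = sqrt(grp_eos.^2 + grp_eom.^2);                       % total error of the group mean
  grp_var = (grp_err .* sqrt(nsubj)).^2;                         % "measured" group variance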


What do you think of it? This procedure is only for ERFs/ERPs and not for TFRs. For TFRs I don't know how the error propagates through a Fourier transformation.

Best,
Sebastian












