<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">Hi everyone,</div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">When playing around with different ways of rejecting (muscle) artifacts, I came across something in the tutorial + code that I do not fully understand. The tutorial ( <a href="http://www.fieldtriptoolbox.org/tutorial/automatic_artifact_rejection#iii_z-transforming_the_filtered_data_and_averaging_it_over_channels">http://www.fieldtriptoolbox.org/tutorial/automatic_artifact_rejection#iii_z-transforming_the_filtered_data_and_averaging_it_over_channels</a> ) states that:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif">4. Per timepoint these z-values are 
averaged. Since an artifact might occur on any and often on more than 
one electrode (think of eyeblinks and muscle artifacts), averaging 
z-values over channels/electrodes allows evidence for an artifact to 
accumulate. </div></blockquote><div><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">This line of reasoning makes perfect sense to me. However, in the subsequent mathematical description, and in the implementation, averaging isn't applied at all: instead, a scaled summation is computed.</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">zsum(t) = SUM_{ch=1..C} z(ch,t) / sqrt(C)</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">where C is the number of channels [forgive my math editing skills].<br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">I fail to understand why the accumulated z-score uses a division by the _sqrt_ of the number of channels, rather than simple averaging. In fact, the potential downsides I see are:</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"> - The reported z-value, which is also the one used for thresholding, no longer corresponds to the traditional interpretation of a z-score.</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"> - What counts as a 'good' z-threshold depends on the number of channels, since C changes the scaling.<br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"> - Should we wish to use the channel-level z-scores to identify the channels affected by an artifact, the accumulated value is on a different scale than the channel-level z-values.</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">Am I missing something? 
What are the upsides of the sqrt scaling? <br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default"><br></div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">Thank you!</div><div style="font-family:arial,helvetica,sans-serif" class="gmail_default">Wouter<br></div></div>
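P.S. To make the scaling difference concrete, here is a small numerical sketch (in Python/NumPy rather than FieldTrip's MATLAB, and purely illustrative — the channel counts are made up). For clean, unit-variance channel z-scores, the sqrt-scaled sum keeps a standard deviation near 1 for any C, while the plain average shrinks like 1/sqrt(C):

```python
import numpy as np

rng = np.random.default_rng(0)

for C in (16, 64, 256):  # hypothetical channel counts
    # z-scores for C "clean" channels at 100k timepoints (unit variance each)
    z = rng.standard_normal((C, 100_000))
    zsum = z.sum(axis=0) / np.sqrt(C)  # the scaled summation from the tutorial
    zavg = z.mean(axis=0)              # plain averaging over channels
    print(f"C={C:3d}  std(zsum)={zsum.std():.3f}  std(zavg)={zavg.std():.3f}")
```

In this sketch std(zsum) stays near 1 for every C, while std(zavg) is roughly 1/sqrt(C) — which is part of why I wonder how a fixed threshold is supposed to transfer across channel counts under either scheme.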