[FieldTrip] impact of skewed power distributions on data analysis

Matt Craddock m.p.craddock at leeds.ac.uk
Fri Jan 13 15:51:32 CET 2017


Hi all,

I'm fiddling around with simulations at the moment but haven't got a lot 
of free time, and every time I do fiddle with them I seem to make them 
more complicated...

Correct me if I'm wrong, but there are a couple of different issues here.

About the skewness of the distribution - you're not usually comparing 
two skewed distributions statistically (and even that would probably be 
OK if they're similarly skewed); usually you're comparing a bunch of 
means taken from those distributions, and the distribution of those 
means will often be approximately normal (as will their differences), 
and thus fine for parametric stats. So the main problem is really 
whether the mean is a good summary statistic for capturing differences 
between the underlying distributions (there are similar debates about 
reaction times) - it's certainly not robust to outliers. The median 
might be better in some ways, but it's a biased estimator of the 
population median, which may have consequences for statistical power - 
that's one for the simulations at some point.
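
A quick sketch of that first point, with squared Gaussian noise standing 
in for single-trial power (illustrative only):

   nsubj = 30; ntrials = 100;
   trialpow  = randn(ntrials, nsubj).^2;  % right-skewed single-trial "power"
   subjmeans = mean(trialpow, 1);         % one mean per subject
   subplot(211), hist(trialpow(:), 100);  % heavily skewed at the trial level
   subplot(212), hist(subjmeans, 20);     % roughly normal at the level you test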

Taking the log first won't necessarily help. As the code Michael Cohen 
provides shows, taking the log can still leave you with skewed 
distributions - power is right-skewed but its log is left-skewed. Taking 
the square root does better in that example, but only because it takes 
you back to amplitude (the absolute value of the original normal draws), 
which is far less skewed - whether that holds for real data needs 
checking. In addition, if you take the log you're no longer testing the 
same thing when you compare means: you're effectively comparing 
geometric means rather than arithmetic means. Differences in the 
arithmetic means across conditions will not necessarily translate into 
differences in the geometric means, and vice versa.
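
To make the arithmetic/geometric point concrete, a toy example with 
made-up numbers:

   a = [1 1 1 1 16];                     % condition A: mostly small values, one large
   b = [2 2 2 2 2];                      % condition B: uniform values
   mean(a), mean(b)                      % arithmetic means: 4 vs 2  (A > B)
   exp(mean(log(a))), exp(mean(log(b)))  % geometric means: ~1.74 vs 2  (A < B)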

If you use a subtractive baseline, using the median or the mean of 
power only changes what constant you subtract from the data. If you 
baseline-correct using the average of all trials rather than 
condition-specific baselines, it should make no difference to the stats 
whether you use the mean or the median of baseline power (or indeed 
whether you baseline-correct at all). If you test differences across 
electrodes as well as conditions, and use electrode-specific but not 
condition-specific baselines - which seems reasonable, since you might 
expect bigger baseline differences across than within electrodes - it 
will only change the main effect of electrode, not the condition effect 
or the condition-by-electrode interaction. If you use condition- and 
electrode-specific baselines, then it influences *everything*, and you 
also no longer know whether differences between conditions/electrodes 
are due to baseline or post-stimulus differences.

Using the median rather than the mean will change the stats if you use 
divisive baselines, but whether that's a good or a bad difference is 
another question for the simulations.
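
Here's a toy example of those last two points (just a sketch with 
made-up numbers, using a 'relative'-style divisive baseline):

   bl    = [1 1 2 10];      % skewed baseline power over time
   postA = 4; postB = 3;    % post-stimulus power in two conditions

   % subtractive baseline: the condition difference is untouched
   (postA - mean(bl))   - (postB - mean(bl))    % = 1
   (postA - median(bl)) - (postB - median(bl))  % = 1, identical

   % divisive baseline: the condition difference gets rescaled
   postA/mean(bl)   - postB/mean(bl)            % = 1/3.5  ~ 0.29
   postA/median(bl) - postB/median(bl)          % = 1/1.5  ~ 0.67

Since that rescaling will differ across subjects and electrodes, the 
mean/median choice can feed through to the group stats.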

Cheers,
Matt

On 12/01/2017 13:37, Mike X Cohen wrote:
> Interesting discussion here. I think we should take a step back and
> distinguish between trivial and nontrivial causes and consequences for
> the skewed distribution. To some extent, the non-normal distribution is
> simply the result of defining power as a squared distance -- distances
> are always positive and squaring them means big values become really
> big. Consider the following:
>
> d = randn(10000,1);                % random numbers
> subplot(311), hist(d,500);         % their distribution
> subplot(312), hist(d.^2,500);      % "power" distribution (also try a log-scaled y-axis)
> subplot(313), hist(log(d.^2),500); % log-power distribution
>
> The fact that power values have a power-law-like distribution is
> therefore trivial.
>
> But this leads to two non-trivial questions:
> 1) Is this distribution meaningful for brain function (beyond simply the
> result of taking squared values)? People who study "the log-brain" and
> fractal-like (or scale-free) organization of brain function would argue
> that these distributions reveal meaningful insights into brain function,
> and therefore it is not really an artifact for analyses. In other words,
> large values are large for a reason; they might not be outliers that we
> should attempt to compress.
>
> 2) Does it matter for real data analysis? I think this is Teresa's
> initial concern. I'm inclined to think that it doesn't really matter,
> but that's just based on the idea (hope!) that if it did really matter
> and the way we do it is wrong, the field would have discovered this a
> long time ago. On the other hand, this wouldn't be the first time that
> people have gotten things wrong for decades.
>
> I think the best way to investigate #2 would be with simulated data,
> showing that a "true" effect is missed when not first computing
> log-power, presumably because the effect was overshadowed by
> large-amplitude "noise" (but somehow this would have to be physiological
> noise that wouldn't get rejected during data cleaning). In empirical
> data, all you'd be able to do is show a difference without knowing the
> right answer.
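>
> A minimal sketch of what such a simulation could look like (assuming
> log-normal single-trial power and a purely multiplicative effect;
> ttest2 is from the Statistics Toolbox):
>
> ntrials = 50; nsim = 1000; praw = zeros(nsim,1); plog = zeros(nsim,1);
> for s = 1:nsim
>   powA = exp(2*randn(ntrials,1));      % heavy-tailed "power", condition A
>   powB = exp(2*randn(ntrials,1) + 1);  % same, shifted by one log-unit
>   [~, praw(s)] = ttest2(powA, powB);            % t-test on raw power
>   [~, plog(s)] = ttest2(log(powA), log(powB));  % t-test on log power
> end
> mean(praw < 0.05), mean(plog < 0.05)   % detection rates: raw vs log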
>
> btw, make sure to be careful with baselining log-power -- any divisions
> (e.g., dB or percent change) will be unstable/run off to infinity when
> the baseline log-power is close to zero, i.e., when raw power is close
> to 1. The sign can also flip. Probably best to use a baseline subtraction.
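>
> A two-line illustration of that instability (a sketch): once power has
> been log-transformed, the baseline crosses zero whenever raw power
> crosses 1.
>
> logpow = log([1.05 0.95 2 0.5]);  % post-stimulus log-power
> logbl  = log(1.001);              % baseline raw power ~1, so log-power ~0
> logpow ./ logbl                   % enormous, sign-flipping "relative" values
> logpow - logbl                    % subtraction stays well-behaved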
>
> Mike
>
>
>
>
>
> --------------------------------------------><------------------------------------------------
>
> Thanks for the reference.  In return, I also recommend reading Ciuparu and
> Mureşan's recent rebuttal:
>
> Ciuparu, A., & Mureşan, R. C. (2016). Sources of bias in single-trial
> normalization procedures (Technical Spotlight). European Journal of
> Neuroscience, 43, 861-869. doi:10.1111/ejn.13179
> <http://dx.doi.org/10.1111/ejn.13179>
>
> They demonstrate that the positive bias Grandchamp and Delorme warned about
> with single-trial baseline normalization was, in fact, due to the
> correlated numerators and denominators in each of the baseline
> normalization procedures they tested, which was, in turn, caused by the
> skewed distributions of baseline power values.  They show that this bias
> is reduced by using a much longer baseline, ideally incorporated into
> the experimental design, or, when that's not possible, by stitching
> together the baselines of many trials.
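>
> A sketch of that stitching idea on a FieldTrip-style powspctrm
> (rpt x chan x freq x time, as returned with cfg.keeptrials = 'yes';
> fake data, not their code):
>
> powspctrm = randn(40, 4, 10, 100).^2;           % 40 trials x 4 chans x 10 freqs x 100 bins
> blbins    = 1:20;                               % time bins treated as baseline
> blpool    = powspctrm(:, :, :, blbins);         % baseline bins stitched across all trials
> blest     = mean(mean(blpool, 4), 1);           % one pooled estimate per channel/frequency
> relpow    = bsxfun(@rdivide, powspctrm, blest); % divisive baseline with the pooled estimate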
>
> Neither article addresses my specific question of whether it would be even
> better to log-transform the raw power values before averaging, so I suppose
> I'll have to test it myself and write my own methods article!  🤓
>
> I will go ahead and incorporate some of these alternative baseline
> normalization methods in my git fork of FieldTrip as I go along with my own
> analyses, so let me know if anyone else would find them useful, and I'll
> submit a pull request to contribute them to the master branch.
>
> Thanks for the fruitful discussion, all!
> ~Teresa
>
>
> On Mon, Dec 19, 2016 at 8:01 PM, Alik Widge <alik.widge at gmail.com> wrote:
>
>> Indeed, in a separate thread with Michael Cohen several months back he
>> suggested precisely that paper.
>>
>> On Dec 19, 2016 5:07 PM, "Nicholas A. Peatfield" <nick.peatfield at gmail.com> wrote:
>>
>>> I think this paper is relevant to this discussion.
>>>
>>> Grandchamp, R., & Delorme, A. (2011). Single-Trial Normalization for
>>> Event-Related Spectral Decomposition Reduces Sensitivity to Noisy
> Trials. *Frontiers
>>> in Psychology*, *2*, 236. http://doi.org/10.3389/fpsyg.2011.00236
>>>
>>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3183439/
>>>
>>>
>>>
>>> On 19 December 2016 at 13:08, Teresa Madsen <tmadsen at emory.edu> wrote:
>>>
>>>> I appreciate everyone's feedback, but I still wonder if something is
>>>> being missed.  I understand that the non-normally distributed power
> values
>>>> may be less of an issue when performing non-parametric stats or even a
>>>> paired-samples t-test that looks at difference values which may be
> normal
>>>> even when the raw data isn't.  However, my concern comes into play even
>>>> before these statistical comparisons are made, whenever any averaging is
>>>> done to freq-type data across times, frequencies, trials, electrodes,
>>>> subjects, etc.  That means any time any of these configuration
> options are
>>>> used for any of these functions, and probably more:
>>>>
>>>> ft_freqanalysis:     cfg.keeptrials or cfg.keeptapers = 'no';
>>>> ft_freqgrandaverage: cfg.keepindividual = 'no';
>>>> ft_freqstatistics:   cfg.avgoverchan, cfg.avgovertime, or cfg.avgoverfreq = 'yes';
>>>> ft_freqbaseline:     cfg.baseline = anything but 'no'
>>>>
>>>> In each case, if raw power values are averaged, the result will be
>>>> positively skewed.  Maybe it's not a huge problem if all of the data is
>>>> treated identically, but the specific case that triggered my concern
> was in
>>>> ft_freqbaseline, where the individual time-frequency bins are
> compared to
>>>> the mean over time for the baseline period.  For example, when using
>>>> cfg.baselinetype = 'db', as Giuseppe Pellizzer suggested, the output
> freq
>>>> data does indeed have a more normal distribution over time, but the mean
>>>> over the baseline time period is performed *before* the log
> transform, when
>>>> the distribution is still highly skewed:
>>>>
>>>>   meanVals = repmat(nanmean(data(:,:,baselineTimes), 3), [1 1 size(data, 3)]);
>>>>   data = 10*log10(data ./ meanVals);
>>>>
>>>> That's what I had originally done when analyzing data for my SfN poster,
>>>> when I realized the background noise that shouldn't have changed
> much from
>>>> baseline was mostly showing a decrease from baseline of about -3dB.
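>>>>
>>>> (An offset of that size is easy to reproduce with stationary,
>>>> made-up log-normal "power" that doesn't change at all - a sketch:)
>>>>
>>>>   pow    = exp(1.2*randn(1, 10000));  % stationary log-normal "power" over time
>>>>   blmean = mean(pow);                 % baseline estimate taken on the raw power
>>>>   mean(10*log10(pow ./ blmean))       % roughly -3 dB, even though nothing changed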
>>>>
>>>> Now, I've realized I'm seeing this as more of a problem than others
>>>> because of another tweak I made, which was to use a long, separate
> baseline
>>>> recording to normalize my trial data, rather than a short pre-trial
> period
>>>> as ft_freqbaseline is designed to do.  Averaging a few hundred
> milliseconds
>>>> for a baseline power estimate might be okay because overlapping time
> points
>>>> in the original data are used to calculate those power values anyway,
>>>> probably making them less skewed, but also (it seems to me) more
> arbitrary
>>>> and prone to error.  I already offered my custom function BLnorm.m
> to one
>>>> person who was asking about this issue of normalizing to a separate
>>>> baseline recording, and I would be happy to contribute it to
> FieldTrip if
>>>> others would appreciate it.
>>>>
>>>> Since a few people suggested using the median, and it is also suggested
>>>> in Cohen's textbook
>>>> <https://mitpress.mit.edu/books/analyzing-neural-time-series-data> as
>>>> an alternative measure of the central tendency for skewed raw power
> values,
>>>> I wonder if the simplest fix might be to add an option to select mean or
>>>> median in each of the functions listed above.  Another possibility
> would be
>>>> adding an option to transform the power values upon output from
>>>> ft_freqanalysis.
>>>>
>>>> Would anyone else find such changes useful?
>>>>
>>>> Thanks,
>>>> Teresa
>>>>
>>>>
>>>> On Wed, Dec 14, 2016 at 4:22 AM, Herring, J.D. (Jim) <J.Herring at donders.ru.nl> wrote:
>>>>
>>>>> In terms of statistics it is the distribution of values that you do the
>>>>> statistics on that matters. In case of a paired-samples t-test when
>>>>> comparing two conditions, it is the distribution of difference
> values that
>>>>> has to be normally distributed. The distribution of difference
> values is
>>>>> often normal given two similarly non-normal distributions, offering no
>>>>> complications for a regular parametric test.
>>>>>
>>>>>
>>>>>
>>>>> The non-parametric tests offered in fieldtrip indeed do not assume
>>>>> normality, so you should have no problem there either.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> *From:* fieldtrip-bounces at science.ru.nl
>>>>> [mailto:fieldtrip-bounces at science.ru.nl] *On Behalf Of* Alik Widge
>>>>> *Sent:* Tuesday, December 13, 2016 3:10 PM
>>>>> *To:* FieldTrip discussion list <fieldtrip at science.ru.nl>
>>>>> *Subject:* Re: [FieldTrip] impact of skewed power distributions on
>>>>> data analysis
>>>>>
>>>>>
>>>>>
>>>>> In this, Teresa is right and we have observed this in our own EEG data
>>>>> -- depending on one's level of noise and number of trials/patients, the
>>>>> mean can be a very poor estimator of central tendency. My students are
>>>>> still arguing about what we really want to do with it, but at least
> one of
>>>>> them has shifted to using the median as a matter of course for baseline
>>>>> normalization.
>>>>>
>>>>>
>>>>> Alik Widge
>>>>> alik.widge at gmail.com
>>>>> (206) 866-5435
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Dec 12, 2016 at 6:45 PM, Teresa Madsen <tmadsen at emory.edu> wrote:
>>>>>
>>>>> That may very well be true; to be honest, I haven't looked that deeply
>>>>> into the stats offerings yet. However, my plan is to express each
>>>>> electrode's experimental data in terms of change from their respective
>>>>> baseline recordings before attempting any group averaging or
> statistical
>>>>> testing, and this problem shows up first in the baseline correction
> step,
>>>>> where FieldTrip averages raw power over time.
>>>>>
>>>>> ~Teresa
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Dec 12, 2016 at 4:56 PM Nicholas A. Peatfield <nick.peatfield at gmail.com> wrote:
>>>>>
>>>>> Correct me if I'm wrong, but, if you are using the non-parametric
>>>>> statistics implemented by fieldtrip, the data does not need to be
> normally
>>>>> distributed.
>>>>>
>>>>>
>>>>>
>>>>> On 12 December 2016 at 13:39, Teresa Madsen <tmadsen at emory.edu> wrote:
>>>>>
>>>>> No, sorry, that's not what I meant, but thanks for giving me the
>>>>> opportunity to clarify. Of course everyone is familiar with the 1/f
> pattern
>>>>> across frequencies, but the distribution across time (and according
> to the
>>>>> poster, also across space), also has an extremely skewed, negative
>>>>> exponential distribution. I probably confused everyone by trying to
> show
>>>>> too much data in my figure, but each color represents the
> distribution of
>>>>> power values for a single frequency over time, using a histogram
> and a line
>>>>> above with circles at the mean +/- one standard deviation.
>>>>>
>>>>> My main point was that the mean is not representative of the central
>>>>> tendency of such an asymmetrical distribution of power values over
> time.
>>>>> It's even more obvious which is more representative of their actual
>>>>> distributions when I plot e^mean(logpower) on the raw plot and
>>>>> log(mean(rawpower)) on the log plot, but that made the figure even more
>>>>> busy and confusing.
>>>>>
>>>>> I hope that helps,
>>>>> Teresa
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Dec 12, 2016 at 3:47 PM Nicholas A. Peatfield <nick.peatfield at gmail.com> wrote:
>>>>>
>>>>> Hi Teresa,
>>>>>
>>>>>
>>>>>
>>>>> I think what you are discussing is the 1/f power scaling of the power
>>>>> spectrum. This is one of the reasons that comparisons are made within
>>>>> a band (i.e. alpha to alpha) and not between bands (i.e. alpha to
> gamma),
>>>>> as such the assumption is that within bands there should be a relative
>>>>> change against baseline and this is what the statistics are
> performed on.
>>>>> That is, baseline correction is assumed to be the mean for a specific
>>>>> frequency and not a mean across frequencies.
>>>>>
>>>>>
>>>>>
>>>>>  And this leads to another point that when you are selecting a
>>>>> frequency range to do the non-parametric statistics on you should
> not do
>>>>> 1-64 Hz but break it up based on the bands.
>>>>>
>>>>>
>>>>>
>>>>> Hope my interpretation of your point is correct. I sent in
>>>>> individually, as I wanted to ensure I followed your point.
>>>>>
>>>>>
>>>>>
>>>>> Cheers,
>>>>>
>>>>>
>>>>>
>>>>> Nick
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 12 December 2016 at 08:23, Teresa Madsen <tmadsen at emory.edu> wrote:
>>>>>
>>>>> FieldTrippers,
>>>>>
>>>>>
>>>>>
>>>>> While analyzing my data for the annual Society for Neuroscience
>>>>> meeting, I developed a concern that was quickly validated by
> another poster
>>>>> (full abstract copied and linked below) focusing on the root of the
>>>>> problem:  neural oscillatory power is not normally distributed
> across time,
>>>>> frequency, or space.  The specific problem I had encountered was in
>>>>> baseline-correcting my experimental data, where, regardless of
>>>>> cfg.baselinetype, ft_freqbaseline depends on the mean power over time.
>>>>> However, I found that the distribution of raw power over time is so
> skewed
>>>>> that the mean was not a reasonable approximation of the central
> tendency of
>>>>> the baseline power, so it made most of my experimental data look
> like it
>>>>> had decreased power compared to baseline.  The more I think about
> it, the
>>>>> more I realize that averaging is everywhere in the way we analyze
> neural
>>>>> oscillations (across time points, frequency bins, electrodes, trials,
>>>>> subjects, etc.), and many of the standard statistics people use
> also rely
>>>>> on assumptions of normality.
>>>>>
>>>>>
>>>>>
>>>>> The most obvious solution for me was to log transform the data first,
>>>>> as it appears to be fairly log normal, and I always use log-scale
>>>>> visualizations anyway.  Erik Peterson, middle author on the poster,
> agreed
>>>>> that this would at least "restore (some) symmetry to the error
>>>>> distribution."  I used a natural log transform, sort of arbitrarily to
>>>>> differentiate from the standard decibel transform included in
> FieldTrip as
>>>>> cfg.baselinetype = 'db'.  The following figures compare the 2
> distributions
>>>>> across several frequency bands (using power values from a wavelet
>>>>> spectrogram obtained from a baseline LFP recorded in rat prelimbic
>>>>> cortex).  The lines at the top represent the mean +/- one standard
>>>>> deviation for each frequency band, and you can see how those
> descriptive
>>>>> stats are much more representative of the actual distributions in
> the log
>>>>> scale.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> [inline figures comparing raw and log-transformed power distributions per frequency band - not included in the archive]
>>>>>
>>>>> For my analysis, I also calculated a z-score on the log transformed
>>>>> power to assess how my experimental data compared to the
> variability of the
>>>>> noise in a long baseline recording from before conditioning, rather
> than a
>>>>> short pre-trial baseline period, since I find that more informative
> than
>>>>> any of FieldTrip's built-in baseline types.  I'm happy to share the
> custom
>>>>> functions I wrote for this if people think it would be a useful
> addition to
>>>>> FieldTrip.  I can also share more about my analysis and/or a copy
> of the
>>>>> poster, if anyone wants more detail - I just didn't want to make
> this email
>>>>> too big.
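>>>>>
>>>>> (The core of that normalization could be as simple as the sketch
>>>>> below - illustrative only, not the actual custom functions:)
>>>>>
>>>>>   blpow   = exp(randn(1, 5000));       % power from a long baseline recording (fake)
>>>>>   taskpow = exp(randn(1, 500) + 0.4);  % power from the experimental trials (fake)
>>>>>   zpow    = (log(taskpow) - mean(log(blpow))) ./ std(log(blpow));  % z-scored log-power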
>>>>>
>>>>>
>>>>>
>>>>> Mostly, I'm just hoping to start some discussion here as to how to
>>>>> address this.  I searched the wiki
>>>>> <http://www.fieldtriptoolbox.org/development/zscores>, the listserv
>>>>> archives
>>>>> <https://mailman.science.ru.nl/pipermail/fieldtrip/2006-December/000773.html>
>>>>> <https://mailman.science.ru.nl/pipermail/fieldtrip/2010-March/002718.html>,
>>>>> and bugzilla
>>>>> <http://bugzilla.fieldtriptoolbox.org/show_bug.cgi?id=1574> for
>>>>> anything related and came up with a few topics surrounding normalization
>>>>> and baseline correction, but only skirting this issue.  It seems important,
>>>>> so I want to find out whether others agree with my approach or already have
>>>>> other ways of avoiding the problem, and whether FieldTrip's code needs to
>>>>> be changed or just documentation added, or what?
>>>>>
>>>>>
>>>>>
>>>>> Thanks for any insights,
>>>>>
>>>>> Teresa
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 271.03 / LLL17 - Neural oscillatory power is not Gaussian distributed
>>>>> across time
>>>>> <http://www.abstractsonline.com/pp8/#!/4071/presentation/24150>
>>>>>
>>>>> *Authors*
>>>>>
>>>>> **L. IZHIKEVICH*, E. PETERSON, B. VOYTEK;
>>>>> Cognitive Sci., UCSD, San Diego, CA
>>>>>
>>>>> *Disclosures*
>>>>>
>>>>>  *L. Izhikevich:* None. *E. Peterson:* None. *B. Voytek:* None.
>>>>>
>>>>> *Abstract*
>>>>>
>>>>> Neural oscillations are important in organizing activity across the
>>>>> human brain in healthy cognition, while oscillatory disruptions are
> linked
>>>>> to numerous disease states. Oscillations are known to vary by
> frequency and
>>>>> amplitude across time and between different brain regions; however,
> this
>>>>> variability has never been well characterized. We examined human
> and animal
>>>>> EEG, LFP, MEG, and ECoG data from over 100 subjects to analyze the
>>>>> distribution of power and frequency across time, space and species. We
>>>>> report that between data types, subjects, frequencies, electrodes, and
>>>>> time, an inverse power law, or negative exponential distribution, is
>>>>> present in all recordings. This is contrary to, and not compatible
> with,
>>>>> the Gaussian noise assumption made in many digital signal processing
>>>>> techniques. The statistical assumptions underlying common
> algorithms for
>>>>> power spectral estimation, such as Welch's method, are being violated
>>>>> resulting in non-trivial misestimates of oscillatory power. Different
>>>>> statistical approaches are warranted.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Teresa E. Madsen, PhD
>>>>> Research Technical Specialist: *in vivo* electrophysiology & data
>>>>> analysis
>>>>>
>>>>> Division of Behavioral Neuroscience and Psychiatric Disorders
>>>>> Yerkes National Primate Research Center
>>>>>
>>>>> Emory University
>>>>>
>>>>> Rainnie Lab, NSB 5233
>>>>> 954 Gatewood Rd. NE
>>>>> Atlanta, GA 30329
>>>>>
>>>>> (770) 296-9119
>>>>>
>>>>> braingirl at gmail.com
>>>>>
>>>>> https://www.linkedin.com/in/temadsen
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Nicholas Peatfield, PhD
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Teresa E. Madsen, PhD
>>>> Division of Behavioral Neuroscience and Psychiatric Disorders
>>>> Yerkes National Primate Research Center
>>>> Emory University
>>>> Rainnie Lab, NSB 5233
>>>> 954 Gatewood Rd. NE
>>>> Atlanta, GA 30329
>>>> (770) 296-9119
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Nicholas Peatfield, PhD
>>>
>>>
>>>
>>
>>
>
>
>
> --
> Teresa E. Madsen, PhD
> Division of Behavioral Neuroscience and Psychiatric Disorders
> Yerkes National Primate Research Center
> Emory University
> Rainnie Lab, NSB 5233
> 954 Gatewood Rd. NE
> Atlanta, GA 30329
> (770) 296-9119
>
> --
> Mike X Cohen, PhD
> mikexcohen.com
>
>
> _______________________________________________
> fieldtrip mailing list
> fieldtrip at donders.ru.nl
> https://mailman.science.ru.nl/mailman/listinfo/fieldtrip
>


