Misunderstanding the Microbiome: misuse of community ecology tools to understand microbial communities

Understanding how the vast collection of organisms within us (the ‘microbiome’) is linked to human (and ecosystem) health is one of the most exciting scientific topics today. It really does have the potential to improve our lives considerably, though it is often over-hyped (see the link below). However, I’ve recently been reading quite a few microbiome papers (it was our journal club’s topic of the month) and have been struck by the poor study design and lack of understanding of the statistical methodology. Talking to colleagues in the microbiome field, these problems may be more widespread and could be hindering our progress in understanding this important component of the ecosystem within us.

Of course, microbiome research is simply microbial community ecology, but the way some microbiome practitioners use and report community ecology statistics is problematic and sometimes outright deceptive. This includes people publishing in the highest-profile scientific journals. I won’t pick on any particular paper, but here are a few general observations (sorry for the technical detail).

  1. Effect sizes are often not reported or visualized using ordination techniques. A significant P value is reported, but how do we know whether the effect is biologically relevant? My guess is that the effect sizes are small, as is often the case with free-living communities.
  2. Little detail is given about how a particular test was performed. A typical example: “We did a PERMANOVA to test for XX”. Setting aside the general issues with PERMANOVA (see the Warton et al. paper below), no information is given about the test itself: was it a two-way crossed design? Were Type III sums of squares used? Was multivariate dispersion checked with PERMDISP or similar? That is literally one of the few assumptions of the test, yet I haven’t read a microbiome paper that checked it, and if it wasn’t checked we can’t trust the results (a minimal sketch of these checks follows this list). Have the authors read the original paper by Marti Anderson? Some cite it, at least…
  3. I haven’t found a single PCA or PCoA plot that reports the % of variance explained. This is annoying: the axes shown may explain only a small fraction of the variance in the community, so the pretty little clusters of points may be largely artificial.
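To make point 2 concrete, here is a minimal sketch of the sort of reporting and checking I mean, using the vegan package in R. The objects comm (a site-by-OTU abundance matrix) and meta (a data frame with factors treatment and site) are hypothetical placeholders, not from any real study, and the design choices (Bray-Curtis dissimilarities, sequential sums of squares, 999 permutations) are assumptions you would need to state and justify for your own data.

    library(vegan)

    # Bray-Curtis dissimilarities from a (hypothetical) site-by-OTU matrix 'comm'
    dist_bc <- vegdist(comm, method = "bray")

    # PERMANOVA: state the design and report effect sizes (R2), not just P values
    perm <- adonis2(dist_bc ~ treatment * site, data = meta,
                    permutations = 999, by = "terms")  # sequential SS; state your choice
    print(perm)                                        # the R2 column is the effect size

    # Check the multivariate dispersion assumption (PERMDISP)
    disp <- betadisper(dist_bc, meta$treatment)
    permutest(disp, permutations = 999)                # significant = unequal dispersion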

I’ll stop ranting. These issues really impair interpretation of the results and make the science difficult to replicate. It makes you ask, “how do these papers get through the gates?” I’m guessing that a significant proportion of authors, reviewers and editors have little experience in community biostatistics and don’t really understand what the tests are doing. They are relying on analytical pipelines such as QIIME that claim to produce ‘publication quality graphics and statistics’ without thinking much more about it. More microbiome researchers need to go beyond these pipelines and keep up to date with community methods more broadly. The quality of the research would clearly improve.

Links:

Microbiome over-hype: http://www.nature.com/news/microbiology-microbiome-science-needs-a-healthy-dose-of-scepticism-1.15730

Warton et al: http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210X.2011.00127.x/full

Marti Anderson’s paper: http://onlinelibrary.wiley.com/doi/10.1111/j.1442-9993.2001.01070.pp.x/full


Guide to reducing data dimensions

In a world where collecting enormous amounts of complex, co-linear data is increasingly the norm, techniques that reduce data dimensions to something usable in statistical models are essential. However, in ecology at least, the means of doing this are unclear and the information out there is confusing. Earlier this year Meng et al. provided a nice overview of what’s available (see link below), specifically for reducing omics data sets, but it is equally relevant for ecologists. One weakness of the paper is that it offers only a small amount of practical advice, particularly on how to interpret the resulting dimension-reduced data. Overall, though, it is an excellent guide, and I aim to add a bit of extra practical advice on dimension reduction using the techniques that I use.

Anyway, before going forward, what do we mean by dimension reduction? Paraphrasing from Meng et al.: dimension reduction is the mapping of data to a lower-dimensional space such that redundant variance in the data is reduced, allowing for a lower-dimensional representation (say 2-3 dimensions, but sometimes many more) without significant loss of information. My philosophy is to use the data in raw form wherever possible, but where this is problematic due to co-linearity etc., and where machine learning algorithms such as Random Forests are not appropriate (e.g., your response variable is a matrix…), this is my brief practical guide to three common techniques:

PCA: dependable principal components analysis – excellent if you have lots of correlated continuous predictor variables with few missing values and few zeros. There are a large number of PCA-based analyses that may be useful (e.g., penalized PCA for feature selection, see Meng et al.), but I’ve never used them. Choosing the right number of PCs is subjective and is a problem for many of these techniques; an arbitrary cutoff of retaining PCs that account for ~70% of the original variation seems reasonable (a quick sketch is below). However, if your PCs each explain only a small amount of variation you have a problem, as fitting a large number of PCs to a regression model is usually not an option (and if PC25 is a significant predictor, what does that even mean biologically?). Furthermore, if there are non-linear trends this technique won’t be useful.
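A minimal sketch of this workflow in base R, assuming a hypothetical data frame env of continuous, correlated predictors with complete cases; the 70% cutoff is just the arbitrary rule of thumb mentioned above, not a formal criterion.

    pca <- prcomp(env, center = TRUE, scale. = TRUE)   # scale when variables have different units

    # Variance explained by each PC, and cumulatively
    var_exp <- summary(pca)$importance["Proportion of Variance", ]
    cum_exp <- cumsum(var_exp)
    n_pcs   <- which(cum_exp >= 0.70)[1]               # smallest set reaching ~70%
    round(100 * cum_exp[1:n_pcs], 1)

    scores <- pca$x[, 1:n_pcs, drop = FALSE]           # PC scores for downstream models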

PCoA: principal co-ordinate analysis (or classical scaling) is similar to PCA but operates on a dissimilarity matrix rather than the raw variables. It has the same limitations as PCA and, as above, the % variance represented by the plotted axes should be reported (a quick sketch is below).
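A minimal PCoA sketch, again assuming a hypothetical community matrix comm and using Bray-Curtis dissimilarities from the vegan package:

    library(vegan)
    dist_bc <- vegdist(comm, method = "bray")
    pcoa    <- cmdscale(dist_bc, k = 2, eig = TRUE)    # classical scaling / PCoA

    # % variance represented by each axis (positive eigenvalues only)
    axis_var <- 100 * pcoa$eig / sum(pcoa$eig[pcoa$eig > 0])
    plot(pcoa$points,
         xlab = sprintf("PCoA 1 (%.1f%%)", axis_var[1]),
         ylab = sprintf("PCoA 2 (%.1f%%)", axis_var[2]))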

NMDS: non-metric multidimensional scaling is a much more flexible method that copes much better with non-linear trends in the data. Rather than finding the axes that best represent variation in the data (as PCA and PCoA do), NMDS tries to best preserve the rank order of distances between objects. It usually captures the structure in a few dimensions (often 2-3), though it is important to assess the fit via the ‘stress’ value (values below 0.1 are usually OK). There is debate about how useful the axis scores are (see here: https://stat.ethz.ch/pipermail/r-sig-ecology/2016-January/005246.html), as they are rank-based and axis 1 does not necessarily explain the largest amount of variation, and so forth, as is the case with PCA/PCoA. However, I still think this is a useful strategy (see the Beale 2006 link below); a brief sketch follows.
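A minimal NMDS sketch with vegan, again on the hypothetical comm matrix; note there is no % variance to report for the axes, so report the stress instead.

    library(vegan)
    nmds <- metaMDS(comm, distance = "bray", k = 2, trymax = 100)
    nmds$stress                    # report this; < ~0.1 is usually fine, > 0.2 is suspect
    stressplot(nmds)               # Shepard plot: how well rank distances are preserved
    plot(nmds, display = "sites")  # unlike PCA/PCoA, the axes carry no % variance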

I stress (no pun intended!) that the biggest problem with these techniques is interpretation of the new variables. Knowing the raw data inside and out, and how they map onto the new latent variables, is important. For example, high scores on PC1 might reflect high soil moisture and high pH; if you don’t know this, interpreting regression coefficients in a meaningful way is going to be impossible. It also leads to annoying statements like ‘PCoA 10 was the most significant predictor’ without any further biological reasoning about what the axis actually represents. Tread with caution, and data dimension reduction can be a really useful thing to do.
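One way to avoid that trap is to look at the loadings before putting the scores into any model. A minimal sketch, continuing from the hypothetical pca and env objects above (the soil moisture/pH reading of PC1 is purely an illustration):

    round(pca$rotation[, 1:2], 2)      # loadings: which variables drive PC1 and PC2?
    round(cor(env, pca$x[, 1:2]), 2)   # correlations of the raw variables with the scores
    # If, say, soil moisture and pH load heavily on PC1, report the PC1 coefficient
    # in those terms rather than leaving the axis unexplained.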

Meng et al: http://bib.oxfordjournals.org/content/early/2016/03/10/bib.bbv108.full

Beale 2006: http://link.springer.com/article/10.1007/s00442-006-0551-8

Random forests: identifying package conflicts in R

I just lost a morning of my life dealing with a strange R problem. As a reader of my blog you may know my love for machine learning and Gradient Forests – it turns out that if you have the univariate version (‘Random Forests’) loaded, your beautiful gradient forest is no longer – just a barren wasteland remains. Excuse the terrible metaphor; basically there is some weird conflict between the two packages that makes Gradient Forests produce the horrifying error: “The response has five or fewer unique values. Are you sure you want to do regression?”. This was made worse by the fact that Gradient Forests was working perfectly yesterday (i.e. I hadn’t loaded Random Forests). Today I found an inconsistency in the data and made some reasonably sized changes, got distracted and ran Random Forests on another piece of data, then came back to analyse my modified data from yesterday and bang – I got the above error, topped off with “The gradient forest is empty”.

The horror – was it the changes I had made to the data? Did I modify the code and forget (I thought my bookkeeping was pretty good…)? What was going on? I then ran the same analysis on the old dataset and got the same error (phew), and eventually, by a process of elimination, worked out that it was a package conflict. I wonder how many collective hours people spend diagnosing problems like this in science? Millions, I suspect. Anyway, I guess I learnt something (?) and will check for this type of issue more frequently.
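For what it’s worth, here is a minimal sketch of the checks I’ll now run when a previously working analysis suddenly misbehaves; the package and function names below are just illustrative, taken from this example.

    conflicts(detail = TRUE)   # every masked object, grouped by the package that masks it
    search()                   # attached packages, in masking order
    find("importance")         # which attached packages define a given function (example name)

    # Detach the offending package (and unload its namespace), then re-run
    detach("package:randomForest", unload = TRUE)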

Gradient Forests: http://gradientforest.r-forge.r-project.org/biodiversity-survey.pdf

Useful Bayesian explanations

I’ve been going over Bayesian analysis principles recently and trying to work out ways to explain them clearly (and teach them).

Here are my top five links that do a reasonable job of explaining Bayesian principles (in no particular order):

  1. Count Bayesie with Lego: https://www.countbayesie.com/blog/2015/2/18/bayes-theorem-with-lego
  2. Julia Galef has some good explanations: https://www.youtube.com/watch?v=BrK7X_XlGB8
  3. Aaron Ellison’s paper on Bayesian inference in ecology: http://www.uvm.edu/~bbeckage/Teaching/DataAnalysis/AssignedPapers/Ellison_2004.pdf
  4. The Etz Files on Bayes’ theorem.
  5. On a lighter note you can’t beat xkcd:

Frequentists vs. Bayesians

Possibly the greatest quantitative methods lecturer ever

I’ve been a fan of Professor Andy Field for a long time – his books are great, but his lectures on statistics are the best! He provides his lectures for free on YouTube, which is very nice of him (and presumably a good way to sell books). This lecture is particularly brilliant: https://www.youtube.com/watch?v=u5JnILqlX9w&index=12&list=PL343F1B5F55734D55 – a lecture on mixed-design ANOVA unlike any other.