‘Amusing’ reviewer comments

Having a thick skin and the ability to shrug off harsh, sometimes personal, criticism is an often unrecognized trait of a scientist. You put your work out there to the world and get feedback from often anonymous peers (though this is slowly changing). The system usually works pretty well, and 99% of the time it makes the paper better. When the comments are highly critical, you go through a mini five stages of grief, but you always come around and the paper improves. I’ve definitely had my fair share of critical feedback; one of my recent favorites was a reviewer suggesting that my literature review “hadn’t gone beyond the literature” (?). However, none have come close to the comments that this author received:


There are so many good lines, but this one is the best: “This paper has merit and no errors, but I do not like it …”

Pleasingly, it still got published in the journal anyway!


Interesting May edition of Animal Ecology

The May issue of the Journal of Animal Ecology is pretty much essential reading for anyone interested in disease ecology (particularly those using network approaches). Springer et al.’s paper on dynamic networks and Cryptosporidium spread is particularly interesting. I really like that they incorporated different transmission modes into their dynamic network model, which reflects the reality of many host–parasite systems, and that they used both empirically derived networks and simulated models. The comparison between static and dynamic models wasn’t particularly exciting (it seemed obvious that dynamic models were always going to lead to bigger outbreaks), but it is nonetheless really interesting work.
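The static-versus-dynamic intuition is easy to demonstrate with a toy model. This is not the authors’ model (they used empirically derived meerkat-style networks); it’s just a minimal sketch with arbitrary parameter values: SIR spread on a random contact network whose edges are either fixed once or redrawn every time step.

```python
import random

def random_edges(n, k, rng):
    """Draw k random undirected contact pairs among n hosts."""
    return [tuple(rng.sample(range(n), 2)) for _ in range(k)]

def outbreak_size(n=60, k=90, beta=0.3, gamma=0.1, steps=200, dynamic=True, seed=42):
    """Final SIR outbreak size on a contact network.

    If dynamic=True, contacts are redrawn every time step;
    otherwise one fixed (static) contact network is used throughout.
    All parameter values here are illustrative, not from any real system.
    """
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"  # one initial infectious host
    static = random_edges(n, k, rng)
    for _ in range(steps):
        edges = random_edges(n, k, rng) if dynamic else static
        newly_infected = set()
        for a, b in edges:
            for u, v in ((a, b), (b, a)):
                if state[u] == "I" and state[v] == "S" and rng.random() < beta:
                    newly_infected.add(v)
        for i in range(n):  # recovery
            if state[i] == "I" and rng.random() < gamma:
                state[i] = "R"
        for v in newly_infected:
            state[v] = "I"
    return sum(s != "S" for s in state)  # everyone ever infected
```

Redrawing contacts lets infection escape the local network structure it would otherwise be trapped in, which is the basic reason dynamic representations tend to produce larger outbreaks than a static snapshot.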

The study by Patterson et al. on tuberculosis in meerkats was also really cool. Combining social and environmental predictors to understand TB risk in the Kalahari was interesting, and it’s something I’m trying to do with the Serengeti lions. They should have used machine learning though!

The community ecology section is full of interesting papers as well – hopefully I’ll get around to reading them soon.



Excellent series of blog articles about data science

I just found this excellent series of articles by John Mount. The explanations he gives are really intuitive, and there is really useful R code to recreate the figures. A must-read if you are getting into data science.



Demystifying the BEAST (Part I)

If you were like me when you first opened the BEAST phylogenetics software, or more specifically the BEAUti GUI, you were immediately impressed but a little overwhelmed by the number of options for reconstruction and phylogenetic analysis more broadly. It’s an amazingly powerful, accessible and free software package, and with the extra utilities such as FigTree and SPREAD (I’ll talk about these in future posts) you really can’t beat BEAST. The tutorials (see below) provide step-by-step instructions that mean basically anyone can run BEAST. Furthermore, if you have any technical problems, they are often quickly resolved via the Google group (https://groups.google.com/forum/#!forum/beast-users).

Despite this, understanding the numerous decisions that have to be made between sequence alignment and final product is the real challenge. These decisions can make real differences to the inferences you draw, so they are critical to get right. One weakness of the tutorials is that they tell you how, but don’t give enough detail about why you’d make a particular decision (e.g., why one tree prior over another?). The aim of the following posts is to demystify this process a little and direct you to useful resources.

The plan is to go tab by tab through BEAUti, and I will assume that you know how to import your data and set dates/traits (all of which can easily be learned from the tutorials).


Part 2 –  Sites

Part 3 – Clocks

Part 4 – Trees

Part 5 – States, Priors and Operators

Part 6 – Running the whole thing and model selection

In the meantime, if you haven’t already, download BEAST 1.8.4 and go through the tutorials: http://beast.bio.ed.ac.uk/tutorials.



NSF vs ARC: A Postdoc’s Perspective on American and Australian Research Funding

My first NSF DEB pre-proposal (or any ‘big’ grant, for that matter) is submitted … hooray! It’s nice to regain the head-space to think about something else, for a while at least. Even as a co-PI on a pre-proposal, the process was a tad stressful. To tell you the truth, though, I actually enjoyed it. Maybe because the thinking was in the future tense rather than the past (i.e., I was thinking about future research rather than analyzing and writing about great data of the past)? Partly perhaps, but I also enjoyed the fact that Meggan and I worked together nicely, and with people across the world, to create a five-page document that sold what we think, at least, is a cool and novel idea. I read it and want to actually do it – I hope the reviewers/panel agree!

If you think about it logically, though, the process looks absurd: putting so much time and effort into something with an 8% chance of success is clearly insane (see the NSF blog below for trends). And this success rate is pre-Trump! I thought things were bad in Australia, but this actually makes the Australian Research Council (ARC) equivalent grants (Discovery or DECRA) seem like a ‘good’ bet, with success rates of around 17% (see below). I wonder where the cutoff is – at what success rate will researchers stop bothering to submit anything? Or is even 1% success worth the effort considering the reward? This situation is clearly stressful for faculty, but for postdocs like myself, who rely on this type of funding to ‘make a name’ and to get a gig (read: tenure-track position or another postdoc), it’s nearly too much. Nonetheless, I somehow push it to the back of my brain and continue to do what I enjoy doing (and hope is of some use to society). Should we move to NZ, Canada, Europe or Asia? Any perspective on these countries/continents would be great.

Even if we don’t get funded, which is highly likely, we can no doubt use these ideas in other grants. Fingers crossed, of course! It has been an excellent learning experience and I’ve had fun helping craft the pre-proposal. There are excellent resources out there that have helped enormously and that I feel are valuable for grant writing in general (the NSF DEB blog and Mike Kaspari’s blog below, for example). Hopefully, one day things will get better and less of the collective grant-writing effort will be wasted.

DEB Numbers: FY 2016 Wrap-Up


On writing a strong NSF pre-proposal

Guide to reducing data dimensions

In a world where collecting enormous amounts of complex and collinear data is increasingly the norm, techniques that reduce data dimensions to something usable in statistical models are essential. However, in ecology at least, the means of doing this are unclear and the information out there is confusing. Earlier this year Meng et al. provided a nice overview of what’s available (see link below), specifically for reducing omics data sets, but it is equally relevant for ecologists. One weakness of the paper is that it provides only a small amount of practical advice, particularly on how to interpret the resulting dimension-reduced data. Overall, though, it is an excellent guide, and I aim to add a bit of extra practical advice on dimension reduction using the techniques that I use.

Anyway, before going forward – what do we mean by dimension reduction? Paraphrasing Meng et al.: dimension reduction is the mapping of data to a lower-dimensional space such that redundant variance in the data is reduced, allowing a lower-dimensional representation (say 2–3 dimensions, but sometimes many more) without significant loss of information. My philosophy is to use the data in raw form wherever possible, but where this is problematic due to collinearity etc., and where machine learning algorithms such as random forests are not appropriate (e.g., your response variable is a matrix), here is my brief practical guide to three common techniques:

PCA: dependable principal components analysis – excellent if you have lots of correlated continuous predictor variables with few missing values and zeros. There is a large number of PCA-based analyses that may be useful (e.g., penalized PCA for feature selection; see Meng et al.), but I’ve never used them. Choosing the right number of PCs is subjective and is a problem for lots of these techniques; an arbitrary cutoff of selecting PCs that account for ~70% of the original variation seems reasonable. However, if your PCs each explain only a small amount of variation, you have a problem, as fitting a large number of PCs to a regression model is usually not an option (and if PC25 is a significant predictor, what does that even mean biologically?). Furthermore, if there are non-linear trends, this technique won’t be useful.
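To make the “variance explained” cutoff concrete, here is a minimal pure-Python sketch of PCA for just two variables, using the closed-form eigenvalues of the 2×2 covariance matrix (in practice you’d use prcomp in R or similar; the function name pc1_share is my own invention):

```python
import math

def pc1_share(xs, ys):
    """Proportion of total variance captured by PC1 for two variables,
    via closed-form eigenvalues of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)  # variance of x
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)  # variance of y
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)  # covariance
    # eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant
    trace, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(trace ** 2 / 4 - det, 0.0))
    l1, l2 = trace / 2 + disc, trace / 2 - disc
    return l1 / (l1 + l2)

pc1_share([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly correlated: PC1 captures ~100%
pc1_share([1, 2, 1, 2], [1, 1, 2, 2])  # uncorrelated, equal variances: 50%
```

With strongly correlated predictors PC1 does most of the work and a regression on one or two PCs is defensible; as the share per PC falls, the “which PC means what?” problem above kicks in.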

PCoA: principal coordinate analysis, or classical scaling, is similar to PCA but is used on distance matrices. It has the same limitations as PCA.

NMDS: non-metric multidimensional scaling is a much more flexible method that copes much better with non-linear trends in the data. Rather than finding the axes that best represent variation in the data, as PCA and PCoA do, NMDS tries to best preserve the distances between objects (using a ranking system). This means that NMDS also captures variation in a few dimensions (often 2–3), though it is important to assess the fit by checking the ‘stress’ (values below 0.1 are usually OK). There is debate about how useful the axis scores are (see here: https://stat.ethz.ch/pipermail/r-sig-ecology/2016-January/005246.html), as they are rank-based and axis 1 does not explain the largest amount of variation, and so forth, as is the case with PCA/PCoA. However, I still think this is a useful strategy (see the Beale 2006 link below).
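The stress statistic itself is simple. Here is a sketch of Kruskal’s stress-1 – simplified, because real NMDS compares fitted distances to monotonically regressed ‘disparities’ rather than the raw dissimilarities used here, and the function name is my own:

```python
import math

def stress1(dissim, config):
    """Kruskal's stress-1: how badly ordination distances mismatch
    the observed dissimilarities (0 = perfect fit).

    dissim: dict mapping a pair (i, j) to its observed dissimilarity.
    config: list of 2-D coordinates, one per object.
    """
    num = den = 0.0
    for (i, j), d_obs in dissim.items():
        d_fit = math.dist(config[i], config[j])  # distance in the ordination
        num += (d_fit - d_obs) ** 2
        den += d_fit ** 2
    return math.sqrt(num / den)

d = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
stress1(d, [(0, 0), (3, 0), (0, 4)])  # configuration reproduces d exactly -> 0.0
stress1(d, [(0, 0), (1, 0), (0, 1)])  # badly squashed configuration -> large stress
```

This is why the rule of thumb works: stress near zero means the low-dimensional map faithfully preserves the (ranked) dissimilarity structure, while high stress means the 2–3 axes are distorting it.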

I stress (no pun intended!) that the biggest problem with these techniques is the interpretation of the new variables. Knowing the raw data inside and out, and how they map onto the new latent variables, is important. For example, high loadings on PC1 might reflect high soil moisture and high pH; if you don’t know this, interpreting regression coefficients in a meaningful way is going to be impossible. It also leads to annoying statements like ‘PCoA 10 was the most significant predictor’ without any further biological reasoning about what the axis actually represents. Tread with caution, and data dimension reduction can be a really useful thing to do.

Meng et al: http://bib.oxfordjournals.org/content/early/2016/03/10/bib.bbv108.full

Beale 2006: http://link.springer.com/article/10.1007/s00442-006-0551-8

Guide to functional redundancy

Functional redundancy has always been a problematic buzzword in ecology. People (including myself) liked to use it and intuitively got it, though there was such a variety of techniques and approaches for calculating it that comparing across studies was impossible. I’ve previously used de Bello et al.’s 2007 functional redundancy measure, but a broader framework was lacking.

Carlo Ricotta and colleagues have started to fill that gap with a reasonably coherent framework in the latest issue of Methods in Ecology and Evolution (see link below). It is a ripper of an issue, by the way – I could write several more posts on it if I had the time. One noticeable thing missing is appropriate null models to test whether there is more redundancy in a community than expected by chance. Unless I’ve missed something – anyone want to write a short paper?
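For what it’s worth, here is one plausible sketch of such a null model – my own construction, not from Ricotta et al. It takes redundancy as Simpson diversity minus Rao’s quadratic entropy (a de Bello-style measure) and compares the observed value against communities in which species’ positions in the trait distance matrix are shuffled:

```python
import random

def simpson(abund):
    """Simpson diversity: chance two random individuals are different species."""
    total = sum(abund)
    return 1 - sum((a / total) ** 2 for a in abund)

def rao_q(abund, dist):
    """Rao's quadratic entropy: expected trait distance between two random individuals."""
    total = sum(abund)
    p = [a / total for a in abund]
    n = len(p)
    return sum(p[i] * p[j] * dist[i][j] for i in range(n) for j in range(n))

def redundancy_null(abund, dist, n_perm=999, seed=1):
    """Observed redundancy (Simpson - Rao) plus a permutation p-value
    obtained by shuffling species' rows/columns of the trait distance matrix."""
    rng = random.Random(seed)
    observed = simpson(abund) - rao_q(abund, dist)
    idx = list(range(len(abund)))
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(idx)
        perm = [[dist[idx[i]][idx[j]] for j in range(len(idx))] for i in range(len(idx))]
        if simpson(abund) - rao_q(abund, perm) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)
```

The intuition: if all species are functionally identical (trait distances of zero), redundancy equals Simpson diversity; the permutation then asks whether the observed community is more redundant than random assemblages built from the same trait distances and abundances.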


Submitting manuscripts: Excellent advice from Mike Kaspari

Here is some excellent advice about submitting manuscripts from Mike Kaspari:

What journal gets the first peek at your manuscript? Results from a year of ruminating.

I don’t agree with point 3 about shopping a manuscript around – I think some manuscripts are strengthened post peer review. Nonetheless, sage advice. His benchmarks are great: currently I’m spending 100% of my time revising manuscripts and it is driving me insane… 30% is an excellent goal.