Submitting manuscripts: Excellent advice from Mike Kaspari

Here is some excellent advice about submitting manuscripts from Mike Kaspari:

What journal gets the first peek at your manuscript? Results from a year of ruminating.

I don’t agree with point 3 about shopping a manuscript around – I think some manuscripts are strengthened by peer review. Nonetheless, sage advice. His benchmarks are great – currently I’m spending 100% of my time revising manuscripts and it is driving me insane… 30% is an excellent goal.

Random forests: identifying package conflicts in R

I just lost a morning of my life to a strange R problem. As a reader of my blog you may know my love for machine learning and Gradient Forests – it turns out that if you also have the univariate version loaded (randomForest), your beautiful gradient forest is no longer – just a barren wasteland remains. Excuse the terrible metaphor; basically there is some weird conflict between the two packages that makes gradientForest produce the horrifying error: “The response has five or fewer unique values. Are you sure you want to do regression?”. This was made worse by the fact that gradientForest was working perfectly yesterday (i.e. I hadn’t loaded randomForest). Today I found an inconsistency in the data and made some reasonably sized changes, got distracted and ran randomForest on another piece of data, then came back to analyse the modified data from yesterday – and bang, I got the above error, topped off with a second error: “The gradient forest is empty”.

The horror – was it the changes I made to the data? Did I modify the code and forget (I thought my book-keeping was pretty good…)? What was going on? I then ran the same analysis on the old dataset and got the same error (phew), and eventually, by process of elimination, worked out that it was a package conflict. I wonder how many collective hours people spend diagnosing problems like this in science? Millions, I suspect. Anyway, I guess I learnt something (?) and will check for this type of issue more frequently.
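For what it’s worth, here is roughly the check I now run before fitting a gradient forest. This is a hedged sketch: the search()/conflicts()/detach() calls are standard base R, but the commented-out model call uses made-up object names, so treat it as a template rather than working code.

# See what is attached and what is masking what
search()
conflicts(detail = TRUE)

# If randomForest is attached, detach it before loading gradientForest.
# (gradientForest builds on extendedForest, a modified fork of randomForest,
# which appears to be where the clash comes from.)
if ("package:randomForest" %in% search()) {
  detach("package:randomForest", unload = TRUE)
}

library(gradientForest)

# Hypothetical call with made-up objects (env_vars, spp_matrix):
# gf <- gradientForest(cbind(env_vars, spp_matrix),
#                      predictor.vars = colnames(env_vars),
#                      response.vars  = colnames(spp_matrix))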

Gradient Forests: http://gradientforest.r-forge.r-project.org/biodiversity-survey.pdf

An interesting read about what reviewers want

The Journal of Animal Ecology blog recently put out an interesting post about what reviewers want (see the link below). It is particularly interesting that so many respondents to the survey thought a major shake-up was needed (74%) – I couldn’t agree more. I also found the idea of training to be a peer reviewer interesting – it should probably be mandatory. No surprises that people reviewing for high-ranking journals are more likely to accept manuscripts and spend more time on them. I also find it strange that scientists think being paid to review articles is a weird idea – why should publishers simply profit off the authors and the readers without giving any of it back to the community? I guess that, unfortunately, paying reviewers would probably just lead to increased publication costs, which would be annoying.

https://journalofanimalecology.wordpress.com/2016/09/22/what-do-reviewers-want/

How many statistical analysis approaches do you use regularly?

Whilst deciphering a really cool R package called GDM (see below), I started wondering: how many different statistical approaches and techniques have I read about, deciphered and applied in the last two years? What’s a usual number of techniques for people to use reasonably regularly? My list currently sits at approximately 30 – but I am a postdoc who spends basically all of my time analysing data from a diverse range of systems with an equally wide variety of data types, so perhaps that’s normal?

The first place I looked was my R package list, and I quickly realized that there were quite a few. I excluded ‘bread and butter’ GLM-type analyses and their Bayesian equivalents (e.g., ANOVA and GLMMs) and basic ordination techniques (e.g., PCoA, NMDS). I also haven’t included techniques for calculating the various aspects of diversity, or sequence alignment algorithms, as the list would just keep going. As I deal with species distribution data, distances and (dis)similarities quite often, there is obviously a bias towards distance-based techniques (see below), with a mixture of spatial, epidemiological and phylogenetic approaches.

I’m too lazy to add citations and descriptions for each one – but they are all easy to find on Google, or email me if you are interested. If there is anything else that I should know about and use to answer disease/phylogenetic community ecology type questions, please make suggestions. A quick sketch of what a couple of the distance-based ones look like in code is included after the list.

In no particular order:

Permutation-based ANOVA (PERMANOVA)
Permutation-based tests for homogeneity of dispersion (PERMDISP)
Canonical analysis of principal coordinates (CAP)
dbMEMs
Generalized dissimilarity modelling (GDM)
Distance-based linear modelling (distLM)
Multiple matrix regression (MRM)
Network-based linear models (netLM)
Gradient Forests
Random Forests
Cluster analysis
SYNCSA analysis
Fourth corner analysis
RLQ tests
Mantel tests
Moran’s I tests (phylogenetic and spatial)
Phylogenetic GLMMs
Everything in the R package Picante
Ecophylogenetic regression (Pez)
Dynamic assembly model of colonization, local extinction and speciation (DAMOCLES)
Dynamical assembly of islands by speciation, immigration and extinction (DAISIE)
All sorts of ancestral state reconstruction approaches
Numerous Bayesian evolutionary analysis sampling trees (BEAST) methods
Numerous phytools methods
Environmental rasters and phylogenetically informed movement (SERAPHIM)
SaTScan
Circuitscape
Point-time pattern analysis
Kriging
epitools risk analysis
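Since so much of that list is distance-based, here is the minimal sketch promised above of what two of those approaches (PERMANOVA/PERMDISP and a Mantel test) look like in the vegan package. The object names (comm, env, space_dist) and covariates (habitat, rainfall) are hypothetical stand-ins for your own data.

library(vegan)

# comm: site-by-species abundance matrix (hypothetical)
# env: data frame of site covariates, e.g. habitat and rainfall (hypothetical)
# space_dist: pairwise geographic distance matrix between sites (hypothetical)
comm_dist <- vegdist(comm, method = "bray")    # Bray-Curtis dissimilarities

# PERMANOVA: does community dissimilarity relate to habitat and rainfall?
adonis2(comm_dist ~ habitat + rainfall, data = env, permutations = 999)

# PERMDISP: homogeneity of multivariate dispersion among habitat groups
permutest(betadisper(comm_dist, env$habitat))

# Mantel test: correlation between community and geographic distance matrices
mantel(comm_dist, space_dist, permutations = 999)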

Link to GDM: https://cran.r-project.org/web/packages/gdm/vignettes/gdmVignette.pdf
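And for completeness, the basic GDM workflow as I understand it from that vignette – a rough sketch with hypothetical data frames (spp: a long-format site/species/coordinate table; env: one row of predictors per site), so check the argument names against the vignette before relying on it.

library(gdm)

# spp: long-format table with columns site, species, Long, Lat (hypothetical names)
# env: one row per site, with a site column, coordinates and the predictors (hypothetical)
site_pairs <- formatsitepair(bioData = spp, bioFormat = 2,
                             XColumn = "Long", YColumn = "Lat",
                             sppColumn = "species", siteColumn = "site",
                             predData = env)

# Fit the GDM, with geographic distance included as a predictor
fit <- gdm(site_pairs, geo = TRUE)

summary(fit)
plot(fit)    # I-spline partial plots for each predictor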