Papers that just report AICs of statistical models bug me. The AIC (Akaike information criterion) and similar metrics (BIC, etc.) are great ways to compare statistical models – but they are pretty much meaningless when reported by themselves. Particularly when just one model is presented! Yes, I'm sure the authors selected the most parsimonious model with the lowest AIC, but they provide no detail about how much variation the model actually explains. You can have a 'low' AIC, for example, yet the model may only explain a small fraction of the variation in the data.
Calculating effect size for mixed models is difficult (see the article attached below), and until recently there was little consensus among statisticians about how to do it. For these reasons, many biologists simply avoided reporting or calculating R2 for mixed models. This must drive people trying to do meta-analyses crazy! Now there is no excuse, with plenty of R packages available to help calculate pseudo R2. One R package I like is MuMIn (multi-model inference) by Kamil Barton: https://cran.r-project.org/web/packages/MuMIn/MuMIn.pdf. This package is easy to use and seems to be a reasonably robust way to calculate pseudo R2 for mixed models generated in lme4. It also has lots of other very useful tools, such as AICc calculation. Hopefully, in the future we'll see fewer papers getting through review without reporting this really useful statistic.
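As a quick sketch of the workflow (assuming lme4 and MuMIn are installed; the built-in sleepstudy data and model formula are just an illustration, not anything from a real analysis):

```r
library(lme4)   # fit the mixed model
library(MuMIn)  # pseudo R2 and AICc

# Example mixed model: reaction time vs. days of sleep deprivation,
# with random slopes and intercepts by subject
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Marginal R2 (fixed effects only) and conditional R2
# (fixed + random effects), following Nakagawa & Schielzeth
r.squaredGLMM(m)

# Report AICc alongside the R2 values, rather than AIC alone
AICc(m)
```

Reporting both the marginal and conditional R2 makes it clear how much variation is explained by the fixed effects versus the whole model, which is exactly the detail that an AIC on its own leaves out.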
Nakagawa & Schielzeth (2013): http://onlinelibrary.wiley.com/doi/10.1111/j.2041-210x.2012.00261.x/full.