Today, colleagues from around the globe and I published a paper in Nature Communications titled “Biodiversity enhances ecosystem multifunctionality across trophic levels and habitats.” The paper is an important step forward in connecting biological diversity — the variety of organisms living in an ecosystem — to the myriad processes operating in natural, functioning ecosystems. It’s worth digging into this analysis a bit, and explaining why it’s important.
Being an ecologist is all about the trade-off between effort on the one hand, and time and money on the other. Given infinite amounts of both, we would undoubtedly sample the heck out of nature. But it would be an exercise in diminishing returns: after a certain amount of sampling, we would simply be turning over stones that have already been turned. Thus, ecology is a balancing act of effort: too little, and we have no real insight. Too much, and we’ve wasted a lot of time and money.
I’m always looking for ways to improve my balance, which is why I was interested to see a new paper in Ecology Letters called “Measures of precision for dissimilarity-based multivariate analysis of ecological communities” by Marti Anderson and Julia Santana-Garcon.
In a nutshell, the paper introduces a method for “assessing sample-size adequacy in studies of ecological communities.” Put slightly differently, the authors have devised a technique for determining when additional sampling does not really improve one’s ability to describe whole communities — both the number of species and their relative abundances. Perfect for evaluating when enough is enough, and adjusting the outlay of time and money!
In this post, I dig into this technique, show its applications using an example, and introduce a new R function to assess multivariate precision quickly and easily.
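To give a flavor of the logic before diving in: the paper’s measure treats the total sum of squared dissimilarities among samples as a multivariate analogue of variance, from which a pseudo standard error can be computed and tracked as sample size grows. Below is a minimal base-R sketch of that idea using a simulated community matrix and a hand-rolled Bray-Curtis dissimilarity — the data and the `multSE` name here are illustrative, not the function from the post or the paper’s exact implementation.

```r
# Toy community matrix: 8 samples (rows) x 4 species (columns)
set.seed(1)
comm <- matrix(rpois(32, lambda = 3), nrow = 8)

# Bray-Curtis dissimilarity between two abundance vectors
bray <- function(x, y) sum(abs(x - y)) / sum(x + y)

# Multivariate pseudo-SE, following the general logic of the paper:
# the sum of squared dissimilarities divided by n gives a multivariate SS,
# V = SS / (n - 1) acts as a pseudo-variance, and sqrt(V / n) as a pseudo-SE
multSE <- function(comm) {
  n <- nrow(comm)
  ss <- 0
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) ss <- ss + bray(comm[i, ], comm[j, ])^2
  }
  V <- (ss / n) / (n - 1)
  sqrt(V / n)
}

# As more samples are included, the pseudo-SE should shrink and level off;
# the point where it flattens suggests additional sampling buys little
sapply(c(4, 6, 8), function(k) multSE(comm[1:k, , drop = FALSE]))
```

In practice one would resample repeatedly at each sample size to put an interval around these values, which is where the method earns its keep.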
Recently, I was exploring techniques to interpolate some missing environmental data, and stumbled across something called ‘random forest’ analysis. Random what now? I did a little digging and came across the massive and insanely complicated field of machine learning. I couldn’t find a concise guide to machine learning techniques, or when I might want to use one or the other, so I thought I would cobble together a brief guide on my own. Below is a rough stab at explaining and exploring different machine learning techniques, from CARTs to GBMs, using R.
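To make the interpolation use-case concrete: a single regression tree (a CART, the building block that random forests average over many bootstrap samples) can be fit to the complete cases and used to predict the missing values. Here is a minimal sketch using the `rpart` package, which ships with standard R installations; the environmental variables and simulated data are purely illustrative.

```r
library(rpart)  # CART implementation; included with standard R installs

# Simulated environmental data with some temperature values missing
set.seed(42)
n <- 200
env <- data.frame(
  depth    = runif(n, 1, 50),
  salinity = runif(n, 20, 35)
)
env$temp <- 25 - 0.3 * env$depth + 0.1 * env$salinity + rnorm(n)
missing <- sample(n, 40)
env$temp[missing] <- NA

# Fit a regression tree on the complete cases...
fit <- rpart(temp ~ depth + salinity, data = env[!is.na(env$temp), ])

# ...then use it to interpolate the missing values
env$temp[missing] <- predict(fit, newdata = env[missing, ])
```

A random forest would repeat this with many trees on bootstrapped rows and random subsets of predictors, trading a single interpretable tree for better predictive accuracy.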
In our new paper just published online early in Oikos, we synthesise our current understanding of the functional consequences of changes in species richness in the marine realm. For those familiar with the field of biodiversity and ecosystem functioning, the first question might well be: do we really need yet another meta-analysis on this topic? I mean, really. There have been several meta-analyses published in recent years. Do we really need this work?
Well, our answer to the question is yes. Here’s why.
(This post is written to highlight a recent experiment which is now hosted as a preprint in PeerJ Preprints. You can find that paper here. Check it out and provide some open peer review!)
We live in an era of widespread human-driven extinction: the Anthropocene. It’s a fact that many more species are being lost now than at any point in recorded history, and the future is grim. We are forecasted to lose 6,300% more species by 2100 than we have lost in the last 66 million years (based on evidence from the fossil record). So naturally there has been intense interest in cataloging the world’s biodiversity, and conducting experiments to understand the consequences of losing all these species. But are we looking at the right metric of diversity?
Nature is complex. This seems like an obvious statement, but too often we reduce it to straightforward models: y ~ x and that sort of thing. Not that there’s anything wrong with that: sometimes y actually is directly a function of x, and anything else would be, in the words of Brian McGill, ‘statistical machismo.’ But I would wager that, more often than not, y is not directly a function of x. Rather, y may be affected by a host of direct and indirect factors, which themselves affect one another directly and indirectly. If only there were some way to translate this network of interacting factors into a statistical framework to better and more realistically understand nature. Oh wait: structural equation modeling.
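The skeleton of the piecewise approach is just a set of linked regressions: each variable with arrows pointing into it gets its own model, and indirect effects fall out as products of path coefficients. A minimal sketch with simulated data (variable names and effect sizes are made up for illustration; this is the conceptual core, not the piecewiseSEM package itself):

```r
# A minimal path model: x -> m -> y, plus a direct x -> y path,
# expressed as two linked regressions (the piecewise approach)
set.seed(7)
n <- 100
x <- rnorm(n)
m <- 0.8 * x + rnorm(n)            # x affects the mediator m
y <- 0.5 * m + 0.2 * x + rnorm(n)  # y responds to both m and x

fit_m <- lm(m ~ x)      # one regression per endogenous variable
fit_y <- lm(y ~ m + x)

# Indirect effect of x on y via m = product of the path coefficients;
# the total effect is the direct path plus the indirect one
indirect <- coef(fit_m)["x"] * coef(fit_y)["m"]
direct   <- coef(fit_y)["x"]
c(direct = direct, indirect = indirect, total = direct + indirect)
```

A full SEM adds tests of whether the paths you left out really can be omitted (e.g., Shipley’s d-separation tests), which is what distinguishes it from simply running the regressions separately.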
[Updated March 10, 2016: You can find more materials relating to SEM, including lectures, example analyses, and R code here: https://jonlefcheck.net/teaching/].
[Updated October 13, 2015: Active development has moved to my piecewiseSEM package on GitHub, so please see the link for the latest versions of all functions.]
I’ve been suspiciously quiet as of late — working on finishing up my dissertation — but in doing so have come up with a slew of new blog posts that should begin to trickle out shortly. For those who follow the blog, I wanted to put out a short post letting you know what to expect:
-an update to my post on R² for linear mixed effects models (already live)
-an introduction to piecewise structural equation modeling (SEM), including a new function for automating Shipley’s tests of d-separation. The function is already hosted on GitHub
-how to download, align, and concatenate gene sequence data, and how to construct phylogenetic trees using maximum likelihood and Bayesian inference, all through R
Looking forward to getting these posts out there!