Piecewise structural equation modeling in ecological research

Nature is complex. This seems like an obvious statement, but too often we reduce it to straightforward models. y ~ x and that sort of thing. Not that there’s anything wrong with that: sometimes y is actually directly a function of x and anything else would be, in the words of Brian McGill, ‘statistical machismo.’

But I would wager that, more often than not, y is not directly a function of x. Rather, y may be affected by a host of direct and indirect factors, which themselves affect one another directly and indirectly. If only there were some way to translate this network of interacting factors into a statistical framework to better and more realistically understand nature. Oh wait: structural equation modeling.

[Updated March 10, 2016: You can find more materials relating to SEM, including lectures, example analyses, and R code here: https://jonlefcheck.net/teaching/].

[Updated October 13, 2015: Active development has moved to my piecewiseSEM package on Github, so please see the link for the latest versions of all functions.]

What is structural equation modeling?

Structural equation modeling, or SEM for short, is a statistical framework that, ‘unites two or more structural models to model multivariate relationships,’ in the words of Jim Grace. Here, multivariate relationships refers to the sum of direct and indirect interactions among variables. In practice, it essentially strings together a series of linear relationships.

These relationships can be represented using a path diagram with arrows denoting which variables are influencing (and influenced by) other variables. Take a very simple path diagram:



[Path diagram: X1 → Y ← X2]

Which is described by the equation: Y ~ X1 + X2.

Now consider a different arrangement of the same variables:


[Path diagram: X1 → X2 → Y]

Which is now described by two equations: X2 ~ X1 and Y ~ X2. Together, these two equations make up the SEM.
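As a quick sketch in R (with simulated data standing in for real measurements), this second SEM is literally just two linear models:

```r
# Simulated data standing in for real measurements
set.seed(1)
dat <- data.frame(X1 = rnorm(100))
dat$X2 <- 0.5 * dat$X1 + rnorm(100)  # X1 directly affects X2
dat$Y  <- 0.5 * dat$X2 + rnorm(100)  # X2 directly affects Y

# The two structured equations that together make up the SEM
mod1 <- lm(X2 ~ X1, data = dat)
mod2 <- lm(Y ~ X2, data = dat)
```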

Assumptions of SEM

There are a few important points to make:

(1) As I pointed out, I’m talking about linear relationships. There are ways to integrate non-linear relationships into SEM (see Cardinale et al. 2009 for an excellent example), but for the most part, we’re dealing with straight lines.

(2) Second, we assume that the relationships above are causal. By causal, I mean that we have structured our models such that we assume X1 directly affects Y, and not the other way around. This is a big leap from more traditional statistics, where the phrase ‘correlation does not imply causation’ is drilled into our brains. The reason we can infer causal relationships has to do with the underlying philosophy of SEM: SEM is designed principally to test competing hypotheses about complex relationships.

In other words, I might hypothesize that Y is directly influenced by both X1 and X2 , as in the first example above. Alternately, I might suspect that X1 indirectly affects Y through X2 . I might have arrived at these hypotheses through prior experimentation or expert knowledge of the system, or I may have derived them from theory (which itself is supported, in most cases, by experimentation and expert knowledge).

Either way, I have mathematically specified a series of causal assumptions in a causal model (e.g., Y ~ X1 + X2 ) which I can then test with data. Path significance, the goodness-of-fit, and the magnitude of the relationships all then have some bearing on which logical statements can be extracted from the test: X1 does not directly affect Y , etc. For more information on the debate over causation in SEMs, see chapters by Judea Pearl and Kenneth Bollen.

(3) Third, in the second example, we have not estimated a direct effect of X1 on Y (i.e., there is no arrow from X1 to Y , as in the first example). We can, however, estimate the indirect effect as the product of the effect of X1 on X2 and the effect of X2 on Y . Thus, by constructing a causal model, we can identify both direct and indirect effects simultaneously, which could be tremendously useful in quantifying cascading effects.
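As a minimal sketch with made-up standardized coefficients, the indirect effect is just the product of the component paths:

```r
# Made-up standardized path coefficients
b_x1_x2 <- 0.6  # direct effect of X1 on X2
b_x2_y  <- 0.5  # direct effect of X2 on Y

# Indirect effect of X1 on Y via X2 = product of the two paths
indirect <- b_x1_x2 * b_x2_y
indirect  # 0.3
```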

Traditional SEM attempts to reproduce the entire variance-covariance matrix among all variables, given the specified model. The discrepancy between the observed and predicted matrices defines the model’s goodness-of-fit. Parameter estimates are derived that minimize this discrepancy, typically using something like maximum likelihood.

Piecewise SEM

There are, however, a few shortcomings to traditional SEM. First, it assumes that all variables are derived from a normal distribution.

Second, it assumes that all observations are independent: in other words, that there is no underlying structure to the data. These assumptions are often violated in ecological research. Count data, for instance, are better described by a Poisson distribution. We often design experiments that have hierarchical structure (e.g., blocking), or observational studies where certain sets of variables are more related than others (spatially, temporally, etc.).

Finally, traditional SEM requires quite a large sample size: at least (at least!) 5 samples per estimated parameter, but preferably 10 or more. This issue can be particularly problematic if variables are nested, since the effective sample size is then limited to the number of replicates at the highest level of the hierarchy. Such an approach drastically reduces the power of any analysis, not to mention the heartbreak of collapsing painstakingly collected data.

These violations are dealt with regularly by, for instance, fitting generalized linear models to a range of different distributions. We account for hierarchy by nesting variables in a mixed model framework, which also alleviates issues of sample size. However, these techniques are not amenable to traditional SEM.
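For instance, here is a sketch of the kind of model that handles both issues at once: a Poisson-distributed count with a random intercept for experimental block (simulated data and hypothetical variable names):

```r
library(lme4)

# Simulated counts from a blocked design (hypothetical variable names)
set.seed(2)
mydata <- data.frame(
  treatment = rnorm(60),
  block     = factor(rep(1:6, each = 10))
)
mydata$count <- rpois(60, lambda = exp(0.2 * mydata$treatment))

# Non-normal response (Poisson) plus hierarchical structure (a random
# intercept for block) in a single generalized linear mixed model
mod <- glmer(count ~ treatment + (1 | block),
             family = poisson, data = mydata)
```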

These limitations have led to the development of an alternative kind of SEM called piecewise structural equation modeling. Here, paths are estimated in individual models, and then pieced together to construct the causal model. Piecewise SEM is actually a better gateway into SEM because it essentially is running a series of linear models, and ecologists know linear models.

Evaluating Fit in Piecewise SEM: Tests of d-separation

Piecewise SEM is a more flexible and potentially more powerful technique for the reasons outlined above, but it comes with its own set of restrictions. First, estimating goodness-of-fit and comparing models is not straightforward. In traditional SEM, we can easily derive a chi-squared statistic that describes the degree of agreement between the observed and predicted variance-covariance matrices. We can’t do that here, because we’re estimating a separate variance-covariance matrix for each piecewise model.

Enter Bill Shipley, who derived two criteria of model fit for piecewise SEM. The first is his test of d-separation. In essence, this test asks whether any paths are missing from the model, and whether the model would be improved with the inclusion of the missing path(s). To understand this better, we must first define a few terms.

Shipley’s perspective is based on what are known as directed acyclic graphs (DAG), which are essentially path diagrams, as above. Two variables are considered causally dependent if there is an arrow between them. They are causally independent if there is not an arrow between them. Consider the following example:


[Path diagram: X1 → Y1 ← X2, Y1 → Y2]

X1 is causally independent of Y2 because no arrow exists between them.

However, X1 indirectly influences Y2 via Y1 . Thus, X1 is causally independent of Y2 conditional on Y1 . This is an important distinction, because it has implications for the way in which we test whether the missing arrow between X1 and Y2 is important.

A test of directional separation (d-sep) asks whether the causally independent paths are significant while statistically controlling for the variables on which these paths are conditional. To begin, one must list all the pairs of variables with no arrows between them, then list all the other variables that are direct causes of either variable in the pair. These pairs of independence claims, and their conditional variables, form the basis set.

For the example above, the basis set is:

  • X1 and X2
  • X1 and Y2 , conditional on Y1
  • X2 and Y2 , conditional on Y1

The basis set can then be turned into a series of linear models:

  • X1 ~ X2
  • Y2 ~ X1 + Y1
  • Y2 ~ X2 + Y1

As you can see, we include the conditional variable (Y1) as a covariate, but we are interested in the missing relationships: the coefficient on X2 in X1 ~ X2, or the coefficient on X1 in Y2 ~ X1 + Y1. We can run these models and extract the p-value associated with each missing path, all the while controlling for Y1. From these p-values, we can calculate a Fisher’s C statistic using the following equation:

C = -2 * Σ ln(p_i)
which follows a chi-squared distribution with 2k degrees of freedom (where k = the number of pairs in the basis set). Thus, if a chi-squared test is run on the C statistic and P < 0.05, then the model is not a good fit: one or more of the missing paths contains some useful information. Conversely, if P > 0.05, then the model represents the data well, and no important paths are missing.
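As a sketch, with made-up p-values for the three d-sep tests in the basis set above, C and its associated p-value can be computed by hand in R:

```r
# Made-up p-values for the three independence claims in the basis set
p <- c(0.40, 0.60, 0.25)

# Fisher's C statistic: C = -2 * sum(ln(p_i))
C <- -2 * sum(log(p))

# Compare to a chi-squared distribution with 2k degrees of freedom,
# where k is the number of independence claims
k <- length(p)
p.value <- 1 - pchisq(C, df = 2 * k)
# Here p.value > 0.05, so no missing path contains useful information
```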

Evaluating Fit in Piecewise SEM: AIC

SEM is often implemented in a model testing framework (as opposed to a more exploratory analysis). In other words, we are taking a priori models of how we think the world works, and testing them against each other: reversing paths, dropping variables or relationships, and so on. A popular way to compare nested models is the use of the Akaike Information Criterion (AIC). Shipley has recently extended d-sep tests to use AIC. It’s actually quite straightforward:


The formula should look quite familiar if you’ve ever calculated an AIC. In this case, the gobbledy gook in brackets is simply C, above. In this case, however, is not the number of pairs in the basis set (that’s little k, not big K). Rather, K the number of parameters estimated across all models. The additive term can be modified to provide an estimate of AIC corrected for small sample size (AICc).
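A minimal sketch of the arithmetic, with hypothetical values of C, K, and the sample size n:

```r
# Hypothetical values: Fisher's C, the number of estimated parameters K
# (across all component models), and the total sample size n
C <- 11.54
K <- 19
n <- 100

# AIC for the whole piecewise SEM (Shipley 2013)
AIC.psem <- C + 2 * K

# Small-sample corrected version (AICc)
AICc.psem <- C + 2 * K * (n / (n - K - 1))
```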

An Example from Shipley (2009)

In his 2009 paper, Shipley sets forth an example of tree growth and survival among 20 sites varying in latitude using the following DAG:


[Path diagram: lat → DD → Date → Growth → Live]

Happily for our purposes, Shipley has provided the raw data as a supplement, so we can construct, test, and evaluate this SEM. In fact, I’ve written a function that takes a list of models, performs the tests of d-sep, and calculates a chi-squared statistic and an AIC value for the entire model. The function alleviates some of the frustration of having to derive the basis set–an exercise I have never reliably performed by hand–and then craft each model individually, which can be cumbersome with many variables and missing paths. You can find the most up-to-date version of the function on GitHub.

First, we install and load the required package:

library(devtools) # Install from CRAN if you do not already have it
install_github("jslefche/piecewiseSEM")
library(piecewiseSEM)

Then we load the data and build the model set:

# Load required libraries
library(nlme)
library(lme4)

# Load the Shipley (2009) data
data(shipley2009)

# Build the list of structured equations
shipley2009.modlist = list(
  lme(DD ~ lat, random = ~1|site/tree, na.action = na.omit, 
    data = shipley2009),
  lme(Date ~ DD, random = ~1|site/tree, na.action = na.omit, 
    data = shipley2009),
  lme(Growth ~ Date, random = ~1|site/tree, na.action = na.omit, 
    data = shipley2009),
  glmer(Live ~ Growth + (1|site) + (1|tree), 
    family=binomial(link = "logit"), data = shipley2009) )

Finally, we can derive the fit statistics:

sem.fit(shipley2009.modlist, shipley2009)

which reveals that C = 11.54 and P = 0.48, implying that this model is a good fit to the data. If we wanted to compare this model against others, the AIC score is 49.54 and the AICc is 50.08.*

*These values differ from those reported in Shipley (2009) as the result of updates to the R packages for mixed models, and the fact that he did not model survivorship as a binomial outcome, since that functionality is not implemented in the nlme package. I use the lmerTest package as it returns p-values for merMod objects based on the oft-debated Satterthwaite approximation popular in SAS, but not in R.

Shortcomings of Piecewise SEM

There are a few drawbacks to a piecewise approach. First, the entire issue of whether p-values can be meaningfully calculated for mixed models rears its ugly head once again (see the famous post by Doug Bates here). Second, piecewise SEM cannot handle latent variables, which represent unmeasured quantities that are informed by measured indicator variables (akin to using PCA to collapse environmental data, and using the first axis to represent the ‘environment’). Third, we cannot accurately test for d-separation in models with feedbacks (e.g., A -> B -> C -> A). Finally, I have run into issues where tests of d-separation cannot be run because the model is ‘fully identified,’ i.e., there are no missing paths. In such a model, all variables are causally dependent. One can, however, still calculate an AIC value for these models.

Get to it!

Ultimately, I find SEM to be a uniquely powerful tool for thinking about, testing, and predicting complex natural systems. Piecewise SEM is a gentle introduction, since most ecologists are familiar with constructing generalized linear models and basic mixed effects models. Please also let me know if anything I’ve written here is erroneous (including the code on GitHub!). These are complex topics, and I don’t claim to be any kind of expert! I’m afraid I’ve also only scratched the surface. For an excellent introductory resource to SEM, see Jarrett Byrnes’ SEM course website–and take the course if you ever get a chance, it’s fantastic! I hope I’ve done this topic sufficient justice to get you interested, and to try it out. Check out the references below and get started.


Shipley, Bill. “Confirmatory path analysis in a generalized multilevel context.” Ecology 90.2 (2009): 363-368.

Shipley, Bill. “The AIC model selection method applied to path analytic models compared using a d-separation test.” Ecology 94.3 (2013): 560-564.

Shipley, Bill. Cause and correlation in biology: a user’s guide to path analysis, structural equations and causal inference. Cambridge University Press, 2002.

47 thoughts on “Piecewise structural equation modeling in ecological research”

  1. Andrew says:

    Thanks for this really nice introduction to your recently developed package and piecewise SEM! It’s nice to see more user-friendly approaches being developed for Bill Shipley’s d-sep test. Do you know if it is possible to extend this method to use models such as MRMs (multiple regression on distance matrices) that are based on permutation tests of significance? And if so, any chance this could eventually be included in the piecewiseSEM package?

      • Oliver Purschke says:

        Hi Jon,

        I am just interested in the outcome of the discussion above: are you aware of a way to include distance matrices (e.g. functional, phylo.) into SEMs, as you can do in MRMs or Mantel tests? Taking principal coordinates (or axes from a PCA) wouldn’t be a solution for me, as I am interested in doing an SEM based on distances only.

        Any hints are welcome.


      • Jon Lefcheck says:

        Hi Oliver, Drop me an email and we can coordinate. The short answer is: it’s complicated. The long answer is: it’s doable, but may be extraordinarily difficult depending on the complexity of your model. The issue is primarily that the MRM() function in the ecodist package does not return an S3 / S4 model object, which means it does not play nice with piecewiseSEM. Cheers, Jon

  2. Jaime Anaya says:

    Hi Jon, thanks for piecewiseSEM. I have been working with it. However, when I try to use it with my data I get some errors. These errors happen when I use random effects (lme, lmer, or glmer..). When I use the lm() function and omit the random effects everything works fine…

    If I use lmer(), I also get an error: the p-values can not be calculated.
    Any thoughts?

    Do you think I should test for missing paths by hand and use the lmer results?

    I can send you my code if necessary..


    > sem.fit(Zoo_pSEM_randomList, Zoo)
    | 0%
    Error in MEEM(object, conLin, control$niterEM) :
    Singularity in backsolve at level 0, block 1

    • Jon Lefcheck says:

      Hi Jaime, It seems like you have a convergence error. This is, in my experience, far more likely with lme4 (and lmerTest) than with other packages, including the base lm() and lme() from the nlme package. You can try to relax the criteria for convergence using the lmerControl() option in the model list. Alternately, I recommend fitting the model using lme(), unless you have a non-normal response. Shoot me an email if you have any further questions or issues! Cheers, Jon

      • Jaime says:

        Hi Jon, thanks for the answer… I have tried to relax the models and things look better; however, I still have some problems with the random effects… whenever I want to use the random effects, I have to use the same fixed variables (factors from an experiment) in each model… so, what I did was fit lm(X~random.effects) and use the residuals as the response… and switch to lm() instead of lmer or lme. I will keep playing with it and drop you a mail… Thanks a lot…


  3. Amos says:

    Hi Jon, Thanks for this! It seems very suited to my needs and very useful in general. I was trying to implement it and had a few questions:
    1) Is there a way to work around the need for all the separate linear models using the same dataset? My problem is that this unnecessarily reduces my sample size for some paths. I can use some data points “upstream” and not “downstream”, if that makes sense.
    2) Should I use nlme or lme4? I’ve tried both and they both give errors (“invalid formula for groups” for nlme and “lmerTest did not return p-values. Consider using functions in the nlme package.” for lme4).

    Thanks again

    • Jon Lefcheck says:

      Hi Amos (hopefully not the AMOS program!!), Happy to assist.

      (1) I’m not sure I follow. The basis of piecewise SEM is the list of structured equations, each of which is estimated locally and fit to the same dataset. Do you mean that you have hierarchical data, such that values of one variable are repeated over unique values of another?

      (2) I recommend nlme because it has fewer problems with convergence, and therefore greater likelihood of returning P-values with which to construct the d-sep test. Sounds like you have a problem specifying your random structure. Drop me a line and we can get it sorted.



  4. Ellen says:

    Hi Jon,
    I’m using your piecewiseSEM package (it’s great…thanks!). I have a question about interpreting path coefficients when there are different types of glm links being used (binomial, poisson). In your example in the vignette:
    there are both binomial and Gaussian families used. When you do sem.coefs(shipley2009.modlist, shipley2009, standardize = “scale”) it gives the warning
    (## Warning in FUN(X[[i]], …): Reponse ‘ Live ‘ is not modeled to a gaussian
    ## distribution: keeping response on original scale.
    I’m a little unclear what this warning means–is it still scaling all models (Gaussian and binomial)? I just want to check that I can compare the path coefficients from this scaled output. Thanks!

    • Jon Lefcheck says:

      Hi Ellen,

      Thanks for the kind words! The warning is meant to imply that centering and scaling the response would no longer make it binomial- or Poisson-distributed (i.e., it has gone from 0-1 to something altogether different). The d-sep model would then kick back an error saying the variable is not binomially, etc. distributed. There are two ways around this: understanding that the standardized predictors are in units of standard deviations, so that a 1 sd change in predictor X results in a proportional change in the response on the link scale (e.g., the log scale, if you were modeling a Poisson variable with a log-link), OR transform the response so that it can be centered and scaled. Make sense? Hope this helps!



  5. Ellen says:

    Hi Jon,
    I ran into a couple more questions. Can you use semPlot (or is there something equivalent) with piecewiseSEM? I tried it but it didn’t seem to work. Also, “lmer” models (package lme4) give a warning for sem.fit (‘lmerTest did not return p-values. Consider using functions in the nlme package’). Am I restricted to using “lme” models (package nlme)?
    Thanks for your help,

    • Jon Lefcheck says:

      Hi Ellen,

      I have tried to implement semPlot with piecewiseSEM, but it would be difficult. I should contact the package author to see if they would be interested in helping to implement it.

      As for lmerTest, it has convergence errors more often than not, in my experience. In that case, it doesn’t approximate the denominator degrees of freedom and therefore doesn’t return a p-value, which is essential for the tests of directed separation. In this case, I recommend using nlme since it (more) reliably returns p-values. If you have non-normal variables, you may consider log10-transforming count data. Another option is to use glmmPQL from the MASS package, which allows for non-normal data but fits using lme.

      Let me know if you continue to run into problems and we can try to work something out.



  6. Katy says:

    Hello Jon,
    I had used piecewiseSEM earlier this year with a dataset containing binomial and NB-distributed response variables and a random effect. I had used glmer and glmer.nb and got some warnings, but it worked. I just re-installed piecewiseSEM today and now with the same data and model code I am getting this error message:
    > modList = list(
    …. )
    Warning messages:
    1: Some predictor variables are on very different scales: consider rescaling
    2: In checkConv(attr(opt, “derivs”), opt$par, ctrl = control$checkConv, :
    Model failed to converge with max|grad| = 0.00168764 (tol = 0.001, component 2)
    3: In checkConv(attr(opt, “derivs”), opt$par, ctrl = control$checkConv, :
    Model is nearly unidentifiable: very large eigenvalue
    – Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
    – Rescale variables?
    4: In theta.ml(Y, mu, sum(w), w, limit = control$maxit, trace = control$trace > :
    iteration limit reached
    5: In sqrt(1/i) : NaNs produced

    > sem.fit(modList, df1)
    Error in matrix(rep(0, dim(amat)[1]), nrow = 1, dimnames = list(responses)) :
    length of ‘dimnames’ [1] not equal to array extent

    Is glmer.nb no longer supported?

  7. Pablo Antiqueira says:

    Hi Jon,

    Many thanks for piecewise SEM. It’s a great ecological tool! I have been using it. However, when I try to run it with my data I get some errors.

    When I try to use the “sem.fit()” function I am getting this error message:

    #Error in topOrder(amat) : The graph is not acyclic!
    In addition: Warning message:
    In get.dag(formula.list) : The graph contains directed cycles!#

    Besides that, when I have tried to explore individual model fits using the sem.model.fits() I’m getting this error message:

    #Error in terms.formula(formula, data = data) :
    ‘data’ argument is of the wrong type

    Why do you think this is going on?
    Many thanks in advance,

    • Jon Lefcheck says:

      Hi Pablo, This warning means your SEM contains feedback loops (cycles), such as A -> B -> C -> A. Unfortunately the framework at the moment cannot handle such paths. You will have to reconsider the topology of your network — sorry! Cheers, Jon

  8. Kristin says:

    Thanks for this Jon!

    I have considered SEM for a while now, but never really made it work. This is helping a lot and I managed to apply your tutorial to my data (yes!) – so first of all thank you very much 🙂
    However, I have absolutely no formal training in statistics (or ecology), but ended up doing my PhD about soil microbial communities… Now I know at least some things 😉
    Making SEM work on my data now puts me in front of several questions – these might well be very basic, but we have very few people around who really work with statistics, or know their way around ecological modeling or else… So should I better shoot you an email (so I don’t bore the rest of your readers to death)? 😉

    Thanks in advance,
    have a great day!

  9. Ellen says:

    Hi Jon,
    I saw that you are working on a ‘helper function to get scaled model for sem.coefs’ in GitHub. What does this do? Does it help get standardized coefficients from glmer models? Thanks!

    • Jon Lefcheck says:

      Hi Ellen, This function was already included with sem.coefs and the argument standardize = "scale", but people were having issues with transformed and offset variables in the model formula. That led me to create the helper function to break apart the formula, transform the variables, scale the data appropriately, and refit the model, since it was getting too unwieldy to do in the original function. Cheers, Jon

  10. Barney Luttbeg says:

    Great package, thank you. I’m having a problem where I can fit a model, but I can’t get sem.coefs or sem.model.fits to work.

    The following works fine:
    Model2.modlist = list(
    lm(Hydroperiod~LandUse+Temperature, data =frog.data),
    glm(Frog.count~Hydroperiod, data =frog.data, family="poisson") )
    Model2.result=sem.fit(Model2.modlist, frog.data)

    But when I do the following:
    sem.coefs(Model2.result, frog.data)

    I get this error message: Error in order(response, p.value) : object ‘response’ not found

    I’ve tried changing my lm to glm and changing my poisson glm to glm.nb, but get the same error message. Am I using an unsupported model? Thank you for any help. Barney


  11. Jaime says:

    Hi Jon,

    It is me again, sorry for bothering you. But, I am kind of stuck in some issues with SEM. First of all thanks for a great package and the nice tutorials, I have really enjoyed them.

    Here is my problem:
    I have data from a mesocosm experiment, in which I measured a bunch of stuff (chemical, biological, and physical parameters), it is a fully factorial experiment with 5 experimental blocks, so I should use random effects, and that is why piecewise SEM is perfect for me. I want to estimate the magnitude of the direct and indirect effects of my treatments (fish population x density x infection ) at multiple levels (e.g. mesograzers and pelagic algae).

    So my questions are:
    1. Can I use my experimental treatments (main effects and their interactions) as exogenous variables in a piecewise SEM to estimate their direct and indirect effects? If so, there will be 7 exogenous variables that are uncorrelated. Which brings me to the next question.

    How many endogenous variables (response variables) can I use in the model? Because if I follow the d-rule, I will be allowed to use no more than two variables. I have 40 samples, 5 x 8 treatments:
    A => X1
    B => X1
    C=> X1
    A*B=> X1
    A*C=> X1
    B*C=> X1
    A*B*C=> X1
    X1 => X2

    Reading Shipley’s book (2000), I found a chapter about multigroup SEM. What I understood is that multigroup SEM tests for differences in the covariances of a network of interactions among groups, and tests the effects of some treatments. But since I want to estimate the direct and indirect effects, I am not sure if I should use it. What do you think?

    Thanks so much for your help,

    Best regards,


    • Jon Lefcheck says:

      Hi Jaime, Three-factor experiments are troubling to fit into an SEM framework. I am struggling with it myself currently. Why don’t you send me an email and we can discuss the specifics of your design, and what approach might be best. Cheers, Jon

  12. Theresa says:

    Hi Jon,

    I have started to use your piecewiseSEM package which seems to be a very good approach for my purposes. However I am getting this warning message:
    Warning message:

    In FUN(X[[i]], …) :
    Some d-sep tests are non-symmetrical. The most conservative P-value has been returned. Stay tuned for future developments…

    Strangely I did not get this warning message before….



    • Jon Lefcheck says:

      Hi Theresa, Yes this is an issue when there are intermediate endogenous variables that are modeled to a non-normal distribution. The transformation via the link function (e.g., log(x) for Poisson) means that the formula y ~ x does not return the same P-value as x ~ y which makes a difference in constructing the d-sep tests. At the moment the code returns the lower of the two P-values for the two formulas. But we’re working on some conceptual updates that may fix this error. Stay tuned… Jon

  13. Holly Copeland says:

    Hi Jon,
    Thank you for this detailed and clear explanation of piecewiseSEM and for the examples that you have provided. Super helpful! I have been applying these methods for the first time to a project (scary!), and I was hoping that you could provide feedback for whether this analysis makes sense and is applied correctly? Here it goes. I want to evaluate the contribution of road and gas wells (as distance from leks) on background noise (measured at the lek) and consequently see if they are linked to declines of sage-grouse at leks (N=22). I’ve provided the code and R results below. My interpretation of these results is that both well distance and road distance “cause” noise, but that well distance is 2x as important as road distance, and that noise leads to (causes? can I say that?) declines. Is that an appropriate interpretation? Also, I do have another covariate (X3, if you will), percent decline in sagebrush, that is not related to noise but could be linked to declines and I’m not sure I understand how to include this path. Any guidance here would be very helpful. Thanks so much, Holly

    The model list:
    mydf.modlist = list(
    lme(Median_L50 ~ RoadDistM + WellDistM, random = ~1|SiteNum, na.action = na.omit, data = mydf),
    lme(Negbin_tre ~ Median_L50, random = ~1|SiteNum, na.action = na.omit, data = mydf) )

    R Results (the model passed the Fisher’s C test):
    > #Extract path coefficients
    > sem.coefs(mydf.modlist, mydf, standardize = “scale”)
    response predictor estimate std.error p.value
    1 Median_L50 WellDistM -0.6408378 0.1489238 0.0004 ***
    2 Median_L50 RoadDistM -0.2936578 0.1489238 0.0634
    3 Negbin_tre Median_L50 -0.6276897 0.1740698 0.0018 **

    • Jon Lefcheck says:

      Hi Holly,

      Thanks for getting in touch. This seems like a great application of SEM because you have indirect, cascading effects (distance & wells -> noise -> lek abundance).

      Your interpretation is reasonable: not only is the distance of wells from the lek 2x more important than road distance in reducing noise (based on those standardized coefficients), but it is actually the only ‘significant’ contributor (P < 0.001 vs P = 0.06).

      You can further calculate the indirect effect by multiplying the standardized coefficients: -0.64 * -0.63 = 0.40. Note that the sign changes, implying that increasing distance from wells indirectly increases lek abundance by directly reducing noise.

      Adding another covariate is possible, but keep in mind that the `sem.fit` function will then test for other associations between this variable (X3) and unlinked variables in the model. So, for instance, if you added it to your first model, the goodness-of-fit tests would include an additional independence claim, `Negbin_tre ~ X3`. This could be good or bad, depending on whether this relationship is significant and changes the fit of your model.

      Drop me an email if you want to discuss further. (I did send you a reply and never heard back, but happy to have the exchange in public in case others can benefit from this conversation!)



  14. Isabel says:

    Hi Jon,
    thanks so much for this summary. It is really helpful for clarifying some points. However, it raises other doubts that are probably more related to the words that the different authors use than to the statistics.
    Shipley (2002) states :”Two different ways of testing causal models will be presented in this book. The most common method is called structural equations modelling
    (SEM) and is based on maximum likelihood techniques. (…) These drawbacks led me to develop an alternative set of methods that can be used for small sample sizes, non-normally distributed data or non-linear functional relationships (Shipley 2000) that I will call these d-sep tests”.
    From here, it seems that SEM and d-sep tests are just two methodologies to test causal models. Howeve, from Shipley (2013) and you I interpret that SEM is an statisitcal technique for multivariate analyses that can be evaluated used d-sep test (or AIC). Even more, when you go to MPlus, the concept of “piecewise SEM” is not used at any moment even when includes non-normal distributed or latent variables. Can you shed light on this? It is a little confusing

    Secondly, regarding the evaluation methods: are the d-sep tests and AIC comparison independent procedures? It seems from Shipley that to compute AIC one needs to perform the d-sep procedure, but this seems weird to me. Can I simply use AIC model comparison to select my best "path model"? Or, since I need the log-likelihood to compute AIC, is the only way to use the d-sep tests?


    • Jon Lefcheck says:

      Hi Isabel,

      Certainly! It may be more helpful to think of the first method in terms of “global estimation,” where relationships for the entire causal network are solved simultaneously (this is what is referred to by Shipley as the “common method”). The d-sep tests are a form of “local estimation” where each path is solved independently from the others. It amounts to simply estimating each (multiple) linear regression by itself.

      Because each equation is solved individually, each response can be modeled to its own (potentially non-normal) distribution and set of assumptions. This makes the piecewise technique a lot more flexible, as Shipley has argued.

      (Latent variables are tricky and are not yet implemented in a piecewise framework.)

      As for your second question, the d-sep tests — evaluating the “independence claims” or missing paths in your diagram — provide significance statistics (P-values) that are used to construct the AIC value. So yes, Shipley is correct that, in computing AIC, one must first conduct the d-sep tests.

      Fisher's C statistic, which is derived from the d-sep tests and used to infer goodness-of-fit, and the AIC value, which is used to make model comparisons, are two sides of the same coin. One evaluates the absolute fit of the model, i.e., does it reproduce the data well, or are there missing relationships that are important given the data? The other is used to compare different topologies. One can use AIC to infer a more likely topology but still have a poor-fitting model, in the same way you can select the most parsimonious regression from a candidate set and still have low explanatory power (e.g., low R^2). Make sense?

      Hope this helps!



      • Isabel says:

        Hi again Jon,
        and thanks for your clarification. It is putting order in my head
        Anyway, regarding my second question, Shipley uses the p-value to compute C and using C he proposed to calculate AIC using the formula you wrote above. Original definition of AIC from Akaike (1987) is AIC = -2logL + 2*r where r is the number of free parameters, right? This makes ask me some questions:
        1. C is just the “gobbledygook” in brackets or also the -2ln? if the latter is true, AIC = C+2K, right?
        2. Is there no other way to estimate the likelihood of a Piecewise SEM without using C, and the d-sep test? I mean, several programs like MPlus gives a likelihood and AIC value but I found no reference to Shipley’s d-test
        3. And, overall, how your PiecewiseSEM package calculates the LogLikelihood of a path and so the AIC? Using d-sep and C, I guess, am I right?


      • Jon Lefcheck says:

        Hi Isabel,
        1. C replaces the log-likelihood component, so everything to the right of the 2*r;
        2. The AIC given by those programs is for global estimation, i.e., the whole variance-covariance matrix is estimated using ML, so there is a more straightforward formulation of AIC. As it stands, Fisher's C is the only way to compute an AIC score for SEMs fit using local estimation;
        3. The likelihood is captured by Fisher's C (see Shipley's 2013 paper in Ecology). The likelihood degrees of freedom are pulled directly from the component models.
        Let me know if this makes sense and if you need any more clarification! Cheers, Jon
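To make the C-to-AIC pipeline concrete, here is a minimal R sketch following Shipley's (2009, 2013) formulas. The P-values and parameter count below are made up for illustration, not taken from any real model:

```r
# Hypothetical P-values from k = 3 independence claims in the basis set
p <- c(0.43, 0.12, 0.71)

# Fisher's C combines them; it follows a chi-squared distribution with 2k df
C <- -2 * sum(log(p))
k <- length(p)
p_model <- 1 - pchisq(C, df = 2 * k)  # large P-value = adequate model fit

# Shipley's (2013) AIC analogue: C replaces the -2logL term
K <- 7                 # illustrative count of estimated parameters
AIC_dsep <- C + 2 * K
```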

  15. Isabel says:

    Hi again Jon,
    certainly, now I am a little more confused. In your reply above, are you saying that some programs estimate the whole vcov matrix for a piecewise SEM using ML?
    From this tutorial I understood that this is not possible. Above on this page I can read that for piecewise SEM we are "estimating a separate variance-covariance matrix for each piecewise model" (for each single equation of the piecewise SEM, right?), and also that we are not "reproducing the entire variance-covariance matrix among all variables" as we do in traditional SEM.
    So, and sorry for the direct question: is it possible or not to estimate a whole vcov matrix for a piecewise SEM?
    Probably my misunderstanding comes from not grasping the difference between local and global estimation very well; can you recommend any reading?

    Thanks for your patience! I promise no more questions 😉

  16. Nick says:

    Hi Jon,

    Thanks for your work on this great R package. I have a question about effect sizes for pathways in glmer models. One of the strengths of SEM is visually showing the strengths (effect sizes) of explanatory variables via the width of the arrows in the SEM figure. This is great when all response variables have Gaussian distributions, because you can compare the strengths of pathways across the entire SEM via standardized coefficients. Is there a way to do this for pathways in generalized linear mixed-effects models (glmer)?

    My SEM has three response (endogenous) variables: one is binomial and the other two have Poisson distributions, so I have three glmer models within the SEM. I also have random effects to account for the hierarchical design (plots nested within sites), and I'm using the glmer function in lme4. I can't center and scale the responses, and standardizing the predictors doesn't quite get me the solution either, I think.

    How do you recommend I depict the arrows and coefficients in the SEM plot? What is most meaningful and easiest for the reader to understand? I will have the ANOVA table of all the fixed effects (coefficients of non-standardized predictors, standard errors, and P-values from sem.coefs), but what, and how, can I best show in the SEM figure? One more complication: one of the predictor variables is binomial (the response variable with the binomial distribution, presence/absence). I assume I can't standardize this as a predictor variable.

    • Jon Lefcheck says:

      Hi Nick, Yes you have hit the nail on the head when it comes to local estimation. We are working on this as we speak, it may be simpler than you think. If you like, you can download the development version of the package: library(devtools); install_github("jslefche/piecewiseSEM@2.0") where we have implemented a working version of a solution. However, BE WARNED! This will overwrite your current version of piecewiseSEM (1.4) and has a very different (but more familiar) syntax. It may be that our solution does not stand up to scrutiny so I would take any estimates with a heaping chunk of salt until we are confident we have a solution — we will release the package to CRAN when that happens! Cheers, Jon

  17. Vesna Gagic says:

    Hi Jon, thanks for a great package! Can you provide a working example of using sem.fit with glmmadmb in the model list? I tried changing the glmer calls to glmmadmb in the "shipley2009.reduced" data, but sem.fit gives this error: unused argument (control = control). If I change the "sem.missing.paths" function that sem.fit uses (using admb.opts = control instead of control = control, since glmmadmb appears to take that argument in the model formula), I get another error saying that all grouping variables in random effects must be factors (but they are!). So I got stuck a bit.

    • Jon Lefcheck says:

      Hi Vesna, Sorry I see you also emailed me and filed a bug on Github, but I just returned from a whirlwind of travel. This may have to do with the bug someone recently uncovered when the dataset is named ‘control’. I am working on a development version of the package and hope to have support for glmmadmb soon. Is the analysis urgent? If so drop me a line. Cheers, Jon

  18. Leonardo Saravia says:

    Thanks Jon for your great post. I am using piecewise SEM because I have a relatively small data set, but I am including some two-way interactions, and I wonder why the scaled estimate for one of the interactions is greater than one.

    Here is the model and some results:

    chla_SEM = list(
      ChlaMaxS = lm(ChlaMaxS ~ SSD * period + SST * period + SPMS * period, data = pchla),
      SST = lm(SST ~ PDD, data = pchla),
      Q = lm(Q ~ PDD, data = pchla),
      SSD = lm(SSD ~ Q, data = pchla),
      SPM = lm(SPMS ~ Q, data = pchla)
    )

    sem.coefs(chla_SEM, pchla, standardize = "scale")

      response        predictor   estimate   std.error p.value
    1 ChlaMaxS  SSD:periodafter  2.4939955 0.705944938  0.0026 **
    2 ChlaMaxS             SPMS  0.4187186 0.128841642  0.0047 **
    3 ChlaMaxS periodafter:SPMS -0.5586233 0.272587009  0.0562
    4 ChlaMaxS              SSD -0.1158602 0.095550453  0.2419
    5 ChlaMaxS              SST  0.1109951 0.092051462  0.2444
    6 ChlaMaxS      periodafter  0.4089875 0.428126531  0.3528
    7 ChlaMaxS  periodafter:SST  0.2041288 0.597129420  0.7367

    Do I have to eliminate the non-significant interactions using AIC model comparison?

    • Jon Lefcheck says:

      Hi Leonardo, Standardized coefficients are in units of standard deviations. Thus, assuming the errors on the estimates are small, it is entirely possible to have a standardized coefficient larger than 1. As for the non-significant interactions, that is up to you: you presumably included them in the model because you thought they were mechanistically important. I would caution against removing them for purely statistical reasons (e.g., a non-significant P-value), since this goes against the core philosophy of SEM. If you choose to proceed with AIC comparisons, recall that you must include all variables in nested models, even if they have no links, using the add.vars = argument in sem.fit. Good luck! Jon
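As an aside for other readers, the mechanics of `standardize = "scale"` are easy to demonstrate: the raw slope is multiplied by sd(x)/sd(y), so unlike a simple correlation it is not bounded at |1|. A minimal R sketch with simulated (made-up) data, where near-collinear predictors with opposing effects push the standardized slopes well past 1:

```r
set.seed(1)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)        # nearly collinear with x1
y  <- x1 - x2 + rnorm(100, sd = 0.05)  # opposing effects, small residual

fit <- lm(y ~ x1 + x2)

# Scale-standardize each slope: b_std = b * sd(x) / sd(y)
b_std <- coef(fit)[-1] * c(sd(x1), sd(x2)) / sd(y)
b_std  # both standardized slopes exceed 1 in absolute value
```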
