Dealing with multicollinearity using VIFs

Besides normality of residuals and homogeneity of variance, one of the biggest assumptions of linear modeling is independence of predictors. If two or more predictors in a model are correlated, the model may produce unstable parameter estimates with highly inflated standard errors, sometimes yielding an overall significant model with no individually significant predictors. In other words, bad news if your goal is to determine the contribution of each predictor in explaining the response. But there is hope!

There are several methods for dealing with multicollinearity. The simplest is to construct a correlation matrix and corresponding scatterplots. If the correlations between predictors approach 1 in absolute value, then multicollinearity might be a problem. In that case, one can make some educated guesses about which predictors to retain in the analysis (based on biological significance, ease of measurement, etc.).

Another way to identify collinear predictors is by calculating a variance inflation factor (VIF) for each predictor. The VIF of a predictor is 1 / (1 - R^2), where R^2 comes from regressing that predictor on all the other predictors in the model; it measures how much the variance of that predictor's coefficient estimate is inflated by collinearity. A VIF = 1 indicates no collinearity, whereas increasingly higher values indicate increasing multicollinearity. The approach suggested by Zuur et al. (2010, reference below) is to calculate VIFs for each predictor in the model and, if any are larger than some cutoff, sequentially drop the predictor with the largest VIF, recalculate, and repeat until all values are below the cutoff (they suggest a cutoff of 2). VIFs are especially nice for dealing with collinearity of interaction terms.
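As a concrete sketch of that definition (simulated data; the variables x1 to x3 are made up purely for illustration), the VIF of a predictor can be computed by hand as 1 / (1 - R^2) from a regression of that predictor on the others; equivalently, the VIFs are the diagonal of the inverse correlation matrix of the predictors:

```r
# Compute a VIF from first principles using base R only
set.seed(42)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.5)  # built to be collinear with x1
x3 <- rnorm(n)                 # unrelated to the other predictors

# VIF of x1 by hand: regress x1 on the remaining predictors
r2     <- summary(lm(x1 ~ x2 + x3))$r.squared
vif.x1 <- 1 / (1 - r2)

# Equivalent shortcut: diagonal of the inverse correlation matrix
vifs <- diag(solve(cor(cbind(x1, x2, x3))))
```

Here vif.x1 matches the first element of vifs, and both sit well above Zuur et al.'s suggested cutoff of 2 because x2 was constructed directly from x1, while the VIF for the independent x3 stays near 1.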

The vif function in the “car” package in R will calculate VIFs for a linear model. I’ve written a quick function that will identify whether any VIFs > cutoff, remove the variable with the largest VIF, recalculate, and repeat until all VIFs < cutoff. It produces a final model with the same name as the original model. I’ve also included a function, adapted from rms::vif, for calculating VIFs for linear mixed effects models built using the lmer function in the “lme4” package.

A WORD OF CAUTION! The following code is a brute-force method for removing variables. It’s always important to consider the identity of the variables being removed, and whether certain variables are more biologically meaningful, or whether certain variables are better representations of the underlying processes. I recommend going through the sequential deletion step-by-step after running this method to see whether you agree with what is happening mathematically. To that end, the function also outputs a table showing the VIF values and the deleted variable at each step in the sequence.

# Load psych package
library(psych) # Calls: pairs.panels
# Load car package
library(car) # Calls: vif
# Load plyr package
library(plyr) # Calls: rbind.fill

# Create a fake data frame with two pairs of highly collinear variables
set.seed(8)
df = data.frame(y = rnorm(500), x1 = runif(500, 0, 100), x2 = runif(500, 0, 10))
df$x3 = df$x1 + runif(500, -50, 50); df$x4 = df$x2 + runif(500, -5, 5)
# Fit a linear model to the data
mod = lm(y ~ ., data = df); plot(mod, which = 1:2)
# Generate scatterplots with Pearson correlations
pairs.panels(df)
# Use function `vif` to calculate variance inflation factors for each variable
# in the model
vif(mod)

# Choose a VIF cutoff under which a variable is retained (Zuur et al. 2010 
# MEE recommends 2)
cutoff = 2
# Sequentially drop the variable with the largest VIF until all variables 
# have VIF < cutoff, recording the VIFs at each step
flag = TRUE
viftable = data.frame()
while(flag == TRUE) {
  vfit = vif(mod)
  viftable = rbind.fill(viftable, as.data.frame(t(vfit)))
  if(max(vfit) > cutoff) {
    mod = update(mod, as.formula(paste(".", "~", ".", "-", names(which.max(vfit)))))
  } else { flag = FALSE }
}
# Look at the final model
summary(mod)
# And associated VIFs
vif(mod)
# And show the VIFs at each step of the deletion sequence (the variable dropped
# at each step is the one with the largest VIF in that row)
viftable

And the function for lmer:

vif.mer <- function (fit) {
    ## adapted from rms::vif

    v <- vcov(fit)
    nam <- names(fixef(fit))

    ## exclude intercepts
    ns <- sum(1 * (nam == "Intercept" | nam == "(Intercept)"))
    if (ns > 0) {
        v <- v[-(1:ns), -(1:ns), drop = FALSE]
        nam <- nam[-(1:ns)]
    }

    ## scale the covariance matrix to a correlation matrix and take the
    ## diagonal of its inverse to obtain the VIFs
    d <- diag(v)^0.5
    v <- diag(solve(v / (d %o% d)))
    names(v) <- nam
    v
}

Zuur et al. (2010). A protocol for data exploration to avoid common statistical problems. Methods in Ecology and Evolution 1: 3-14.


11 thoughts on “Dealing with multicollinearity using VIFs”

  1. jebyrnes says:

Note, often collinearities will be generated by interaction terms. By calculating a centered interaction, you can remove these, rather than worrying about removing the interaction. You just have to be careful about interpretation of main effects afterwards!
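A minimal sketch of that idea (made-up predictors whose values sit far from zero, which is what drives the collinearity with a raw interaction):

```r
# Raw vs. centered interaction terms
set.seed(1)
x <- runif(200, 10, 20)  # predictor far from zero
z <- runif(200, 10, 20)

cor(x, x * z)  # the raw interaction is strongly correlated with x

xc <- x - mean(x)  # center both variables before forming the interaction
zc <- z - mean(z)
cor(xc, xc * zc)  # the centered interaction is nearly uncorrelated with xc
```

A model like y ~ xc * zc then carries much lower VIFs, but, as noted above, the main-effect coefficients are now interpreted at the mean of the other variable rather than at zero.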

    • jslefche says:

      Good point, Jarrett! Centering is yet another good way to deal with collinearity. I will have to ponder on situations where certain techniques are more or less appropriate…

  2. Martí says:

Hello, if I fit a logistic mixed model with lme4 and I want to take into account the collinearity and correlation between discrete covariates, what is your opinion? I shouldn’t use “vif.mer”, should I?

    Great post!!


      • Martí says:

Thank you so much. I’ll try it. I think the conditional R^2 is useful for selecting the “best” model, in the same way as AIC or BIC selection. However, in the new version of lme4 we can fit the model with “print(model, corr=FALSE)”. I don’t know if that takes the correlation into account.

        Thank you again.


  3. Matteo says:

    There is an error in the function to test the collinearity! After ## exclude intercepts
    It’s missing this part:
    ns <- sum(1 * (nam == "Intercept" | nam == "(Intercept)"))
