This vignette demonstrates how to use the auditor package to audit the residuals of models. The auditor provides methods for model verification and validation through error analysis.

Many models, such as random forests and neural networks, are nowadays treated as black boxes, so there is little theory describing the behavior of their errors. Most methods provided in the auditor package are model-agnostic, so they can be used regardless of the available knowledge about the errors.

Some of the graphical error analysis methods also have corresponding scores, which allow comparison of two models.

Regression use case - dragons data

To illustrate applications of auditor to regression problems we will use an artificial dataset dragons available in the DALEX2 package. Our goal is to predict the length of life of dragons.

library(DALEX2)
data("dragons")
head(dragons)
##   year_of_birth   height   weight scars colour year_of_discovery
## 1         -1291 59.40365 15.32391     7    red              1700
## 2          1589 46.21374 11.80819     5    red              1700
## 3          1528 49.17233 13.34482     6    red              1700
## 4          1645 48.29177 13.27427     5  green              1700
## 5            -8 49.99679 13.08757     1    red              1700
## 6           915 45.40876 11.48717     2    red              1700
##   number_of_lost_teeth life_length
## 1                   25   1368.4331
## 2                   28   1377.0474
## 3                   38   1603.9632
## 4                   33   1434.4222
## 5                   18    985.4905
## 6                   20    969.5682

Models

Linear model

lm_model <- lm(life_length ~ ., data = dragons)

Random forest

library("randomForest")
set.seed(59)
rf_model <- randomForest(life_length ~ ., data = dragons)

Preparation for error analysis

Each analysis begins with the creation of a modelAudit object. It is an object that can be used to audit a model.

library("auditor")

lm_audit <- audit(lm_model, label = "lm", data = dragons, y = dragons$life_length)
rf_audit <- audit(rf_model, label = "rf", data = dragons, y = dragons$life_length)

Audit of residuals

In this section we give a short overview of the visual validation of model errors and present the proposed validation scores. The auditor helps to answer questions that may be crucial for further analyses.

  • Does the model fit the data? Is it missing any information?

  • Which model has better performance?

  • How similar are the models?

In the following sections we review the auditor functions for the analysis of model residuals. They are discussed in alphabetical order.

Audit pipelines

The auditor provides two pipelines for model audit.

  1. model %>% audit() %>% modelResiduals() %>% plot(type=…) This pipeline is recommended. The modelResiduals() function creates a modelResiduals object, which may be passed to the plot() function together with the chosen type of plot. This approach requires one additional function in the pipeline. However, once created, the modelResiduals object contains all the calculations that the plots require, so generating multiple plots is fast.
    Alternative: model %>% audit() %>% modelResiduals() %>% plotType()

  2. model %>% audit() %>% plot(type=…) This pipeline is shorter than the previous one. The calculations are carried out every time the function is called, but it is quicker to write.
    Alternative: model %>% audit() %>% plotType()

The help pages of the plot[Type]() functions contain additional information about the plots. A minimal sketch of both pipelines is shown below.
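
Both pipelines can be illustrated with the random forest model defined above. This is only a sketch; it assumes the %>% pipe is available, e.g. from the magrittr package.

library("magrittr")

# Pipeline 1: the modelResiduals object stores all calculations,
# so several plots can be generated from it quickly
rf_model %>%
  audit(label = "rf", data = dragons, y = dragons$life_length) %>%
  modelResiduals() %>%
  plot(type = "Residual")

# Pipeline 2: shorter, but calculations are repeated on every call
rf_model %>%
  audit(label = "rf", data = dragons, y = dragons$life_length) %>%
  plot(type = "Residual")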

modelResiduals()

In this vignette we use the first pipeline; the alternative calls are shown as comments. First, we need to create modelResiduals objects.

lm_mr <- modelResiduals(lm_audit, variable = "")
rf_mr <- modelResiduals(rf_audit, variable = "")

head(lm_mr)
##   label       res val     variable         y fitted.values    std.res
## 1    lm -14.72707   1 Observations 1368.4331     1383.1602 -0.3617580
## 2    lm  80.72393   2 Observations 1377.0474     1296.3235  1.9829147
## 3    lm  64.16099   3 Observations 1603.9632     1539.8022  1.5760602
## 4    lm  35.38060   4 Observations 1434.4222     1399.0416  0.8690943
## 5    lm  12.70785   5 Observations  985.4905      972.7826  0.3121576
## 6    lm -61.54707   6 Observations  969.5682     1031.1152 -1.5118515
##   index
## 1     1
## 2     2
## 3     3
## 4     4
## 5     5
## 6     6

Some plots require a modelResiduals object with a specified variable or with fitted values.

lm_mr_year <- modelResiduals(lm_audit, variable = "year_of_discovery")
rf_mr_year <- modelResiduals(rf_audit, variable = "year_of_discovery")

lm_mr_fitted <- modelResiduals(lm_audit)
rf_mr_fitted <- modelResiduals(rf_audit)

head(lm_mr_year)
##   label       res  val          variable         y fitted.values
## 1    lm -14.72707 1700 year_of_discovery 1368.4331     1383.1602
## 2    lm  80.72393 1700 year_of_discovery 1377.0474     1296.3235
## 3    lm  64.16099 1700 year_of_discovery 1603.9632     1539.8022
## 4    lm  35.38060 1700 year_of_discovery 1434.4222     1399.0416
## 5    lm  12.70785 1700 year_of_discovery  985.4905      972.7826
## 6    lm -61.54707 1700 year_of_discovery  969.5682     1031.1152
##      std.res index
## 1 -0.3617580     1
## 2  1.9829147     2
## 3  1.5760602     3
## 4  0.8690943     4
## 5  0.3121576     5
## 6 -1.5118515     6

Plots

plotACF() - Autocorrelation Function of Residuals

The Autocorrelation Function (ACF) plot can be used to check the randomness of errors. If the errors are random, the autocorrelations should be near zero for all lag separations; if not, some autocorrelations will be significantly non-zero.

Residuals may be ordered by the values of any model variable or by fitted values. If a variable is not specified, the order from the data set is used.

plot(lm_mr_year, type = "ACF")

# alternative:
# plotACF(lm_audit, variable = "year_of_discovery")

plotAutocorrelation() - Autocorrelation of Residuals

The Autocorrelation plot shows the i-th residual plotted against the (i+1)-th residual. It may be useful for checking the autocorrelation of residuals.

Sometimes it is difficult to compare two models based only on visualizations. Therefore, we propose scores that may be useful for choosing the better model.

plot(rf_mr_fitted, type = "Autocorrelation")

# alternative:
# plotAutocorrelation(rf_audit)
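
To make the idea concrete, the relation shown on the plot can be sketched in base R (an illustration only, not the auditor implementation), using the res column of the modelResiduals data frame:

# plot the i-th residual against the (i+1)-th residual
res <- rf_mr_fitted$res
plot(head(res, -1), tail(res, -1),
     xlab = "residual i", ylab = "residual i+1")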

The DW score and the Runs score are based on the Durbin-Watson and runs test statistics.

Scores can be calculated with the scoreDW() and scoreRuns() functions, or with the score() function with the score argument set to "DW" or "Runs".

rf_score_DW <- scoreDW(rf_audit)
rf_score_Runs <- scoreRuns(rf_audit)

rf_score_DW$score
## [1] 0.5508757
rf_score_Runs$score
## [1] -11.85027
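
The same values can be obtained through the score() function described above (assuming, as for scoreDW() and scoreRuns(), that the result contains a score element):

score(rf_audit, score = "DW")$score
score(rf_audit, score = "Runs")$score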

plotModelCorrelation() - Correlation of Models

A grid of plots presents the correlations of the dependent variable and the fitted values of the models.

plot(rf_mr, lm_mr, type = "ModelCorrelation")

# alternative:
# plotModelCorrelation(rf_audit, lm_audit)

plotModelPCA() - Model PCA

Principal Component Analysis of model residuals. PCA can be used to assess the similarity of the models.

plot(rf_mr, lm_mr, type = "ModelPCA")

# alternative:
# plotModelPCA(rf_audit, lm_audit)
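
The idea behind the plot can be sketched in base R (an illustration only, not the auditor implementation): PCA is performed on a matrix whose columns are the residuals of the compared models.

# residuals of both models collected as columns of one matrix
res_mat <- cbind(rf = rf_mr$res, lm = lm_mr$res)
# principal components of the residuals; very similar models give strongly
# correlated residuals and thus one dominant component
pca <- prcomp(res_mat, scale. = TRUE)
summary(pca)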

plotPrediction() - Observed vs Predicted

Plot of predicted values against observed values or against the values of a chosen variable. If a variable is not specified, the order from the data set is used.

The black line corresponds to the function y = x.

lm_mr_m2 <- modelResiduals(lm_audit, variable = "life_length")
rf_mr_m2 <- modelResiduals(rf_audit, variable = "life_length")

plot(rf_mr_m2, lm_mr_m2, type = "Prediction")

# alternative:
# plotPrediction(rf_audit, lm_audit, variable = "life_length")

Predicted values may be ordered by the values of any model variable or by fitted values, and both models may be plotted together.

plot(rf_mr_fitted, lm_mr_fitted, type = "Prediction")

# alternative:
# plotPrediction(rf_audit, lm_audit)

plotREC() - Regression Error Characteristic (REC) Curve

Regression Error Characteristic (REC) curves are a generalization of ROC curves. On the x axis of the plot there is an error tolerance and on the y axis there is a percentage of observations predicted within the given tolerance. The REC curve estimates the Cumulative Distribution Function (CDF) of the error. The Area Over the REC Curve (AOC) is a biased estimate of the expected error.

plot(rf_mr, lm_mr, type = "REC")

# alternative:
# plotREC(rf_audit, lm_audit)
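
The curve itself is easy to approximate directly, which may help with its interpretation (a rough sketch for the linear model, not the auditor implementation):

# empirical CDF of absolute errors: share of observations whose absolute
# error does not exceed a given tolerance
abs_err <- abs(dragons$life_length - predict(lm_model, dragons))
tolerance <- seq(0, max(abs_err), length.out = 100)
within_tol <- sapply(tolerance, function(t) mean(abs_err <= t))
plot(tolerance, within_tol, type = "l",
     xlab = "error tolerance", ylab = "share of observations within tolerance")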

plotResidual() - Plot Residuals vs Observed, Fitted or Variable Values

Basic plot of residuals vs observed, fitted or variable values. It provides information about the structure of the model.

plot(rf_mr, type = "Residual")

# alternative:
# plotResidual(rf_audit)

Residuals may be ordered by the values of any model variable or by fitted values, and both models may be plotted together. If a variable is not specified, the order from the data set is used.

plot(rf_mr_fitted, lm_mr_fitted, type = "Residual")

# alternative:
# plotResidual(rf_audit, lm_audit)

plotResidualBoxplot() - Boxplot of residuals

Comparison of the absolute values of residuals. The red dot stands for the root mean square.

plot(lm_mr, rf_mr, type = "ResidualBoxplot")

# alternative
# plotResidualBoxplot(lm_audit, rf_audit)
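
The value marked by the red dot can be computed directly from the residuals (a minimal sketch):

# root mean square of the linear model residuals
sqrt(mean(lm_mr$res^2))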

plotResidualDensity() - Density of Residuals

The density of residuals may be plotted in different ways, which makes it easy to compare the residuals of several models.

plot(rf_mr, lm_mr, type = "ResidualDensity")

# alternative
# plotResidualDensity(rf_audit, lm_audit)

Residuals may also be divided at the median of a numeric variable or split by a factor variable.

plotResidualDensity(lm_mr_m2, rf_mr_m2)

# alternative
# plotResidualDensity(rf_audit, lm_audit, variable = "life_length")
plotResidualDensity(lm_mr_year, rf_mr_year)

# alternative
# plotResidualDensity(rf_audit, lm_audit, variable = "year_of_discovery")

plotRROC() - Regression Receiver Operating Characteristic (RROC)

The basic idea of the ROC curves for regression is to show model asymmetry. The RROC is a plot where on the x-axis we depict total over-estimation and on the y-axis total under-estimation.

For RROC curves we use a shift, which is an equivalent of the threshold for ROC curves. For each observation we calculate a new prediction ŷ'_i = ŷ_i + s, where s is the shift. Therefore, there is a different error value for each shift: e_i(s) = ŷ_i + s - y_i.

Over-estimation is calculated as OVER(s), the sum of the errors e_i(s) that are greater than 0. Under-estimation is calculated as UNDER(s), the sum of the errors e_i(s) that are smaller than 0. The point where the shift equals 0 is marked by a dot.

The Area Over the RROC Curve (AOC) equals the variance of the errors multiplied by n^2/2, where n is the number of observations.

plot(rf_mr, lm_mr, type = "RROC")

# alternative:
# plotRROC(rf_audit, lm_audit)
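
The quantities behind the curve can be sketched as follows (an illustration for the linear model only, not the auditor implementation):

# shifted errors e_i(s) = y_hat_i + s - y_i
shifted_err <- function(s) predict(lm_model, dragons) + s - dragons$life_length
shifts <- seq(-200, 200, length.out = 101)
over  <- sapply(shifts, function(s) sum(pmax(shifted_err(s), 0)))  # OVER(s)
under <- sapply(shifts, function(s) sum(pmin(shifted_err(s), 0)))  # UNDER(s)
plot(over, under, type = "l", xlab = "OVER-estimation", ylab = "UNDER-estimation")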

plotScaleLocation() - Scale Location Plot

This plot shows whether residuals are spread equally along the range of predictors.

plot(rf_mr_fitted, lm_mr_fitted, type = "ScaleLocation")

# alternative:
# plotScaleLocation(rf_audit, lm_audit)

For comparing two models we can use the GQ score, which is based on the Goldfeld-Quandt test statistic. It may also be computed with the score() function with the score argument set to "GQ".
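
A minimal example using the score() interface described above (assuming, as for scoreDW() and scoreRuns(), that the result contains a score element):

rf_score_GQ <- score(rf_audit, score = "GQ")
rf_score_GQ$score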

plotTwoSidedECDF() - Two-sided Empirical Cumulative Distribution Function (ECDF)

Cumulative Distribution Function for positive and negative residuals.

plot(rf_mr, lm_mr, type = "TwoSidedECDF")

# alternative
# plotTwoSidedECDF(rf_audit, lm_audit)

Other methods

Other methods and plots are described in vignettes: