This vignette demonstrates how to use the auditor package to audit the residuals of models. The package provides methods for model verification and validation through error analysis.
Many models, such as random forests and neural networks, are nowadays treated as black boxes, so there is little theory describing the behavior of their errors. Most methods provided in the auditor package are model-agnostic, so they can be used regardless of any prior knowledge about the errors.
Some of the graphical error analysis methods also have corresponding scores, which allow comparison of two models.
To illustrate the application of auditor to regression problems we will use the artificial dataset dragons available in the DALEX2 package. Our goal is to predict the life length of dragons.
##   year_of_birth   height   weight scars colour year_of_discovery
## 1         -1291 59.40365 15.32391     7    red              1700
## 2          1589 46.21374 11.80819     5    red              1700
## 3          1528 49.17233 13.34482     6    red              1700
## 4          1645 48.29177 13.27427     5  green              1700
## 5            -8 49.99679 13.08757     1    red              1700
## 6           915 45.40876 11.48717     2    red              1700
##   number_of_lost_teeth life_length
## 1                   25   1368.4331
## 2                   28   1377.0474
## 3                   38   1603.9632
## 4                   33   1434.4222
## 5                   18    985.4905
## 6                   20    969.5682
Each analysis begins with the creation of a modelAudit object, which can then be used to audit the model.
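As a minimal sketch, two models fitted on the dragons data might be wrapped in modelAudit objects as follows (the exact audit() arguments, and the use of randomForest as the second model, are assumptions for illustration):

```r
library(auditor)
library(DALEX2)        # provides the dragons dataset
library(randomForest)

# fit two regression models predicting the life length of dragons
lm_model <- lm(life_length ~ ., data = dragons)
rf_model <- randomForest(life_length ~ ., data = dragons)

# wrap each model in a modelAudit object
lm_au <- audit(lm_model, data = dragons, y = dragons$life_length)
rf_au <- audit(rf_model, data = dragons, y = dragons$life_length)
```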
In this section we give a short overview of the visual validation of model errors and propose validation scores. Auditor helps to find answers to questions that may be crucial for further analyses.
Does the model fit the data? Is it missing important information?
Which model has better performance?
How similar are the models?
In the following sections we review the auditor functions for the analysis of model residuals. They are discussed in alphabetical order.
The auditor provides two pipelines for model audit.
model %>% audit() %>% modelResiduals() %>% plot(type=…) This pipeline is recommended. The modelResiduals() function creates a modelResiduals object, which may then be passed to the plot() function with a specified plot type. This approach requires one additional function in the pipeline. However, once created, the modelResiduals object contains all the necessary calculations that the plots require, so generating multiple plots is fast.
Alternative: model %>% audit() %>% modelResiduals() %>% plotType()
model %>% audit() %>% plot(type=…) This pipeline is shorter and more convenient to write than the previous one. However, the calculations are carried out every time a plotting function is called, so generating multiple plots is slower.
Alternative: model %>% audit() %>% plotType()
The help pages of the plot[Type]() functions contain additional information about the plots.
In this vignette we use the first pipeline; the alternative calls are shown as comments. First, we need to create a modelResiduals object.
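A call along the following lines, where lm_au is assumed to be a modelAudit object created with audit(), produces the residuals shown below:

```r
# lm_au: a modelAudit object created with audit(), as described above
lm_mr <- modelResiduals(lm_au)
head(lm_mr)
```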
##   label       res val variable         y fitted.values    std.res index
## 1    lm -14.72707   1       NA 1368.4331     1383.1602 -0.3617580     1
## 2    lm  80.72393   2       NA 1377.0474     1296.3235  1.9829147     2
## 3    lm  64.16099   3       NA 1603.9632     1539.8022  1.5760602     3
## 4    lm  35.38060   4       NA 1434.4222     1399.0416  0.8690943     4
## 5    lm  12.70785   5       NA  985.4905      972.7826  0.3121576     5
## 6    lm -61.54707   6       NA  969.5682     1031.1152 -1.5118515     6
Some plots may require residuals ordered by a specified variable or by fitted values.
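For example, residuals may be ordered by the year_of_discovery variable; the variable argument name below is an assumption based on the output that follows:

```r
# order residuals by a chosen model variable
lm_mr_year <- modelResiduals(lm_au, variable = "year_of_discovery")
head(lm_mr_year)
```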
##   label       res  val          variable         y fitted.values
## 1    lm -14.72707 1700 year_of_discovery 1368.4331     1383.1602
## 2    lm  80.72393 1700 year_of_discovery 1377.0474     1296.3235
## 3    lm  64.16099 1700 year_of_discovery 1603.9632     1539.8022
## 4    lm  35.38060 1700 year_of_discovery 1434.4222     1399.0416
## 5    lm  12.70785 1700 year_of_discovery  985.4905      972.7826
## 6    lm -61.54707 1700 year_of_discovery  969.5682     1031.1152
##      std.res index
## 1 -0.3617580     1
## 2  1.9829147     2
## 3  1.5760602     3
## 4  0.8690943     4
## 5  0.3121576     5
## 6 -1.5118515     6
The Autocorrelation Function (ACF) plot can be used to check the randomness of errors. If the errors are random, the autocorrelations should be near zero for all lag separations; if they are not, the autocorrelations will be significantly non-zero.
Residuals may be ordered by the values of any model variable or by fitted values. If the variable is not specified, the function takes the order from the data set.
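A sketch of both pipelines for this plot; the type string "ACF" and the plotACF() name follow the plot[Type]() convention described above and are assumptions:

```r
# lm_mr: a modelResiduals object, as created earlier
plot(lm_mr, type = "ACF", variable = "year_of_discovery")
# alternative: plotACF(lm_au, variable = "year_of_discovery")
```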
The Autocorrelation plot shows the i-th residual against the (i+1)-th residual. This plot may be useful for checking the autocorrelation of residuals.
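A corresponding call, assuming the type string "Autocorrelation" follows the plot[Type]() convention:

```r
# lm_mr: a modelResiduals object, as created earlier
plot(lm_mr, type = "Autocorrelation", variable = "year_of_discovery")
# alternative: plotAutocorrelation(lm_au, variable = "year_of_discovery")
```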
Sometimes it is difficult to compare two models based only on visualizations. Therefore, we propose scores that may be useful for choosing the better model.
The DW score and the Runs score are based on the Durbin-Watson and Runs test statistics, respectively.
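Assuming the score functions follow a score[Type]() naming convention analogous to plot[Type]() (an assumption; the exact names and the $score element are not confirmed by this vignette), the two values below might be obtained as:

```r
# lm_au: a modelAudit object created with audit()
scoreDW(lm_au)$score
scoreRuns(lm_au)$score
```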
##  0.651724
##  -11.21878
A grid of plots presents the correlation of the dependent variable and the fitted values of the models.
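A possible call for two compared models, assuming the type string "Correlation" follows the plot[Type]() convention:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "Correlation")
# alternative: plotCorrelation(rf_au, lm_au)
```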
Principal Component Analysis (PCA) of model residuals can be used to assess the similarity of the models.
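A sketch for comparing two models, assuming the type string "ModelPCA" follows the plot[Type]() convention:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "ModelPCA")
# alternative: plotModelPCA(rf_au, lm_au)
```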
Basic plot of residuals against observed, fitted, or variable values. If the variable is not specified, the function takes the order from the data set.
Black line corresponds to the y=x function.
Residuals may be ordered by the values of any model variable or by fitted values, and both models may be plotted together.
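A sketch with both models plotted together; the type string "Prediction" and the use of the scars variable are assumptions for illustration:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "Prediction", variable = "scars")
# alternative: plotPrediction(rf_au, lm_au, variable = "scars")
```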
Regression Error Characteristic (REC) curves are a generalization of ROC curves. The x axis of the plot shows the error tolerance and the y axis shows the percentage of observations predicted within the given tolerance. The REC curve estimates the Cumulative Distribution Function (CDF) of the error. The Area Over the REC Curve (AOC) is a biased estimate of the expected error.
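A corresponding call, assuming the type string "REC" follows the plot[Type]() convention:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "REC")
# alternative: plotREC(rf_au, lm_au)
```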
Basic plot of residuals vs observed, fitted or variable values. It provides information about the structure of the model.
Residuals may be ordered by the values of any model variable or by fitted values, and both models may be plotted together. If the variable is not specified, the function takes the order from the data set.
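A sketch of the residual plot for two models, assuming the type string "Residual" follows the plot[Type]() convention:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "Residual", variable = "scars")
# alternative: plotResidual(rf_au, lm_au, variable = "scars")
```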
Comparison of the absolute values of residuals. The red dot stands for the root mean square.
The density of residuals may be plotted in different ways, which makes it easy to compare the residuals of several models. Residuals may also be divided by the median of a numeric variable or split by a factor variable.
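A possible call, assuming the type string "ResidualDensity" follows the plot[Type]() convention and that the colour factor from the dragons data can be used for splitting:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "ResidualDensity")
# split by a factor variable (argument name is an assumption):
# plotResidualDensity(rf_au, lm_au, variable = "colour")
```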
The basic idea of ROC curves for regression is to show model asymmetry. The RROC is a plot with total over-estimation on the x-axis and total under-estimation on the y-axis.
For RROC curves we use a shift s, which is the equivalent of the threshold for ROC curves. For each observation we calculate a new prediction ŷ'_i = ŷ_i + s, where s is the shift. Therefore, there is a different set of errors e_i = ŷ_i + s − y_i for each shift.
Over-estimation is calculated as OVER(s) = Σ e_i over the observations with e_i > 0, and under-estimation is calculated as UNDER(s) = Σ e_i over the observations with e_i < 0. The point corresponding to the shift s = 0 is represented by a dot.
The Area Over the RROC Curve (AOC) equals the variance of the errors multiplied by n²/2, where n is the number of observations.
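A corresponding call, assuming the type string "RROC" follows the plot[Type]() convention:

```r
# rf_mr, lm_mr: modelResiduals objects for the two compared models
plot(rf_mr, lm_mr, type = "RROC")
# alternative: plotRROC(rf_au, lm_au)
```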
This plot shows whether the residuals are spread equally along the ranges of the predictors.
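A sketch of the call, assuming the type string "ScaleLocation" follows the plot[Type]() convention:

```r
# lm_mr: a modelResiduals object, as created earlier
plot(lm_mr, type = "ScaleLocation")
# alternative: plotScaleLocation(lm_au)
```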
For comparing two models we can use the GQ score, which is based on the Goldfeld-Quandt test statistic. It may also be computed with the score() function with the argument score equal to “GQ”.
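The score() call is named in the text above; the scoreGQ() alternative is an assumption based on the score[Type]() convention:

```r
# lm_au: a modelAudit object created with audit()
score(lm_au, score = "GQ")
# alternative: scoreGQ(lm_au)
```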