Slides and other materials are available at
https://tinyurl.com/UseR2022
Meet your guide to ResponsibleML
More about MI² https://mi2.ai/.
Feel free to post any questions during the workshop in the chat. From time to time we will address these questions.
16:00 About the team
16:10 Agenda + motivation
16:20 EDA
16:30 Let’s train some models
16:40 Evaluate performance + examples
16:50 Do it yourself
17:00 XAI pyramid - introduction
17:10 Permutational Variable Importance + examples
17:20 Do it yourself
17:30 BREAK
18:00 Break-down + examples
18:10 SHAP + examples
18:20 Do it yourself
18:30 Ceteris Paribus + examples
18:40 Partial Dependence Profile + examples
18:50 Do it yourself
19:00 modelStudio + examples
19:10 Do it yourself
19:20 Closing remarks
This workshop aims to present a set of methods for the exploration of complex predictive models. We assume that participants are familiar with R and have some basic knowledge of predictive models. In this workshop, we will show how to explore these models.
The workshop consists of 1/3 lecture, 1/3 code examples discussed by the tutor and 1/3 computer-based exercises for participants.
As the group is large, it may happen that someone has problems with a task; in such a situation, please write about it in the chat.
It may also happen that someone finishes the tasks much earlier. In that case, it would be best to help others who have questions.
The workshop is based on material from a comic book on Responsible Machine Learning. The book can be accessed online and is also available in paperback on Amazon and Lulu.
Get prepared. Install the following packages.
install.packages(c("tableone", "DALEX", "ggplot2", "ggmosaic", "partykit", "ranger", "rms", "patchwork", "modelStudio"))
The purpose of this tutorial is to present techniques for model exploration, visualisation and explanation. To do this we will use some interesting real-world data, train a few models on the data and then use XAI (eXplainable artificial intelligence) techniques to explore these models. Along the way, we will tackle various interesting topics such as model training, model verification, model visualisation, model comparison and exploratory model analysis.
We assume that users have some basic knowledge about predictive modelling, so we can focus on model exploration. If you want to learn more about predictive models, you will find an in-depth introduction in An Introduction to Statistical Learning: with Applications in R. In this tutorial we present the basics of explanatory model analysis; if you are looking for details, you will find them in Explanatory Model Analysis.
Why should I care?
Predictive models have been used throughout human history. Priests in ancient Egypt predicted when the flood of the Nile or a solar eclipse would come. Developments in statistics, the increasing availability of datasets, and growing computing power allow predictive models to be built faster and faster.
Today, predictive models are used virtually everywhere. Planning the supply chain for a large corporation, recommending lunch or a movie for the evening, or predicting traffic jams in a city. Newspapers are full of interesting applications.
But how are such predictive models developed? In the following sections, we will go through the life cycle of a predictive model: from the concept phase, through design, training, and checking, to deployment. For this example, we will use a data set on the risk of death for Covid-19 patients after SARS-CoV-2 infection. Keep in mind that the data presented here is artificial: it is generated to mirror relations in real data but does not contain observations for real patients. Still, it should be an interesting use case to discuss a typical life cycle of a predictive model.
Tools
In this tutorial we will work on three types of models: logistic regression with splines, implemented in the rms package; a simple decision tree, implemented in the partykit package; and a random forest, implemented in the ranger package.
Models will be explained and visualized with the DALEX package. Note that there are also other packages with similar functionalities: for modelling, other popular choices are mlr, tidymodels and caret, while for model explanation you will find lots of interesting features in flashlight and iml.
The problem
The life cycle of a predictive model begins with a well-defined problem. In this example, we are looking for a model that assesses the risk of death after a Covid-19 diagnosis. We don't want to guess who will survive and who won't. We want to construct a score that allows us to sort patients by risk of death.
Why do we need such a model? It could have many applications! Those at higher risk of death could be given more protection, such as providing them with pulse oximeters or preferentially vaccinating them.
library("tableone")
library("ggplot2")
library("partykit")
library("ranger")
set.seed(1313)
Before we build any model, even before we touch any data, we should first determine the purpose for which the predictive model will be built.
It is very important to define the objective before we sit down to programming, because later it is easy to get lost in setting function parameters and dealing with all the technical details, and to lose sight of the long-term goal.
So, first: Define the objective.
For these exercises, we have selected data on the Covid-19 pandemic. Imagine that we want to determine the order of vaccination. For this, we want to create a predictive model that assesses individual risks, because we would like to rank patients according to those risks.
To get a model that gives the best ranking, we will use the AUC measure to evaluate model performance. What exactly the AUC is will be explained a little later; right now the key thing is that we are interested in ranking patients based on their risk score.
To build a model we need good data. In Machine Learning, the word good means a large amount of representative data. Collecting representative data is not easy and often requires designing an appropriate experiment.
The best possible scenario is that one can design and run an experiment to collect the necessary data. In less comfortable situations, we look for "natural experiments," i.e., data that have been collected for another purpose but that can be used to build a model. Here we will use data collected through epidemiological interviews. There will be a lot of data points and the sample should be fairly representative, although unfortunately it only involves symptomatic patients who tested positive for SARS-CoV-2.
For this exercise, we have prepared two sets of characteristics of patients infected with SARS-CoV-2. It is important to note that these are not real patient data. This is simulated data, generated to have relationships consistent with real data (obtained from NIH), but the observations themselves are not real. Fortunately, this is sufficient for our exercise.
The data is divided into two sets, covid_spring and covid_summer. The first was collected in spring 2020 and will be used as training data, while the second was collected in summer 2020 and will be used for validation. In machine learning, model validation is performed on a separate data set; this controls the risk of overfitting an elastic model to the data. If we do not have a separate set, it is generated using cross-validation, out-of-sample or out-of-time techniques.
covid_spring corresponds to Covid-19 mortality data from spring 2020. We will use this data for model training. covid_summer corresponds to Covid-19 mortality data from summer 2020. We will use this data for model validation. Both datasets are available in the DALEX package.
library("DALEX")
head(covid_spring)
## Gender Age Cardiovascular.Diseases Diabetes Neurological.Diseases
## 1 Male 29 No No No
## 2 Male 50 No No No
## 3 Male 39 No No No
## 4 Male 40 No No No
## 5 Male 53 No No No
## 6 Female 36 No No No
## Kidney.Diseases Cancer Hospitalization Fever Cough Weakness Death
## 1 No No No No No No No
## 2 No No No Yes Yes Yes No
## 3 No No No No No No No
## 4 No No No No No No No
## 5 No No No Yes Yes Yes No
## 6 No No No No No No No
head(covid_summer)
## Gender Age Cardiovascular.Diseases Diabetes Neurological.Diseases
## 1 Female 57 No No No
## 2 Male 34 No No No
## 3 Male 73 No No No
## 4 Female 48 Yes No No
## 5 Male 29 No No No
## 6 Male 54 No No No
## Kidney.Diseases Cancer Hospitalization Fever Cough Weakness Death
## 1 No No Yes Yes Yes No No
## 2 No No No No No No No
## 3 No No Yes No No No Yes
## 4 No No No Yes No No No
## 5 No No Yes Yes No Yes No
## 6 No No Yes Yes Yes No No
Before we start any serious modelling, it is worth looking at the data first. To do this, we will perform a simple EDA. In R there are many tools for data exploration; here we use a package that supports the so-called "table one", a stratified summary table commonly reported in medical papers.
library("tableone")
table1 <- CreateTableOne(vars = colnames(covid_spring)[1:11],
data = covid_spring,
strata = "Death")
print(table1)
## Stratified by Death
## No Yes p test
## n 9487 513
## Gender = Male (%) 4554 (48.0) 271 (52.8) 0.037
## Age (mean (SD)) 44.19 (18.32) 74.44 (13.27) <0.001
## Cardiovascular.Diseases = Yes (%) 839 ( 8.8) 273 (53.2) <0.001
## Diabetes = Yes (%) 260 ( 2.7) 78 (15.2) <0.001
## Neurological.Diseases = Yes (%) 127 ( 1.3) 57 (11.1) <0.001
## Kidney.Diseases = Yes (%) 111 ( 1.2) 62 (12.1) <0.001
## Cancer = Yes (%) 158 ( 1.7) 68 (13.3) <0.001
## Hospitalization = Yes (%) 2344 (24.7) 481 (93.8) <0.001
## Fever = Yes (%) 3314 (34.9) 335 (65.3) <0.001
## Cough = Yes (%) 3062 (32.3) 253 (49.3) <0.001
## Weakness = Yes (%) 2282 (24.1) 196 (38.2) <0.001
During modelling, the part related to exploration often takes the most time. In this case, we will limit ourselves to some simple graphs.
ggplot(covid_spring, aes(Age)) +
geom_histogram() +
ggtitle("Histogram of age")
ggplot(covid_spring, aes(Age, fill = Death)) +
geom_histogram(color = "white") +
ggtitle("Histogram of age") +
DALEX::theme_ema() +
scale_fill_manual("", values = c("grey", "red3"))
library(ggmosaic)
ggplot(data = covid_spring) +
geom_mosaic(aes(x=product(Diabetes), fill = Death)) +
DALEX::theme_ema() +
scale_fill_manual("", values = c("grey", "red3"))
One of the most important rules to remember when building a predictive model is: Do not condition on the future!
Variables like Hospitalization or Cough are not good predictors, because they are not known in advance.
covid_spring <- covid_spring[,c("Gender", "Age", "Cardiovascular.Diseases", "Diabetes",
"Neurological.Diseases", "Kidney.Diseases", "Cancer",
"Death")]
covid_summer <- covid_summer[,c("Gender", "Age", "Cardiovascular.Diseases", "Diabetes",
"Neurological.Diseases", "Kidney.Diseases", "Cancer",
"Death")]
Four datasets from the DALEX package will be used in the DIY sessions. Pick one of them:
covid_summer. The classification task with the target variable Death. Recommended for beginners. You can replicate examples from the tutorial without any change.
titanic_imputed. The classification task with the target variable survived. You can replicate examples from the tutorial with just small changes.
happiness_train. The regression task with the target variable score. You can replicate examples from the tutorial with small changes, but note that it is a regression task.
fifa. The regression task with the target variable value_eur. Recommended for advanced users. There is a large number of features and observations. You can replicate examples from the tutorial but be prepared for changes.
Perform an EDA for the selected dataset, e.g. with tableone (works for classification tasks).
Do it yourself!
We will think of a predictive model as a function that computes a certain prediction for certain input data. Usually, such a function is built automatically based on the data.
In Machine Learning, there are hundreds of algorithms available. Usually, this training boils down to finding parameters for some family of models. One of the most popular families of models is decision trees. Their great advantage is the transparency of their structure.
We will begin building the model by constructing a decision tree, controlling the complexity of the model step by step.
library("partykit")
tree1 <- ctree(Death ~., covid_spring,
control = ctree_control(maxdepth = 1))
plot(tree1)
tree2 <- ctree(Death ~., covid_spring,
control = ctree_control(maxdepth = 2))
plot(tree2)
tree3 <- ctree(Death ~., covid_spring,
control = ctree_control(maxdepth = 3))
plot(tree3)
tree <- ctree(Death ~., covid_spring,
control = ctree_control(alpha = 0.0001))
plot(tree)
Decision trees are models that have low bias but high variance. In 2001, Leo Breiman proposed a new family of models, called random forests, which averages scores from multiple decision trees trained on bootstrap samples of the data. The whole algorithm is a bit more complex but also very fascinating; you can read about it at https://tinyurl.com/RF2001. Nowadays a very popular, in a sense complementary, technique for improving models is boosting, which reduces the model bias at the expense of variance. Quite often it leads to a better model.
We will train a random forest with the ranger library.
At this stage we do not dig deep into the model, as we will treat it as a black box.
library("ranger")
forest <- ranger(Death ~., covid_spring, probability = TRUE)
forest
## Ranger result
##
## Call:
## ranger(Death ~ ., covid_spring, probability = TRUE)
##
## Type: Probability estimation
## Number of trees: 500
## Sample size: 10000
## Number of independent variables: 7
## Mtry: 2
## Target node size: 10
## Variable importance mode: none
## Splitrule: gini
## OOB prediction error (Brier s.): 0.03713562
For classification problems, the first-choice model is often logistic regression, which in R is implemented in the glm function. In this exercise, we will use a more extended version of logistic regression, in which transformations with splines are allowed. This allows us to also take into account non-linear relationships between the indicated variables and the model target. Later, when exploring the model, we will look at what relationship is learned by logistic regression with splines.
We will train a logistic regression with splines with the rms library. Here rcs stands for a linear tail-restricted cubic spline function.
At this stage we do not dig deep into the model, as we will treat it as a black box.
library("rms")
lmr_rcs <- lrm(Death ~ Gender + rcs(Age, 3) + Cardiovascular.Diseases + Diabetes + Neurological.Diseases + Kidney.Diseases + Cancer, covid_spring)
lmr_rcs
## Logistic Regression Model
##
## lrm(formula = Death ~ Gender + rcs(Age, 3) + Cardiovascular.Diseases +
## Diabetes + Neurological.Diseases + Kidney.Diseases + Cancer,
## data = covid_spring)
##
## Model Likelihood Discrimination Rank Discrim.
## Ratio Test Indexes Indexes
## Obs 10000 LR chi2 1495.41 R2 0.417 C 0.925
## No 9487 d.f. 8 g 2.155 Dxy 0.849
## Yes 513 Pr(> chi2) <0.0001 gr 8.630 gamma 0.851
## max |deriv| 5e-08 gp 0.080 tau-a 0.083
## Brier 0.037
##
## Coef S.E. Wald Z Pr(>|Z|)
## Intercept -9.0107 0.9465 -9.52 <0.0001
## Gender=Male 0.6870 0.1094 6.28 <0.0001
## Age 0.0872 0.0221 3.94 <0.0001
## Age' 0.0027 0.0170 0.16 0.8749
## Cardiovascular.Diseases=Yes 0.7746 0.1185 6.54 <0.0001
## Diabetes=Yes 0.1291 0.1687 0.77 0.4441
## Neurological.Diseases=Yes 0.4927 0.1998 2.47 0.0137
## Kidney.Diseases=Yes 1.1614 0.2022 5.74 <0.0001
## Cancer=Yes 1.4454 0.1832 7.89 <0.0001
##
In R, we have many tools for creating models. The problem with them is that these tools are created by different people and return results in different structures. So in order to work uniformly with the models we need to package the model in such a way that it has a uniform interface.
Different models have different APIs.
But you need One API to Rule Them All!
The DALEX library provides a unified architecture to explore and validate models using different analytical methods.
A trained model can be turned into an explainer. Simpler functions can be used to calculate the performance of this model, but using explainers has an advantage that will be seen in all its beauty in just a moment.
To work with different models uniformly, we will also wrap this one into an explainer.
model_tree <- DALEX::explain(tree,
data = covid_summer[,-8],
y = covid_summer$Death == "Yes",
type = "classification",
label = "Tree",
verbose = FALSE)
predict(model_tree, covid_summer) |> head()
## 1 2 3 4 5 6
## 0.007781621 0.007781621 0.199029126 0.039772727 0.007781621 0.007781621
model_forest <- DALEX::explain(forest,
data = covid_summer[,-8],
y = covid_summer$Death == "Yes",
type = "classification",
label = "Forest",
verbose = FALSE)
predict(model_forest, covid_summer) |> head()
## [1] 0.009068877 0.008685151 0.185812679 0.054596836 0.009005221 0.012129550
model_lmr_rcs <- DALEX::explain(lmr_rcs,
data = covid_summer[,-8],
y = covid_summer$Death == "Yes",
type = "classification",
label = "LMR",
verbose = FALSE)
predict(model_lmr_rcs, covid_summer) |> head()
## 1 2 3 4 5 6
## 0.018046606 0.004689560 0.135326096 0.017429904 0.003033037 0.027103381
The evaluation of model performance for classification is based on different measures than for regression.
For regression, commonly used measures are the mean squared error MSE
\[MSE(f) = \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2 \]
and the root mean squared error RMSE
\[RMSE(f) = \sqrt{MSE(f)}. \]
For classification, commonly used measures are Accuracy
\[ACC(f) = (TP + TN)/n,\]
Precision
\[Prec(f) = TP/(TP + FP),\]
Recall
\[Recall(f) = TP/(TP + FN)\]
and the F1 score
\[F1(f) = 2\frac{Prec(f) \cdot Recall(f)}{Prec(f) + Recall(f)},\]
where TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives for a given cutoff.
In this problem we are interested in ranking of scores, so we will use the AUC measure (the area under the ROC curve).
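To make these formulas concrete, here is a minimal sketch that computes the classification measures by hand from model scores and true labels (the cutoff of 0.1 is purely illustrative, and the AUC is computed via the rank-based Mann-Whitney statistic):
y     <- covid_summer$Death == "Yes"
score <- predict(model_forest, covid_summer)
y_hat <- score > 0.1                     # an illustrative cutoff
TP <- sum(y_hat & y);  TN <- sum(!y_hat & !y)
FP <- sum(y_hat & !y); FN <- sum(!y_hat & y)
c(accuracy  = (TP + TN) / length(y),
  precision = TP / (TP + FP),
  recall    = TP / (TP + FN),
  f1        = 2 * TP / (2 * TP + FP + FN))
# AUC from ranks (Mann-Whitney statistic); note that no cutoff is involved
r <- rank(score)
(sum(r[y]) - sum(y) * (sum(y) + 1) / 2) / (sum(y) * sum(!y))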
There are many measures for evaluating predictive models, and they are located in various R packages (ROCR, measures, mlr3measures, etc.). For simplicity, in this example we use only the AUC measure from the DALEX package.
Pregnancy: Sensitivity and Specificity
http://getthediagnosis.org/diagnosis/Pregnancy.htm
https://en.wikipedia.org/wiki/Sensitivity_and_specificity
For AUC the cutoff does not matter. But we set it to get nice precision and F1 values.
Model performance
Model exploration starts with an assessment of how good the model is. The DALEX::model_performance function calculates a set of the most common measures for the specified model.
library("DALEX")
mp_forest <- model_performance(model_forest, cutoff = 0.1)
mp_forest
## Measures for: classification
## recall : 0.9012876
## precision : 0.1536211
## f1 : 0.2625
## accuracy : 0.882
## auc : 0.9444813
##
## Residuals:
## 0% 10% 20% 30% 40% 50%
## -0.548347589 -0.130419874 -0.022959318 -0.009477602 -0.008834231 -0.008685151
## 60% 70% 80% 90% 100%
## -0.008609669 -0.007320821 -0.007205166 -0.007067867 0.992854808
mp_tree <- model_performance(model_tree, cutoff = 0.1)
mp_tree
## Measures for: classification
## recall : 0.8626609
## precision : 0.1492205
## f1 : 0.2544304
## accuracy : 0.8822
## auc : 0.9126477
##
## Residuals:
## 0% 10% 20% 30% 40% 50%
## -0.462686567 -0.199029126 -0.007781621 -0.007781621 -0.007781621 -0.007781621
## 60% 70% 80% 90% 100%
## -0.007781621 -0.007781621 -0.007781621 -0.007781621 0.992218379
mp_lmr_rcs <- model_performance(model_lmr_rcs, cutoff = 0.1)
mp_lmr_rcs
## Measures for: classification
## recall : 0.8454936
## precision : 0.1849765
## f1 : 0.3035439
## accuracy : 0.9096
## auc : 0.9413383
##
## Residuals:
## 0% 10% 20% 30% 40%
## -0.9428013172 -0.0828083761 -0.0335257237 -0.0174629404 -0.0103056658
## 50% 60% 70% 80% 90%
## -0.0066507046 -0.0042978321 -0.0027801308 -0.0015117688 -0.0005860675
## 100%
## 0.9965505407
Note: The model is evaluated on the data given in the explainer. Use DALEX::update_data() to specify another dataset.
Note: The explainer knows whether the model is for classification or regression, so it automatically selects the right measures. It can be overridden if needed.
The S3 generic plot function draws a graphical summary of the model performance. With the geom argument, one can determine the type of chart.
plot(mp_forest, geom = "roc") + DALEX::theme_ema()
plot(mp_forest, mp_tree, mp_lmr_rcs, geom = "roc") + DALEX::theme_ema()
Now repeat this analysis for the selected dataset (covid_summer, titanic_imputed, happiness_train, fifa).
Example for titanic:
head(titanic_imputed)
tree <- ctree(survived ~., titanic_imputed)
plot(tree)
tree <- ctree(factor(survived) ~., titanic_imputed,
control = ctree_control(alpha = 0.0001))
plot(tree)
model_tree <- DALEX::explain(tree,
data = titanic_imputed,
y = titanic_imputed$survived == 1,
type = "classification",
label = "Tree",
verbose = FALSE)
mp <- model_performance(model_tree)
mp
plot(mp, geom = "roc")
Do it yourself!
We will devote the second part of the workshop entirely to methods for model exploration.
Some models have built-in methods for the assessment of variable importance. For linear models, one can use standardized model coefficients or p-values. For a random forest, one can use the out-of-bag classification error. For tree boosting models, one can use gain statistics. Yet the problem with such measures is that not all models have built-in variable importance statistics (e.g. neural networks), and that scores between different models cannot be directly compared (how would one compare gains with p-values?).
This is why we need a model agnostic approach that will be comparable between different models. The procedure described below is universal, model agnostic and does not depend on the model structure.
The procedure is based on variable perturbations in the validation data. If a variable is important in a model, then after its permutation the model predictions should be less accurate.
The permutation-based importance of a variable \(i\) is the difference between the model performance measured on data with the permuted variable \(i\) and the model performance for the original data
\[ VI(i) = L(f, X^{perm(i)}, y) - L(f, X, y), \]
where \(L(f, X, y)\) is the value of the loss function for the original data \(X\), true labels \(y\) and model \(f\), while \(X^{perm(i)}\) is the dataset \(X\) with the \(i\)-th variable permuted.
Which performance measure should you choose? It's up to you. In the DALEX library, by default, RMSE is used for regression and 1-AUC for classification problems. But you can change the loss function by specifying the loss_function argument.
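Conceptually, the whole procedure fits in a few lines of code. Below is a minimal sketch with 1-AUC as the loss (permutation_importance and auc are hypothetical helpers written for this illustration; the model_parts function used next does this properly, with sampling and repetitions):
# AUC via the rank-based Mann-Whitney statistic
auc <- function(score, y) {
  r <- rank(score)
  (sum(r[y]) - sum(y) * (sum(y) + 1) / 2) / (sum(y) * sum(!y))
}
# VI(i): loss on data with permuted variable i minus loss on original data
permutation_importance <- function(explainer, variable) {
  loss <- function(data) 1 - auc(predict(explainer, data), explainer$y)
  perturbed <- explainer$data
  perturbed[[variable]] <- sample(perturbed[[variable]])
  loss(perturbed) - loss(explainer$data)
}
permutation_importance(model_forest, "Age")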
mpart_forest <- model_parts(model_forest)
mpart_forest
## variable mean_dropout_loss label
## 1 _full_model_ 0.04412282 Forest
## 2 Neurological.Diseases 0.04549286 Forest
## 3 Diabetes 0.04619241 Forest
## 4 Kidney.Diseases 0.04807367 Forest
## 5 Gender 0.04891244 Forest
## 6 Cancer 0.04980458 Forest
## 7 Cardiovascular.Diseases 0.05898298 Forest
## 8 Age 0.19238159 Forest
## 9 _baseline_ 0.52764321 Forest
plot(mpart_forest, show_boxplots = FALSE, bar_width=4) +
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("Variable importance","")
mpart_forest <- model_parts(model_forest, type = "difference")
mpart_forest
## variable mean_dropout_loss label
## 1 _full_model_ 0.0000000000 Forest
## 2 Neurological.Diseases 0.0009551697 Forest
## 3 Diabetes 0.0016697222 Forest
## 4 Kidney.Diseases 0.0039059966 Forest
## 5 Gender 0.0043023468 Forest
## 6 Cancer 0.0073184652 Forest
## 7 Cardiovascular.Diseases 0.0190751528 Forest
## 8 Age 0.1468608904 Forest
## 9 _baseline_ 0.4582920699 Forest
plot(mpart_forest, show_boxplots = FALSE, bar_width=4) +
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("Variable importance","")
mpart_forest <- model_parts(model_forest)
mpart_tree <- model_parts(model_tree)
mpart_lmr_rcs <- model_parts(model_lmr_rcs)
plot(mpart_forest, mpart_tree, mpart_lmr_rcs, show_boxplots = FALSE, bar_width=4) +
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("Variable importance","") +
facet_wrap(~label, ncol = 1, scales = "free_y")
plot(mpart_forest, mpart_tree, mpart_lmr_rcs, show_boxplots = FALSE, bar_width=4) +
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("Variable importance","")
Now try this for the selected dataset (covid_summer, titanic_imputed, happiness_train, fifa).
Do it yourself!
See you in 30 minutes!
Once we calculate the model prediction, the question often arises which variables had the greatest impact on it.
For linear models it is easy to assess the impact of individual variables because there is one coefficient for each variable.
For tabular data, one of the most commonly used techniques for local variable attribution is Shapley values. The key idea behind this method is to analyze a sequence of conditional expected values. This way, we can trace how the conditional mean moves from the average model response to the model prediction for the observation of interest \(x^*\). Let's consider a sequence of expected values: \(\mu = E[f(x)]\), \(\mu_{x_1} = E[f(x) | x_1 = x_1^*]\), \(\mu_{x_1,x_2} = E[f(x) | x_1 = x_1^*, x_2 = x_2^*]\), and so on, up to the prediction \(f(x^*)\).
By looking at consecutive differences \(\mu_{x_1}-\mu\), \(\mu_{x_1,x_2}-\mu_{x_1}\) and so on, one can calculate the added effects of individual variables. It sounds like a straightforward solution; however, there are two issues with this approach.
One is that it is not easy to estimate the conditional expected value. In most implementations, it is assumed that features are independent, and then we can estimate \(\mu_{K}\) as an average model response with variables in the set \(K\) replaced by corresponding values from observation \(x^*\). So the crude estimate would be
\[ \widehat{\mu}_{K} = \frac 1n \sum_{i=1}^n f(x_1^o, x_2^o, ..., x_p^o), \] where \(x_j^o = x_j^*\) if \(j \in K\) and \(x_j^o = x_j^i\) if \(j \not\in K\).
The second issue is that these effects may depend on the order of conditioning. How to solve this problem? The Shapley values method calculates attributions as an average of all (or at least a large number of random) orderings, while the Break-down method uses a single ordering determined with a greedy heuristic that prefers variables with the largest attribution at the beginning.
Steve <- data.frame(Gender = factor("Male", c("Female", "Male")),
Age = 76,
Cardiovascular.Diseases = factor("Yes", c("No", "Yes")),
Diabetes = factor("No", c("No", "Yes")),
Neurological.Diseases = factor("No", c("No", "Yes")),
Kidney.Diseases = factor("No", c("No", "Yes")),
Cancer = factor("No", c("No", "Yes")))
predict(model_forest, Steve)
## Yes
## 0.3269438
Steve
## Gender Age Cardiovascular.Diseases Diabetes Neurological.Diseases
## 1 Male 76 Yes No No
## Kidney.Diseases Cancer
## 1 No No
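Before using ready-made functions, we can compute the crude estimator of \(\mu_K\) described above by hand. A sketch for \(K = \{Age\}\) (mu_hat is a hypothetical helper written for this illustration):
# Crude estimate of mu_K: replace the variables in K with Steve's values
# in every row of the validation data and average the predictions
mu_hat <- function(explainer, observation, K) {
  data_o <- explainer$data
  data_o[, K] <- observation[, K]       # x_j^o = x_j^* for j in K
  mean(predict(explainer, data_o))
}
mu     <- mean(predict(model_forest, covid_summer[, -8]))   # average response
mu_age <- mu_hat(model_forest, Steve, "Age")
mu_age - mu    # added effect of conditioning on Steve's Age first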
It turns out that such attributions can be calculated for any predictive model. The most popular model-agnostic method is Shapley values. They may be calculated with the predict_parts() function.
ppart_tree <- predict_parts(model_tree, Steve)
plot(ppart_tree)
ppart_forest <- predict_parts(model_forest, Steve,
keep_distributions = TRUE)
plot(ppart_forest, plot_distributions = TRUE) + ggtitle("Consecutive conditioning for Forest")+
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
ppart_forest <- predict_parts(model_forest, Steve, type = "shap")
plot(ppart_forest)
ppart_tree <- predict_parts(model_tree, Steve, type = "shap")
plot(ppart_tree)
ppart_lmr_rcs <- predict_parts(model_lmr_rcs, Steve, type = "shap")
plot(ppart_lmr_rcs)
ppart_forest <- predict_parts(model_forest, Steve, type = "shap")
pl1 <- plot(ppart_forest) + ggtitle("Shapley values for Ranger")+
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
ppart_forest <- predict_parts(model_forest, Steve)
pl2 <- plot(ppart_forest) + ggtitle("Break-down for Ranger")+
DALEX:::theme_ema_vertical() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
library("patchwork")
pl1 + (pl2 + scale_y_continuous("prediction", limits = c(0,0.4)))
The show_boxplots argument allows you to highlight the stability bars of the estimated attributions.
Other possible values of the type argument are oscillations, shap, break_down, break_down_interactions.
With the order argument one can force a certain sequence of variables.
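For example, a Break-down explanation computed with a fixed, user-specified ordering of variables (the ordering below is arbitrary and serves only as an illustration):
ppart_ordered <- predict_parts(model_forest, Steve,
            order = c("Age", "Gender", "Cardiovascular.Diseases", "Diabetes",
                      "Neurological.Diseases", "Kidney.Diseases", "Cancer"))
plot(ppart_ordered)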
By default, functions such as model_parts, predict_parts and model_profile do not calculate statistics on the entire data set, but on N randomly sampled cases, and the entire procedure is repeated B times to estimate the error bars.
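For example, to use a larger sample and more repetitions (a sketch; we assume here that your DALEX version exposes these settings as the N and B arguments):
# permutation importance on 1000 sampled observations, repeated 25 times
mpart_forest_B <- model_parts(model_forest, N = 1000, B = 25)
plot(mpart_forest_B, show_boxplots = TRUE)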
Now try this for the selected dataset (covid_summer, titanic_imputed, happiness_train, fifa).
Do it yourself!
Profile for a single prediction
Ceteris Paribus (CP) is a Latin phrase for "other things being equal". It is also a very useful technique for the analysis of model behaviour for a single observation. CP profiles, sometimes called Individual Conditional Expectations (ICE), show how the model response would change for a selected observation if the value of one variable were changed while leaving the other variables unchanged.
While local variable attribution is a convenient technique for answering the question of which variables affect the prediction, local profile analysis is a good technique for answering the question of how the model response depends on a particular variable, or for answering "what if" questions.
The predict_profile() function calculates CP profiles for a selected observation, model and vector of variables (all continuous variables by default).
mprof_forest <- predict_profile(model_forest, Steve, "Age")
plot(mprof_forest)
CP profiles can be visualized with the generic plot() function.
For technical reasons, quantitative and qualitative variables cannot be shown in a single chart. So if you want to show profiles for qualitative variables, you need to plot them separately.
mprof_forest <- predict_profile(model_forest, variable_splits = list(Age=0:100), Steve)
mprof_tree <- predict_profile(model_tree, variable_splits = list(Age=0:100), Steve)
mprof_lmr_rcs <- predict_profile(model_lmr_rcs, variable_splits = list(Age=0:100), Steve)
plot(mprof_forest)
plot(mprof_forest, mprof_lmr_rcs, mprof_tree)
mprof_forest <- predict_profile(model_forest, variables = "Age", Steve)
pl1 <- plot(mprof_forest) + ggtitle("Ceteris paribus for Ranger")+
DALEX:::theme_ema() + scale_y_continuous("prediction", limits = c(0,0.55)) +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
mprof_forest2 <- predict_profile(model_forest, variables = "Cardiovascular.Diseases", Steve)
pl2 <- plot(mprof_forest2, variable_type = "categorical", variables = "Cardiovascular.Diseases", categorical_type = "lines") + ggtitle("Ceteris paribus for Ranger")+
DALEX:::theme_ema() + scale_y_continuous("prediction", limits = c(0,0.55)) +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
library("patchwork")
pl1 + pl2
plot(mprof_forest, mprof_lmr_rcs, mprof_tree) + ggtitle("Ceteris paribus for Steve")+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0))
The local importance of variables can be measured as oscillations of CP plots. The greater the variability of the CP profile, the more important the variable is. Set type = "oscillations" in the predict_parts function.
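A short example for Steve, following the description above:
ppart_osc <- predict_parts(model_forest, Steve, type = "oscillations")
ppart_osc
plot(ppart_osc)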
Now try this for the selected dataset (covid_summer, titanic_imputed, happiness_train, fifa).
Do it yourself!
Once we know which variables are important, it is usually interesting to determine the relationship between a particular variable and the model prediction. Popular techniques for this type of Explanatory Model Analysis are Partial Dependence (PD) and Accumulated Local Effects (ALE).
PD profiles were initially proposed in 2001 for gradient boosting models, but they can be used in a model-agnostic fashion. The method is based on an analysis of the average model response after replacing variable \(i\) with a value \(t\).
More formally, Partial Dependence profile for variable \(i\) is a function of \(t\) defined as
\[ PD(i, t) = E\left[ f(x_1, ..., x_{i-1}, t, x_{i+1}, ..., x_p) \right], \]
where the expected value is calculated over the data distribution. The straightforward estimator is
\[ \widehat{PD}(i, t) = \frac 1n \sum_{j=1}^n f(x^j_1, ..., x^j_{i-1}, t, x^j_{i+1}, ..., x^j_p). \]
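This estimator is easy to compute by hand. Here is a minimal sketch for the Age variable (the grid 0:100 is illustrative):
# hand-made PD profile: for each grid value t, set Age to t in every row
# of the validation data and average the model predictions
age_grid <- 0:100
pd_age <- sapply(age_grid, function(t) {
  data_t <- covid_summer[, -8]
  data_t$Age <- t
  mean(predict(model_forest, data_t))
})
plot(age_grid, pd_age, type = "l", xlab = "Age", ylab = "average prediction")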
Replacing the \(i\)-th variable by a value \(t\) can lead to very strange observations, especially when the \(i\)-th variable is correlated with other variables and we ignore the correlation structure. One solution to this is Accumulated Local Effects profiles, which average over the conditional distribution.
The model_profile() function calculates PD profiles for a specified model and variables (all by default).
mprof_forest <- model_profile(model_forest, "Age")
plot(mprof_forest)
mgroup_forest <- model_profile(model_forest, variable_splits = list(Age = 0:100))
plot(mgroup_forest)+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("PD profile","")
Grouped partial dependence profiles
By default, the average is calculated over all observations. But with the groups= argument one can indicate a factor variable; CP profiles will then be averaged within its levels.
mgroup_forest <- model_profile(model_forest, variable_splits = list(Age = 0:100), groups = "Diabetes")
plot(mgroup_forest)+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("PD profiles for groups","") + ylab("") + theme(legend.position = "top")
mgroup_forest <- model_profile(model_forest, variable_splits = list(Age = 0:100), groups = "Cardiovascular.Diseases")
plot(mgroup_forest)+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("PDP variable profile","") + ylab("") + theme(legend.position = "top")
mgroup_forest <- model_profile(model_forest, "Age", k = 3, center = TRUE)
plot(mgroup_forest)+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("PD profiles for segments","")
mgroup_forest <- model_profile(model_forest, "Age", groups = "Cardiovascular.Diseases")
plot(mgroup_forest)+
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0)) +
ggtitle("Variable profile","")
mgroup_forest <- model_profile(model_forest, variable_splits = list(Age = 0:100), groups = "Cardiovascular.Diseases")
plot(mgroup_forest, geom = "profiles")
plot(mgroup_forest, geom = "points")
mprof_forest <- model_profile(model_forest, variable_splits = list(Age=0:100))
mprof_tree <- model_profile(model_tree, variable_splits = list(Age=0:100))
mprof_lmr_rcs <- model_profile(model_lmr_rcs, variable_splits = list(Age=0:100))
Profiles can then be drawn with the plot() function.
plot(mprof_forest, mprof_lmr_rcs, mprof_tree) +
ggtitle("","") +
DALEX:::theme_ema() +
theme( axis.text = element_text(color = "black", size = 12, hjust = 0), legend.position = "top")
If the model is additive, all CP profiles are parallel. But if the model has interactions, CP profiles may have different shapes for different observations. Setting the k argument finds k clusters of CP profiles and calculates the average within each segment.
PDP profiles do not take into account the correlation structure between the variables. For correlated variables, the ceteris paribus assumption may not make sense. The model_profile function can also calculate other types of aggregates, such as marginal profiles and accumulated local profiles. To do this, set the type= argument to "conditional" or "accumulated".
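For example, Accumulated Local Effects for the Age variable:
mprof_ale <- model_profile(model_forest, "Age", type = "accumulated")
plot(mprof_ale)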
Now try this for the selected dataset (covid_summer, titanic_imputed, happiness_train, fifa).
Do it yourself!
We have made the model built for the Covid-19 data, along with the explanations described here, available on a webpage. Within two months, tens of thousands of people used it. With the proper tools, the deployment of such a model is not difficult.
To obtain a safe and effective model, it is necessary to perform a detailed Explanatory Model Analysis. However, we often don’t have much time for it. That is why tools that facilitate fast and automated model exploration are so useful.
One such tool is modelStudio. It is a package that transforms an explainer into an HTML page with JavaScript-based interaction. Such an HTML page is easy to save on disk or share by email. The page has various explanations pre-calculated, so its generation may be time-consuming, but the subsequent model exploration is very fast and the feedback loop is tight.
Generating a modelStudio for an explainer is trivially easy.
library("modelStudio")
ms <- modelStudio(model_forest, new_observation = Steve)
ms
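The resulting dashboard can be saved as a standalone HTML file, e.g. with the r2d3 package (to the best of our knowledge, this is the helper suggested in the modelStudio documentation; the file name below is arbitrary):
library("r2d3")
save_d3_html(ms, file = "modelStudio_forest.html")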
devtools::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.1.0 (2021-05-18)
## os macOS Big Sur 10.16
## system x86_64, darwin17.0
## ui X11
## language (EN)
## collate en_US.UTF-8
## ctype en_US.UTF-8
## tz Europe/Warsaw
## date 2022-06-20
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date lib source
## assertthat 0.2.1 2019-03-21 [2] CRAN (R 4.1.0)
## backports 1.2.1 2020-12-09 [2] CRAN (R 4.1.0)
## base64enc 0.1-3 2015-07-28 [2] CRAN (R 4.1.0)
## bslib 0.2.5.1 2021-05-18 [2] CRAN (R 4.1.0)
## cachem 1.0.5 2021-05-15 [2] CRAN (R 4.1.0)
## callr 3.7.0 2021-04-20 [2] CRAN (R 4.1.0)
## checkmate 2.0.0 2020-02-06 [2] CRAN (R 4.1.0)
## class 7.3-19 2021-05-03 [2] CRAN (R 4.1.0)
## cli 3.3.0 2022-04-25 [1] CRAN (R 4.1.2)
## cluster 2.1.2 2021-04-17 [2] CRAN (R 4.1.0)
## codetools 0.2-18 2020-11-04 [2] CRAN (R 4.1.0)
## colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.1.2)
## conquer 1.0.2 2020-08-27 [2] CRAN (R 4.1.0)
## crayon 1.5.1 2022-03-26 [1] CRAN (R 4.1.2)
## DALEX * 2.4.2 2022-06-15 [1] CRAN (R 4.1.2)
## data.table 1.14.0 2021-02-21 [2] CRAN (R 4.1.0)
## DBI 1.1.1 2021-01-15 [2] CRAN (R 4.1.0)
## desc 1.3.0 2021-03-05 [2] CRAN (R 4.1.0)
## devtools 2.4.2 2021-06-07 [2] CRAN (R 4.1.0)
## digest 0.6.29 2021-12-01 [1] CRAN (R 4.1.0)
## dplyr 1.0.9 2022-04-28 [1] CRAN (R 4.1.2)
## e1071 1.7-7 2021-05-23 [2] CRAN (R 4.1.0)
## ellipsis 0.3.2 2021-04-29 [2] CRAN (R 4.1.0)
## evaluate 0.14 2019-05-28 [2] CRAN (R 4.1.0)
## fansi 1.0.3 2022-03-24 [1] CRAN (R 4.1.2)
## farver 2.1.0 2021-02-28 [2] CRAN (R 4.1.0)
## fastmap 1.1.0 2021-01-25 [2] CRAN (R 4.1.0)
## forcats 0.5.1 2021-01-27 [2] CRAN (R 4.1.0)
## foreign 0.8-81 2020-12-22 [2] CRAN (R 4.1.0)
## Formula * 1.2-4 2020-10-16 [2] CRAN (R 4.1.0)
## fs 1.5.0 2020-07-31 [2] CRAN (R 4.1.0)
## generics 0.1.1 2021-10-25 [2] CRAN (R 4.1.0)
## ggmosaic * 0.3.3 2021-02-23 [2] CRAN (R 4.1.0)
## ggplot2 * 3.3.6.9000 2022-06-15 [1] Github (tidyverse/ggplot2@3375667)
## ggrepel 0.9.1 2021-01-15 [2] CRAN (R 4.1.0)
## glue 1.6.2 2022-02-24 [1] CRAN (R 4.1.2)
## gridExtra 2.3 2017-09-09 [2] CRAN (R 4.1.0)
## gtable 0.3.0 2019-03-25 [2] CRAN (R 4.1.0)
## haven 2.4.1 2021-04-23 [2] CRAN (R 4.1.0)
## highr 0.9 2021-04-16 [2] CRAN (R 4.1.0)
## Hmisc * 4.5-0 2021-02-28 [2] CRAN (R 4.1.0)
## hms 1.1.0 2021-05-17 [2] CRAN (R 4.1.0)
## htmlTable 2.2.1 2021-05-18 [2] CRAN (R 4.1.0)
## htmltools 0.5.1.1 2021-01-22 [2] CRAN (R 4.1.0)
## htmlwidgets 1.5.4 2021-09-08 [1] CRAN (R 4.1.0)
## httr 1.4.2 2020-07-20 [2] CRAN (R 4.1.0)
## iBreakDown 2.0.1 2021-05-07 [2] CRAN (R 4.1.0)
## ingredients 2.2.0 2021-04-10 [2] CRAN (R 4.1.0)
## inum 1.0-4 2021-04-12 [2] CRAN (R 4.1.0)
## jpeg 0.1-8.1 2019-10-24 [2] CRAN (R 4.1.0)
## jquerylib 0.1.4 2021-04-26 [2] CRAN (R 4.1.0)
## jsonlite 1.7.2 2020-12-09 [2] CRAN (R 4.1.0)
## knitr 1.33 2021-04-24 [2] CRAN (R 4.1.0)
## labeling 0.4.2 2020-10-20 [2] CRAN (R 4.1.0)
## labelled 2.8.0 2021-03-08 [2] CRAN (R 4.1.0)
## lattice * 0.20-44 2021-05-02 [2] CRAN (R 4.1.0)
## latticeExtra 0.6-29 2019-12-19 [2] CRAN (R 4.1.0)
## lazyeval 0.2.2 2019-03-15 [2] CRAN (R 4.1.0)
## libcoin * 1.0-8 2021-02-08 [2] CRAN (R 4.1.0)
## lifecycle 1.0.1 2021-09-24 [2] CRAN (R 4.1.0)
## magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.1.2)
## MASS 7.3-54 2021-05-03 [2] CRAN (R 4.1.0)
## Matrix 1.3-3 2021-05-04 [2] CRAN (R 4.1.0)
## MatrixModels 0.5-0 2021-03-02 [2] CRAN (R 4.1.0)
## matrixStats 0.58.0 2021-01-29 [2] CRAN (R 4.1.0)
## memoise 2.0.0 2021-01-26 [2] CRAN (R 4.1.0)
## mitools 2.4 2019-04-26 [2] CRAN (R 4.1.0)
## multcomp 1.4-17 2021-04-29 [2] CRAN (R 4.1.0)
## munsell 0.5.0 2018-06-12 [2] CRAN (R 4.1.0)
## mvtnorm * 1.1-1 2020-06-09 [2] CRAN (R 4.1.0)
## nlme 3.1-152 2021-02-04 [2] CRAN (R 4.1.0)
## nnet 7.3-16 2021-05-03 [2] CRAN (R 4.1.0)
## partykit * 1.2-13 2021-03-03 [2] CRAN (R 4.1.0)
## patchwork * 1.1.1 2020-12-17 [2] CRAN (R 4.1.0)
## pillar 1.7.0 2022-02-01 [1] CRAN (R 4.1.2)
## pkgbuild 1.2.0 2020-12-15 [2] CRAN (R 4.1.0)
## pkgconfig 2.0.3 2019-09-22 [2] CRAN (R 4.1.0)
## pkgload 1.2.1 2021-04-06 [2] CRAN (R 4.1.0)
## plotly 4.9.3 2021-01-10 [2] CRAN (R 4.1.0)
## plyr 1.8.6 2020-03-03 [2] CRAN (R 4.1.0)
## png 0.1-7 2013-12-03 [2] CRAN (R 4.1.0)
## polspline 1.1.19 2020-05-15 [2] CRAN (R 4.1.0)
## prettyunits 1.1.1 2020-01-24 [2] CRAN (R 4.1.0)
## processx 3.5.2 2021-04-30 [2] CRAN (R 4.1.0)
## productplots 0.1.1 2016-07-02 [2] CRAN (R 4.1.0)
## proxy 0.4-25 2021-03-05 [2] CRAN (R 4.1.0)
## ps 1.6.0 2021-02-28 [2] CRAN (R 4.1.0)
## purrr 0.3.4 2020-04-17 [2] CRAN (R 4.1.0)
## quantreg 5.85 2021-02-24 [2] CRAN (R 4.1.0)
## R6 2.5.1 2021-08-19 [2] CRAN (R 4.1.0)
## ranger * 0.12.1 2020-01-10 [2] CRAN (R 4.1.0)
## RColorBrewer 1.1-3 2022-04-03 [1] CRAN (R 4.1.2)
## Rcpp 1.0.7 2021-07-07 [2] CRAN (R 4.1.0)
## remotes 2.3.0 2021-04-01 [2] CRAN (R 4.1.0)
## rlang 1.0.2 2022-03-04 [1] CRAN (R 4.1.2)
## rmarkdown 2.11 2021-09-14 [2] CRAN (R 4.1.0)
## rms * 6.2-0 2021-03-18 [2] CRAN (R 4.1.0)
## rpart 4.1-15 2019-04-12 [2] CRAN (R 4.1.0)
## rprojroot 2.0.2 2020-11-15 [2] CRAN (R 4.1.0)
## rstudioapi 0.13 2020-11-12 [2] CRAN (R 4.1.0)
## sandwich 3.0-1 2021-05-18 [2] CRAN (R 4.1.0)
## sass 0.4.0 2021-05-12 [2] CRAN (R 4.1.0)
## scales 1.2.0 2022-04-13 [1] CRAN (R 4.1.2)
## sessioninfo 1.1.1 2018-11-05 [2] CRAN (R 4.1.0)
## SparseM * 1.81 2021-02-18 [2] CRAN (R 4.1.0)
## stringi 1.7.5 2021-10-04 [2] CRAN (R 4.1.0)
## stringr 1.4.0 2019-02-10 [2] CRAN (R 4.1.0)
## survey 4.0 2020-04-03 [2] CRAN (R 4.1.0)
## survival * 3.2-11 2021-04-26 [2] CRAN (R 4.1.0)
## tableone * 0.12.0 2020-07-26 [2] CRAN (R 4.1.0)
## testthat 3.0.2 2021-02-14 [2] CRAN (R 4.1.0)
## TH.data 1.0-10 2019-01-21 [2] CRAN (R 4.1.0)
## tibble 3.1.7 2022-05-03 [1] CRAN (R 4.1.2)
## tidyr 1.1.4 2021-09-27 [2] CRAN (R 4.1.0)
## tidyselect 1.1.1 2021-04-30 [2] CRAN (R 4.1.0)
## usethis 2.0.1 2021-02-10 [2] CRAN (R 4.1.0)
## utf8 1.2.2 2021-07-24 [2] CRAN (R 4.1.0)
## vctrs 0.4.1 2022-04-13 [1] CRAN (R 4.1.2)
## viridisLite 0.4.0 2021-04-13 [2] CRAN (R 4.1.0)
## withr 2.5.0 2022-03-03 [1] CRAN (R 4.1.2)
## xfun 0.23 2021-05-15 [2] CRAN (R 4.1.0)
## yaml 2.2.1 2020-02-01 [2] CRAN (R 4.1.0)
## zoo 1.8-9 2021-03-09 [2] CRAN (R 4.1.0)
##
## [1] /Users/pbiecek/Library/R/x86_64/4.1/library
## [2] /Library/Frameworks/R.framework/Versions/4.1/Resources/library