4.3. Model explainability
4.3.1. Computing Shapley values very fast with XGBoost
Calculating Shapley values eats CPU cores for dinner. If you are using tree-based models, you can avoid that by computing them with XGBoost on GPUs.
If you use the Scikit-learn API of XGBoost, extract the core booster object and call its predict method with pred_contribs set to True to calculate Shapley values on the GPU.
Don’t forget to drop the bias column XGBoost adds at the end.
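Here is a minimal sketch of that workflow on synthetic data. The `device="cuda"` parameter assumes an XGBoost 2.0+ build with CUDA support (older versions use `tree_method="gpu_hist"` instead); the data and model settings are purely illustrative.

```python
import xgboost as xgb
from sklearn.datasets import make_regression

# Synthetic data purely for illustration
X, y = make_regression(n_samples=10_000, n_features=20, random_state=0)

# device="cuda" assumes an XGBoost 2.0+ CUDA build; drop it to run on CPU
model = xgb.XGBRegressor(n_estimators=200, device="cuda")
model.fit(X, y)

# Extract the core booster and ask it for per-feature contributions
booster = model.get_booster()
contribs = booster.predict(xgb.DMatrix(X), pred_contribs=True)

# The last column is the bias term XGBoost appends - drop it
shap_values = contribs[:, :-1]
print(shap_values.shape)  # (n_samples, n_features)
```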
4.3.2. Never trust feature importance scores of tree-based models
You should never, ever trust feature importance scores returned by tree-based models. Why?
There are multiple ways of computing them, and each method can produce a ranking that contradicts the others. Here is an example from XGBoost comparing three FI calculation methods.
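A minimal sketch of such a comparison on synthetic data (the feature names `f0`…`f7` and the regressor settings are illustrative):

```python
import pandas as pd
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=5_000, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(X.shape[1])]

model = xgb.XGBRegressor(n_estimators=100, random_state=0)
model.fit(pd.DataFrame(X, columns=feature_names), y)

booster = model.get_booster()

# Three built-in importance types, each answering a different question:
#   weight - how many times a feature is used to split
#   gain   - average loss reduction from splits on the feature
#   cover  - average number of samples affected by those splits
importances = pd.DataFrame({
    imp_type: pd.Series(booster.get_score(importance_type=imp_type))
    for imp_type in ("weight", "gain", "cover")
})

# Compare the rankings produced by each method
print(importances.rank(ascending=False))
```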
As you can see, the order of importance is different in each.
You should always use more robust methods to calculate FI scores; Shapley values come with the strongest consistency guarantees.
4.3.3. Permutation Importance with ELI5
Permutation importance is one of the most reliable ways to identify the important features in a model.
Its advantages:
Works on any type of model structure
Easy to interpret and implement
Consistent and reliable
Permutation importance of a feature is defined as the change in model performance when that feature is randomly shuffled.
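That definition is simple enough to write by hand. The sketch below is a toy implementation of it (the helper name, the R² scorer, and the NumPy-array inputs are illustrative choices, not part of any library):

```python
import numpy as np
from sklearn.metrics import r2_score

def single_feature_pi(model, X_val, y_val, col, n_repeats=5, seed=0):
    """Average drop in R² after shuffling one column of a NumPy validation set."""
    rng = np.random.default_rng(seed)
    baseline = r2_score(y_val, model.predict(X_val))
    drops = []
    for _ in range(n_repeats):
        X_shuffled = X_val.copy()
        # Shuffle a single feature, breaking its link to the target
        X_shuffled[:, col] = rng.permutation(X_shuffled[:, col])
        drops.append(baseline - r2_score(y_val, model.predict(X_shuffled)))
    return float(np.mean(drops))
```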
PI is available through the eli5 package. Below is how to compute PI scores for an XGBoost regressor model👇
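A sketch of the standard eli5 usage on synthetic data (the model, the split, and the feature names are placeholders):

```python
import eli5
import xgboost as xgb
from eli5.sklearn import PermutationImportance
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=5_000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and record the score drop
perm = PermutationImportance(model, random_state=0).fit(X_val, y_val)

# In a notebook, show_weights renders a table of PI scores
eli5.show_weights(perm, feature_names=[f"f{i}" for i in range(X.shape[1])])
```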
The show_weights function displays the features that hurt the model’s performance the most after being shuffled - i.e. the most important features.