A Comprehensive Guide to Machine Learning Interpretability

Important libraries for ML Interpretability

ELI5

import eli5
from random import randint

# Explain how each feature contributed to the prediction for a random training row
eli5.sklearn.explain_prediction.explain_prediction_tree_regressor(
    model,
    doc=X_train.values[randint(0, 100)],
    feature_names=X_train.columns.tolist())

# Show a single prediction together with the feature values that drove it
eli5.show_prediction(model, X_test.iloc[10], show_feature_values=True)
An example of the output of the explain_prediction() method.
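The same idea behind eli5's explanations, ranking features by how much they contribute to a model's predictions, can be sketched in a self-contained way with scikit-learn's permutation importance. The dataset, model, and feature names below are synthetic placeholders, not the ones from the article:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data; "f0".."f3" are placeholder feature names.
X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure
# how much the model's score drops on average.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, imp in zip(["f0", "f1", "f2", "f3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Unlike tree-specific impurity importances, this shuffle-and-rescore approach works with any fitted estimator.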

PDPBox

pdp_goals = pdp.pdp_isolate(model=self.model, dataset=self.X_train,
                            model_features=self.base_features, feature=b_feature)
pdp.pdp_plot(pdp_goals, b_feature)

SHAP

Fig: Different SHAP plots for the dataset (clockwise from top left: beeswarm, heatmap, bar, and scatter plots).
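SHAP attributes a prediction to individual features using Shapley values from cooperative game theory: each feature's contribution is the marginal change in the model's output when that feature is added, averaged over all subsets of the other features. For a tiny linear model this can be computed exactly by brute force (the model, instance, and baseline here are illustrative, not from the article):

```python
import itertools
import math
import numpy as np

# Toy linear model: f(x) = w . x
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 3.0, 2.0])   # instance to explain
background = np.zeros(3)        # baseline values (e.g. feature means)

def f(z):
    return float(w @ z)

def value(subset):
    # Features in `subset` take the instance's values; the rest stay at baseline.
    z = background.copy()
    for i in subset:
        z[i] = x[i]
    return f(z)

n = len(x)
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in itertools.combinations(others, k):
            # Shapley weight |S|! (n-|S|-1)! / n! for each coalition S
            weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            phi[i] += weight * (value(S + (i,)) - value(S))

print(phi)  # for a linear model this reduces to w * (x - background)
```

The attributions satisfy the efficiency property: they sum to f(x) minus f(background), which is what makes the SHAP bar and beeswarm plots additive decompositions of the model output. The shap library approximates this same quantity efficiently for large models.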

Yellowbrick

Different visualizations from the Yellowbrick library.

Why interpretability is, and should be, an important part of the process
