Chapter 6: Local Interpretable Model-agnostic Explanations (LIME)

LIME is a common approach to interpreting an ML model locally. The basic idea is to fit a surrogate model to the black-box model's predictions, weighting the training points by their proximity to the observation of interest. The surrogate itself should be an inherently interpretable model, such as a sparse linear model, so that its parameters can be read as a local explanation of the black-box prediction.
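The procedure can be sketched from scratch: perturb the observation, query the black-box model on the perturbed points, weight them by proximity, and fit a weighted linear surrogate. The sketch below uses a made-up `black_box` function and an arbitrary kernel width of 0.5; both are illustrative assumptions, not part of any particular LIME implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function standing in for
# any trained ML model's prediction function (an assumption for this sketch).
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

# Observation of interest
x = np.array([1.0, 0.5])

# 1. Perturb: sample points in the neighbourhood of x
Z = x + rng.normal(scale=0.5, size=(1000, 2))

# 2. Weight each sample by its proximity to x (Gaussian kernel, width 0.5)
dist = np.linalg.norm(Z - x, axis=1)
weights = np.exp(-(dist ** 2) / (2 * 0.5 ** 2))

# 3. Fit an interpretable surrogate: a weighted linear (ridge) model
surrogate = Ridge(alpha=1.0)
surrogate.fit(Z, black_box(Z), sample_weight=weights)

# The coefficients approximate the local effect of each feature near x
print(surrogate.coef_)
```

Near x the coefficients should roughly track the local gradient of `black_box` (about cos(1) ≈ 0.54 for the first feature and 2·0.5 = 1.0 for the second), which is exactly the sense in which the surrogate is a local explanation.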