Chapter 6: Local Interpretable Model-agnostic Explanations (LIME)
A common approach to interpreting an ML model locally is LIME. The basic idea is to fit a surrogate model while focusing on data points near the observation of interest. The resulting surrogate should be an inherently interpretable model.
-
Chapter 6.1: Introduction to Local Explanations
In this section, the motivation, use cases, and characteristics of local explanation methods are discussed.
-
Chapter 6.2: Local Interpretable Model-agnostic Explanations (LIME)
Local interpretable model-agnostic explanations (LIME) is a local explanation method based on surrogate models. Characteristics of LIME as well as requirements for the surrogate model and its computation are examined; a minimal sketch of the procedure follows below.
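To make the procedure concrete, the following is a minimal sketch of the LIME idea for tabular data (not the official lime library); the dataset, black-box model, sampling scheme, and kernel width are illustrative assumptions:

```python
# Minimal LIME-style sketch: perturb around an observation, weight by
# proximity, and fit an interpretable (linear) surrogate locally.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x_star = X[0]                      # observation of interest
rng = np.random.default_rng(0)

# 1) Sample perturbations around the observation of interest.
Z = rng.normal(loc=x_star, scale=X.std(axis=0), size=(1000, X.shape[1]))

# 2) Query the black-box model on the perturbed points.
preds = black_box.predict_proba(Z)[:, 1]

# 3) Weight perturbations by their proximity to x_star (exponential kernel;
#    the kernel width here is an assumed heuristic choice).
dist = np.linalg.norm((Z - x_star) / X.std(axis=0), axis=1)
kernel_width = 0.75 * np.sqrt(X.shape[1])
weights = np.exp(-dist ** 2 / kernel_width ** 2)

# 4) Fit an interpretable surrogate model on the weighted perturbations.
surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)

# The surrogate's coefficients serve as the local explanation for x_star.
print(sorted(zip(surrogate.coef_, load_breast_cancer().feature_names))[-5:])
```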
-
Chapter 6.3: LIME Examples
Possible applications of LIME are shown in this section, ranging from tabular data to text and image data.
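As a hedged example for the tabular case, the sketch below uses the lime Python package (assuming it is installed); the dataset and model are illustrative choices, and the package provides analogous explainers for text (LimeTextExplainer) and images (LimeImageExplainer):

```python
# Explaining a single tabular prediction with the lime package.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a sparse local surrogate for one test observation and print the
# feature contributions it assigns.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())
```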
-
Chapter 6.4: LIME Pitfalls
IML methods need to be applied with caution. Some pitfalls of LIME and their implications are discussed in this section.