References
Alvarez-Melis, David, and Tommi S. Jaakkola. 2018. “On the Robustness of Interpretability Methods.” arXiv Preprint arXiv:1806.08049.
Apley, Daniel W. 2016. “Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models.” arXiv Preprint arXiv:1612.08468. https://arxiv.org/ftp/arxiv/papers/1612/1612.08468.pdf.
Archer, Kellie J., and Ryan V. Kimes. 2008. “Empirical Characterization of Random Forest Variable Importance Measures.” Computational Statistics and Data Analysis 52: 2249–60.
Bischl, Bernd, Michel Lang, Lars Kotthoff, Patrick Schratz, Julia Schiffner, Jakob Richter, Zachary Jones, Giuseppe Casalicchio, and Mason Gallo. 2020. mlr: Machine Learning in R. https://CRAN.R-project.org/package=mlr.
Breiman, Leo. 2001a. “Random Forests.” Machine Learning 45 (1). Springer: 5–32.
———. 2001b. “Statistical Modeling: The Two Cultures (with Comments and a Rejoinder by the Author).” Statistical Science 16 (3). The Institute of Mathematical Statistics: 199–231. doi:10.1214/ss/1009213726.
Breiman, Leo, Adele Cutler, Andy Liaw, and Matthew Wiener. 2018. randomForest: Breiman and Cutler’s Random Forests for Classification and Regression. https://www.stat.berkeley.edu/~breiman/RandomForests/.
Caruana, Rich, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015. “Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital 30-Day Readmission.” In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–30. ACM.
Casalicchio, Giuseppe, Christoph Molnar, and Bernd Bischl. 2018. “Visualizing the Feature Importance for Black Box Models.” In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 655–70. Springer.
Chen, Tianqi, and Carlos Guestrin. 2016. “XGBoost: A Scalable Tree Boosting System.” CoRR abs/1603.02754. http://arxiv.org/abs/1603.02754.
Craven, Mark, and Jude W. Shavlik. 1996. “Extracting Tree-Structured Representations of Trained Networks.” In Advances in Neural Information Processing Systems, 24–30.
Fahrmeir, L., C. Heumann, R. Künstler, I. Pigeot, and G. Tutz. 2016. Statistik: Der Weg zur Datenanalyse [Statistics: The Way to Data Analysis]. Springer-Lehrbuch. Springer Berlin Heidelberg. https://books.google.de/books?id=rKveDAAAQBAJ.
Fahrmeir, L., T. Kneib, S. Lang, and B. Marx. 2013. Regression: Models, Methods and Applications. Springer Berlin Heidelberg. https://books.google.de/books?id=EQxU9iJtipAC.
Fanaee-T, Hadi, and Joao Gama. 2014. “Event Labeling Combining Ensemble Detectors and Background Knowledge.” Progress in Artificial Intelligence 2 (2-3). Springer: 113–27. doi:10.1007/s13748-013-0040-3.
Fisher, Aaron, Cynthia Rudin, and Francesca Dominici. 2018. “Model Class Reliance: Variable Importance Measures for Any Machine Learning Model Class, from the ‘Rashomon’ Perspective.” arXiv Preprint arXiv:1801.01489.
Friedman, Jerome H. 1991. “Multivariate Adaptive Regression Splines.” The Annals of Statistics 19 (1). Institute of Mathematical Statistics: 1–67.
———. 2001. “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics 29 (5). JSTOR: 1189–1232.
Goldstein, Alex, Adam Kapelner, Justin Bleich, and Emil Pitkin. 2013. “Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation.” Journal of Computational and Graphical Statistics 24 (1): 44–65. doi:10.1080/10618600.2014.907095.
Gower, John C. 1971. “A General Coefficient of Similarity and Some of Its Properties.” Biometrics 27 (4). JSTOR: 857–71.
Hall, Patrick, Wen Phan, and Sri Satish Ambati. 2017. “Ideas on Interpreting Machine Learning.” O’Reilly Ideas.
Harrison, D., and D.L. Rubinfeld. 1978. “Hedonic Prices and the Demand for Clean Air.” Journal of Environmental Economics and Management 5: 81–102.
Hastie, T., R. Tibshirani, and J. Friedman. 2013. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer New York. https://books.google.de/books?id=yPfZBwAAQBAJ.
Hooker, Giles, and Lucas Mentch. 2019. “Please Stop Permuting Features: An Explanation and Alternatives.” arXiv Preprint arXiv:1905.03151. https://arxiv.org/pdf/1905.03151.pdf.
Huang, Zhexue. 1998. “Extensions to the K-Means Algorithm for Clustering Large Data Sets with Categorical Values.” Data Mining and Knowledge Discovery 2 (3). Springer: 283–304.
Karatzoglou, Alexandros, Alex Smola, Kurt Hornik, and Achim Zeileis. 2004. “kernlab – an S4 Package for Kernel Methods in R.” Journal of Statistical Software 11 (9): 1–20. http://www.jstatsoft.org/v11/i09/.
Laugel, Thibault, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki. 2018. “Defining Locality for Surrogates in Post-Hoc Interpretability.” arXiv Preprint arXiv:1806.07498.
Lei, Jing, Max G’Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. 2018. “Distribution-Free Predictive Inference for Regression.” Journal of the American Statistical Association 113 (523). Taylor & Francis: 1094–1111.
Meinshausen, Nicolai, and Peter Bühlmann. 2010. “Stability Selection.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4). Wiley Online Library: 417–73.
Mentch, Lucas, and Giles Hooker. 2016. “Quantifying Uncertainty in Random Forest via Confidence Intervals and Hypothesis Tests.” The Journal of Machine Learning Research 17 (1): 841–81.
Molnar, Christoph. 2019. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book/.
Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bischl. 2018. “iml: An R Package for Interpretable Machine Learning.” The Journal of Open Source Software 3 (26): 786. doi:10.21105/joss.00786.
Parr, Terence, Kerem Turgutlu, Christopher Csiszar, and Jeremy Howard. 2018. “Beware Default Random Forest Importances.” https://explained.ai/rf-importance/index.html.
Pearl, Judea. 1993. “Comment: Graphical Models, Causality and Intervention.” Statistical Science 8 (3): 266–69.
Pedersen, Thomas Lin. 2019. “LIME R Package.” GitHub Repository. https://github.com/thomasp85/lime.
Pedersen, Thomas Lin, and Michaël Benesty. 2019. lime: Local Interpretable Model-Agnostic Explanations. https://CRAN.R-project.org/package=lime.
Peltola, Tomi. 2018. “Local Interpretable Model-Agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections.” CoRR abs/1810.02678. http://arxiv.org/abs/1810.02678.
R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
———. 2020. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Ribeiro, Marco Tulio Correia. 2019. “LIME Python Package.” GitHub Repository. https://github.com/marcotcr/lime.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016a. “Model-Agnostic Interpretability of Machine Learning.” arXiv Preprint arXiv:1606.05386.
———. 2016b. “Why Should I Trust You?: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. ACM.
Scholbeck, Christian. 2018. “Interpretierbares Machine-Learning. Post-Hoc Modellagnostische Verfahren zur Bestimmung von Prädiktoreffekten in Supervised-Learning-Modellen” [Interpretable Machine Learning: Post-Hoc Model-Agnostic Methods for Determining Predictor Effects in Supervised Learning Models]. Ludwig-Maximilians-Universität München.
Strobl, Carolin, Anne-Laure Boulesteix, Thomas Kneib, and Thomas Augustin. 2008. “Conditional Variable Importance for Random Forests.” BMC Bioinformatics 9: 307. doi:10.1186/1471-2105-9-307.
Wickham, Hadley, Winston Chang, Lionel Henry, Thomas Lin Pedersen, Kohske Takahashi, Claus Wilke, Kara Woo, Hiroaki Yutani, and Dewey Dunnington. 2020. ggplot2: Create Elegant Data Visualisations Using the Grammar of Graphics. https://CRAN.R-project.org/package=ggplot2.
Zhao, Qingyuan, and Trevor Hastie. 2018. Causal Interpretations of Black-Box Models. http://web.stanford.edu/~hastie/Papers/pdp_zhao_final.pdf.