The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Explainable artificial intelligence and methods for interpreting results

Shevskaya N.V.

UDC 004.891.2
DOI: 10.26102/2310-6018/2021.33.2.024

Abstract

Artificial intelligence systems are used in many areas of human life, for example finance and medicine. Every year intelligent systems process more data and make more decisions, and these decisions have an ever greater impact on people's lives. The stumbling block is distrust of fully autonomous, non-human artificial intelligence systems. The root of this distrust lies in not understanding why an intelligent system makes a particular decision and on what beliefs it operates (and whether it has beliefs of its own or only those given to it by its developers). Methods of explainable artificial intelligence are used to address this problem of distrust. This article provides a brief overview of the methods most popular in the academic community: PDP, SHAP, LIME, DeepLIFT, permutation importance, and ICE plots. Practical exercises demonstrate how easily the PDP and SHAP methods are applied and how conveniently their graphical results are "read", explaining a decision tree model and a random forest model built on a small set of sales data.
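The practical part described above can be reproduced with a few lines of Python. The sketch below is only an illustration of the general pattern, not the author's code from the article: the original sales dataset is not available, so hypothetical feature names (ad_spend, price, store_size) and a synthetic target stand in for it, and scikit-learn's PartialDependenceDisplay together with the shap package are assumed as the tooling.

```python
# Minimal sketch of PDP and SHAP applied to a random forest.
# Assumptions: scikit-learn, shap and matplotlib are installed;
# the feature names and synthetic target are hypothetical stand-ins
# for the small sales dataset used in the article.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)

# Hypothetical sales-like features: advertising spend, price, store size.
X = pd.DataFrame({
    "ad_spend": rng.uniform(0, 100, 200),
    "price": rng.uniform(5, 50, 200),
    "store_size": rng.uniform(50, 500, 200),
})
y = 3.0 * X["ad_spend"] - 2.0 * X["price"] + 0.1 * X["store_size"] + rng.normal(0, 5, 200)

# A random forest as the "black box" model to be explained.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Partial dependence plots: average predicted sales as each feature varies.
PartialDependenceDisplay.from_estimator(model, X, features=["ad_spend", "price"])

# SHAP values: per-prediction contribution of every feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)
plt.show()
```

The partial dependence plot shows how the average prediction changes as one feature varies, while the SHAP summary plot shows each feature's contribution to individual predictions; this is the kind of "readable" graphical output discussed in the article.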

List of references

1. Transparency and Responsibility in Artificial Intelligence. Deloitte. Available at: https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/innovatie/deloitte-nl-innovation-bringing-transparency-and-ethics-into-ai.pdf (accessed 10.06.2021)

2. Linardatos P., Papastefanopoulos V., Kotsiantis S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy. 2021;23(1):18. Available at: https://www.mdpi.com/1099-4300/23/1/18/pdf DOI: 10.3390/e23010018 (accessed 10.06.2021)

3. Rosenfeld A., Richardson A. Explainability in human–agent systems. Autonomous Agents and Multi-Agent Systems. 2019;33(6):673-705. Available at: https://www.researchgate.net/publication/333084339_Explainability_in_human-agent_systems DOI: 10.1007/s10458-019-09408-y (accessed 10.06.2021)

4. Lundberg S. M., Lee S. I. Consistent feature attribution for tree ensembles. Proceedings of the ICML Workshop on Human Interpretability in Machine Learning (WHI’2017). 2017. Available at: https://arxiv.org/abs/1706.06060 (accessed 10.06.2021)

5. Lundberg S. M., Lee S. I. A Unified Approach to Interpreting Model Predictions. Proceedings of the Neural Information Processing Systems (NIPS). 2017. Available at: https://arxiv.org/abs/1705.07874 (accessed 10.06.2021)

6. Friedman J. H. Greedy function approximation: a gradient boosting machine. Annals of Statistics. 2001;29(5):1189-1232. Available at: https://statweb.stanford.edu/~jhf/ftp/trebst.pdf (accessed 10.06.2021)

7. Ribeiro M. T., Singh S., Guestrin C. “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016;1135-1144. DOI: 10.1145/2939672.2939778

8. Shrikumar A., Greenside P., Kundaje A. Learning important features through propagating activation differences. Proceedings of the 34th International Conference on Machine Learning (PMLR). 2017;70:3145-3153.

9. Goldstein A. et al. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. Journal of Computational and Graphical Statistics. 2015;24(1):44-65. DOI: 10.1080/10618600.2014.907095

10. Machine Learning Explainability. Permutation Importance. Available at: https://www.kaggle.com/dansbecker/permutation-importance (accessed 10.06.2021)

11. Machine Learning Explainability Course. Partial Dependence Plots. Available at: https://www.kaggle.com/dansbecker/partial-plots (accessed 10.06.2021)

12. Machine Learning Explainability. SHAP Values. Available at: https://www.kaggle.com/dansbecker/shap-values (accessed 10.06.2021)

About authors

Shevskaya Natalya Vladimirovna


Saint Petersburg Electrotechnical University "LETI"

Saint Petersburg, Russia

Keywords: artificial intelligence, explainable artificial intelligence, interpretable artificial intelligence, explainability, interpretability, XAI, PDP, SHAP

For citation: Shevskaya N.V. Explainable artificial intelligence and methods for interpreting results. Modeling, Optimization and Information Technology. 2021;9(2). URL: https://moitvivt.ru/ru/journal/pdf?id=1005 DOI: 10.26102/2310-6018/2021.33.2.024 (In Russ.).



Accepted 30.07.2021

Published 30.06.2021