The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Enhancing the trustworthiness of explainable artificial intelligence through fuzzy logic and ontology

Kosov P.I., Gardashova L.A.

UDC 004.89
DOI: 10.26102/2310-6018/2025.49.2.014

  • Abstract
  • List of references
  • About authors

The insufficient explainability of machine learning models has long been a significant challenge in the field. Specialists across various domains of artificial intelligence (AI) application have sought to develop explainable and reliable systems. To address this challenge, DARPA formulated a contemporary approach to explainable AI (XAI). Subsequently, Bellucci et al. extended DARPA's XAI concept by proposing a novel methodology based on semantic web technologies; specifically, they employed OWL2 ontologies to represent user-oriented expert knowledge. Their system increases confidence in AI decisions by providing deeper explanations. Nevertheless, XAI systems still encounter difficulties when confronted with incomplete and imprecise data. We propose a novel approach that uses fuzzy logic to address this limitation. Our methodology integrates fuzzy logic with machine learning models to emulate human reasoning. The new approach interfaces more effectively with expert knowledge, enabling deeper explanations of AI decisions. The system leverages expert knowledge represented through ontologies and remains fully compatible with the architecture proposed by Bellucci et al. The objective of this research is not to enhance classification accuracy, but rather to improve the trustworthiness and depth of explanations generated by XAI through the application of "explanatory" properties and fuzzy logic.
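To make the role of fuzzy logic in such explanations more concrete, the sketch below shows how a crisp model output (for example, a confidence score) can be mapped onto graded linguistic terms with triangular membership functions. This is a minimal illustrative sketch in Python, not the authors' implementation: the names (triangular, fuzzify), the three-term partition, and the thresholds are assumptions chosen only for the example; in the described system such terms would be attached to ontology individuals through the "explanatory" properties.

# Minimal illustrative sketch (not the authors' implementation).
# Triangular membership functions map a crisp value, such as a model's
# confidence score, onto graded linguistic terms. Term names and
# thresholds below are assumptions chosen only for this example.

def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with peak b on [a, c]."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical three-term linguistic partition of a value normalized to [0, 1].
TERMS = {
    "low": (0.0, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high": (0.5, 1.0, 1.0),
}

def fuzzify(value):
    """Degree to which `value` belongs to each linguistic term."""
    return {term: triangular(value, *abc) for term, abc in TERMS.items()}

if __name__ == "__main__":
    # A confidence of 0.85 is mostly "high" and partly "medium"; such graded
    # terms, rather than the bare number, could annotate an ontology
    # individual and feed a user-facing explanation.
    print(fuzzify(0.85))  # approximately {'low': 0.0, 'medium': 0.3, 'high': 0.7}

In the full approach, such membership degrees could also be obtained from fuzzy c-means clustering [14] rather than from hand-set triangular partitions as in this sketch.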

1. Dwivedi R., Dave D., Naik H., et al. Explainable AI (XAI): Core Ideas, Techniques, and Solutions. ACM Computing Surveys. 2023;55(9):1–33. https://doi.org/10.1145/3561048

2. Jo T. Machine Learning Foundations: Supervised, Unsupervised, and Advanced Learning. Cham: Springer; 2021. 391 p. https://doi.org/10.1007/978-3-030-65900-4

3. Saranya A., Subhashini R. A Systematic Review of Explainable Artificial Intelligence Models and Applications: Recent Developments and Future Trends. Decision Analytics Journal. 2023;7. https://doi.org/10.1016/j.dajour.2023.100230

4. Gunning D., Aha D.W. DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine. 2019;40(2):44–58. https://doi.org/10.1609/aimag.v40i2.2850

5. Bellucci M., Delestre N., Malandain N., Zanni-Merk C. Combining an Explainable Model Based on Ontologies with an Explanation Interface to Classify Images. Procedia Computer Science. 2022;207:2395–2403. https://doi.org/10.1016/j.procs.2022.09.298

6. Kulmanov M., Smaili F.Z., Gao X., Hoehndorf R. Semantic Similarity and Machine Learning with Ontologies. Briefings in Bioinformatics. 2021;22(4). https://doi.org/10.1093/bib/bbaa199

7. Giustozzi F., Saunier J., Zanni-Merk C. A Semantic Framework for Condition Monitoring in Industry 4.0 based on Evolving Knowledge Bases. Semantic Web. 2023;15(3):1–29. https://doi.org/10.3233/SW-233481

8. Bourgais M., Giustozzi F., Vercouter L. Detecting Situations with Stream Reasoning on Health Data Obtained with IoT. Procedia Computer Science. 2021;192:507–516. https://doi.org/10.1016/j.procs.2021.08.052

9. Zadeh L.A. Fuzzy Sets. Information and Control. 1965;8(3):338–353. https://doi.org/10.1016/S0019-9958(65)90241-X

10. Aliev R.A., Aliev R.R. Soft Computing and Its Applications. Singapore: World Scientific; 2001. 460 p. https://doi.org/10.1142/4766

11. Dumitrescu C., Ciotirnae P., Vizitiu C. Fuzzy Logic for Intelligent Control System Using Soft Computing Applications. Sensors. 2021;21(8). https://doi.org/10.3390/s21082617

12. Gardashova L.A. Synthesis of Fuzzy Terminal Controller for Chemical Reactor of Alcohol Production. In: 10th International Conference on Theory and Application of Soft Computing, Computing with Words and Perceptions – ICSCCW-2019, 27–28 August 2019, Prague, Czech Republic. Cham: Springer; 2020. P. 106–112. https://doi.org/10.1007/978-3-030-35249-3_13

13. Kosov P., El Kadhi N., Zanni-Merk C., Gardashova L. Advancing XAI: New Properties to Broaden Semantic-Based Explanations of Black-Box Learning Models. Procedia Computer Science. 2024;246:2292–2301. https://doi.org/10.1016/j.procs.2024.09.560

14. Bezdek J.C., Ehrlich R., Full W. FCM: The Fuzzy C-Means Clustering Algorithm. Computers & Geosciences. 1984;10(2–3):191–203. https://doi.org/10.1016/0098-3004(84)90020-7

15. Kosov P., El Kadhi N., Zanni-Merk C., Gardashova L. Semantic-Based XAI: Leveraging Ontology Properties to Enhance Explainability. In: 2024 International Conference on Decision Aid Sciences and Applications (DASA), 11–12 December 2024, Manama, Bahrain. IEEE; 2025. P. 1–5. https://doi.org/10.1109/DASA63652.2024.10836289

16. Jones N.A., Ross H., Lynam T., Perez P., Leitch A. Mental Models: An Interdisciplinary Synthesis of Theory and Methods. Ecology and Society. 2011;16(1). URL: http://www.jstor.org/stable/26268859

17. Horrocks I., Patel-Schneider P.F., Boley H., Tabet S., Grosof B., Dean M. SWRL: A Semantic Web Rule Language Combining OWL and RuleML. World Wide Web Consortium. URL: https://www.w3.org/submissions/SWRL [Accessed 12th March 2025].

Kosov Pavel Igorevich

Email: pavel_kosov@asoiu.edu.az

Scopus | ORCID | eLibrary

Azerbaijan State Oil and Industry University

Baku, Azerbaijan

Gardashova Latafat Abbas qizi
Doctor of Engineering Sciences, Professor
Email: l.qardashova@asoiu.edu.az

Scopus | ORCID

Azerbaijan State Oil and Industry University

Baku, Azerbaijan

Keywords: explainable artificial intelligence, explainability, ontology, fuzzy system, fuzzy clustering

For citation: Kosov P.I., Gardashova L.A. Enhancing the trustworthiness of explainable artificial intelligence through fuzzy logic and ontology. Modeling, Optimization and Information Technology. 2025;13(2). URL: https://moitvivt.ru/ru/journal/pdf?id=1872 DOI: 10.26102/2310-6018/2025.49.2.014 (In Russ.).


Received 27.03.2025

Revised 18.04.2025

Accepted 24.04.2025