The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

The problem of compromising the image recognition system by purposefully falsifying the training set

Khmeleva A.A., Demina R.Y., Azhmukhamedov I.M.

UDC 004.83
DOI: 10.26102/2310-6018/2024.45.2.005

Abstract

This work addresses the security of image recognition systems based on neural networks. Such systems are deployed in many fields, so protecting them against attacks that target artificial intelligence methods is critically important. The study considers the convolutional neural network ResNet18, the ImageNet validation set used to detect objects in an image and assign them to a class, and adversarial attacks that modify the images processed by this network. Convolutional neural networks both detect and segment the objects present in an image. The attack was applied at the detection stage, so that the presence of objects in the image went unrecognized, and at the segmentation stage, where the modified image caused the recognized object to be assigned to the wrong class. A series of experiments demonstrated how an adversarial attack alters different images: images of animals were attacked, and analysis of the results made it possible to determine the number of iterations required for a successful attack. The original images were also compared with the versions modified during the attack.

List of references

1. Murzina D.O., Dolzhenkova I.V. Application of artificial intelligence systems. Forum molodykh uchenykh. 2017;(12):1313–1316. (In Russ.).

2. Alikperova N.V. Artificial Intelligence in Healthcare: Risks and Opportunities. Zdorov'e megapolisa = City Healthcare. 2023;4(3):41–49. (In Russ.). https://doi.org/10.47619/2713-2617.zm.2023.v.4i3;41-49.

3. Prokopenya A.S., Azarov I.S. Overview of convolutional neural networks for image recognition. BIG DATA and Advanced Analytics. 2020;(6-1):271–280. (In Russ.).

4. Nazarov A.V., Marenkov A.N., Kaliev A.B. Detection of cryptographic viruses behavior signs in the work of the computer system. Prikaspiiskii zhurnal: upravlenie i vysokie tekhnologii = Caspian Journal: Management and High Technologies. 2018;(1):196–204. (In Russ.).

5. Marenkov A.N., Kuznetsova V.Yu., Gelagaev T.M. Application of face recognition technologies in control and access control systems. Prikaspiiskii zhurnal: upravlenie i vysokie tekhnologii = Caspian Journal: Management and High Technologies. 2021;(1):83–90. (In Russ.). https://doi.org/10.21672/2074-1707.2021.53.1.083-090.

6. Alekseenko Yu.V. Development of an image recognition system based on convolutional neural networks. Evraziiskii Soyuz Uchenykh. 2017;(7-1):8–11. (In Russ.).

7. Demina R.Yu., Azhmukhamedov I.M. Increasing quality of classifying objects using new metrics of clustering. Vestnik Astrakhanskogo gosudarstvennogo tekhnicheskogo universiteta. Seriya: Upravlenie, vychislitel'naya tekhnika i informatika = Vestnik of Astrakhan State Technical University. Series: Management, Computer Science and Informatics. 2019;(4):106–114. (In Russ.). https://doi.org/10.24143/2072-9502-2019-4-106-114.

8. Chekhonina E.A., Kostyumov V.V. Overview of adversarial attacks and defenses for object detectors. International Journal of Open Information Technologies. 2023;11(7):11–20. (In Russ.).

9. Sai Abhishek A.V., Gurrala V.R., Sahoo L. Resnet18 Model With Sequential Layer For Computing Accuracy On Image Classification Dataset. International Journal of Creative Research Thoughts. 2022;10(5):176–181.

10. Sikorskii O.S. Overview of convolutional neural networks for the image classification task. In: Proceedings of the 20th Scientific and Practical Seminar «Novye informatsionnye tekhnologii v avtomatizirovannykh sistemakh» (New Information Technologies in Automated Systems), 20 April 2017, Moscow, Russia. Moscow: HSE Tikhonov Moscow Institute of Electronics and Mathematics; 2017. P. 37–42. (In Russ.).

About authors

Khmeleva Anastasia Alexandrovna

Astrakhan State University named after V. N. Tatishchev

Astrakhan, the Russian Federation

Demina Raisa Yurievna
Candidate of Engineering Sciences, Associate Professor

Astrakhan State University named after V. N. Tatishchev

Astrakhan, the Russian Federation

Azhmukhamedov Iskandar Maratovich
Doctor of Engineering Sciences, Professor

Astrakhan State University named after V. N. Tatishchev

Astrakhan, the Russian Federation

Keywords: neural networks, attacks on neural networks, adversarial attacks, ResNet18, transformation matrix

For citation: Khmeleva A.A., Demina R.Y., Azhmukhamedov I.M. The problem of compromising the image recognition system by purposefully falsifying the training set. Modeling, Optimization and Information Technology. 2024;12(2). Available from: https://moitvivt.ru/ru/journal/pdf?id=1535 DOI: 10.26102/2310-6018/2024.45.2.005 (In Russ.).

Received 04.04.2024

Revised 15.04.2024

Accepted 19.04.2024

Published 23.04.2024