The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Analysis of methods for machine learning system security

Bobrov N.D., Chekmarev M.A., Klyuev S.G.

UDC 004.056.53
DOI: 10.26102/2310-6018/2022.36.1.006


Machine learning systems are an effective means of achieving objectives that involve processing large amounts of data, which has driven their widespread adoption across many fields of activity. At the same time, such systems are currently vulnerable to malicious manipulation that can lead to violations of integrity and confidentiality; this is confirmed by the inclusion of these threats in the Information Security Threats Data Bank of the Federal Service for Technical and Export Control (FSTEC) in December 2020. Under these conditions, ensuring the secure use of machine learning systems at all stages of the life cycle is an important task, which explains the relevance of the study. The paper examines the existing security methods proposed by various researchers and described in the scientific literature, their shortcomings, and the prospects for their further application. In this respect, this review article aims to identify research problems relating to machine learning system security with a view to the subsequent development of technical and scientific solutions in this area. The materials of the article are of practical value for information security specialists and developers of machine learning systems.


Bobrov Nikita Dmitrievich

Krasnodar Higher Military School

Krasnodar, Russian Federation

Chekmarev Maxim Alekseevich

Email: max.chek13@gmail.com


Krasnodar Higher Military School

Krasnodar, Russian Federation

Klyuev Stanislav Gennadievich
Cand. Sci. (Engineering), Assistant Professor


Krasnodar Higher Military School

Krasnodar, Russian Federation

Keywords: machine learning, malicious impact, integrity, confidentiality, security

For citation: Bobrov N.D., Chekmarev M.A., Klyuev S.G. Analysis of methods for machine learning system security. Modeling, Optimization and Information Technology. 2022;10(1). Available from: https://moitvivt.ru/ru/journal/pdf?id=935 DOI: 10.26102/2310-6018/2022.36.1.006 (in Russ.).


Full text in PDF

Received 03.12.2021

Revised 31.01.2022

Accepted 25.02.2022

Published 26.02.2022