The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Metadata of articles for the last 2 years

Human pose estimation from video stream

2025. Vol. 13. No. 2. id 1920
Potenko M.A. 

DOI: 10.26102/2310-6018/2025.49.2.036

The article presents a study of a human body pose estimation system based on the use of two neural networks. The proposed system allows determining the spatial location of 33 key points corresponding to the main joints of the human body (wrists, elbows, shoulders, feet, etc.), as well as constructing a segmentation mask for accurate delineation of human figure boundaries in an image. The first neural network implements object detection functions and is based on the Single Shot Detector (SSD) architecture with the application of Feature Pyramid Network (FPN) principles. This approach ensures the effective combination of features at different levels of abstraction and enables the processing of input images with a resolution of 224×224 for subsequent determination of people's positions in a frame. A distinctive feature of the implementation is the use of information from previous frames, which helps optimize computational resources. The second neural network is designed for key point detection and segmentation mask construction. It is also based on the principles of multi-scale feature analysis using FPN, ensuring high accuracy in localizing key points and object boundaries. The network operates on images with a resolution of 256×256, which allows achieving the necessary precision in determining spatial coordinates. The proposed architecture is characterized by modularity and scalability, enabling the system to be adapted for various tasks requiring different numbers of control points. The research results have broad practical applications in fields such as computer vision, animation, cartoon production, security systems, and other areas related to the analysis and processing of visual information.
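The two-network, 33-keypoint design with a segmentation mask described above closely mirrors the interface of the open-source MediaPipe Pose solution. Below is a minimal sketch of reading landmarks and a mask from a video stream with that library, offered as a comparable reference pipeline rather than the authors' implementation; the input file name is a placeholder.

```python
# Sketch: 33 keypoints + person mask from a video stream with MediaPipe Pose,
# a publicly available analogue of the two-network system described above.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture("input.mp4")  # placeholder input file

with mp_pose.Pose(static_image_mode=False,   # track: reuse info from prior frames
                  enable_segmentation=True,  # also return a segmentation mask
                  min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 normalized landmarks (x, y, z, visibility)
            for lm in results.pose_landmarks.landmark:
                print(f"{lm.x:.3f} {lm.y:.3f} {lm.z:.3f} {lm.visibility:.2f}")
        mask = results.segmentation_mask  # per-pixel person probability
cap.release()
```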

Keywords: neural networks, convolutional neural networks, machine learning, computer vision, human pose estimation, keypoints, image segmentation

Optimizing the management of population medical examination development based on predictive modeling of morbidity change rates in the regional healthcare organizational system

2025. Vol. 13. No. 2. id 1919
Gafanovich E.Y.  Lvovich A.I.  Preobrazhenskiy A.P.  Choporov O.N. 

DOI: 10.26102/2310-6018/2025.49.2.047

The article examines the possibility of increasing the efficiency of managing the development of population medical examination in a region based on predictive modeling of the rate of change in morbidity. Predictive estimates serve as a prerequisite for optimizing the management decisions on distributing the resource provision planned within the forecasting horizon by the managing center of the regional healthcare organizational system. Based on long-term statistical data, the average annual rates of change in morbidity and in the volumes of medical examinations are calculated as estimates of additional resource provision. The initial data make it possible to calculate the rates of change for a group of nosological units as a whole, for each nosological unit, and for each territorial entity. Visual expert modeling is used to estimate the degree to which changes in morbidity are synchronized with the allocation of additional medical examination volumes. The expediency of using the analysis of long-term medical-statistical information and predictive estimates for making management decisions within the healthcare organizational system of the Voronezh Region is substantiated. Based on the analysis of retrospective data, it is concluded that, without an optimization approach, the rate of change in morbidity is insufficiently synchronized with the allocated resource. In addition, given the large number of territorial entities in the Voronezh Region, it is proposed to first classify them by the rate of change into three groups: low, medium, and high. The main classes of problems of optimizing management decisions on distributing additional volumes of medical examinations within the planning horizon of the organizational system's development are considered. For this purpose, prognostic estimates are transformed into priority coefficients for the use of the planned resource. Distribution of medical examination volumes between nosological units relies on optimization based on expert choice. An optimization problem of resource distribution among territorial entities is formulated, and the corresponding management decisions are determined using a multi-alternative optimization algorithm.

Keywords: organizational system, development management, predictive modeling, visual expert modeling, optimization

Development of a lightweight model for automatic classification of structured and unstructured data in streaming sources to optimize optical character recognition

2025. Vol. 13. No. 3. id 1918
Gavrilov V.S.  Korchagin S.A.  Dolgov V.I.  Andriyanov N.A. 

DOI: 10.26102/2310-6018/2025.50.3.006

This article considers the task of preliminary assessment of incoming electronic document flow based on computer vision technologies. The authors synthesized a dataset of images with structured data based on the invoice form, and also collected scans of various documents, ranging from pages of scientific articles and documentation received in the electronic mailbox of a scientific organization to Rosstat reports. Thus, the first part of the dataset contains structured data with a strict form, while the second part contains unstructured scans, since information can be presented in different ways on different scanned documents (text only, text with images, graphs), and different sources have different requirements and their own standards. The primary analysis of data in streaming sources can be performed using computer vision models. The experiments performed have shown high accuracy of convolutional neural networks. In particular, a neural network with the Xception architecture achieves an accuracy of more than 99%, an advantage of about 9% over the simpler MobileNetV2 model. The proposed approach will allow primary filtering of documents by department without using large language and character recognition models, which will increase speed and reduce computational costs.
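For context, a transfer-learning classifier of the kind compared in the experiments (Xception against MobileNetV2) can be assembled in a few lines of Keras. This is a generic sketch with illustrative hyperparameters, not the authors' training setup; train_ds and val_ds are hypothetical data pipelines.

```python
# Sketch: binary "structured vs. unstructured scan" classifier on Xception.
# Generic transfer-learning setup, not the authors' exact configuration;
# train_ds / val_ds are hypothetical tf.data pipelines of document images.
import tensorflow as tf

base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False   # first train only the classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # structured / unstructured
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```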

Keywords: intelligent document processing, computer vision, convolutional neural networks, stream data processing, machine learning

Platform for testing radiological artificial intelligence-powered software

2025. Vol. 13. No. 2. id 1917
Kovalchuk A.Y.  Ponomarenko A.P.  Arzamasov K.P. 

DOI: 10.26102/2310-6018/2025.49.2.023

The number of AI-based software products used in radiology has been growing rapidly in recent years, and the effectiveness of such AI services should be carefully assessed to ensure the quality of the developed algorithms. Manual assessment of such systems is a labor-intensive process. In this regard, an urgent task is the development of a specialized unified platform for automated testing of AI algorithms used to analyze medical images. The proposed platform consists of three main modules: a testing module that interacts with the software under test and collects data processing results; a viewing module that provides tools for visually evaluating the obtained graphic series and structured reports; and a metrics calculation module that computes diagnostic performance characteristics of artificial intelligence algorithms. Technologies such as Python 3.9, Apache Kafka, PACS, and Docker were used in the development. The developed platform has been successfully tested on real data. The obtained results indicate the potential of the developed platform to improve the quality and reliability of AI services in radiation diagnostics, as well as to facilitate their implementation in clinical practice.
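As an illustration of what the metrics calculation module computes, diagnostic performance characteristics can be derived from AI service scores and reference labels as in the following sketch (toy data, not the platform's actual code):

```python
# Sketch: the kind of computation a metrics module performs - diagnostic
# performance of an AI service from its scores against reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # reference labels
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.55, 0.2])  # AI outputs

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, (y_score >= 0.5).astype(int)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```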

Keywords: platform, diagnostic imaging, testing, medical images, artificial intelligence

Artificial intelligence in the task of generating distractors for test questions

2025. Vol. 13. No. 2. id 1915
Dagaev A. 

DOI: 10.26102/2310-6018/2025.49.2.028

Creating high-quality distractors for test items is a labor-intensive task that plays a crucial role in the accurate assessment of knowledge. Existing approaches often produce implausible alternatives or fail to reflect typical student errors. This paper proposes an AI-based algorithm for distractor generation. It employs a large language model (LLM) to first construct a correct chain of reasoning for a given question and answer, and then introduces typical misconceptions to generate incorrect but plausible answer choices, aiming to capture common student misunderstandings. The algorithm was evaluated on questions from the Russian-language datasets RuOpenBookQA and RuWorldTree. Evaluation was conducted using both automatic metrics and expert assessment. The results show that the proposed algorithm outperforms baseline methods (such as direct prompting and semantic modification), generating distractors with higher levels of plausibility, relevance, diversity, and similarity to human-authored reference distractors. This work contributes to the field of automated assessment material generation, offering a tool that supports the development of more effective evaluation resources for educators, educational platform developers, and researchers in natural language processing.
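A minimal sketch of the two-step idea, first eliciting a correct reasoning chain and then perturbing it with a typical misconception, is shown below; llm_complete is a hypothetical wrapper around any chat-completion client and must be supplied by the reader.

```python
# Sketch of the two-step generation idea: elicit a correct reasoning chain,
# then perturb it with a typical misconception to obtain a plausible distractor.
# `llm_complete` is a hypothetical wrapper around any chat-completion client.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def generate_distractors(question: str, answer: str, n: int = 3) -> list[str]:
    reasoning = llm_complete(
        f"Question: {question}\nCorrect answer: {answer}\n"
        "Explain step by step why this answer is correct."
    )
    distractors = []
    for _ in range(n):
        distractors.append(llm_complete(
            f"Question: {question}\nCorrect reasoning: {reasoning}\n"
            "Introduce one typical student misconception into this reasoning "
            "and output only the resulting incorrect but plausible answer."
        ))
    return distractors
```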

Keywords: distractor generation, artificial intelligence, large language models, knowledge assessment, test items, automated test generation, NLP

Methodology for creating a dataset for predictive analysis of an industrial robot

2025. Vol. 13. No. 2. id 1912
Kormin T.G.  Tikhonov I.N.  Berestova S.A.  Zyryanov A.V. 

DOI: 10.26102/2310-6018/2025.49.2.034

Industrial robots are one of the ways to increase production volumes. Bundling, milling, welding, laser processing, and 3D printing are processes that require maintaining high-precision positioning of industrial robots throughout the entire operation cycle. This article analyzes the use of the Denavit-Hartenberg (DH) method to determine the positioning and orientation errors of an industrial robot. In this study, the DH method is used to create a model of possible errors in industrial robots and to build a database of deviations of the robot's links and end effector from a predetermined trajectory. Special attention is paid to the practical steps of creating a synthetic dataset of axis deviations of an industrial robot, starting from the robot's kinematic model and ending with the preparation of the final data format for subsequent analysis and construction of a predictive analytics model. The importance of careful data preparation is highlighted by examples from other research in the field of predictive analytics of industrial equipment, demonstrating the economic benefits of timely detection and prevention of possible equipment failures. The developed model is subsequently used to generate a synthetic dataset of axis deviations of an industrial robot. The proposed data collection model and methodology for creating a dataset for predictive analytics are being tested on a six-axis robot designed for this purpose.
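For reference, the standard DH homogeneous transform that such a kinematic model chains per link, and into which axis errors can be injected to synthesize deviation data, is straightforward in NumPy; a minimal sketch under standard DH conventions:

```python
# Sketch: standard Denavit-Hartenberg link transform and forward kinematics;
# injecting small errors into the DH parameters before recomputing the pose
# yields synthetic deviation data of the kind described above.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous 4x4 transform of one link (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    """Chain the link transforms; dh_rows is a list of (theta, d, a, alpha)."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T   # end-effector pose as a homogeneous matrix

# Hypothetical 2-link example: nominal pose vs. pose with a perturbed joint angle
nominal = [(0.3, 0.1, 0.4, np.pi / 2), (-0.5, 0.0, 0.3, 0.0)]
perturbed = [(0.3 + 1e-3, 0.1, 0.4, np.pi / 2), (-0.5, 0.0, 0.3, 0.0)]
print(forward_kinematics(perturbed)[:3, 3] - forward_kinematics(nominal)[:3, 3])
```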

Keywords: inverse kinematics problem, predictive analytics, simulation modeling, industrial robot malfunction assessment, Denavit-Hartenberg method, automation, fault diagnosis

Development of software for evaluating the elastic properties of multilayer composite materials

2025. Vol. 13. No. 4. id 1908
Bokhoeva L.A.  Titov V.A.  Shatov M.S.  Targashin K.V.  Mei S. 

DOI: 10.26102/2310-6018/2025.51.4.021

This paper presents the development of software for the automated computation of the elastic properties of multilayer composite materials (MCM) intended for use in structures subjected to high-velocity impact loading. The generated array of calculated data can be used for training and testing artificial neural networks used in predicting the ballistic characteristics of MCM subjected to high-speed impact loads. An algorithm has been developed to determine the elastic characteristics of a composite laminate, encompassing the transition from fiber and matrix volume fractions to the properties of a unidirectional composite and subsequently to the full multilayered structure. The implementation includes strength assessment based on the Mises–Hill failure criterion, as well as support for batch data processing via Excel spreadsheets. The software provides analysis of stacking sequences comprising layers of various materials, thicknesses, fiber orientation angles, and through-thickness arrangements. The results will serve as a foundation for the development of an integrated approach to the design of composite structures. The developed software can be used as a standalone tool for engineering analysis or as part of integrated numerical modeling systems. The obtained results significantly reduce the time required to prepare input data for numerical simulations and ensure greater accuracy of initial parameters.
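The step from fiber and matrix volume fractions to unidirectional-ply constants is commonly done with rule-of-mixtures micromechanics; the following sketch uses those textbook relations with illustrative glass/epoxy values and is not necessarily the authors' exact formulation:

```python
# Sketch: unidirectional-ply elastic constants from fiber/matrix properties via
# the rule of mixtures (inverse rule for the transverse and shear terms).
# Textbook micromechanics, not necessarily the authors' exact relations.
def ply_elastic_constants(Ef, Em, Gf, Gm, nu_f, nu_m, Vf):
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm                # longitudinal modulus
    E2 = Ef * Em / (Ef * Vm + Em * Vf)    # transverse modulus
    G12 = Gf * Gm / (Gf * Vm + Gm * Vf)   # in-plane shear modulus
    nu12 = nu_f * Vf + nu_m * Vm          # major Poisson's ratio
    return E1, E2, G12, nu12

# Illustrative glass/epoxy values, 60% fiber volume fraction (Pa)
print(ply_elastic_constants(Ef=72e9, Em=3.5e9, Gf=30e9, Gm=1.3e9,
                            nu_f=0.22, nu_m=0.35, Vf=0.6))
```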

Keywords: elastic properties, composite, elastic properties determination algorithm, database, layer stacking

A conceptual approach to the integration of artificial intelligence into engineering activities

2025. Vol. 13. No. 2. id 1907
Terekhin M.A.  Ivaschenko A.V.  Kulakov G.A. 

DOI: 10.26102/2310-6018/2025.49.2.031

The article addresses the pressing issue of developing a unified information space for the integration of artificial intelligence components in the context of information support for design and technological preparation of production. It considers the challenge of creating a digital engineering assistant whose functions include the analysis of design documentation, processing of two-dimensional and three-dimensional models, and generation of new design and technological solutions. The model of interaction between the digital assistant and the engineer is proposed within the framework of integrating computer-aided design systems, engineering data management systems, and digital content management systems. This integration is based on the novel concept of "affordance", which is widely used to describe the characteristics of artificial intelligence systems, as well as in perception psychology and design to describe human interaction with technical devices. Using this concept, an information-logical model of an integrated enterprise information environment has been developed—an environment that brings together natural and artificial intelligence for the purpose of facilitating creative engineering activity. The classification of implementation options based on affordances is proposed as a foundation for compiling and annotating training datasets for generative models, as well as a guideline for formulating subsequent prompt queries. The proposed concept has been practically implemented and illustrated through the unification of medical device designs, including rehabilitation products, surgical navigation systems, multisensory simulators, and a modular expert virtual system. The findings presented in the article have practical value for the automation of engineering decision-making support, as well as for higher education in training engineering specialists, including in interdisciplinary fields such as medical engineering.

Keywords: computer-aided design systems, product information support, artificial intelligence, scientific and technical creativity, engineering activities, affordance

Detection of eating disorders in social media texts and network analysis of affected users

2025. Vol. 13. No. 2. id 1906
Solokhov T.D. 

DOI: 10.26102/2310-6018/2025.48.1.033

Eating disorders (EDs) are among the most pressing issues in public health, affecting individuals across various age and social groups. With the rapid growth of digitalization and the widespread use of social media, there emerges a promising opportunity to detect signs of EDs through the analysis of user-generated textual content. This study presents a comprehensive approach that combines natural language processing (NLP) techniques, Word2Vec vectorization, and a neural network architecture for binary text classification. The model aims to identify whether a post is related to disordered eating behavior. Additionally, the study incorporates social network analysis to examine the structure of interactions among users who publish related content. Experimental results demonstrate high precision (0.87), recall (0.84), and overall performance, confirming the model's practical applicability. The network analysis revealed clusters of users with ED-related content, suggesting the presence of a "social contagion" effect, whereby dysfunctional behavioral patterns may spread through online social connections. These findings highlight the potential of NLP and graph-based modeling in the early detection, monitoring, and prevention of eating disorders by leveraging digital traces left in online environments.
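A minimal sketch of the described vectorization step is given below, with a logistic-regression head standing in for the neural classifier and a two-post toy corpus in place of real social media data:

```python
# Sketch: Word2Vec post embeddings + a simple binary classifier
# (logistic regression stands in for the neural classifier used in the study).
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

tokenized_posts = [
    ["skipped", "meals", "again", "today"],
    ["great", "pasta", "recipe", "for", "dinner"],
]                       # toy corpus in place of real social media posts
labels = [1, 0]         # 1 = ED-related

w2v = Word2Vec(tokenized_posts, vector_size=50, window=3, min_count=1, seed=1)

def post_vector(tokens):
    """Average the word vectors of a post (zero vector if no known tokens)."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

X = np.vstack([post_vector(p) for p in tokenized_posts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```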

Keywords: eating disorders, text analysis, machine learning, neural network models, natural language processing, social graph, network analysis

Statistical estimation of the probability of reaching a target price considering volatility and returns across different timeframes

2025. Vol. 13. No. 2. id 1905
Gilmullin T.M.  Gilmullin M.F. 

DOI: 10.26102/2310-6018/2025.49.2.030

The article proposes an original algorithm for statistical estimation of the probability of reaching a target price based on the analysis of returns and volatility using a drifted random walk model and integration of data from different timeframes. The relevance of the study stems from the need to make informed decisions in algorithmic trading under market uncertainty. The key feature of the approach is the aggregation of probabilities computed from different time intervals using Bayesian adjustment and weighted averaging, with weights dynamically determined based on volatility. The use of a universal fuzzy scale for qualitative interpretation of the evaluation results is also proposed. The algorithm includes the calculation of logarithmic returns, trend, and volatility, while stability is improved through data cleaning and anomaly filtering using a modified Hampel method. The article presents a calculation example using real OHLCV data and discusses possible approaches to validating the accuracy of the estimates when historical records of target price attainment are available. The results demonstrate the practical applicability of the proposed method for assessing the feasibility of reaching forecasted targets and for filtering trading signals. The developed algorithm can be used in risk management, trading strategy design, and expert decision support systems in financial markets.
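The single-timeframe core of such an estimate, drift and volatility from logarithmic returns plus a Monte Carlo barrier-touch probability, can be sketched as follows; the multi-timeframe aggregation, Bayesian adjustment, and Hampel filtering described above are omitted, and the price series is toy data:

```python
# Sketch: single-timeframe probability of touching a target price under a
# random walk with drift on log-prices, estimated by Monte Carlo.
import numpy as np

def prob_reach_target(closes, target, horizon, n_paths=20_000, seed=0):
    rng = np.random.default_rng(seed)
    r = np.diff(np.log(closes))            # logarithmic returns
    mu, sigma = r.mean(), r.std(ddof=1)    # drift and volatility per bar
    steps = rng.normal(mu, sigma, size=(n_paths, horizon))
    paths = closes[-1] * np.exp(np.cumsum(steps, axis=1))
    if target >= closes[-1]:
        hit = paths.max(axis=1) >= target  # touch from below
    else:
        hit = paths.min(axis=1) <= target  # touch from above
    return hit.mean()

closes = np.array([100.0, 101.2, 100.6, 102.3, 103.1, 102.7, 104.0])  # toy bars
print(prob_reach_target(closes, target=106.0, horizon=20))
```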

Keywords: statistical probability estimation, target price, return, volatility, random walk with drift, timeframe integration, Bayesian adjustment, fuzzy logic, logarithmic return, financial modeling

Study of the problem of automated matching of audio files

2025. Vol. 13. No. 4. id 1903
Levshin D.V.  Bystryakov D.V.  Zubkov A.V. 

DOI: 10.26102/2310-6018/2025.51.4.004

The volume of audio recording data has significantly increased and continues to grow, which complicates the processing of such data due to the presence of numerous duplicates, noisy recordings, and truncated audio clips. This article presents a solution to the problem of detecting fuzzy duplicates in large-scale audio datasets. The proposed method is based on the use of a cascaded ensemble. For feature extraction, temporal parameter analysis, and similarity evaluation between recordings, Convolutional Neural Networks (CNN), Temporal Shift Networks (TSN), and Siamese Networks were utilized. The input data were initially converted into mel-spectrogram images using the Short-Time Fourier Transform (STFT) algorithm. Each audio file was segmented at a specific sampling rate, with attention to temporal continuity, transformed using STFT, and then passed through the ensemble of models. The study focuses on the behavior of the ensemble when processing recordings that have undergone various modifications, such as noise addition, distortion, and trimming. Experiments conducted on the dataset demonstrated a high degree of correlation between the results obtained from human evaluators and the method, confirming the effectiveness of the proposed solution. The method showed strong robustness to different types of audio modifications, such as tempo changes, noise injection, and clipping. Future research may aim to adapt the ensemble to other types of data, including video and graphical recordings, which would expand the applicability of the proposed approach.
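The preprocessing stage described above, segmenting audio and converting each segment into a log-mel spectrogram image via the STFT, might look as follows with librosa; segment length and spectrogram parameters are illustrative, and the input file name is a placeholder:

```python
# Sketch: segmenting an audio file and turning each segment into a log-mel
# spectrogram image via the STFT; file name and parameters are placeholders.
import librosa
import numpy as np

y, sr = librosa.load("track.wav", sr=22050)   # placeholder input file
seg_len = 5 * sr                              # 5-second segments (illustrative)

segments = [y[i:i + seg_len] for i in range(0, len(y) - seg_len + 1, seg_len)]
mel_images = []
for seg in segments:
    S = librosa.feature.melspectrogram(y=seg, sr=sr, n_fft=2048,
                                       hop_length=512, n_mels=128)
    mel_images.append(librosa.power_to_db(S, ref=np.max))  # dB-scaled image
# mel_images would then feed the CNN / TSN / Siamese ensemble described above
```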

Keywords: audio duplicates, convolutional networks, Fourier transform, audio noise, model robustness, mel-spectrogram, Siamese architecture, temporal features, comparison of audio recordings

Interpretable reinforcement learning to optimize the operational efficiency of enterprises in the context of digital transformation

2025. Vol. 13. No. 3. id 1901
Prokhorova O.K.  Petrova E.S. 

DOI: 10.26102/2310-6018/2025.50.3.001

In the context of the digital transformation of education, MOOC platforms face the need to optimize operational processes while maintaining the quality of education. Traditional approaches to resource management often do not take into account complex temporal patterns of user behavior and individual learning characteristics. This paper proposes an innovative solution based on interpretable reinforcement learning (RL) integrated with the Shapley value method to analyze the contribution of factors. The study demonstrates how data on activity time, user IDs, learning goals, and other parameters can be used to train an RL agent capable of optimizing the allocation of platform resources. The developed approach makes it possible to quantify the contribution of each factor to operational efficiency, identify hidden temporal patterns of user activity, and personalize load management during peak periods. The article contains a mathematical justification of the method, a practical implementation in MATLAB, and testing results showing a reduction in operating costs alongside increased user satisfaction. Special attention is paid to the interpretability of the RL agent's decisions, which is critically important for the educational sphere. The work provides a ready-made methodology for implementing intelligent management systems in digital education, combining theoretical developments with practical recommendations for implementation. The results of the study open up new opportunities for improving the effectiveness of MOOC platforms amid growing competition in the educational technology market.
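For small factor sets, Shapley values can be computed exactly by enumerating coalitions; the sketch below does this for three illustrative factors, with v(S) a hypothetical evaluation of efficiency when the agent uses only the factors in S (the article itself works in MATLAB):

```python
# Sketch: exact Shapley values for the contribution of a few factors to an
# efficiency metric; v(S) is a hypothetical evaluation "efficiency when the
# agent uses only the factors in S", here given by an invented table.
from itertools import combinations
from math import factorial

def shapley(players, v):
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
        phi[p] = total
    return phi

value = {frozenset(): 0, frozenset({"time"}): 4, frozenset({"goal"}): 3,
         frozenset({"id"}): 1, frozenset({"time", "goal"}): 9,
         frozenset({"time", "id"}): 5, frozenset({"goal", "id"}): 4,
         frozenset({"time", "goal", "id"}): 11}
print(shapley(["time", "goal", "id"], lambda S: value[frozenset(S)]))
```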

Keywords: reinforcement learning, Shapley value, operational efficiency, digital transformation, interpretable AI, business process optimization

Analyzing customer behavior and choosing marketing strategies based on reinforcement learning

2025. Vol. 13. No. 2. id 1900
Prokhorova O.K.  Petrova E.S. 

DOI: 10.26102/2310-6018/2025.49.2.035

In today's competitive market, companies face the challenge of choosing optimal marketing strategies that maximize customer engagement, retention, and revenue. Traditional methods such as rule-based approaches or A/B testing are often not flexible enough to adapt to dynamic customer behavior and long-term trends. Reinforcement learning (RL) offers a promising solution, enabling adaptive decision-making through continuous interaction with the environment. This article explores the use of RL in marketing, demonstrating how customer data – such as purchase history, campaign interactions, demographic characteristics, and loyalty metrics – can be used to train an RL agent. The agent learns to choose personalized marketing actions, such as sending discounts or customized offers, in order to maximize metrics such as revenue growth or reduced customer churn. The article provides a step-by-step guide to implementing an RL-based marketing strategy using MATLAB: the creation of a custom environment, the design of an RL agent, and the learning process are considered, along with practical recommendations for interpreting agent decisions. By simulating customer interactions and evaluating agent performance, we demonstrate the potential of RL to transform marketing strategies. The aim of the work is to bridge the gap between advanced machine learning methods and their practical application in marketing by offering a roadmap for companies seeking to harness RL for decision making.
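The article implements the workflow in MATLAB; as a language-agnostic illustration of the same idea, the following Python sketch trains a tabular Q-learning agent to pick a marketing action per customer segment against a toy simulator (all dynamics and rewards are invented):

```python
# Sketch: tabular Q-learning choosing a marketing action per customer segment.
# The environment below is a toy stand-in for the simulated customer
# interactions described in the article.
import numpy as np

n_states, n_actions = 4, 3         # churn-risk segments x {none, discount, offer}
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(1)

def step(state, action):
    """Hypothetical simulator: returns (next_state, reward)."""
    base = [0.0, 0.5, 0.3][action] - 0.2 * state   # invented payoff structure
    return rng.integers(n_states), rng.normal(base, 0.1)

s = 0
for _ in range(50_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2
print("best action per segment:", Q.argmax(axis=1))
```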

Keywords: reinforcement learning, customer behavior, marketing strategies, state of the environment, agent actions, agent reward

Construction of regression models with switching nonlinear transformations for the assigned explanatory variable

2025. Vol. 13. No. 2. id 1893
Bazilevskiy M.P. 

DOI: 10.26102/2310-6018/2025.49.2.024

Often, when constructing regression models, it is necessary to resort to nonlinear transformations of explanatory variables. Both elementary and non-elementary functions can be used for this. This is done because many patterns in nature are complex and poorly described by linear dependencies. Usually, the transformations of explanatory variables in a regression model are constant for all observations of the sample. This work is devoted to constructing nonlinear regressions with switching transformations of the selected explanatory variable. In this case, the least absolute deviations method is used to estimate the unknown regression parameters. To form the rule for switching transformations, an integer function "floor" is used. A mixed 0–1 integer linear programming problem is formulated. The solution of this problem leads to both the identification of optimal estimates for nonlinear regression and the identification of a rule for switching transformations based on the values of explanatory variables. A problem of modeling the weight of aircraft fuselages is solved using this method. The nonlinear regression constructed with the proposed method using switching transformations turned out to be more accurate than the model using constant transformations over the entire sample. An advantage of the mechanism developed for constructing regression models is that thanks to the knowledge of the rules for switching transformations, the resulting regression can be used for forecasting.
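The following sketch only illustrates the two ingredients named above, a floor-based switching rule and least-absolute-deviations estimation, on synthetic data; the article's actual approach identifies the switching rule and the estimates jointly by solving a mixed 0-1 integer linear program:

```python
# Sketch: a floor-based switching rule plus least-absolute-deviations (LAD)
# estimation on synthetic data; the rule and the two candidate transformations
# are illustrative, not the article's exact formulation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 80)
regime = np.floor(x / 5).astype(int).clip(0, 1)    # "floor" rule: x < 5 -> 0, else 1
f = np.where(regime == 0, np.log(x), np.sqrt(x))   # transformation switches by regime
y = 2.0 + 1.5 * f + rng.laplace(0, 0.2, x.size)    # true parameters (2.0, 1.5)

def lad_loss(beta):
    return np.sum(np.abs(y - beta[0] - beta[1] * f))

res = minimize(lad_loss, x0=[0.0, 1.0], method="Nelder-Mead")
print("LAD estimates:", res.x)                     # close to (2.0, 1.5)
```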

Keywords: regression analysis, nonlinear regression, least absolute deviations method, mixed 0–1 integer linear programming problem, integer function "floor", weight model of aircraft fuselage

Review and analysis of optical sensor-based computer vision technologies for autonomous navigation on dirt roads

2025. Vol. 13. No. 2. id 1892
Bychkov A.  Bulanov A. 

DOI: 10.26102/2310-6018/2025.49.2.045

This review is devoted to computer vision technologies for the autonomous navigation of a mobile robot on dirt roads, and to analyzing their degree of technological readiness. The selection of studies was conducted according to the PRISMA methodology using the academic article aggregator Google Scholar. Based on the analysis of the works from the selected sample, key technologies were identified, including datasets, terrain mapping techniques, and methods for road and obstacle detection. These were further divided into sub-technologies, each of which was evaluated for its level of technological readiness using the scale presented in the study – a newly proposed interpretation of the TRL scale – taking into account the particular challenges of working on dirt roads and in environments that are generally difficult to replicate under laboratory conditions. As a result of the study, statistics were compiled that highlight the most significant works in the field of autonomous navigation on dirt roads. It was also concluded that the main trend in navigation development involves the acquisition and processing of comprehensive data, that traversability analysis focuses on the extraction and processing of geometric features, and that there is an urgent need for high-quality datasets.

Keywords: off-road, dirt roads, obstacle detection, depth sensors, off-road classification, datasets

A practical software implementation of the BP curve and its components for analyzing Russia's balance of payments

2025. Vol. 13. No. 3. id 1891
Shchegolev A.V. 

DOI: 10.26102/2310-6018/2025.50.3.019

This article presents a practical implementation of the balance of payments (BP) curve using the Python programming language. The article is aimed at modeling the relationship between the interest rate, the exchange rate, and the state of external economic equilibrium within the modified IS-LM-BP model. The use of numerical methods and machine learning algorithms makes it possible to analyze the dynamics of macroeconomic indicators and assess the impact of external economic factors on the country's balance of payments. The study uses real statistical data, which ensures the practical applicability of the results obtained. The leading approach of the research is the development of software code for the numerical solution of a system of equations, calibration of the model on empirical data, and the construction of forecasts over various time horizons. The materials of the article demonstrate the practical value of modern computational economics tools for analyzing and modeling macroeconomic equilibrium, as well as their potential in developing economic policy measures. This model is useful for strategic analysis, as it makes it possible to assess the impact of changes in interest rates and the exchange rate on macroeconomic equilibrium. The developed methodology makes it possible not only to build a BP curve from real data, but also to use it to predict future economic conditions, which makes this approach useful for macroeconomic analysis and strategic planning.
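A minimal sketch of the numerical core, solving a stylized BP condition CA(Y, e) + KA(i - i*) = 0 for the interest rate across output levels, is given below; all functional forms and coefficients are hypothetical placeholders, not the calibrated model from the article:

```python
# Sketch: tracing a stylized BP curve - for each output level Y, solve
# CA(Y, e) + KA(i - i*) = 0 for the domestic interest rate i.
# All functional forms and coefficients are hypothetical placeholders.
import numpy as np
from scipy.optimize import brentq

i_star, e = 0.05, 90.0              # foreign rate and exchange rate (toy values)

def bp_residual(i, Y):
    CA = 120 - 0.2 * Y + 0.5 * e    # current account: worsens as income grows
    KA = 800 * (i - i_star)         # capital account: improves with i - i*
    return CA + KA

for Y in np.linspace(500, 1500, 11):
    i = brentq(bp_residual, -1.0, 1.0, args=(Y,))
    print(f"Y={Y:7.1f}  i={i:.4f}")  # the BP curve slopes upward in (Y, i)
```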

Keywords: balance of payments, BP curve, IS-LM-BP model, numerical modeling, macroeconomic equilibrium, correlation-regression analysis, python

Using the balanced ternary number system to improve the accuracy of calculations

2025. Vol. 13. No. 2. id 1889
Blinova D.V.  Giniyatullin V.M.  Kupbaev T. 

DOI: 10.26102/2310-6018/2025.49.2.026

The paper describes the use of the balanced ternary number system for calculating the elements of the inverse of ill-conditioned matrices. The conditioning of a matrix characterizes how strongly the solution of a system of linear equations can change in response to small perturbations in the data: the higher the condition number, the more sensitive the matrix is to small changes in the data. As an example of an ill-conditioned matrix, this paper considers the three-by-three Hilbert matrix. Based on a known expression, the true values of the elements of the inverse Hilbert matrix are calculated. An assessment is given of the errors in calculating the elements of the inverse Hilbert matrix obtained with varying degrees of calculation accuracy in the binary number system (using a computer, with a software implementation in C) and in the balanced ternary number system (calculations were performed manually). The calculation results are compared in the decimal number system. It is shown that the use of the balanced ternary number system makes it possible to reduce the error in calculating the elements of ill-conditioned matrices several-fold (by 3 or more times for low-precision data and by 1.5 or more times for more precise data).
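For reference, conversion between ordinary integers and balanced ternary digits {-1, 0, 1} takes a few lines; this sketch shows the carry rule that replaces a remainder of 2 with -1:

```python
# Sketch: conversion between integers and balanced ternary digits {-1, 0, 1}.
def to_balanced_ternary(n: int) -> list[int]:
    digits = []                    # least-significant digit first
    while n != 0:
        r = n % 3                  # Python's % yields 0..2 even for negative n
        if r == 2:                 # encode 2 as -1 with a carry of +1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits: list[int]) -> int:
    return sum(d * 3**i for i, d in enumerate(digits))

print(to_balanced_ternary(47))   # [-1, 1, -1, -1, 1]: -1 + 3 - 9 - 27 + 81 = 47
assert from_balanced_ternary(to_balanced_ternary(-47)) == -47
```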

Keywords: inverse matrix, Hilbert matrix, balanced ternary number system, ill-conditioned matrix, calculation errors

Informative features for electromagnetic detection and recognition of biological objects

2025. Vol. 13. No. 2. id 1888
Aleshkov A.A.  Tsvetkov G.A.  Kokovin A.N. 

DOI: 10.26102/2310-6018/2025.49.2.020

The relevance of the study stems from the need to improve the reliability and efficiency of physical protection systems of protected facilities in the face of growing security threats, which calls for more sensitive and selective methods of intruder identification, including the developed method of electromagnetic detection and recognition of biological objects (BO). The purpose of the work is to study the bifurcation process of interaction between an external electromagnetic field of the radio-wave range and the electromagnetic shell of a living organism in order to substantiate, evaluate, and calculate informative features of electromagnetic detection and recognition of BO, with the subsequent formation of a dictionary of typical features. The study is based on a previously developed mathematical model of a BO, refined and supplemented through analysis of the scientific literature on bioradioinformative technology and bioelectromagnetism. In the course of the work, the conditions and operating modes of a biological medium generating electromagnetic radiation are determined and described, depending on the combination of the energy and frequency parameters of the external field with the characteristics of this medium. The nomenclature of the most informative features of electromagnetic recognition, namely bifurcation parameters characterizing the mass, dimensions, and electrodynamic properties of a biological object, is proposed and substantiated. Analytical expressions for calculating the classification features of BO are derived and confirmed by the results of a computational experiment. A dictionary of intruder attributes is developed, enabling informed decision-making about the presence of an object in the controlled space, its belonging to a certain class, and its motion parameters. The presented results can be used in the development of intruder identification means for security and territory monitoring systems.

Keywords: informative features, information interaction, biological object, electromagnetic fields, strength, bioelectromagnetism, intruder identification, bifurcation parameters, feature dictionary

A novel hybrid anomaly detection model using federated graph neural networks and ensemble machine learning for network security

2025. Vol. 13. No. 2. id 1887
Arm A.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2025.49.2.044

Traditional network intrusion detection systems face increasingly complex challenges as the sophistication and frequency of cyber-attacks grow. This research proposes a federated ensemble graph-based network (FEGB-Net) as a novel hybrid approach to anomaly detection that increases detection performance while minimizing false positives. The framework combines federated graph neural networks with an ensemble of three well-established machine learning techniques – Random Forest, XGBoost, and LightGBM – to accurately characterize expected traffic patterns and discern anomalies. Moreover, the framework uses federated learning to ensure privacy-compliant decentralized training across multiple clients learning the same model concurrently without exposing raw data. FEGB-Net is evaluated on the CICIDS2017 dataset, achieving 97.1% accuracy, a 96.2% F1-score, and an AUC-ROC of 0.98, surpassing both traditional machine learning and deep learning approaches. By relying on graph signal processing to shape the relational learning and on ensemble-based voting to categorize results, FEGB-Net can become a practical and effective framework for real-world use due to its transparent interpretability, relative ease of use, and scalability. Key contributions include a privacy-preserving federated GNN and ensemble framework, a novel meta-fusion algorithm, a reproducible Python implementation, and a large-scale evaluation on CICIDS2017. Future work includes experiments applying the obtained results in real time and subsequent research considering new attack vectors.
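The ensemble layer on its own, soft voting over Random Forest, XGBoost, and LightGBM, can be sketched as follows on synthetic data; the federated-GNN feature extraction and meta-fusion stages of FEGB-Net are out of scope here:

```python
# Sketch: the ensemble layer only - soft voting over Random Forest, XGBoost,
# and LightGBM on synthetic tabular features; the federated-GNN feature
# extraction and meta-fusion stages of FEGB-Net are out of scope here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("xgb", XGBClassifier(eval_metric="logloss")),
                ("lgbm", LGBMClassifier())],
    voting="soft",   # average the predicted class probabilities
)
ensemble.fit(Xtr, ytr)
print("hold-out accuracy:", ensemble.score(Xte, yte))
```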

Keywords: network security, anomaly detection, federated learning, graph neural networks, ensemble learning, FEGB-Net, model evaluation metrics (AUC-ROC)

Assessment of the reliability and effectiveness of artificial intelligence systems in radiation diagnostics at the operational stage

2025. Vol. 13. No. 2. id 1886
Zinchenko V.V.  Vladzimirskyy A.V.  Arzamasov K.M. 

DOI: 10.26102/2310-6018/2025.49.2.016

In the context of the active implementation of artificial intelligence (AI) technologies in healthcare, ensuring the stable, controlled, and high-quality operation of such systems at the operational stage is of particular relevance. Monitoring of AI systems is enshrined in law: within three years after the implementation of medical devices, including AI systems, regular reports must be provided to regulatory authorities. The aim of the study is to develop methods for assessing the reliability and effectiveness of medical artificial intelligence for radiation diagnostics. The proposed methods were tested on data from the Moscow Experiment on the use of innovative computer vision technologies for chest radiography, collected in 2023. The developed methods take into account a set of parameters: emerging technological defects, study processing time, the degree of agreement of doctors with the analysis results, and other indicators. The proposed approach can be adapted to various types of medical studies and become the basis for a comprehensive assessment of AI systems as part of the monitoring of medical devices with artificial intelligence. The implementation of these methods can increase the trust of the medical community not only in specific AI-based solutions, but also in intelligent technologies in healthcare in general.

Keywords: artificial intelligence, reliability, efficiency, artificial intelligence system, radiology, radiation diagnostics, monitoring

Optimization of the nomenclature-volume balance of suppliers and consumers in the management of the organizational system of drug supply

2025. Vol. 13. No. 2. id 1885
Shvedov N.N.  Lvovich Y.E. 

DOI: 10.26102/2310-6018/2025.49.1.018

The article discusses approaches and tools aimed at improving the intelligent management of the nomenclature component in the drug supply system using optimization problems. The focus is on the relationship between the list of drugs and their quantitative distribution such that the degree of balance between supply and demand is taken into account. The problem lies in insufficient coordination of drug flows, stock imbalances, and inefficient distribution of resources. All these factors lead to increased costs and reduced availability of vital drugs for end consumers. Effective management of the nomenclature-volume balance makes it possible to avoid shortages and excess stocks and to increase the sustainability of the drug supply system, ensuring optimal stocks and drug availability. The main attention is paid to the use of optimization problems, with expert assessments of their parameters, in managing the digital interaction of suppliers and consumers, which allows for greater accuracy in controlling the product range and demand. Control here means minimizing shortages or excess stocks while guaranteeing the availability of the necessary drugs for the end consumer. The results of the study were used to develop an intelligent subsystem for supporting management decisions, promoting balanced resource management and increasing drug availability.
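A toy version of such a balancing problem, allocating drug volumes from suppliers to consumers at minimum cost subject to supply and demand constraints, can be posed as a linear program; all figures below are invented for illustration:

```python
# Sketch: a toy nomenclature-volume balancing problem - allocate drug volumes
# from 2 suppliers to 3 consumers at minimum cost subject to supply limits and
# demand requirements; all figures are invented for illustration.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 5.0],      # unit cost, supplier i -> consumer j
                 [3.0, 7.0, 4.0]])
supply = np.array([120.0, 150.0])      # available volume per supplier
demand = np.array([80.0, 90.0, 70.0])  # required volume per consumer

c = cost.ravel()                        # x is row-major: x[i, j]
A_sup = np.kron(np.eye(2), np.ones(3))  # sum_j x[i, j] <= supply[i]
A_dem = -np.tile(np.eye(3), 2)          # sum_i x[i, j] >= demand[j]
res = linprog(c, A_ub=np.vstack([A_sup, A_dem]),
              b_ub=np.concatenate([supply, -demand]), bounds=(0, None))
print(res.x.reshape(2, 3))
print("total cost:", res.fun)
```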

Keywords: organizational system, drug provision, management, optimization, expert assessment

Method of numerical calculation of the security level of information infrastructure components

2025. Vol. 13. No. 2. id 1884
Belikov Y.V. 

DOI: 10.26102/2310-6018/2025.49.2.025

One of the key issues in organizing information security is assessing compliance with infrastructure protection requirements, as well as the response to current threats and risks. This assessment is ensured by conducting an appropriate audit. Domestic and international standards specify various methods for conducting an information security audit and provide conceptual models for constructing the assessment process. However, the disadvantages of these standards include the impossibility of their deep adaptation to individual information systems, as well as the partial or complete absence of a numerical assessment of security parameters, which can undermine the objectivity of the assessment and fail to reflect real threats. In turn, adapting numerical methods to the analysis of the maturity level of information security processes makes it possible to solve a number of important problems: automating the assessment process, providing a more accurate indicator for identifying vulnerable components of the information infrastructure, and integrating the obtained values with other processes aimed at neutralizing current security threats from intruders. The purpose of this work is to analyze the possibility of using a numerical assessment of the information security maturity level, as well as the use of fuzzy sets in the audit.
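A minimal sketch of combining a numerical maturity score with fuzzy sets is given below; the criteria, weights, and triangular membership cut-offs are illustrative only:

```python
# Sketch: numerical security-maturity score with triangular fuzzy memberships;
# criteria, weights, and level cut-offs below are illustrative only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

ratings = np.array([7.5, 4.0, 6.0, 8.5])   # expert scores per criterion, 0..10
weights = np.array([0.3, 0.3, 0.2, 0.2])   # criterion importance, sums to 1

score = float(ratings @ weights)           # aggregated numerical assessment
levels = {"low": (0.0, 2.5, 5.0), "medium": (2.5, 5.0, 7.5),
          "high": (5.0, 7.5, 10.0)}
membership = {name: round(tri(score, *abc), 3) for name, abc in levels.items()}
print(score, membership)   # e.g. 6.35 -> partly "medium", partly "high"
```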

Keywords: information security, audit, maturity level assessment, information security tools, numerical assessment, fuzzy sets, fuzzy logic, security criteria, risks

Application of the minimum vertex cover problem in a graph to improve the robustness of a digital identity system

2025. Vol. 13. No. 2. id 1883
Akutin A.S.  Pechenkin V.V. 

DOI: 10.26102/2310-6018/2025.49.2.015

This paper examines the features of building digital identity systems for managing information technology processes in an enterprise, whose architecture depends on decentralized data registers – blockchains. The paper considers blockchains as weighted graphs and formulates a number of theses about the specifics of the functioning of such distributed networks in real information technology enterprises. The features of various network topologies are considered, along with possible architectural vulnerabilities and flaws that can affect the operation of the entire network: centralization of mining, centralization of staking, and various attacks on a functioning network (topological attacks and the 51% attack). Blockchains using various consensus algorithms are considered, taking into account their features. The paper formulates the minimum vertex cover problem in a graph and emphasizes the importance of applying this problem to the described digital identity system in order to increase the reliability of the blockchain computer network by analyzing its topology. Various methods of finding a minimum vertex cover in a graph are considered: exact and heuristic algorithms. The paper analyzes an application that implements an ant colony algorithm to solve the problem, and provides numerical characteristics of the algorithm and its formal description.
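For orientation, two baseline ways to approximate a minimum vertex cover of a peer-topology graph are sketched below (the article itself uses an ant colony algorithm); the scale-free test graph is a stand-in for a real blockchain topology:

```python
# Sketch: two baseline approximations of a minimum vertex cover on a
# hypothetical scale-free peer topology (the article uses an ant colony
# algorithm; these serve as simple reference points).
import networkx as nx
from networkx.algorithms.approximation import min_weighted_vertex_cover

G = nx.barabasi_albert_graph(50, 2, seed=7)   # stand-in blockchain topology

# 1) Library 2-approximation
cover = min_weighted_vertex_cover(G)
print("2-approximation cover size:", len(cover))

# 2) Greedy heuristic: repeatedly remove the highest-degree node
H, greedy_cover = G.copy(), set()
while H.number_of_edges() > 0:
    v = max(H.degree, key=lambda kv: kv[1])[0]
    greedy_cover.add(v)
    H.remove_node(v)
print("greedy cover size:", len(greedy_cover))
```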

Keywords: digital identity system, blockchain, distributed systems, graphs, minimum vertex cover search

Modeling and approximation of scattering characteristics of elementary reflectors

2025. Vol. 13. No. 3. id 1882
Preobrazhensky A.P.  Avetisyan T.V.  Preobrazhensky Y.P. 

DOI: 10.26102/2310-6018/2025.50.3.015

Tasks related to the modeling of various electrodynamic objects are encountered in radar, the design of electrodynamic devices, measures to reduce radar visibility, and the development of antennas and diffraction structures. In general, based on the decomposition method, electrodynamic objects can be represented as a set of elementary components, and the scattering properties of the entire object are determined by the scattering properties of each component. Determining such characteristics generally requires appropriate numerical methods. For a fairly limited number of diffraction structures, various analytical expressions are given in the literature; in some cases they are rather cumbersome and require some experience from researchers to use. The paper proposes to approximate the characteristics of elementary reflectors based on the least squares method and Lagrange polynomials. On the basis of the study, the degrees of the approximating polynomials were determined that give an error not exceeding a specified value. The results of the work can be used in the design of diffraction structures; based on the results obtained, the time for calculating scattering characteristics will be reduced.
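The two approximation routes named above, least squares polynomial fitting and Lagrange interpolation, are contrasted in the following sketch on a toy scattering curve; the degree-selection loop mirrors the idea of raising the polynomial degree until the error falls below a specified value:

```python
# Sketch: least squares polynomial fit vs. Lagrange interpolation on a toy
# scattering curve (stand-in for RCS samples of an elementary reflector).
import numpy as np
from scipy.interpolate import lagrange

angles = np.linspace(0, 90, 10)      # aspect angle, degrees
rcs = np.cos(np.radians(angles)) ** 2 + 0.05 * np.sin(5 * np.radians(angles))

# Raise the degree until the maximum deviation drops below a specified value
for deg in range(2, 10):
    coeffs = np.polyfit(angles, rcs, deg)
    err = np.max(np.abs(np.polyval(coeffs, angles) - rcs))
    if err < 1e-3:
        print("least-squares degree:", deg, "max error:", f"{err:.2e}")
        break

# The Lagrange polynomial passes through the samples exactly
L = lagrange(angles, rcs)
print("Lagrange max error at nodes:", np.max(np.abs(L(angles) - rcs)))
```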

Keywords: modeling, approximation, edge wave method, modal method, diffraction structure

Algorithms and programs for calculating nonparametric criteria for statistical hypothesis testing based on permutations with repetitions

2025. Vol. 13. No. 2. id 1880
Agamirov L.V.  Agamirov V.L.  Toutova N.V.  Andreev I.A.  Ziganshin D. 

DOI: 10.26102/2310-6018/2025.49.2.022

One of the important tasks of statistical analysis is testing statistical hypotheses, and within this group the most promising subgroup is that of nonparametric rank criteria, which are very stable when working with small samples, where the hypothetical distribution law cannot be reliably justified. This, in turn, necessitates abandoning asymptotic approximations in favor of exact critical values of the criteria (the so-called p-values in modern literature). At present, analytical solutions are available only for a very limited class of criteria (the sign test, Wilcoxon, the runs test, Ansari-Bradley). For all others, an exact solution requires computerized enumeration of a huge number of possible permutations of ranks. The focus of the present work is the creation of a universal algorithm for obtaining the exact distribution of ranks of nonparametric criteria quickly. The algorithm, implemented in the open-source programming languages C++, JavaScript, and Python, is based on a well-known combinatorics problem, permutations with repetitions, adapted to the task of hypothesis testing by rank criteria. The following criteria are considered: Kruskal-Wallis, Mood, Lehmann-Rosenblatt, as well as the group of normal scores criteria: Fisher-Yates, Capon, Klotz, and Van der Waerden. The algorithm is also adapted to other possible ranking problems of nonparametric statistics.
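The combinatorial core, enumerating permutations with repetitions of the group-label vector to obtain an exact null distribution of a rank statistic, can be sketched as follows for a two-sample rank sum (the observed value is hypothetical):

```python
# Sketch: exact null distribution of a two-sample rank-sum statistic via
# enumeration of permutations with repetitions of the group-label vector.
import numpy as np
from sympy.utilities.iterables import multiset_permutations

n1, n2 = 4, 5                        # small samples, so full enumeration is exact
labels = [0] * n1 + [1] * n2         # multiset of group labels over ranks 1..n
ranks = np.arange(1, n1 + n2 + 1)

stats = np.array([ranks[np.array(p) == 0].sum()     # rank sum of group 0
                  for p in multiset_permutations(labels)])

observed = 12                         # hypothetical observed rank sum
p_value = np.mean(stats <= observed)  # one-sided exact p-value
print(len(stats), "arrangements (C(9,4) = 126), p =", p_value)
```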

Keywords: statistical hypothesis testing, nonparametric criteria, rank criteria, exact distributions of rank criteria, permutations with repetitions, permutation algorithms, C++ programs for permutations

Quantum algorithms and cybersecurity threats

2025. Vol. 13. No. 2. id 1878
Kozachok A.V.  Tarasenko S.S.  Kozachok A.V. 

DOI: 10.26102/2310-6018/2025.49.2.019

The purpose of this article is to assess potential threats to cybersecurity arising from the development of quantum algorithms. The text analyzes existing quantum algorithms, such as Shor's algorithm and Grover's algorithm, and explores their potential application to compromising existing cryptographic systems. The research approach includes a literature review and an examination of the core mechanisms underlying quantum computers, along with an assessment of their capability to execute algorithms potentially affecting various cryptographic systems, both symmetric and asymmetric. Additionally, the paper discusses the prospects for developing quantum-resistant cryptographic algorithms aimed at protecting against cryptanalysis based on quantum computations. Based on the analysis of existing quantum algorithms and their potential impact on widely used cryptographic systems, the authors conclude that, at present, there is no compelling evidence of a real possibility of compromising asymmetric or symmetric cryptographic algorithms in the near future by means of quantum computations. However, considering the ongoing development of quantum technologies and the need to protect confidential information whose relevance will not significantly diminish over time, the development and active implementation of quantum-resistant cryptographic methods is required to ensure information confidentiality in the long term.
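The practical reading of Grover's quadratic speedup for symmetric ciphers mentioned above is simple arithmetic: exhaustive key search drops from about 2^n to about 2^(n/2) oracle queries, which is why doubling the key length is the standard mitigation. A trivial illustration:

```python
# Sketch: effective brute-force cost of symmetric keys under Grover's algorithm
# (query counts only; real circuit depth and error correction add large overheads).
for bits in (128, 192, 256):
    print(f"AES-{bits}: classical ~2^{bits}, Grover ~2^{bits // 2} queries")
```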

Keywords: post-quantum cryptography, Shor's algorithm, Grover's algorithm, asymmetric cryptography, symmetric cryptography, quantum computers, preservation of information confidentiality

Mathematical model of competition for a limited resource in ecosystems: numerical and analytical study of sustainability

2025. Vol. 13. No. 2. id 1877
Gutnik D.I.  Belykh T.I.  Rodionov A.V.  Bukin Y.S. 

DOI: 10.26102/2310-6018/2025.49.2.017

This paper investigates the dynamics of interaction between two species competing for a limited resource using a mathematical model that is an autonomous system of ordinary differential equations in normal form. The model is based on Gause's principle, Volterra's hypotheses, Tilman's theory of resource competition, and the Michaelis-Menten equation to describe population growth. The system of nonlinear ordinary differential equations is analyzed for stability at stationary points using the first approximation analytical method proposed by A.A. Lyapunov, which is suitable for the study of systems consisting of two or more equations, and analytically and numerically solved for various values of model parameters. The results show that species survival and coexistence depend on the level of the limiting resource, the ratio of fertility and mortality rates and intraspecific competition, and substrate concentration. Numerical simulations correspond to scenarios of extinction of one species, dominance of one species, or their coexistence depending on environmental conditions. The results obtained in this work are consistent with natural ecological relationships and emphasize the importance of considering anthropogenic factors, such as eutrophication, when predicting changes in ecological systems.
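A minimal sketch of such a system, two consumers with Michaelis-Menten (Monod) uptake of one renewed resource, can be integrated numerically as follows; all parameter values are illustrative, and the outcome matches Tilman's rule that the species with the lower break-even resource level wins:

```python
# Sketch: two species competing for one limiting resource with Monod
# (Michaelis-Menten) uptake; all parameter values are illustrative.
from scipy.integrate import solve_ivp

mu = (1.0, 0.8)      # maximal growth rates
K = (0.3, 0.1)       # half-saturation constants
d = (0.25, 0.25)     # mortality rates
Y = (0.5, 0.5)       # yield coefficients
D, R0 = 0.2, 1.0     # resource turnover rate and inflow concentration

def rhs(t, y):
    N1, N2, R = y
    g1 = mu[0] * R / (K[0] + R)   # per-capita growth, species 1
    g2 = mu[1] * R / (K[1] + R)   # per-capita growth, species 2
    return [N1 * (g1 - d[0]),
            N2 * (g2 - d[1]),
            D * (R0 - R) - N1 * g1 / Y[0] - N2 * g2 / Y[1]]

sol = solve_ivp(rhs, (0, 500), [0.05, 0.05, R0])
N1, N2, R = sol.y[:, -1]
# Species 2 has the lower break-even resource level R* = K*d/(mu - d)
# (about 0.045 vs. 0.100), so it excludes species 1, as Tilman's theory predicts.
print(f"N1={N1:.3f}  N2={N2:.3f}  R={R:.3f}")
```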

Keywords: population dynamics, limiting resource, mathematical model, Lyapunov method, simulation, eigenvalues, stability of equilibrium state

Enhancing the trustworthiness of explainable artificial intelligence through fuzzy logic and ontology

2025. Vol. 13. No. 2. id 1872
Kosov P.I.  Gardashova L.A. 

DOI: 10.26102/2310-6018/2025.49.2.014

The insufficient explainability of machine learning models has long constituted a significant challenge in the field. Specialists across various domains of artificial intelligence (AI) application have endeavored to develop explicable and reliable systems. To address this challenge, DARPA formulated a contemporary approach to explainable AI (XAI). Subsequently, Bellucci et al. expanded DARPA's XAI concept by proposing a novel methodology predicated on semantic web technologies. Specifically, they employed OWL2 ontologies for the representation of user-oriented expert knowledge. This system enhances confidence in AI decisions through the provision of more profound explanations. Nevertheless, XAI systems continue to encounter difficulties when confronted with incomplete and imprecise data. We propose a novel approach that utilizes fuzzy logic to address this limitation. Our methodology is founded on the integration of fuzzy logic and machine learning models to imitate human thinking. This new approach more effectively interfaces with expert knowledge to facilitate deeper explanations of AI decisions. The system leverages expert knowledge represented through ontologies, maintaining full compatibility with the architecture proposed by Bellucci et al. in their work. The objective of this research is not to enhance classification accuracy, but rather to improve the trustworthiness and depth of explanations generated by XAI through the application of "explanatory" properties and fuzzy logic.

Keywords: explainable artificial intelligence, explainability, ontology, fuzzy system, fuzzy clustering

Modeling of radiopaque angiographic images for determining vessel parameters using dual spectral scanning

2025. Vol. 13. No. 2. id 1871
Kuzmin A.A.  Sukhomlinov A.Y.  Zhilin I.A.  Filist S.A.  Korobkov S.V.  Serebrovskiy V.V. 

DOI: 10.26102/2310-6018/2025.49.2.011

The purpose of the study is to develop a methodology for cognitive determination of the parameters of medical grayscale images based on dual spectral scanning methods. A mathematical model of radiopaque vessel images is described, and based on this model, a method for determining vessel parameters using spectral scanning was developed. The model represents oriented brightness transitions using Walsh functions. The vessel model was convolved with wavelets based on the first Walsh functions; the convolution yields extrema at the points of brightness transition, which can be used as an informative parameter indicating the presence of a vessel contour. Information from many such parameters in a local area is aggregated to give an averaged characteristic of this area. This significantly decreases the influence of noise on the final result at the cost of an acceptable decrease in the resolution of localizing significant arterial occlusions. The averaged convolution results are recommended to be calculated using a two-dimensional spectral Walsh transform in a sliding window with subsequent frequency selection. The method is illustrated by classifying the boundary contour of a vessel model and a real radiopaque image of an artery with a high noise level. Theoretical and practical approaches to detecting artery contours are compared. Experimental studies of the proposed method have shown that informative parameters can be estimated even when analyzing images with unsatisfactory contrast and a low signal-to-noise ratio. The use of the dual spectral scanning method in systems for automatic analysis of radiopaque angiographic images makes it possible to obtain informative parameters under conditions of high image noise.
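The core detection step, convolving a noisy brightness profile with a two-level kernel shaped like the first Walsh function and reading the edge position from the extremum of the response, can be sketched in one dimension as follows (synthetic profile, illustrative window width):

```python
# Sketch: step-edge detection with a two-level (first-Walsh-like) kernel
# on a noisy one-dimensional brightness profile across a vessel boundary.
import numpy as np

rng = np.random.default_rng(0)
profile = np.concatenate([np.full(64, 0.3), np.full(64, 0.7)])  # ideal step edge
profile += rng.normal(0.0, 0.15, profile.size)                  # heavy noise

w = 16                                               # half-width of the window
kernel = np.concatenate([np.ones(w), -np.ones(w)])   # +1...+1, -1...-1
response = np.convolve(profile, kernel, mode="valid")

# Aggregating 2*w samples suppresses the noise; the extremum marks the transition
edge = int(np.argmax(np.abs(response))) + w
print("detected edge near index", edge)              # true edge is at index 64
```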

Keywords: spectral analysis, informative parameters, image of a vessel, radiopaque angiography, Walsh functions

A method for quantitative assessment of the danger of information security threats being realized against critical information infrastructure objects by potential violators

2025. Vol. 13. No. 2. id 1870
Chernov D.V. 

DOI: 10.26102/2310-6018/2025.49.2.013

In the context of the increasing informatization of various production areas, where most technological processes and information flows are automated and controlled by computer technology, choosing measures to ensure the security of information (SI) at critical information infrastructure objects (CIIO) becomes a pressing issue. The article discusses existing methods and approaches to assessing the risk of SI threats being realized against CIIO, which include automated process control systems, information systems, and information and telecommunication networks. These approaches help SI specialists assess the risks associated with possible cyberattacks and data leaks. A method is proposed for quantitatively assessing the degree of danger of SI threats being realized, based on intelligent analysis of data stored in the CIIO logging subsystem. The method makes it possible to quantitatively assess the degree of danger of SI threats being realized by potential violators with respect to a specific CIIO. The developed method complements the assessments of SI specialists with expert assessments from additionally involved specialists: professionals in the field of CIIO technological processes and information flows. The results of the study are recommended for use in modeling SI threats and developing requirements for information security tools at CIIO.

Keywords: information security, critical information infrastructure, automated control system, technological process, threat, violator, potential, danger of threat realization, risk, damage