metadata of articles for the last 2 years
The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018


Optimal control of finite increments of model factors based on sensitivity analysis

2026. T.14. № 2. id 2202
Sysoev A.S. 

DOI: 10.26102/2310-6018/2026.53.2.010

The article addresses the topical inverse problem of target-oriented control: determining the necessary finite changes to the system's input factors to achieve a desired target state, as opposed to the classical direct problem of forecasting. To solve it, a new methodological approach is proposed. This approach is based on sensitivity analysis utilizing the Lagrange mean value theorem. This framework allows for moving beyond local linearization to precisely account for nonlinear effects and factor interactions under substantial, practically observed changes. The key scientific result is the development of a universal iterative algorithm, which, for a given mathematical model, determines the vector of finite changes for the controllable factors that ensures the required increment in the output indicator with minimal total cost of the introduced changes and within given constraints. At each iteration step, the model's gradient (sensitivity estimate) is computed at an intermediate point, whose position is sequentially refined, and an auxiliary constrained optimization problem is solved. The practical efficiency and operability of the proposed method are verified using a numerical example with the nonlinear Ishigami model. The algorithm successfully found the optimal control action, ensuring high accuracy in achieving the target.
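The iterative scheme described in the abstract can be sketched as follows. This is a simplified illustration, not the authors' exact algorithm: the Ishigami function matches the article's numerical example, but the midpoint choice for the intermediate point, the minimum-norm (quadratic-cost) update, and the omission of box constraints are all assumptions.

```python
import numpy as np

# Ishigami test function, as used in the article's numerical example
def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[0]) + a * np.sin(x[1])**2 + b * x[2]**4 * np.sin(x[0])

def grad(f, x, h=1e-6):
    # central-difference gradient (sensitivity estimate)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def finite_increment_control(f, x0, dy, tol=1e-8, max_iter=50):
    """Find a finite factor increment dx with f(x0 + dx) - f(x0) ≈ dy.
    The gradient is evaluated at the midpoint of the increment (a crude
    stand-in for the refined Lagrange intermediate point), and the
    minimum-norm correction plays the role of the cost-minimal step."""
    dx = np.zeros_like(x0)
    for _ in range(max_iter):
        r = dy - (f(x0 + dx) - f(x0))   # remaining target residual
        if abs(r) < tol:
            break
        g = grad(f, x0 + 0.5 * dx)      # sensitivity at the midpoint
        dx = dx + g * r / (g @ g)       # minimum-norm update toward target
    return dx

x0 = np.array([0.5, 0.5, 0.5])
dx = finite_increment_control(ishigami, x0, dy=0.5)
print(ishigami(x0 + dx) - ishigami(x0))   # close to the target increment 0.5
```

Because the gradient is re-evaluated along the actual increment rather than only at the starting point, the loop accounts for nonlinearity that a single local linearization would miss.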

Keywords: inverse control problem, sensitivity analysis, finite change analysis, Lagrange mean value theorem, constrained optimization

Application of artificial intelligence methods to analyze human behavioral biometrics in ensuring the security of complex information systems

2026. T.14. № 2. id 2201
Shelestova O.V.  Kochkarov A.A. 

DOI: 10.26102/2310-6018/2026.53.2.015

This article examines the application of artificial intelligence methods and technologies to analyzing human behavioral biometrics in the security of complex information systems. The relevance of the study stems from the limitations of traditional authentication mechanisms, which focus primarily on the initial stage of a user session and are ineffective in detecting user impersonation during interaction with the system. An alternative approach is proposed, using user behavioral characteristics to continuously assess trust in the current session. The paper analyzes anonymized text input data on a mobile device, reflecting the temporal and structural features of user interaction with the interface. It is shown that the combination of such characteristics allows for the identification of stable behavioral patterns suitable for user profiling. Using dimensionality reduction and cluster analysis methods, typical behavioral profiles are identified, differing in input style and rhythm, as well as the nature of corrections. Cluster membership is established to be maintained across multiple sessions with acceptable variability in individual characteristics. A risk-based approach to assessing behavioral deviations is proposed, based on comparing current behavioral indicators with a typical cluster profile. The study's results confirm the feasibility of using cluster behavioral profiles in risk-based access control systems and can be used in the design and development of continuous authentication mechanisms in complex information systems.
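The risk-based comparison of current behavior against a typical cluster profile can be sketched as below; a minimal illustration under stated assumptions: the three features, their numeric values, and the plain z-score risk measure are hypothetical, not the authors' feature set or scoring rule.

```python
import numpy as np

def session_risk(session, centroid, stds):
    """Risk of a session: mean absolute z-score of its behavioral
    features relative to the user's typical cluster profile."""
    z = np.abs((session - centroid) / stds)
    return float(z.mean())

# Cluster profile learned offline; features and values are illustrative
# (e.g. mean inter-key interval, correction rate, input tempo).
centroid = np.array([120.0, 0.05, 35.0])
stds     = np.array([ 15.0, 0.02,  5.0])

typical = np.array([118.0, 0.06, 34.0])   # consistent with the profile
anomal  = np.array([200.0, 0.20, 10.0])   # strong behavioral deviation

print(session_risk(typical, centroid, stds))  # low risk
print(session_risk(anomal, centroid, stds))   # high risk -> re-authenticate
```

A continuous-authentication loop would recompute this score during the session and trigger step-up authentication once it crosses a tuned threshold.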

Keywords: behavioral biometrics, information security, artificial intelligence, machine learning, cluster analysis, continuous authentication, user behavior analysis

Agent-based approach to intelligent search in library systems

2026. T.14. № 2. id 2199
Rzyankin I.S.  Baryshev R.A.  Guchko A.A. 

DOI: 10.26102/2310-6018/2026.53.2.008

The article explores the application of an agent-based Retrieval-Augmented Generation (Agentic RAG) approach to intelligent search tasks in library collections. The object of the study is the Agentic RAG architecture, which integrates information retrieval mechanisms with agent-based planning and self-evaluation of intermediate results. The addressed problem concerns the limitations of classical Retrieval-Augmented Generation in handling complex thematic and contextual queries within semantically rich library data environments. Unlike traditional RAG pipelines, the agent-based architecture enables iterative refinement of search strategies, adaptive decision-making, and reassessment of intermediate outcomes. The research methodology is based on the development of a software prototype implementing Agentic RAG and its experimental comparison with a classical RAG baseline using a real university library corpus comprising bibliographic metadata, annotations, and full-text fragments. The evaluation framework includes standard information retrieval metrics (Precision@k, Recall@k, MRR, nDCG) as well as expert-based assessment of answer relevance. The results demonstrate a consistent superiority of Agentic RAG in terms of retrieval accuracy, recall, and ranking quality, particularly for complex queries. However, the interpretation of findings is constrained by the selected evaluation metrics and the characteristics of the experimental corpus. The practical significance lies in the potential integration of agent-based architectures into library information systems without requiring substantial infrastructural changes.
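The standard information-retrieval metrics named in the abstract (Precision@k, Recall@k, MRR, nDCG) can be sketched directly; the definitions below are the conventional ones, which may differ in detail from the article's evaluation framework.

```python
import math

def precision_at_k(relevant, retrieved, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in retrieved[:k] if d in relevant) / k

def recall_at_k(relevant, retrieved, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(1 for d in retrieved[:k] if d in relevant) / len(relevant)

def mrr(relevant, retrieved):
    """Reciprocal rank of the first relevant document."""
    for i, d in enumerate(retrieved, 1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def ndcg_at_k(gains, k):
    """Normalized DCG; gains are graded relevance scores in ranked order."""
    dcg = sum(g / math.log2(i + 1) for i, g in enumerate(gains[:k], 1))
    ideal = sorted(gains, reverse=True)
    idcg = sum(g / math.log2(i + 1) for i, g in enumerate(ideal[:k], 1))
    return dcg / idcg if idcg else 0.0

print(precision_at_k({'a', 'b'}, ['a', 'x', 'b', 'y'], 3))  # 2 of top 3
print(mrr({'b'}, ['x', 'b', 'a']))                          # first hit at rank 2
```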

Keywords: agent-based search, Retrieval-Augmented Generation, library information systems, intelligent search, semantic search, neural network technologies, agent architectures

Ontology-based approach to predicting consumer purchasing behavior in e-commerce

2026. T.14. № 2. id 2196
Svyatov R.S. 

DOI: 10.26102/2310-6018/2026.53.2.018

The relevance of this study is determined by the need to improve the accuracy and interpretability of models for predicting consumer purchasing behavior in online stores. Existing machine learning methods demonstrate high performance; however, their effectiveness largely depends on the composition and structure of the feature space, which is typically formed empirically and does not reflect the causal relationships between user actions. This study aims to develop a purchasing behavior prediction method based on an ontological analysis of the e-commerce domain. A formalized approach is proposed for describing entities and their interrelations, providing a systematic construction of the feature space and enabling its scalability across various online stores. The gradient boosting algorithm CatBoost was employed as the machine learning tool, trained on data obtained from the Yandex.Metrica web analytics system. The proposed method was tested on five online stores with different thematic focuses. Experimental results demonstrated stable quality metrics, with F-scores ranging from 65 % to 83 %, confirming the applicability and reproducibility of the developed approach. The findings have practical significance for the development of intelligent decision support systems in e-commerce and can be utilized in designing scalable analytical platforms for predicting user activity and purchase conversion.

Keywords: machine learning, ontology analysis, user behavior analysis, e-commerce, consumer behavior prediction, online stores

A comparative study of deep learning architectures for interpretable diagnosis of retinal diseases

2026. T.14. № 2. id 2195
Miroshnichenko V.V.  Kashirina I.L. 

DOI: 10.26102/2310-6018/2026.53.2.016

Interpretability of deep learning decisions remains a critical requirement for their application in medical diagnostics. This study presents a comparative analysis of three modern neural network architectures, Vision Transformer (ViT), Swin Transformer, and ConvNeXt, for multiclass classification of retinal diseases using optical coherence tomography (OCT) images. The research was conducted on the open OCTDL dataset containing 2,064 images across seven diagnostic categories with pronounced class imbalance. To compensate for this imbalance, a loss function weighting strategy was employed. All three models achieved validation accuracy exceeding 0.91, with ConvNeXt demonstrating the best performance (0.945) and an optimal balance of sensitivity and specificity, particularly for rare pathologies. Model interpretability was evaluated using Grad-CAM, attention weight visualization, and the model-agnostic LIME method. The analysis revealed that ConvNeXt combined with Grad-CAM provides the most reliable localization of clinically significant features, whereas ViT attention maps and Swin Transformer activation maps often appeared blurred or focused on non-informative regions. The results confirm the advantage of ConvNeXt as the most promising architecture for clinical deployment in ophthalmological diagnostics, owing to its combination of high accuracy, interpretability, and moderate computational requirements.
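The loss-weighting strategy for class imbalance can be illustrated with the common inverse-frequency scheme; this is a textbook formulation, and the article's exact weighting may differ. The class counts below are hypothetical (chosen only to total 2,064 samples across 7 categories, matching the dataset size).

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights w_c = N / (K * n_c): rare classes receive
    proportionally larger weight in a weighted cross-entropy loss."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts = np.maximum(counts, 1.0)   # guard against empty classes
    return len(labels) / (n_classes * counts)

# Illustrative imbalanced label counts for 7 diagnostic categories
labels = np.repeat(np.arange(7), [800, 500, 300, 200, 150, 80, 34])
w = inverse_frequency_weights(labels, 7)
print(w)   # the rarest class receives the largest weight
```

With these weights, each class contributes equally to the total loss in expectation, which counteracts the tendency of the model to ignore rare pathologies.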

Keywords: deep learning, Vision Transformer, Swin Transformer, ConvNeXt, retinal diseases, Grad-CAM

Application of an actuator model to improve the performance of an unmanned aerial vehicle lateral g-load control system

2026. T.14. № 3. id 2194
Smirnov V.A.  Orlov V.P. 

DOI: 10.26102/2310-6018/2026.54.3.004

This article examines the problem of improving the performance and accuracy of g-load control loops for highly maneuverable unmanned aerial vehicles. It is noted that traditional approaches based on a full range of physical sensors and linearized models lead to design complexity and are insufficient to compensate for significant aerodynamic nonlinearities and parameter spreads. The proposed solution is a transition to model-based control, replacing the steering actuator position sensor signal with the output signal of its virtual mathematical model. The study aims to develop the structure of an astatic loop implementing this approach. A three-loop system with an integral angular velocity stabilizer and compensation for the nonlinear torque characteristic is presented, ensuring astatic control without additional integrating links. To implement the approach in practice, the introduction of correcting devices into the high-frequency loop, taking the total phase delays into account, is proposed. The effectiveness of the solution is demonstrated using statistical modeling with random variations in the system parameters. It is shown that replacing a real actuator signal with its model does not lead to a statistically significant deterioration in the quality of transient processes, which confirms the possibility of increasing the speed and reliability of the system while simultaneously simplifying its hardware implementation.

Keywords: control system, lateral g-load, astatic loop, stabilization actuator, model-based control

Improving traffic quality of service in hybrid networks with cloud and fog layers

2026. T.14. № 3. id 2193
Glushak E.V.  Mikhailova P.D. 

DOI: 10.26102/2310-6018/2026.54.3.001

Improving the quality of service (QoS) in hybrid networks with cloud and fog layers is a pressing task in the development of modern telecommunication systems. As the volume of transferred data grows, traditional resource management methods become insufficiently effective. Hybrid networks combining cloud and fog computing can significantly improve performance and reduce latency. An urgent task is to ensure a balance between high throughput, minimal delays, and low packet loss. Efficient resource allocation helps to reduce energy consumption and operating costs. The article is devoted to optimizing the quality of traffic service in hybrid networks combining cloud and fog computing. A mathematical model based on a system of differential equations is presented that describes the dynamics of load, queues, resource allocation, delays, and packet losses. The model formalizes the task of optimal resource management in order to minimize delays and losses under limited capabilities. Numerical integration methods are used for the solution. The developed algorithm makes it possible to effectively balance the load between the cloud and fog layers. The proposed approach proves effective for optimizing modern telecommunication systems, especially for applications with critical response time requirements.
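The numerical integration of such queue dynamics can be sketched with a minimal two-layer fluid model; the saturating service rate, the fixed traffic split alpha, and all parameter values are assumptions for illustration, and the article's system of differential equations is richer (it also covers resource allocation, delays, and losses).

```python
def simulate_queues(lam=1.0, alpha=0.5, mu_fog=1.0, mu_cloud=2.0,
                    T=200.0, dt=0.01):
    """Euler integration of a two-layer fluid queue model: a fraction
    alpha of the load goes to the fog layer, the rest to the cloud;
    the service rate saturates as mu * q / (1 + q)."""
    q_fog = q_cloud = 0.0
    for _ in range(int(T / dt)):
        q_fog   += dt * (alpha * lam       - mu_fog   * q_fog   / (1 + q_fog))
        q_cloud += dt * ((1 - alpha) * lam - mu_cloud * q_cloud / (1 + q_cloud))
    return q_fog, q_cloud

q_fog, q_cloud = simulate_queues()
print(q_fog, q_cloud)   # steady-state backlogs of the two layers
```

For arrival rate a and capacity mu, the model's steady-state backlog is a / (mu - a), so the faster cloud layer settles at a smaller queue; an optimizer would tune alpha to balance the two delays.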

Keywords: hybrid networks, cloud computing, fog computing, quality of service (QoS), traffic optimization, load balancing, data latency, layered architecture, resource allocation, routing

Optimization-simulation modeling for resource allocation management in geographically distributed organizational systems with variable workloads

2026. T.14. № 4. id 2191
Boklashov I.I.  Ivanov D.V.  Lvovich Y.E. 

DOI: 10.26102/2310-6018/2026.55.4.008

This paper addresses the integration of optimization approaches and simulation modeling to manage resource allocation within an organizational system characterized by a geographically distributed operational environment and variable activity volumes. The research methodology employs a systems approach, utilizing structural modeling to represent the organization's functioning and management. By structuring the interaction between the control center and operational units, the study establishes quantitative connection characteristics, which are recorded via the system's digital monitoring. The core component of this optimization-simulation model involves the multi-alternative selection of priority units for integrated resource allocation, subject to balance constraints and a stochastic flow of requests defining work requirements. Variable activity volumes are accounted for through a multi-period distribution of integrated resources. Consequently, the set of candidate units for the subsequent period includes those excluded from the optimized subset in the previous step, alongside a random component determined by the simulation results. The study demonstrates that single-period optimization utilizes real-time data to identify priority units for resource allocation. Furthermore, the multi-period optimization-simulation process generates sufficient synthetic data on resource demand; when combined with retrospective monitoring data, this forms a representative training dataset for machine learning predictive models. Finally, the paper defines management decisions supported by these predictive models for both the operational and developmental stages of the organizational system.

Keywords: organizational system, management, optimization, simulation modeling, machine learning, forecasting

A method for implementing pseudo-realistic movement of non-player characters in open virtual worlds

2026. T.14. № 3. id 2190
Shutov K.I.  Lobanov A.A. 

DOI: 10.26102/2310-6018/2026.54.3.006

The open-world game market increasingly demands NPC (non-player character) behaviour that feels believable yet remains designer-controllable under tight computational budgets. Common solutions tend to be extreme: either they attempt full simulation and overload the system, or they rely on predictable scripted patterns. This paper proposes a pseudo-realistic NPC movement method that bridges these extremes. The core idea is to verify spawn reachability using a matrix of shortest-path distances between world areas. When the player enters an area, the algorithm selects only those NPCs that could have physically reached it given elapsed time, movement speed and available routes, making an encounter consistent with hidden travel rather than instantaneous spawning. Encounter frequency is controlled via a priority scheme, allowing designers to tune event density and the rarity of specific characters without maintaining a detailed simulation. Candidate selection is further accelerated by reordering an almost-sorted list, reducing the cost of repeated queries under similar conditions. Experiments on synthetic graphs show that the core client-side runtime stays within milliseconds for up to 1000 NPCs. The method delivers believability and control at low computational cost and can be integrated into existing engines to adjust difficulty and balance.
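The reachability check at the core of the method can be sketched as below: a precomputed all-pairs shortest-path matrix over world areas, then a filter keeping only NPCs that could have physically covered the distance in the elapsed time. The tiny graph, NPC names, and speeds are hypothetical; the priority scheme and almost-sorted-list acceleration from the abstract are omitted.

```python
INF = float('inf')

def shortest_paths(n, edges):
    """Floyd-Warshall all-pairs shortest paths over n world areas;
    edges are undirected (u, v, travel_cost) links."""
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def reachable_npcs(npcs, dist, target_area, elapsed):
    """Keep NPCs (id, current_area, speed) that could plausibly have
    walked to target_area during the elapsed time."""
    return [nid for nid, area, speed in npcs
            if dist[area][target_area] <= speed * elapsed]

dist = shortest_paths(3, [(0, 1, 10.0), (1, 2, 10.0)])
npcs = [('merchant', 0, 1.0), ('guard', 1, 1.0)]
print(reachable_npcs(npcs, dist, target_area=2, elapsed=15.0))
```

Because the distance matrix is precomputed once per world layout, the per-encounter check is a cheap table lookup, which is what keeps the client-side runtime in the millisecond range.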

Keywords: game design, game development, video games, pathfinding algorithm, sorting algorithm, NPC, non-player character

Privacy-preserving threat intelligence sharing across government agencies using FEGB-Net

2026. T.14. № 3. id 2189
Arm A.  Lyapuntsova E.V. 

DOI: 10.26102/2310-6018/2026.54.3.011

Government networks are increasingly targeted by coordinated cyberattacks that exploit similarities in infrastructure and operational practices across agencies. Although early detection at one organization could provide valuable warnings to others, effective threat intelligence sharing is often constrained by data sovereignty and privacy regulations. This paper presents an extension of the federated ensemble graph-based network (FEGB-Net) framework that enables privacy-preserving threat intelligence sharing across government agencies. The proposed approach extracts compact behavioral threat signatures from locally trained federated graph neural network models, protects these signatures using differential privacy, and supports real-time cross-agency threat matching. Experimental evaluation using the CICIDS2017 dataset demonstrates that detection accuracy remains comparable to isolated operation, while coordinated attack detection time is reduced by up to 88.5 %. Privacy analysis confirms that ε-differential privacy with ε = 2.0 limits membership inference attacks to near-random success. The results show that collaborative defense can be achieved without compromising data privacy or sovereignty.
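The differential-privacy protection of shared signatures can be sketched with the standard Laplace mechanism; this is the textbook mechanism, not necessarily the exact one used in FEGB-Net, and the signature vector and sensitivity value are assumptions. The epsilon = 2.0 setting matches the article's privacy analysis.

```python
import numpy as np

def dp_protect(signature, sensitivity, epsilon, rng):
    """Laplace mechanism: add noise with scale sensitivity/epsilon to a
    behavioral threat signature before sharing it across agencies."""
    scale = sensitivity / epsilon
    return signature + rng.laplace(0.0, scale, size=signature.shape)

rng = np.random.default_rng(42)
sig = np.array([0.12, 0.80, 0.33, 0.05])   # illustrative signature vector
noisy = dp_protect(sig, sensitivity=1.0, epsilon=2.0, rng=rng)
print(noisy)   # protected signature, safe to share for threat matching
```

Smaller epsilon means more noise and stronger protection; the article's finding is that epsilon = 2.0 already pushes membership inference down to near-random success while keeping signatures useful for matching.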

Keywords: federated learning, threat intelligence sharing, graph neural networks, differential privacy, government cybersecurity

A modified simulated annealing algorithm for the task of diagnosing failures of analog electronic devices

2026. T.14. № 2. id 2188
Uvaysov S.U.  Chernoverskaya V.V.  Hai N.D.  Hai V.T.  Pham X.T. 

DOI: 10.26102/2310-6018/2026.53.2.013

The article presents the results of a study that developed a new method for automated diagnostics of functional components of electronic devices in order to identify parametric component failures, electrical failures, and short circuits. The relevance of the study is due to the ever-increasing complexity of modern electronics, as traditional diagnostic methods do not provide the necessary accuracy and efficiency of diagnostic procedures, which leads to an increase in equipment failures during operation and an increase in the cost of its maintenance and repair. The proposed method is based on the well-known simulated annealing algorithm, which has been adapted to the problems of troubleshooting electronic devices. Objective: to propose a new method for diagnosing failures of electronic equipment based on a modified simulated annealing algorithm, aimed at increasing the reliability of identifying faults that occur in nodes and modules during the operation of modern electronics, as well as increasing the degree of automation of diagnostic procedures. Physical and model experiments conducted during the study showed that the proposed method based on the modified algorithm effectively detects a number of failures, including complex cases of sequential failures that could not be identified using traditional methods. In addition, the proposed approach requires less time for analysis and makes it possible to increase the reliability of diagnostics of the studied nodes and modules of electronic equipment. The results obtained confirm the promise of applying the method in technical diagnostics tasks, including its further integration into automated control systems of electronic equipment.
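For reference, a baseline (unmodified) simulated annealing loop looks like the sketch below; this is the textbook algorithm, not the authors' modification, and the double-well objective is a hypothetical stand-in for a diagnostic mismatch function whose shallow local minimum mimics a plausible-but-wrong fault hypothesis.

```python
import math
import random

def simulated_annealing(f, x0, t0=2.0, cooling=0.999, steps=5000, seed=1):
    """Baseline simulated annealing: Metropolis acceptance with a
    geometric cooling schedule; returns the best solution seen."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)     # random neighbor proposal
        fc = f(cand)
        # always accept improvements; accept worse moves with
        # probability exp(-delta / t), which shrinks as t cools
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Double-well objective: local minimum near x = +0.96 (f ≈ +0.29),
# global minimum near x = -1.04 (f ≈ -0.31)
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best_x, best_f = simulated_annealing(f, x0=0.0)
print(best_x, best_f)
```

The Metropolis acceptance of occasional worse moves at high temperature is what lets the search escape the shallow well, which is the property the article's modification builds on for sequential-failure cases.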

Keywords: electronic device, fault diagnosis, malfunction, simulated annealing algorithm, optimal solution, function extremum, solution generation mechanism, Markov chain

Rating model with latent parameters based on the softmax function

2026. T.14. № 3. id 2185
Bratischenko V.V. 

DOI: 10.26102/2310-6018/2026.54.3.002

The relevance of the work is due to the widespread use of recommendation systems using rating assessments. Based on the results of the review of recommendation methods, it is concluded that it is possible and expedient to build a probabilistic rating model similar to the Item Response Theory models. It is proposed to use latent interest parameters for each subject, characterizing its tendency to set a certain rating, and latent agreeability parameters for each object, characterizing the frequency of obtaining a certain rating. The probabilities of the ratings are determined by a softmax function of the interest and agreeability parameters. The equations connecting observations and latent parameters are obtained using the maximum likelihood method. An iterative procedure for calculating parameters based on rating estimates has been developed and its convergence has been substantiated. The model was tested using the well-known Netflix set with movie ratings, and statistical characteristics of the rating predictions were presented. The accuracy of predicting ratings turned out to be comparable with the accuracy of predictions of other models. The advantage of the proposed model is a compact description of the rating probabilities in the form of sets of latent parameters of subjects and objects, which makes it possible to predict rating estimates. The disadvantages include the computational complexity of estimating the parameters and the need to recalculate the parameters when new data becomes available. The proposed model can be used to study and predict ratings.
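The forward pass of such a model can be sketched as follows: the probability of each rating is a softmax over the sum of the subject's latent interest vector and the object's latent agreeability vector. The additive combination and the illustrative latent vectors are assumptions; the article's exact parameterization and the maximum-likelihood fitting procedure are not reproduced here.

```python
import numpy as np

RATINGS = np.arange(1, 6)   # rating scale 1..5

def rating_probs(interest, agreeability):
    """P(rating r) = softmax over the sum of the subject's latent
    interest vector and the object's latent agreeability vector."""
    z = interest + agreeability
    e = np.exp(z - z.max())   # numerically stable softmax
    return e / e.sum()

def predicted_rating(interest, agreeability):
    """Expected rating under the softmax distribution."""
    return float(rating_probs(interest, agreeability) @ RATINGS)

# Illustrative latent vectors for one user and one movie (assumptions)
user  = np.array([-1.0, -0.5, 0.0, 0.8, 0.7])
movie = np.array([-0.5,  0.0, 0.2, 0.4, -0.1])
p = rating_probs(user, movie)
print(p, predicted_rating(user, movie))
```

The compactness claimed in the abstract is visible here: one short vector per subject and per object suffices to generate a full probability distribution over ratings for any subject-object pair.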

Keywords: recommender system, rating assessment, collaborative filtering, probabilistic model with latent parameters, softmax function

Generalized requirements set for heterogeneous IT infrastructure monitoring systems: an analysis of Russian enterprises' requests

2026. T.14. № 2. id 2184
Kamenev A.S. 

DOI: 10.26102/2310-6018/2026.53.2.012

This article presents the development of a generalized set of requirements for heterogeneous IT infrastructure monitoring systems based on an analysis of the Russian market from 2017 to 2025. The relevance of the study is driven by the need to create scientifically grounded approaches for building such systems under conditions of import substitution and the growing complexity of the IT landscape of Russian enterprises, characterized by the widespread adoption of hardware virtualization, containerization, and microservices architecture technologies. Based on an analysis of technical specifications and commercial requests from 30 major Russian enterprises, a generalized and formalized set of functional requirements consisting of 92 items grouped into 11 categories was developed. The application of frequency analysis revealed a stable cross-industry core of 67 highly demanded requirements (frequency > 80%). Verification of the set using public tenders confirmed its universality and completeness. The study also identified a key trend: large enterprises view the primary task of corporate IT infrastructure monitoring systems not as replacement, but as coordination and consolidation of existing heterogeneous monitoring tools. The practical significance of the results lies in creating a methodological basis for drafting technical specifications, conducting comparative analysis of solutions, and designing systems of this class, thereby contributing to improved management efficiency of complex IT infrastructure in Russian enterprises.

Keywords: system analysis, monitoring system, IT infrastructure, observability platform, AIOps

The subtleties of satellite communications in the conditions of high latitudes of the Russian Federation

2026. T.14. № 4. id 2183
Kokorich M.G.  Noskova N.V.  Ruskova E.O. 

DOI: 10.26102/2310-6018/2026.55.4.001

The article analyzes the factors influencing the organization of satellite communications in the conditions of high latitudes of Russia. Using ITU-R recommendations on radio wave propagation, an assessment of signal energy losses is provided for low elevation angles in free space and in the atmospheric layer. Calculations were performed for elevation angles from 1 to 20 degrees in the C- and Ku-bands, taking into account climatic factors characteristic of the Far North: precipitation intensity and total integrated liquid water content in the atmosphere. The calculation results are presented as dependencies on the elevation angle, which allows the obtained data to be used for assessing the energy budget of satellite communication links in critical high-latitude conditions. Signal energy losses due to antenna pointing inaccuracies are also considered, which are determined by the antenna's beamwidth and external destabilizing factors, one of which in the Far North conditions is increased wind load. An assessment of the noise parameters of a receiving earth station is provided, where under low elevation angle conditions, the antenna noise temperature is determined by atmospheric radiation noise, specifically the influence of atmospheric gases, cloudiness, and precipitation. The results are presented as the dependence of noise temperature on the elevation angle for the C- and Ku-bands, based on calculations of atmospheric losses. The conducted research is planned to be used in the development of recommendations for the energy calculation of satellite links in the Arctic regions of Russia at the edge of the visibility zone of geostationary satellites.

Keywords: satellite communications, high latitudes, energy calculation, noise temperature, signal loss in precipitation

Quantitative evaluation of the architecture of complex software systems based on a graph multi-criteria model

2026. T.14. № 2. id 2180
Saenko I.D. 

DOI: 10.26102/2310-6018/2026.53.2.014

The work explores the quantitative assessment of the structure of complex software systems, which is an important factor for improving reliability, performance, and scalability. Current design methods lack a formalized and reproducible method for the architectural analysis of system components and their interactions, hindering the comparison of alternative architectural solutions and the identification of the most effective structural configurations during the design phase. Therefore, this paper focuses on developing a method that enables quantitative assessment of the architecture of complex software systems, taking into account the implementation specifics of components and their interactions. The leading approach to studying this problem is a graph representation of the architecture, where the nodes correspond to software components with numerical characteristics according to pre-defined quality criteria, and the edges reflect architectural connections with component influence coefficients. The architectural significance of a component is calculated as the average value of the coefficients of its incoming connections, while components without incoming connections are treated as a special case. The final architectural score is defined as a weighted average of local component scores, taking into account their architectural significance, which provides a comprehensive and systematic approach to architectural analysis. The article presents the results of applying the method to a software system with 10 and 13 components, reveals changes in the final assessment when adding new components and changing the connection structure, and identifies the most significant elements of the system from an architectural perspective. The obtained data allow for a quantitative comparison of alternative architectural solutions and identify the impact of components on the overall system's performance.
The article's materials are of practical value for the design, optimization, and modernization of complex software systems, and can also be used in research in software engineering and systems analysis.
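The scoring scheme described in the abstract can be sketched as below. The default significance of 1.0 for components without incoming connections is an assumption (the abstract does not specify their treatment), and the two-component example is hypothetical.

```python
def architecture_score(scores, edges, no_incoming_default=1.0):
    """Weighted-average architecture score of a component graph.
    scores: {component: local quality score}
    edges:  (src, dst, coeff) influence links; a component's significance
    is the mean of the coefficients on its incoming edges."""
    incoming = {c: [] for c in scores}
    for _src, dst, w in edges:
        incoming[dst].append(w)
    sig = {c: (sum(ws) / len(ws) if ws else no_incoming_default)
           for c, ws in incoming.items()}
    total = sum(sig.values())
    # final score: significance-weighted average of local scores
    return sum(sig[c] * scores[c] for c in scores) / total

scores = {'a': 1.0, 'b': 0.5}          # local quality scores (illustrative)
edges = [('a', 'b', 0.5)]              # a influences b with coefficient 0.5
print(architecture_score(scores, edges))
```

Raising a coefficient on an edge into a component raises that component's significance, so its local score pulls harder on the final assessment, which is how the method exposes the most architecturally significant elements.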

Keywords: architecture of complex software systems, quantitative assessment, graph model, multicriteria analysis, architectural significance

Comparative analysis of machine learning methods for reconstructing the magnetic characteristic of short samples in a measurement system with a parallel magnetic shunt

2026. T.14. № 3. id 2178
Surnyaev V.A.  Grechikhin V.V. 

DOI: 10.26102/2310-6018/2026.54.3.007

The paper addresses the problem of reconstructing the magnetic characteristic of a short sample material from measurements obtained in a magnetic measurement system with a parallel magnetic shunt. Previous studies have shown that introducing a shunt increases measurement sensitivity over a wide range of magnetic permeabilities of the investigated material, which is especially important for short samples and under limitations on the magnetizing current. However, the presence of a parallel branch causes magnetic flux redistribution and complicates the interpretation of measurement data, making direct analytical reconstruction procedures difficult. In this work, machine learning methods are considered for solving the inverse problem of reconstructing the magnetic characteristic from measured dependences. In contrast to earlier research where only neural-network models were analyzed, this paper provides a comparative analysis of five learning algorithms from different model classes. Training and testing are performed under conditions close to real measurements, taking into account possible errors of the measurement channels. It is shown that the best reconstruction quality at the specified noise level is achieved by the Random Forest, which outperforms the alternatives in terms of mean squared error and robustness to disturbances.

Keywords: magnetic shunt, short samples, magnetic characteristic, inverse problem, machine learning, robustness, measurement errors

A method for hybrid filtering of information from fire sensors based on a weighted median filter with a finite impulse response and a Kalman filter

2026. T.14. № 2. id 2176
Singh S.  Pribylsky A.V. 

DOI: 10.26102/2310-6018/2026.53.2.011

The relevance of this study stems from the need to improve the resilience of recursive fire hazard prediction systems to various types of disturbances, such as vibrations, electromagnetic interference, and cumulative forecast errors. In such cases, even a minor impact on predicted time series can lead to false alarms or missed threats, which is especially critical in areas with high occupancy, such as subways. Existing filters, when used in isolation, cannot simultaneously suppress Gaussian and impulse noise, preserve sharp signal changes, and minimize phase shift. Therefore, a hybrid filtering method combining a Kalman filter and a weighted FIR hybrid median filter was developed and evaluated. The method's effectiveness is evaluated using synthetic and in-house data (including ~6 million samples from subway sensors) with a combination of metrics: MAE, MSE, SNR, derivative result accuracy, and response time. The proposed hybrid is shown to provide the best results: a reduction in MAE to 0.419, an increase in SNR to 2.05 dB, and an accuracy level of 99.98%. The paper's materials are of practical value to fire safety system developers and early sensor data processing specialists.
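The cascade idea can be sketched as below: a weighted median FIR pass removes impulse spikes, then a scalar Kalman pass smooths the remaining Gaussian noise. This is a minimal sketch; the window weights, noise parameters q and r, and the specific cascade order are assumptions rather than the authors' tuned design.

```python
def weighted_median(window, weights):
    """Weighted median: smallest value whose cumulative weight reaches
    half of the total weight."""
    pairs = sorted(zip(window, weights))
    half = sum(weights) / 2.0
    acc = 0.0
    for value, w in pairs:
        acc += w
        if acc >= half:
            return value

def wm_filter(signal, weights=(1, 2, 1)):
    """Weighted median FIR pass: suppresses impulse spikes while
    keeping genuine step changes intact."""
    k = len(weights)
    pad = k // 2
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [weighted_median(padded[i:i + k], weights)
            for i in range(len(signal))]

def kalman_1d(signal, q=1e-3, r=0.1):
    """Scalar Kalman pass (random-walk state model): smooths the
    remaining Gaussian noise."""
    x, p = signal[0], 1.0
    out = []
    for z in signal:
        p += q                  # predict: grow state uncertainty
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement
        p *= (1 - k)
        out.append(x)
    return out

def hybrid_filter(signal):
    return kalman_1d(wm_filter(signal))

raw = [1.0, 1.0, 1.0, 100.0, 1.0, 1.0, 1.0]   # impulse disturbance
print(hybrid_filter(raw))                      # spike removed, level kept
```

The ordering matters: a Kalman filter alone would smear the 100.0 spike into neighboring samples, whereas the median stage removes it outright before the Kalman stage ever sees it.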

Keywords: filtering, fire detectors, hybrid filter, FIR filter, Kalman filter, weighted median filter

Computer implementation of exact distribution of rank statistical criteria using dynamic programming methods

2026. T.14. № 2. id 2175
Agamirov L.V.  Agamirov V.L.  Vestyak V.A.  Toutova N.V. 

DOI: 10.26102/2310-6018/2026.53.2.007

This paper considers the problem of calculating exact distributions for nonparametric rank tests in the absence of analytical solutions. The classical approach based on a complete enumeration of all possible rank permutations, although theoretically exact, proves computationally intractable even for small sample sizes due to the combinatorial explosion of the number of variants. The most well-known nonparametric rank tests lacking an analytical expression for the full distribution function are considered, including the Lehmann-Rosenblatt, Kruskal-Wallis, and Mood tests. Existing approximations (normal, chi-square) often prove unsatisfactory for small samples. This paper proposes an efficient solution based on dynamic programming, which reduces computational costs by hundreds of times compared to naive permutation generation. The implemented methodology includes generating rank sequences, calculating the statistic for each sequence, and then aggregating the results to construct the distribution function. The computational experiments conducted clearly demonstrate that dynamic programming is the most effective method for generating exact distributions. Software implementations in C++ and Python have been developed and made publicly available, and comparative testing has confirmed the expected performance advantage of C++.
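As a small illustration of the dynamic-programming idea (not the authors' library, which targets the Lehmann-Rosenblatt, Kruskal-Wallis, and Mood statistics), the exact null distribution of the classical rank-sum statistic can be built by processing ranks one at a time with a knapsack-style recurrence, avoiding enumeration of all C(m+n, m) subsets:

```python
from math import comb

def rank_sum_distribution(m, n):
    """Exact null distribution of the rank-sum statistic W (sum of the
    ranks of the first sample) for sample sizes m and n, built by DP
    over ranks 1..m+n instead of enumerating all C(m+n, m) subsets."""
    N = m + n
    max_sum = sum(range(n + 1, N + 1))      # largest possible rank sum
    # ways[k][s] = number of k-element subsets of processed ranks with sum s
    ways = [[0] * (max_sum + 1) for _ in range(m + 1)]
    ways[0][0] = 1
    for r in range(1, N + 1):               # add rank r (0/1-knapsack step)
        for k in range(min(r, m), 0, -1):
            for s in range(max_sum, r - 1, -1):
                ways[k][s] += ways[k - 1][s - r]
    total = comb(N, m)
    return {s: c / total for s, c in enumerate(ways[m]) if c}

# Left-tail p-value P(W <= w_obs) for m = n = 5:
dist = rank_sum_distribution(5, 5)
p_left = sum(p for w, p in dist.items() if w <= 18)
```

The DP table grows polynomially in the sample sizes, which is the source of the "hundreds of times" speedup over full permutation enumeration reported in the abstract.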

Keywords: nonparametric statistics, rank tests, exact distribution, p-value, dynamic programming, computational efficiency, open source

Evaluation of the characteristics of open BCH and RS code decoders implemented on modern FPGAs

2026. T.14. № 3. id 2174
Khrustalev V.V. 

DOI: 10.26102/2310-6018/2026.54.3.005

The performance of existing data transmission and storage systems depends significantly on the hardware error control and correction units they employ. One effective method for combating errors is error-correcting coding, which allows for the correction of errors that occur during the transmission or storage of information. Developing hardware error-correcting coding units that meet specific requirements is a complex task, and the competitiveness of the final product depends significantly on the quality of its solution. This paper describes a section of a library of error-correcting decoders dedicated to one of the most important classes of linear block codes – Bose-Chaudhuri-Hocquenghem (BCH) codes – and their most important subclass of non-binary BCH codes, Reed-Solomon (RS) codes. A distinctive feature of the library is that the characteristics of all its blocks were evaluated on a single modern hardware platform. The paper also presents methods for comparing decoders implemented on different hardware platforms. The library of error-correcting decoders will help developers familiarize themselves with existing codecs early in the development process and potentially choose one instead of developing their own; it will also help codec developers compare their codecs with existing ones. Finally, the library will be useful when developing new decoding algorithms intended for hardware implementation.

Keywords: error-correcting coding, BCH codes, Reed-Solomon codes, hardware decoders, FPGA

Comparative analysis of large language models for generating dialogues in the gaming industry

2026. T.14. № 4. id 2172
Gobozov V.V.  Sultanov N.Z.  Rameev O.A. 

DOI: 10.26102/2310-6018/2026.55.4.002

The relevance of the study is due to the growing demands on the quality and variability of content in the modern gaming industry, in particular on the dialogue lines of non-player characters (NPCs), for which traditional writing methods may not fully ensure variability and replayability. This article aims to identify the most appropriate large language model (LLM) for generating NPC dialogue lines through a comparative analysis against a number of criteria. The leading research method is a comparative analysis of two groups of models: LLMs with a large number of parameters (DeepSeek-V3.2, Qwen 3-Max, GigaChat 2 Max), accessed via API/web services, and models with a small number of parameters (DeepSeek-R1:14b, Qwen 3:14b, Phi4:14b) running on a personal computer. The paper presents criteria for evaluating response quality and technical characteristics, and describes the testing algorithm and the prompt structure. An integrated performance indicator, taking into account several key response-quality criteria, was introduced for comprehensive model assessment. As a result, the preferred LLMs were identified in both groups: the GigaChat 2 Max model showed the best compliance with the generation rules and is recommended for Russian-language game projects, while among the second group the DeepSeek-R1:14b model showed the best results. The materials of the article are of practical value to developers in the gaming industry, providing sound recommendations for integrating LLMs to automate the creation of NPC dialogue.

Keywords: large language models, comparative analysis, dialog generation, video games, game content

Sensitivity analysis for nonlinear Kalman filters in navigation signal parameter estimation

2026. T.14. № 2. id 2170
Glushankov E.I.  Sudenkova A.V.  Kondrashov Z.K. 

DOI: 10.26102/2310-6018/2026.53.2.004

Nonlinear estimation methods are used for filtering navigation signals, and the quality of such filtering depends on the accuracy of the chosen state and observation models. In situations where model parameters are unknown or change during observation, it is necessary to resort to adaptive filtering algorithms. The need for more complex approaches is determined by how strongly the deviation of a particular parameter affects the filtering result. To assess filtering quality, criteria such as signal-to-noise ratio gain or root mean square error are typically used; however, unlike a quality indicator such as sensitivity, these are not intended to determine how the magnitude of parameter deviations from their true values influences the estimation error variance. The article analyzes the sensitivity of filtering to changes in observation and state parameters under the influence of white noise for Kalman filters of various accuracy orders and for a filter optimal by the criterion of maximum a posteriori probability density. Simulation is carried out by numerical methods. The derivation of the generalized sensitivity equation for the nonlinear Kalman filter in analytical form is presented. As a result, dependencies of sensitivity on the magnitude of the discrepancy between the true and assumed models were obtained, along with estimates of the robustness of the filtering algorithms to this discrepancy. The results can be used to formulate requirements for permissible model parameter deviations and to check the filtering quality under conditions of their a priori uncertainty.

Keywords: Kalman filter, stochastic differential equations, sensitivity, radio navigation signal, white noise

Practical aspects of building private multimodal generative models: methods, constraints, and tools

2026. T.14. № 2. id 2169
Ledovskaya E. 

DOI: 10.26102/2310-6018/2026.53.2.005

The article addresses the pressing issue of developing generative artificial intelligence systems capable of working with heterogeneous data (text, images, audio) without compromising the privacy of the underlying training datasets. The aim of the study is to systematize and present, from a practical perspective, current methods for ensuring privacy applicable to multimodal architectures. Particular attention is paid to differential privacy and federated learning technologies, their adaptation, and their combination for working with complex data. The article analyzes the fundamental trade-offs faced by developers in practice between generation quality, computational complexity, and the level of privacy guarantees. Examples of existing software frameworks are provided, along with recommendations for selecting protection strategies depending on the type of task and the nature of the multimodal data. Practical aspects of integrating privacy mechanisms into training cycles, assessing the accumulated privacy budget, and potential directions for developing tools to enhance the efficiency and reliability of AI systems are additionally discussed. Special attention is given to modality alignment and to optimizing the trade-off between privacy level and generation quality. The presented recommendations and implementation examples can serve as a guide for machine-learning researchers and engineers developing real-world multimodal AI systems that meet contemporary security, ethical, and regulatory requirements.

Keywords: generative models, multimodal machine learning, data privacy, differential privacy (DP), federated learning (FL), privacy-utility trade-off, machine learning frameworks, trustworthy AI systems

Interpretable forecasting of fine particulate air pollution based on monitoring data and machine learning methods

2026. T.14. № 2. id 2167
Filushina E.V.  Orlov V.A.  Krasovskaya L.V.  Prudkiy A.S. 

DOI: 10.26102/2310-6018/2026.53.2.003

Atmospheric air pollution by fine particles with an aerodynamic diameter of less than 2.5 micrometers is a serious environmental and social problem in urban areas. In this context, short-term forecasting of fine particulate matter concentrations based on air quality monitoring data is of particular importance. This study investigates the applicability of interpretable machine learning methods for hourly forecasting of fine particulate air pollution. The publicly available Beijing PM2.5 data set, containing hourly measurements of particulate matter concentration and meteorological parameters for the period from 2010 to 2014, was used as the data source. Data preprocessing was performed, and a feature space was constructed with consideration of temporal structure and autocorrelation properties of the time series. Linear regression, random forest, and gradient boosting models were developed and evaluated. Forecasting performance was assessed using mean absolute error, root mean squared error, and the coefficient of determination. The results demonstrate that all considered models provide high accuracy for short-term forecasting, while differences in performance between models of varying complexity remain insignificant. It was found that the dominant contribution to the forecast is provided by the autocorrelation of the particulate matter concentration time series, whereas meteorological parameters play a corrective role. The obtained results confirm the feasibility of using interpretable machine learning models in air quality monitoring and forecasting systems.
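The abstract's central finding — that autocorrelation of the concentration series dominates the forecast — is easy to reproduce in miniature. The sketch below is an illustration on synthetic data, not the study's pipeline: it builds lag features and fits an ordinary least-squares autoregression, the simplest of the interpretable models the paper compares.

```python
import numpy as np

def make_lag_features(series, n_lags=3):
    """Design matrix: row t holds x[t-1], ..., x[t-n_lags]; target is x[t]."""
    X = np.column_stack([series[n_lags - j:len(series) - j]
                         for j in range(1, n_lags + 1)])
    return X, series[n_lags:]

def fit_autoregression(series, n_lags=3):
    """OLS fit of x[t] on its own lags; returns coefficients and R^2.
    beta[0] is the intercept, beta[1] the lag-1 coefficient, and so on."""
    X, y = make_lag_features(np.asarray(series, dtype=float), n_lags)
    A = np.column_stack([np.ones(len(X)), X])       # intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2
```

On a strongly autocorrelated series the lag-1 coefficient carries most of the explanatory power, mirroring the paper's conclusion that meteorological covariates play only a corrective role.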

Keywords: air pollution, fine particulate matter, short-term forecasting, machine learning, interpretable models, time series, air quality monitoring

Structural modeling of management process in organizational system with alternative supplies

2026. T.14. № 1. id 2156
Shevyreva E.A. 

DOI: 10.26102/2310-6018/2026.52.1.011

The article examines the characterization of an organizational system with alternative supplies and its management process at the level of structural modeling. It is shown that the formation of structural models is a preliminary stage for optimization modeling, which provides algorithmization of intellectual support for decision-making by the control center. It is determined that, for further use in optimization modeling, the structuring of the functioning of an organizational system with alternative supplies should include a description of the characteristics of the connections arising in the interaction between the control center and the organization's objects. It is substantiated that these characteristics should include numbered sets of objects, suppliers, supply costs, and indicators characterizing the effectiveness of deliveries. Purposefulness in achieving certain results is established by the control center's requirements on integral costs and deadlines for each object. It is shown that a direct transformation of the structural model of the functioning of the studied system into a management structure based on expert-optimization decisions does not, in most cases, allow choosing the best supply option. In this case, the traditional management process should be supplemented with a subsystem of intellectual support for managerial decisions. It is proposed to carry out this intellectualization through the sequential solution of several optimization problems, followed by transformation of the optimization results into a final managerial decision that ensures coordination of expert and formalized assessments. This sequence aims at a gradual reduction of the multi-alternative optimization tasks: first forming a reduced set of variants, then transitioning to the set of dominant variants, and finally moving the optimizable variables from alternatives to parameters.
The feasibility of transitioning to parametric optimization based on machine learning dependencies of efficiency indicators from parameters using retrospective data for objects similar to the current state of the organizational system with alternative supplies is considered.

Keywords: organizational system, management, structural modeling, decision support system, expert analysis, optimization, machine learning

Development and analysis of cloud models for adaptive control of unmanned vehicle swarm systems

2026. T.14. № 1. id 2155
Krepyshev D.A.  Izbitskaya E.Y. 

DOI: 10.26102/2310-6018/2026.52.1.007

This article examines the problem of managing swarm systems of unmanned aerial vehicles in dynamically changing environments. To address this problem, a cloud-based mathematical model built on decentralized swarm intelligence algorithms is proposed and verified. It provides adaptive control, self-organization, and stability for a group of unmanned aerial vehicles. The methodological basis of the approach is the integration of two key components: a deterministic rotor-router model for guaranteed coverage of the target zone and k-fault-tolerant gossip protocols built on Knödel graphs for reliable data exchange under conditions of unstable communication and node loss. The model was implemented on the OpenStack cloud platform, ensuring deployment flexibility and scalability of computing resources. Simulation modeling included a comparative analysis with the classical Q-Routing algorithm for various operating scenarios, including normal operation and dynamic network reconfiguration. The results demonstrated the comprehensive effectiveness of the proposed architecture: the developed solution showed significantly lower and more predictable latency, high and stable throughput under increasing load, and optimal utilization of compute node memory. A critical advantage was the increased survivability of the system, resulting in shorter recovery times after failures. The results confirm that the combination of deterministic and gossip mechanisms in a cloud environment enables the creation of highly reliable and scalable systems for monitoring and data collection tasks that require stringent real-time performance and fault tolerance.

Keywords: UAV swarms, self-organization, cloud computing, swarm intelligence, gossip protocols, OpenStack, management, adaptability, graph models

Optimization of management of personalized resource allocation in the territorial organizational system

2025. T.13. № 4. id 2153
Maksin A.D.  Preobrazhenskiy A.P. 

DOI: 10.26102/2310-6018/2025.51.4.065

This article examines the effectiveness of applying an optimization approach to managing personalized resource allocation in a geographically distributed organizational system. The paper characterizes the features of a multi-level resource allocation process based on a top-down approach (management center – territorial entities – support areas for a personalized entity – resource supply entities) and a bottom-up approach that adjusts the obtained results depending on resource demand satisfaction. Quantitative characteristics of inter-level interactions are established. Optimization models for detailing centralized resource allocation are developed. It is substantiated that an optimization model for resource allocation from the management center among territorial entities is required when the needs of the regions cannot be fully met. When centralized resources are insufficient, resource requirements for territories are reduced using coefficients that assess the regions' potential for achieving a number of indicators. When resources are allocated to a limited number of territories, these coefficients are taken into account in a reduction model of multi-alternative optimization such that the total volume of detailed resources does not exceed the centralized resource. When resource provision is distributed within a territorial entity, the specifics of optimization modeling manifest themselves in the grouping of resource supply entities based on the satisfaction of their needs, in the multidirectional constraints associated with meeting the control center's requirements for the effectiveness of personalized support, and in balance conditions. In a formalized formulation, these specifics are taken into account within a block linear programming optimization model.
An algorithmic scheme for decomposing a block problem into two related problems of compositional linear programming and an iterative procedure of the simplex method for simultaneously solving the direct and dual problems is proposed. The iterative cycle includes a scheme for grouping and modifying the structure of the objective function.

Keywords: organizational system, resource management, personalized resource allocation, priority multi-alternative optimization, block linear programming, expert assessment

Improving the methodology of spectral analysis of speech information in conditions of interference in order to assess its protection against leakage through technical channels

2026. T.14. № 1. id 2152
Meshcheryakov R.V.  Dushkin A.V.  Evsyutin O.O.  Goncharov N.I. 

DOI: 10.26102/2310-6018/2026.52.1.008

The relevance of this study is determined by increasing requirements for the protection of confidential information transmitted during negotiations in designated premises. With the growing number of technical means for unauthorized collection of acoustic information, there is a need to improve the methods for assessing its protection, ensuring a more accurate prediction of possible leakage channels and of the quality of information interception. The separation of parameters by gender based on the modified formant method, proposed in this paper, eliminates one of the key drawbacks of existing scientific approaches to this problem: the use of averaged speech characteristics that do not reflect the real diversity of speech signals. In the course of the study, the methodology for spectral analysis of speech information under interference conditions was improved; a modified version of the formant method for assessing speech intelligibility was implemented, taking into account the variability of speech characteristics across different groups of speakers; specialized software for the automated analysis of acoustic speech information, with the ability to classify parameters by speaker gender, was developed; and the results of experimental studies based on the improved methodology, in which speech samples were processed to obtain statistically significant data, are presented. The results of the study can be used in the design of systems for protecting speech information from leakage through technical channels.

Keywords: acoustic channel, speech information analysis, program, speech intelligibility, spectral analysis

On the issue of ranking consistency assessment when using a group of multi-criteria decision-making methods

2025. T.13. № 4. id 2151
Latypova V.A. 

DOI: 10.26102/2310-6018/2025.51.4.062

To solve multi-criteria problems, several decision-making methods are increasingly applied simultaneously, which makes it possible to find a more precise solution to a problem involving many criteria. In this case, a pressing need arises to ensure consistency between the decisions obtained via different methods. In current research, much attention has been devoted to consistency assessment employing various rank correlation coefficients. However, little attention has been given to the application of metrics reflecting the overall consistency of three or more rank sequences; emphasis has instead been placed on pairwise assessment of multi-criteria decision-making methods against one another. The paper is devoted to answering the question of whether the consistency between pairs of multi-criteria decision-making methods can reflect the consistency within the group of these methods. An experiment was conducted on the example of the problem of determining the rating of university departments, using three methods of multi-criteria decision-making and two rank correlation metrics: Kendall's coefficient of concordance and Kendall's rank correlation coefficient. The experimental results show that, for the test case, all three methods give a consistent outcome as a group, while the rankings of each pair of methods are inconsistent.
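Both metrics the experiment relies on are short to compute for complete, tie-free rankings. A minimal sketch (the department-rating data itself is not reproduced here):

```python
from itertools import combinations

def kendall_w(rankings):
    """Kendall's coefficient of concordance for m complete, tie-free
    rankings of the same n items; W = 1 means perfect agreement."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean = m * (n + 1) / 2                       # expected rank total
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def kendall_tau(a, b):
    """Kendall's rank correlation coefficient for two tie-free rankings."""
    n = len(a)
    score = sum(1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1
                for i, j in combinations(range(n), 2))
    return score / (n * (n - 1) // 2)
```

W summarizes the whole group of rankings at once, while tau is inherently pairwise — exactly the gap between group-level and pairwise consistency that the paper examines.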

Keywords: multi-criteria decision-making, consistency assessment, Kendall's coefficient of concordance, Kendall's rank correlation coefficient, multi-criteria ranking

Efficiency analysis of a self-configuring binary genetic algorithm with a modified method of dynamic correction of the search space

2026. T.14. № 2. id 2150
Malashin I.P.  Sopov E.A. 

DOI: 10.26102/2310-6018/2026.53.2.001

This paper presents a modification of the self-configuring genetic algorithm (SelfCGA) aimed at improving search efficiency in global optimization problems. The proposed approach combines dynamic correction of the search domain with phenotype clustering of the population, which makes it possible to identify promising regions of the solution space more effectively. The use of clustering helps maintain population diversity and reduces the risk of premature convergence to local optima. To evaluate the proposed modification, computational experiments were conducted using the CEC2017 benchmark suite with problem dimensions of 10, 30, and 50. Each algorithm was executed 50 independent times, ensuring statistical reliability of the results. The performance was assessed by comparing average and best fitness values, as well as by analyzing the convergence dynamics during the evolutionary process. The experimental results demonstrate that the modified SelfCGA with dynamic correction of the search domain reaches a stabilization state – where further improvements during the evolutionary search become negligible – in fewer generations for most benchmark functions. This advantage remains evident even as the dimensionality of the search space increases. The proposed modification does not require manual parameter tuning and does not increase the structural complexity of the base SelfCGA, which makes it well suited for practical applications.
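For readers unfamiliar with the base algorithm being modified, a plain binary GA — without the paper's self-configuration, phenotype clustering, or dynamic search-space correction, and with arbitrarily chosen parameters — looks roughly like this:

```python
import random

def binary_ga(fitness, n_bits, pop_size=40, generations=60,
              p_mut=0.02, seed=0):
    """Minimal binary GA: tournament selection, one-point crossover,
    bit-flip mutation, and elitism. Maximizes `fitness`; all parameter
    defaults are illustrative, not the paper's settings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]                              # elitism
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)    # tournament of 3
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best
```

SelfCGA replaces the fixed operator choices above with self-configuring operator probabilities; the paper's modification additionally shrinks and re-centers the search domain around clusters of promising phenotypes during the run.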

Keywords: global optimization, self-configuring algorithms, search space adaptation, population clustering, dynamic correction of the search domain

Typification of the development of dangerous situations for the response of operational services in territories based on the analysis of many years of statistics

2025. T.13. № 4. id 2147
Faddeev A.O.  Nevdakh T.M.  Bronenkova Y.V. 

DOI: 10.26102/2310-6018/2025.51.4.058

The article formulates a complex problem of modeling and forecasting the development of dangerous processes that arise in the most diverse areas of modern society's life. Solving this problem is relevant and significant for the effective functioning of emergency response services in making management decisions, which, in turn, helps to accelerate the elimination of emergencies and minimize human casualties and economic losses. Two approaches to solving this problem are presented in relation to the response of operational services; both are based on the analysis of dynamic series. A risk-based typology of the territories of the Russian Federation has been constructed on the basis of a quantitative analysis of the trend and seasonal components of the dynamic series of the number of emergencies that occurred between 2009 and 2021. It has been shown that the trend components determine the main tendency of changes in the number of emergencies over time, while the seasonal component characterizes the regularity of changes in their dynamics. The article highlights the federal districts in which similar scenarios of emergency dynamics are formed. It also discusses the modeling of the dynamics of phishing attacks in the cyberspace of the Russian Federation and solves the problem of obtaining predictive information about the number of such attacks. The structure of the dynamic series of phishing attacks is examined to identify its trend, seasonal, and random components, and an LSTM neural network model is used to predict phishing attacks. The error of the forecast obtained with its help is on average no more than 6%. It is concluded that recurrent neural networks can be useful in the study of other types of cybercrime.
The materials of the article and the approaches developed in it are scientifically significant for the further development of a system of forecasting models that allow for the study of complex interactions in the implementation of dangerous phenomena in the modern territorial and information-telecommunication spaces of the Russian Federation, as well as for the analytical services of the Ministry of Emergency Situations and the Ministry of Internal Affairs.

Keywords: risk-typologization, territory of the Russian Federation, emergency situation, dynamic series, modeling the dynamics of phishing attacks, forecasting the development of dangerous processes, recurrent neural network, LSTM model