DOI: 10.26102/2310-6018/2025.50.3.046
The paper presents a study on forecasting customer satisfaction in an insurance company using machine learning methods. The relevance of the topic stems from high competition in the insurance market and the need to retain customers by increasing their satisfaction with the service. The purpose of the study is to evaluate the accuracy and performance of models that predict the level of customer satisfaction with an insurance service based on data about the customer's interaction with the company. Classification algorithms were used as methods. The accuracy and performance of the models were assessed on real data from surveys of the insurance company's customers. The best results were achieved by the ensemble methods, random forest and gradient boosting, which predicted satisfaction with an accuracy of up to 85%, significantly outperforming simpler models. It is shown that gradient boosting captures nonlinear dependencies among factors, such as whether a request was escalated, and thereby identifies "dissatisfied" customers more accurately. At present, such forecasting in insurance companies is either not carried out at all or relies largely on chance, which leads either to overly frequent complaints or to low customer satisfaction and subsequent churn. The materials of the article are of practical value for insurance organizations: implementing the developed models will make it possible to promptly identify customers at risk of dissatisfaction and to apply well-founded preventive measures, such as additional service or compensation, to increase their satisfaction.
Keywords: customer satisfaction, insurance company, machine learning, prediction, gradient boosting, model accuracy
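As a rough illustration of the classification setup described above, the sketch below trains a gradient boosting classifier on tabular interaction data; the file name and feature columns (including the escalation flag) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: predicting "dissatisfied" customers with gradient
# boosting on survey/interaction data. Column names are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("survey.csv")                        # hypothetical file
X = df[["n_claims", "response_time_h", "escalated"]]  # illustrative features
y = df["satisfied"]                                   # 1 = satisfied, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```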
DOI: 10.26102/2310-6018/2025.50.3.045
The article is devoted to the development of a resource-oriented technology for organizing the information process of computational resource distribution under conditions of integrating the Internet of Things (IoT) and edge computing concepts. An analysis of existing models and methods was conducted and their shortcomings were identified, namely: failure to account for the resource cost of data transit for computing nodes involved in data transmission and computation, and failure to account for the resource costs of the resource-distribution operation itself. Given the limited resources of devices at the network edge, these drawbacks are particularly significant. The goal of this study is to minimize resource consumption during resource distribution and the solving of computational tasks in systems constrained by device limitations. The foundation of the proposed technology includes: an overall mathematical model of the resource allocation process, formulated as an optimization problem; methods for solving this problem based on heuristic rules and metaheuristics; algorithms for calculating the resource cost of data transit and of migrating computational tasks, which serve auxiliary purposes within the developed methods; and a repository of metaheuristic algorithms used to select the optimal method for solving the resource distribution problem. The technology distributes computational resources while minimizing the resource expenses associated with data transit, taking into account both the computational task itself and decision-making regarding resource allocation, and it considers the resource constraints of devices as well as dynamic changes in load and network topology. Experimental modeling confirmed the effectiveness of the proposed technology: significant reductions in the resource expenditure of computational resource distribution were demonstrated, leading to improved distributed computing efficiency metrics. The results demonstrate the potential of the proposed technology for organizing distributed computing in resource-constrained systems, such as IoT and edge computing systems.
Keywords: computing resource allocation, distributed computing, technology, resource costs optimization, distributed computing modelling
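A minimal sketch of the kind of transit-aware placement decision such a technology automates, under the assumption of a node graph with per-edge transit costs and per-node compute costs (all names and values are illustrative; this is not the paper's method):

```python
# Greedy placement: choose the edge node minimizing compute cost plus the
# resource cost of data transit over the path from the data source.
import networkx as nx

def cheapest_node(g: nx.Graph, source, task_size, cpu_cost):
    """cpu_cost: node -> cost per task unit; edges carry a 'transit' cost."""
    best, best_cost = None, float("inf")
    dist = nx.shortest_path_length(g, source=source, weight="transit")
    for node, transit in dist.items():
        total = cpu_cost[node] * task_size + transit * task_size
        if total < best_cost:
            best, best_cost = node, total
    return best, best_cost

g = nx.Graph()
g.add_weighted_edges_from([("gw", "n1", 1.0), ("gw", "n2", 3.0)], weight="transit")
print(cheapest_node(g, "gw", task_size=10,
                    cpu_cost={"gw": 5.0, "n1": 2.0, "n2": 1.0}))  # -> ('n1', 30.0)
```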
DOI: 10.26102/2310-6018/2025.50.3.044
In the context of the digitalization of education, the development of adaptive feedback mechanisms that personalize the interaction of participants in the educational process across multiple student streams is becoming a factor in increasing the effectiveness of education. An analysis of existing approaches and tools for personalizing learning routes under multi-stream conditions, using university disciplines as an example, allowed us to formulate the research problem: insufficient automation of the educational process when courses are taught to multiple streams. The purpose of the article is to describe the development of a method for intelligent information analysis with semantic text processing in the implementation of adaptive feedback between participants in a digital educational environment. The scientific novelty of the study consists in an approach to the intelligent processing of free-form answers that increases the efficiency of the educational process in a digital educational environment. The implementation of the stages of the intelligent information processing method in feedback with multi-format digital assessment is considered. The main stages of the method are: data preparation, linguistic preprocessing, semantic comparison, model training, feedback generation, and analysis of the results of interaction between participants in the educational process. In conclusion, the results of applying the method in the educational process are analyzed using streamed university disciplines as an example.
Keywords: digital educational environment, adaptive feedback, natural language processing, distance learning system, tokenization, assessment metrics
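As a rough sketch of the semantic comparison stage, the snippet below scores a free-form answer against a reference answer; TF-IDF cosine similarity stands in for whatever semantic model the method actually employs:

```python
# Scoring a free-form student answer against a reference answer; the
# similarity value could then drive adaptive feedback generation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Entropy measures the average information content of a source."
answer = "The average amount of information produced by a source is its entropy."

vec = TfidfVectorizer().fit([reference, answer])
sim = cosine_similarity(vec.transform([reference]), vec.transform([answer]))[0, 0]
print(f"semantic similarity: {sim:.2f}")
```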
DOI: 10.26102/2310-6018/2025.50.3.034
The article discusses a method for detecting DDoS attacks in digital ecosystems using tensor analysis and entropy metrics. Network traffic is formalized as a 4D tensor with the following dimensions: IP addresses, timestamps, request types, and countries of origin. The CP decomposition with rank 3 is used to analyze the data, which allows revealing hidden patterns in traffic. An algorithm for calculating the anomaly score (AS) is developed, which takes into account the factor loadings of the tensor decomposition and the entropy of time distributions. Experiments on real data have shown that the proposed method provides 92 % attack detection accuracy with a false positive rate of 1.2 %. Compared to traditional signature-based methods, the accuracy increased by 35 %, and the number of false positives decreased by 86 %. The method has proven effective in detecting complex low-rate attacks that are difficult to detect by standard methods. The results of the study can be useful for protecting various digital ecosystems, including financial services, telecommunication networks, and government platforms. The proposed approach expands the capabilities of network traffic analysis and can be integrated into modern cybersecurity systems. Further research could be aimed at optimizing the computational complexity of the algorithm and adapting the method to different types of network infrastructures.
Keywords: tensor analysis, DDoS attacks, cybersecurity, digital ecosystems, CP decomposition, entropy analysis, anomaly detection
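A sketch of the core analysis step, assuming a traffic tensor with axes (IP, time, request type, country) has already been built; the rank-3 CP decomposition uses TensorLy, and the anomaly score below is a simplified stand-in for the article's AS formula:

```python
# Rank-3 CP decomposition of a synthetic 4D traffic tensor, combined with
# the entropy of the hourly traffic distribution into a toy anomaly score.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from scipy.stats import entropy

traffic = np.random.poisson(2.0, size=(50, 24, 4, 10)).astype(float)  # synthetic
weights, factors = parafac(tl.tensor(traffic), rank=3)
time_factor = factors[1]                      # loadings along the time axis

per_hour = traffic.sum(axis=(0, 2, 3))
h = entropy(per_hour / per_hour.sum())        # low entropy = bursty traffic
anomaly_score = np.abs(time_factor).max() / (h + 1e-9)
print(f"anomaly score: {anomaly_score:.3f}")
```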
DOI: 10.26102/2310-6018/2025.50.3.031
For large modern companies producing mass products or providing mass services under high competition, rising advertising costs that do not always bring the expected effect are typical. There is a growing need for tools for precise audience segmentation that can increase the effectiveness of marketing communications. Traditional response prediction models cannot determine whether a client's behavior changed under the influence of the marketing impact, which limits constructive analysis of marketing campaigns. This article studies uplift modeling as a tool for estimating the incremental effect of communication on positive responses and for optimizing targeting. The results demonstrate significant advantages of uplift modeling for identifying client segments with maximum sensitivity to the impact. A comparative analysis of various approaches to building uplift models (SoloModel, TwoModel, Class Transformation, Class Transformation with Regression), based on specialized uplift metrics (uplift@k, Qini AUC, Uplift AUC, weighted average uplift, Average Squared Deviation), demonstrates the strengths and weaknesses of each approach. The study is based on the open X5 RetailHero Uplift Modeling Dataset, provided by X5 Retail Group for research on uplift modeling methods in retail.
Keywords: uplift modeling, machine learning, marketing communications, targeting, response evaluation, uplift model quality metrics
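A compact sketch of the TwoModel approach on synthetic data in the spirit of the X5 dataset: one response model is fit on the treated group and one on control, and the predicted uplift is their difference (libraries such as scikit-uplift package these approaches together with uplift@k and Qini metrics):

```python
# TwoModel uplift: difference of treated and control response probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def two_model_uplift(X, y, treatment, X_new):
    m_t = RandomForestClassifier(n_estimators=100).fit(X[treatment == 1], y[treatment == 1])
    m_c = RandomForestClassifier(n_estimators=100).fit(X[treatment == 0], y[treatment == 0])
    return m_t.predict_proba(X_new)[:, 1] - m_c.predict_proba(X_new)[:, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
t = rng.integers(0, 2, 1000)
y = (rng.random(1000) < 0.1 + 0.05 * t * (X[:, 0] > 0)).astype(int)  # synthetic response

uplift = two_model_uplift(X, y, t, X)
top_k = np.argsort(-uplift)[:100]   # clients most sensitive to the campaign
print("mean predicted uplift in top segment:", uplift[top_k].mean())
```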
DOI: 10.26102/2310-6018/2025.50.3.042
This paper analyzes the features of the Modbus protocol, with an emphasis on its vulnerability in the context of the security and protection of transmitted information. The main risks associated with using Modbus in industrial automation and process control systems (APCS) are considered, including the lack of encryption and authentication mechanisms, which makes the protocol vulnerable to attacks such as data interception and unauthorized access, as well as options for solving the problem of node verification. Modbus is one of the most common industrial protocols, actively used in automation systems and the control of various technological processes; it is easy to implement and widespread, which makes it attractive across industries. However, the RTU mode of the Modbus protocol is vulnerable to man-in-the-middle and substitution attacks, which carries potential risks for industrial enterprises using the protocol in production. The vulnerability is due to the absence of built-in authentication and verification mechanisms for the nodes involved in data transmission, creating risks of unauthorized access and substitution of information during the exchange. The article proposes a method for increasing the confidentiality of interaction between nodes by introducing cryptographic operations that allow the authenticity of the source of transmitted data to be verified, implemented as a lightweight cryptographic algorithm based on the XOR operation with a 16-bit secret. The advantages of the proposed method are compatibility with the existing Modbus protocol implementation, minimal impact on system performance, and no need for deep modification of the architecture; it incurs only a slight increase in data transmission latency (less than 2 %) and in processor time.
Keywords: modbus RTU, man-in-the-middle, frame, cryptographic protection, industrial protocol
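A hypothetical illustration of a lightweight XOR-based tag over a Modbus RTU payload with a 16-bit shared secret; the paper's exact construction is not reproduced here, and the sketch only conveys the scale of computation involved:

```python
# Fold the frame bytes into a 16-bit tag keyed by a pre-shared secret.
SECRET = 0xA5C3  # 16-bit pre-shared secret (illustrative value)

def xor_tag(frame: bytes, secret: int = SECRET) -> int:
    tag = secret
    for i in range(0, len(frame) - 1, 2):
        tag ^= frame[i] << 8 | frame[i + 1]
    if len(frame) % 2:                 # fold a trailing odd byte
        tag ^= frame[-1] << 8
    return tag & 0xFFFF

frame = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])  # slave 0x11, read holding regs
tagged = frame + xor_tag(frame).to_bytes(2, "big")    # tag appended to the PDU
print(tagged.hex())
```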
DOI: 10.26102/2310-6018/2025.50.3.036
Recognition of license plates (LP) is one of the key tasks for intelligent transport systems. In practice, factors such as blur, noise, adverse weather conditions, or shooting from a long distance lead to low-resolution (LR) images, which significantly reduces recognition reliability. A promising solution to this problem is the use of super-resolution (SR) methods capable of restoring high-resolution (HR) images from the corresponding LR versions. This paper is devoted to the research and development of a software package that uses neural network super-resolution models to improve the quality and accuracy of LP recognition. The software package implements the YOLO (You Only Look Once) neural network architectures for object detection, the SORT (Simple Online and Realtime Tracking) object tracking algorithm, and super-resolution models for enhancing LP images. This approach ensures high LP recognition accuracy even when working with images obtained in difficult shooting conditions characterized by low quality or resolution. The experimental results demonstrate that the proposed approach can improve the accuracy of LP recognition in low-resolution images. Image restoration quality was assessed using the PSNR and SSIM metrics, which confirmed the improvement in the visual characteristics of LPs for the most effective models. The developed software package has wide potential for practical application and can be integrated into various systems, for example, access control to protected areas, traffic monitoring and analysis, automation of parking complexes, and solutions for ensuring public safety. The flexibility of the implemented architecture allows the system to be adapted to specific requirements with modifications, which underscores its versatility and practical significance.
Keywords: license plates recognition, computer vision, deep neural networks, superresolution, objects detection, objects tracking
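A sketch of the quality-assessment step, comparing a super-resolved plate crop against its high-resolution ground truth with the scikit-image PSNR and SSIM implementations (the arrays here are synthetic):

```python
# PSNR/SSIM evaluation of a restored plate crop against ground truth.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.random.rand(64, 128)          # ground-truth HR plate crop (synthetic)
sr = (hr + np.random.normal(0, 0.05, hr.shape)).clip(0, 1)  # "restored" image

print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=1.0))
print("SSIM:", structural_similarity(hr, sr, data_range=1.0))
```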
DOI: 10.26102/2310-6018/2025.50.3.047
This article proposes an intelligent mivar decision-making system (MDMS) designed for the optimized distribution and transportation of cargo by groups of warehouse robots. The system integrates three different groups of warehouse robots: loader robots (RP), transporter robots (RT), and unloader robots (RR). The selection and state determination of each robot are based on calculations performed by specially developed algorithms. These algorithms rest on a series of key equation systems: the transporter robot equation system, the loader robot equation system, the unloader robot equation system, and the command variable system. The equation systems take into account the robot's state, operational capability, ability to complete cargo transportation, compatibility for cargo transportation, and so on. Additionally, the Manhattan distance is used, which helps determine a robot's ability to complete its task. The article provides a detailed description of the equation systems and calculation algorithms, as well as a formalized description of the domain in which the mivar logical artificial intelligence system operates. The logical schematic of the MDMS and its decision-making rules are also outlined, which aid in robot selection and make the system more efficient. Experimental results show that the system functions as intended according to the pre-established logic and objectives: it accurately completed all distribution tasks, demonstrating good stability and reliability.
Keywords: mivar, mivar decision-making systems, logical AI, distribution system, group of warehouse robots, robot-loader, robot-transporter, robot-unloader
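An illustrative fragment of the selection logic: among operational robots of the required group, pick the one with the smallest Manhattan distance to the cargo cell (identifiers and coordinates are hypothetical):

```python
# Manhattan-distance-based robot selection with an operational-state check.
def manhattan(a: tuple[int, int], b: tuple[int, int]) -> int:
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

robots = {                      # id -> (position, operational)
    "RT-1": ((0, 0), True),
    "RT-2": ((5, 3), True),
    "RT-3": ((1, 1), False),    # out of service, excluded by the state check
}
cargo = (4, 3)
best = min((rid for rid, (pos, ok) in robots.items() if ok),
           key=lambda rid: manhattan(robots[rid][0], cargo))
print(best)  # RT-2 (distance 1)
```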
DOI: 10.26102/2310-6018/2025.50.3.038
The article presents the results of a study aimed at expanding the theoretical basis of real-time computing. The issues considered include: defining indicators of real-time computational complexity, a methodology for their quantitative assessment, ways of achieving real-time computability of algorithms, and formalized approaches to the optimal technical implementation of real-time computing systems. The research builds on existing concepts in algorithm theory and computation theory, including real-time computation. The significant new scientific results include: the introduction, alongside the known indicators of temporal and spatial computational complexity, of an additional indicator of configuration computational complexity, necessary for assessing computational complexity in real time; confirmation that temporal, spatial, and configuration complexity can be controlled, within a given algorithm's functionality, solely by changing the number of computation execution threads; a theoretical justification that the execution time of the configuration algorithm can be reduced from exponential to polynomial or even linear by condensing the initial graph of the algorithm, forming strongly connected components of a set of actor functions and thereby obtaining an acyclic directed graph whose topological sorting can be performed in linear time; and approaches to the optimal technical implementation of an algorithm with a given configuration, including as an integrated circuit with wiring optimized by solving the rectilinear Steiner problem.
Keywords: computational complexity, real time, computability, configuration, search algorithm, actor functions, portability
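A small sketch of the graph transformation mentioned above, using networkx: condensing a cyclic algorithm graph into strongly connected components yields a DAG whose topological sort runs in linear time:

```python
# Condense a cyclic graph of actor functions into an acyclic DAG, then
# topologically sort the components to obtain a scheduling order.
import networkx as nx

g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"),  # a cycle of actor functions
                ("c", "d"), ("d", "e")])
dag = nx.condensation(g)                  # one node per strongly connected component
order = list(nx.topological_sort(dag))    # linear-time ordering of the DAG
print([sorted(dag.nodes[n]["members"]) for n in order])  # [['a','b','c'], ['d'], ['e']]
```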
DOI: 10.26102/2310-6018/2025.50.3.037
This paper addresses the problem of improving the accuracy of determining the spectral characteristics of voice signals in audio recordings. To solve this problem, a modification of the classical Hamming window function is proposed by introducing an optimizable parameter. The study's relevance stems from the need to improve the reliability of voice recognition and identification systems, especially in the context of biometric applications and authentication tasks. The main objective is the development of an algorithm for calculating the optimal value of this parameter, maximizing the quality of spectral analysis for specific voice frequency ranges. To achieve this objective, the gradient descent method was used to optimize the parameter of the modified function. Quality assessment was performed based on a weighted sum of spectral characteristics (peak factor, spectral line width, signal-to-noise ratio). Experiments were conducted on test signals simulating male (200–400 Hz) and female (220–880 Hz) voices. The results showed that the proposed approach improves the accuracy of determining spectral components, especially in the male baritone range (up to 5.42 % improvement), by achieving clearer identification of fundamental frequencies and reducing side-lobe levels compared to the classical Hamming window. The study's conclusions indicate the potential of adapting window functions to specific frequency ranges of voice signals. The proposed algorithm can be used to improve the performance of biometric identification systems and other applications requiring accurate spectral analysis of voice.
Keywords: window function, hamming window, spectral analysis, voice signal processing, parameter optimization, gradient descent, biometric identification, spectrum estimation accuracy, STFT
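A minimal sketch of the optimization idea, assuming the modification parameterizes the Hamming coefficients as w[n] = a - (1 - a)*cos(2*pi*n/(N-1)) and tunes a by numerical gradient descent; the paper's weighted quality functional is replaced here by a simple sidelobe-to-mainlobe ratio:

```python
# Gradient descent over the window parameter `a` for a test tone in the
# male voice range; lower sidelobe ratio = cleaner spectral peak.
import numpy as np

N, f0, fs = 256, 300.0, 8000.0
n = np.arange(N)
tone = np.sin(2 * np.pi * f0 * n / fs)

def sidelobe_level(a: float) -> float:
    w = a - (1 - a) * np.cos(2 * np.pi * n / (N - 1))
    spec = np.abs(np.fft.rfft(tone * w))
    k = int(round(f0 * N / fs))
    main = spec[k - 2:k + 3].max()
    side = np.delete(spec, range(k - 2, k + 3)).max()
    return side / main

a, lr, eps = 0.54, 0.05, 1e-4            # start from the classical coefficient
for _ in range(100):
    grad = (sidelobe_level(a + eps) - sidelobe_level(a - eps)) / (2 * eps)
    a -= lr * grad
print(f"optimized a = {a:.4f}, sidelobe ratio = {sidelobe_level(a):.4f}")
```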
DOI: 10.26102/2310-6018/2025.50.3.025
The study presents an integrated algorithm for evaluating and optimizing systems with heterogeneous data, taking into account managerial and organizational performance indicators. The proposed algorithm combines data coverage analysis (DCA), fuzzy data analysis (FDA), and a set of statistical methods for assessing the plausibility of the obtained results. The integrated algorithm identifies the most effective heterogeneous performance indicators, differs in its method of selecting reliable indicators, and allows strategies for improving organizational systems to be formulated. A set of 12 criteria indicating the applicability of the integrated method was selected for verification. The results showed that the DCA results have a lower mean absolute percentage error (MAPE) than the fuzzy DCA results. The study also analyzes and weighs the indicators; the indicators "investments in research and development relative to production costs" and "investments in education and retraining per employee" proved to be the most effective. The study presents a unique algorithm for taking heterogeneous managerial and organizational factors into account. It can handle data uncertainty owing to the fuzzy inference mechanisms in the algorithm, and the weights of the indicators are determined using a set of reliable statistical algorithms.
Keywords: integrated algorithm, heterogeneous data, data coverage analysis, fuzzy logic, verification, statistical criterion, data mining, indicator weight
DOI: 10.26102/2310-6018/2025.50.3.041
The work addresses topical issues in the synthesis of human-machine interaction tools, within which a model for interfacing components of graphical user interfaces (GUI) based on algebraic logic methods is considered. GUI components are presented as components of open information systems with standardized interfaces that determine their spatial compatibility. To formalize GUI components, it is proposed to use semantic networks, while the compatibility of components is determined by logical inference rules expressed as Horn clauses. The composite visual component "Named input field" is described as a semantic network containing a description of the spatial compatibility of its constituent indivisible components. An extension of the OpenAPI specification has been developed to unify and standardize the description of GUI components and to ensure the interoperability of tools for synthesizing screen forms and supporting UX testing. The article presents the results of synthesizing chains of geometric shapes that mimic GUI components, which can also be represented declaratively as semantic networks and, consequently, in the RDF format. In addition to the components themselves, the semantic networks include descriptions of filters that can be used to control the choice of ways to spatially interface GUI components.
Keywords: human-machine interaction, graphical user interface, specification, component, Horn clause
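An illustrative encoding of the composite "Named input field" component as a small semantic network in RDF using rdflib; the namespace and predicate names are hypothetical:

```python
# A toy semantic network for the "Named input field" component: its parts
# and a spatial-compatibility relation, serialized to Turtle RDF.
from rdflib import Graph, Namespace, Literal

GUI = Namespace("http://example.org/gui#")
g = Graph()
g.bind("gui", GUI)

g.add((GUI.NamedInputField, GUI.hasPart, GUI.Label))
g.add((GUI.NamedInputField, GUI.hasPart, GUI.TextBox))
g.add((GUI.Label, GUI.leftOf, GUI.TextBox))      # spatial compatibility rule
g.add((GUI.Label, GUI.caption, Literal("Name")))

print(g.serialize(format="turtle"))
```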
DOI: 10.26102/2310-6018/2025.50.3.033
One of the significant areas of investment in civil aviation is air transportation subsidies. The article considers the possibility of optimizing management decisions on distributing investment among airlines participating in the route selection program for subsidized air transportation, so as to improve efficiency indicators under a limited investment resource. To formulate the optimization problem, continuous optimized variables that determine investment volumes and alternative variables corresponding to the choice of a specific transportation route are introduced. The initial data provided by the airlines are used to assess the fulfillment of extremal and boundary requirements on the subsidizing process. Each indicator on which these requirements are based is calculated from the parameters recorded in the initial data, depending on the values of the variables. In this case, it becomes necessary to split the condition of a limited integrated resource into two particular boundary conditions. The result is a multi-criteria constrained optimization problem defined on sets of continuous and alternative optimized variables. To solve it, a combination of an adaptive directed randomized search algorithm and a particle swarm algorithm is proposed. A computational experiment using the optimization approach is conducted and compared with actual data on air transport subsidies. The optimized variant of investment distribution and route selection achieves better performance indicator values than those actually attained.
Keywords: investment management, centralized control, air transport subsidies, airlines, optimization
DOI: 10.26102/2310-6018/2025.50.3.026
Networks are widely used to represent the interaction between individual elements in complex big data systems, such as the cloud-based Internet. Assignable causes in these systems can lead to a significant increase or decrease in the frequency of interaction within the corresponding network, making it possible to identify such causes by monitoring the level of interaction in the network. One detection approach is to first build a network graph by drawing an edge between each pair of nodes that have interacted within a specified time interval; topological characteristics of the graph, such as degree, closeness, and betweenness, can then be treated as univariate or multivariate data for online monitoring. However, existing statistical process control (SPC) methods for unweighted networks largely ignore both the sparsity of the network and the direction of interaction between two network nodes, that is, pairwise interaction. By excluding inactive pairwise interactions, the proposed parameter estimation procedure achieves higher consistency at lower computational cost than the alternative approach when networks are large-scale and sparse. The matrices constructed from a matrix probabilistic model describing directed pairwise interactions in time-independent, unweighted big data networks with cloud processing significantly simplify parameter estimation, whose effectiveness is further increased by automatically eliminating pairwise interactions that never actually occur. The proposed model is then integrated into a multivariate distribution function for online monitoring of the level of communication in the network.
Keywords: cloud computing, big data, network status changes, real-time monitoring, unweighted networks, pair interaction, matrix probability model
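A sketch of the monitoring features discussed above: build the interaction graph for one time window and extract degree, closeness, and betweenness centralities as the multivariate statistic an SPC chart would track:

```python
# Per-window interaction graph and its centrality-based monitoring features.
import networkx as nx

events = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]  # pairs seen in a window
g = nx.Graph(events)

features = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
}
for name, values in features.items():
    print(name, {k: round(v, 2) for k, v in values.items()})
```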
DOI: 10.26102/2310-6018/2025.50.3.043
It is known that the use of Non-Orthogonal Multiple Access (NOMA) methods can improve the spectral efficiency and capacity of communication networks. However, in the presence of nonlinear distortions or synchronization issues, the orthogonality of user signals within a CDMA group is disrupted, leading to inter-channel interference and a reduction in interference immunity as the number of users increases. This must be taken into account when analyzing the interference immunity in broadband radio communication networks. The paper presents simulation results demonstrating the possibility of using orthogonal synchronous code division multiple access in combination with non-orthogonal multiple access, where the system's interference immunity is determined solely by the characteristics of NOMA. The influence of power distribution among users on the network's interference immunity, depending on their distance, is shown. For the analysis, mathematical models and MATLAB implementations were used, enabling the study of key system parameters, including bit error rate (BER), capacity, and power allocation strategies. The results demonstrate that the proposed approach allows for effective analysis and optimization of NOMA systems, taking into account the impact of nonlinear distortions and power distribution. Examples of calculations are provided, confirming the feasibility of using NOMA in broadband radio communication networks.
Keywords: non-Orthogonal Multiple Access (NOMA), spectral efficiency, interference immunity, nonlinear distortions, power allocation, radio networks
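A compact sketch of the kind of simulation described: a two-user power-domain NOMA downlink with BPSK, where the far user decodes directly and the near user applies successive interference cancellation (power fractions, SNR, and the noise scaling are simplifying illustrative assumptions, not the paper's model):

```python
# BER of a two-user power-domain NOMA link with SIC at the near user.
import numpy as np

rng = np.random.default_rng(1)
n, p_far, p_near, snr_db = 100_000, 0.8, 0.2, 10.0
b_far, b_near = rng.integers(0, 2, n), rng.integers(0, 2, n)
s = np.sqrt(p_far) * (2 * b_far - 1) + np.sqrt(p_near) * (2 * b_near - 1)

noise = rng.normal(0, np.sqrt(10 ** (-snr_db / 10) / 2), n)
r = s + noise

b_far_hat = (r > 0).astype(int)                       # far user: direct decision
sic = r - np.sqrt(p_far) * (2 * b_far_hat - 1)        # near user: cancel far signal
b_near_hat = (sic > 0).astype(int)

print("BER far: ", np.mean(b_far_hat != b_far))
print("BER near:", np.mean(b_near_hat != b_near))
```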
DOI: 10.26102/2310-6018/2025.50.3.027
In the context of the increasing complexity of managing national projects aimed at achieving the National Development Goals of the Russian Federation, an urgent task is to automate the analysis of the relationships between the activities planned within these projects and the indicators that reflect the degree of achievement of the projects' objectives. Traditional manual document processing is labor-intensive, subjective, and time-consuming, which necessitates the development of intelligent decision support systems. This article presents an approach to automating the analysis of national project activities and indicators that automatically detects and verifies semantic "activity-indicator" links in national project documents, significantly increasing the efficiency of analytical work. The approach is based on a Retrieval-Augmented Generation (RAG) system that combines a locally adapted language model with vector search technologies. The work demonstrates that integrating the RAG approach with vector search and the project ontology achieves the required accuracy and relevance of the analysis. The system is particularly valuable not only for generating interpretable justifications for the identified links, but also for identifying key activities that affect the achievement of indicators across several national projects at once, including activities whose impact on these indicators is not obvious. The proposed solution opens up new opportunities for the digitalization of public administration and can be adapted to other tasks, such as identifying risks in the implementation of activities and generating new activities.
Keywords: RAG systems, large language models, national projects, semantic search, automation, national goals, artificial intelligence in public administration
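A minimal sketch of the retrieval step, with a hypothetical embed() standing in for the locally adapted model's encoder; documents are ranked by cosine similarity before being passed to the generator:

```python
# Toy vector search over project documents; embed() is a placeholder.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for the local embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

docs = ["Activity: build 50 rural clinics", "Indicator: life expectancy growth"]
index = np.stack([embed(d) for d in docs])

query = "Which activities affect the life expectancy indicator?"
scores = index @ embed(query)
print(docs[int(np.argmax(scores))])   # top document fed into the RAG prompt
```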
DOI: 10.26102/2310-6018/2025.50.3.040
This paper presents a procedure for dynamically modifying the binary encoding scheme in a genetic algorithm (GA), enabling adaptive adjustment of the search space during the algorithm’s execution. In the proposed approach, the discretization step for each coordinate is updated from generation to generation based on the current boundaries of regions containing high-quality solutions and the density of individuals within them. For each such region, the number of bits in the binary string representing solutions is determined according to the number of encoded points, after which the discretization step is recalculated. The encoding scheme is restructured in a way that ensures the correctness of genetic operators in the presence of discontinuities in the search space, preserves the fixed cardinality of the solution set at each generation, and increases the precision of the solutions due to the dynamic adjustment of the discretization step. Experimental results on multimodal test functions such as Rastrigin and Styblinski–Tang demonstrate that the proposed GA modification progressively refines the search area during evolution, concentrating solutions around the global extrema. For the Rastrigin function, initially fragmented regions gradually focus around the global maximum. In the Styblinski–Tang case, the algorithm shifts the search from an intentionally incorrect initial area toward one of the global optima.
Keywords: adaptive encoding, genetic algorithm, discretization, multimodal optimization, search space
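A sketch of the re-encoding rule described above: the bit width per coordinate follows from the number of points to encode in a region, after which the effective discretization step is recalculated (the helper names are illustrative):

```python
# Bit width and refined discretization step for a narrowed search region,
# plus decoding of a binary chromosome segment back to a coordinate.
import math

def encoding(lo: float, hi: float, n_points: int):
    bits = max(1, math.ceil(math.log2(n_points)))
    step = (hi - lo) / (2**bits - 1)      # refined step for the narrowed region
    return bits, step

def decode(bitstring: str, lo: float, step: float) -> float:
    return lo + int(bitstring, 2) * step

bits, step = encoding(-1.2, -0.8, n_points=64)   # region found around an optimum
print(bits, step)                                 # 6 bits, step ~0.0063
print(decode("100000", -1.2, step))
```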
DOI: 10.26102/2310-6018/2025.50.3.024
The growing volume of processed data and the widespread adoption of cloud technologies have made efficient task distribution in high-load computing systems a critical challenge in modern computer science. However, existing solutions often fail to account for resource heterogeneity, dynamic workload variations, and multi-objective optimization, leaving gaps in achieving optimal resource utilization. This study aims to address these limitations by proposing a hybrid load-balancing algorithm that combines the strengths of Artificial Bee Colony (ABC) and Max-Min scheduling strategies. The research employs simulation in the CloudSim environment to evaluate the algorithm’s performance under varying workload conditions (100 to 5000 tasks). Tasks are classified into "light" and "heavy" based on their MIPS requirements, with ABC handling lightweight tasks for rapid distribution and Max-Min managing resource-intensive tasks to minimize makespan. Comparative analysis against baseline algorithms (FCFS, SJF, Min-Min, Max-Min, PSO, and ABC) demonstrates the hybrid approach’s superior efficiency, particularly in large-scale and heterogeneous environments. Results show a 15–30% reduction in average task completion time at high loads (5000 tasks), confirming its adaptability and scalability. The study concludes that hybrid algorithms, integrating heuristic and metaheuristic techniques, offer a robust solution for dynamic cloud environments. The proposed method bridges the gap between responsiveness and strategic resource allocation, making it viable for real-world deployment in data centers and distributed systems. The practical significance of the work lies in increasing energy efficiency, reducing costs and ensuring quality of service (QoS) in cloud computing.
Keywords: cloud computing, scheduling, task allocation, virtual machines, hybrid algorithm, load balancing, optimization, cloudSim
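A sketch of the hybrid dispatch rule: tasks are split into "light" and "heavy" by a MIPS threshold (value illustrative); heavy tasks go through Max-Min, assigning the largest task to the VM that finishes it earliest, while light tasks would be routed to the ABC scheduler:

```python
# Max-Min assignment of heavy tasks; light tasks are left for the ABC phase.
def max_min(tasks: list[float], vm_speed: list[float]) -> list[int]:
    ready = [0.0] * len(vm_speed)                  # per-VM ready time
    assignment = [0] * len(tasks)
    for i in sorted(range(len(tasks)), key=lambda i: -tasks[i]):
        finish = [ready[v] + tasks[i] / vm_speed[v] for v in range(len(vm_speed))]
        v = min(range(len(vm_speed)), key=lambda v: finish[v])
        ready[v] = finish[v]
        assignment[i] = v
    return assignment

THRESHOLD = 500.0                                   # MIPS cutoff (illustrative)
tasks = [120.0, 2500.0, 900.0, 80.0, 4000.0]
heavy = [t for t in tasks if t >= THRESHOLD]
print(max_min(heavy, vm_speed=[1000.0, 500.0]))     # light tasks -> ABC scheduler
```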
DOI: 10.26102/2310-6018/2025.50.3.035
The article explores modern methods for automatic detection of atypical (anomalous) musical events within a musical sequence, such as unexpected harmonic shifts, uncharacteristic intervals, rhythmic disruptions, or deviations from musical style, aimed at automating this process and optimizing specialists' working time. The task of anomaly detection is highly relevant in music analytics, digital restoration, generative music, and adaptive recommendation systems. The study employs both traditional features (Chroma Features, MFCC, Tempogram, RMS-energy, Spectral Contrast) and advanced sequence analysis techniques (self-similarity matrices, latent space embeddings). The source data consisted of diverse MIDI corpora and audio recordings from various genres, normalized to a unified frequency and temporal scale. Both supervised and unsupervised learning methods were tested, including clustering, autoencoders, neural network classifiers, and anomaly isolation algorithms (isolation forests). The results demonstrate that the most effective approach is a hybrid one that combines structural musical features with deep learning methods. The novelty of this research lies in a comprehensive comparison of traditional and neural network approaches for different types of anomalies on a unified dataset. Practical testing has shown the proposed method's potential for automatic music content monitoring systems and for improving the quality of music recommendations. Future work is planned to expand the research to multimodal musical data and real-time processing.
Keywords: musical sequence, anomaly, tempogram, musical style, MFCC, chroma, autoencoder, music anomaly detection
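A sketch of the unsupervised branch: frame-level MFCC and chroma features from librosa, with an isolation forest flagging frames that deviate from the piece's prevailing timbral and harmonic profile (the bundled librosa example recording stands in for the corpus):

```python
# Frame-level audio features + isolation forest for musical anomaly flags.
import numpy as np
import librosa
from sklearn.ensemble import IsolationForest

y, sr = librosa.load(librosa.ex("trumpet"))        # bundled example recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
frames = np.vstack([mfcc, chroma]).T               # (n_frames, 25)

labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(frames)
anomalous = np.flatnonzero(labels == -1)
print(f"{len(anomalous)} anomalous frames of {len(frames)}")
```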
DOI: 10.26102/2310-6018/2025.50.3.029
The relevance of the study is due to the need to increase the efficiency of agent training under conditions of partial observability and limited interaction, which are typical for many real-world tasks in multiagent systems. In this regard, the present article is aimed at the development and analysis of a hybrid approach to agent training that combines the advantages of gradient-based and evolutionary methods. The main method of the study is a modified Advantage Actor-Critic (A2C) algorithm, supplemented with elements of evolutionary learning — crossover and mutation of neural network parameters. This approach allows for a comprehensive consideration of the problem of agent adaptation in conditions of limited observation and cooperative interaction. The article presents the results of experiments in an environment with two cooperative agents tasked with extracting and delivering resources. It is shown that the hybrid training method provides a significant increase in the effectiveness of agent behavior compared to purely gradient-based approaches. The dynamics of the average reward confirm the stability of the method and its potential for more complex multiagent interaction scenarios. The materials of the article have practical value for specialists in the fields of reinforcement learning, multi-agent system development, and the design of adaptive cooperative strategies under limited information.
Keywords: reinforcement learning, evolutionary algorithms, multiagent system, a2C, LSTM, cooperative learning
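A sketch of the evolutionary component layered on top of A2C: uniform crossover and sparse Gaussian mutation applied directly to flattened network parameters (rates are illustrative):

```python
# Uniform crossover and Gaussian mutation over flattened policy weights.
import numpy as np

rng = np.random.default_rng(0)

def crossover(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    mask = rng.random(p1.shape) < 0.5
    return np.where(mask, p1, p2)

def mutate(params: np.ndarray, rate: float = 0.01, sigma: float = 0.02) -> np.ndarray:
    mask = rng.random(params.shape) < rate
    return params + mask * rng.normal(0.0, sigma, params.shape)

parent_a, parent_b = rng.normal(size=1000), rng.normal(size=1000)  # flattened weights
child = mutate(crossover(parent_a, parent_b))
print("entries differing from parent_a:", int(np.sum(child != parent_a)))
```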
DOI: 10.26102/2310-6018/2025.50.3.039
The central role of the infosphere in network-centric control systems for groups of mobile cyber-physical systems makes ensuring the functional reliability and survivability of information interaction systems fundamentally important. One factor in the functional reliability of information interaction systems is the structural reliability of data transmission systems. The work is devoted to constructing descriptive models of structural reliability indicators for mobile data transmission systems under destructive effects on network channels and nodes. Using simulation modeling, the influence of edge destruction in a random graph on network connectivity was studied as a function of the proportion of destroyed graph nodes. Features of the average values and the stability of this indicator for different characteristics of random graphs are revealed. The influence of the mobility of cyber-physical devices in a "swarm" group on the structural reliability indicators, namely the complexity and the unevenness of load distribution between the nodes of the data transmission system, is assessed. It is shown that using the ability of devices to move, a resource inherent to mobile groups of cyber-physical systems, is a way to counter destructive effects. As a result of node movement, the stability of the structural reliability indicators, the complexity of the structure and the unevenness of load distribution between network nodes, increases.
Keywords: network-centric control, mobile groups of cyber-physical devices, structural reliability of data transmission systems, descriptive models, destructive effects, countering destructive effects
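A sketch of the simulation idea: remove a growing fraction of nodes from a random graph and track the relative size of the largest connected component (graph parameters are illustrative):

```python
# Connectivity degradation under node destruction in an Erdos-Renyi graph.
import networkx as nx
import random

random.seed(0)
g0 = nx.erdos_renyi_graph(n=200, p=0.03, seed=0)

for frac in (0.1, 0.3, 0.5):
    g = g0.copy()
    g.remove_nodes_from(random.sample(list(g.nodes), int(frac * g0.number_of_nodes())))
    giant = max(nx.connected_components(g), key=len)
    print(f"destroyed {frac:.0%}: giant component = {len(giant) / g0.number_of_nodes():.2f}")
```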
DOI: 10.26102/2310-6018/2025.50.3.028
Modern computer graphics offers many visual effects for processing three-dimensional scenes during rendering. The burden of calculating these effects falls on user hardware, which forces a compromise between performance and image quality. In this regard, developing systems capable of automatically assessing the quality of three-dimensional rendering, and of images in general, becomes relevant. This relevance has two aspects. First, the ability to predict user reactions allows graphic applications to be customized more accurately. Second, understanding preferences can help optimize 3D scenes by identifying visual effects that can be disabled. More broadly, this poses the challenge of optimally managing the rendering process so that available hardware capabilities can be used to the fullest. An important task, therefore, is to model the 3D rendering process in a form that makes its optimization as simple as possible. The purpose of this study is to create such a model, allowing the expert evaluation stage to be performed automatically to determine the quality of three-dimensional rendering and to be used for optimal control of the rendering pipeline. A number of important issues that require special attention in the research are also discussed. The range of applications of the developed system spans the various spheres of human activity that involve three-dimensional modeling. Such a system can become a useful tool for both developers and users, which is especially important in education, video game development, virtual reality technologies, and other fields where realistic objects must be modeled or complex processes visualized.
Keywords: quadratic knapsack problem, multidimensional knapsack problem, artificial neural networks, three-dimensional rendering, user preference analysis, visual quality assessment, future technologies
DOI: 10.26102/2310-6018/2025.50.3.018
Based on systems engineering principles, the technological aspects of designing a prototype electric vehicle with a combined control system are considered; the system allows simple and safe switching from manual mode to remote control (via a radio channel) or software control. The design and physical implementation of the vehicle rest on prototyping, machining, and programming technologies that are interrelated throughout the entire structure. The project is implemented on the basis of the Bigo.Land kit (in its mechanical and mechatronic parts) and ArduPilot/Pixhawk (in its software and hardware parts). The basic Bigo.Land kit is complemented by a two-way overrunning clutch, which, together with the software, allows the pilot to take part in the control process when necessary. The result of the work is a fully functional prototype of an electric vehicle with a sensing system and functions of unmanned control and autonomous behavior, as well as its virtual (CAD/CAE) model and software in the form of ArduPilot/Pixhawk flight controller firmware that extends and complements the standard functionality of the base ArduPilot software. The project and the results obtained can be useful to specialists developing and operating unmanned mobile vehicles, as well as to educational institutions implementing pedagogical technologies based on project-based learning.
Keywords: unmanned electric vehicle, technological process aspects of design, combined control, two-way overrunning clutch, prototyping, system engineering, project-based learning
DOI: 10.26102/2310-6018/2025.50.3.032
The relevance of the study is due to the growing need for a highly accurate and interpretable emotion recognition system based on video data, which is crucial for the development of human-centered technologies in education, medicine, and human–computer interaction systems. In this regard, the article aims to identify the differences and application prospects of the local DeepFace solution and the cloud-based GPT-4o (OpenAI) model for analyzing short video clips with emotional expressions. Methodologically, the study is based on empirical comparative analysis: a moving average method was used to smooth the time series of emotional assessments and to evaluate stability and cognitive interpretability. The results showed that DeepFace provides stable local processing and high resistance to artifacts, while GPT-4o demonstrates the ability for complex semantic interpretation and high sensitivity to context. The effectiveness of a hybrid approach combining computational autonomy and interpretative flexibility is substantiated. Thus, the synergy of local and cloud solutions opens up prospects for creating more accurate, adaptive, and scalable affective analysis systems. The materials of the article are of practical value to specialists in the fields of affective computing, interface design, and cognitive technologies.
Keywords: affective computing, emotion recognition, video data analysis, deepFace, GPT-4o language model, hybrid analysis system, semantic text analysis, multimodal interaction, neural network interpretability, cognitive technologies
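A sketch of the smoothing step shared by both pipelines: a moving average over per-frame emotion scores (a synthetic series stands in for DeepFace or GPT-4o outputs):

```python
# Moving-average smoothing of a per-frame emotion score time series.
import numpy as np

def moving_average(x: np.ndarray, window: int = 5) -> np.ndarray:
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

scores = np.clip(0.6 + 0.3 * np.sin(np.linspace(0, 6, 60))
                 + np.random.default_rng(0).normal(0, 0.1, 60), 0, 1)
print(moving_average(scores)[:5].round(2))
```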
DOI: 10.26102/2310-6018/2025.50.3.023
The issue of wireless transmission of information via radio communication is raised. It is indicated that the key parameter of the radio channel quality is the signal-to-noise ratio at the input of the receiving device. The importance of ensuring a high signal-to-noise ratio in radio transmitting and receiving devices and systems is emphasized. An analytical review and comparative analysis of common methods for determining the signal-to-noise ratio at the input of the receiving device is carried out. Theoretical and practical methods for determining the signal-to-noise ratio are considered, in particular, the method of complex envelope, the method of spectral analysis, as well as the method of calculating losses in free space. Their advantages and disadvantages are revealed. The mathematical and methodological apparatus of the considered methods is described. A brief description of the algorithms for measuring the signal-to-noise ratio in these methods is given. Information about the conducted experimental studies of the methods is provided. The initial data and the results of the experiment are described. The results of a comparative analysis of theoretical and practical methods are presented according to the criterion of accuracy in estimating the signal-to-noise ratio at the input of the receiving device. The main reasons and factors that reduce the accuracy of the theoretical assessment of the signal-to-noise ratio compared with the practical measurement are analyzed. Possible ways to increase the value of the signal-to-noise ratio in theoretical methods are proposed.
Keywords: wireless communication, radio signal, signal-to-noise ratio, complex envelope method, spectral analysis method, loss calculation method
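A sketch of the spectral-analysis method: estimating SNR from a Welch periodogram by separating the tone's spectral peak from the noise floor (signal parameters are illustrative):

```python
# Welch-periodogram SNR estimate for a noisy sinusoid.
import numpy as np
from scipy.signal import welch

fs, f0, n = 8000.0, 1000.0, 1 << 14
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + np.random.default_rng(0).normal(0, 0.1, n)

f, pxx = welch(x, fs=fs, nperseg=1024)
k = np.argmax(pxx)                       # signal bin
signal = pxx[k - 1:k + 2].sum()          # peak and its immediate neighbors
noise = pxx.sum() - signal               # remaining bins form the noise floor
print(f"SNR ~ {10 * np.log10(signal / noise):.1f} dB")
```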
DOI: 10.26102/2310-6018/2025.50.3.030
In this study, a new mechanism for generating training data for a neural network for the task of image-based code generation is proposed. In order for a system to be able to perform the task assigned to it, it must be trained. The initial dataset that is provided with the pix2code system allows the system to be trained, but it relies on the data that is provided in the domain-specific dictionary. Expanding or changing words in the dictionary does not affect the data set in any way, which limits the flexibility of the system's application by not allowing for the rules that may apply to the enterprise to be taken into account. Some studies claim to have created their own dataset, but its lack of public access makes it difficult to assess the complexity of the images it contains. To solve this problem, within the framework of this study, a submodule was developed that allows, based on a modified dictionary of a domain-specific language, to create a custom training dataset consisting of an image-source code pair corresponding to this image. To test the functionality of the created dataset, the modified pix2code system performed training and was then able to predict the code on test examples.
Keywords: code generation, image, machine learning, dataset, source code
DOI: 10.26102/2310-6018/2025.50.3.014
This paper considers a method for increasing the search speed in hash tables with links when performance is limited by the throughput of one of the interfaces between the storage levels (L1, L2, and L3 caches, main memory). To reduce the impact of this limitation, an algorithm is proposed for optimal use of the cache line, the minimum portion of information transferred between storage levels. The paper shows that for a specific problem and architecture there is an optimal size of the information stored about a key in the hash table (the key representation); equations are given for its numerical and approximate analytical calculation for the cases of a key present in and absent from the table. The particular case of using part of a key as its representation is considered, and an algorithm is proposed for handling inconvenient key representation sizes that are not powers of two. The presented calculation results confirm the increase in search performance when using the calculated key representation size compared to other options. The experimental result confirms the assumption that the associated complication of the code has virtually no effect on performance due to partial processor idleness. The work assumes collision resolution via chains, but similar calculations should be applicable to other methods, given their specific features.
Keywords: hash, hash-table, open addressing, chain, collision, memory level parallelism, cache, cache-line, cache miss
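An illustrative back-of-envelope model, not the paper's equations: with r-byte key representations, a 64-byte cache line holds 64/r candidate slots, so shorter representations trade fewer line fetches against occasional false matches that force fetching the full key:

```python
# Toy cost model for choosing a key-representation size (all constants
# illustrative): lines scanned per lookup plus false-match penalty.
CACHE_LINE = 64
FULL_KEY_COST = 1.0           # extra line fetch to confirm a candidate match

def expected_lines(r: int, chain_len: float = 4.0) -> float:
    per_line = CACHE_LINE // r
    lines_scanned = chain_len / per_line
    false_match = chain_len * 2 ** (-8 * r)   # chance a partial key collides
    return lines_scanned + false_match * FULL_KEY_COST

for r in (1, 2, 4, 8, 16):
    print(f"r={r:2d} bytes -> expected lines ~ {expected_lines(r):.3f}")
```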
DOI: 10.26102/2310-6018/2025.50.3.013
The paper proposes a new method for suppressing artifacts generated during image blending, based on differential activation. The task of image blending arises in many applications; this work addresses it from the perspective of face attribute editing. Existing artifact suppression approaches have significant limitations: they employ differential activation to localize editing regions followed by feature merging, which leads to the loss of distinctive details (e.g., accessories, hairstyles) and degradation of background integrity. The state-of-the-art artifact suppression method uses an encoder-decoder architecture with hierarchical aggregation of StyleGAN2 generator feature maps, resulting in texture distortion, excessive sharpening, and aliasing effects. We propose a method that combines traditional image processing algorithms with deep learning techniques, integrating Poisson blending and the MAResU-Net neural network. Poisson blending is employed to create artifact-free fused images, while the MAResU-Net network learns to map artifact-contaminated images to their clean versions. This forms a processing pipeline that converts images with blending artifacts into clean, artifact-free outputs. On the first 1000 images of the CelebA-HQ database, the proposed method outperforms the existing approach across five metrics: PSNR: +17.11 % (from 22.24 to 26.06), SSIM: +40.74 % (from 0.618 to 0.870), MAE: −34.09 % (from 0.0511 to 0.0338), LPIPS: −67.16 % (from 0.3268 to 0.1078), and FID: −48.14 % (from 27.53 to 14.69). The method achieves these results with 26.3 million parameters (6.6× fewer than the 174.2 million of the comparable method) and 22 % faster processing. Crucially, it preserves accessory details, background elements, and skin textures that are typically lost by existing methods, confirming its practical value for real-world facial editing applications.
Keywords: deep learning, facial attribute editing, blending artifact suppression network, image-to-image translation, differential activation, MAResU-Net, generative adversarial network (GAN)
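A sketch of the Poisson blending stage used to produce artifact-free training targets, via OpenCV's seamlessClone, which implements Poisson image editing (file names are hypothetical):

```python
# Poisson blending of an edited face region into the original image.
import cv2
import numpy as np

src = cv2.imread("edited_face.png")      # hypothetical edited region
dst = cv2.imread("original.png")         # hypothetical original image
mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
center = (dst.shape[1] // 2, dst.shape[0] // 2)

blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)      # paired with the artifact image for training
```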
DOI: 10.26102/2310-6018/2025.50.3.010
The relevance of this study is driven by the rapid growth of unstructured textual data in the digital environment and the pressing need for its systematic analysis. The lack of universal and easily reproducible methods for grouping textual information complicates interpretation and limits practical application across various domains, including healthcare, education, marketing, and the corporate sector. In response to this challenge, the present article aims to identify key algorithmic approaches to clustering unstructured texts and to analyze software systems implementing these methods. The primary research strategy is based on a comparative and analytical approach that enables the generalization and classification of contemporary machine learning algorithms applied to text data processing. The study reviews both traditional clustering techniques and advanced architectures incorporating unsupervised learning, numerical vector representations, and neural network models. Software tools are examined with a focus on their levels of accuracy, interpretability, and adaptability. As a result, the study systematizes criteria for selecting methods according to specific tasks, highlights limitations of existing approaches, and outlines promising directions for further development. The findings are intended to support professionals engaged in designing and deploying software solutions for the automatic processing and analysis of textual information.
Keywords: text clustering, unstructured data, topic modeling, machine learning, vector representations, unsupervised algorithms, software frameworks, text mining
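A minimal sketch of the classical pipeline such surveys cover: TF-IDF vectors clustered with k-means, a common baseline against which neural-embedding approaches are compared:

```python
# TF-IDF + k-means baseline for unsupervised text clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

texts = [
    "patient symptoms and diagnosis notes",
    "clinical treatment outcomes report",
    "quarterly marketing campaign results",
    "customer engagement and ad spend",
]
X = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # medical vs. marketing texts should separate
```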
DOI: 10.26102/2310-6018/2025.50.3.012
In recent years, the development of virtual reality (VR) technologies has been closely associated with the introduction of machine learning (ML) methods, which are used to increase the comfort, efficiency, and effectiveness of VR. ML algorithms can analyze interaction data, recognize patterns, and adapt interaction scenarios to the user's behavior and emotional state. The article analyzes the key modern areas of the joint use of VR and ML that have already been tested in practice and have shown fairly high efficiency. One of these areas is improving interaction in VR, including improving the quality of VR systems, more realistic graphics, adapting content to the user, and accurate motion tracking. The article considers the problems of applying ML in VR technologies in education, psychotherapy, rehabilitation, medicine, traffic management, technologies for the generation, transmission, distribution, storage, and use of electricity, and other areas. A brief analysis of ML tools used in VR is also provided, among which generative neural networks capable of creating dynamic virtual environments stand out. The study shows that the combination of VR and ML opens up new possibilities for creating intelligent and interactive systems and can lead to significant breakthroughs not only in VR but also in related technology areas.
Keywords: virtual reality technologies, machine learning, machine learning efficiency, adaptive algorithms, education, medicine, rehabilitation