DOI: 10.26102/2310-6018/2025.51.4.009
The rapid development of automation tools for programming is a key factor in the digital transformation of society. The purpose of this work is a comprehensive analysis of the evolution of automation tools, including high-level programming languages, structured and object-oriented programming, integrated development environments, low-code/no-code platforms and large language models. The study examines the principles of operation of generative artificial intelligence, its capabilities and limitations, as well as the specifics of Russian solutions in this area. Particular attention is paid to the challenges associated with the widespread introduction of automation: problems of intellectual property, security of generated code, transformation of the programmer's role and adaptation of educational programs. A conclusion is made about the formation of a new paradigm of human-AI collaboration in software development. The practical significance of the work lies in providing developers and managers with structured information for making decisions on the implementation of automation tools, the choice of technologies and the assessment of associated risks.
Keywords: programming automation, generative artificial intelligence, large language models, history of programming, integrated development environments, low-code/no-code, DevOps, machine learning
DOI: 10.26102/2310-6018/2025.50.3.034
The article discusses a method for detecting DDoS attacks in digital ecosystems using tensor analysis and entropy metrics. Network traffic is formalized as a 4D tensor with the following dimensions: IP addresses, timestamps, request types, and countries of origin. The CP decomposition with rank 3 is used to analyze the data, which allows revealing hidden patterns in traffic. An algorithm for calculating the anomaly score (AS) is developed, which takes into account the factor loadings of the tensor decomposition and the entropy of time distributions. Experiments on real data have shown that the proposed method provides 92 % attack detection accuracy with a false positive rate of 1.2 %. Compared to traditional signature-based methods, the accuracy increased by 35 %, and the number of false positives decreased by 86 %. The method has proven effective in detecting complex low-rate attacks that are difficult to detect by standard methods. The results of the study can be useful for protecting various digital ecosystems, including financial services, telecommunication networks, and government platforms. The proposed approach expands the capabilities of network traffic analysis and can be integrated into modern cybersecurity systems. Further research could be aimed at optimizing the computational complexity of the algorithm and adapting the method to different types of network infrastructures.
Keywords: tensor analysis, DDoS attacks, cybersecurity, digital ecosystems, CP decomposition, entropy analysis, anomaly detection
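As a minimal illustration of the described pipeline, the sketch below builds a toy 4D traffic tensor, takes a rank-3 CP decomposition with tensorly, and combines factor loadings with temporal entropy into an anomaly score; the weighting and the exact score form are assumptions, not the paper's formula.

```python
# Minimal sketch: rank-3 CP decomposition of a 4D traffic tensor plus an
# entropy term over the time mode. Weights w1, w2 are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from scipy.stats import entropy

# Toy tensor: (IP address, timestamp, request type, country)
traffic = np.random.poisson(3.0, size=(50, 24, 4, 10)).astype(float)

weights, factors = parafac(tl.tensor(traffic), rank=3, n_iter_max=200)
ip_loadings = factors[0]              # IP-mode factor loadings, shape (50, 3)

# Entropy of each IP's time distribution: low entropy = bursty, suspicious
time_dist = traffic.sum(axis=(2, 3))                # (IP, time)
p = time_dist / time_dist.sum(axis=1, keepdims=True)
h = np.apply_along_axis(entropy, 1, p)

# Anomaly score: large loading magnitude, small temporal entropy
w1, w2 = 1.0, 1.0
score = w1 * np.linalg.norm(ip_loadings, axis=1) - w2 * h
suspects = np.argsort(score)[-5:]                   # top-5 candidate sources
```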
DOI: 10.26102/2310-6018/2025.51.4.007
Digitalization of education necessitates a formalized representation and systematic organization of the information flows that ensure effective interaction of participants in the educational process in the digital educational environment (DEE). The aim of the study is to model information flows based on an ontological representation of the interaction of a decision maker (DM) and feedback. An ontological model has been developed that captures the key classes and instances, the relationships between them, and the semantics of the information flows circulating between DEE components. The article presents a decomposition of an instance of the "adaptive feedback algorithm" class of the ontological model of information flows. Digital tools operate in a single circuit of the educational environment, implementing a continuous cycle of assessment, analysis, feedback and correction. An instance of the "unified test question bank" class, which includes artificial intelligence technologies for automated verification of free-form answers under streaming learning, enables variable, level-based assessment. Feedback implementation tools include an LMS, social networks and a virtual information and communication assistant. The ontological model shows how the tools added to the DEE are related when describing the information flows of the "DM – feedback" link. Applying the model makes it possible to structure and unify the description of educational processes while automating digital footprint analysis. The conclusion summarizes the findings, decomposing the ontological model using the example of the knowledge assessment process under digitalization and multistreaming, and identifying relations in the form of prerequisites between class instances.
Keywords: ontology, digital educational environment, distance learning system, information flows, educational technologies, class instances
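A compact sketch of how classes and instances named in the abstract could be expressed as an RDF graph with rdflib; the namespace URI, the hasPrerequisite property and the instance names are hypothetical.

```python
# Illustrative sketch: two classes and instances from the abstract as RDF.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

DEE = Namespace("http://example.org/dee#")   # hypothetical namespace
g = Graph()

g.add((DEE.AdaptiveFeedbackAlgorithm, RDF.type, RDFS.Class))
g.add((DEE.UnifiedTestQuestionBank, RDF.type, RDFS.Class))

g.add((DEE.afa1, RDF.type, DEE.AdaptiveFeedbackAlgorithm))
g.add((DEE.bank1, RDF.type, DEE.UnifiedTestQuestionBank))
# A prerequisite relation between instances, as in the "DM - feedback" link
g.add((DEE.afa1, DEE.hasPrerequisite, DEE.bank1))
g.add((DEE.afa1, RDFS.label, Literal("adaptive feedback algorithm")))

print(g.serialize(format="turtle"))
```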
DOI: 10.26102/2310-6018/2025.51.4.002
Unmanned trains are a key component of the next level of railway automation. Launching locomotives in unmanned mode requires the development of reliable computer vision systems based on artificial intelligence technologies. The paper presents a method for improving the quality of training convolutional neural networks to detect railway infrastructure objects. The reliability of visual object detection by a computer vision system can be improved through algorithmic expansion of training datasets. The proposed method accounts for the variability of weather conditions in which identical objects must be detected and generates image modifications with added rain, snow or fog effects. The original dataset included 21,700 annotated images covering 7 object classes. From it, an extended set of 65,100 images was formed using the developed method. To evaluate the effectiveness of the proposed approach, the advanced YOLOv11 model was trained on the original and extended datasets, and the results were compared using the F1-measure and mean average precision (mAP) metrics. The computational experiments confirm that the extended dataset improves training quality. In particular, the F1-measure of the YOLO model trained on the original dataset was 0.72, while on the extended dataset it reached 0.90. The mAP (50–95) metric increased from 0.67 on the original dataset to 0.83 on the extended one. The metrics were compared at the same confidence threshold of 0.8. The developed method has been implemented in a hardware and software system that is ready for testing as part of an integrated control and safety system for freight trains.
Keywords: machine vision, machine learning, convolutional neural networks, YOLOv11, rail transport automation, unmanned transport
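The augmentation idea can be sketched with the albumentations library, which ships rain, snow and fog transforms; the parameter values below are illustrative, and the paper's own generator may differ.

```python
# Minimal sketch of weather-effect dataset expansion with albumentations.
import cv2
import albumentations as A

weather = [
    A.RandomRain(brightness_coefficient=0.9, blur_value=3, p=1.0),
    A.RandomSnow(brightness_coeff=1.2, p=1.0),
    A.RandomFog(alpha_coef=0.3, p=1.0),
]

image = cv2.imread("railway_scene.jpg")   # any annotated frame (placeholder)
variants = [image] + [t(image=image)["image"] for t in weather]
# Weather overlays do not move objects, so the original bounding boxes stay
# valid: one annotated frame yields several weather-specific samples
# (21,700 originals -> 65,100 images in the paper).
```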
DOI: 10.26102/2310-6018/2025.50.3.031
Large modern companies producing mass-market products or providing mass services operate under intense competition and typically increase advertising spending, which does not always bring the expected effect. There is a growing need for tools for precise audience segmentation that can increase the effectiveness of marketing communications. Traditional response prediction models cannot determine whether a client's behavior has changed under the influence of a marketing impact, which limits constructive analysis of marketing campaigns. This article studies uplift modeling as a tool for assessing the incremental effect of communication on positive responses and for optimizing targeting. The results demonstrate significant advantages of uplift modeling for identifying client segments with maximum sensitivity to the impact. A comparative analysis of approaches to building uplift models (SoloModel, TwoModel, Class Transformation, Class Transformation with Regression), based on specialized uplift metrics (uplift@k, Qini AUC, Uplift AUC, weighted average uplift, Average Squared Deviation), demonstrates the strengths and weaknesses of each approach. The study is based on the open X5 RetailHero Uplift Modeling Dataset provided by X5 Retail Group for research on uplift modeling methods in retail.
Keywords: uplift modeling, machine learning, marketing communications, targeting, response evaluation, uplift model quality metrics
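The compared approaches and metrics are available in the scikit-uplift (sklift) library; the sketch below fits SoloModel and TwoModels on synthetic stand-in data (the study itself uses the X5 RetailHero dataset) and scores them with uplift@k and Qini AUC.

```python
# Sketch of two of the compared uplift approaches on toy data.
import numpy as np
from sklift.models import SoloModel, TwoModels
from sklift.metrics import uplift_at_k, qini_auc_score
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for the feature/target/treatment columns
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
treatment = rng.integers(0, 2, size=2000)
y = (rng.random(2000) < 0.3 + 0.1 * treatment * (X[:, 0] > 0)).astype(int)
X_tr, X_val = X[:1500], X[1500:]
y_tr, y_val = y[:1500], y[1500:]
t_tr, t_val = treatment[:1500], treatment[1500:]

solo = SoloModel(GradientBoostingClassifier())
solo.fit(X_tr, y_tr, t_tr)

two = TwoModels(estimator_trmnt=GradientBoostingClassifier(),
                estimator_ctrl=GradientBoostingClassifier(),
                method="vanilla")
two.fit(X_tr, y_tr, t_tr)

for name, m in [("SoloModel", solo), ("TwoModels", two)]:
    uplift = m.predict(X_val)
    print(name,
          uplift_at_k(y_val, uplift, t_val, strategy="overall", k=0.3),
          qini_auc_score(y_val, uplift, t_val))
```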
DOI: 10.26102/2310-6018/2025.50.3.042
This paper analyzes the features of the Modbus protocol, focusing on its vulnerability in the context of security and protection of transmitted information. The main risks of using Modbus in industrial automation and process control systems (APCS) are considered, including the lack of encryption and authentication mechanisms, which makes the protocol vulnerable to attacks such as data interception and unauthorized access; options for solving the node verification problem are also discussed. Modbus is one of the most common industrial protocols, actively used in automation and control of various technological processes; it is easy to implement and widespread, which makes it attractive across industries. However, the RTU mode of the Modbus protocol is vulnerable to man-in-the-middle and substitution attacks, which carries potential risks for industrial enterprises using the protocol in production. The vulnerability stems from the absence of built-in authentication and verification mechanisms for the nodes involved in data transmission, creating risks of unauthorized access and substitution of information during exchange. The article proposes a method for increasing the confidentiality of interaction between nodes by introducing cryptographic operations that verify the authenticity of the source of transmitted data: a lightweight cryptographic algorithm based on the XOR operation with a 16-bit secret. The advantages of the proposed method are compatibility with the existing Modbus implementation, minimal impact on system performance, and no need for deep modification of the architecture, at the cost of a slight increase in data transmission latency (less than 2 %) and processor time.
Keywords: Modbus RTU, man-in-the-middle, frame, cryptographic protection, industrial protocol
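A minimal sketch of the idea of a lightweight XOR-based authentication tag over a Modbus RTU frame, assuming a 16-bit pre-shared secret; the framing details and the key value are illustrative, not the paper's exact scheme.

```python
# XOR-fold the frame into a 16-bit word and mix in a pre-shared secret.
SECRET = 0xA5C3  # 16-bit pre-shared key, hypothetical value

def xor_tag(frame: bytes, secret: int = SECRET) -> int:
    """Fold the frame into a 16-bit word and mix in the secret."""
    acc = 0
    for i in range(0, len(frame), 2):
        word = frame[i] << 8 | (frame[i + 1] if i + 1 < len(frame) else 0)
        acc ^= word
    return acc ^ secret

def send(frame: bytes) -> bytes:
    return frame + xor_tag(frame).to_bytes(2, "big")

def verify(packet: bytes) -> bool:
    frame, tag = packet[:-2], int.from_bytes(packet[-2:], "big")
    return xor_tag(frame) == tag

request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])  # slave 1, read 2 regs
assert verify(send(request))
```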
DOI: 10.26102/2310-6018/2025.50.3.036
Recognition of license plates (LP) is one of the key tasks for intelligent transport systems. In practice, factors such as blur, noise, adverse weather conditions or shooting from a long distance lead to low-resolution (LR) images, which significantly reduces the reliability of recognition. A promising solution to this problem is the use of super-resolution (SR) methods capable of restoring high-resolution (HR) images from the corresponding LR versions. This paper is devoted to the research and development of a software package using neural network super-resolution models to improve the quality and accuracy of LP recognition. The software package implements the YOLO (You Only Look Once) neural network architectures for object detection, the SORT (Simple Online and Realtime Tracking) object tracking algorithm and super-resolution models to improve LP images. This approach ensures high accuracy of LP recognition even when working with images obtained in difficult shooting conditions characterized by low quality or resolution. The experimental results demonstrate that the proposed approach can improve the accuracy of LP recognition in low-resolution images. The image restoration quality was assessed using the PSNR and SSIM metrics, which confirmed the improvement of the visual characteristics of LP for the most effective models. The developed software package has wide potential for practical application and can be integrated into various systems, for example, access control to protected areas, traffic monitoring and analysis, automation of parking complexes, as well as solutions for ensuring public safety. The flexibility of the implemented architecture allows the system to be adapted to specific requirements with modifications, which emphasizes its versatility and practical significance.
Keywords: license plate recognition, computer vision, deep neural networks, super-resolution, object detection, object tracking
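The restoration-quality assessment step can be sketched with scikit-image, which implements both metrics; the file names are placeholders.

```python
# Compare a super-resolved plate crop against its high-resolution reference.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = cv2.imread("plate_hr.png", cv2.IMREAD_GRAYSCALE)
sr = cv2.imread("plate_sr.png", cv2.IMREAD_GRAYSCALE)  # model output, same size

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
ssim = structural_similarity(hr, sr, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```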
DOI: 10.26102/2310-6018/2025.50.3.047
This article proposes an intelligent mivar decision-making system (MDMS) designed for the optimized distribution and transportation of cargo by groups of warehouse robots. This mivar decision-making system integrates three groups of different warehouse robots: the loader robot (RP), the transporter robot (RT), and the unloader robot (RR). The selection and determination of the state of each robot (loader robot, transporter robot, and unloader robot) are based on corresponding calculations performed using specially developed algorithms. These algorithms are based on a series of key equation systems, such as the transporter robot equation system, the loader robot equation system, the unloader robot equation system, and the command variable system. The equation systems take into account the robot's state, operational capability, ability to complete cargo transportation, compatibility for cargo transportation, etc. Additionally, the Manhattan distance is considered, which helps determine the robot's ability to complete its task. The article provides a detailed description of the equation systems and calculation algorithms, as well as a formalized description of the domain in which the mivar logical artificial intelligence system operates. The logical schematic of the MDMS system and decision-making rules are also outlined, which aid in robot selection, making the system more efficient. Experimental results show that this system can function normally according to pre-established logic and objectives. It accurately completed all distribution tasks, demonstrating good stability and reliability.
Keywords: mivar, mivar decision-making systems, logical AI, distribution system, group of warehouse robots, loader robot, transporter robot, unloader robot
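One selection criterion from the abstract, the Manhattan distance from a candidate robot to the cargo combined with operability checks, can be sketched as follows; the robot records and grid coordinates are hypothetical.

```python
# Pick the operable, idle robot with the smallest Manhattan distance.
robots = [
    {"id": "RT-1", "pos": (2, 7), "operable": True,  "busy": False},
    {"id": "RT-2", "pos": (5, 1), "operable": True,  "busy": True},
    {"id": "RT-3", "pos": (9, 4), "operable": False, "busy": False},
]
cargo = (4, 3)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

candidates = [r for r in robots if r["operable"] and not r["busy"]]
best = min(candidates, key=lambda r: manhattan(r["pos"], cargo))
print(best["id"], manhattan(best["pos"], cargo))   # RT-1, distance 6
```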
DOI: 10.26102/2310-6018/2025.51.4.001
This paper examines the availability of satellite communications in the Arctic zone of the Russian Federation. It provides information on existing satellite communications systems, the number of which is currently limited due to sanctions and the geographic features of the region. After analyzing the actually available satellite communications systems, it is noted that satellite communications systems using the geostationary orbit (GEO) are currently the only option for providing data transmission services. An analysis of the problems typical of using the geostationary orbit in high-latitude conditions is given; an overview of Russian geostationary satellites and the conditions of their use in the Arctic is made, taking into account the coverage areas of the beams and frequency ranges. The result of calculating the geometric relationships when organizing communications between a satellite in GEO and earth stations in the Arctic region is given. For further study of the quality of communication in the northernmost parts of the region, the range of slant range and elevation angle values typical for the waters of the Northern Sea Route is calculated. The results of calculations of the required distance of the earth station from ground objects are presented, allowing for rational placement of the earth station both from the point of view of ensuring direct visibility of the satellite and the required elevation angle, and for reducing the noise temperature of the receiver.
Keywords: satellite communication, geostationary orbit, Arctic region, elevation angle, slant range
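The geometric relationships mentioned above follow from the standard spherical-Earth relations; the sketch below computes slant range and elevation angle for an example station on the Northern Sea Route and a GEO slot at 140° E.

```python
# Slant range and elevation angle from an Arctic earth station to GEO.
import numpy as np

R, r = 6378.0, 42164.0                 # Earth radius, GEO orbit radius, km

def geo_geometry(lat_deg, lon_deg, sat_lon_deg):
    lat = np.radians(lat_deg)
    dlon = np.radians(lon_deg - sat_lon_deg)
    cos_psi = np.cos(lat) * np.cos(dlon)   # central angle to subsatellite point
    slant = np.sqrt(R**2 + r**2 - 2 * R * r * cos_psi)
    elev = np.degrees(np.arctan2(cos_psi - R / r, np.sqrt(1 - cos_psi**2)))
    return slant, elev

# Example: a point on the Northern Sea Route under a 140 deg E GEO slot
slant, elev = geo_geometry(lat_deg=73.0, lon_deg=140.0, sat_lon_deg=140.0)
print(f"slant range = {slant:.0f} km, elevation = {elev:.1f} deg")  # ~8 deg
```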
DOI: 10.26102/2310-6018/2025.50.3.038
The article presents the results of a study aimed at expanding the theoretical basis of real-time computing. The issues considered include: defining indicators of computational complexity in real time, a methodology for their quantitative assessment, identifying ways to achieve real-time computability of algorithms, and formalizing approaches to the optimal technical implementation of real-time computing systems. The research is based on existing concepts in algorithm theory and computation theory, including real-time computation. Significant new scientific results include: the introduction, alongside the known indicators of temporal and spatial computational complexity, of an additional indicator of configuration computational complexity needed to assess computational complexity in real time; confirmation that temporal, spatial, and configuration complexity can be controlled within a given algorithm functional solely by changing the number of computation execution threads; theoretical justification that the execution time of the configuration algorithm can be reduced from exponential to polynomial or even linear by condensing the initial algorithm graph into strongly connected components of a set of actor functions, which yields a directed acyclic graph whose topological sorting can be performed in linear time; and determination of approaches to the optimal technical implementation of an algorithm with a given configuration, including as an integrated circuit with wiring optimized by solving the rectilinear Steiner problem.
Keywords: computational complexity, real time, computability, configuration, search algorithm, actor functions, portability
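The condensation argument can be illustrated with networkx: collapsing strongly connected components always yields a DAG, whose topological order is found in linear time.

```python
# Condense a cyclic algorithm graph, then topologically sort the DAG.
import networkx as nx

g = nx.DiGraph([(1, 2), (2, 3), (3, 1),      # a cycle of actor functions
                (3, 4), (4, 5), (5, 4),      # a second SCC
                (5, 6)])

dag = nx.condensation(g)                     # one node per SCC, always acyclic
order = list(nx.topological_sort(dag))       # O(V + E)
print([dag.nodes[c]["members"] for c in order])
```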
DOI: 10.26102/2310-6018/2025.50.3.037
This paper addresses the problem of improving the accuracy of determining the spectral characteristics of voice signals in audio recordings. To solve this problem, a modification of the classical Hamming window function is proposed by introducing an optimizable parameter. The study's relevance stems from the need to improve the reliability of voice recognition and identification systems, especially in the context of biometric applications and authentication tasks. The main objective is the development of an algorithm for calculating the optimal value of this parameter, maximizing the quality of spectral analysis for specific voice frequency ranges. To achieve this objective, the gradient descent method was used to optimize the parameter of the modified function. Quality assessment was performed based on a weighted sum of spectral characteristics (peak factor, spectral line width, signal-to-noise ratio). Experiments were conducted on test signals simulating male (200–400 Hz) and female (220–880 Hz) voices. The results showed that the proposed approach improves the accuracy of determining spectral components, especially in the male baritone range (up to 5.42 % improvement), by achieving clearer identification of fundamental frequencies and reducing side-lobe levels compared to the classical Hamming window. The study's conclusions indicate the potential of adapting window functions to specific frequency ranges of voice signals. The proposed algorithm can be used to improve the performance of biometric identification systems and other applications requiring accurate spectral analysis of voice.
Keywords: window function, Hamming window, spectral analysis, voice signal processing, parameter optimization, gradient descent, biometric identification, spectrum estimation accuracy, STFT
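A sketch of the parameterized window and a numerical gradient step: the classical Hamming window corresponds to alpha = 0.54, and the quality functional below (spectral peak factor) is a simplified stand-in for the paper's weighted sum of spectral characteristics.

```python
# Parameterized Hamming-type window optimized by numerical gradient ascent.
import numpy as np

fs, N = 8000, 1024
t = np.arange(N) / fs
signal = np.sin(2 * np.pi * 300 * t)          # tone in the male-voice range

def window(alpha, n=N):
    k = np.arange(n)
    return alpha - (1 - alpha) * np.cos(2 * np.pi * k / (n - 1))

def quality(alpha):
    spec = np.abs(np.fft.rfft(signal * window(alpha)))
    return spec.max() / np.sqrt(np.mean(spec**2))   # peak factor

alpha, lr, eps = 0.54, 1e-3, 1e-4
for _ in range(200):                          # maximize the quality functional
    grad = (quality(alpha + eps) - quality(alpha - eps)) / (2 * eps)
    alpha += lr * grad
print(f"optimized alpha = {alpha:.4f}, gain vs classical = "
      f"{quality(alpha) / quality(0.54) - 1:.2%}")
```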
DOI: 10.26102/2310-6018/2025.50.3.025
The study presents an integrated algorithm for evaluating and optimizing systems with heterogeneous data, taking into account managerial and organizational performance indicators. The proposed algorithm combines data envelopment analysis (DEA), fuzzy DEA, and a set of statistical methods for assessing the plausibility of the obtained results. The integrated algorithm identifies the most effective heterogeneous performance indicators, differs in its method of selecting reliable indicators, and allows strategies for improving organizational systems to be formulated. A set of 12 criteria was selected for verification of the integrated method. The results showed that the DEA results have a lower mean absolute percentage error (MAPE) than the fuzzy DEA results. The study also analyzes and weighs the indicators; "investments in research and development relative to production costs" and "investments in education and retraining per employee" proved the most effective. The algorithm handles data uncertainty thanks to its fuzzy inference mechanisms, and the indicator weights are determined using a set of reliable statistical procedures.
Keywords: integrated algorithm, heterogeneous data, data envelopment analysis, fuzzy logic, verification, statistical criterion, data mining, indicator weight
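The input-oriented CCR model, the classical DEA building block, can be sketched as a linear program with SciPy; the indicator data below are toy values, and the paper layers a fuzzy variant and statistical validation on top of such efficiency scores.

```python
# Input-oriented CCR efficiency of each decision-making unit via linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0, 6.0, 5.0],      # inputs:  rows = input kinds
              [3.0, 5.0, 7.0, 2.0]])
Y = np.array([[60.0, 70.0, 80.0, 55.0]]) # outputs: rows = output kinds

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-X[:, [j0]], X]                # X @ lam <= theta * x0
    A_out = np.c_[np.zeros((s, 1)), -Y]         # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                              # efficiency score theta*

print([round(ccr_efficiency(j), 3) for j in range(X.shape[1])])
```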
DOI: 10.26102/2310-6018/2025.50.3.041
The work is devoted to topical issues of the synthesis of human-machine interaction tools; within this framework, a model for interfacing graphical user interface (GUI) components based on algebraic logic methods is considered. GUI components are treated as components of open information systems with standardized interfaces that determine their spatial compatibility. To formalize GUI components, it is proposed to use semantic networks, while component compatibility is determined by logical inference rules presented in the form of Horn disjunctions. The description of the composite visual component "Named input field" is presented as a semantic network describing the spatial compatibility of its constituent indivisible components. An extension of the OpenAPI specification has been developed to unify and standardize the description of GUI components and to ensure the interoperability of tools for synthesizing screen forms and supporting UX testing. The article presents the results of synthesizing chains of geometric shapes that mimic GUI components; these can also be represented declaratively as semantic networks and, consequently, in the RDF format. In addition to the components themselves, the semantic networks include descriptions of filters that control the choice of ways to spatially interface GUI components.
Keywords: human-machine interaction, graphical user interface, specification, component, Horn disjunction
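The Horn-clause compatibility inference can be illustrated by a tiny forward-chaining loop; the component and relation names are invented for the example.

```python
# Each rule is a Horn clause: a body of facts implies a single head.
rules = [
    ({"right_of(label, field)", "same_row(label, field)"},
     "compatible(label, field)"),
    ({"compatible(label, field)"},
     "component(named_input_field)"),
]
facts = {"right_of(label, field)", "same_row(label, field)"}

changed = True
while changed:                       # saturate: apply rules until fixpoint
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print("component(named_input_field)" in facts)  # True: the pair can be fused
```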
DOI: 10.26102/2310-6018/2025.50.3.033
One of the significant areas of investment in civil aviation is air transportation subsidies. The article considers optimizing management decisions on distributing investment among airlines participating in the subsidized air transportation program, selecting routes so as to improve efficiency indicators under a limited investment resource. To formulate the optimization problem, continuous optimized variables that determine investment volumes and alternative variables corresponding to the choice of a specific transportation route are introduced. The initial data provided by the airlines are used to assess the fulfillment of extremal and boundary requirements for the subsidizing process. Each indicator on which these requirements are based is calculated from the parameters recorded in the initial data, depending on the values of the variables. In this case, the limited integrated resource condition must be split into two particular boundary conditions. The result is a multi-criteria constrained optimization problem defined on sets of continuous and alternative optimized variables. To solve it, a combination of an adaptive directed randomized search algorithm and a particle swarm algorithm is proposed. A computational experiment using the optimization approach is conducted and compared with actual data on air transport subsidies. The optimized variant of investment distribution and route selection achieves better performance indicator values than those actually attained.
Keywords: investment management, centralized control, air transport subsidies, airlines, optimization
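A minimal particle-swarm sketch for the continuous part of such a problem, distributing a fixed budget among airlines under a resource constraint; the objective, penalty and coefficients are illustrative, and the paper additionally combines PSO with adaptive randomized search and alternative route variables.

```python
# Toy PSO: maximize a concave efficiency function under a budget constraint.
import numpy as np

rng = np.random.default_rng(1)
n_air, budget = 5, 100.0
gain = np.array([1.2, 0.8, 1.5, 1.0, 0.9])      # toy efficiency per unit

def fitness(x):
    penalty = 1e3 * max(0.0, x.sum() - budget)  # limited integrated resource
    return (gain * np.sqrt(x)).sum() - penalty  # diminishing returns

n_part, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, budget / n_air, (n_part, n_air))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(200):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, budget)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print(gbest.round(2), round(fitness(gbest), 2))
```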
DOI: 10.26102/2310-6018/2025.50.3.026
Networks are widely used to represent the interactive relationships between individual elements in complex big data systems, such as the cloud-based Internet. Assignable causes in these systems can lead to a significant increase or decrease in the frequency of interaction within the corresponding network, making it possible to identify such causes by monitoring the level of interaction within the network. One method for detecting changes is to first create a network graph by drawing an edge between each pair of nodes that have interacted within a specified time interval. Topological characteristics of the graph, such as degree, closeness, and betweenness, can then be treated as one-dimensional or multidimensional data for online monitoring. However, existing statistical process control (SPC) methods for unweighted networks rarely take into account either the sparsity of the network or the direction of interaction between two network nodes, that is, pairwise interaction. By excluding inactive pairwise interactions, the proposed parameter estimation procedure achieves higher consistency at lower computational cost than the alternative approach when the networks are large-scale and sparse. The matrices developed on the basis of a matrix probabilistic model for describing directed pairwise interactions within time-independent, unweighted big data networks with cloud processing significantly simplify parameter estimation, whose effectiveness is increased by automatically eliminating pairwise interactions that do not actually occur. The proposed model is then integrated into a multidimensional distribution function for online monitoring of the level of communication in the network.
Keywords: cloud computing, big data, network status changes, real-time monitoring, unweighted networks, pairwise interaction, matrix probability model
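The monitoring idea can be sketched by computing topological statistics per time window and applying a control rule; the 3-sigma test below is a simple stand-in for the paper's multidimensional SPC scheme, and the graphs are synthetic.

```python
# Per-window graph statistics with a simple control-band alarm.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
seeds = rng.integers(0, 10**6, 30)
windows = [nx.gnp_random_graph(100, 0.05, seed=int(s)) for s in seeds]
windows.append(nx.gnp_random_graph(100, 0.12, seed=7))   # a shifted window

def stats(g):
    deg = np.mean([d for _, d in g.degree()])
    clo = np.mean(list(nx.closeness_centrality(g).values()))
    bet = np.mean(list(nx.betweenness_centrality(g).values()))
    return np.array([deg, clo, bet])

base = np.array([stats(g) for g in windows[:20]])         # phase I: in control
mu, sigma = base.mean(axis=0), base.std(axis=0)
for t, g in enumerate(windows[20:], start=20):
    z = (stats(g) - mu) / sigma
    if np.any(np.abs(z) > 3):
        print(f"window {t}: alarm, z = {z.round(1)}")
```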
DOI: 10.26102/2310-6018/2025.50.3.043
It is known that the use of Non-Orthogonal Multiple Access (NOMA) methods can improve the spectral efficiency and capacity of communication networks. However, in the presence of nonlinear distortions or synchronization issues, the orthogonality of user signals within a CDMA group is disrupted, leading to inter-channel interference and a reduction in interference immunity as the number of users increases. This must be taken into account when analyzing the interference immunity in broadband radio communication networks. The paper presents simulation results demonstrating the possibility of using orthogonal synchronous code division multiple access in combination with non-orthogonal multiple access, where the system's interference immunity is determined solely by the characteristics of NOMA. The influence of power distribution among users on the network's interference immunity, depending on their distance, is shown. For the analysis, mathematical models and MATLAB implementations were used, enabling the study of key system parameters, including bit error rate (BER), capacity, and power allocation strategies. The results demonstrate that the proposed approach allows for effective analysis and optimization of NOMA systems, taking into account the impact of nonlinear distortions and power distribution. Examples of calculations are provided, confirming the feasibility of using NOMA in broadband radio communication networks.
Keywords: non-orthogonal multiple access (NOMA), spectral efficiency, interference immunity, nonlinear distortions, power allocation, radio networks
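A numerical sketch of two-user downlink NOMA with BPSK over a common AWGN channel: the far user receives the larger power share, and the near user applies successive interference cancellation (SIC); the power split and SNR are example values, and the paper's MATLAB models additionally cover nonlinear distortions.

```python
# Two-user power-domain NOMA with SIC at the near user (BPSK, AWGN).
import numpy as np

rng = np.random.default_rng(0)
n, p_far, snr_db = 200_000, 0.8, 12.0
p_near = 1 - p_far
sigma = np.sqrt(1 / (2 * 10 ** (snr_db / 10)))

b_far = rng.integers(0, 2, n)
b_near = rng.integers(0, 2, n)
x = np.sqrt(p_far) * (2 * b_far - 1) + np.sqrt(p_near) * (2 * b_near - 1)
y = x + sigma * rng.normal(size=n)              # common AWGN channel

far_hat = (y > 0).astype(int)                   # far user: near signal = noise
y_sic = y - np.sqrt(p_far) * (2 * far_hat - 1)  # near user: cancel, then detect
near_hat = (y_sic > 0).astype(int)

print(f"BER far  = {np.mean(far_hat != b_far):.4f}")
print(f"BER near = {np.mean(near_hat != b_near):.4f}")
```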
DOI: 10.26102/2310-6018/2025.50.3.027
In the context of the increasing complexity of managing national projects aimed at achieving the National Development Goals of the Russian Federation, an urgent task is to automate the analysis of the relationships between the activities planned within these projects and the indicators that reflect the degree of achievement of the project's objectives. Traditional methods of manual document processing are characterized by high labor intensity, subjectivity and significant time costs, which necessitates the development of intelligent decision support systems. This article presents an approach to automating the analysis of the links and indicators of national projects that automatically detects and verifies semantic "activity-indicator" links in national project documents, significantly increasing the efficiency of analytical work. The approach is based on a Retrieval-Augmented Generation (RAG) system that combines a locally adapted language model with vector search technologies. The work demonstrates that integrating the RAG approach with vector search while taking the project ontology into account achieves the required accuracy and relevance of analysis. The system is particularly valuable not only for generating interpretable justifications for the identified links, but also for identifying key activities that affect the indicators of several national projects at once, including activities whose impact on these indicators is not obvious. The proposed solution opens up new opportunities for the digitalization of public administration and can be adapted to other tasks, such as identifying risks in the implementation of activities and generating new activities.
Keywords: RAG systems, large language models, national projects, semantic search, automation, national goals, artificial intelligence in public administration
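The retrieval backbone of such a system can be sketched with sentence-transformers and FAISS; the embedding model, documents and prompt are assumptions, and the generation step with the locally adapted LLM is only indicated.

```python
# Minimal RAG retrieval: embed fragments, retrieve, build a grounded prompt.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Activity: purchase of 50 ambulances for region N.",
    "Indicator: share of emergency calls served within 20 minutes.",
    "Activity: teacher retraining program in digital skills.",
]
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
emb = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])          # cosine sim via inner product
index.add(emb)

query = "Which activities influence the ambulance response-time indicator?"
q = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(q, 2)
context = "\n".join(docs[i] for i in ids[0])
prompt = (f"Using only this context, link activities to indicators:\n"
          f"{context}\n\nQ: {query}")
# The prompt is then passed to the local language model for a grounded answer.
print(prompt)
```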
DOI: 10.26102/2310-6018/2025.50.3.040
This paper presents a procedure for dynamically modifying the binary encoding scheme in a genetic algorithm (GA), enabling adaptive adjustment of the search space during the algorithm’s execution. In the proposed approach, the discretization step for each coordinate is updated from generation to generation based on the current boundaries of regions containing high-quality solutions and the density of individuals within them. For each such region, the number of bits in the binary string representing solutions is determined according to the number of encoded points, after which the discretization step is recalculated. The encoding scheme is restructured in a way that ensures the correctness of genetic operators in the presence of discontinuities in the search space, preserves the fixed cardinality of the solution set at each generation, and increases the precision of the solutions due to the dynamic adjustment of the discretization step. Experimental results on multimodal test functions such as Rastrigin and Styblinski–Tang demonstrate that the proposed GA modification progressively refines the search area during evolution, concentrating solutions around the global extrema. For the Rastrigin function, initially fragmented regions gradually focus around the global maximum. In the Styblinski–Tang case, the algorithm shifts the search from an intentionally incorrect initial area toward one of the global optima.
Keywords: adaptive encoding, genetic algorithm, discretization, multimodal optimization, search space
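The re-encoding step can be sketched as recomputing the bit length and discretization step from the current region boundaries and the number of points to encode there; the numbers below are illustrative.

```python
# Recompute bit length and step for a promising region, then decode.
import math

def encoding(lo, hi, n_points):
    n_bits = max(1, math.ceil(math.log2(n_points)))
    step = (hi - lo) / (2 ** n_bits - 1)          # new discretization step
    return n_bits, step

def decode(bits, lo, step):
    return lo + int(bits, 2) * step

# Generation k: coarse global range; generation k+1: a narrowed region
print(encoding(-5.12, 5.12, 1024))                # 10 bits, step ~0.01
n_bits, step = encoding(-1.0, 1.0, 1024)          # same 1024 points, finer step
print(n_bits, step, decode("1000000000", -1.0, step))
```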
DOI: 10.26102/2310-6018/2025.50.3.024
The growing volume of processed data and the widespread adoption of cloud technologies have made efficient task distribution in high-load computing systems a critical challenge in modern computer science. However, existing solutions often fail to account for resource heterogeneity, dynamic workload variations, and multi-objective optimization, leaving gaps in achieving optimal resource utilization. This study aims to address these limitations by proposing a hybrid load-balancing algorithm that combines the strengths of Artificial Bee Colony (ABC) and Max-Min scheduling strategies. The research employs simulation in the CloudSim environment to evaluate the algorithm’s performance under varying workload conditions (100 to 5000 tasks). Tasks are classified into "light" and "heavy" based on their MIPS requirements, with ABC handling lightweight tasks for rapid distribution and Max-Min managing resource-intensive tasks to minimize makespan. Comparative analysis against baseline algorithms (FCFS, SJF, Min-Min, Max-Min, PSO, and ABC) demonstrates the hybrid approach’s superior efficiency, particularly in large-scale and heterogeneous environments. Results show a 15–30% reduction in average task completion time at high loads (5000 tasks), confirming its adaptability and scalability. The study concludes that hybrid algorithms, integrating heuristic and metaheuristic techniques, offer a robust solution for dynamic cloud environments. The proposed method bridges the gap between responsiveness and strategic resource allocation, making it viable for real-world deployment in data centers and distributed systems. The practical significance of the work lies in increasing energy efficiency, reducing costs and ensuring quality of service (QoS) in cloud computing.
Keywords: cloud computing, scheduling, task allocation, virtual machines, hybrid algorithm, load balancing, optimization, CloudSim
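A sketch of the hybrid split on toy data: tasks are classified by MIPS demand, heavy ones scheduled by the Max-Min rule (largest pending task to the VM that would finish it earliest), light ones dispatched greedily as a simplified stand-in for the ABC phase; task and VM numbers are invented.

```python
# Hybrid light/heavy task dispatch with a Max-Min phase for heavy tasks.
import numpy as np

rng = np.random.default_rng(3)
tasks = rng.integers(100, 10_000, 200)            # task lengths, MI
vm_mips = np.array([500, 1000, 2000, 4000])       # VM speeds
ready = np.zeros(len(vm_mips))                    # per-VM ready time

threshold = np.median(tasks)
light = sorted(t for t in tasks if t <= threshold)
heavy = sorted((t for t in tasks if t > threshold), reverse=True)

for t in heavy:                                   # Max-Min for heavy tasks
    finish = ready + t / vm_mips
    ready[np.argmin(finish)] = finish.min()

for t in light:                                   # fast greedy for light tasks
    finish = ready + t / vm_mips
    ready[np.argmin(finish)] = finish.min()

print(f"makespan = {ready.max():.1f} s")
```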
DOI: 10.26102/2310-6018/2025.51.4.016
This article presents the development of an automatic longitudinal motion control system for vehicle platoons based on fuzzy logic methods. The relevance of the study stems from the growing need for efficient and safe solutions for freight transportation automation. The scientific novelty of the work lies in the development and verification of a control system implementing the leader – follower principle with a specialized fuzzy controller rule base, adapted for heavy-duty truck control (exemplified by the KAMAZ-65111) and implemented in software within numerical and visual modeling environments. Unlike universal approaches, the proposed rule base formalizes expert driving strategies while accounting for the control object's high inertia. The leader – follower system was implemented and tested in two distinct environments: mathematical modeling in MATLAB/Simulink and interactive 3D simulation in the Unity game engine. Comprehensive testing covered four driving scenarios: uniform motion, acceleration-braking, emergency braking, and off-road driving. Simulation results demonstrated high accuracy (distance root mean square error not exceeding 1.21 m) and safety (minimum distance exceeding 6.3 m in critical scenarios). The strong correlation of results between both platforms confirms the adequacy and robustness of the proposed model. The developed system demonstrates potential for use in autonomous vehicles and can be improved by implementing adaptive mechanisms for adjusting the fuzzy controller parameters. It is noted that the developed control system can be further improved through the use of hybrid neuro-fuzzy rules or the creation of intelligent traffic management systems.
Keywords: vehicle platoon, automatic control, leader – follower, fuzzy controller, MATLAB, Unity, KAMAZ-65111
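The follower's controller can be sketched with scikit-fuzzy; the universes, membership functions and three rules below are illustrative, not the paper's calibrated rule base for the KAMAZ-65111.

```python
# Fuzzy follower controller: distance error and relative speed -> acceleration.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

dist_err = ctrl.Antecedent(np.arange(-10, 10.1, 0.1), "dist_err")   # m
rel_v = ctrl.Antecedent(np.arange(-5, 5.1, 0.1), "rel_v")           # m/s
accel = ctrl.Consequent(np.arange(-3, 3.1, 0.1), "accel")           # m/s^2

for var in (dist_err, rel_v, accel):
    var.automf(3, names=["neg", "zero", "pos"])

rules = [
    ctrl.Rule(dist_err["neg"] | rel_v["neg"], accel["neg"]),  # closing: brake
    ctrl.Rule(dist_err["zero"] & rel_v["zero"], accel["zero"]),
    ctrl.Rule(dist_err["pos"] & rel_v["pos"], accel["pos"]),  # behind: speed up
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["dist_err"] = -2.0      # 2 m closer than the target gap
sim.input["rel_v"] = 0.5
sim.compute()
print(sim.output["accel"])
```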
DOI: 10.26102/2310-6018/2025.51.4.005
The acetylene hydrogenation process is an important step in the production of ethylene and other valuable chemical products. However, its effectiveness largely depends on the accuracy of control of technological parameters such as temperature, pressure and reagent consumption. Despite this, most research on acetylene hydrogenation focuses on improving the technological aspects of the process, while the development of modern information, measuring and control systems remains poorly studied. The study proposes an information-measuring and control system aimed at increasing the efficiency of the acetylene hydrogenation process. The system is based on a virtual analyzer that calculates the degree of conversion in real time from instrumentation data. The virtual analyzer model was optimized using a genetic algorithm, which ensured high calculation accuracy. Based on the virtual analyzer data, a control algorithm was developed that corrects process parameters to maintain optimal reaction conditions. The control system was implemented in the Centum VP environment, allowing it to be integrated into the existing automation infrastructure.
Keywords: ethylene production, acetylene hydrogenation, petrochemistry, control system, process automation
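The virtual-analyzer fitting step can be sketched as follows; SciPy's differential evolution stands in for the paper's genetic algorithm, and the conversion model form and plant data are invented.

```python
# Fit a soft-sensor conversion model to plant data with an evolutionary optimizer.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
T = rng.uniform(320, 380, 200)            # reactor temperature, K
H2 = rng.uniform(1.0, 2.5, 200)           # H2/C2H2 ratio
true = 1 / (1 + np.exp(-(0.08 * (T - 350) + 0.9 * (H2 - 1.5))))
conv = true + rng.normal(0, 0.02, 200)    # measured acetylene conversion

def model(p, T, H2):
    a, b, c = p
    return 1 / (1 + np.exp(-(a * (T - 350) + b * (H2 - 1.5) + c)))

def loss(p):
    return np.mean((model(p, T, H2) - conv) ** 2)

res = differential_evolution(loss, bounds=[(-1, 1), (-5, 5), (-2, 2)], seed=1)
print(res.x, res.fun)  # fitted parameters feed the real-time conversion estimate
```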
DOI: 10.26102/2310-6018/2025.50.3.035
The article explores modern methods for automatic detection of atypical (anomalous) musical events within a musical sequence, such as unexpected harmonic shifts, uncharacteristic intervals, rhythmic disruptions, or deviations from musical style, aimed at automating this process and optimizing specialists' working time. The task of anomaly detection is highly relevant in music analytics, digital restoration, generative music, and adaptive recommendation systems. The study employs both traditional features (Chroma Features, MFCC, Tempogram, RMS-energy, Spectral Contrast) and advanced sequence analysis techniques (self-similarity matrices, latent space embeddings). The source data consisted of diverse MIDI corpora and audio recordings from various genres, normalized to a unified frequency and temporal scale. Both supervised and unsupervised learning methods were tested, including clustering, autoencoders, neural network classifiers, and anomaly isolation algorithms (isolation forests). The results demonstrate that the most effective approach is a hybrid one that combines structural musical features with deep learning methods. The novelty of this research lies in a comprehensive comparison of traditional and neural network approaches for different types of anomalies on a unified dataset. Practical testing has shown the proposed method's potential for automatic music content monitoring systems and for improving the quality of music recommendations. Future work is planned to expand the research to multimodal musical data and real-time processing.
Keywords: musical sequence, anomaly, tempogram, musical style, MFCC, chroma, autoencoder, music anomaly detection
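The feature-plus-isolation-forest path can be sketched with librosa and scikit-learn; the audio file name and contamination rate are placeholders.

```python
# Frame-level chroma + MFCC features scored by an isolation forest.
import numpy as np
import librosa
from sklearn.ensemble import IsolationForest

y, sr = librosa.load("piece.wav", sr=22050)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
feats = np.vstack([mfcc, chroma]).T               # one row per frame

clf = IsolationForest(contamination=0.02, random_state=0).fit(feats)
scores = clf.score_samples(feats)                 # lower = more anomalous
frames = np.argsort(scores)[:10]
times = librosa.frames_to_time(frames, sr=sr)
print("candidate anomaly times (s):", np.sort(times).round(2))
```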
DOI: 10.26102/2310-6018/2025.50.3.029
The relevance of the study is due to the need to increase the efficiency of agent training under conditions of partial observability and limited interaction, which are typical of many real-world tasks in multiagent systems. The article is therefore aimed at developing and analyzing a hybrid approach to agent training that combines the advantages of gradient-based and evolutionary methods. The main method of the study is a modified Advantage Actor-Critic (A2C) algorithm supplemented with elements of evolutionary learning: crossover and mutation of neural network parameters. This approach allows for a comprehensive treatment of the problem of agent adaptation under limited observation and cooperative interaction. The article presents the results of experiments in an environment with two cooperative agents tasked with extracting and delivering resources. It is shown that the hybrid training method provides a significant increase in the effectiveness of agent behavior compared to purely gradient-based approaches. The dynamics of the average reward confirm the stability of the method and its potential for more complex multiagent interaction scenarios. The materials of the article have practical value for specialists in reinforcement learning, multiagent system development, and the design of adaptive cooperative strategies under limited information.
Keywords: reinforcement learning, evolutionary algorithms, multiagent system, A2C, LSTM, cooperative learning
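The evolutionary component can be sketched in PyTorch as uniform crossover and Gaussian mutation applied directly to two agents' network weights; the network shape and rates are illustrative.

```python
# Uniform crossover + Gaussian mutation over two policy networks' parameters.
import copy
import torch
import torch.nn as nn

def make_agent():
    return nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 4))

def crossover_mutate(parent_a, parent_b, p_cross=0.5, sigma=0.02):
    child = copy.deepcopy(parent_a)
    with torch.no_grad():
        for w_c, w_b in zip(child.parameters(), parent_b.parameters()):
            mask = torch.rand_like(w_c) < p_cross     # uniform crossover
            w_c[mask] = w_b[mask]
            w_c += sigma * torch.randn_like(w_c)      # Gaussian mutation
    return child

a, b = make_agent(), make_agent()
child = crossover_mutate(a, b)
# The child replaces the weaker parent, and A2C gradient updates continue
# from the new parameters.
```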
DOI: 10.26102/2310-6018/2025.50.3.039
The central role of the infosphere in network-centric control systems for groups of mobile cyber-physical systems makes it fundamentally important to ensure the functional reliability and survivability of information interaction systems. One factor in the functional reliability of information interaction systems is the structural reliability of data transmission systems. The work is devoted to constructing descriptive models of structural reliability indicators of mobile data transmission systems under destructive effects on network channels and nodes. Using simulation modeling, the influence of edge destruction in a random graph on network connectivity was studied as a function of the proportion of destroyed graph nodes. The features of the average values and stability of this indicator for different characteristics of random graphs are revealed. The influence of the mobility of cyber-physical devices in a "swarm" group on the structural reliability indicators (the complexity and unevenness of load distribution between the nodes of the data transmission system) is assessed. It is shown that using the ability of devices to move, a key resource of mobile groups of cyber-physical systems, is a way to counter destructive effects. As nodes move, the stability of the structural reliability indicators (structure complexity and unevenness of load distribution between network nodes) increases.
Keywords: network-centric control, mobile groups of cyber-physical devices, structural reliability of data transmission systems, descriptive models, destructive effects, countering destructive effects
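The destruction experiment can be sketched with networkx: remove a growing share of nodes from random graphs and track the relative size of the largest connected component; graph size and density are example values.

```python
# Node-destruction trials on random graphs, averaged over repetitions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
for frac in (0.1, 0.3, 0.5, 0.7):
    vals = []
    for _ in range(50):                                  # repeated trials
        g = nx.gnp_random_graph(200, 0.03, seed=int(rng.integers(10**6)))
        doomed = rng.choice(list(g.nodes), size=int(frac * 200), replace=False)
        g.remove_nodes_from(doomed)
        giant = max(nx.connected_components(g), key=len)
        vals.append(len(giant) / g.number_of_nodes())
    print(f"destroyed {frac:.0%}: connectivity {np.mean(vals):.2f} "
          f"+/- {np.std(vals):.2f}")
```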
DOI: 10.26102/2310-6018/2025.51.4.032
The relevance of the study is due to the need to improve the efficiency of vocational training programs under limited data and resources. Modern employment centers face the task of quickly and accurately identifying the risks of early participant attrition, which requires adapted analytical tools. The article proposes a Markov model of the educational process that, based on a minimal set of input data, predicts student trajectories and identifies key points for management intervention. Empirical testing of the model was carried out on data from the Lipetsk Employment Center (2024), which made it possible to estimate the probabilities of successful program completion, the risks of dropout, the average duration of involvement and the sensitivity to various types of interventions. A sensitivity analysis showed that investments in retaining active students provide a greater increase in efficiency than attempts to engage passive participants. The results are of practical value for professional retraining systems and can be used to increase program ROI by optimizing curatorial strategies and attendance rules. The introduction of such models contributes to a more rational distribution of resources, reduced losses and personalized trajectories, which is especially important in a dynamically changing labor market.
Keywords: Markov chains, educational process model, employment, training ROI, professional retraining, employment center
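The underlying absorbing-chain computation can be sketched with numpy: with transient states (enrolled, active, passive) and absorbing states (completed, dropped out), the fundamental matrix yields expected time in the program and completion probabilities; the transition values are illustrative, not the Lipetsk data.

```python
# Absorbing Markov chain: fundamental matrix and absorption probabilities.
import numpy as np

# Q: transient -> transient, R: transient -> absorbing
Q = np.array([[0.0, 0.7, 0.2],     # enrolled -> (enrolled, active, passive)
              [0.0, 0.6, 0.2],     # active
              [0.0, 0.1, 0.5]])    # passive
R = np.array([[0.0, 0.1],          # -> (completed, dropped out)
              [0.2, 0.0],
              [0.0, 0.4]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities
print("expected time in program (steps):", N.sum(axis=1).round(2))
print("P(complete), P(drop) from enrollment:", B[0].round(3))
```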
DOI: 10.26102/2310-6018/2025.50.3.028
Modern computer graphics offers many visual effects for processing three-dimensional scenes during rendering. The burden of calculating these effects falls on the user's hardware, which forces a compromise between performance and image quality. In this regard, the development of systems capable of automatically assessing the quality of three-dimensional rendering, and of images in general, becomes relevant. The relevance of this topic is expressed in two directions. First, the ability to predict user reactions will allow for more accurate customization of graphic applications. Second, understanding preferences can help in optimizing 3D scenes by identifying visual effects that can be disabled. In a broader sense, this also poses the challenge of optimally managing the rendering process so as to make maximum use of available hardware capabilities. An important task, therefore, is to model the 3D rendering process in a form that makes its optimization as simple as possible. The purpose of this study is to create such a model, which allows an expert evaluation stage to be performed so that the quality of three-dimensional rendering can be determined automatically and used for optimal control of the rendering pipeline. A number of important issues that require special attention in the research are also discussed. The range of applications of the developed system includes various spheres of human activity involving three-dimensional modeling. Such a system can become a useful tool for both developers and users, which is especially important in education, video game development, virtual reality technologies and other areas where realistic objects must be modeled or complex processes visualized.
Keywords: quadratic knapsack problem, multidimensional knapsack problem, artificial neural networks, three-dimensional rendering, user preference analysis, visual quality assessment, future technologies
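The effect-selection formulation suggested by the keywords can be illustrated with a plain 0/1 knapsack over a frame-time budget; all values below are invented, and the problem in the paper is quadratic (effects interact) and multidimensional.

```python
# 0/1 knapsack: pick visual effects maximizing quality within a time budget.
effects = [("bloom", 1.2, 3), ("ssao", 2.5, 7), ("shadows", 3.0, 9),
           ("motion_blur", 0.8, 2), ("reflections", 4.0, 8)]
budget_ms = 6.0
step = 0.1                                      # discretize cost to 0.1 ms
W = int(budget_ms / step)

best = [0.0] * (W + 1)
take = [set() for _ in range(W + 1)]
for name, cost, gain in effects:
    c = int(round(cost / step))
    for w in range(W, c - 1, -1):               # classic 0/1 knapsack sweep
        if best[w - c] + gain > best[w]:
            best[w] = best[w - c] + gain
            take[w] = take[w - c] | {name}

print(best[W], sorted(take[W]))                 # max quality within 6 ms
```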
DOI: 10.26102/2310-6018/2025.50.3.018
Based on system engineering principles, the technological aspects of designing a prototype electric vehicle with a combined control system are considered, which assumes simple and safe switching from manual mode to remote control (via a radio channel) or software control. The design and physical implementation of the vehicle rest on prototyping, machining and programming technologies that are interrelated throughout the entire design. The project is implemented on the basis of the Bigo.Land kit (in its mechanical and mechatronic parts) and ArduPilot/Pixhawk (in its software and hardware parts). The basic Bigo.Land kit is complemented by a two-way overrunning clutch, which, along with the software, allows the pilot to take part in the control process if necessary. The result of the work is a fully functional prototype of an electric vehicle with a sensing system and functions of unmanned control and autonomous behavior, as well as its virtual (CAD/CAE) model and software in the form of ArduPilot/Pixhawk flight controller firmware that extends and complements the standard functionality of the base ArduPilot software. The project and the results obtained can be useful to specialists developing and operating unmanned mobile vehicles, as well as to educational institutions implementing pedagogical technologies based on project-based learning.
Keywords: unmanned electric vehicle, technological process aspects of design, combined control, two-way overrunning clutch, prototyping, system engineering, project-based learning
DOI: 10.26102/2310-6018/2025.50.3.032
The relevance of the study is due to the growing need for a highly accurate and interpretable emotion recognition system based on video data, which is crucial for the development of human-centered technologies in education, medicine, and human–computer interaction systems. In this regard, the article aims to identify the differences and application prospects of the local DeepFace solution and the cloud-based GPT-4o (OpenAI) model for analyzing short video clips with emotional expressions. Methodologically, the study is based on empirical comparative analysis: a moving average method was used to smooth the time series of emotional assessments and to evaluate stability and cognitive interpretability. The results showed that DeepFace provides stable local processing and high resistance to artifacts, while GPT-4o demonstrates the ability for complex semantic interpretation and high sensitivity to context. The effectiveness of a hybrid approach combining computational autonomy and interpretative flexibility is substantiated. Thus, the synergy of local and cloud solutions opens up prospects for creating more accurate, adaptive, and scalable affective analysis systems. The materials of the article are of practical value to specialists in the fields of affective computing, interface design, and cognitive technologies.
Keywords: affective computing, emotion recognition, video data analysis, DeepFace, GPT-4o language model, hybrid analysis system, semantic text analysis, multimodal interaction, neural network interpretability, cognitive technologies
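The local path of the comparison can be sketched with the deepface package plus a moving average over per-frame scores; the video path, the tracked emotion and the window size are placeholders.

```python
# Per-frame DeepFace emotion scores smoothed by a moving average.
import cv2
import numpy as np
from deepface import DeepFace

cap = cv2.VideoCapture("clip.mp4")
happy = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    happy.append(res[0]["emotion"]["happy"])
cap.release()

w = 5                                            # moving-average window
smooth = np.convolve(happy, np.ones(w) / w, mode="valid")
print(f"raw std = {np.std(happy):.2f}, smoothed std = {np.std(smooth):.2f}")
```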
DOI: 10.26102/2310-6018/2025.50.3.023
The paper addresses wireless information transmission over radio channels. The key parameter of radio channel quality is the signal-to-noise ratio at the input of the receiving device, and the importance of ensuring a high signal-to-noise ratio in radio transmitting and receiving devices and systems is emphasized. An analytical review and comparative analysis of common methods for determining the signal-to-noise ratio at the receiver input is carried out. Theoretical and practical methods are considered, in particular the complex envelope method, the spectral analysis method and the free-space loss calculation method, and their advantages and disadvantages are revealed. The mathematical and methodological apparatus of the considered methods is described, together with a brief description of their algorithms for measuring the signal-to-noise ratio. Information about the conducted experimental studies, the initial data and the results of the experiment is provided. The results of a comparative analysis of the theoretical and practical methods are presented according to the criterion of accuracy in estimating the signal-to-noise ratio at the receiver input. The main factors that reduce the accuracy of the theoretical estimate compared with practical measurement are analyzed, and possible ways to increase the signal-to-noise ratio value in theoretical methods are proposed.
Keywords: wireless communication, radio signal, signal-to-noise ratio, complex envelope method, spectral analysis method, loss calculation method
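The loss-calculation method reduces to the free-space path loss formula plus a link budget; the sketch below uses example link parameters.

```python
# Free-space path loss and received SNR from link-budget terms.
import math

def fspl_db(d_m: float, f_hz: float) -> float:
    """Free-space path loss: 20 log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * d_m * f_hz / 3e8)

p_tx_dbm = 30.0          # transmitter power
g_tx_db, g_rx_db = 12.0, 12.0
f, d = 2.4e9, 5_000.0    # 2.4 GHz link over 5 km
noise_floor_dbm = -174 + 10 * math.log10(20e6) + 6   # kTB + noise figure

p_rx_dbm = p_tx_dbm + g_tx_db + g_rx_db - fspl_db(d, f)
print(f"FSPL = {fspl_db(d, f):.1f} dB, SNR = {p_rx_dbm - noise_floor_dbm:.1f} dB")
```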
DOI: 10.26102/2310-6018/2025.50.3.030
In this study, a new mechanism for generating training data for a neural network that performs image-based code generation is proposed. For a system to perform its task, it must be trained. The initial dataset supplied with the pix2code system allows the system to be trained, but it relies on the words of a fixed domain-specific dictionary: expanding or changing the dictionary does not affect the dataset in any way, which limits the flexibility of the system by preventing enterprise-specific rules from being taken into account. Some studies claim to have created their own datasets, but the lack of public access makes it difficult to assess the complexity of the images they contain. To solve this problem, a submodule was developed that, based on a modified dictionary of the domain-specific language, creates a custom training dataset consisting of image and source code pairs. To test the generated dataset, the modified pix2code system was trained on it and was then able to predict code on test examples.
Keywords: code generation, image, machine learning, dataset, source code
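The submodule's idea can be sketched as sampling a program from a small DSL vocabulary, rendering it to an image with PIL, and saving the pair in pix2code's .gui convention; the vocabulary, layout rules and renderer are simplified placeholders.

```python
# Generate (image, DSL source) training pairs from a toy vocabulary.
import random
from PIL import Image, ImageDraw

VOCAB = ["header", "btn-green", "btn-red", "text-block"]
COLORS = {"header": "navy", "btn-green": "green",
          "btn-red": "red", "text-block": "gray"}

def sample_code(n_rows=3, max_items=3):
    return [random.sample(VOCAB, random.randint(1, max_items))
            for _ in range(n_rows)]

def render(code, w=320, row_h=60):
    img = Image.new("RGB", (w, row_h * len(code)), "white")
    draw = ImageDraw.Draw(img)
    for r, row in enumerate(code):
        iw = w // len(row)
        for c, token in enumerate(row):
            box = (c * iw + 4, r * row_h + 4,
                   (c + 1) * iw - 4, (r + 1) * row_h - 4)
            draw.rectangle(box, fill=COLORS[token])
    return img

for i in range(5):                       # five (image, source) training pairs
    code = sample_code()
    render(code).save(f"sample_{i}.png")
    with open(f"sample_{i}.gui", "w") as f:
        f.write("\n".join(" ".join(row) for row in code))
```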