DOI: 10.26102/2310-6018/2025.50.3.027
In the context of the increasing complexity of managing national projects aimed at achieving the National Development Goals of the Russian Federation, automating the analysis of the relationships between the activities planned within these projects and the indicators that reflect the degree of achievement of the projects' objectives has become an urgent task. Traditional methods of manual document processing are characterized by high labor intensity, subjectivity, and significant time costs, which necessitates the development of intelligent decision support systems. This article presents an approach to automating the analysis of the links between activities and indicators of national projects that enables automatic detection and verification of semantic "event–indicator" links in national project documents, significantly increasing the efficiency of analytical work. The approach is based on a Retrieval-Augmented Generation (RAG) system that combines a locally adapted language model with vector search technologies. The work demonstrates that integrating the RAG approach with vector search, while taking the project ontology into account, achieves the required accuracy and relevance of analysis. The system is particularly valuable not only for generating interpretable justifications for the identified links, but also for identifying key events that affect the achievement of indicators across several national projects at once, including events whose impact on these indicators is not obvious. The proposed solution opens up new opportunities for the digitalization of public administration and can be adapted to other tasks, such as identifying risks in the implementation of events and generating new events.
Keywords: RAG systems, large language models, national projects, semantic search, automation, national goals, artificial intelligence in public administration
DOI: 10.26102/2310-6018/2025.50.3.040
This paper presents a procedure for dynamically modifying the binary encoding scheme in a genetic algorithm (GA), enabling adaptive adjustment of the search space during the algorithm’s execution. In the proposed approach, the discretization step for each coordinate is updated from generation to generation based on the current boundaries of regions containing high-quality solutions and the density of individuals within them. For each such region, the number of bits in the binary string representing solutions is determined according to the number of encoded points, after which the discretization step is recalculated. The encoding scheme is restructured in a way that ensures the correctness of genetic operators in the presence of discontinuities in the search space, preserves the fixed cardinality of the solution set at each generation, and increases the precision of the solutions due to the dynamic adjustment of the discretization step. Experimental results on multimodal test functions such as Rastrigin and Styblinski–Tang demonstrate that the proposed GA modification progressively refines the search area during evolution, concentrating solutions around the global extrema. For the Rastrigin function, initially fragmented regions gradually focus around the global maximum. In the Styblinski–Tang case, the algorithm shifts the search from an intentionally incorrect initial area toward one of the global optima.
Keywords: adaptive encoding, genetic algorithm, discretization, multimodal optimization, search space
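To make the re-encoding step concrete, the following minimal sketch (in Python, with hypothetical helper names) shows how the number of bits and the discretization step for one coordinate could be recalculated from a promising region's bounds and the number of points it must encode; the paper's actual procedure, which also handles discontinuous regions and preserves the fixed cardinality of the solution set, is not reproduced here.

```python
import math

def encoding_for_region(lower, upper, n_points):
    """Illustrative re-encoding of one coordinate's search region.

    Given the current region bounds and the number of points that must be
    representable inside it, choose the number of bits and the resulting
    discretization step (hypothetical helper, not the paper's exact procedure).
    """
    n_bits = max(1, math.ceil(math.log2(n_points)))   # bits needed to address n_points codes
    step = (upper - lower) / (2 ** n_bits - 1)        # recalculated discretization step
    return n_bits, step

def decode(bits, lower, step):
    """Map a binary string (list of 0/1) back to a real coordinate value."""
    value = int("".join(map(str, bits)), 2)
    return lower + value * step

# Example: a promising region [-1.2, 0.8] that must hold 50 candidate points.
n_bits, step = encoding_for_region(-1.2, 0.8, 50)
print(n_bits, step)                         # 6 bits, step of roughly 0.032
print(decode([1, 0, 1, 1, 0, 1], -1.2, step))
```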
DOI: 10.26102/2310-6018/2025.50.3.024
The growing volume of processed data and the widespread adoption of cloud technologies have made efficient task distribution in high-load computing systems a critical challenge in modern computer science. However, existing solutions often fail to account for resource heterogeneity, dynamic workload variations, and multi-objective optimization, leaving gaps in achieving optimal resource utilization. This study aims to address these limitations by proposing a hybrid load-balancing algorithm that combines the strengths of Artificial Bee Colony (ABC) and Max-Min scheduling strategies. The research employs simulation in the CloudSim environment to evaluate the algorithm’s performance under varying workload conditions (100 to 5000 tasks). Tasks are classified into "light" and "heavy" based on their MIPS requirements, with ABC handling lightweight tasks for rapid distribution and Max-Min managing resource-intensive tasks to minimize makespan. Comparative analysis against baseline algorithms (FCFS, SJF, Min-Min, Max-Min, PSO, and ABC) demonstrates the hybrid approach’s superior efficiency, particularly in large-scale and heterogeneous environments. Results show a 15–30% reduction in average task completion time at high loads (5000 tasks), confirming its adaptability and scalability. The study concludes that hybrid algorithms, integrating heuristic and metaheuristic techniques, offer a robust solution for dynamic cloud environments. The proposed method bridges the gap between responsiveness and strategic resource allocation, making it viable for real-world deployment in data centers and distributed systems. The practical significance of the work lies in increasing energy efficiency, reducing costs and ensuring quality of service (QoS) in cloud computing.
Keywords: cloud computing, scheduling, task allocation, virtual machines, hybrid algorithm, load balancing, optimization, CloudSim
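The task-splitting step described in the abstract can be illustrated with the following sketch; the MIPS threshold, the use of MIPS as a time proxy, and the simplified Max-Min loop are assumptions for illustration, and the ABC scheduler for light tasks is only indicated by a comment.

```python
def split_tasks(tasks, mips_threshold):
    """Split tasks into 'light' and 'heavy' groups by their MIPS demand.

    `tasks` is a list of dicts like {"id": 1, "mips": 1200}; the threshold
    value is an assumption chosen for illustration.
    """
    light = [t for t in tasks if t["mips"] <= mips_threshold]
    heavy = [t for t in tasks if t["mips"] > mips_threshold]
    return light, heavy

def max_min_assign(tasks, vm_finish_times):
    """Simplified Max-Min: repeatedly give the largest remaining task to the
    least-loaded VM (all VMs are assumed identical here)."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: t["mips"], reverse=True):
        vm = min(vm_finish_times, key=lambda v: vm_finish_times[v] + task["mips"])
        vm_finish_times[vm] += task["mips"]          # crude time proxy in MIPS units
        assignment[task["id"]] = vm
    return assignment

tasks = [{"id": i, "mips": m} for i, m in enumerate([300, 5000, 800, 7000, 150])]
light, heavy = split_tasks(tasks, mips_threshold=1000)
print(max_min_assign(heavy, {"vm0": 0.0, "vm1": 0.0}))
# the light tasks would be handed to the ABC scheduler in the full hybrid
```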
DOI: 10.26102/2310-6018/2025.51.4.016
This article presents the development of an automatic longitudinal motion control system for vehicle platoons based on fuzzy logic methods. The relevance of the study stems from the growing need for efficient and safe solutions for freight transportation automation. The scientific novelty of the work lies in the development and verification of a control system implementing the leader–follower principle with a specialized fuzzy controller rule base, adapted for heavy-duty truck control (exemplified by the KAMAZ-65111) and implemented in software within numerical and visual modeling environments. Unlike universal approaches, the proposed rule base formalizes expert driving strategies while accounting for the control object's high inertia. The leader–follower system was implemented and tested in two distinct environments: mathematical modeling in MATLAB/Simulink and interactive 3D simulation in the Unity game engine. Comprehensive testing covered four driving scenarios: uniform motion, acceleration-braking, emergency braking, and off-road driving. Simulation results demonstrated high accuracy (distance root mean square error not exceeding 1.21 m) and safety (minimum distance exceeding 6.3 m in critical scenarios). The strong correlation of results between both platforms confirms the adequacy and robustness of the proposed model. The developed system demonstrates potential for use in autonomous vehicles and can be improved by implementing adaptive mechanisms for adjusting the fuzzy controller parameters. It is noted that the developed control system can be further improved through the use of hybrid neuro-fuzzy rules or the creation of intelligent traffic management systems.
Keywords: vehicle platoon, automatic control, leader–follower, fuzzy controller, MATLAB, Unity, KAMAZ-65111
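A minimal scikit-fuzzy sketch of a leader–follower longitudinal controller of the kind described is given below; the variable ranges, membership functions, and three rules are illustrative and do not reproduce the paper's specialized rule base for the KAMAZ-65111.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Inputs: gap error (actual minus desired inter-vehicle distance, m) and
# relative speed (leader minus follower, m/s); output: throttle command.
gap_error = ctrl.Antecedent(np.arange(-10, 10.01, 0.1), 'gap_error')
rel_speed = ctrl.Antecedent(np.arange(-5, 5.01, 0.1), 'rel_speed')
throttle = ctrl.Consequent(np.arange(-1, 1.01, 0.01), 'throttle')

# Coarse triangular partitions; the real rule base would be far richer.
for var, lo, hi in ((gap_error, -10, 10), (rel_speed, -5, 5), (throttle, -1, 1)):
    var['negative'] = fuzz.trimf(var.universe, [lo, lo, 0])
    var['zero'] = fuzz.trimf(var.universe, [lo, 0, hi])
    var['positive'] = fuzz.trimf(var.universe, [0, hi, hi])

rules = [
    # too far behind or leader pulling away: accelerate
    ctrl.Rule(gap_error['positive'] | rel_speed['positive'], throttle['positive']),
    # on target: hold
    ctrl.Rule(gap_error['zero'] & rel_speed['zero'], throttle['zero']),
    # too close or closing in on the leader: brake
    ctrl.Rule(gap_error['negative'] | rel_speed['negative'], throttle['negative']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['gap_error'] = 3.0   # follower is 3 m farther back than desired
sim.input['rel_speed'] = -0.5  # follower is slightly faster than the leader
sim.compute()
print(sim.output['throttle'])
```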
DOI: 10.26102/2310-6018/2025.51.4.005
The acetylene hydrogenation process is an important step in the production of ethylene and other valuable chemical products. However, its effectiveness largely depends on the accuracy of control of technological parameters such as temperature, pressure, and reagent consumption. Despite this, most research on acetylene hydrogenation focuses on improving the technological aspects of the process, while the development of modern information-measuring and control systems remains underexplored. As part of the study, an information-measuring and control system was proposed to increase the efficiency of the acetylene hydrogenation process. The system is based on a virtual analyzer that calculates the degree of conversion in real time from instrumentation data. The virtual analyzer model was optimized using a genetic algorithm, which ensured high calculation accuracy. Based on the virtual analyzer data, a control algorithm was developed that corrects process parameters to maintain optimal reaction conditions. The control system was implemented in the Centum VP environment, enabling its integration into the existing automation infrastructure.
Keywords: ethylene production, acetylene hydrogenation, petrochemistry, control system, process automation
DOI: 10.26102/2310-6018/2025.50.3.035
The article explores modern methods for automatic detection of atypical (anomalous) musical events within a musical sequence, such as unexpected harmonic shifts, uncharacteristic intervals, rhythmic disruptions, or deviations from musical style, aimed at automating this process and optimizing specialists' working time. The task of anomaly detection is highly relevant in music analytics, digital restoration, generative music, and adaptive recommendation systems. The study employs both traditional features (Chroma Features, MFCC, Tempogram, RMS-energy, Spectral Contrast) and advanced sequence analysis techniques (self-similarity matrices, latent space embeddings). The source data consisted of diverse MIDI corpora and audio recordings from various genres, normalized to a unified frequency and temporal scale. Both supervised and unsupervised learning methods were tested, including clustering, autoencoders, neural network classifiers, and anomaly isolation algorithms (isolation forests). The results demonstrate that the most effective approach is a hybrid one that combines structural musical features with deep learning methods. The novelty of this research lies in a comprehensive comparison of traditional and neural network approaches for different types of anomalies on a unified dataset. Practical testing has shown the proposed method's potential for automatic music content monitoring systems and for improving the quality of music recommendations. Future work is planned to expand the research to multimodal musical data and real-time processing.
Keywords: musical sequence, anomaly, tempogram, musical style, MFCC, chroma, autoencoder, music anomaly detection
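The traditional features listed above can be extracted with librosa roughly as follows; the file name, sampling rate, and the way frame-level features are stacked are placeholders rather than the paper's exact pipeline.

```python
import librosa
import numpy as np

# Load an audio excerpt (path is a placeholder) and compute the features
# mentioned in the abstract; all calls use librosa defaults unless noted.
y, sr = librosa.load("fragment.wav", sr=22050)

chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # 12 x frames
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # 13 x frames
rms = librosa.feature.rms(y=y)                            # 1 x frames
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)  # bands x frames
tempogram = librosa.feature.tempogram(y=y, sr=sr)         # tempo lags x frames,
                                                          # usually summarized separately

# A simple frame-level feature matrix that an autoencoder or isolation
# forest could consume: one row per analysis frame.
features = np.vstack([chroma, mfcc, rms, contrast]).T
print(features.shape)
```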
DOI: 10.26102/2310-6018/2025.50.3.029
The relevance of the study is due to the need to increase the efficiency of agent training under conditions of partial observability and limited interaction, which are typical for many real-world tasks in multiagent systems. In this regard, the present article is aimed at the development and analysis of a hybrid approach to agent training that combines the advantages of gradient-based and evolutionary methods. The main method of the study is a modified Advantage Actor-Critic (A2C) algorithm, supplemented with elements of evolutionary learning — crossover and mutation of neural network parameters. This approach allows for a comprehensive consideration of the problem of agent adaptation in conditions of limited observation and cooperative interaction. The article presents the results of experiments in an environment with two cooperative agents tasked with extracting and delivering resources. It is shown that the hybrid training method provides a significant increase in the effectiveness of agent behavior compared to purely gradient-based approaches. The dynamics of the average reward confirm the stability of the method and its potential for more complex multiagent interaction scenarios. The materials of the article have practical value for specialists in the fields of reinforcement learning, multi-agent system development, and the design of adaptive cooperative strategies under limited information.
Keywords: reinforcement learning, evolutionary algorithms, multiagent system, A2C, LSTM, cooperative learning
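The evolutionary part of the hybrid scheme, crossover and mutation applied to network parameters, might look like the following sketch operating on flattened parameter vectors; the selection rule, mutation rates, and population size are assumptions, and the A2C gradient updates are assumed to run between evolutionary steps.

```python
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Uniform crossover over flattened network parameter vectors."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(params, rng, rate=0.05, sigma=0.02):
    """Gaussian mutation applied to a small fraction of the parameters."""
    mask = rng.random(params.shape) < rate
    return params + mask * rng.normal(0.0, sigma, size=params.shape)

def evolve(population, fitness, rng):
    """Keep the best half (by episode return) and refill the population
    with mutated offspring of randomly paired survivors."""
    order = np.argsort(fitness)[::-1]
    survivors = [population[i] for i in order[: len(population) // 2]]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = rng.choice(len(survivors), size=2, replace=False)
        children.append(mutate(crossover(survivors[a], survivors[b], rng), rng))
    return survivors + children

# Toy usage: 8 agents, each with a 1000-dimensional parameter vector.
rng = np.random.default_rng(0)
population = [rng.normal(size=1000) for _ in range(8)]
fitness = rng.random(8)            # stand-in for average episode rewards
population = evolve(population, fitness, rng)
```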
DOI: 10.26102/2310-6018/2025.50.3.039
The central role of the infosphere in network-centric control systems for groups of mobile cyber-physical systems determines the fundamental importance of ensuring the functional reliability and survivability of information interaction systems. One of the factors of the functional reliability of information interaction systems is the structural reliability of data transmission systems. The work is devoted to the construction of descriptive models of structural reliability indicators of mobile data transmission systems under destructive effects on network channels and nodes. Using simulation modeling, the influence of edge destruction in a random graph on network connectivity was studied as a function of the proportion of destroyed graph nodes. The features of the average values and the stability of this indicator for different characteristics of random graphs are revealed. The influence of the mobility of cyber-physical devices in a "swarm" group on the structural reliability indicators, namely the complexity and unevenness of load distribution between the nodes of the data transmission system, is assessed. It is shown that using such a resource of mobile groups of cyber-physical systems as the ability of devices to move is a way to counter destructive effects. As a result of node movement, the stability of the structural reliability indicators, namely the complexity of the structure and the unevenness of load distribution between network nodes, increases.
Keywords: network-centric control, mobile groups of cyber-physical devices, structural reliability of data transmission systems, descriptive models, destructive effects, countering destructive effects
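A simulation of the kind described, removing a growing fraction of nodes (and their incident edges) from random graphs and tracking connectivity, can be sketched with networkx as follows; the graph model, sizes, and the largest-component share as the connectivity indicator are illustrative assumptions.

```python
import networkx as nx
import random

def survival_curve(n_nodes, edge_prob, fractions, trials=50, seed=0):
    """For each fraction of destroyed nodes, estimate the mean share of
    surviving nodes that remain in the largest connected component."""
    rng = random.Random(seed)
    curve = []
    for frac in fractions:
        shares = []
        for _ in range(trials):
            g = nx.gnp_random_graph(n_nodes, edge_prob, seed=rng.randint(0, 10**9))
            doomed = rng.sample(list(g.nodes), int(frac * n_nodes))
            g.remove_nodes_from(doomed)          # nodes and their edges are destroyed
            if g.number_of_nodes() == 0:
                shares.append(0.0)
                continue
            giant = max(nx.connected_components(g), key=len)
            shares.append(len(giant) / g.number_of_nodes())
        curve.append(sum(shares) / trials)
    return curve

# Denser random graphs keep their connectivity longer as nodes are destroyed.
print(survival_curve(100, 0.05, [0.1, 0.3, 0.5, 0.7]))
print(survival_curve(100, 0.15, [0.1, 0.3, 0.5, 0.7]))
```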
DOI: 10.26102/2310-6018/2025.50.3.028
Modern computer graphics offers many different visual effects for processing three-dimensional scenes during rendering. The burden of calculating these graphic effects falls on the user's hardware, which leads to the need to compromise between performance and image quality. In this regard, the development of systems capable of automatically assessing the quality of three-dimensional rendering, and of images in general, becomes relevant. The relevance of this topic is expressed in two directions. First, the ability to predict user reactions will allow for more accurate customization of graphic applications. Second, understanding preferences can help in optimizing 3D scenes by identifying visual effects that can be disabled. In a broader sense, this also poses the challenge of optimally managing the rendering process so as to make maximum use of the available hardware capabilities. It therefore becomes important to model the 3D rendering process in a form that makes its optimization as straightforward as possible. The purpose of this study is to create such a model, allowing the expert evaluation stage to be performed automatically to determine the quality of three-dimensional rendering and to be used for optimal control of the rendering pipeline. A number of important issues that require special attention in the research are also discussed. The range of applications of the developed system includes various spheres of human activity involving three-dimensional modeling. Such a system can become a useful tool for both developers and users, which is especially important in education, video game development, virtual reality technologies, and other areas where it is necessary to model realistic objects or visualize complex processes.
Keywords: quadratic knapsack problem, multidimensional knapsack problem, artificial neural networks, three-dimensional rendering, user preference analysis, visual quality assessment, future technologies
DOI: 10.26102/2310-6018/2025.50.3.018
Based on system engineering principles, the technological aspects of designing a prototype electric vehicle with a combined control system are considered, which assumes the possibility of simple and safe switching from manual mode to remote (via radio channel) or software control. The design and physical implementation of the object are based on prototyping, machining, and programming technologies that are interrelated throughout the entire structure. The project is implemented on the basis of the Bigo.Land set (in its mechanical and mechatronic parts) and ArduPilot/Pixhawk (in its software and hardware parts). The basic Bigo.Land set is complemented by a two-way overrunning clutch, which, along with the software, allows the pilot to take part in the control process if necessary. The result of the work is a fully functional prototype of an electric vehicle with a sensing system and functions of unmanned control and autonomous behavior, as well as its virtual (CAD/CAE) model and software in the form of ArduPilot/Pixhawk flight controller firmware that extends and complements the standard functionality of the base ArduPilot software. The project and the results obtained can be useful to specialists developing and operating unmanned mobile vehicles, as well as to educational institutions implementing pedagogical technologies based on the project-based learning method.
Keywords: unmanned electric vehicle, technological process aspects of design, combined control, two-way overrunning clutch, prototyping, system engineering, project-based learning
DOI: 10.26102/2310-6018/2025.50.3.032
The relevance of the study is due to the growing need for a highly accurate and interpretable emotion recognition system based on video data, which is crucial for the development of human-centered technologies in education, medicine, and human–computer interaction systems. In this regard, the article aims to identify the differences and application prospects of the local DeepFace solution and the cloud-based GPT-4o (OpenAI) model for analyzing short video clips with emotional expressions. Methodologically, the study is based on empirical comparative analysis: a moving average method was used to smooth the time series of emotional assessments and to evaluate stability and cognitive interpretability. The results showed that DeepFace provides stable local processing and high resistance to artifacts, while GPT-4o demonstrates the ability for complex semantic interpretation and high sensitivity to context. The effectiveness of a hybrid approach combining computational autonomy and interpretative flexibility is substantiated. Thus, the synergy of local and cloud solutions opens up prospects for creating more accurate, adaptive, and scalable affective analysis systems. The materials of the article are of practical value to specialists in the fields of affective computing, interface design, and cognitive technologies.
Keywords: affective computing, emotion recognition, video data analysis, DeepFace, GPT-4o language model, hybrid analysis system, semantic text analysis, multimodal interaction, neural network interpretability, cognitive technologies
DOI: 10.26102/2310-6018/2025.50.3.023
The paper addresses the problem of wireless transmission of information via radio communication. It is indicated that the key parameter of radio channel quality is the signal-to-noise ratio at the input of the receiving device. The importance of ensuring a high signal-to-noise ratio in radio transmitting and receiving devices and systems is emphasized. An analytical review and comparative analysis of common methods for determining the signal-to-noise ratio at the input of the receiving device is carried out. Theoretical and practical methods for determining the signal-to-noise ratio are considered, in particular the complex envelope method, the spectral analysis method, and the free-space loss calculation method. Their advantages and disadvantages are revealed. The mathematical and methodological apparatus of the considered methods is described. A brief description of the algorithms for measuring the signal-to-noise ratio in these methods is given. Information about the conducted experimental studies of the methods is provided. The initial data and the results of the experiment are described. The results of a comparative analysis of theoretical and practical methods are presented according to the criterion of accuracy in estimating the signal-to-noise ratio at the input of the receiving device. The main reasons and factors that reduce the accuracy of the theoretical assessment of the signal-to-noise ratio compared with practical measurement are analyzed. Possible ways to increase the value of the signal-to-noise ratio in theoretical methods are proposed.
Keywords: wireless communication, radio signal, signal-to-noise ratio, complex envelope method, spectral analysis method, loss calculation method
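For reference, the free-space loss calculation mentioned above typically rests on the standard relation (whether the paper uses exactly this form is an assumption):

$$\mathrm{FSPL}\ [\mathrm{dB}] = 20\log_{10} d + 20\log_{10} f + 20\log_{10}\frac{4\pi}{c} \approx 20\log_{10} d_{\mathrm{km}} + 20\log_{10} f_{\mathrm{MHz}} + 32.44,$$

after which a theoretical estimate of the signal-to-noise ratio follows from the link budget $\mathrm{SNR} = P_{\mathrm{tx}} + G_{\mathrm{tx}} + G_{\mathrm{rx}} - \mathrm{FSPL} - P_{\mathrm{noise}}$, with all terms expressed in decibel units.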
DOI: 10.26102/2310-6018/2025.50.3.030
This study proposes a new mechanism for generating training data for a neural network that performs image-based code generation. For a system to perform the task assigned to it, it must be trained. The initial dataset provided with the pix2code system allows the system to be trained, but it relies on the data provided in the domain-specific dictionary. Expanding or changing words in the dictionary does not affect the dataset in any way, which limits the flexibility of the system by preventing enterprise-specific rules from being taken into account. Some studies claim to have created their own datasets, but the lack of public access makes it difficult to assess the complexity of the images they contain. To solve this problem, a submodule was developed within this study that creates, from a modified domain-specific-language dictionary, a custom training dataset consisting of pairs of an image and the source code corresponding to that image. To verify the created dataset, the modified pix2code system was trained on it and was then able to predict code for test examples.
Keywords: code generation, image, machine learning, dataset, source code
DOI: 10.26102/2310-6018/2025.50.3.014
This paper considers a method for increasing the search speed in hash tables with chaining when performance is limited by the throughput of one of the interfaces between the storage levels (L1, L2, L3 caches, main memory). To reduce the impact of this limitation, an algorithm for optimal use of the cache line size, the minimum portion of information transferred between storage levels, is proposed. The paper shows that, for a specific problem and architecture, there is an optimal size of the information about a key stored in the hash table (the key representation); equations are given for its numerical and approximate analytical calculation for the cases of a key present and absent in the table. The special case of using part of a key as its representation in the table is considered. An algorithm for working with inconvenient representation sizes that are not a power of two is proposed. The presented calculation results confirm the increase in search performance when using a calculated key representation size compared to other options. The presented experimental result confirms the assumption that the associated complication of the code has virtually no effect on performance due to partial processor idleness. The work assumes collision resolution via chains, but similar calculations should be applicable to other methods given their specific features.
Keywords: hash, hash-table, open addressing, chain, collision, memory level parallelism, cache, cache-line, cache miss
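The paper's equations are not reproduced here, but the existence of an optimal key representation size can be illustrated with a deliberately crude cost model: larger representations reduce false matches that force extra cache-line transfers for a full-key comparison, while fewer of them fit into a 64-byte line, so more lines must be scanned per chain. All quantities below are toy assumptions, not the paper's formulas.

```python
import math

def expected_lines(rep_bits, chain_len, line_bits=512):
    """Toy estimate of cache lines touched per lookup of a present key.

    Scanning the chain's packed representations costs
    ceil(chain_len * rep_bits / line_bits) line transfers; each false
    fingerprint match (probability about 2**-rep_bits per other element)
    costs one extra line to fetch and compare the full key, plus one line
    for the true match itself.
    """
    scan = math.ceil(chain_len * rep_bits / line_bits)
    false_matches = (chain_len - 1) * 2.0 ** (-rep_bits)
    return scan + false_matches + 1

chain_len = 16
for rep_bits in (4, 8, 12, 16, 32, 64):
    print(rep_bits, round(expected_lines(rep_bits, chain_len), 3))
# the estimate first falls (fewer false matches) and then rises again
# (more lines are needed to hold the larger representations)
```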
DOI: 10.26102/2310-6018/2025.51.4.003
This article analyzes the effect of the segmentation window size on the quality of classifying the type of physical exercise from smartphone accelerometer and gyroscope data. The article introduces and describes the HAR (Human Activity Recognition) task and refines it for classifying specific types of physical exercises: squats, push-ups, jumps, abs, and lunges. A review of existing datasets and approaches to solving problems of this class is carried out. The data collection method for the experiment was chosen, and the attachment point of the device with sensors was determined. A tool (mobile application) was developed to collect data from smartphone sensors such as the accelerometer and gyroscope. Using the developed tool, a proprietary dataset was collected under controlled conditions. The data obtained were processed following general recommendations for the HAR class of tasks (resampled to a common frequency, denoised, and segmented). On the resulting datasets, several models, both classical machine learning and deep neural networks, were trained with different data segmentation window sizes. As a result of the research, the best data segmentation window size was determined, as well as the classical machine learning and deep learning models that performed the task best.
Keywords: human activity analysis, machine learning, deep neural networks, data preprocessing methods, data collection, gyroscope, accelerometer
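The segmentation step whose window size is being studied can be sketched as follows; the 50 Hz sampling rate, 50 % overlap, and candidate window lengths are examples, not the values selected in the paper.

```python
import numpy as np

def segment(signal, window_size, overlap=0.5):
    """Cut a (n_samples, n_channels) sensor stream into fixed-length,
    partially overlapping windows for classification."""
    step = max(1, int(window_size * (1.0 - overlap)))
    windows = [
        signal[start:start + window_size]
        for start in range(0, len(signal) - window_size + 1, step)
    ]
    return np.stack(windows) if windows else np.empty((0, window_size, signal.shape[1]))

# 6 channels: accelerometer (x, y, z) + gyroscope (x, y, z), resampled to 50 Hz.
stream = np.random.randn(1000, 6)
for seconds in (1, 2, 3):                 # candidate window sizes being compared
    w = segment(stream, window_size=50 * seconds)
    print(seconds, "s ->", w.shape)       # (n_windows, samples, channels)
```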
DOI: 10.26102/2310-6018/2025.50.3.013
The paper proposes a new method for suppressing artifacts generated during image blending. The method is based on differential activation. The task of image blending arises in many applications; however, this work specifically addresses it from the perspective of face attribute editing. Existing artifact suppression approaches have significant limitations: they employ differential activation to localize editing regions followed by feature merging, which leads to loss of distinctive details (e.g., accessories, hairstyles) and degradation of background integrity. The state-of-the-art artifact suppression method utilizes an encoder-decoder architecture with hierarchical aggregation of StyleGAN2 generator feature maps and a decoder, resulting in texture distortion, excessive sharpening, and aliasing effects. We propose a method that combines traditional image processing algorithms with deep learning techniques. It integrates Poisson blending and the MAResU-Net neural network. Poisson blending is employed to create artifact-free fused images, while the MAResU-Net network learns to map artifact-contaminated images to clean versions. This forms a processing pipeline that converts images with blending artifacts into clean artifact-free outputs. On the first 1000 images of the CelebA-HQ database, the proposed method demonstrates superiority over the existing approach across five metrics: PSNR: +17.11 % (from 22.24 to 26.06), SSIM: +40.74 % (from 0.618 to 0.870), MAE: −34.09 % (from 0.0511 to 0.0338), LPIPS: −67.16 % (from 0.3268 to 0.1078), and FID: −48.14 % (from 27.53 to 14.69). The method achieves these results with 26.3 million parameters (6.6× fewer than the 174.2 million parameters of the comparable method) and 22 % faster processing speed. Crucially, it preserves accessory details, background elements, and skin textures that are typically lost in existing methods, confirming its practical value for real-world facial editing applications.
Keywords: deep learning, facial attribute editing, blending artifact suppression network, image-to-image translation, differential activation, MAResU-Net, generative adversarial network (GAN)
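The Poisson-blending stage of the described pipeline can be sketched with OpenCV's seamlessClone; the file names and mask handling are placeholders, and the MAResU-Net mapping trained on the resulting image pairs is not shown.

```python
import cv2
import numpy as np

# Source region (e.g., the edited face area), target image, and a binary
# mask selecting the pixels to transplant; file names are placeholders.
source = cv2.imread("edited_region.png")
target = cv2.imread("original_face.png")
mask = cv2.imread("region_mask.png", cv2.IMREAD_GRAYSCALE)

# Blend the source into the target around the mask's center of mass using
# Poisson (gradient-domain) blending; NORMAL_CLONE keeps source gradients.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))
blended = cv2.seamlessClone(source, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("blended.png", blended)

# Pairs of (artifact-contaminated image, `blended`) would then form the
# training data for the network that learns to remove blending artifacts.
```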
DOI: 10.26102/2310-6018/2025.50.3.010
The relevance of this study is driven by the rapid growth of unstructured textual data in the digital environment and the pressing need for its systematic analysis. The lack of universal and easily reproducible methods for grouping textual information complicates interpretation and limits practical application across various domains, including healthcare, education, marketing, and the corporate sector. In response to this challenge, the present article aims to identify key algorithmic approaches to clustering unstructured texts and to analyze software systems implementing these methods. The primary research strategy is based on a comparative and analytical approach that enables the generalization and classification of contemporary machine learning algorithms applied to text data processing. The study reviews both traditional clustering techniques and advanced architectures incorporating unsupervised learning, numerical vector representations, and neural network models. Software tools are examined with a focus on their levels of accuracy, interpretability, and adaptability. As a result, the study systematizes criteria for selecting methods according to specific tasks, highlights limitations of existing approaches, and outlines promising directions for further development. The findings are intended to support professionals engaged in designing and deploying software solutions for the automatic processing and analysis of textual information.
Keywords: text clustering, unstructured data, topic modeling, machine learning, vector representations, unsupervised algorithms, software frameworks, text mining
DOI: 10.26102/2310-6018/2025.50.3.012
In recent years, the development of virtual reality (VR) technologies has been largely associated with the introduction of machine learning (ML) methods. The use of ML methods is aimed at increasing the level of comfort, efficiency and effectiveness of VR. ML algorithms can analyze interaction data, recognize patterns and adapt interaction scenarios based on the user's behavior and emotional state. The article analyzes the key modern areas of joint use of VR and ML, which have already been tested in practice and have shown fairly high efficiency. One of these areas is improving interaction in VR, including improving the quality of VR systems, more realistic graphics, adapting content to the user and accurate tracking of movements. The article considers the problems of using ML in VR technologies in the field of education, psychotherapy, rehabilitation, medicine, traffic management, in technologies for the creation, transmission, distribution, storage and use of electricity and other areas. A brief analysis of ML tools used in VR is also provided, among which generative neural networks can be distinguished that can create dynamic virtual environments. The study shows that the combination of VR and ML opens up new possibilities for creating intelligent and interactive systems and can lead to significant breakthroughs not only in VR but also in related technology areas.
Keywords: virtual reality technologies, machine learning, machine learning efficiency, adaptive algorithms, education, medicine, rehabilitation
DOI: 10.26102/2310-6018/2025.49.2.049
This article presents an optimization procedure for a project represented as a network graph. The idea of the optimization is to make all paths from the initial event to the final one critical by transferring resources from non-critical work with a non-zero free reserve to critical work on some critical path. Assuming that the dependence of the duration of a work on the resources allocated for its execution is linear, formulas for the new work durations and the new critical time are obtained. The reallocation of resources reduces the duration of some work but makes the project more tense. To evaluate a project with the new work durations, a tension coefficient was introduced for each work as the intensity of use of the generalized project resource per unit of time. In the course of optimization, these characteristics behave differently; therefore, a generalized characteristic of project tension is introduced based on aggregating the particular work characteristics using the "fuzzy majority" principle. Note that well-known weighted averages can be used to aggregate the partial estimates, while, for example, the method of paired comparisons can be used to determine the weights. The article provides an illustrative example demonstrating the operation of the proposed approach.
Keywords: network graph, critical path, resource, optimization, tension coefficient, aggregation
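In hedged, illustrative notation (the paper's own formulas and coefficients are not reproduced), the linear dependence of a work's duration on its allocated resource and the tension coefficient described above can be written as

$$t_{ij}(r_{ij}) = a_{ij} - b_{ij}\, r_{ij}, \qquad t'_{ij} = t_{ij} - b_{ij}\,\Delta r, \qquad t'_{pq} = t_{pq} + b_{pq}\,\Delta r,$$

for a critical work $(i,j)$ receiving an amount $\Delta r$ of the generalized resource from a non-critical work $(p,q)$, with the tension coefficient of a work taken as the resource consumed per unit of its new duration, $k_{ij} = r_{ij}/t'_{ij}$.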
DOI: 10.26102/2310-6018/2025.50.3.009
This study is devoted to assessing the quality of Russian-language annotations generated by a multi-agent system for time series analysis. The system includes four specialized agents: a dashboard analyst, a time series analyst, a domain-specific agent, and an agent for user interaction. Annotations are generated by analyzing dashboard and time series data using the GPT-4o-mini model and a task graph implemented with LangGraph. The quality of the annotations was assessed using the metrics of clarity, readability, contextual relevance, and literacy, as well as an adapted Flesch readability index formula for the Russian language. A user test was developed and conducted with 21 participants on 10 dashboards, giving a total of 210 ratings on a ten-point scale for each metric. The results confirmed the effectiveness of the annotations: clarity 8.486, readability 8.705, contextual relevance 8.890, literacy 8.724. The readability index was 33.6, which indicates medium text complexity. This indicator is related to the specifics of the research area and takes into account only static length measures, not word order or context. An adult non-specialist in the given field is able to understand the complex words in the annotations, as the other ratings confirm. All comments left by users will be taken into account to improve the format and interactivity of the system in further research.
Keywords: time series, annotation generation, LLM, multi-agent system, dashboards
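A sketch of the adapted readability computation is shown below; the coefficients 206.835, 1.3, and 60.1 correspond to a widely used Russian adaptation of the Flesch formula (Oborneva), and both their use in the paper and the crude vowel-count syllable estimate are assumptions.

```python
import re

RUSSIAN_VOWELS = "аеёиоуыэюя"

def flesch_russian(text):
    """Flesch reading-ease score with coefficients commonly used for Russian
    (Oborneva's adaptation): 206.835 - 1.3*ASL - 60.1*ASW, where ASL is the
    average sentence length in words and ASW is syllables per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[а-яёА-ЯЁ]+", text)
    syllables = sum(sum(ch in RUSSIAN_VOWELS for ch in w.lower()) for w in words)
    asl = len(words) / max(1, len(sentences))
    asw = syllables / max(1, len(words))
    return 206.835 - 1.3 * asl - 60.1 * asw

print(round(flesch_russian("Выручка за квартал выросла на двенадцать процентов. "
                           "Основной вклад внесли региональные продажи."), 1))
```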
DOI: 10.26102/2310-6018/2025.50.3.022
The relevance of this study stems from the rapid rise in inflation, fueled by significant wage growth in some sectors of the economy, and from inflationary expectations, which together weigh heavily on society as a whole. The goal is to determine the level of GDP that will ensure stability in the country's economy and in the lives of its citizens for a long time. The article presents a study of the macroeconomic model of the Goodwin business cycle that includes a small parameter, with the aim of predicting the dynamics of changes in vital economic indicators. For its analysis, a method of dynamical systems theory, A. Poincaré's method of normal forms, was used. It is shown that such a model can have a stable cycle in the vicinity of the state of economic equilibrium. Asymptotic formulas for calculating the periodic solutions are obtained. The quantitative size of the limit cycle, which reflects the periodic processes occurring in the Goodwin economic system, has been determined from the input parameters. The stability of these processes has been proven. The results of the study clearly illustrate that the desired sustainable cyclical pattern of economic development, which allows the state to develop effectively, does not occur in all cases. In addition, it is also quite difficult to draw conclusions about the scope of this cycle from a practical point of view. If it does occur, however, it becomes possible to make long-term forecasts regarding development and the level of the main economic indicators that this development will ensure.
Keywords: dynamic systems, Goodwin economic system, small parameter method, limit cycle, stability
DOI: 10.26102/2310-6018/2025.51.4.006
Breast cancer remains one of the leading causes of death among women worldwide, and microcalcifications, small calcium deposits that appear as bright point structures on mammograms, play a key role in the early detection of malignant neoplasms. Despite significant progress in the field of computer-aided analysis of medical images, accurate automatic classification of microcalcifications remains a challenge due to the high variability of their morphology and visual features. In this paper, we propose a novel hybrid model combining the ResNet-34 architecture supplemented with a convolutional block attention module (CBAM) and a support vector machine (SVM) classifier with a radial basis function kernel. The attention module allows us to highlight the most informative spatial regions and feature channels, while the SVM provides high generalization ability even with a limited amount of data. Experiments on the CBIS-DDSM dataset showed that the proposed approach outperforms both the standard ResNet-34 and its hybrid with SVM in accuracy, sensitivity, specificity, and noise robustness. The proposed model achieves 97.47 % accuracy, 96.56 % sensitivity, and 95.17 % specificity, while ResNet-34 achieves 91.63 %, 92.80 %, and 92.87 %, and ResNet-34 with SVM achieves 96.75 %, 94.10 %, and 95.20 %, respectively.
Keywords: breast cancer, microcalcifications, deep learning, machine learning, hybrid model, CNN, ResNet-34-SVM
DOI: 10.26102/2310-6018/2025.49.2.038
The article considers the feasibility of currency integration in the BRICS format, as well as the optimality of BRICS as a currency zone. In the course of the study, calculations were made using the optimality formula for a currency zone. This model makes it possible to analyze the ratio of macroeconomic indicators of pairs of countries and to find the average optimality coefficient of the entire association for currency integration. In addition, the research provides additional economic and geopolitical criteria, which are used to check the relevance of the primary calculations performed with the optimal currency zone model. Correlation of labor markets, the ratio of investment attractiveness levels, correlation of business and financial cycles, inflationary convergence, and geopolitical risks all have a direct or indirect impact on the success of integration. The data obtained after calculation and verification using the additional criteria reflect the real degree of readiness of BRICS to create a single currency, as well as the predisposition of individual countries to economic integration. The purpose of the article is not to discredit the BRICS programs, but to provide a scientific approach to the analysis of one of the initiatives repeatedly promoted at BRICS summits. Currency integration in the BRICS format is a complex, multifaceted process that requires enormous time and resource expenditures from all member states of the association. This state of affairs runs counter to individual calls and statements made by politicians of the BRICS states, which may somewhat distort the idea of the subject of the study, currency integration in the BRICS format, in the eyes of the public.
Keywords: currency zone, currency integration, optimality, BRICS, criterion, economy, single currency, potential
DOI: 10.26102/2310-6018/2025.50.3.016
The article examines a conceptual approach to creating and utilizing a digital twin of stage space, which enables the implementation of higher-level control methods through synchronization with the physical space, employing automation of stage processes and their intelligent analysis. A model of stage space is proposed, encompassing static stage objects, dynamic actors, and controllable equipment, as well as intermediate software and hardware interaction systems. Based on this model, a method for constructing a digital twin is introduced, relying on bidirectional real-time synchronization between the model and the automation object. Potential applications of the resulting hardware-software system are discussed, focusing on the development of new methods for managing stage equipment and integrating immersive technologies into the stage environment. The architecture and process of developing a digital twin and a control system based on it are described. New control methods based on intelligent data analysis are proposed, including automated targeting of lighting fixtures, scene switching via triggers, and the integration of augmented reality technologies. These methods significantly streamline control processes and enhance the immersiveness of events.
Keywords: digital twin, simulation, control systems, lighting equipment, stage, theater lighting, augmented reality, cyber-physical system, intelligent control, digital transformation
DOI: 10.26102/2310-6018/2025.50.3.021
Keywords: stratified model, production management, multi-level evaluation of results, optimal resource allocation, optimal control
DOI: 10.26102/2310-6018/2025.50.3.007
With the increasing number of incidents involving the unauthorized use of unmanned aerial vehicles (UAVs), the development of effective methods for their automatic detection has become increasingly relevant. This article provides a concise overview of current approaches to UAV detection, with particular emphasis on acoustic monitoring methods, which offer several advantages over radio-frequency and visual systems. The main acoustic features used for recognizing drone sound signals are examined, along with techniques for extracting these features using open-source libraries such as Librosa and Essentia. To evaluate the effectiveness of various features, a balanced dataset was compiled and utilized, containing audio recordings of drones and background noise. A multi-stage feature selection methodology was tested using the Feature-engine library, including the removal of constant and duplicate features, correlation analysis, and feature importance assessment. As a result, a subset of 53 acoustic features was obtained, providing a balance between UAV detection accuracy and computational cost. The mathematical foundations of spectral feature extraction are described, including different types of spectrograms (mel-, bark-, and gammatone-spectrograms), as well as vector and scalar acoustic features. The results presented can be used to develop automatic UAV acoustic detection systems based on machine learning methods.
Keywords: unmanned aerial vehicle, acoustic signals, acoustic features, spectral analysis, machine learning
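The multi-stage selection with Feature-engine can be sketched as follows; the quasi-constant and correlation thresholds, the random-forest importance step, and the final cut to 53 features are illustrative assumptions rather than the paper's exact settings.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from feature_engine.selection import (
    DropConstantFeatures,
    DropDuplicateFeatures,
    DropCorrelatedFeatures,
)

def select_features(X: pd.DataFrame, y: pd.Series, top_k=53):
    """Multi-stage selection: drop quasi-constant and duplicate columns,
    collapse strongly correlated groups, then keep the top_k features by
    random-forest importance (all thresholds are illustrative)."""
    X = DropConstantFeatures(tol=0.98).fit_transform(X)       # quasi-constant columns
    X = DropDuplicateFeatures().fit_transform(X)               # identical columns
    X = DropCorrelatedFeatures(threshold=0.9).fit_transform(X) # highly correlated columns
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    ranking = pd.Series(forest.feature_importances_, index=X.columns)
    return ranking.sort_values(ascending=False).head(top_k).index.tolist()

# Usage: X is a table of acoustic features per audio frame/clip,
# y labels each row as "drone" or "background noise".
```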
DOI: 10.26102/2310-6018/2025.49.2.048
Oil spills pose a serious threat to marine ecosystems, causing long-lasting environmental and economic consequences. To minimize damage, it is critically important to effectively limit the spread of pollution. One of the most common means of combating oil spills is booms: floating barriers that localize the spill area and increase the efficiency of subsequent cleanup. However, the effectiveness of such barriers depends not only on the materials used, but also on their geometric configuration. In this regard, the task of minimizing the length of boom needed to enclose a given spill area becomes urgent. In this paper, this problem is formulated as an isoperimetric optimization problem in the class of polygons. The problem of maximizing the area bounded by a polygon with a fixed perimeter and a fixed segment (for example, a section of shore) is investigated, provided that the boundary is a broken line rather than a smooth curve. It is proved that the optimal shape is achieved when the polygon is regular, that is, its sides and angles are equal. The results obtained can be used in the design of more efficient boom placement systems, contributing to lower material costs and improved environmental safety.
Keywords: isoperimetric problem, shape optimization, booms, oil spill, mathematical modeling, geometric optimization
DOI: 10.26102/2310-6018/2025.50.3.011
This paper is devoted to the problem of optimizing a quantum key distribution (QKD) network by combining an initial set of end nodes into small access networks with a star topology using clustering algorithms. The study presents a modified version of the k-medoids algorithm that takes into account the constraint on the maximum quantum link length between a pair of nodes. A new non-Euclidean metric for link quality assessment is also presented, based on the quantum capacity calculated from the physical properties and length of the optical fiber link. The performance of the presented algorithm was then compared using two metrics: the Euclidean norm and the proposed assessment metric. A series of experiments was conducted to solve the clustering problem for multiple sets of nodes randomly distributed on the plane. It was found that applying the presented non-Euclidean metric reduces the number of clusters by 11.7% compared to the Euclidean norm, and that using multiple attempts at each iteration can improve the result by more than 20%. The clustering method and the new metric presented in this paper make it possible to reduce the number of subnets, lowering the cost of organizing central nodes, and also make it possible to subsequently solve the simplified problem of building a backbone network that combines the obtained subnets into a single QKD network.
Keywords: quantum key distribution, mathematical modeling, clustering, k-medoids algorithm, software package
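The modified assignment rule can be illustrated with a compact constrained k-medoids sketch; for brevity the distance here is Euclidean, whereas the paper's metric is based on the quantum capacity of the fiber link, and the handling of unassignable nodes is an assumption.

```python
import numpy as np

def constrained_k_medoids(points, k, max_len, n_iter=100, seed=0):
    """k-medoids in which a point may only be assigned to a medoid whose
    distance does not exceed max_len; unassignable points are reported so
    the caller can increase k. Euclidean distance is used for brevity."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(n_iter):
        d = dist[:, medoids]
        d = np.where(d <= max_len, d, np.inf)          # forbid over-long quantum links
        labels = d.argmin(axis=1)
        unassigned = np.isinf(d.min(axis=1))
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where((labels == c) & ~unassigned)[0]
            if len(members):                            # medoid minimizing intra-cluster cost
                new_medoids[c] = members[dist[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels, unassigned

points = np.random.default_rng(1).uniform(0, 100, size=(60, 2))
medoids, labels, unassigned = constrained_k_medoids(points, k=5, max_len=40.0)
print(len(medoids), int(unassigned.sum()), "nodes outside any allowed cluster")
```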
DOI: 10.26102/2310-6018/2025.49.2.032
The article considers the problem of designing a system for operational short-term forecasting of wind speed at a specific point on the coast. An automated approach to designing hybrid machine learning models that combine an ensemble of multilayer neural networks and an interpretable system based on fuzzy logic is proposed. The method is based on the automated formation of an ensemble of neural networks and a fuzzy logic system using self-configuring evolutionary algorithms, which allows adapting to the features of the input data without manual tuning. After constructing the neural network ensemble, a separate fuzzy logic system is formed, learning from the ensemble's inputs and outputs. This approach allows reproducing the behavior of the neural network model in an interpretable form. Experimental testing on a meteorological dataset proves the effectiveness of the method, which ensures a balance between forecast quality and model interpretability. It is shown that the constructed interpretable system reproduces the key patterns of the neural network ensemble while remaining compact and understandable for analysis. The constructed model can be used in decision-making in port services and in organizing coastal events for quick and easy forecasting. The proposed approach as a whole makes it possible to obtain analogous models in other situations similar to the one considered.
Keywords: operational forecasting of wind characteristics, ensembles of neural networks, fuzzy logic systems, decision trees, self-configuring evolutionary algorithms
DOI: 10.26102/2310-6018/2025.50.3.002
Modern digital radio communication systems impose stringent requirements on energy and spectral efficiency under the influence of various types of interference, particularly in challenging radio wave propagation conditions. Consequently, the investigation of existing methods for operating in radio channels with fading, as well as the development of new approaches to address this challenge, remains highly relevant. The primary objective of this study is to investigate diversity reception techniques aimed at enhancing signal robustness against fading. The study examines approaches to combining known diversity methods and proposes a new modified spatial reception method. The methodology employed includes a comparative analysis of various combinations of spatial diversity reception techniques within an adaptive feedback system, based on simulations conducted in the MATLAB environment to evaluate the impact of different fading types on data transmission in a channel with feedback. The novelty of this work lies in the proposed diversity method, which involves signal combining through optimal summation in diversity reception, performed only on a selected subset of receiving antennas. This subset is determined based on channel state estimation results, as summing signals from all receiving antennas is deemed unnecessary and significantly increases complexity when the received signal quality is already high. The results demonstrate that the proposed solution offers advantages over the conventional optimal summation method by reducing computational complexity, as signal summation is limited to a portion of the receiving antennas rather than all of them. The proposed solution is particularly suitable for applications requiring simultaneous optimization of both energy efficiency and spectral efficiency in digital radio systems. Its relevance becomes especially pronounced under degraded reception conditions caused by environmental factors inducing severe fading effects.
Keywords: diversity reception, selection combining, equal gain combining, maximal ratio combining, adaptive system with feedback, error-control coding, fading channel
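The proposed partial combining can be sketched in a few lines of numpy: estimate per-antenna SNR, keep only the best antennas, and apply maximal-ratio combining to that subset; the Rayleigh channel model, subset size, and noise figures are illustrative assumptions.

```python
import numpy as np

def subset_mrc(received, channel, noise_var, n_selected):
    """Combine only the n_selected antennas with the highest estimated
    per-antenna SNR, using maximal-ratio combining weights conj(h)."""
    snr = np.abs(channel) ** 2 / noise_var                 # per-antenna SNR estimates
    chosen = np.argsort(snr)[::-1][:n_selected]            # best antennas first
    weights = np.conj(channel[chosen])
    combined = (weights[:, None] * received[chosen]).sum(axis=0)
    scale = (np.abs(channel[chosen]) ** 2).sum()
    return combined / scale, chosen

rng = np.random.default_rng(0)
n_rx, n_symbols = 8, 4
symbols = rng.choice([1 + 0j, -1 + 0j], size=n_symbols)    # BPSK symbols
h = (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx)) / np.sqrt(2)  # Rayleigh fading
noise = 0.1 * (rng.normal(size=(n_rx, n_symbols)) + 1j * rng.normal(size=(n_rx, n_symbols)))
received = h[:, None] * symbols[None, :] + noise

estimate, used = subset_mrc(received, h, noise_var=0.02, n_selected=4)
print(used, np.sign(estimate.real))     # indices of combined antennas, detected BPSK symbols
```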