DOI: 10.26102/2310-6018/2025.48.1.042
The article analyzes the field of planetary gearbox production and identifies problems that arise at enterprises during manufacturing. As a solution, the development of a mivar expert system is proposed, whose task is to monitor the progress of gearbox production, support decision-making, and notify enterprise employees in a timely manner about errors and deviations. The relevance of the work stems from the need to increase automation in gearbox production. Decision-making is based on a mivar knowledge base, compiled by formalizing the stages and parameters of the gearbox production process. The result of the work is a mivar expert system that supports decision-making by personnel at an enterprise producing planetary gearboxes. The materials of the article are of practical value for specialists in the automation of production processes, as well as for managers and engineers seeking to improve management efficiency and optimize production processes. The scientific novelty of the work lies in substantiating the feasibility of using mivar expert systems to automate production processes related to gearbox assembly, testing, and warehouse storage. The system can serve as a basis for further development and research on integrating intelligent technologies into production processes.
Keywords: mivar, gearboxes, production of gearboxes, mivar expert system, knowledge base, wi!Mi, MESD, razumator, big knowledge
DOI: 10.26102/2310-6018/2025.49.2.006
The study of urban traffic flow optimization is especially relevant under current conditions of rapid urbanization and growth in the number of vehicles. Effective traffic flow management makes it possible not only to reduce traffic jams and congestion, but also to improve the environmental situation in cities, shorten travel time for drivers and passengers, and improve road safety. This paper focuses on methods of traffic flow modeling, using a signal-controlled intersection as an example. The authors propose a method for modeling traffic flow based on Petri nets with time constraints. The presented analysis of a computational experiment using the proposed model demonstrates its effectiveness in predicting traffic flows and identifying bottlenecks. The authors propose the structure and functioning rules of the Petri net elements, which makes it possible to adapt the model to the specific conditions of a given intersection. The materials of the paper are of considerable practical value for solving traffic flow optimization problems at signal-controlled intersections. The proposed methods and models can be used by urban planners and engineers to develop more effective traffic management strategies, which ultimately contributes to improving the quality of life in cities and reducing traffic congestion. Thus, this study makes an important contribution to the theory and practice of traffic flow management, offering new tools and approaches for solving current urban mobility problems.
Keywords: road traffic, controlled intersection, petri net, time restrictions, mesoscopic model
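The timed-Petri-net mechanics described above can be sketched in a few lines. This is a minimal illustrative model (place names, delays, and the single-transition intersection are invented for the example, not taken from the paper):

```python
# Minimal timed Petri net sketch: places hold tokens, transitions fire
# after a fixed delay once enabled. Names and structure are illustrative.

class TimedPetriNet:
    def __init__(self):
        self.marking = {}          # place -> token count
        self.transitions = {}      # name -> (inputs, outputs, delay)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions[name] = (inputs, outputs, delay)

    def enabled(self, name):
        inputs, _, _ = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items())

    def fire(self, name, clock):
        """Fire a transition; returns the time at which its tokens appear."""
        inputs, outputs, delay = self.transitions[name]
        assert self.enabled(name)
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] += n
        return clock + delay

# Toy model of one signalized approach: cars queue, the "green" transition
# serves one car per firing with a 2-second service delay.
net = TimedPetriNet()
net.add_place("queue", tokens=3)
net.add_place("passed", tokens=0)
net.add_transition("green", inputs={"queue": 1}, outputs={"passed": 1}, delay=2)

t = 0
while net.enabled("green"):
    t = net.fire("green", t)
print(net.marking, t)   # all three cars served after 6 seconds
```

Adapting such a model to a specific intersection then amounts to choosing the places, transitions, and delays to match its lanes and signal phases.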
DOI: 10.26102/2310-6018/2025.49.2.003
The paper proposes an optimization approach to managing the interaction of personnel with automated devices in the information environment of a digitalized organizational system for customer order fulfillment, using the results of simulation modeling. To build a simulation model of the queueing system, the interaction of personnel with automated devices is treated as the interaction of non-ergatic and ergatic elements in a human-machine environment. The logical scheme of transforming the flow of consumer requests through local aggregators and service channels, including non-ergatic and ergatic elements, when transferring digital order data to manufacturers for material realization, is substantiated. It is proposed to vary the service intensity not by choosing which parallel channels are involved, but directly through the value of the total intensity, which depends on the number of non-ergatic and ergatic elements. An optimization model is formed whose optimizable variables are the intensity values. The requirement of maximizing system performance is treated as the extremal requirement, while the requirements of maintaining an acceptable level of costs and probability of erroneous actions act as boundary requirements. Transitions are carried out from the constrained optimization problem to an equivalent unconstrained one. An algorithm is proposed for making managerial decisions on the number of interacting non-ergatic and ergatic elements by building an iterative search process that uses the optimization model within simulation modeling of the digitalized organizational system of customer order fulfillment.
Keywords: digitalized organizational system, management, human-machine environment, simulation modeling, optimization
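The transition from the constrained problem (maximize performance subject to cost and error bounds) to an unconstrained one is commonly done with a penalty term. A toy sketch under assumed objective and cost shapes; none of the functions below are the authors':

```python
# Sketch of the constrained-to-unconstrained transition via a penalty
# function. Objective and constraint shapes are illustrative assumptions.

def performance(mu):
    # Toy throughput: grows with total service intensity, saturates.
    return mu / (1.0 + 0.1 * mu)

def cost(mu):
    return 2.0 * mu          # toy linear cost of intensity

def penalized_objective(mu, budget=10.0, weight=100.0):
    # Maximize performance; pay a quadratic penalty when cost exceeds budget.
    violation = max(0.0, cost(mu) - budget)
    return performance(mu) - weight * violation ** 2

# Simple grid search over the intensity (stand-in for the iterative search).
best_mu = max((mu * 0.01 for mu in range(1, 2001)),
              key=penalized_objective)
print(round(best_mu, 2))  # 5.0: the cost bound is active
```

In an actual iterative search, each candidate intensity would be evaluated by running the simulation model rather than a closed-form `performance` function.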
DOI: 10.26102/2310-6018/2025.48.1.037
The paper studies the problem of optimizing real-time control systems described within the actor model. The optimization problem is formulated as optimal configuration of the control cycle, i.e., distribution of functional element-actors across groups, threads, and execution sequence. We propose a configuration algorithm which, although it does not reduce the number of analyzed configuration variants, reduces the amount of computation for each variant. In addition to the optimization variants with a limit on the total cycle time and with a limit on control system resources, considered in the authors' previous works, the paper addresses the problem of reducing the number of input and output ports through which the element-actors exchange data. The research shows that the number of ports can be reduced without compromising the functionality of the control system. This is due to the sequential execution of element-actors within one group of one thread. As a result, the same input or output ports can be used for communication between one element-actor and several others. In addition to comparing different control loop configurations, the problem of reducing the number of ports can also be addressed by using shared memory for element-actor communication. When the control system is built on a memory-oriented architecture, small amounts of data are transferred through high-speed shared memory, which mitigates the problem of queue formation.
Keywords: control system, actor model, loop, optimization, configuration, portability, memory-oriented architecture
DOI: 10.26102/2310-6018/2025.48.1.039
The relevance of the study stems from the fact that, under conditions of intense competition for qualified personnel, research organizations seek to attract and retain talented employees. Effective motivation systems based on objective performance assessment are becoming an important tool for achieving this goal. Intelligent systems can provide management with data-driven analytical reports and recommendations, which contributes to more informed decision-making in employee motivation and management. Accordingly, this article is aimed at developing an intelligent system for assessing the performance of employees in research organizations, a powerful tool for analyzing and managing human capital. The expert method is based on involving qualified specialists with deep knowledge and experience in the relevant field, which increases the objectivity and reliability of the assessment results. The article describes the advantages and disadvantages of this approach. The work also proposes using a machine learning method to assess the performance of researchers based on key performance indicators. The main indicators selected for assessing labor activity are scientific and educational activity, scientific work, and presentation of results. The materials presented in the article will be relevant and useful for the heads of scientific and research organizations.
Keywords: productivity of work activities, expert assessment method, machine learning, innovation, artificial intelligence, data modeling, researchers
DOI: 10.26102/2310-6018/2025.48.1.035
The paper presents the structuring of a regional organizational system and its management at the model level, using long-term statistical information for intelligent decision support. The first structural model makes it possible to assess the nature of the interaction between the control center and the components of the organizational system based on the arrays of statistical accounting information in use. Population groups and territorial entities of the region transfer data in the form of time series. The structural model of intelligent decision support by the control center is a component of the structure of the resource distribution management system. For its effective use as a basis for integrating the results of predictive analysis into management decision-making based on optimization modeling, it is proposed to implement two-level intellectualization subsystems. An algorithmic scheme has been developed that provides two-level intellectualization in management decision-making, combining visual and predictive analysis modules so that machine-learned predictive models can subsequently be used in expert assessment and optimization modeling.
Keywords: regional organizational system, management, statistical accounting, predictive analysis, forecasting, optimization
DOI: 10.26102/2310-6018/2025.48.1.031
This paper considers the problem of assessing data confidentiality when using modules that block access to mobile applications. Messengers on the iOS 17 platform were selected as an example. The relevance of the study stems from the need to increase the level of protection of user data in the face of growing threats to information security. The main goal is to obtain a numerical estimate; its achievement is shown through a comparative analysis of the data confidentiality provided by the application-blocking tools of VK, Telegram, and WhatsApp. To achieve the goal, methods of set-theoretic analysis and expert assessment were used. Key parameters for ensuring confidentiality (type and length of the lock code, use of biometrics, auto-lock time, etc.) were identified and normalized to the range [0, 10]. The final score was calculated as the sum of the partial values for each application. The results showed that Telegram provides the highest level of confidentiality due to the ability to use more complex lock codes and stricter security settings. VK is inferior to Telegram in a number of parameters but demonstrates better results than WhatsApp, unless all parameters are forcibly disabled. The findings can be used to improve data protection mechanisms in mobile applications, and the proposed methodology can serve as a basis for further research in information security.
Keywords: data privacy, access blocking, PIN lock, privacy assessment, messenger security, personal data, set-theoretic analysis, application auto-locking, notification content hiding, user data protection
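The scoring scheme described above — parameters normalized to [0, 10] and summed into a final score — can be sketched as follows; the parameter set, normalization bounds, and sample values are invented for illustration:

```python
# Illustrative scoring in the spirit of the paper: each privacy parameter
# is normalized to [0, 10], and the final score is the sum of partial
# values. Parameter names and bounds are assumptions, not the authors' data.

def normalize(value, worst, best):
    """Map a raw parameter value onto [0, 10] linearly, clipped."""
    span = best - worst
    return max(0.0, min(10.0, 10.0 * (value - worst) / span))

def privacy_score(params):
    partial = {
        # Longer lock codes score higher (0..8 digits assumed).
        "code_length": normalize(params["code_length"], 0, 8),
        # Biometrics either enabled or not.
        "biometrics":  10.0 if params["biometrics"] else 0.0,
        # Shorter auto-lock time scores higher (0..60 s assumed).
        "autolock_s":  normalize(60 - params["autolock_s"], 0, 60),
    }
    return sum(partial.values()), partial

score, parts = privacy_score(
    {"code_length": 6, "biometrics": True, "autolock_s": 15})
print(round(score, 1))  # 25.0 out of a possible 30
```

Comparing such sums across applications gives the kind of numerical ranking the paper derives for VK, Telegram, and WhatsApp.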
DOI: 10.26102/2310-6018/2025.48.1.030
This study assesses the feasibility of building a system for executing functional tests in the task of generating source code from an image. There are many metrics for assessing the quality of text predicted by a neural network: mathematical ones, such as BLEU and ROUGE, and those that use another model for evaluation, such as BERTScore and BLEURT. However, the difficulty with generating source code is that the code is a set of instructions for performing a specific task. The relevance lies in the fact that publications related to the pix2code system make no mention of an automated test environment that can check whether the resulting code meets the specified conditions. In the course of the work, a subsystem was implemented that can automatically obtain information about the differences between an image rendered from the predicted code and an image rendered from the reference code. The results of this system are also compared with the BLEU metric. The experiment leads to the conclusion that the BLEU value and test outcomes have no obvious relationship, which means that tests are necessary as an additional check of the model's effectiveness.
Keywords: code generation, image, machine learning, BLEU, functional tests
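The core idea — render the predicted and reference code and compare the resulting images — can be illustrated with a toy pixel-level diff (the grids and the pass threshold are invented, not the paper's subsystem):

```python
# Minimal sketch of "render both codes, compare the images": the rendered
# UIs are stand-in pixel grids, and the check reports the fraction of
# differing pixels. Everything here is illustrative.

def pixel_diff(img_a, img_b):
    """Fraction of pixels that differ between two equal-size grids."""
    total = diff = 0
    for row_a, row_b in zip(img_a, img_b):
        for pa, pb in zip(row_a, row_b):
            total += 1
            diff += (pa != pb)
    return diff / total

reference = [[0, 0, 1],
             [0, 1, 1]]
predicted = [[0, 0, 1],
             [0, 0, 1]]   # one widget missing -> one pixel off

mismatch = pixel_diff(reference, predicted)
print(mismatch)            # 1 of 6 pixels differs
assert mismatch < 0.2      # the functional test would pass here
```

Such a rendered-image check is independent of token-level similarity, which is why its verdicts need not correlate with BLEU.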
DOI: 10.26102/2310-6018/2025.48.1.038
The theory of discrete optimization plays a crucial role in solving graph theory problems such as the Steiner tree problem. It is widely applied in transportation infrastructure, logistics, and communication network design. Since the problem is NP-hard, heuristic methods such as genetic algorithms and artificial neural networks are often required. To solve the Steiner tree problem, a graph neural network (GNN) was selected. The GNN architecture involves iterative feature updates using information from neighboring nodes, allowing it to model complex dependencies in graphs. A message-passing neural network (MPNN) mechanism is employed for information aggregation, updating node states based on data from adjacent nodes and edges. The model is trained on graphs generated using the Mehlhorn heuristic algorithm. Experiments show that the GNN performs well on graphs similar to the training data but suffers a significant drop in precision and recall as the input graph size increases. This decline is likely due to the limitations of the MPNN mechanism, which aggregates information only from neighboring nodes within a limited range. Graph neural networks demonstrate strong potential for small- and medium-scale graph problems, particularly in analyzing complex systems such as wireless networks, where node interconnections are critical. However, as graph size increases, performance deteriorates, highlighting the need for improvements in aggregation and optimization algorithms.
Keywords: steiner tree problem, graph neural networks, graph theory, artificial neural networks, mehlhorn algorithm
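The MPNN aggregation step mentioned above can be sketched in plain Python: each node gathers its neighbors' features and mixes them into its own state. The update rule and toy graph are illustrative, not the trained model from the paper:

```python
# One round of MPNN-style message passing on a toy graph: each node's
# feature vector is updated from the sum of its neighbors' features.

def mpnn_step(features, adjacency):
    """features: node -> vector; adjacency: node -> list of neighbors."""
    updated = {}
    for node, own in features.items():
        msg = [0.0] * len(own)
        for nb in adjacency[node]:              # aggregate neighbor info
            for i, x in enumerate(features[nb]):
                msg[i] += x
        # Illustrative update rule: average of own state and the message.
        updated[node] = [(o + m) / 2 for o, m in zip(own, msg)]
    return updated

# Path graph a - b - c: information travels one hop per round, which is
# exactly the limited range the abstract blames for poor scaling.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
feats = {"a": [1.0], "b": [0.0], "c": [2.0]}
print(mpnn_step(feats, adj))
```

After one step, node `b` already reflects both endpoints, but `a` knows nothing of `c`; on large graphs, distant structure stays invisible unless many rounds are run.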
DOI: 10.26102/2310-6018/2025.48.1.028
This paper presents an analysis of the impact of ventricular assist device (VAD) pump geometry on hemolytic performance. The relevance of the study is driven by the need to improve existing pumps, design new ones, and address the lack of research on the correlation between pump geometry and hemolysis. The prototype is an axial four-blade ventricular assist pump currently used in clinical practice. For the analysis, hydrodynamic modelling of fluid flow in the pump was performed using the finite volume method in OpenFOAM 11. The numerical simulations were carried out using the MRF and NonConformalCoupling technologies along with the LowRe k-ω SST turbulence model. It was found that reducing the outer diameter, increasing the hub skew angle, and increasing the hub diameter lead to a lower total hemolysis index at a flow rate of 2.4 L/min; similarly, increasing the hub skew angle and reducing the outer diameter decrease the total hemolysis index at a flow rate of 5.4 L/min. The findings provide practical value for the design and modernization of axial pumps in ventricular assist devices.
Keywords: pump, computational fluid dynamics, ventricular assist device, hemolysis index, hemolysis performance, openFOAM, finite volume method
DOI: 10.26102/2310-6018/2025.48.1.036
The article presents the results of developing an experimental setup for electrical impedance tomography of the lungs of newborns, including a sealed simplified physical model of the neonatal mediastinum and a neonatal electrode system. The setup consists of seven main devices that allow simulating conditions close to clinical ones. The sealed simplified physical model of the mediastinum (phantom) is designed to accommodate a newborn doll and is manufactured using 3D printing technologies. It is equipped with a system for controlled air filling of the lung areas, as well as drainage for removing excess conductive medium from the phantom. Three rows of electrodes are provided, making it possible to conduct experiments simulating global and regional ventilation with different placements of the electrode system (row). A neonatal electrode system with metal electrodes on a flexible fabric base was developed and manufactured for use as part of the setup; the elastic base allows it to conform to the electrode placement on the phantom. Experimental studies performed with a ventilator at respiratory volumes from 2 to 60 ml confirmed the operability of the stand as a whole, namely the sensitivity of the phantom and neonatal electrode system to changes in air volumes and to neonatal ventilation modes. The developed solutions allow research and testing of new algorithms and methods in electrical impedance tomography of neonatal lungs, as well as use in diagnosing disorders of external respiration in neonatology.
Keywords: experimental stand, mediastinum, model, electrical impedance tomography, newborns, lungs
DOI: 10.26102/2310-6018/2025.48.1.032
The article discusses the current state of augmented reality technologies and the prospects for their global application. Creating global distributed augmented reality networks requires common approaches and standards, a task that needs methodological support. The relevance of the research stems from the active development of XR technologies and the need for a simple and universal platform for exchanging immersive content. The authors propose methods for classifying metaspaces and metaspace objects. Based on the classification and the requirements for metaspaces, a storage model and a basic technology for creating a markup language for metaspace objects are proposed. The proposed scheme is functionally independent and will allow the markup language to be extended with new components. A model for storing metaspace objects is proposed, whose main task is to ensure potential extensibility. The results obtained can serve as a basis for developing a markup language for metaspace objects and browser-interpreters for various wearable devices. The similarity of the markup and of the approaches to displaying content will allow the development process to reuse part of the classic Internet infrastructure, such as the server side. The proposed classification will also allow wearable devices to be divided into functional categories, thereby determining their capabilities in interpreting metaspaces.
Keywords: extended reality, metaspace, augmented reality, geo-oriented AR internet, immersive technologies, XR, metaspace object model, ERML
DOI: 10.26102/2310-6018/2025.48.1.027
The introduction of artificial intelligence (AI) technologies into medical practice requires a thorough assessment of their effectiveness, especially for systems operating autonomously. The method proposed in this study is based on a synthesis of the requirements of national standards in the field of medical AI, developed by experts of the Scientific and Practical Clinical Center for Diagnostics and Telemedicine Technologies, with data obtained as part of the "Moscow Experiment" on the introduction of innovative technologies. Testing was carried out on three AI software products used to analyze fluorographic studies from January to May 2023. The evaluation included an analysis of algorithm accuracy (sensitivity, specificity), effectiveness in real clinical conditions, and a comparative analysis of the results with a quantitative interpretation of the data. The evaluation emphasized ensuring a high level of diagnostic sensitivity for the AI system, which would relieve doctors of routine, monotonous work in mass preventive studies. The developed method demonstrated the possibility of a comprehensive assessment of autonomous AI systems, identifying differences in product effectiveness across key metrics. The proposed method makes it possible to systematize the validation of medical AI solutions, minimizing the risks of their incorrect use in autonomous operation. The results of the study can be used to standardize the assessment of AI tools in radiology and other areas of medicine that require a high level of diagnostic reliability.
Keywords: artificial intelligence in medicine, autonomous diagnostic systems, efficiency assessment, radiation diagnostics, radiology
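The two accuracy metrics named in the evaluation can be computed directly from a confusion matrix; the counts in the example are invented:

```python
# Sensitivity and specificity from confusion-matrix counts.

def sensitivity(tp, fn):
    return tp / (tp + fn)      # share of actual positives detected

def specificity(tn, fp):
    return tn / (tn + fp)      # share of actual negatives cleared

# Toy confusion matrix for a screening AI: 90 true positives,
# 10 missed cases, 900 true negatives, 100 false alarms.
print(sensitivity(90, 10), specificity(900, 100))  # 0.9 0.9
```

For mass preventive screening, the abstract's emphasis on high sensitivity means the first ratio is the one that must stay close to 1, even at the cost of some extra false alarms.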
DOI: 10.26102/2310-6018/2025.48.1.034
The article uses AB analysis to identify markers of metabolic arginine-dependent aging among clinical and diagnostic parameters in a group of 32 patients aged 29 to 89 years (14 men and 18 women) who underwent geroprophylactic treatment with L-arginine. Before and after the intervention, each patient's biological age was determined from functional data using age- and sex-dependent models; the difference between calendar and biological age was then calculated, and the change in this difference before and after the intervention (the exposure delta) was estimated. The sample was divided into two subgroups according to the magnitude of the exposure delta: the first comprised patients with a rejuvenation effect, the second patients with accelerated aging or without significant changes in the exposure delta. The AB analysis was performed on the clinical and diagnostic parameters of the first and second subgroups before the intervention, using a combined technique based on both statistical parameters and bootstrap methods. The choice of AB analysis method was determined by the distribution properties of the clinical parameter on which the subgroups were compared. The results showed a reliable, statistically significant difference between the subgroups in diastolic blood pressure (ADD) and red cell distribution width (RDW). Statistically significant differences between the subgroups are also observed in a number of other indicators (total protein, low-density lipoproteins (LDL), albumin, alanine aminotransferase (ALT), mean platelet volume (MPV), the Wexler-TV test, the atherogenicity coefficient (KA), and cholesterol), but due to the small sizes of the compared subgroups these may be false positives.
Keywords: AB analysis, bootstrap, confidence intervals, geroprophylactic effect, predicting the effectiveness of treatment, biological age
DOI: 10.26102/2310-6018/2025.49.2.021
The paper considers the problem of reproducing the process of purchasing real estate, the solution of which will allow testing both existing and future dynamic pricing algorithms, predicting buyers' preferences, and forming a demand curve. As a solution, it is proposed to use an approach based on discrete choice models, which are widely represented in the economic literature and have a wide range of applications in studying consumer behavior and preferences in competitive markets. This paper presents a new discrete choice model that uses a neural network to form the utility of a real estate object. An approach to training the model through Siamese neural networks is proposed. The article also proposes a non-standard architecture for the main neural network that avoids loss of convergence during training. The paper simulates the process of purchasing real estate using both classical models based on logistic regression with random coefficients and the neural network model, and compares them. Numerical experiments show a noticeable advantage of the proposed neural network approach, and a permutation test establishes the statistical significance of the results.
Keywords: discrete choice model, siamese neural networks, sales process, real estate, customer preference, econometric modeling
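A useful reference point for the classical baseline mentioned above is the multinomial logit: the probability of choosing an alternative is the softmax of its utility. The utilities below are invented for illustration; in the paper's model, a neural network would produce them instead:

```python
# Multinomial logit: choice probabilities as the softmax of utilities.

import math

def choice_probabilities(utilities):
    """Softmax over the alternatives' utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical listings with utilities from price/area trade-offs.
probs = choice_probabilities([1.2, 0.4, -0.3])
print([round(p, 3) for p in probs])
assert abs(sum(probs) - 1.0) < 1e-9   # probabilities over alternatives
```

Swapping a linear utility for a learned one keeps this choice layer intact, which is what lets neural and classical variants be compared on the same simulated purchases.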
DOI: 10.26102/2310-6018/2025.48.1.033
The article addresses the challenges of developing an industry-specific decision support system for education and career guidance in engineering professions under conditions of limited data availability. The system aims to facilitate informed career choices by assessing students’ aptitudes for engineering and technical fields. To formalize these aptitudes, the authors propose a set of key factors and evaluation metrics that enable data-driven conclusions using information extracted from digital educational environments. These factors are designed to leverage immersive technologies and digital educational tools for data acquisition. The study introduces a generalized mathematical model that quantifies the manifestation of multiple parameters and aligns them with potential professional trajectories. The model incorporates weighted indices and significance assessments for predictive analytics, along with methods to integrate diverse evaluation approaches into the decision support framework. Parameters include psychological diagnostics and academic performance metrics. Additionally, the paper demonstrates the application of the generalized model to the mining industry, validated through empirical testing involving a control group of industry professionals. The results highlight the model’s adaptability to sector-specific requirements and its capacity to enhance objectivity in career aptitude assessment. This research contributes to the development of scalable, data-informed tools for engineering career guidance, emphasizing the integration of emerging technologies into educational ecosystems.
Keywords: decision support systems, forecasting, mathematical modeling, data model, digital environment, career guidance
DOI: 10.26102/2310-6018/2025.48.1.025
The article discusses automation of the functionality of the admission campaign of a higher education institution, in particular issues related to the introduction of enrollment priorities. An enrollee submits an application for admission to higher education programs, in which they indicate individual competitive groups and an enrollment priority for each of them. Based on the information provided, the institution determines the highest priorities for the enrollee's further enrollment. The complex of programs presented in this article is a relevant tool for solving the problem of automatically determining the highest priorities. The complex developed by the authors consists of two subprograms, each implementing its own algorithm. The first algorithm is based on the «brute force» (exhaustive search) method, which is simple to implement and yields readable code. The complex also implements the Gale-Shapley algorithm, which searches for stable matchings between two groups of participants. The article presents the main stages of the complex of programs in detail. Finally, the authors analyzed the results of the implemented algorithms and concluded that they are effective. The resulting complex of programs is proposed for use by the admission committees of higher education institutions during a new recruitment cycle to automate determining the highest priorities of applicants in the competitive lists.
Keywords: complex of programs, «brute force» method, Gale-Shapley algorithm, admission campaign, selection committee, enrollee, priorities, enrollment, stable matchings
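The Gale-Shapley algorithm implemented in the second subprogram can be stated compactly in its textbook applicant-proposing form; the preference lists here are a toy instance, not admission data:

```python
# Gale-Shapley stable matching: applicants propose in priority order,
# programs tentatively hold the best proposal received so far.

def gale_shapley(applicant_prefs, program_prefs):
    free = list(applicant_prefs)               # applicants yet unmatched
    next_choice = {a: 0 for a in applicant_prefs}
    match = {}                                 # program -> applicant
    rank = {p: {a: i for i, a in enumerate(order)}
            for p, order in program_prefs.items()}
    while free:
        a = free.pop()
        p = applicant_prefs[a][next_choice[a]]  # a's best untried program
        next_choice[a] += 1
        if p not in match:
            match[p] = a
        elif rank[p][a] < rank[p][match[p]]:    # program prefers newcomer
            free.append(match[p])
            match[p] = a
        else:
            free.append(a)                      # rejected, tries next choice
    return match

applicants = {"x": ["p1", "p2"], "y": ["p1", "p2"]}
programs = {"p1": ["y", "x"], "p2": ["x", "y"]}
print(gale_shapley(applicants, programs))  # {'p1': 'y', 'p2': 'x'}
```

The resulting matching is stable: no applicant-program pair would both prefer each other over their assignment, which is the property that makes the algorithm suitable for priority-based enrollment.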
DOI: 10.26102/2310-6018/2025.48.1.024
Hospital statistics on COVID-19 recoveries in Irkutsk are presented as the rate of recovery within a certain number of days for the full group of patients. The recovery time varies from 1 to 182 days, and the number of cases considered reaches ~100,000. For convenience, it is proposed to approximate the tabulated recovery rate by various types of nonlinear functions. The following approximating functions were studied: Gaussian, Lorentz, modified Lorentz, the Weibull function, and Johnson functions. For comparison with the statistics, methods were used that minimize the standard deviation of the approximating functions from the experimental data: the least squares method for functions with two and three parameters, and the coordinate descent and gradient descent methods for functions with four fitting parameters. It is shown that the best fit is provided by a modified Lorentz function with four parameters. By degree of discrepancy with the experimental statistics, the approximating functions are ordered as follows: the Weibull function provides the least accurate fit (16.15%), followed by the Johnson SU function (10.65%), a slightly better fit for the Johnson SB function (8.49%), then the Gaussian function (5.8%) and the Lorentz function (3.2828%); the best fit, under certain approximations, is given by the modified Lorentz function (3.2804%).
Keywords: epidemic theory, optimization methods, coordinate descent, gradient descent, least squares method, gauss approximation, lorentz approximation, weibull approximation, johnson approximation, modified Lorentz distribution
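The fitting setup — a bell-shaped approximating function and iterative minimization of the squared deviation — can be sketched with a Lorentz curve and crude coordinate descent over a parameter grid (synthetic data, not the Irkutsk statistics):

```python
# Sketch of the fitting procedure: a Lorentz-type recovery curve and
# coordinate descent minimizing the squared deviation. Parameters and
# data are synthetic for illustration.

def lorentz(t, amp, t0, width):
    return amp / (1.0 + ((t - t0) / width) ** 2)

# Synthetic "recovery rate vs day" data generated from a known curve.
days = list(range(1, 31))
data = [lorentz(t, amp=5.0, t0=14.0, width=4.0) for t in days]

def sse(amp, t0):
    """Sum of squared deviations with width held fixed at 4.0."""
    return sum((lorentz(t, amp, t0, 4.0) - y) ** 2
               for t, y in zip(days, data))

# Coordinate descent: sweep one parameter at a time over a grid.
amp, t0 = 1.0, 10.0
for _ in range(5):
    amp = min((a * 0.1 for a in range(1, 101)), key=lambda a: sse(a, t0))
    t0 = min((c * 0.5 for c in range(1, 61)), key=lambda c: sse(amp, c))
print(round(amp, 1), round(t0, 1))  # converges to the true amp=5.0, t0=14.0
```

A four-parameter fit (as for the modified Lorentz function) simply adds the remaining parameters to the sweep, at the cost of more evaluations per iteration.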
DOI: 10.26102/2310-6018/2025.48.1.020
The article studies the problem of identifying signs of depression from social network user data using machine learning methods and network analysis. The study includes the development of a model for detecting users with signs of depression, which relies on text analysis of their posts and profile metadata. Neural networks were used as the classification algorithms and showed high accuracy. Network analysis was applied to examine the influence of users with signs of depression; it shows that such users have low centrality and do not form dense clusters, indicating their social isolation. The hypothesis that depression spreads through social connections was not confirmed, suggesting minimal impact of depressive users on others. The research results can be used to develop systems for early detection of depression. Special attention is given to the study's limitations, including the use of data from a single social network and the complexity of processing textual data. The article proposes directions for further research aimed at expanding methods for analyzing the spread of depressive behavior in social networks.
Keywords: forecasting, depression, psychological disorder, classification, social network, machine learning, neural network, network analysis
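The centrality computation underlying the network-analysis step can be illustrated with degree centrality on a toy friendship graph (the graph is invented, and the paper's exact centrality measure may differ):

```python
# Degree centrality: a user's share of possible connections in the graph.

def degree_centrality(edges, nodes):
    deg = {n: 0 for n in nodes}
    for a, b in edges:          # undirected friendship edges
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {v: d / (n - 1) for v, d in deg.items()}

users = ["u1", "u2", "u3", "u4"]
friendships = [("u1", "u2"), ("u1", "u3"), ("u1", "u4")]
print(degree_centrality(friendships, users))
# u1 is the hub (1.0); a socially isolated profile would score near 0
```

Low scores concentrated among flagged users, with no dense cluster among them, is the pattern the abstract reports as evidence of social isolation.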
DOI: 10.26102/2310-6018/2025.48.1.014
When creating a communication network, various kinds of interference inevitably arise that reduce its effectiveness, and the absence of measures to eliminate such interference makes it difficult to optimize the network. Among the problems caused by interference, channel blocking is one of the most significant; left unresolved, it can make successful network design impossible. Because traditional congestion monitoring methods have long response times and unsatisfactory detection quality, a cloud-computing-based method for real-time monitoring of communication network blocking is proposed. First, a network monitoring point is established, and the receiver carries out communication data collection. Based on the collected data, continuous traffic calculation is performed to determine whether a channel of the communication network is in an emergency blocking state and to locate the blocking point exactly; an alarm message is then generated from this information to produce the monitoring results. The real-time running time and accuracy of the monitoring method are analyzed experimentally: the method keeps the delay within 0.2 s, and its monitoring error rate is low.
Keywords: cloud computing, telecommunications, network congestion, real-time monitoring, monitoring point, system management, blocking
DOI: 10.26102/2310-6018/2025.48.1.026
Currently, deep learning, as a hot research direction, has yielded fruitful results in natural language processing and in image recognition and generation, as exemplified by ChatGPT and Sora. Combining deep learning with decoding algorithms for channel coding has also gradually become a research hotspot in the field of communication. In this paper, we use deep learning to improve the adaptive exponential min-sum (AEMS) algorithm for LDPC codes. First, we unfold the iterative decoding procedure between check nodes (CNs) and variable nodes (VNs) of the AEMS algorithm into a feedforward propagation network based on the Tanner graph derived from the H matrix of the LDPC code. Second, to improve training efficiency and reduce computational complexity, we assign the same weight factor to all edge messages in each iteration of the AEMS decoding network, which lowers complexity while preserving decoding performance; we call this the shared neural AEMS (SNAEMS) decoding network. Simulation results show that the proposed SNAEMS decoding network outperforms the conventional AEMS decoder, and its coding gain grows as the code length increases.
Keywords: LDPC, deep learning, neural network, exponential algorithm, min sum
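For context, the check-node update that min-sum LDPC decoders (including neural variants such as the one described) build on can be sketched as follows. This is the plain min-sum rule, not the AEMS variant, and the message values are made up.

```python
# Sketch of the standard min-sum check-node (CN) update. For each edge,
# the outgoing message uses only the OTHER incoming messages:
#   sign = product of their signs, magnitude = minimum of their magnitudes.
# Learned weight factors, as in a neural decoder, would scale these outputs.

def check_node_update(v2c):
    """v2c: list of incoming variable-to-check messages (LLRs, floats).
    Returns one outgoing check-to-variable message per edge."""
    out = []
    for i in range(len(v2c)):
        others = v2c[:i] + v2c[i + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        mag = min(abs(m) for m in others)
        out.append(sign * mag)
    return out

print(check_node_update([1.5, -0.7, 2.0]))
```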
DOI: 10.26102/2310-6018/2025.48.1.019
Modern unmanned aerial systems (UAS) play a key role in various industries, including environmental monitoring, geodesy, agriculture, and forestry. One of the most critical factors for their successful application is the integration of data from various sensors, such as global navigation satellite systems, inertial navigation systems, lidars, cameras, and thermal imagers. Sensor data fusion significantly enhances the accuracy, reliability, and functionality of control systems. This paper explores data integration methods, including traditional algorithms like Kalman filters and their extended versions, as well as modern approaches based on deep learning models, such as FusionNet and Deep Sensor Fusion. Experimental studies have shown that learning-based models outperform traditional algorithms, achieving up to a 40 % improvement in navigation accuracy and enhanced resilience to noise and external disturbances. The proposed approaches demonstrate the potential to expand UAS applications in autonomous navigation, cartography, and monitoring, particularly in challenging operational environments. Future development prospects include the implementation of hyperspectral sensors and the development of adaptive data integration methods to further improve the efficiency and effectiveness of unmanned systems.
Keywords: sensor data integration, unmanned aerial systems, Kalman filter, FusionNet, Deep Sensor Fusion, autonomous navigation, resilience to disturbances
DOI: 10.26102/2310-6018/2025.48.1.023
The paper proposes a distributed control algorithm for multi-agent systems with a leader. The main objective is to ensure the asymptotic convergence of the states of all follower agents to the state of the leader, under the condition that each agent uses only local information obtained from neighboring nodes. The dynamics of the agents are modeled by a second-order system, a double integrator, which makes it possible to take into account both the position and the velocity of the agents. This description reflects the properties of real systems more accurately than the commonly used simplified first-order models. Graph theory is employed to formalize the topology of communication links between agents. The developed algorithm is based on the idea of pinning control and uses local information about the states of neighboring agents and the leader. The Lyapunov method and eigenvalue analysis were used to study the stability of the system and to obtain analytical conditions on the gain factors that guarantee the achievement of consensus. To illustrate the effectiveness of the proposed algorithm, numerical simulations were conducted in MATLAB. The leader's trajectory is chosen based on the optimal trajectory obtained in the authors' previous studies. The results confirm that the states of the follower agents asymptotically converge to the state of the leader over time. The proposed algorithm can be applied to problems of group control of mobile robots, unmanned vehicles, and other distributed technical systems.
Keywords: multi-agent systems, distributed control, consensus, leader-follower structure, graph theory, pinning control, group control
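The leader-follower scheme above can be sketched numerically. The gains, the two-follower chain topology, and the constant-velocity leader below are assumptions for illustration, not the authors' exact settings; follower 1 is pinned to the leader, follower 2 only sees follower 1.

```python
# Minimal sketch: leader-follower consensus for double-integrator agents
# with pinning control, integrated with forward Euler.

kp, kv = 1.0, 2.0          # position / velocity gains (illustrative)
dt, steps = 0.01, 4000     # 40 s of simulated time

x0, v0 = 0.0, 1.0          # leader: constant-velocity trajectory
x = [2.0, -1.0]            # follower positions
v = [0.0, 0.0]             # follower velocities

for _ in range(steps):
    # control: local position/velocity disagreement with neighbors;
    # follower 1's law includes a pinning term toward the leader
    u1 = -kp * ((x[0] - x0) + (x[0] - x[1])) - kv * ((v[0] - v0) + (v[0] - v[1]))
    u2 = -kp * (x[1] - x[0]) - kv * (v[1] - v[0])
    x[0] += v[0] * dt; x[1] += v[1] * dt
    v[0] += u1 * dt;   v[1] += u2 * dt
    x0 += v0 * dt      # leader moves at constant velocity

print(abs(x[0] - x0), abs(x[1] - x0))  # tracking errors shrink toward zero
```

Note that follower 2 never observes the leader directly, yet still converges through follower 1, which is the point of the pinning approach.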
DOI: 10.26102/2310-6018/2025.48.1.013
Rate limiting is a crucial aspect of managing the availability and reliability of APIs. Today, there are several approaches to implementing rate limiting mechanisms, each based on specific algorithms or their combinations. However, existing methods often treat all consumers as a homogeneous group, hindering the creation of flexible resource management strategies in modern distributed architectures. In this article, the author proposes two new methods for rate limiting based on the token bucket algorithm. The first method involves using a shared token bucket with different minimum fill requirements depending on the consumer class. The second method suggests using separate token buckets for each consumer class with individual parameter values but a common limit. Simulation results confirmed that both methods enable efficient API request limitation, though disparities emerged regarding resource distribution patterns across diverse consumer classes. These findings have practical implications for developers of information systems and services who need to maintain high availability while ensuring access guarantees for various consumer categories.
Keywords: rate limiting, token bucket algorithm, software interface, consumer class, quota, threshold, burst traffic
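The first method described, a shared token bucket with per-class minimum fill requirements, can be sketched as follows. The class name `ClassAwareTokenBucket`, the thresholds, and the rates are all illustrative assumptions, not the author's implementation.

```python
# Sketch: one shared token bucket, but a request from a given consumer
# class is admitted only if the bucket would still hold that class's
# minimum reserve afterwards. Low-priority classes thus leave headroom
# for higher-priority ones.

import time

class ClassAwareTokenBucket:
    def __init__(self, capacity, refill_rate, min_fill):
        self.capacity = capacity
        self.refill_rate = refill_rate   # tokens added per second
        self.min_fill = min_fill         # class -> reserve threshold
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, consumer_class, cost=1.0):
        now = time.monotonic()
        # lazy refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens - cost >= self.min_fill[consumer_class]:
            self.tokens -= cost
            return True
        return False

bucket = ClassAwareTokenBucket(
    capacity=10, refill_rate=5.0,
    min_fill={"premium": 0.0, "basic": 4.0})

print(bucket.allow("basic"))   # admitted while the bucket is well filled
```

Once the bucket drains to the "basic" reserve level, only "premium" requests pass until refill catches up, which is the access-guarantee behavior the abstract refers to.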
DOI: 10.26102/2310-6018/2025.48.1.021
This study presents a method for closed-ended question generation leveraging large language models (LLM) to improve the quality and relevance of generated questions. The proposed framework combines the stages of generation, verification, and refinement, which allows for the improvement of low-quality questions through feedback rather than simply discarding them. The method was tested on three widely recognized datasets: SQuAD, Natural Questions, and RACE. Key evaluation metrics, including ROUGE, BLEU, and METEOR, consistently showed performance gains across all tested models. Four LLM configurations were used: O1, O1-mini, GPT-4o, and GPT-4o-mini, with O1 achieving the highest results across all datasets and metrics. Expert evaluation revealed an accuracy improvement of up to 14.4% compared to generation without verification and refinement. The results highlight the method's effectiveness in ensuring greater clarity, factual correctness, and contextual relevance in generated questions. The combination of automated verification and refinement further enhances outcomes, showcasing the potential of LLMs to refine text generation tasks. These findings will benefit researchers in natural language processing, educational technology, and professionals working on adaptive learning systems and corporate training software.
Keywords: question generation, large language models, artificial intelligence, natural language processing, o1, o1-mini, GPT-4o, GPT-4o-mini
DOI: 10.26102/2310-6018/2025.48.1.018
The relevance of the study is determined by the need to enhance the effectiveness of marketing strategies through automated and customizable customer segmentation. This work proposes a universal customer data management system based on RFM segmentation with the ability to configure flexible logic, as well as the capability to integrate with various external systems. Traditional CRM systems and manual RFM segmentation methods are limited in functionality and do not always meet the business needs for flexibility and integration with various data sources. The study identifies the shortcomings of traditional CRM systems and suggests points for improvement in the described system. Additionally, an experiment was conducted comparing the RFM segments generated using the proposed architecture with Yandex's auto-strategies in the Yandex.Direct advertising platform. The application of the system showed significant advantages over auto-strategies, including a 30.71% increase in purchases in the case of a clothing store. The results confirm the practical value of the system for optimizing marketing campaigns and improving conversion. The results are of practical importance for companies in need of customized solutions and integrations. Further development is proposed, focusing on improving the RFM segmentation method by implementing machine learning algorithms and exploring additional effective channels for utilizing the generated segments.
Keywords: RFM analysis, marketing automation, customer loyalty, user segmentation, e-commerce, advertising strategy optimization
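For readers unfamiliar with RFM segmentation, a minimal generic scoring sketch is shown below. The fixed thresholds and the order data are made up; the paper's system uses configurable segmentation logic rather than these hard-coded bounds.

```python
# Sketch: score each customer 1-3 on Recency, Frequency, Monetary value.
# Thresholds are illustrative; real systems tune them per business.

from datetime import date

orders = [  # (customer_id, order_date, amount) -- made-up data
    ("c1", date(2025, 3, 1), 120.0),
    ("c1", date(2025, 3, 20), 80.0),
    ("c2", date(2024, 11, 5), 40.0),
]
today = date(2025, 4, 1)

def rfm(orders, today):
    agg = {}  # customer -> (last order date, order count, total spend)
    for cid, d, amt in orders:
        r, f, m = agg.get(cid, (None, 0, 0.0))
        r = d if r is None or d > r else r
        agg[cid] = (r, f + 1, m + amt)

    def score(val, lo, hi, reverse=False):
        s = 1 if val <= lo else (2 if val <= hi else 3)
        return 4 - s if reverse else s   # reverse: smaller value = better

    out = {}
    for cid, (last, freq, money) in agg.items():
        days = (today - last).days
        out[cid] = (score(days, 30, 90, reverse=True),  # fresher = higher R
                    score(freq, 1, 3),                  # more orders = higher F
                    score(money, 50, 150))              # more spend = higher M
    return out

print(rfm(orders, today))
```

The resulting (R, F, M) triples are what a campaign tool would map to segments such as "loyal" or "at risk" before export to an advertising platform.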
DOI: 10.26102/2310-6018/2025.48.1.015
Enterprises in the housing and communal services sector need viability and competitiveness in the energy market, as well as attractiveness for consumers. For Russian companies, it is important to maintain relatively "soft" (flexible) tariffs and energy supply strategies. Effective solutions must be found, for example, through investment and by reducing uncertainties such as "white noise" in the energy system. The purpose of the study is a systematic analysis of the potential of smart contracts, the digital ruble, and digital payments in energy service contracts. The possibilities of energy contracts and services, the content and features of such contracts, and measures for sustainable energy conservation with a given profitability and optimization of energy resources were studied by methods of system analysis and modeling. This requires identifying the parameters and features of the contract and simulating energy supply processes. The results of the study are: 1) a systematic analysis of standard contract forms and a description of a set of key energy-saving procedures of the enterprise; 2) an analysis of the potential of the digital ruble and its "energy capabilities"; 3) a model of the dynamics of managing an energy service enterprise based on the diffusion of digital services, together with a study of this model. The results of the work will expand the possibilities for concluding and developing energy service contracts in practice, as well as for building flexible models and algorithms of energy supply.
Keywords: system analysis, smart contract, energy consumption, energy service contract, modeling
DOI: 10.26102/2310-6018/2025.48.1.016
The paper presents a system for analyzing images of nucleated bone marrow cells to form a diagnostic conclusion in oncohematology, aimed at solving the problem of constructing a data processing pipeline in automatic analyzers of biomedical images. The relevance of the study is due to the need to improve the reliability of automatic microscopic analysis of biomedical samples, which is a difficult task due to the high variability and morphological complexity of the objects under investigation. One solution to this problem is to develop a web service that uploads, processes, and describes images, then classifies them into categories of confirmed and unconfirmed cases. This web service provides cross-platform operation and accessibility, builds an open database of verified images, and offers tools for processing and analyzing images, as well as tools that let the physician correct the processing results. The system does not prescribe treatment and does not make diagnoses independently, but serves as an intelligent tool for processing, analyzing, and transmitting research results in real time. The testing results showed high accuracy of the system: 91% for neural network methods and up to 97% for classical algorithms. The developed system allows for the analysis of data processing modules for computer microscopy systems.
Keywords: analysis of biomedical images, selection of objects, classification of nucleated cells, pattern recognition, oncohematology
DOI: 10.26102/2310-6018/2025.48.1.012
This study focuses on route optimization in quantum key distribution (QKD) networks, which are characterized by a number of physical constraints and a strong dependence on topology. The paper examines the application of two variations of the ant colony algorithm, the elitist ant system (EAS) and the Max-Min ant system (MMAS), to construct optimal routes in QKD networks. A metric of the communication efficiency of a route in QKD networks is presented to evaluate route quality against given capacity and security requirements. The peculiarity of this metric is its non-additive capacity component, which depends on the minimum link efficiency along the route. A series of experiments was conducted on a randomly generated planar graph for long and short routes with the EAS and MMAS algorithms: MMAS proved significantly more efficient for long routes, while for short routes EAS found the route faster without significant loss in solution quality. The results obtained in this study can be applied to problems of dynamic routing, as well as to optimization of the topology of quantum key distribution networks.
Keywords: quantum key distribution, metaheuristics, ant algorithm, elitist ant system, Max-Min ant system, pathfinding
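The non-additive capacity component mentioned in the abstract can be illustrated with a sketch: the route's capacity term is the minimum key rate over its links (the bottleneck), not a sum. The exact weighting and the security term below are hypothetical, not the authors' formula.

```python
# Sketch of a non-additive route-efficiency metric for a QKD network.
# route: list of edges; key_rate: edge -> secret-key rate. Higher is better.

def route_efficiency(route, key_rate, hop_cost=1.0, w_cap=0.7, w_sec=0.3):
    """Capacity term is bottleneck-limited (min over links); the security
    term penalizes longer routes, since each extra hop typically means
    another trusted node. Weights and the security form are assumptions."""
    capacity = min(key_rate[e] for e in route)   # non-additive: weakest link
    security = 1.0 / (1.0 + hop_cost * len(route))
    return w_cap * capacity + w_sec * security

kr = {("A", "B"): 5.0, ("B", "C"): 2.0, ("C", "D"): 4.0}
print(route_efficiency([("A", "B"), ("B", "C"), ("C", "D")], kr))
```

Because the metric is not a sum of per-edge weights, classical shortest-path algorithms do not apply directly, which is one motivation for metaheuristics such as ant colony methods.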
DOI: 10.26102/2310-6018/2025.48.1.011
The article presents the results of the development and experimental study of two simplified physical models of the neonatal mediastinum for electrical impedance tomography. The created models are based on spiral computed tomography data and take into account the anatomical features of the infant chest organs. The designs were implemented using 3D printing technologies, which made it possible to achieve high accuracy of geometric parameters. The models are equipped with a controlled air filling system for the lungs and three rows of electrodes, which makes it possible to conduct experiments on modeling global and regional ventilation. Experimental studies have demonstrated that the developed models make it possible to record respiratory volumes in the range from 2 to 120 ml, which corresponds to the physiological parameters of newborn breathing. The data obtained confirmed the operability of the models, their sensitivity to changes in air volumes, as well as their suitability for research and testing of new algorithms and methods in the field of electrical impedance tomography. It was found that the proposed models provide adequate reproduction of ventilation processes and can be used to develop diagnostic solutions in the field of neonatology. The results of the work are of practical value for scientific research aimed at improving methods for diagnosing respiratory disorders in newborns, and can be used in educational practice.
Keywords: simplified physical model of mediastinum, electrical impedance tomography, newborns, process of global and regional ventilation, lungs