metadata of articles for the last 2 years
The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Network planning and resource optimization of a project in conditions of fuzzy group expert assessment of the duration of work

2025. Vol. 13. No. 1. id 1861
Azarnova T.V.  Asnina N.G.  Bondarenko Y.V.  Sorokina I.O. 

DOI: 10.26102/2310-6018/2025.48.1.041

This article presents an algorithm for calculating the time parameters and performing resource optimization of a network graph whose activity durations are estimated by an expert group in the form of fuzzy triangular numbers. To account for the variation in expert assessments, the assessment results are first summarized as fuzzy interval-valued numbers and then converted into fuzzy triangular numbers based on the risk coefficient of the decision maker. The use of fuzzy interval-valued numbers makes it possible to account not only for the uncertainty of expert opinions regarding the duration of work, but also for the differences between expert opinions when forming the membership function of the fuzzy triangular numbers. The network planning algorithm is based on the classical critical-path algorithm, with special methods for calculating the early and late times of events when the duration of work is given as fuzzy triangular numbers. Instead of the maximum and minimum operations used to find the early and late times of events, a probabilistic comparison of fuzzy numbers is applied. From the calculated fuzzy triangular estimates of the early and late occurrence of events, fuzzy estimates of the early and late start and finish of each job are computed, together with the probability of each job being performed at each moment in time. The resulting probabilities make it possible to estimate the resource availability of the project at any given time. The paper also proposes a mathematical model for optimizing the resource availability of the project by shifting the start of each job within its early and late start times.
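The abstract gives no formulas, so the sketch below is only a hedged illustration of the two ingredients it names: component-wise addition of triangular fuzzy numbers, and one common possibility-based comparison index that could stand in for the crisp max/min operations. The paper's actual probabilistic comparison may differ; all numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzy:
    a: float  # left support bound
    m: float  # peak (membership degree 1)
    b: float  # right support bound

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is component-wise
        return TriangularFuzzy(self.a + other.a, self.m + other.m, self.b + other.b)

def possibility_geq(x: TriangularFuzzy, y: TriangularFuzzy) -> float:
    """Possibility that x >= y: 1 when x peaks at or above y's peak,
    0 when their supports do not overlap, otherwise the height at which
    the right slope of x crosses the left slope of y."""
    if x.m >= y.m:
        return 1.0
    if y.a >= x.b:
        return 0.0
    return (x.b - y.a) / ((y.m - y.a) + (x.b - x.m))

# Two fuzzy activity durations on the same path
d1 = TriangularFuzzy(2, 4, 6)
d2 = TriangularFuzzy(3, 5, 9)
print(d1 + d2)                   # chained duration along the path
print(possibility_geq(d1, d2))   # degree to which d1 dominates d2 -> 0.75
```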

Keywords: network graph of the project, fuzzy triangular and interval-valued representation, duration of the project work, fuzzy time parameters of the project work, resource optimization of the project

Analysis of the influence of gas dynamic processes on temperature stratification in an energy separation device, taking into account Bernoulli's law and Joule-Thomson effect

2025. Vol. 13. No. 1. id 1855
Matveev A.F.  Kovalnogov V.N. 

DOI: 10.26102/2310-6018/2025.48.1.045

The article provides an analysis of a number of gas-dynamic processes affecting the efficiency of the gas-dynamic temperature stratification device. The relevance of the study is due to the need for a more accurate description of the processes of gas-dynamic temperature stratification in energy separation devices, which is important for improving the efficiency of heat exchange and aerodynamic systems. This article is aimed at identifying patterns of energy redistribution in the flow, taking into account the Bernoulli law and the Joule-Thomson effect, as well as analyzing their impact on temperature gradients inside the gas-dynamic temperature stratification device. The study employs mathematical modeling conducted within the STAR-CCM+ framework, enabling a thorough exploration of gas flow characteristics, as well as variations in velocity, pressure, and temperature throughout the system. The article presents the results of a numerical experiment, reveals the mechanisms of influence of the main gas-dynamic effects on temperature stratification, identifies key dependencies between the input parameters of the device and the flow characteristics, and substantiates the possibility of targeted optimization of energy separation. Mathematical models are derived, supplemented by equations that take into account the role of Bernoulli's law and the Joule-Thomson effect. The corresponding equations are considered. The materials of the article are of practical value for the development and improvement of energy separation devices, optimization of working processes in gas-dynamic systems and increasing the efficiency of temperature stratification in aerodynamic installations for use in the real sector of the economy.

Keywords: gas dynamic temperature stratification, energy separation device, mathematical modeling, STAR-CCM+, Bernoulli's law, Joule-Thomson effect

Segmentation of liver volumetric lesions in multiphase CT images using the nnU-Net framework

2025. Vol. 13. No. 1. id 1853
Kulikov A.  Kashirina I.L.  Savkina E. 

DOI: 10.26102/2310-6018/2025.48.1.040

The article presents a study on the application of the nnU-Net (v2) framework for automatic segmentation and classification of liver space-occupying lesions on abdominal computed tomography. Particular attention is paid to the effect of batch size and of using data from different contrast phases on the classification accuracy for lesions such as cysts, hemangiomas, carcinomas, and focal nodular hyperplasia (FNH). In the experiments, batch sizes of 2, 3, and 4 were used, as well as data from two contrast phases, arterial and venous. The results showed that the optimal batch size is 3 or 4, depending on the pathology, and that using data from two contrast phases significantly improves the accuracy and sensitivity of classifying space-occupying lesions, especially carcinomas and cysts. The best sensitivity achieved was 100% for carcinomas, 94% for cysts, 81% for hemangiomas, and 84% for FNH. The paper confirms the effectiveness of nnU-Net v2 for medical image segmentation and classification and highlights the importance of choosing the right training parameters and data to achieve the best results in medical diagnostics.

Keywords: nnU-Net v2, CT images, liver pathologies, batch size, segmentation, classification, medical images, contrast phases, carcinoma

Modeling of newborn breathing patterns using electrical impedance tomography

2025. Vol. 13. No. 2. id 1849
Konko M.A.  Aleksanyan G.K.  Pyatnitsyn S.I.  Gorbatenko N.I. 

DOI: 10.26102/2310-6018/2025.49.2.001

The article presents the results of experimental studies aimed at modeling five basic breathing patterns of newborns using an electrical impedance tomograph and a simplified physical model of the neonatal mediastinum. The study covers such patterns as normal breathing (eupnea), periodic breathing, tachypnea, breathing with retractions, and central apnea. The previously developed simplified physical model of the neonatal mediastinum is equipped with a controlled air-filling system, which allows reproducing various volumes and modes of ventilation. Experimental studies confirmed the possibility of modeling and recording each of the five breathing patterns using an electrical impedance tomograph. The developed technique allows research and testing of new data processing algorithms in the field of electrical impedance tomography of the lungs of newborns. The results confirm that electrical impedance tomography is a promising tool for diagnosing and monitoring respiratory disorders in newborns. The proposed solutions can be used to develop new approaches to the diagnosis and treatment of respiratory diseases in neonatology.

Keywords: electrical impedance tomography, newborns, patterns, diagnostics, monitoring, lungs

Optimization of consumer order fulfillment management in a digitalized organizational system of interaction with producers

2025. Vol. 13. No. 1. id 1845
Lvovich Y.E.  Preobrazhensky Y.P.  Pupykin A.N. 

DOI: 10.26102/2310-6018/2025.48.1.043

The article explores the application of an optimization approach in managerial decision-making within a digitalized organizational system for consumer order fulfillment. It is demonstrated that when constructing a model of interaction between consumers and producers, the characteristics of human-machine environment elements must be taken into account. Such consideration enables the optimization of management in the interaction between ergatic and non-ergatic elements based on performance, reliability, and cost indicators. The formation of the optimization model is based on the introduction of alternative variables characterizing the choice of the number of ergatic elements interacting with a specific non-ergatic element. The extremal requirement considered is the maximization of the performance of the consumer order fulfillment process in the digitalized organizational system, while the boundary requirements are the specified levels of reliability and costs. A transition to an equivalent unconstrained optimization function is implemented. The algorithmic procedure for managerial decision-making is oriented towards the structure of the equivalent optimization function and includes several stages: automatic generation of feasible solutions in a randomized environment, iterative adjustment of variables, verification of the stopping condition for the iterative process, and expert selection of the final solution.
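The transition from a constrained formulation to an equivalent unconstrained function can be illustrated with a textbook exterior-penalty construction. The function name and all numbers below are invented for illustration; the abstract does not give the paper's exact form.

```python
def penalized_objective(perf, reliability, cost, r_min, c_max, mu=1e3):
    """Equivalent unconstrained criterion: maximize performance while
    penalizing violation of the reliability floor (r_min) and the cost
    ceiling (c_max). mu is the penalty weight."""
    violation = max(0.0, r_min - reliability) ** 2 + max(0.0, cost - c_max) ** 2
    return perf - mu * violation

# A feasible configuration keeps its performance value unchanged...
print(penalized_objective(perf=0.90, reliability=0.99, cost=80, r_min=0.95, c_max=100))
# ...while an infeasible one is driven strongly negative by the penalty
print(penalized_objective(perf=0.95, reliability=0.90, cost=120, r_min=0.95, c_max=100))
```

With such an objective, the randomized generation and iterative adjustment stages described above can work on a single scalar criterion instead of checking constraints separately.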

Keywords: organizational system, digitalization, management, human-machine environment, optimization, expert evaluation

Structure of software for managing the activities of an IT company

2025. Vol. 13. No. 1. id 1843
Oleinikova S.A.  Dyatchina A.V.  Politov V.A. 

DOI: 10.26102/2310-6018/2025.48.1.044

This article is devoted to the development of software designed to manage the activities of a large IT company by assessing the start time of individual project tasks and assigning specialists to them. Optimizing the process of solving these two interrelated tasks is one of the key factors in the effective functioning of an IT company. In addition to the specific features of this industry, which include different qualifications of specialists, the need to finalize tasks after their completion, and others, a key factor in planning is the periodic occurrence of unplanned events that increase the duration of the project (for example, adjusting certain tasks after agreement with the customer, the emergence of new tasks during discussion, etc.). All this requires the use of new algorithms that take into account the above nuances. This necessitates the development of software that implements the main management mechanisms for IT companies and allows for a prompt response to random factors that lead to a change in the previously found characteristics of the IT project. This software will combine a management system, client applications that allow recording all the nuances related to individual tasks (their implementation, changes in customer requirements, correction, etc.) and a database containing all the data on the project tasks, their interdependence, specialists, etc. As a result, a software structure has been obtained that manages the activities of an IT company by planning the start time of individual tasks, assigning specialists to them, and monitoring execution by introducing subsystems for planning, correction and evaluation of stochastic parameters.

Keywords: project management, IT company management, software, planning, schedule adjustment

Mivar expert system for supporting personnel decision-making in the production of planetary gearboxes

2025. Vol. 13. No. 1. id 1842
Antonova A.A.  Varlamov O.O. 

DOI: 10.26102/2310-6018/2025.48.1.042

The article analyzes the field of production of planetary gearboxes and identifies problems that arise at enterprises during production. As a solution, the development of a mivar expert system is proposed, whose task is to monitor the progress of gearbox production, support decision-making, and promptly notify enterprise employees about errors and deviations. The relevance of the work is due to the need to increase automation in gearbox production. The basis for decision-making will be the mivar knowledge base, for whose compilation the stages and parameters of the technological process of gearbox production are formalized. The result of the work is a mivar expert system supporting decision-making by personnel at an enterprise producing planetary gearboxes. The materials of the article are of practical value for specialists in the field of automation of production processes, as well as for managers and engineers seeking to improve management efficiency and optimize production processes. The scientific novelty of the work lies in substantiating the feasibility of using mivar expert systems to automate production processes related to the assembly of gearboxes, their testing, and storage in a warehouse. This system can serve as a basis for further developments and research in the field of integrating intelligent technologies into production processes.

Keywords: mivar, gearboxes, production of gearboxes, mivar expert system, knowledge base, Wi!Mi, MESD, Razumator, big knowledge

The importance of the portability factor in configuring the cycle of a real-time actuator control system

2025. Vol. 13. No. 1. id 1839
Zekenskii A.A.  Gribkov A.A. 

DOI: 10.26102/2310-6018/2025.48.1.037

The paper studies the problem of optimizing real-time control systems described within the actor model. The optimization problem is formulated as the optimal configuration of the control cycle, i.e., the distribution of functional element-actors across groups, threads, and execution sequence. We propose a configuration algorithm which, although it does not reduce the number of analyzed configuration variants, reduces the amount of calculation for each variant. In addition to the optimization variants with a limit on the total cycle time and with a limit on control system resources, considered in the authors' previous works, the paper addresses the problem of reducing the number of input and output ports through which the element-actors exchange data. The research shows that the number of ports can be reduced without compromising the functionality of the control system. This is due to the sequential execution of element-actors within one group of one thread: as a result, the same input or output ports can be used for the communication of an element-actor with several others. In addition to comparing different control-cycle configurations, the problem of reducing the number of ports can also be solved by using shared memory for element-actor communication. When the control system is built on a memory-oriented architecture, small amounts of data are transferred through high-speed shared memory, which mitigates the problem of queue formation.

Keywords: control system, actor model, loop, optimization, configuration, portability, memory-oriented architecture

An intelligent system for evaluating the performance of researchers in research organizations

2025. Vol. 13. No. 1. id 1837
Sakharov Y.S.  Kovaleva A.V. 

DOI: 10.26102/2310-6018/2025.48.1.039

The relevance of the study is due to the fact that, in conditions of high competition for qualified personnel, research organizations seek to attract and retain talented employees. Effective motivation systems based on objective performance assessment are becoming an important tool for achieving this goal. Intelligent systems can provide management with data-driven analytical reports and recommendations, which contributes to more informed decision-making in the field of employee motivation and management. In this regard, this article is aimed at developing an intelligent system for assessing the performance of employees in research organizations, which serves as a powerful tool for analyzing and managing human capital. The expert method is based on involving qualified specialists with deep knowledge and experience in the relevant field, which increases the objectivity and reliability of the assessment results. The article describes the advantages and disadvantages of this approach. The work also proposes the use of a machine learning method to assess the performance of researchers based on key performance indicators. The main indicators selected for assessing labor activity are: scientific and educational activity, scientific work, presentation of results, scientific and educational activity. The materials presented in the article will be relevant and useful for the heads of scientific and research organizations.

Keywords: productivity of work activities, expert assessment method, machine learning, innovation, artificial intelligence, data modeling, researchers

Structural modeling in resource allocation management in a regional organizational system using decision-making intellectualization tools

2025. Vol. 13. No. 1. id 1835
Lomakov A.V. 

DOI: 10.26102/2310-6018/2025.48.1.035

The paper presents a model-level structuring of the regional organizational system and its management, using long-term statistical information for intelligent decision support. The first structural model makes it possible to assess the nature of the interaction between the control center and the components of the organizational system based on the available arrays of statistical accounting information. Population groups and territorial entities of the region transfer data in the form of time series. The structural model of intelligent decision support by the control center is a component of the structure of the resource distribution management system. For its effective use as a basis for integrating the results of predictive analysis into management decision-making based on optimization modeling, it is proposed to implement two-level intellectualization subsystems. An algorithmic scheme has been developed that provides two-level intellectualization of management decision-making, combining visual and predictive analysis modules so that machine-learned predictive models can subsequently be used in expert assessment and optimization modeling.

Keywords: regional organizational system, management, statistical accounting, predictive analysis, forecasting, optimization

Implementation of a set-theoretic approach to obtain a numerical estimate of data privacy when using modules for blocking access to mobile applications

2025. Vol. 13. No. 1. id 1831
Shulzhenko A.D.  Kurpachenko D.M.  Saveliev M.F. 

DOI: 10.26102/2310-6018/2025.48.1.031

This paper considers the problem of assessing data confidentiality when using modules for blocking access to mobile applications. Messengers on the iOS 17 platform were selected as an example. The relevance of the study is due to the need to increase the level of protection of user data in the face of growing threats to information security. The main goal is to obtain a numerical estimate, and its achievement is shown through a comparative analysis of the data confidentiality provided by the application-blocking features of VK, Telegram, and WhatsApp. To achieve the goal, methods of set-theoretic analysis and expert assessment were used. Key parameters for ensuring confidentiality (type and length of the lock code, use of biometrics, auto-lock time, etc.) were identified and normalized to the range [0, 10]. The final score was calculated as the sum of the partial scores for each application. The results showed that Telegram provides the highest level of confidentiality due to the ability to use more complex lock codes and stricter security settings. VK is inferior to Telegram in a number of parameters but demonstrates better results than WhatsApp, unless all parameters are forcibly disabled. The findings can be used to improve data protection mechanisms in mobile applications, and the proposed methodology can serve as a basis for further research in the field of information security.
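The scoring scheme described — per-parameter normalization to [0, 10] followed by summation — can be sketched as follows. The parameter triples are invented for illustration and are not the paper's data.

```python
def normalize(value, worst, best):
    """Linearly map a raw parameter value onto the [0, 10] scale;
    'worst' and 'best' orient the scale, so lower-is-better
    parameters (e.g. auto-lock delay) are handled uniformly."""
    return 10 * (value - worst) / (best - worst)

def privacy_score(params):
    """Final score = sum of per-parameter partial scores."""
    return sum(normalize(v, worst, best) for v, worst, best in params)

# (value, worst, best) triples -- invented, illustrative only
example_app = [
    (6, 0, 6),    # lock-code length, digits (longer is better)
    (1, 0, 1),    # biometrics enabled
    (0, 300, 0),  # auto-lock delay, seconds (shorter is better)
]
print(privacy_score(example_app))  # 30.0 on this toy data
```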

Keywords: data privacy, access blocking, PIN lock, privacy assessment, messenger security, personal data, set-theoretic analysis, application auto-locking, notification content hiding, user data protection

Assessing the quality of the result in the problem of source code generation from an image

2025. Vol. 13. No. 1. id 1830
Nikitin I.V. 

DOI: 10.26102/2310-6018/2025.48.1.030

This study assesses the feasibility of building a system for executing functional tests for the task of generating source code from an image. There are many metrics for assessing the quality of text predicted by a neural network: mathematical ones, such as BLEU and ROUGE, and those that use another model for evaluation, such as BERTScore and BLEURT. However, the difficulty with generated source code is that the code is a set of instructions for performing a specific task. The relevance of the work stems from the fact that publications on the pix2code system make no mention of an automated test environment that can check whether the resulting code meets the specified conditions. In the course of the work, a subsystem was implemented that can automatically obtain information about the differences between an image rendered from the predicted code and an image rendered from the reference code. The results of this subsystem are also compared with the BLEU metric. The experiment shows that the BLEU value and test outcomes have no obvious relationship, which means that functional tests are necessary as an additional check of model performance.

Keywords: code generation, image, machine learning, BLEU, functional tests

Using graph neural networks for solving the Steiner tree problem

2025. Vol. 13. No. 1. id 1828
Piminov D.A.  Pechenkin V.V.  Korolev M.S. 

DOI: 10.26102/2310-6018/2025.48.1.038

The theory of discrete optimization plays a crucial role in solving graph theory problems, such as the Steiner tree problem. It is widely applied in transportation infrastructure, logistics, and communication network design. Since the problem is NP-hard, heuristic methods such as genetic algorithms and artificial neural networks are often required. To solve the Steiner tree problem, a graph neural network (GNN) was selected. The GNN architecture involves iterative feature updates using information from neighboring nodes, allowing it to model complex dependencies in graphs. A message-passing neural network (MPNN) mechanism is employed for information aggregation, updating node states based on data from adjacent nodes and edges. The model is trained on graphs generated using the Mehlhorn heuristic algorithm. Experiments show that GNN performs well on graphs similar to the training data but experiences a significant drop in precision and recall metrics as the input graph size increases. This decline is likely due to the limitations of the MPNN mechanism, which aggregates information only from neighboring nodes within a limited range. Graph neural networks demonstrate strong potential for small- and medium-scale graph problems, particularly in analyzing complex systems such as wireless networks, where node interconnections are critical. However, as graph size increases, performance deteriorates, highlighting the need for improvements in aggregation and optimization algorithms.
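As a hedged sketch of the message-passing scheme the abstract describes (plain NumPy, not the authors' model), the toy layer below aggregates messages only from immediate neighbors — exactly the locality limitation the authors identify as the likely cause of the performance drop on larger graphs.

```python
import numpy as np

def mpnn_layer(h, edges, W_msg, W_upd):
    """One message-passing step: every node sums messages from its
    in-neighbors, then updates its own state. A single layer only
    sees one hop of the graph."""
    messages = np.zeros_like(h)
    for src, dst in edges:               # aggregate over incoming edges
        messages[dst] += h[src] @ W_msg
    return np.tanh(h @ W_upd + messages)

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))          # 4 nodes, 8 features each
edges = [(0, 1), (1, 2), (2, 3)]         # a path graph
W_msg = 0.1 * rng.standard_normal((8, 8))
W_upd = 0.1 * rng.standard_normal((8, 8))
h = mpnn_layer(h, edges, W_msg, W_upd)
print(h.shape)  # (4, 8): information has moved exactly one hop
```

On this path graph, node 3 only learns about node 0 after three stacked layers, which is why fixed-depth MPNNs struggle as graphs grow.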

Keywords: Steiner tree problem, graph neural networks, graph theory, artificial neural networks, Mehlhorn algorithm

Influence of geometric parameters of ventricular assist device pumps on hemolytic performance

2025. Vol. 13. No. 1. id 1827
Krotov K.V.  Khaustov A.I. 

DOI: 10.26102/2310-6018/2025.48.1.028

This paper presents an analysis of the impact of ventricular assist device (VAD) pump geometry on hemolytic performance. The relevance of the study is driven by the necessity to improve existing pumps, design new pumps, and address the lack of research on the correlation between pump geometry and hemolysis. The prototype is an axial four-blade ventricular assist pump currently used in clinical practice. To conduct the analysis, hydrodynamic modeling of fluid flow in the pump was performed using the finite volume method in OpenFOAM 11. The numerical simulations were carried out using MRF and NonConformalCoupling technologies along with the LowRe k-ω SST turbulence model. It has been found that reducing the outer diameter, increasing the hub skew angle, and increasing the hub diameter lead to a lower total hemolysis index at a flow rate of 2.4 L/min; similarly, increasing the hub skew angle and reducing the outer diameter decrease the total hemolysis index at a flow rate of 5.4 L/min. The findings of the study provide practical value for the design and modernization of axial pumps in ventricular assist devices.
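The abstract does not state which blood-damage model underlies the hemolysis index; a widely used option in CFD studies of blood pumps is the Giersiepen power-law correlation, sketched below under the assumption of a scalar shear stress and exposure time. Whether this paper uses this exact correlation is an assumption.

```python
def hemolysis_index(tau: float, t: float,
                    C: float = 3.62e-5, alpha: float = 2.416, beta: float = 0.785) -> float:
    """Power-law blood-damage correlation (Giersiepen et al.):
    dHb/Hb = C * tau**alpha * t**beta, with scalar shear stress
    tau [Pa] and exposure time t [s]."""
    return C * tau ** alpha * t ** beta

# Higher shear stress or longer exposure -> more hemolysis
print(hemolysis_index(tau=100.0, t=0.1))
print(hemolysis_index(tau=200.0, t=0.1))
```

In a CFD post-processing step, this per-pathline damage is typically integrated along particle trajectories and summed into a total index, which is the kind of quantity the geometry variations above are compared against.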

Keywords: pump, computational fluid dynamics, ventricular assist device, hemolysis index, hemolysis performance, OpenFOAM, finite volume method

Experimental stand for electrical impedance tomography of newborn lungs

2025. Vol. 13. No. 1. id 1824
Konko M.A.  Temnyakov N.S.  Aleksanyan G.K.  Gorbatenko N.I. 

DOI: 10.26102/2310-6018/2025.48.1.036

The article presents the results of developing an experimental stand for electrical impedance tomography of the lungs of newborns, together with a sealed simplified physical model of the neonatal mediastinum and the neonatal electrode system included in it. The stand consists of seven main devices that allow simulating conditions close to clinical ones. The sealed simplified physical model of the mediastinum (phantom) is designed to accommodate a newborn doll and is manufactured using 3D printing technologies. It is equipped with a system for controlled air filling of the lung areas, as well as drainage for removing excess conductive medium from the phantom. Three rows of electrodes are provided, making it possible to conduct experiments simulating global and regional ventilation with different placements of the electrode system (row). A neonatal electrode system with metal electrodes on a flexible fabric base was developed and manufactured for use as part of the stand. The elastic base allows the system to adapt to the electrode placement on the phantom. Experimental studies performed with a ventilator at respiratory volumes from 2 to 60 ml confirmed the operability of the stand as a whole, namely the sensitivity of the phantom and the neonatal electrode system to changes in air volumes, as well as to neonatal ventilation modes on the ventilator. The developed solutions allow research and testing of new algorithms and methods in the field of electrical impedance tomography of neonatal lungs, as well as use for diagnosing disorders of external respiration in neonatology.

Keywords: experimental stand, mediastinum, model, electrical impedance tomography, newborns, lungs

Classification and model of objects in extended reality metaspace

2025. Vol. 13. No. 1. id 1823
Dorokhin V.A.  Podgorny S.A.  Tokareva N.A. 

DOI: 10.26102/2310-6018/2025.48.1.032

The article discusses the current state of augmented reality technologies and prospects for their global application. Creating global distributed augmented reality networks requires common approaches and standards, and that task in turn requires methodological support. The relevance of the research is due to the active development of XR technologies and the necessity of creating a simple and universal platform for immersive content exchange. The authors propose methods for classifying metaspaces and metaspace objects. Based on the classification and the requirements for metaspaces, a storage model and a basic technology for creating a markup language for metaspace objects are proposed. The proposed scheme is functionally independent and will allow the markup language to be extended with new components. A model for storing metaspace objects is proposed; its main task is to ensure potential extensibility. The obtained results can be used as a basis for developing a markup language for metaspace objects and browser-interpreters for various wearable devices. The similarity of markup and approaches to displaying content will allow the development process to reuse part of the classic Internet infrastructure, such as the server side. The proposed classification will also allow wearable devices to be divided into functional categories, thereby determining their capabilities in terms of interpreting metaspaces.

Keywords: extended reality, metaspace, augmented reality, geo-oriented AR internet, immersive technologies, XR, metaspace object model, ERML

Method of evaluation of autonomous software based on artificial intelligence technologies for mass preventive studies

2025. Vol. 13. No. 1. id 1822
Zinchenko V.V.  Erizhokov R.A.  Arzamasov K.M. 

DOI: 10.26102/2310-6018/2025.48.1.027

The introduction of artificial intelligence (AI) technologies into medical practice requires a thorough assessment of their effectiveness, especially for systems operating autonomously. The method proposed in this study is based on a synthesis of the requirements of national standards in the field of medical AI developed by experts of the Scientific and Practical Clinical Center for Diagnostics and Telemedicine Technologies and data obtained as part of the "Moscow Experiment" on the introduction of innovative technologies. The testing was carried out on three AI software products used to analyze fluorographic studies in the period from January to May 2023. The evaluation included an analysis of the accuracy of the algorithms (sensitivity, specificity), effectiveness in real clinical conditions, as well as a comparative analysis of the results with a quantitative interpretation of the data. The emphasis in the evaluation was on ensuring a high level of diagnostic sensitivity for the AI system, which relieves doctors of routine, monotonous work in mass preventive studies. The developed method demonstrated the possibility of a comprehensive assessment of autonomous AI systems, identifying differences in the effectiveness of products by key metrics. The proposed method allows systematizing the process of validating medical AI solutions, minimizing the risks of their incorrect use in autonomous operation. The results of the study can be used to standardize the assessment of AI tools in radiology and other areas of medicine that require a high level of diagnostic reliability.

Keywords: artificial intelligence in medicine, autonomous diagnostic systems, efficiency assessment, radiation diagnostics, radiology

An algorithm for detecting markers of the aging process of the human body by A/B analysis methods during L-arginine geroprophylaxis

2025. Vol. 13. No. 1. id 1820
Limanovskaya O.V.  Gavrilov I.V.  Meshchaninov V.N. 

DOI: 10.26102/2310-6018/2025.48.1.034

The study analyzes the clinical and diagnostic parameters of the body in a group of 32 patients aged 29 to 89 years (14 men and 18 women) who underwent geroprophylactic treatment with L-arginine. Before and after exposure, each patient's biological age was determined from functional data using age- and sex-dependent models; the difference between calendar and biological age was then calculated, and the change in this difference before and after exposure (the exposure delta) was estimated. The sample of patients was divided into two subgroups by the magnitude of the exposure delta: the first comprised patients with a rejuvenation effect, and the second comprised patients with accelerated aging or without significant changes in the exposure delta. The A/B analysis was performed on the clinical and diagnostic parameters of the patients of the first and second subgroups measured before exposure. A combined technique was used, employing both statistical parameters and bootstrap methods. The choice of the A/B analysis method was determined by the distribution properties of the clinical parameter on which the subgroups were compared. The results of the analysis showed a reliable, statistically significant difference between the subgroups in diastolic blood pressure (ADD) and platelet distribution width (RDW). Statistically significant differences between the patient subgroups are also observed in a number of other indicators (total protein, low-density lipoproteins (LDL), albumin, alanine aminotransferase (ALT), mean platelet volume (MPV), the Wexler TV test, the atherogenicity coefficient (KA), and cholesterol), but owing to the small sizes of the compared subgroups these may be false positives.
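The combined statistics-plus-bootstrap comparison can be illustrated with a percentile bootstrap confidence interval for the difference in subgroup means. The data below are invented for illustration, and the paper's exact procedure may differ.

```python
import random
import statistics

def bootstrap_diff_ci(a, b, n_boot=10_000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the difference in
    subgroup means. If the interval excludes 0, the subgroups differ
    at the chosen significance level."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        resample_a = [rng.choice(a) for _ in a]  # resample with replacement
        resample_b = [rng.choice(b) for _ in b]
        diffs.append(statistics.mean(resample_a) - statistics.mean(resample_b))
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

# Invented diastolic-pressure values for the two subgroups
rejuvenated = [72, 75, 70, 74, 71, 73, 76, 69]
accelerated = [82, 85, 80, 84, 88, 81, 83, 86]
low, high = bootstrap_diff_ci(rejuvenated, accelerated)
print(low, high)  # interval excluding 0 -> statistically significant difference
```

The bootstrap makes no normality assumption, which is why it suits the small subgroups and heterogeneous parameter distributions described above.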

Keywords: AB analysis, bootstrap, confidence intervals, geroprophylactic effect, predicting the effectiveness of treatment, biological age

Mathematical model for constructing an industry-specific career guidance decision support system

2025. T.13. № 1. id 1816
Stupina A.A.  Osipov V.S.  Bobyleva O.V.  Yakovlev D.A. 

DOI: 10.26102/2310-6018/2025.48.1.033

The article addresses the challenges of developing an industry-specific decision support system for education and career guidance in engineering professions under conditions of limited data availability. The system aims to facilitate informed career choices by assessing students’ aptitudes for engineering and technical fields. To formalize these aptitudes, the authors propose a set of key factors and evaluation metrics that enable data-driven conclusions using information extracted from digital educational environments. These factors are designed to leverage immersive technologies and digital educational tools for data acquisition. The study introduces a generalized mathematical model that quantifies the manifestation of multiple parameters and aligns them with potential professional trajectories. The model incorporates weighted indices and significance assessments for predictive analytics, along with methods to integrate diverse evaluation approaches into the decision support framework. Parameters include psychological diagnostics and academic performance metrics. Additionally, the paper demonstrates the application of the generalized model to the mining industry, validated through empirical testing involving a control group of industry professionals. The results highlight the model’s adaptability to sector-specific requirements and its capacity to enhance objectivity in career aptitude assessment. This research contributes to the development of scalable, data-informed tools for engineering career guidance, emphasizing the integration of emerging technologies into educational ecosystems.
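As one illustration of how a generalized model can combine weighted factor scores into a career-aptitude estimate, here is a minimal sketch (the factor names and weights are hypothetical, not taken from the article's model):

```python
def aptitude_score(factors, weights):
    """Weighted average of normalized factor scores in [0, 1] (all names hypothetical)."""
    assert set(factors) == set(weights), "every factor needs a weight"
    total_w = sum(weights.values())
    return sum(factors[k] * weights[k] for k in factors) / total_w

# illustrative student profile and significance weights
student = {"math": 0.8, "spatial": 0.6, "motivation": 0.9}
weights = {"math": 0.5, "spatial": 0.3, "motivation": 0.2}
print(round(aptitude_score(student, weights), 3))  # prints 0.76
```

In the article's framework, such weights would come from significance assessments, and the factor values from psychological diagnostics and academic performance data collected in the digital educational environment.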

Keywords: decision support systems, forecasting, mathematical modeling, data model, digital environment, career guidance

Complex of programs for determining the highest priorities of enrollees in the competitive lists

2025. T.13. № 1. id 1814
Baryshnikova N.Y.  Vasin A.V.  Galin A.V.  Ratmanov A.S. 

DOI: 10.26102/2310-6018/2025.48.1.025

The article discusses the automation of the admission campaign of a higher education institution, in particular issues related to the introduction of enrollment priorities. An enrollee submits an application for admission to higher education programs, in which they indicate individual competitive groups and an enrollment priority for each of them. Based on the information provided, the institution determines the highest priorities for the enrollee's further enrollment. The complex of programs presented in this article is a practical tool for automatically determining these highest priorities. The complex developed by the authors consists of two subprograms, each implementing its own algorithm. The first is based on the «brute force» (exhaustive search) method, which is simple to implement and yields readable code. The second implements the Gale-Shapley algorithm, which finds stable matchings between two groups of participants. The article presents the main stages of the complex's operation in detail. Finally, the authors analyze the results of the implemented algorithms and conclude that they are effective. The resulting complex of programs is proposed for use by admission commissions of higher education institutions during a new recruitment to automate the determination of enrollees' highest priorities in the competitive lists.
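The Gale-Shapley algorithm mentioned above can be sketched as a one-to-one stable matching between applicants and competitive groups (the data are illustrative and ignore group capacities, which the authors' implementation would have to handle):

```python
from collections import deque

def gale_shapley(proposer_prefs, acceptor_prefs):
    """Stable matching: proposers propose in preference order; acceptors
    keep their best proposal so far. Returns acceptor -> proposer."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = deque(proposer_prefs)          # proposers without a match
    next_idx = {p: 0 for p in proposer_prefs}
    match = {}
    while free:
        p = free.popleft()
        a = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])         # displaced proposer tries again
            match[a] = p
        else:
            free.append(p)
    return match

students = {"s1": ["g1", "g2"], "s2": ["g1", "g2"]}
groups = {"g1": ["s2", "s1"], "g2": ["s1", "s2"]}
print(gale_shapley(students, groups))  # {'g1': 's2', 'g2': 's1'}
```

The resulting matching is stable: no student-group pair prefers each other to their assigned partners.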

Keywords: complex of programs, «brute force» method, Gale-Shapley algorithm, admission campaign, selection committee, enrollee, priorities, enrollment, stable matchings

Approximations of hospital statistics of recoveries from COVID-19

2025. T.13. № 1. id 1812
Borovsky A.V.  Galkin A.L.  Doroshenko S.S. 

DOI: 10.26102/2310-6018/2025.48.1.024

Hospital statistics on COVID-19 recoveries in Irkutsk are presented in the form of the rate of recovery over a certain number of days for the full group of patients. The recovery time varies from 1 to 182 days, and the number of cases considered reaches ~100000. For convenience of use, it is proposed to approximate the tabulated recovery rate by various types of nonlinear functions. The following approximating functions have been studied: the Gaussian, Lorentz, and modified Lorentz functions, the Weibull function, and the Johnson functions. For comparison with the statistics, methods minimizing the standard deviation of the approximating functions from the experimental data were used: the least squares method for functions with two and three parameters, and the coordinate descent and gradient descent methods for functions with four fitting parameters. It is shown that the best fit is provided by a modified Lorentz function with four parameters. By degree of discrepancy with the experimental statistics, the approximating functions are arranged in the following order: the Weibull function provides the least accurate fit (16.15%), followed by the Johnson SU function (10.65%) and, slightly better, the Johnson SB function (8.49%), then the Gaussian function (5.8%) and the Lorentz function (3.2828%); the best fit is given by the modified Lorentz function (3.2804%) under certain approximations.
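The coordinate-descent fitting procedure can be illustrated with a minimal sketch on synthetic data generated from a known Lorentz curve (a toy stand-in for the hospital statistics, not the authors' code; the step schedule is an assumption):

```python
def lorentz(t, a, t0, g):
    """Lorentz-type curve: amplitude a, peak position t0, width g."""
    return a / (1.0 + ((t - t0) / g) ** 2)

def sse(params, data):
    """Sum of squared deviations of the model from the data."""
    a, t0, g = params
    return sum((lorentz(t, a, t0, g) - y) ** 2 for t, y in data)

def coordinate_descent(params, data, step=1.0, rounds=60):
    """Greedy coordinate descent with step halving (a simple pattern search)."""
    params = list(params)
    best = sse(params, data)
    for _ in range(rounds):
        for i in range(len(params)):
            improved = True
            while improved:               # line search along one coordinate
                improved = False
                for delta in (step, -step):
                    trial = params[:]
                    trial[i] += delta
                    err = sse(trial, data)
                    if err < best:
                        params, best, improved = trial, err, True
        step /= 2.0                       # refine the search grid
    return params, best

# synthetic "recovery rate" curve with known parameters
true_params = (10.0, 14.0, 5.0)
data = [(t, lorentz(t, *true_params)) for t in range(1, 60)]
fit, err = coordinate_descent([8.0, 10.0, 3.0], data)
print(err < 1e-2)
```

On real tabulated data the residual would of course not vanish; the percentages quoted in the abstract play the role of `err` normalized against the statistics.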

Keywords: epidemic theory, optimization methods, coordinate descent, gradient descent, least squares method, Gauss approximation, Lorentz approximation, Weibull approximation, Johnson approximation, modified Lorentz distribution

Detection of depression features with user data from social network using neural network

2025. T.13. № 1. id 1810
Solokhov T.D.  Kochkarov A.A. 

DOI: 10.26102/2310-6018/2025.48.1.020

The article studies the problem of identifying signs of depression from user data in social networks using machine learning methods and network analysis. The study includes the development of a model for detecting users with signs of depression based on text analysis of their posts and profile metadata. Neural networks were used as the classification algorithms and showed high accuracy. Network analysis was applied to examine the influence of users with signs of depression; it shows that such users have low centrality and do not form dense clusters, indicating their social isolation. The hypothesis that depression spreads through social connections was not confirmed, suggesting minimal impact of depressive users on others. The research results can be used to develop systems for early detection of depression. Special attention is given to the study's limitations, including the use of data from a single social network and the complexity of processing textual data. The article proposes directions for further research aimed at expanding methods for analyzing the spread of depressive behavior in social networks.
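The centrality observation above can be illustrated with a minimal degree-centrality computation on a toy friendship graph (the graph and user labels are hypothetical; the study's network analysis is richer than this):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected graph: degree / (n - 1)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {u: len(nb) / (n - 1) for u, nb in adj.items()}

# hypothetical graph: flagged users d1, d2 sit on the periphery
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "d1"), ("c", "d2")]
c = degree_centrality(edges)
print(c["a"] > c["d1"] and c["a"] > c["d2"])
```

Low centrality of the flagged nodes relative to the core is exactly the kind of signal the abstract interprets as social isolation.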

Keywords: forecasting, depression, psychological disorder, classification, social network, machine learning, neural network, network analysis

Real-time monitoring of communication networks based on cloud computing

2025. T.13. № 1. id 1809
Amoa K.  Sidorenko E.V.  Ryndin N.A. 

DOI: 10.26102/2310-6018/2025.48.1.014

When creating a communication network, various obstacles inevitably arise that negatively affect its effectiveness. The lack of measures to eliminate such interference makes it difficult to optimize the network. Among the problems caused by interference, congestion (blocking) is one of the most significant; left unresolved, it may make successful network design impossible. To address the long response time and unsatisfactory detection quality of traditional methods for monitoring communication network congestion, a real-time monitoring method based on cloud computing is proposed. First, a communication network monitoring point is established, and the receiver carries out the collection of communication data. Based on the collected data, continuous traffic calculation is performed to determine whether a channel of the communication network is in an emergency blocking state and to locate the exact position of the blocking point. From this information, an alarm message is generated to deliver the monitoring results. The real-time running time and the accuracy of the monitoring method are analyzed experimentally. It is found that the method keeps the delay time within 0.2 s and has a low monitoring error rate.
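The continuous traffic calculation for detecting a blocking state can be sketched, for illustration only, as a sliding-window utilization check (the window size and threshold are assumed values, not the article's):

```python
from collections import deque

def congestion_monitor(samples, window=5, threshold=0.8):
    """Return the time steps at which mean channel utilization over a
    sliding window exceeds the threshold (i.e., a suspected blocking state)."""
    buf = deque(maxlen=window)
    alarms = []
    for t, u in enumerate(samples):
        buf.append(u)
        if len(buf) == window and sum(buf) / window > threshold:
            alarms.append(t)
    return alarms

# hypothetical per-second utilization readings from one monitoring point
util = [0.2, 0.3, 0.4, 0.9, 0.95, 0.97, 0.99, 0.5, 0.3, 0.2]
print(congestion_monitor(util))  # [6, 7]
```

In the cloud-based setting, each monitoring point would stream such samples to the computing layer, which runs the check and raises the alarm message.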

Keywords: cloud computing, telecommunications, network congestion, real-time monitoring, monitoring point, system management, blocking

Neural network to optimize the adaptive exponential min sum decoding algorithm

2025. T.13. № 1. id 1807
Zhang W.  Mouhamad I.  Saklakov V.M.  Jayakody D.K. 

DOI: 10.26102/2310-6018/2025.48.1.026

Currently, deep learning, as an active research direction, has yielded fruitful results in natural language processing and in image recognition and generation, as exemplified by ChatGPT and Sora. Combining deep learning with decoding algorithms for channel coding has also gradually become a research hotspot in the field of communication. In this paper, we use deep learning to improve the adaptive exponential min sum (AEMS) decoding algorithm for LDPC codes. First, we unfold the iterative message passing between check nodes (CNs) and variable nodes (VNs) of the AEMS decoding algorithm into a feedforward network based on the Tanner graph derived from the H matrix of the LDPC code. Second, to improve training efficiency and reduce computational complexity, we assign the same weight factor to all edge messages within each iteration of the AEMS decoding network, which reduces the computational complexity while preserving the decoding performance; we call the result the shared neural AEMS (SNAEMS) decoding network. The simulation results show that the proposed SNAEMS decoding network outperforms the conventional AEMS decoder, and its coding gain grows as the code length increases.
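The role of a shared weight factor can be seen in a generic weighted min-sum check node update (this is the plain weighted min-sum step, not the AEMS variant itself; the weight value is illustrative and would be learned in the network):

```python
def check_node_update(msgs, weight=0.8):
    """One weighted min-sum check node update: each outgoing message is the
    product of the signs of the other incoming LLRs times the scaled
    minimum of their magnitudes."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            sign = -sign if m < 0 else sign
        out.append(weight * sign * min(abs(m) for m in others))
    return out

llrs = [2.5, -1.0, 3.0]             # incoming variable-to-check messages
updated = check_node_update(llrs)
print([round(m, 3) for m in updated])  # [-0.8, 2.0, -0.8]
```

Sharing one `weight` per iteration across all edges, as the SNAEMS network does, keeps the number of trainable parameters proportional to the iteration count rather than to the number of Tanner graph edges.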

Keywords: LDPC, deep learning, neural network, exponential algorithm, min sum

Sensor data integration system in onboard control systems of unmanned aerial systems

2025. T.13. № 1. id 1806
Guliutin N.N.  Ermienko N.A.  Antamoshkin O.A. 

DOI: 10.26102/2310-6018/2025.48.1.019

Modern unmanned aerial systems (UAS) play a key role in various industries, including environmental monitoring, geodesy, agriculture, and forestry. One of the most critical factors for their successful application is the integration of data from various sensors, such as global navigation satellite systems, inertial navigation systems, lidars, cameras, and thermal imagers. Sensor data fusion significantly enhances the accuracy, reliability, and functionality of control systems. This paper explores data integration methods, including traditional algorithms like Kalman filters and their extended versions, as well as modern approaches based on deep learning models, such as FusionNet and Deep Sensor Fusion. Experimental studies have shown that learning-based models outperform traditional algorithms, achieving up to a 40 % improvement in navigation accuracy and enhanced resilience to noise and external disturbances. The proposed approaches demonstrate the potential to expand UAS applications in autonomous navigation, cartography, and monitoring, particularly in challenging operational environments. Future development prospects include the implementation of hyperspectral sensors and the development of adaptive data integration methods to further improve the efficiency and effectiveness of unmanned systems.
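As a minimal illustration of sensor fusion by Kalman filtering, here is a scalar filter smoothing noisy position measurements (the process and measurement noise values are assumed; real UAS fusion uses multivariate state and several sensor streams):

```python
def kalman_1d(zs, q=0.01, r=1.0):
    """Scalar Kalman filter: fuse noisy measurements of a static quantity.
    q is process noise variance, r is measurement noise variance."""
    x, p = 0.0, 1.0        # state estimate and its variance
    est = []
    for z in zs:
        p += q             # predict: model uncertainty grows
        k = p / (p + r)    # Kalman gain: trust in the new measurement
        x += k * (z - x)   # update with the measurement residual
        p *= (1 - k)
        est.append(x)
    return est

# hypothetical noisy position readings around a true value of 1.0
measurements = [1.2, 0.9, 1.1, 1.05, 0.95, 1.0]
est = kalman_1d(measurements)
print(abs(est[-1] - 1.0) < 0.15)
```

Extended Kalman filters, mentioned in the abstract, generalize this update to nonlinear motion and observation models; the learned FusionNet-style models replace the hand-specified models entirely.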

Keywords: sensor data integration, unmanned aerial systems, Kalman filter, FusionNet, Deep Sensor Fusion, autonomous navigation, resilience to disturbances

On achievability of consensus in multi-agent control systems with a leader

2025. T.13. № 1. id 1805
Yang S. 

DOI: 10.26102/2310-6018/2025.48.1.023

The paper proposes a distributed control algorithm for multi-agent systems with a leader. The main objective is to ensure the asymptotic convergence of the states of all follower agents to the state of the leader, under the condition that each agent uses only local information obtained from neighboring nodes. The dynamics of the agents are modeled by a second-order system, a double integrator, which takes into account both the position and the velocity of the agents. This description reflects the properties of real systems more accurately than the commonly used simplified first-order models. Graph theory is employed to formalize the topology of communication links between agents. The developed algorithm is based on the idea of pinning control and uses local information about the states of neighboring agents and the leader. The Lyapunov method and eigenvalue analysis were used to study the stability of the system and to obtain analytical conditions on the gain factors that guarantee the achievement of consensus. To illustrate the effectiveness of the proposed algorithm, numerical simulations were conducted in MATLAB. The leader's trajectory is chosen based on the optimal trajectory obtained in the authors' previous studies. The results confirm that the states of the follower agents asymptotically converge to the state of the leader over time. The proposed algorithm can be applied to problems of group control of mobile robots, unmanned vehicles, and other distributed technical systems.
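The pinning control idea can be illustrated with a minimal simulation of one double-integrator follower tracking a static leader (the gains kp, kv are illustrative, not the analytically derived conditions from the paper):

```python
def simulate_consensus(steps=4000, dt=0.01, kp=2.0, kv=3.0):
    """One follower with double-integrator dynamics tracking a static leader
    via a pinning control law on position and velocity errors."""
    xl, vl = 1.0, 0.0              # leader position and velocity
    x, v = 0.0, 0.0                # follower position and velocity
    for _ in range(steps):
        u = kp * (xl - x) + kv * (vl - v)   # control input (acceleration)
        v += u * dt                          # integrate velocity
        x += v * dt                          # integrate position
    return x, v

x, v = simulate_consensus()
print(abs(x - 1.0) < 1e-3 and abs(v) < 1e-3)
```

With these gains the tracking error obeys a damped second-order equation with stable poles, so position and velocity converge to the leader's; in the multi-agent case the same error analysis runs through the eigenvalues of the communication graph's Laplacian.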

Keywords: multi-agent systems, distributed control, consensus, leader-follower structure, graph theory, pinning control, group control

Development of API rate limiting methods based on consumer classes

2025. T.13. № 1. id 1803
Seleznev R.M. 

DOI: 10.26102/2310-6018/2025.48.1.013

Rate limiting is a crucial aspect of managing the availability and reliability of APIs. Today, there are several approaches to implementing rate limiting mechanisms, each based on specific algorithms or their combinations. However, existing methods often treat all consumers as a homogeneous group, hindering the creation of flexible resource management strategies in modern distributed architectures. In this article, the author proposes two new methods for rate limiting based on the token bucket algorithm. The first method involves using a shared token bucket with different minimum fill requirements depending on the consumer class. The second method suggests using separate token buckets for each consumer class with individual parameter values but a common limit. Simulation results confirmed that both methods enable efficient API request limitation, though disparities emerged regarding resource distribution patterns across diverse consumer classes. These findings have practical implications for developers of information systems and services who need to maintain high availability while ensuring access guarantees for various consumer categories.
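The first method, a shared token bucket with class-specific minimum fill requirements, can be sketched as follows (class names, reserve levels, and capacities are illustrative, not the article's parameters):

```python
import time

class ClassAwareTokenBucket:
    """Shared token bucket where a request from a given consumer class is
    allowed only while the bucket holds more tokens than that class's
    reserve threshold (a sketch of the first proposed method)."""

    def __init__(self, capacity, refill_rate, reserves):
        self.capacity = capacity
        self.refill_rate = refill_rate   # tokens per second
        self.reserves = reserves         # class -> minimum fill to draw from
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, consumer_class):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens - 1 >= self.reserves[consumer_class]:
            self.tokens -= 1
            return True
        return False

# refill disabled so the demo is deterministic
bucket = ClassAwareTokenBucket(capacity=10, refill_rate=0,
                               reserves={"premium": 0, "basic": 5})
basic_ok = sum(bucket.allow("basic") for _ in range(10))      # stops at the reserve
premium_ok = sum(bucket.allow("premium") for _ in range(10))  # can drain the rest
print(basic_ok, premium_ok)  # 5 5
```

The reserve guarantees that lower classes cannot starve higher ones: the last tokens below each threshold stay available only to classes with a smaller reserve.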

Keywords: rate limiting, token bucket algorithm, software interface, consumer class, quota, threshold, burst traffic

A method for generating closed-type questions using LLMs

2025. T.13. № 1. id 1799
Dagaev A.E. 

DOI: 10.26102/2310-6018/2025.48.1.021

This study presents a method for closed-ended question generation leveraging large language models (LLM) to improve the quality and relevance of generated questions. The proposed framework combines the stages of generation, verification, and refinement, which allows for the improvement of low-quality questions through feedback rather than simply discarding them. The method was tested on three widely recognized datasets: SQuAD, Natural Questions, and RACE. Key evaluation metrics, including ROUGE, BLEU, and METEOR, consistently showed performance gains across all tested models. Four LLM configurations were used: O1, O1-mini, GPT-4o, and GPT-4o-mini, with O1 achieving the highest results across all datasets and metrics. Expert evaluation revealed an accuracy improvement of up to 14.4% compared to generation without verification and refinement. The results highlight the method's effectiveness in ensuring greater clarity, factual correctness, and contextual relevance in generated questions. The combination of automated verification and refinement further enhances outcomes, showcasing the potential of LLMs to refine text generation tasks. These findings will benefit researchers in natural language processing, educational technology, and professionals working on adaptive learning systems and corporate training software.
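The generation-verification-refinement loop can be sketched as pure control flow (the three callables below are toy stand-ins for LLM calls; the article's actual prompts, models, and quality checks are not reproduced here):

```python
def generate_refine(context, generate, verify, refine, max_rounds=3):
    """Generate a question, then repeatedly verify it and refine it from
    feedback instead of discarding low-quality outputs."""
    question = generate(context)
    for _ in range(max_rounds):
        ok, feedback = verify(context, question)
        if ok:
            return question
        question = refine(context, question, feedback)
    return question  # best effort after the refinement budget is spent

# toy stand-ins for the three LLM roles
gen = lambda ctx: "What is X"
ver = lambda ctx, q: (q.endswith("?"), "add a question mark")
ref = lambda ctx, q, fb: q + "?"
print(generate_refine("ctx", gen, ver, ref))  # What is X?
```

The key design point reflected here is that verification returns feedback, so refinement is targeted rather than a blind regeneration.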

Keywords: question generation, large language models, artificial intelligence, natural language processing, o1, o1-mini, GPT-4o, GPT-4o-mini

Automated user segmentation using RFM analysis in marketing strategies

2025. T.13. № 1. id 1798
Svyatov R.S. 

DOI: 10.26102/2310-6018/2025.48.1.018

The relevance of the study is determined by the need to enhance the effectiveness of marketing strategies through automated and customizable customer segmentation. This work proposes a universal customer data management system based on RFM segmentation with the ability to configure flexible logic, as well as the capability to integrate with various external systems. Traditional CRM systems and manual RFM segmentation methods are limited in functionality and do not always meet the business needs for flexibility and integration with various data sources. The study identifies the shortcomings of traditional CRM systems and suggests points for improvement in the described system. Additionally, an experiment was conducted comparing the RFM segments generated using the proposed architecture with Yandex's auto-strategies in the Yandex.Direct advertising platform. The application of the system showed significant advantages over auto-strategies, including a 30.71% increase in purchases in the case of a clothing store. The results confirm the practical value of the system for optimizing marketing campaigns and improving conversion. The results are of practical importance for companies in need of customized solutions and integrations. Further development is proposed, focusing on improving the RFM segmentation method by implementing machine learning algorithms and exploring additional effective channels for utilizing the generated segments.
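A minimal RFM flagging step might look as follows (the thresholds and order data are illustrative; the described system adds configurable segmentation logic and external integrations on top of such a core):

```python
from datetime import date

def rfm_segment(orders, today, r_days=30, f_min=3, m_min=100.0):
    """Per-customer (recency, frequency, monetary) flags against fixed
    thresholds; orders are (customer, order_date, amount) tuples."""
    by_customer = {}
    for cust, day, amount in orders:
        by_customer.setdefault(cust, []).append((day, amount))
    segments = {}
    for cust, rows in by_customer.items():
        recency = min((today - d).days for d, _ in rows)   # days since last order
        frequency = len(rows)                              # number of orders
        monetary = sum(a for _, a in rows)                 # total spend
        segments[cust] = (recency <= r_days,
                          frequency >= f_min,
                          monetary >= m_min)
    return segments

orders = [
    ("alice", date(2025, 1, 10), 60.0),
    ("alice", date(2025, 1, 20), 70.0),
    ("alice", date(2025, 2, 1), 10.0),
    ("bob",   date(2024, 10, 5), 500.0),
]
segments = rfm_segment(orders, today=date(2025, 2, 15))
print(segments)
```

Exporting such per-segment customer lists to an advertising platform is the integration step the experiment with Yandex.Direct relies on.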

Keywords: RFM analysis, marketing automation, customer loyalty, user segmentation, e-commerce, advertising strategy optimization

System analysis and modeling of the profitability of the energy service contract based on the digital ruble

2025. T.13. № 1. id 1797
Kaziev V.M.  Kazieva B.V. 

DOI: 10.26102/2310-6018/2025.48.1.015

Enterprises in the housing and communal services sector need energy market viability, competitiveness, and attractiveness for consumers. For Russian companies, it is important to adhere to relatively "soft" (flexible) tariffs and energy supply strategies. Effective solutions must be found, for example, through investment and by reducing uncertainties such as "white noise" in the energy system. The purpose of the study is a systematic analysis of the potential of smart contracts, the digital ruble, and digital payments in energy service contracts. The possibilities of energy contracts and services, the content and features of such contracts, and measures for sustainable energy conservation with a given profitability and optimization of energy resources were studied by methods of system analysis and modeling. To this end, it is necessary to identify the parameters and features of the contract and to simulate energy supply processes. The results of the study are: 1) a systematic analysis of standard forms of contracts and a description of a set of key energy-saving procedures of the enterprise; 2) an analysis of the potential of the digital ruble and its "energy capabilities"; 3) a model of the dynamics of managing an energy service enterprise based on the diffusion of digital services, together with its investigation. The results of the work will expand the possibilities for concluding and developing energy service contracts in practice, as well as for building flexible models and algorithms for energy supply.
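The diffusion-of-digital-services dynamics mentioned in result 3 can be illustrated with a minimal discrete logistic adoption model (all parameter values are hypothetical, not the article's calibration):

```python
def diffuse(n0, market, rate, steps):
    """Discrete logistic diffusion: adoption n grows at `rate` scaled by the
    remaining share of the market (saturation level `market`)."""
    n = n0
    path = [n]
    for _ in range(steps):
        n += rate * n * (1 - n / market)
        path.append(n)
    return path

# hypothetical market of 1000 contracts, 10 initial adopters
path = diffuse(n0=10.0, market=1000.0, rate=0.3, steps=50)
print(path[-1] > 0.95 * 1000.0)
```

The characteristic S-shaped path of such a model is what makes it useful for planning the profitability horizon of an energy service contract as digital payments spread among consumers.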

Keywords: system analysis, smart contract, energy consumption, energy service contract, modeling