DOI: 10.26102/2310-6018/2025.49.2.049
This article presents a procedure for optimizing a project represented as a network graph. The idea of the optimization is to make every path from the initial event to the final one critical by transferring resources from non-critical activities with non-zero free float to critical activities on some critical path. Assuming that the duration of an activity depends linearly on the resources allocated to it, formulas for the new activity durations and the new critical time are derived. Reallocating resources shortens some activities but increases the overall tension of the project. To evaluate a project with the new durations, a tension coefficient is introduced for each activity, defined as the intensity of use of the generalized project resource per unit of time. These per-activity characteristics behave differently during optimization, so a generalized characteristic of project tension is introduced by aggregating the partial activity characteristics using the "fuzzy majority" principle. Well-known weighted averages can be used to aggregate the partial estimates, and the weights can be determined, for example, by the method of paired comparisons. An illustrative example demonstrates how the proposed approach works.
Keywords: network graph, critical path, resource, optimization, tension coefficient, aggregation
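The critical-path computation that the optimization above builds on can be sketched in a few lines of Python. The activity network and durations below are illustrative assumptions, not data from the article:

```python
# Critical-path sketch for an activity-on-arc network (illustrative data).
# Each activity: (start_event, end_event, duration).
activities = {
    "A": (1, 2, 3), "B": (1, 3, 5), "C": (2, 4, 4),
    "D": (3, 4, 1), "E": (4, 5, 6),
}

events = sorted({e for u, v, _ in activities.values() for e in (u, v)})

# Earliest event times (forward pass).
early = {events[0]: 0}
for ev in events[1:]:
    early[ev] = max(early[u] + d for u, v, d in activities.values() if v == ev)

# Latest event times (backward pass); the final event's latest time is
# the critical time of the project.
late = {events[-1]: early[events[-1]]}
for ev in reversed(events[:-1]):
    late[ev] = min(late[v] - d for u, v, d in activities.values() if u == ev)

# Total float of each activity; critical activities have zero float.
total_float = {a: late[v] - early[u] - d for a, (u, v, d) in activities.items()}
critical = [a for a, f in total_float.items() if f == 0]
```

Activities B and D carry positive float here, so they are the candidates for donating resources to the critical chain A-C-E.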
DOI: 10.26102/2310-6018/2025.50.3.009
This study assesses the quality of Russian-language annotations generated by a multi-agent system for time series analysis. The system includes four specialized agents: a dashboard analyst, a time series analyst, a domain-specific agent, and an agent for user interaction. Annotations are generated by analyzing dashboard and time series data using the GPT-4o-mini model and a task graph implemented with LangGraph. Annotation quality was assessed using the metrics of clarity, readability, contextual relevance, and literacy, as well as a Flesch readability index formula adapted for the Russian language. Testing was conducted with 21 users on 10 dashboards, yielding 210 ratings on a ten-point scale for each metric. The results showed the effectiveness of the annotations: clarity 8.486, readability 8.705, contextual relevance 8.890, literacy 8.724. The readability index was 33.6, indicating text of medium complexity. This indicator reflects the specifics of the research area; it accounts only for static length characteristics, not word order or context. An adult reader without domain expertise can nevertheless understand the complex words in the annotations, as the other ratings confirm. All comments left by users will be taken into account to improve the format and interactivity of the system in further research.
Keywords: time series, annotation generation, LLM, multi-agent system, dashboards
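The adapted Flesch index mentioned above can be sketched as follows. The coefficients 206.835, 1.3, and 60.1 come from one widely used Russian adaptation of the formula; the article does not state which adaptation it uses, so treat them as an assumption:

```python
import re

# Flesch reading-ease sketch for Russian text. Coefficients are from one
# common Russian adaptation (an assumption; the exact formula used in the
# article is not specified here).
VOWELS = "аеёиоуыэюяАЕЁИОУЫЭЮЯ"

def syllables(word: str) -> int:
    # In Russian, the syllable count equals the vowel count.
    return max(1, sum(ch in VOWELS for ch in word))

def flesch_ru(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[А-Яа-яЁё]+", text)
    asl = len(words) / len(sentences)                     # avg words/sentence
    asw = sum(syllables(w) for w in words) / len(words)   # avg syllables/word
    return 206.835 - 1.3 * asl - 60.1 * asw
```

Lower scores mean harder text, so the reported 33.6 sits toward the difficult end of the scale, consistent with domain-specific vocabulary.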
DOI: 10.26102/2310-6018/2025.49.2.038
The article considers the feasibility of currency integration in the BRICS format, as well as the optimality of BRICS as a currency zone. In the course of the study, calculations were made using an optimality formula for a currency zone. This model makes it possible to analyze the ratios of macroeconomic indicators for pairs of countries and to find the average optimality coefficient of the entire association for currency integration. In addition, the research applies additional economic and geopolitical criteria to check the relevance of the primary calculations based on the optimal currency area model. Correlation of labor markets, the ratio of investment attractiveness levels, correlation of business and financial cycles, inflationary convergence, and geopolitical risks all have a direct or indirect impact on the success of integration. The data obtained after calculation and verification against the additional criteria reflect the real degree of readiness of BRICS to create a single currency, as well as the predisposition of individual countries to economic integration. The purpose of the article is not to discredit the BRICS programs, but to provide a scientific approach to analyzing one of the initiatives repeatedly promoted at BRICS summits. Currency integration in the BRICS format is a complex, multifaceted process that requires enormous time and resource expenditures from all member states of the association. This state of affairs runs counter to some calls and statements made by politicians of the BRICS states, which may somewhat distort the public's idea of the subject of this study: currency integration in the BRICS format.
Keywords: currency zone, currency integration, optimality, BRICS, criterion, economy, single currency, potential
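The pairwise screening behind an average optimality coefficient can be sketched as below. The series and the use of plain correlation as the pairwise score are illustrative assumptions; the article's exact formula is not reproduced here:

```python
import numpy as np

# Sketch of pairwise optimality screening in optimal currency area (OCA)
# analysis: correlate a macro indicator (e.g. GDP growth) for every pair
# of countries and average the result over all pairs.

def pairwise_optimality(series_by_country):
    names = list(series_by_country)
    scores = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a = np.asarray(series_by_country[names[i]], float)
            b = np.asarray(series_by_country[names[j]], float)
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores))  # average over all country pairs
```

Perfectly synchronized business cycles would push the coefficient toward 1, while divergent cycles pull it toward -1.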
DOI: 10.26102/2310-6018/2025.50.3.007
With the increasing number of incidents involving the unauthorized use of unmanned aerial vehicles (UAVs), the development of effective methods for their automatic detection has become increasingly relevant. This article provides a concise overview of current approaches to UAV detection, with particular emphasis on acoustic monitoring methods, which offer several advantages over radio-frequency and visual systems. The main acoustic features used for recognizing drone sound signals are examined, along with techniques for extracting these features using open-source libraries such as Librosa and Essentia. To evaluate the effectiveness of various features, a balanced dataset was compiled and utilized, containing audio recordings of drones and background noise. A multi-stage feature selection methodology was tested using the Feature-engine library, including the removal of constant and duplicate features, correlation analysis, and feature importance assessment. As a result, a subset of 53 acoustic features was obtained, providing a balance between UAV detection accuracy and computational cost. The mathematical foundations of spectral feature extraction are described, including different types of spectrograms (mel-, bark-, and gammatone-spectrograms), as well as vector and scalar acoustic features. The results presented can be used to develop automatic UAV acoustic detection systems based on machine learning methods.
Keywords: unmanned aerial vehicle, acoustic signals, acoustic features, spectral analysis, machine learning
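Two of the spectral building blocks discussed above, the mel-scale mapping behind mel-spectrograms and the spectral centroid, can be written in plain NumPy; libraries such as Librosa and Essentia provide optimized equivalents, so this is only an illustrative sketch:

```python
import numpy as np

def hz_to_mel(f):
    # Standard mel-scale mapping used when constructing mel-spectrograms.
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def spectral_centroid(signal, sr):
    # Magnitude-weighted mean frequency of one analysis frame: a scalar
    # acoustic feature commonly used alongside spectrogram-based ones.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))
```

For a pure tone, the centroid lands on the tone's frequency, which makes it a quick sanity check for a feature pipeline.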
DOI: 10.26102/2310-6018/2025.49.2.048
Oil spills pose a serious threat to marine ecosystems, causing long-lasting environmental and economic consequences. To minimize damage, it is critically important to contain the spread of pollution effectively. Among the most common tools for combating oil spills are booms: floating barriers that localize the spill area and increase the efficiency of subsequent cleanup. However, the effectiveness of such barriers depends not only on the materials used but also on their geometric configuration. This makes it important to minimize the length of boom needed to enclose a given spill area. In this paper, the problem is formulated as an isoperimetric optimization problem over the class of polygons. We investigate maximizing the area bounded by a polygon with a fixed perimeter and a fixed segment (for example, a section of shore), under the condition that the boundary is a broken line rather than a smooth curve. It is proved that the optimal shape is achieved when the polygon is regular, that is, when its sides and angles are equal. The results can be used to design more efficient boom placement schemes, reducing material costs and improving environmental safety.
Keywords: isoperimetric problem, shape optimization, booms, oil spill, mathematical modeling, geometric optimization
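The optimality of the regular polygon can be checked numerically with the shoelace formula. The closed-form area of a regular n-gon with perimeter P is P^2 / (4 n tan(pi/n)); the sketch below verifies it against explicit vertices (all data illustrative):

```python
import math

def shoelace_area(pts):
    # Polygon area via the shoelace formula (absolute value taken).
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def regular_polygon(n, perimeter):
    # Vertices of a regular n-gon with the given perimeter.
    r = perimeter / (2.0 * n * math.sin(math.pi / n))  # circumradius
    return [(r * math.cos(2 * math.pi * k / n),
             r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def regular_area(n, perimeter):
    # Closed form: A = P^2 / (4 n tan(pi/n)).
    return perimeter ** 2 / (4.0 * n * math.tan(math.pi / n))
```

For a fixed perimeter the closed form also grows with n, reflecting that the broken-line optimum approaches the smooth (circular) isoperimetric optimum as the number of sides increases.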
DOI: 10.26102/2310-6018/2025.49.2.032
The article considers the problem of designing a system for operational short-term forecasting of wind speed at a specific point on the coast. An automated approach to designing hybrid machine learning models is proposed, combining an ensemble of multilayer neural networks with an interpretable system based on fuzzy logic. The method rests on the automated formation of the neural network ensemble and the fuzzy logic system by self-configuring evolutionary algorithms, which allows adapting to the features of the input data without manual tuning. After the neural network ensemble is constructed, a separate fuzzy logic system is formed, trained on the ensemble's inputs and outputs. This approach reproduces the behavior of the neural network model in an interpretable form. Experimental testing on a meteorological dataset demonstrates the effectiveness of the method, which balances forecast quality against model interpretability. The constructed interpretable system is shown to reproduce the key patterns of the neural network ensemble while remaining compact and easy to analyze. The model can support decision-making in port services and in planning coastal activities that require quick and simple forecasting. More broadly, the proposed approach can produce similar models in other, comparable settings.
Keywords: operational forecasting of wind characteristics, ensembles of neural networks, fuzzy logic systems, decision trees, self-configuring evolutionary algorithms
DOI: 10.26102/2310-6018/2025.50.3.002
Modern digital radio communication systems impose stringent requirements on energy and spectral efficiency under the influence of various types of interference, particularly in challenging radio wave propagation conditions. Consequently, the investigation of existing methods for operating in radio channels with fading, as well as the development of new approaches to address this challenge, remains highly relevant. The primary objective of this study is to investigate diversity reception techniques aimed at enhancing signal robustness against fading. The study examines approaches to combining known diversity methods and proposes a new modified spatial reception method. The methodology employed includes a comparative analysis of various combinations of spatial diversity reception techniques within an adaptive feedback system, based on simulations conducted in the MATLAB environment to evaluate the impact of different fading types on data transmission in a channel with feedback. The novelty of this work lies in the proposed diversity method, which involves signal combining through optimal summation in diversity reception, performed only on a selected subset of receiving antennas. This subset is determined based on channel state estimation results, as summing signals from all receiving antennas is deemed unnecessary and significantly increases complexity when the received signal quality is already high. The results demonstrate that the proposed solution offers advantages over the conventional optimal summation method by reducing computational complexity, as signal summation is limited to a portion of the receiving antennas rather than all of them. The proposed solution is particularly suitable for applications requiring simultaneous optimization of both energy efficiency and spectral efficiency in digital radio systems. Its relevance becomes especially pronounced under degraded reception conditions caused by environmental factors inducing severe fading effects.
Keywords: diversity reception, selection combining, equal gain combining, maximal ratio combining, adaptive system with feedback, error-control coding, fading channel
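The core idea of the proposed subset combining can be sketched via output SNR: with ideal maximal ratio combining, the output SNR is the sum of the combined branch SNRs, so combining only the k strongest branches trades a small SNR loss for lower complexity. Branch SNR values and the top-k selection rule below are illustrative assumptions:

```python
import numpy as np

def mrc_snr(branch_snrs):
    # Ideal maximal ratio combining: output SNR is the sum of all
    # branch SNRs.
    return float(np.sum(branch_snrs))

def subset_mrc_snr(branch_snrs, k):
    # Proposed-style variant: combine only the k strongest branches,
    # selected from the channel state estimates.
    best = np.sort(np.asarray(branch_snrs, dtype=float))[-k:]
    return float(np.sum(best))
```

When the selected branches already dominate the total (as in good reception conditions), the subset combiner approaches full-MRC performance at a fraction of the summation cost.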
DOI: 10.26102/2310-6018/2025.50.3.004
Clinical gait analysis is a key tool for diagnosis and rehabilitation planning in patients with motor disorders; however, accurate and automatic detection of gait events remains a challenging task in resource-limited settings. Force plates are considered the gold standard for automatic gait event detection, but their application is limited in cases of pathological gait patterns and when patients use assistive rehabilitation devices. This paper presents an approach to automatic detection of gait events in children with pathological gait using recurrent neural networks. The presented methodology effectively identifies key gait events (heel strike and toe off). The study used kinematic data from patients with gait disorders, collected using an optical motion capture system under various conditions: barefoot walking, in orthopedic footwear, with orthoses, and other technical rehabilitation aids. Four models were trained to detect gait events (one for each leg and event type). The models demonstrated high sensitivity with small time delays between predicted and actual events. The proposed method can be used in clinical practice to automate data annotation and reduce processing time for gait analysis results.
Keywords: gait events, neural networks, recurrent neural networks, motion capture, biomechanics, cerebral palsy, foot kinematics, machine learning
DOI: 10.26102/2310-6018/2025.49.2.037
This article proposes an algorithm for evaluating project resource allocation that takes into account various fuzzy expert recommendations regarding the start times of tasks within float constraints, aiming to select the optimal set of expert suggestions. To determine the float constraints for task start and finish times, the classical critical path method is used. Expert recommendations on task start times are modeled as fuzzy trapezoidal or triangular numbers defined along the time axis. Based on the fuzzy start and finish times of project tasks, a fuzzy representation of the probability that a task will be performed at a specific moment is constructed. Building alpha-cuts for this fuzzy probability representation allows the identification of intervals, within float constraints, during which a task is likely to be performed at a certain level of fuzzy probability, thus enabling resource planning for those periods. The obtained results allow for: evaluating the expert recommendations that are optimal in terms of resource distribution; minimizing subcontracting needs for task execution; and calculating the associated subcontracting costs. The proposed algorithmic and software solution can serve as an effective decision support tool in the implementation of multi-component projects.
Keywords: network graph of the project, critical path, fuzzy expert recommendations, work completion dates on project, project resource optimization
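The alpha-cut construction used above has a simple closed form for trapezoidal fuzzy numbers; triangular numbers are the special case b == c. A minimal sketch:

```python
def alpha_cut(a, b, c, d, alpha):
    # Alpha-cut of a trapezoidal fuzzy number (a, b, c, d), whose
    # membership rises linearly on [a, b], equals 1 on [b, c], and falls
    # linearly on [c, d]: the interval of values with membership >= alpha.
    if not 0.0 < alpha <= 1.0:
        raise ValueError("alpha must be in (0, 1]")
    return (a + alpha * (b - a), d - alpha * (d - c))
```

In the planning context above, the resulting interval marks the time window, within the float constraints, for which resources should be reserved at the chosen fuzzy probability level alpha.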
DOI: 10.26102/2310-6018/2025.49.2.042
This article presents a concept for analyzing tea raw materials using the YOLO family of models, together with a comparative analysis of two YOLOv8 versions: Nano and Small. The study describes the metrics used to compare the models' performance. An experimental comparison was conducted on real images of tea raw materials. For this purpose, a training dataset was collected containing images of tea samples classified by fermentation type: green tea, red tea, white tea, yellow tea, oolong, shou puerh, and sheng puerh. To enlarge the training set, augmentation methods such as image rotation, sharpening, perspective distortion, and blurring were applied. The experiments show that the choice between the two models depends on the task at hand and the available computational resources. YOLOv8s (Small) outperforms YOLOv8n (Nano) in accuracy but takes more time to produce results. YOLOv8n, in turn, processes data faster and can be used effectively under limited computing power, making it particularly suitable for handling large volumes of data.
Keywords: image analysis, machine learning, computer vision, tea raw material, convolutional neural networks
DOI: 10.26102/2310-6018/2025.50.3.003
The paper addresses the pressing issue of automating the logistics processes of emergency medical services (EMS). The present macro-level management structure of EMS logistics is examined, and its deficiencies and current problems are highlighted. It is considered advisable to begin by automating the central EMS warehouses of a region. The analysis identified quantitative parameters and the required warehouse functionality. A review of existing solutions showed that off-the-shelf products cannot be used effectively, so an original development is proposed and the tasks for initiating work on it are set out. In solving these tasks, an improved regional EMS logistics management structure is proposed that includes an automated specialized warehouse. Its architecture is presented as a hardware-software solution with business processes and functions distributed across levels. A storage organization methodology is proposed that enables a warehouse with the specified parameters, and algorithms for key processes such as automatic loading and unloading are provided. To maximize warehouse utilization, models are presented for determining the dimensional parameters and capacity of the racking group, along with models for determining and minimizing the execution time of basic automatic warehouse procedures. This mathematical apparatus is intended for designing and automating warehouses built according to the proposed methodology. Its application showed that even with a non-optimized motion scheme for the actuating mechanisms (not recommended for an operational solution), the technical requirements for the drive units of the robotic system are easily achievable at minimal cost. Based on these results, the authors decided to proceed to the next stage: creating a prototype.
Keywords: logistics, management structure, automation, methodology for organizing an automated warehouse, software and hardware package, robotic solution, medicine warehouse, emergency medical care
DOI: 10.26102/2310-6018/2025.50.3.008
Reliable operation of small-diameter pipeline systems is an important task in ensuring the process safety of industrial facilities operating under high temperatures and pressures. One of the key factors influencing the occurrence of emergency situations is the thinning of pipe walls caused by erosion, corrosion, and stress corrosion cracking. In conditions of limited space and the impossibility of using standard non-destructive testing tools, there is an increasing need to develop compact automated solutions for internal diagnostics of pipeline geometric parameters. This paper presents the development and experimental study of a robotic diagnostic device designed for internal scanning of pipes with a minimum cross-sectional diameter of 130 mm. The device is a mechatronic system with eight drive wheels driven by gear motors and controlled by a Raspberry Pi 3 microcomputer. The body design is made using additive technologies and includes measurement and power modules located in separate sections. Laboratory tests have been conducted to confirm the operability of the device and its control algorithms. The developed software package ensures autonomous movement of the device, collection and recording of diagnostic data. The results obtained allow forming a detailed geometry of the pipeline, identifying areas with an increased level of ovality and deformations, which is important for assessing the residual resource. The developed solution can be used both in research tasks and as part of industrial non-destructive testing systems for small-diameter pipes.
Keywords: modeling, diagnostic device, mechatronics, control, defects, robotic system, small diameter pipelines
DOI: 10.26102/2310-6018/2025.49.2.039
A brief overview is given of new approaches to characterizing the quality of surfaces with hydrophobic properties. These approaches are based on computation-intensive mathematical procedures, including fractal methods. The relationship between the wetting angle of a hydrophobic surface and surface parameters such as roughness and the fractal dimension of the profile is studied. A model of a superhydrophobic surface is developed and its parameters are described, including the effective hydrophobic wetting angle, the fraction of the solid phase in contact with the liquid, and the parameters of the hierarchical structure. It is established that using nanostructured columns to form a superhydrophobic surface, taking the hierarchical structure into account, makes it possible to increase the wetting angle significantly. The dependence of the wetting angle on the fraction of liquid-solid contact at the interface is determined and explained by the increasing complexity of the surface structure, and the relationship between the solid-phase fraction and the fractal dimension is established. When estimating the wetting angle, its correlation with the fractal dimension proves considerably stronger than its correlation with the roughness parameters Ra and Rz. The correlation coefficients between the wetting angle and the other parameters of the hydrophobic surface were determined using regression analysis. The results can be used in processing measurement information in accordance with modern standards for the geometric characteristics of surfaces, including in developing software for measuring the parameters of hydrophobic surfaces.
Keywords: hydrophobicity, roughness, geometric characteristics of the surface, fractal dimension, surface microprofile, scale
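The dependence of the effective wetting angle on the solid-phase fraction is commonly modeled with the Cassie-Baxter relation, cos(theta*) = f (cos(theta) + 1) - 1, where f is the fraction of solid in contact with the liquid. This sketch illustrates the kind of model the abstract describes; the specific angles are assumptions:

```python
import math

def cassie_baxter_angle(theta_deg, solid_fraction):
    # Effective contact angle on a composite solid/air surface.
    # solid_fraction = 1 recovers the intrinsic angle of the flat material.
    cos_eff = solid_fraction * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_eff))
```

Shrinking the solid fraction (e.g. via hierarchical nanostructured columns) drives the effective angle toward the superhydrophobic regime even for a modest intrinsic angle.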
DOI: 10.26102/2310-6018/2025.49.2.046
Modern methods of facial attribute editing suffer from two systemic issues: unintended modification of secondary attributes and loss of contextual details (such as accessories, background, and hair texture), which lead to artifacts and restrict their application in scenarios requiring photographic accuracy. To address these problems, we propose an improved differential activation module designed for precise editing while preserving contextual information. In contrast to the existing solution (EOGI), the proposed method includes: the use of second- and third-order gradient information for precise localization of editable areas; the application of test-time augmentation (TTA) and principal component analysis (PCA) to center the class activation map (CAM) on objects and suppress noise; and the integration of semantic segmentation data to improve spatial accuracy. Evaluation on the first 1,000 images of the CelebA-HQ dataset (resolution 1024×1024) demonstrates significant superiority over the current EOGI method: a 13.84 % reduction in average FID (from 27.68 to 23.85), a 7.03 % reduction in average LPIPS (from 0.327 to 0.304), and a 10.57 % reduction in average MAE (from 0.0511 to 0.0457). The proposed method outperforms existing approaches in both quantitative and qualitative analyses. The results demonstrate improved preservation of details (e.g., earrings and backgrounds), which makes the method applicable to tasks demanding high photographic realism.
Keywords: deep learning, facial attribute editing, differential activation, class activation maps (CAM), semantic segmentation, generative adversarial network (GAN)
DOI: 10.26102/2310-6018/2025.49.2.040
This article presents methods for user authentication based on handwritten signature features that characterize signature length both as a scalar value and as a function giving the curve length of the signature written up to each moment in time. The main emphasis is on methods for extracting static and dynamic features from a handwritten signature; these features are unique to each person and can be used to decide whether a presented signature is genuine or forged. During the analysis, timing data are collected, including the time spent writing each symbol and the pauses between individual signature elements. The relevance of this study stems from the need to improve the security of user authentication in systems where a handwritten signature serves as an important authentication element. The results can be useful for creating more reliable authentication systems in areas such as banking and legal procedures, where a high degree of confidence in document authenticity is required. The presented approaches not only help raise the level of authorization security but also open avenues for further research in biometric authentication, which in turn may lead to wider adoption of these technologies in both online and offline systems.
Keywords: mathematical expectation, variance, function, handwritten signature, authentication, measure, metric, derivative, machine learning
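Both the scalar length feature and the length-versus-time function described above fall out of one cumulative computation over the sampled pen trajectory. A minimal sketch, with illustrative sample points:

```python
import numpy as np

def cumulative_length(x, y):
    # Cumulative curve length of a signature sampled at points (x_i, y_i):
    # element i is the length of the trace written up to sample i.
    # Paired with timestamps t_i, this gives the dynamic feature L(t);
    # the last element is the scalar total length.
    x, y = np.asarray(x, float), np.asarray(y, float)
    seg = np.hypot(np.diff(x), np.diff(y))          # per-segment lengths
    return np.concatenate([[0.0], np.cumsum(seg)])  # L at each sample
```

Plateaus in L(t) correspond to pauses between signature elements, which is exactly the timing information the abstract highlights.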
DOI: 10.26102/2310-6018/2025.49.2.041
This paper is devoted to the development of a method for training classifiers that takes into account relationships between classes, represented as additional labels. The loss functions used in classification and the approaches to incorporating additional labels into them were analyzed. Based on this analysis, we propose as the foundation of our method a triplet loss with a flexible margin, built on the original triplet loss. The flexible margin adjusts the distances between image embeddings according to the degree of difference between their corresponding classes, which makes it possible to model different levels of similarity between classes: category, group, and subgroup. In addition, we develop a triplet mining strategy that prevents the model's weights from collapsing to zero and getting stuck in a trivial solution. The method is validated on product classification and gastrointestinal disease classification tasks. Applying the method increased classification accuracy by 9 % in the disease recognition task and by 6 % in the product recognition task, and reduced the number of severe classification errors. The image embedding space formed by the triplet loss supports clustering and the recognition of new classes without additional model training.
Keywords: loss function, classification, computer vision, triplets, labels, vector space
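The flexible-margin idea can be sketched as a triplet loss whose margin grows with the class-difference level (category, group, subgroup). The linear margin schedule below is an assumption for illustration, not the article's exact formulation:

```python
import numpy as np

def flexible_triplet_loss(anchor, positive, negative, class_diff,
                          base_margin=0.2):
    # Triplet loss with a margin that scales with the degree of difference
    # between the anchor's and negative's classes (class_diff = 1, 2, 3
    # for subgroup-, group-, category-level mismatch; an assumption).
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    margin = base_margin * class_diff   # larger for more distant classes
    return max(0.0, d_ap - d_an + margin)
```

With this schedule, negatives from distant categories must be pushed further away than negatives from a sibling subgroup before their triplets stop contributing loss.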
DOI: 10.26102/2310-6018/2025.50.3.005
Modern Internet of Vehicles (IoV) applications place high demands on reliability and minimal response time in dynamic traffic conditions. However, high-speed vehicles and complex infrastructure such as intersections can lead to loss of communication and increased delays in data transmission and processing. The paper proposes a framework for cluster interaction of vehicles based on edge computing (CCVEC), implemented on the OpenStack platform. The development focuses on ensuring stable communication and rational allocation of computing resources in intelligent transport systems. Testing covered various traffic scenarios, including high-density traffic areas. The results showed that the proposed solution maintains stable communication between onboard sensors and cloud services. Under optimal conditions, the average latency was about 390 ms and the throughput reached 30 kB/s. The platform demonstrated high performance and efficient memory usage when allocating resources. Thus, the CCVEC framework reduces delays, increases connection reliability, and makes efficient use of local resources, which makes it promising for deployment in systems based on IoV and edge computing.
Keywords: internet of vehicles (IoV), edge computing, intelligent transport systems, communication reliability, data transmission latency, cluster interaction, computing resource allocation, OpenStack, cloud services, on-board sensors
DOI: 10.26102/2310-6018/2025.49.2.043
The article is devoted to the design of an automated information system for monitoring seismological activity in the Far Eastern region of Russia. The Far East is an earthquake-prone area, but owing to the peculiarities of territorial development, the system for monitoring the seismological situation in the region is not sufficiently developed. Researchers are currently working on organizing a system for collecting seismological data. The collected information on seismic events in the region makes their further analysis possible, in order to identify previously unknown patterns and develop methods for predicting earthquakes before they affect the region's infrastructure. The study examines existing methods of measuring and labeling seismic waves, as well as the features of the territory, in order to draw up requirements for the system. As a result, logical and physical schemes of the monitoring system are proposed, based on neural networks that track the arrival of P and S waves in near-real-time mode. The system under development includes modules for acquiring and accumulating primary data, as well as a neural network module. The structure of the information system is planned to be as flexible as possible, for convenient configuration of the network architecture and its training.
Keywords: monitoring system, seismic waves, earthquakes, STA/LTA, engineering, neural network, big data
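The STA/LTA trigger named in the keywords is the classic baseline against which neural onset pickers are compared: the ratio of a short-term to a long-term average of signal energy spikes at a wave arrival. Window lengths and the synthetic trace below are illustrative:

```python
import numpy as np

def sta_lta(signal, n_sta, n_lta):
    # Classic STA/LTA detector: ratio of short-term to long-term moving
    # averages of the signal's energy; values well above 1 flag an onset.
    energy = np.asarray(signal, float) ** 2
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    lta[lta == 0] = 1e-12                      # avoid division by zero
    return sta / lta
```

Thresholding this ratio gives a simple P-wave picker; the neural network module described above aims to improve on exactly this kind of detector under noisy conditions.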
DOI: 10.26102/2310-6018/2025.49.2.036
The article presents a study of a human body pose estimation system based on the use of two neural networks. The proposed system allows determining the spatial location of 33 key points corresponding to the main joints of the human body (wrists, elbows, shoulders, feet, etc.), as well as constructing a segmentation mask for accurate delineation of human figure boundaries in an image. The first neural network implements object detection functions and is based on the Single Shot Detector (SSD) architecture with the application of Feature Pyramid Network (FPN) principles. This approach ensures the effective combination of features at different levels of abstraction and enables the processing of input images with a resolution of 224×224 for subsequent determination of people's positions in a frame. A distinctive feature of the implementation is the use of information from previous frames, which helps optimize computational resources. The second neural network is designed for key point detection and segmentation mask construction. It is also based on the principles of multi-scale feature analysis using FPN, ensuring high accuracy in localizing key points and object boundaries. The network operates on images with a resolution of 256×256, which allows achieving the necessary precision in determining spatial coordinates. The proposed architecture is characterized by modularity and scalability, enabling the system to be adapted for various tasks requiring different numbers of control points. The research results have broad practical applications in fields such as computer vision, animation, cartoon production, security systems, and other areas related to the analysis and processing of visual information.
Keywords: neural networks, convolutional neural networks, machine learning, computer vision, human pose estimation, keypoints, image segmentation
DOI: 10.26102/2310-6018/2025.49.2.047
The article examines the possibility of increasing the efficiency of managing the development of the process of medical examination of the population of the region based on predictive modeling of the rate of change in morbidity. As a prerequisite for optimizing management predictive estimates serve decisions in the distribution of resource provision planned within the forecasting horizon by the managing center of the organizational system of regional healthcare. Based on long-term statistical data, the average annual rates of change in morbidity and volumes of medical examinations are calculated as estimates of additional resource provision. The initial data allow us to calculate the rates of change for a group of nosological units as a whole, for each nosological unit and for each territorial entity. The use of visual expert modeling allows us to estimate the degree of synchronization of changes in morbidity with the allocation of additional volumes of medical examination. The expediency of using the analysis of long-term medical and statistical information and predictive estimates for making management decisions within the organizational system of healthcare in the Voronezh Region is substantiated. Based on the analysis of retrospective data, a conclusion was made about the insufficient degree of synchronization of the rate of change in morbidity with the allocated resource without using an optimization approach. In addition, it is proposed to conduct a preliminary classification of the territorial entities of the Voronezh Region by the rate of change into three groups: low, medium, and high, taking into account the large number of territorial entities of the Voronezh Region. The main classes of problems of optimization of management decisions in the distribution of additional volumes of medical examinations within the planning horizon of the development of the organizational system are considered. 
For this purpose, the prognostic estimates are transformed into priority coefficients for the use of the planned resource. To distribute the volumes of medical examinations among nosological units, optimization based on expert choice is employed. An optimization problem of resource distribution among territorial entities is then formulated, and the corresponding management decisions are determined using a multi-alternative optimization algorithm.
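As a toy illustration of the kind of computation involved, the sketch below derives average annual rates of change from yearly morbidity series and normalizes the positive rates into priority coefficients. The function names, the normalization rule, and the data are illustrative assumptions, not the authors' exact procedure.

```python
def avg_annual_rate(values):
    """Geometric-mean annual rate of change over a series of yearly values."""
    n = len(values) - 1
    return (values[-1] / values[0]) ** (1.0 / n) - 1.0

def priority_coefficients(rates):
    """Normalize non-negative rates into resource-priority weights summing to 1.
    Entities with declining morbidity receive no additional resource (toy rule)."""
    clipped = [max(r, 0.0) for r in rates]
    total = sum(clipped)
    return [r / total for r in clipped] if total else clipped

# Illustrative yearly morbidity counts for three territorial entities.
morbidity = {
    "entity A": [120, 126, 133],
    "entity B": [80, 79, 81],
    "entity C": [200, 210, 231],
}
rates = [avg_annual_rate(v) for v in morbidity.values()]
weights = priority_coefficients(rates)
```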
Keywords: organizational system, development management, predictive modeling, visual expert modeling, optimization
DOI: 10.26102/2310-6018/2025.50.3.006
This article considers the task of preliminary assessment of incoming electronic document flow based on computer vision technologies. The authors synthesized a dataset of images with structured data based on the invoice form and also collected scans of various documents received in the electronic mailbox of a scientific organization, ranging from pages of scientific articles and documentation to Rosstat reports. Thus, the first part of the dataset consists of structured data with a strict form, while the second part consists of unstructured scans: information may be presented differently on different scanned documents (text only, text with images, or graphs), since different sources have different requirements and their own standards. The primary analysis of data in streaming sources can be performed using computer vision models. The experiments performed showed high accuracy for convolutional neural networks: in particular, a neural network with the Xception architecture achieved an accuracy of more than 99%, an advantage of about 9% over the simpler MobileNetV2 model. The proposed approach will allow primary filtering of documents by department without using large language models or character recognition models, which will increase processing speed and reduce computational costs.
Keywords: intelligent document processing, computer vision, convolutional neural networks, stream data processing, machine learning
DOI: 10.26102/2310-6018/2025.49.2.023
The amount of AI-based software used in radiology has been rapidly increasing in recent years, and the effectiveness of such AI services should be carefully assessed to ensure the quality of the developed algorithms. Manual assessment of such systems is a labor-intensive process. In this regard, an urgent task is the development of a specialized unified platform for automated testing of AI algorithms used to analyze medical images. The proposed platform consists of three main modules: a testing module that interacts with the software under test and collects data processing results; a viewing module that provides tools for visual evaluation of the obtained graphic series and structured reports; and a metrics calculation module that computes diagnostic characteristics of the effectiveness of artificial intelligence algorithms. Technologies such as Python 3.9, Apache Kafka, PACS, and Docker were used during development. The developed platform has been successfully tested on real data. The results indicate the potential of the platform to improve the quality and reliability of AI services in radiation diagnostics, as well as to facilitate their implementation in clinical practice.
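A minimal sketch of the kind of computation the metrics calculation module performs: diagnostic effectiveness characteristics derived from binary confusion-matrix counts. Function names and the example counts are illustrative assumptions, not the platform's actual API.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Diagnostic effectiveness characteristics from binary confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),            # share of pathologies detected
        "specificity": tn / (tn + fp),            # share of normals correctly ruled out
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts from comparing AI reports against a reference standard.
m = diagnostic_metrics(tp=90, fp=10, tn=85, fn=15)
```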
Keywords: platform, diagnostic imaging, testing, medical images, artificial intelligence
DOI: 10.26102/2310-6018/2025.49.2.028
Creating high-quality distractors for test items is a labor-intensive task that plays a crucial role in the accurate assessment of knowledge. Existing approaches often produce implausible alternatives or fail to reflect typical student errors. This paper proposes an AI-based algorithm for distractor generation. It employs a large language model (LLM) to first construct a correct chain of reasoning for a given question and answer, and then introduces typical misconceptions to generate incorrect but plausible answer choices, aiming to capture common student misunderstandings. The algorithm was evaluated on questions from the Russian-language datasets RuOpenBookQA and RuWorldTree. Evaluation was conducted using both automatic metrics and expert assessment. The results show that the proposed algorithm outperforms baseline methods (such as direct prompting and semantic modification), generating distractors with higher levels of plausibility, relevance, diversity, and similarity to human-authored reference distractors. This work contributes to the field of automated assessment material generation, offering a tool that supports the development of more effective evaluation resources for educators, educational platform developers, and researchers in natural language processing.
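The two-stage idea described above can be sketched schematically, with the LLM call stubbed out. The prompts, the stub, and the function names are illustrative assumptions, not the authors' implementation.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an API client); not implemented here."""
    raise NotImplementedError

def generate_distractors(question, correct_answer, n=3, complete=llm):
    # Stage 1: elicit a correct chain of reasoning for the known answer.
    chain = complete(
        f"Question: {question}\nCorrect answer: {correct_answer}\n"
        "Explain step by step why this answer is correct."
    )
    # Stage 2: inject typical student misconceptions into that chain
    # to obtain plausible but incorrect answer options.
    raw = complete(
        f"Reasoning:\n{chain}\n"
        f"Introduce {n} typical student misconceptions into this reasoning "
        "and list the resulting incorrect answers, one per line."
    )
    return [line.strip() for line in raw.splitlines() if line.strip()][:n]
```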
Keywords: distractor generation, artificial intelligence, large language models, knowledge assessment, test items, automated test generation, NLP
DOI: 10.26102/2310-6018/2025.49.2.034
Industrial robots are one of the means of increasing production volumes. Bundling, milling, welding, laser processing, and 3D printing are among the processes that require maintaining high-precision positioning of industrial robots throughout the entire operation cycle. This article analyzes the use of the Denavit-Hartenberg (DH) method to determine the positioning and orientation errors of an industrial robot. In this study, the DH method is used to model possible errors of industrial robots and to build a database of deviations of the robot's links and end effector from a predetermined trajectory. Special attention is paid to the practical steps of creating a synthetic dataset of axis deviations of an industrial robot, from the kinematic model of the robot to the preparation of the final data format for subsequent analysis and the construction of a predictive analytics model. The importance of careful data preparation is highlighted by examples from other research in the field of predictive analytics of industrial equipment, demonstrating the economic benefits of timely detection and prevention of possible equipment failures. The developed model will subsequently be used to generate a synthetic dataset of axis deviations of an industrial robot. The proposed data collection model and methodology for creating a dataset for predictive analytics are being tested on a six-axis robot designed for this purpose.
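The standard DH transform underlying such a kinematic model can be sketched as follows; the link parameter values are illustrative, not those of the robot studied in the article. Perturbing a parameter (e.g. a joint angle by a small delta) and recomputing the chained transform yields the positional deviation recorded in a synthetic dataset.

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul4(A, B):
    """4x4 matrix product for chaining per-link transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Chaining two links gives the pose of the second frame in the base frame.
T = matmul4(dh_matrix(0.0, 0.4, 0.025, math.pi / 2),
            dh_matrix(0.0, 0.0, 0.455, 0.0))
```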
Keywords: inverse kinematics problem, predictive analytics, simulation modeling, industrial robot malfunction assessment, Denavit-Hartenberg method, automation, fault diagnosis
DOI: 10.26102/2310-6018/2025.49.2.031
The article addresses the pressing issue of developing a unified information space for the integration of artificial intelligence components in the context of information support for design and technological preparation of production. It considers the challenge of creating a digital engineering assistant whose functions include the analysis of design documentation, processing of two-dimensional and three-dimensional models, and generation of new design and technological solutions. The model of interaction between the digital assistant and the engineer is proposed within the framework of integrating computer-aided design systems, engineering data management systems, and digital content management systems. This integration is based on the novel concept of "affordance", which is widely used to describe the characteristics of artificial intelligence systems, as well as in perception psychology and design to describe human interaction with technical devices. Using this concept, an information-logical model of an integrated enterprise information environment has been developed—an environment that brings together natural and artificial intelligence for the purpose of facilitating creative engineering activity. The classification of implementation options based on affordances is proposed as a foundation for compiling and annotating training datasets for generative models, as well as a guideline for formulating subsequent prompt queries. The proposed concept has been practically implemented and illustrated through the unification of medical device designs, including rehabilitation products, surgical navigation systems, multisensory simulators, and a modular expert virtual system. The findings presented in the article have practical value for the automation of engineering decision-making support, as well as for higher education in training engineering specialists, including in interdisciplinary fields such as medical engineering.
Keywords: computer-aided design systems, product information support, artificial intelligence, scientific and technical creativity, engineering activities, affordance
DOI: 10.26102/2310-6018/2025.48.1.033
Eating disorders (EDs) are among the most pressing issues in public health, affecting individuals across various age and social groups. With the rapid growth of digitalization and the widespread use of social media, there emerges a promising opportunity to detect signs of EDs through the analysis of user-generated textual content. This study presents a comprehensive approach that combines natural language processing (NLP) techniques, Word2Vec vectorization, and a neural network architecture for binary text classification. The model aims to identify whether a post is related to disordered eating behavior. Additionally, the study incorporates social network analysis to examine the structure of interactions among users who publish related content. Experimental results demonstrate high precision (0.87), recall (0.84), and overall performance, confirming the model’s practical applicability. The network analysis revealed clusters of users with ED-related content, suggesting the presence of a "social contagion" effect – whereby dysfunctional behavioral patterns may spread through online social connections. These findings highlight the potential of NLP and graph-based modeling in the early detection, monitoring, and prevention of eating disorders by leveraging digital traces left in online environments.
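The vectorize-then-classify step can be illustrated with a toy sketch: a post is mapped to the mean of its word vectors (as Word2Vec averaging would produce) and scored against a fixed linear direction. The tiny hand-made vectors, the direction, and the sign rule are illustrative assumptions; the article trains a real Word2Vec model and a neural classifier.

```python
def post_vector(tokens, embeddings):
    """Mean of the available word vectors for a tokenized post."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0] * len(next(iter(embeddings.values())))
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def ed_score(tokens, embeddings, direction):
    """Toy linear score: positive -> flag the post as potentially ED-related."""
    return sum(a * b for a, b in zip(post_vector(tokens, embeddings), direction))

# Tiny hand-made "embeddings"; real vectors come from a trained Word2Vec model.
EMB = {"diet": [0.9, 0.1], "fasting": [0.8, 0.2],
       "movie": [0.1, 0.9], "music": [0.0, 1.0]}
DIRECTION = [1.0, -1.0]   # toy stand-in for the trained classifier
```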
Keywords: eating disorders, text analysis, machine learning, neural network models, natural language processing, social graph, network analysis
DOI: 10.26102/2310-6018/2025.49.2.030
The article proposes an original algorithm for statistical estimation of the probability of reaching a target price based on the analysis of returns and volatility using a drifted random walk model and integration of data from different timeframes. The relevance of the study stems from the need to make informed decisions in algorithmic trading under market uncertainty. The key feature of the approach is the aggregation of probabilities computed from different time intervals using Bayesian adjustment and weighted averaging, with weights dynamically determined based on volatility. The use of a universal fuzzy scale for qualitative interpretation of the evaluation results is also proposed. The algorithm includes the calculation of logarithmic returns, trend, and volatility, while stability is improved through data cleaning and anomaly filtering using a modified Hampel method. The article presents a calculation example using real OHLCV data and discusses possible approaches to validating the accuracy of the estimates when historical records of target price attainment are available. The results demonstrate the practical applicability of the proposed method for assessing the feasibility of reaching forecasted targets and for filtering trading signals. The developed algorithm can be used in risk management, trading strategy design, and expert decision support systems in financial markets.
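One building block of such an estimate can be sketched in closed form: the probability that, under a random walk with per-step drift mu and volatility sigma of log returns, the log-price at the horizon ends at or above the log target. The multi-timeframe aggregation, Bayesian adjustment, and Hampel filtering from the article are not reproduced here; function names are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_at_or_above_target(price, target, mu, sigma, steps):
    """P(terminal log-price >= log target) under a drifted random walk,
    with per-step drift mu and volatility sigma of logarithmic returns."""
    z = (math.log(target / price) - mu * steps) / (sigma * math.sqrt(steps))
    return 1.0 - norm_cdf(z)

p = prob_at_or_above_target(price=100.0, target=105.0, mu=0.001, sigma=0.02, steps=50)
```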
Keywords: statistical probability estimation, target price, return, volatility, random walk with drift, timeframe integration, bayesian adjustment, fuzzy logic, logarithmic return, financial modeling
DOI: 10.26102/2310-6018/2025.50.3.001
In the context of the digital transformation of education, MOOC platforms face the need to optimize operational processes while maintaining the quality of education. Traditional approaches to resource management often do not take into account complex temporal patterns of user behavior and individual learning characteristics. This paper proposes an innovative solution based on interpretable reinforcement learning (RL) integrated with the Shapley value method to analyze the contribution of factors. The study demonstrates how data on activity time, user IDs, training goals, and other parameters can be used to train an RL agent capable of optimizing the allocation of platform resources. The developed approach allows: quantifying the contribution of each factor to operational efficiency; identifying hidden temporal patterns of user activity; and personalizing load management during peak periods. The article contains a mathematical justification of the method, a practical implementation in MATLAB, and the results of testing, which showed a reduction in operating costs alongside increased user satisfaction. Special attention is paid to the interpretability of the RL agent's decisions, which is critically important for the educational sphere. The work provides a ready-made methodology for the implementation of intelligent management systems in digital education, combining theoretical developments with practical recommendations for implementation. The results of the study open up new opportunities for improving the effectiveness of MOOC platforms in the face of growing competition in the educational technology market.
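The factor-attribution idea can be illustrated with an exact Shapley-value computation over a small factor set, averaging marginal contributions across all orderings. The toy value function standing in for "operational efficiency with a given factor subset enabled" is an illustrative assumption.

```python
import math
from itertools import permutations

def shapley_values(factors, v):
    """Exact Shapley values: average marginal contribution over all orderings."""
    contrib = {f: 0.0 for f in factors}
    for order in permutations(factors):
        coalition = set()
        for f in order:
            before = v(frozenset(coalition))
            coalition.add(f)
            contrib[f] += v(frozenset(coalition)) - before
    k = math.factorial(len(factors))
    return {f: c / k for f, c in contrib.items()}

# Toy value function: "efficiency" of the policy with a given factor subset.
def v(subset):
    return 10.0 * ("activity_time" in subset) + 5.0 * ("training_goal" in subset)

sv = shapley_values(["activity_time", "user_id", "training_goal"], v)
```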
Keywords: reinforcement learning, Shapley value, operational efficiency, digital transformation, interpretable AI, business process optimization
DOI: 10.26102/2310-6018/2025.49.2.035
In today's competitive market, companies face the challenge of choosing optimal marketing strategies that maximize customer engagement, retention, and revenue. Traditional methods such as rule-based approaches or A/B testing are often not flexible enough to adapt to dynamic customer behavior and long-term trends. Reinforcement learning (RL) offers a promising solution, enabling adaptive decision-making through continuous interaction with the environment. This article explores the use of RL in marketing, demonstrating how customer data – such as purchase history, campaign interactions, demographic characteristics, and loyalty metrics – can be used to train an RL agent. The agent learns to choose personalized marketing actions, such as sending discounts or customized offers, in order to maximize metrics such as revenue growth or reduced customer churn. The article provides a step-by-step guide to implementing an RL-based marketing strategy using MATLAB: the creation of a custom environment, the design of an RL agent, and the learning process are considered, along with practical recommendations for interpreting agent decisions. By simulating customer interactions and evaluating agent performance, we demonstrate the potential of RL to transform marketing strategies. The aim of the work is to bridge the gap between advanced machine learning methods and their practical application in marketing by offering a roadmap for companies seeking to use the capabilities of RL for decision-making.
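The adaptive-decision idea can be reduced to a toy epsilon-greedy agent that chooses among marketing actions and updates its value estimates from observed reward (e.g. incremental revenue). Action names and rewards are illustrative; the article's MATLAB implementation uses a full RL environment rather than this bandit simplification.

```python
import random

class EpsilonGreedyAgent:
    """Toy epsilon-greedy bandit over a set of marketing actions."""

    def __init__(self, actions, epsilon=0.1):
        self.epsilon = epsilon
        self.q = {a: 0.0 for a in actions}   # estimated value per action
        self.n = {a: 0 for a in actions}     # times each action was taken

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))   # explore
        return max(self.q, key=self.q.get)       # exploit

    def update(self, action, reward):
        self.n[action] += 1
        # Incremental mean of observed rewards for this action.
        self.q[action] += (reward - self.q[action]) / self.n[action]
```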
Keywords: reinforcement learning, customer behavior, marketing strategies, state of the environment, agent actions, agent reward
DOI: 10.26102/2310-6018/2025.49.2.024
Often, when constructing regression models, it is necessary to resort to nonlinear transformations of explanatory variables. Both elementary and non-elementary functions can be used for this. This is done because many patterns in nature are complex and poorly described by linear dependencies. Usually, the transformations of explanatory variables in a regression model are constant for all observations of the sample. This work is devoted to constructing nonlinear regressions with switching transformations of the selected explanatory variable. In this case, the least absolute deviations method is used to estimate the unknown regression parameters. To form the rule for switching transformations, an integer function "floor" is used. A mixed 0–1 integer linear programming problem is formulated. The solution of this problem leads to both the identification of optimal estimates for nonlinear regression and the identification of a rule for switching transformations based on the values of explanatory variables. A problem of modeling the weight of aircraft fuselages is solved using this method. The nonlinear regression constructed with the proposed method using switching transformations turned out to be more accurate than the model using constant transformations over the entire sample. An advantage of the mechanism developed for constructing regression models is that thanks to the knowledge of the rules for switching transformations, the resulting regression can be used for forecasting.
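The switching mechanism can be sketched minimally: the floor of a scaled explanatory variable selects which transformation applies to each observation. The threshold and the two candidate transformations below are illustrative assumptions, not those identified for the fuselage weight model.

```python
import math

def switched_transform(x, c):
    """Apply a transformation selected by the floor-based switching rule
    s(x) = min(floor(x / c), 1): 0 -> sqrt(x), 1 -> log(x)."""
    s = min(math.floor(x / c), 1)
    return math.sqrt(x) if s == 0 else math.log(x)

# Observations below the threshold c are transformed by sqrt, the rest by log.
values = [0.5, 2.0, 9.0]
transformed = [switched_transform(v, c=4.0) for v in values]
```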
Keywords: regression analysis, nonlinear regression, least absolute deviations method, mixed 0–1 integer linear programming problem, integer function «floor», weight model of aircraft fuselage