DOI: 10.26102/2310-6018/2026.54.3.021
The reliable and objective determination of motor evoked potential (MEP) characteristics – onset latency, peak-to-peak amplitude, duration, and waveform morphology – is fundamental to clinical neurophysiology, yet in modern practice it largely depends on operator judgment. Mathematical signal-processing algorithms offer a transparent, deterministic, and reproducible alternative. We present, characterize, and systematically evaluate a complete mathematical pipeline for identifying MEP features, consisting of three stages: onset detection based on the Teager–Kaiser energy operator (TKEO) applied to a pre-processed signal with an adaptive threshold k·σ_baseline; offset estimation via Hilbert-transform amplitude-envelope tracking with a return-to-baseline criterion; and morphological classification by counting significant zero crossings to assign monophasic, biphasic, or polyphasic labels. At the marker-verification stage, trials in which the detected features do not exceed the minimum noise level are rejected. Below an SNR of 3.0, performance degrades: the latency MAE increases from 1.4 ms (SNR ≥ 5) to 9.7 ms (SNR < 3). Morphological classification accuracy is 94% for high-SNR recordings and falls to 61% for very-low-SNR recordings. The mathematical pipeline provides clinically acceptable accuracy for MEPs with high and medium SNR and serves as an interpretable reference standard with zero training cost. Its failure modes are well characterized, SNR-dependent, and predictable – properties that make it a natural baseline comparator for evaluating more advanced automated analysis methods.
Keywords: motor evoked potentials, transcranial magnetic stimulation, TKEO, Teager–Kaiser energy operator, Hilbert transform, amplitude envelope, electromyography, signal processing
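A minimal sketch of the onset and offset stages described in this abstract, assuming a uniformly sampled single-trial EMG trace; the sampling rate, baseline window, and threshold factor k are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import hilbert

def tkeo(x):
    """Teager–Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_onset(x, fs, baseline_ms=50.0, k=5.0):
    """First sample where TKEO exceeds mean + k*sigma of the baseline TKEO."""
    psi = tkeo(x)
    n_base = int(baseline_ms * 1e-3 * fs)
    thr = psi[:n_base].mean() + k * psi[:n_base].std()
    above = np.flatnonzero(psi > thr)
    return above[0] if above.size else None

def envelope_offset(x, fs, onset, baseline_ms=50.0, k=5.0):
    """Offset: first post-onset sample where the Hilbert envelope returns to baseline."""
    env = np.abs(hilbert(x))
    n_base = int(baseline_ms * 1e-3 * fs)
    thr = env[:n_base].mean() + k * env[:n_base].std()
    below = np.flatnonzero(env[onset:] < thr)
    return onset + below[0] if below.size else None
```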
DOI: 10.26102/2310-6018/2026.54.3.019
Motor evoked potentials (MEPs) are electrophysiological signals of crucial diagnostic and monitoring importance in neurology, neurosurgery, and rehabilitation medicine. Traditionally, feature extraction from MEP data has relied on manual inspection and measurement performed by trained clinicians according to established rules, a process that is inherently subjective, time-consuming, and subject to significant inter-observer differences. This article provides a comprehensive rationale for using convolutional neural network (CNN)-based approaches to extract MEP features. Compared to traditional manual methods, CNNs provide superior performance on key parameters, including accuracy, reproducibility, processing speed, and the ability to detect hidden morphological patterns that may escape human visual perception. In addition, automated CNN-based analysis eliminates inter-observer variability and enables real-time intraoperative monitoring. Performance estimates based on computer modeling and a structured comparative analysis of the two methods strongly support this claim. The introduction of CNNs represents a revolutionary step towards objective, scalable, and clinically reliable analysis that can standardize the interpretation of MEPs in a variety of clinical settings and potentially improve patient outcomes through more consistent neurological assessment.
Keywords: motor evoked potentials, convolutional neural networks, feature extraction, transcranial magnetic stimulation, intraoperative neurophysiology, deep learning, electrophysiology, automated analysis, inter-rater reliability, signal processing
DOI: 10.26102/2310-6018/2026.55.4.009
In New IP and ManyNets architectures (ITU-T Network 2030), the need to predict network characteristics, including path delay, without heavy simulation is growing; it remains unclear when graph neural networks outperform simple computational methods and how such models generalize across graph sizes. This article assesses the applicability of a graph neural network to the path-delay task on synthetic graphs with a target formula accounting for link load, and evaluates generalization to larger graphs. A comparative experiment on Erdős–Rényi graphs was conducted: a graph-convolution model was compared with a baseline method in two experiments, a load-aware target-latency experiment and a generalization test on graphs with 15 and 20 nodes after training on 15-node graphs. Results (single run): in the first experiment the baseline gave MAE 1.85 and MAPE 7.89 %, the graph model 9.91 and 59.20 %; in the second, when moving from 15- to 20-node test graphs, the graph model's MAE decreased by about 7 % and the baseline's increased by about 8 %. The approach is concluded to be applicable on synthetic data as a first step toward models for predicting network characteristics in New IP and ManyNets architectures. The materials are of practical value for specialists choosing and validating delay prediction methods and planning experiments on synthetic topologies.
Keywords: graph neural networks, network characteristics prediction, New IP, ManyNets, delay prediction, synthetic network topologies, Erdős–Rényi graphs, quality of service, network topology, graph convolution
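A minimal sketch of the synthetic setup this abstract describes, assuming a load-aware link delay of the form base / (1 − load) summed along the minimum-delay path; the delay formula, edge attributes, and parameter ranges are illustrative assumptions, since the abstract does not specify them.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def make_graph(n=15, p=0.3):
    """Connected Erdős–Rényi graph with random per-link latency and load."""
    while True:
        g = nx.erdos_renyi_graph(n, p, seed=int(rng.integers(1 << 31)))
        if nx.is_connected(g):
            break
    for _, _, d in g.edges(data=True):
        d["base"] = rng.uniform(1.0, 10.0)           # ms, propagation latency
        d["load"] = rng.uniform(0.0, 0.9)            # link utilization
        d["delay"] = d["base"] / (1.0 - d["load"])   # load-aware link delay
    return g

def path_delay(g, src, dst):
    """Target value: total delay along the minimum-delay path."""
    return nx.shortest_path_length(g, src, dst, weight="delay")

g = make_graph()
print(path_delay(g, 0, g.number_of_nodes() - 1))
```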
DOI: 10.26102/2310-6018/2026.55.4.004
In modern conditions, the success of project activities is determined not only by the professional competencies of participants but also by their socio-psychological compatibility. Existing mathematical models of team formation, based on the classical assignment problem, are focused exclusively on the resource-based approach and do not take into account interpersonal relationships, which also affect the efficiency of joint activities. The aim of the work is to develop a mathematical model and software for forming project teams that combines the professional competencies of candidates and the sociometric characteristics of their relationships to achieve a synergistic effect. A model is proposed that extends the generalized assignment problem by incorporating sociometric indices of cohesion and conflict, and also excludes teams with mutual antipathies. To solve the NP-hard optimization problem, a genetic algorithm implemented in Python using the DEAP framework was applied. An individual is represented by a fixed-length chromosome, where the position corresponds to the role and the value to the candidate's index. The operation of the algorithm is demonstrated on a test example. The model and algorithm can be used by project managers, HR specialists, and educators for the informed formation of student and professional teams with a favorable socio-psychological climate.
Keywords: project team formation, assignment problem, mathematical model, sociometry, genetic algorithm
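A minimal DEAP sketch of the encoding described in this abstract (a fixed-length chromosome with one gene per role, whose value is a candidate index); the skill matrix, the antipathy pairs, and the penalty scheme are illustrative assumptions standing in for the paper's sociometric indices.

```python
import random
from deap import base, creator, tools, algorithms

N_ROLES, N_CANDIDATES = 5, 20
SKILL = [[random.random() for _ in range(N_ROLES)] for _ in range(N_CANDIDATES)]
ANTIPATHY = {(2, 7), (7, 2)}  # hypothetical pairs with mutual antipathy

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("gene", random.randrange, N_CANDIDATES)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.gene, N_ROLES)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

def evaluate(ind):
    # Infeasible teams: duplicate candidates or a pair with mutual antipathy.
    if len(set(ind)) < len(ind):
        return (-1.0,)
    if any((a, b) in ANTIPATHY for a in ind for b in ind):
        return (-1.0,)
    # Professional fit; a cohesion bonus from sociometric indices would be added here.
    return (sum(SKILL[c][r] for r, c in enumerate(ind)),)

toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxUniform, indpb=0.5)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=N_CANDIDATES - 1, indpb=0.2)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=50)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.7, mutpb=0.3,
                             ngen=40, verbose=False)
print(tools.selBest(pop, 1)[0])
```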
DOI: 10.26102/2310-6018/2026.55.4.012
This article discusses the development of a multi-task hybrid neural network model with a multi-branch regression block structure for the simultaneous detection and quantitative assessment of defect sizes based on ultrasonic non-destructive testing data. The primary objective of the study is to improve the accuracy of determining defect geometric parameters through parallel feature processing using different activation functions within a single multi-task architecture. The study utilizes ultrasonic testing data for a welded joint made of austenitic stainless steel with artificial cracks. The methodology included expanding the previously developed CNN-GRU model for binary classification to a multi-task model, where the regression block is implemented as a multi-branch structure with parallel transformations and subsequent feature integration. Training was conducted with balanced loss functions to jointly optimize the classification and regression tasks. The results demonstrated the high efficiency of the proposed approach: the model achieved perfect classification accuracy and a low regression error, with a mean absolute error of 0.118 mm (5.3 % of the average defect size). A comparison with a model of similar architecture without the multi-branch structure confirmed that the proposed solution reduces the error by more than twofold and eliminates systematic prediction bias. The developed architecture may have practical implications for automated ultrasound diagnostic systems that require not only detection but also precise measurement of defect parameters.
Keywords: ultrasonic testing, multi-task learning, multi-branch architecture, classification, regression, neural networks, flaw detection
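A minimal PyTorch sketch of a multi-branch regression head of the kind this abstract describes: parallel branches with different activation functions whose outputs are fused before the final regression layer. The layer sizes, the ReLU/Tanh/ELU branch choice, and the placeholder encoder are illustrative assumptions; the paper's CNN-GRU trunk is not reproduced.

```python
import torch
import torch.nn as nn

class MultiBranchHead(nn.Module):
    """Parallel feature transforms with different activations, then fusion."""
    def __init__(self, d_in=128, d_branch=64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(d_in, d_branch), act())
            for act in (nn.ReLU, nn.Tanh, nn.ELU)
        ])
        self.fuse = nn.Linear(3 * d_branch, 1)  # defect size, mm

    def forward(self, h):
        z = torch.cat([b(h) for b in self.branches], dim=-1)
        return self.fuse(z)

class MultiTaskModel(nn.Module):
    def __init__(self, d_in=128):
        super().__init__()
        self.encoder = nn.Linear(32, d_in)     # stand-in for the CNN-GRU trunk
        self.cls_head = nn.Linear(d_in, 1)     # defect / no defect (logit)
        self.reg_head = MultiBranchHead(d_in)  # defect size regression

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.cls_head(h), self.reg_head(h)

model = MultiTaskModel()
logit, size = model(torch.randn(8, 32))
# Balanced multi-task loss: L = w_cls * BCE + w_reg * MAE (weights are assumptions).
```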
DOI: 10.26102/2310-6018/2026.55.4.017
In the context of increasing heterogeneity in software development practices and documentation standards, ensuring the completeness and structural consistency of technical specifications remains a complex and labor-intensive task. Existing regulatory frameworks, including GOST 34, IEEE 830, ISO/IEC/IEEE 29148, and the Volere methodology, propose different approaches to structuring requirements; however, their simultaneous use in real-world projects often results in section duplication, structural inconsistency, and significant manual verification effort. This paper proposes an adaptive technical specification template based on a parameterized graph model that enables the formal integration of a mandatory regulatory core with flexibly connected extensions depending on the software type, industry-specific requirements, and the required level of detail. An automated structural verification algorithm for DOCX and PDF documents is developed, combining hierarchy extraction with fuzzy heading matching. A template adaptability metric is introduced. Experimental validation on real-world technical specifications demonstrates structural extraction accuracy of up to 92 % for DOCX documents. The proposed approach can serve as a basis for intelligent tools for analyzing technical documentation.
Keywords: technical specifications, graph model, template adaptability, fuzzy matching, structural analysis
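A minimal sketch of the fuzzy heading matching step described in this abstract, using stdlib difflib as the similarity measure; the required section names (here, GOST 34-style headings) and the 0.8 cutoff are illustrative assumptions.

```python
from difflib import SequenceMatcher

REQUIRED = ["общие сведения", "назначение и цели создания системы",
            "требования к системе"]  # hypothetical GOST 34 core sections

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def verify_structure(extracted_headings, required=REQUIRED, cutoff=0.8):
    """Map each required section to its best fuzzy match among extracted headings."""
    report = {}
    for section in required:
        best = max(extracted_headings, key=lambda h: similarity(section, h),
                   default=None)
        score = similarity(section, best) if best else 0.0
        report[section] = (best, score) if score >= cutoff else (None, score)
    return report

print(verify_structure(["1. Общие сведения", "2 Требования к системе"]))
```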
DOI: 10.26102/2310-6018/2026.55.4.014
This paper examines the architecture of a distributed computing system built on heterogeneous mobile devices and employing a combined dynamic load balancing method. This approach is focused on wireless environments where the composition of nodes and their performance vary over time. The performance of smartphones as computing nodes is analyzed, and the factors limiting their effectiveness are investigated: heterogeneity of hardware platforms, thermal throttling, heterogeneity of computing cores, and background activity load. An algorithm is proposed that combines a static assessment of node capacity and dynamic adjustment of performance factors taking into account the frequency, temperature, and current processor load. The algorithm incorporates a fault-tolerant subtask redistribution mechanism: if a node is disconnected or freezes, unfinished subtasks are automatically returned to the queue and assigned to other workers. The proposed approach ensures adaptation of load distribution to the current state of computing nodes, maintaining stability of overall performance during fluctuations in their resources. Experimental testing was performed on a set of smartphones of different classes, using a task without inter-node data exchange as the test load. The experimental evaluation confirms that the developed method significantly reduces task execution time and minimizes load variance compared to static approaches.
Keywords: distributed computing, dynamic load balancing, fault tolerance, grid approach, thermal throttling
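A minimal sketch of the combined balancing idea described in this abstract: a static benchmark-based capacity score scaled by dynamic factors for CPU frequency, temperature, and background load. The factor definitions, temperature limits, and proportional assignment are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    base_score: float   # static benchmark score of the device
    freq_ratio: float   # current / max CPU frequency, 0..1
    temp_c: float       # SoC temperature
    cpu_load: float     # background utilization, 0..1

def effective_capacity(s: NodeState, t_safe=40.0, t_max=85.0) -> float:
    """Static capacity scaled by dynamic throttling and load factors."""
    thermal = 1.0 if s.temp_c <= t_safe else max(
        0.0, (t_max - s.temp_c) / (t_max - t_safe))
    return s.base_score * s.freq_ratio * thermal * (1.0 - s.cpu_load)

def assign(subtasks, nodes):
    """Proportional split; a failed node's subtasks are returned to the queue."""
    weights = {n: effective_capacity(st) for n, st in nodes.items()}
    total = sum(weights.values()) or 1.0
    return {n: round(len(subtasks) * w / total) for n, w in weights.items()}
```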
DOI: 10.26102/2310-6018/2026.55.4.018
The paper presents a study of covariance matrix inversion methods in adaptive beamforming for antenna arrays. Two signal processing paradigms are considered, namely spatial processing and space–time processing, for which the structure of the covariance matrix and its impact on the choice of inversion algorithms are analyzed. Wiener-optimal weight vectors, obtained as the solution of the mean square error minimization problem, are used as a reference solution. The Cholesky decomposition, a recursive Levinson-type algorithm, the Bareiss method, and an FFT-based approximation are compared in terms of the accuracy of reproducing the optimal weights, the resulting training mean square error, the shape of the radiation pattern and computational complexity. Numerical simulations are performed in MATLAB for different antenna array geometries under the same noise scenario. The article considers the relationship between the structure of the covariance matrix in spatial and space-time processing tasks, the choice of algorithms for its inversion, and their computational efficiency. It is shown that exact inversion methods provide results consistent with the Wiener-optimal solution, whereas approximate methods significantly reduce computational cost at the expense of a controlled increase in error. The obtained results confirm the practical relevance of structured covariance matrix inversion methods for space-time adaptive signal processing.
Keywords: adaptive antenna array, adaptive beamforming, covariance matrix, matrix inversion, mean square error
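A minimal sketch of the reference computation this abstract uses, assuming the Wiener weights solve Rw = p: an exact solution via Cholesky factorization, plus a Levinson-type Toeplitz solve (scipy.linalg.solve_toeplitz) of the kind that exploits covariance structure. The array size and simulated snapshots are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_toeplitz

rng = np.random.default_rng(0)
N = 8                                       # number of array elements
X = rng.standard_normal((N, 1000)) + 1j * rng.standard_normal((N, 1000))
d = X[0] + 0.1 * rng.standard_normal(1000)  # reference (desired) signal

R = X @ X.conj().T / X.shape[1]             # sample covariance matrix
p = X @ d.conj() / X.shape[1]               # cross-correlation vector

# Exact inversion via Cholesky: solve R w = p.
w_chol = cho_solve(cho_factor(R), p)

# Levinson-type solve, valid when R is (approximately) Toeplitz,
# as for a uniform linear array under stationary noise.
w_lev = solve_toeplitz((R[:, 0], R[0, :]), p)

mse = np.mean(np.abs(d - w_chol.conj() @ X) ** 2)  # training MSE of w^H x
print(mse)
```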
DOI: 10.26102/2310-6018/2026.54.3.020
This article explores a method for storing images by training a neural network on a single image and storing its weights as a compact representation. This approach significantly reduces the amount of data stored while maintaining acceptable visual quality. Model parameters and training settings are analyzed to optimize recovery quality. The basic idea of the approach is that a trained model stores its weights, which act as a compact representation of the original image. When reconstruction is required, the weights are reloaded into the network to restore the visual content. Experimental results show that optimizing the network architecture and color space (YCbCr) enables high compression ratios – up to 29.4× – while maintaining visual quality close to the original (MSE ≈ 10⁻⁵). However, the authors note a significant drawback of the method: long training times and substantial computational costs make it less effective than traditional compression algorithms for practical real-time applications. Nevertheless, the approach demonstrates potential for tasks where preserving fine image details is critical, such as data archiving or video stream compression.
Keywords: image compression, neural network, image archiving, single-image training, image restoration, multilayer perceptron, machine learning, positional encoding, coordinate encoding, artificial intelligence
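A minimal PyTorch sketch of the idea this abstract describes: an MLP that maps (x, y) pixel coordinates, lifted by a Fourier positional encoding, to YCbCr values, so that the trained weights serve as the compressed image. The depth, width, and number of frequencies are illustrative assumptions, not the paper's optimized configuration.

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Lift 2-D coordinates with sin/cos at geometrically spaced frequencies."""
    def __init__(self, n_freqs=10):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)

    def forward(self, xy):                   # xy: (B, 2) in [0, 1]
        proj = xy[..., None] * self.freqs    # (B, 2, n_freqs)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return enc.flatten(-2)               # (B, 4 * n_freqs)

class ImageMLP(nn.Module):
    def __init__(self, n_freqs=10, width=256):
        super().__init__()
        self.enc = FourierFeatures(n_freqs)
        self.net = nn.Sequential(
            nn.Linear(4 * n_freqs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 3),             # YCbCr output
        )

    def forward(self, xy):
        return self.net(self.enc(xy))

# Overfit one image: minimize MSE between model(coords) and pixel values,
# then store only model.state_dict() as the compressed representation.
model = ImageMLP()
out = model(torch.rand(1024, 2))
```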
DOI: 10.26102/2310-6018/2026.55.4.007
This paper addresses the problem of high-precision trajectory tracking for a nonlinear three-link robotic manipulator operating under parametric uncertainties and external disturbances. Conventional PID and classical adaptive control methods often demonstrate limited robustness and suboptimal energy efficiency when applied to dynamically coupled multi-link systems. To overcome these limitations, a Hybrid Adaptive-Optimization Control Framework is proposed. The approach integrates Adaptive Computed Torque Control with a Modified Particle Swarm Optimization algorithm for systematic controller gain tuning. The manipulator dynamics are derived using the Euler – Lagrange formulation and implemented in MATLAB through numerical time-domain integration. Controller parameters are optimized offline using a multi-objective cost function that incorporates trajectory tracking error, control effort, and energy consumption. The optimized gains are then applied within an online adaptive compensation structure to enhance robustness against modeling uncertainties. The simulation results show that the proposed approach reduces the mean square error by approximately 26 % compared to standard adaptive control, along with reductions in settling time, normalized energy consumption, and torque ripple, confirming the improvement in the accuracy, robustness, and energy efficiency of the system.
Keywords: robotic manipulator, adaptive control, hybrid optimal control, particle swarm optimization, trajectory tracking
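A minimal sketch of the computed-torque law at the core of such a framework, τ = M(q)(q̈_d + K_d ė + K_p e) + C(q, q̇)q̇ + g(q), where the gain matrices K_p and K_d are what the (modified) PSO would tune offline. The two-line dynamics stubs are placeholders, not the paper's three-link model.

```python
import numpy as np

def computed_torque(q, dq, q_d, dq_d, ddq_d, Kp, Kd, M, C, g):
    """tau = M(q) (ddq_d + Kd*de + Kp*e) + C(q, dq) dq + g(q)."""
    e, de = q_d - q, dq_d - dq
    v = ddq_d + Kd @ de + Kp @ e          # outer-loop PD on tracking error
    return M(q) @ v + C(q, dq) @ dq + g(q)

# Placeholder dynamics for a 3-link arm (illustrative only).
M = lambda q: np.eye(3)
C = lambda q, dq: np.zeros((3, 3))
g = lambda q: np.zeros(3)

# PSO would search over the diagonal gains to minimize a cost such as
# J = w1 * tracking_error + w2 * control_effort + w3 * energy (weights assumed).
Kp, Kd = np.diag([50.0, 50.0, 50.0]), np.diag([10.0, 10.0, 10.0])
tau = computed_torque(np.zeros(3), np.zeros(3),
                      np.ones(3), np.zeros(3), np.zeros(3), Kp, Kd, M, C, g)
print(tau)
```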
DOI: 10.26102/2310-6018/2026.54.3.015
In a context of persistently limited budgetary resources, exacerbated by the growing social burden on regional budgets, the problem of finding effective mechanisms for distributing state social funds is of paramount importance. The social well-being of millions of citizens and the stability of social relations directly depend on how rationally and fairly resources are distributed. A key element in building such an effective system is a clear, scientifically sound, and, crucially, prioritized classification of recipient groups of social assistance. This classification allows for a shift from an egalitarian support approach to a targeted one, focusing efforts and resources on the most vulnerable groups. This article proposes an innovative approach to algorithmizing this complex process. The proposed method is based on integrating the developed hierarchical classification of recipients with modern neural network technologies, specifically the ART-MAP family of architectures. The use of neural network technologies allows for the creation of a flexible, adaptive system capable of learning in real time, taking into account the dynamics of changes in the social environment, and ensuring not only accurate but also completely transparent, understandable, and justified distribution (redistribution) of financial flows, which is critical for upholding the principles of social justice.
Keywords: financial resource allocation, regional social fund, neural networks, algorithmization, management
DOI: 10.26102/2310-6018/2026.54.3.017
This article explores the use of computer-based methods for analyzing tabular data to forecast consumption in the Russian pharmaceutical market. It describes the key stage of developing an information system designed to forecast drug procurement and support management decision-making in the pharmaceutical supply chain. It examines the specifics of medical organizations' procurement activities and the key risks associated with planning drug demand and pricing. It details the modern methods used in the study, including machine learning models and feature significance analysis using SHAP. It describes the data preparation and preprocessing process, including collecting, cleaning, transforming, and encoding features, as well as generating training and test samples for building regression models. Particular attention is paid to identifying factors influencing drug pricing and improving forecasting accuracy through the use of specialized models for specific drug groups. The economic impact of implementing the developed tool is assessed: it enables medical organizations to manage procurement more effectively, optimize budgets, and reduce financial risks. Specific attention is given to forecasting drug prices and automating the planning and procurement process as part of the sustainable and rational development of the Russian pharmaceutical market.
Keywords: machine learning, artificial intelligence, SHAP analysis, information systems, demand forecasting, pharmaceutical market
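A minimal sketch of the modeling stage this abstract describes: a gradient-boosting regressor for drug price with SHAP attributions for feature-significance analysis. The synthetic feature frame, target, and model choice are illustrative assumptions; the sketch assumes the shap package is installed.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({                       # hypothetical encoded features
    "dosage_mg": rng.uniform(5, 500, 1000),
    "pack_size": rng.integers(10, 100, 1000),
    "is_imported": rng.integers(0, 2, 1000),
})
y = 0.1 * X["dosage_mg"] + 2.0 * X["is_imported"] + rng.normal(0, 1, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
# Mean |SHAP| per feature = global importance for the price model.
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```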
DOI: 10.26102/2310-6018/2026.55.4.016
Budget-constrained localization of multiple roots of nonlinear equation systems requires both broad coverage of different attraction basins and rapid refinement of promising candidates when the number of residual evaluations is limited. Many niching variants of differential evolution perform replacement within local neighborhoods, but overly local mating can reduce basin coverage and cause premature stagnation. This paper introduces Tiered Neighborhood-Exchange Differential Evolution, a crowding-based solver that preserves neighborhood replacement while injecting controlled global information. The method uses a residual-gated dual mutation that switches between neighborhood exploitation and a global anchor, and a tiered neighborhood-exchange crossover that couples individuals across three fitness strata to counteract diversity loss. An archive of verified roots and distance-based duplicate filtering are employed to maintain a set of distinct solutions. Experiments on six benchmark systems show that, under identical evaluation budgets, the proposed method improves the recovered-root proportion and the probability of finding all distinct roots compared with representative niching differential-evolution baselines.
Keywords: differential evolution, nonlinear equation systems, multi-root localization, niching, neighborhood exchange, evaluation budget, evolutionary computation
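A minimal sketch of the residual-gated dual mutation this abstract names: an individual whose residual norm is below a gate exploits its neighborhood, while one above the gate is pulled toward a global anchor (here, the current best). The toy two-equation system, gate value, neighborhood size, and DE constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual(x):
    """Example 2-D system: f1 = x0^2 + x1^2 - 1, f2 = x0 - x1."""
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])

def res_norm(x):
    return np.linalg.norm(residual(x))

def mutate(pop, i, F=0.5, gate=0.5, k=5):
    """Neighborhood exploitation if the residual is small, global anchor otherwise."""
    x = pop[i]
    d = np.linalg.norm(pop - x, axis=1)
    nbrs = np.argsort(d)[1:k + 1]                    # k nearest neighbors
    r1, r2, r3 = rng.choice(nbrs, 3, replace=False)
    if res_norm(x) < gate:                           # exploit the local basin
        return pop[r1] + F * (pop[r2] - pop[r3])
    best = pop[np.argmin([res_norm(p) for p in pop])]
    return x + F * (best - x) + F * (pop[r1] - pop[r2])  # pull toward anchor

pop = rng.uniform(-2, 2, size=(30, 2))
print(mutate(pop, 0))
```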
DOI: 10.26102/2310-6018/2026.55.4.006
The paper proposes a modification of the information process model for remote monitoring of object condition aimed at improving the correctness of result interpretation under conditions of heterogeneous data sources, different measurement frequencies, data transmission delays, and incomplete observations. The objective of the study is to extend the original model by incorporating additional stages and mechanisms that ensure data quality control, temporal alignment of data streams, robustness of notifications, and reproducibility of the obtained assessments. The research methods include structural and functional decomposition of the information process and formalization of data processing principles at each added stage. The proposed modification introduces: an object profile serving as a context for parameter interpretation and a mechanism for unambiguous assignment of measurements to a specific object; temporal synchronization of data streams based on window processing; a data quality control loop with validity labeling and anomaly detection; a confidence indicator for state assessment considering the completeness and quality of observations; event-based interpretation of results; robust notification mechanisms based on an extended threshold model with hysteresis and message rate limiting; explainable inference tools identifying the parameters that influenced the assigned status; and traceability of results through logging of input data, interpretation rules, and output assessments. As a result, a refined structure of the information process has been developed, enabling state assessment that accounts for the quality and consistency of input data and ensuring stable delivery of results to the monitoring subject.
Keywords: remote monitoring, object condition, heterogeneous data source, information process, structural-functional model, data quality control, temporal synchronization, window processing, robust notifications, traceability of results
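A minimal sketch of the robust notification mechanism this abstract describes: an extended threshold model with hysteresis (separate trip and release levels) and message-rate limiting. The threshold values and the minimum interval between alerts are illustrative assumptions.

```python
import time

class HysteresisNotifier:
    """Alarm trips above `high`, releases below `low`; alerts are rate-limited."""
    def __init__(self, high=80.0, low=70.0, min_interval_s=60.0):
        self.high, self.low = high, low
        self.min_interval_s = min_interval_s
        self.active = False
        self._last_sent = -float("inf")

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        if not self.active and value > self.high:
            self.active = True
            if now - self._last_sent >= self.min_interval_s:
                self._last_sent = now
                return f"ALERT: value {value} exceeded {self.high}"
        elif self.active and value < self.low:
            self.active = False
            return f"RESOLVED: value {value} back below {self.low}"
        return None  # inside the hysteresis band, or rate-limited

n = HysteresisNotifier()
for v, t in [(85, 0), (75, 1), (86, 2), (65, 3)]:
    print(n.update(v, now=t))
```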
DOI: 10.26102/2310-6018/2026.55.4.010
The relevance of the study is due to the need to increase the efficiency of analyzing the dynamic characteristics of damping bearings of gas turbine engines, since existing finite element models are computationally complex and are not applicable for operational analysis, and simplified analytical models are focused on a generalized assessment of characteristics and have limited capabilities in the study of nonlinear contact and hydrodynamic effects. In this regard, this article is aimed at developing a multiphysical simulation model of a damping support of a gas turbine engine, providing a reliable study of its dynamic and damping characteristics as part of a virtual test complex. The leading research method is a systematic approach based on the integration of the Simscape libraries and the MATLAB Simulink Multibody environment, which allows for consistent modeling of mechanical, contact, and hydrodynamic processes in the bearing assembly and damping package, as well as parametric analysis of the effect of design characteristics on the dynamic response of the system. The article develops a multiphysical model of a damping support that implements the interaction of rolling elements, elastic-dissipative elements and a hydrodynamic medium, and studies the effect of the number of bands and corrugations of the damping package on the power and frequency characteristics of the support. The simulation results obtained on the basis of the developed model make it possible to quantify the effect of the design parameters of the damping bearings on the vibration stability of the rotor and can be used in the design, optimization and virtual prototyping of the support units of gas turbine engines.
Keywords: virtual test facility, gas turbine engine, damping supports, multiphysical model, hydrodynamic model
DOI: 10.26102/2310-6018/2026.55.4.011
The relevance of this research is determined by the need to ensure rapid access for emergency service vehicles to the territory of secured facilities, access to which in the modern urban environment is often restricted by automatically controlled barriers and other physical obstacles. This issue can be addressed by implementing intelligent identification systems for emergency service vehicles. Consequently, this paper aims to develop an algorithm for the automatic identification of emergency service vehicles based on images. The core idea of the proposed algorithm relies on the combined use of an artificial neural network and an ontological knowledge model of emergency service vehicles. The ontology was developed using the Protégé editor and the OWL language, based on an analysis of open data concerning the classification and equipment of emergency services. The YOLOv8 architecture, trained on an extended Roboflow dataset, was chosen as the foundation of the artificial neural network. The results of the experimental study confirmed the high efficiency of the proposed model, which achieved an accuracy of 89 %, indicating its practical applicability for the target task. The developed algorithm can be integrated into intelligent access control systems for residential complexes and commercial facilities, thereby contributing to an increased level of safety and optimized service delivery.
Keywords: OWL ontology, semantic model, artificial neural network, image recognition algorithm, emergency service vehicles
DOI: 10.26102/2310-6018/2026.55.4.020
The development of artificial intelligence technologies in medicine requires a systematic approach to collecting and processing structured datasets for training, testing, and validating machine learning models. This paper proposes a solution to this problem through simulation modeling based on queueing theory. Such modeling requires estimating the planned throughput of each data collection point, ensuring a sufficient number of patients, the availability and reliability of their medical information, and compliance with legal requirements on personal data protection and medical ethics. The proposed approach was studied through the analysis of biomedical data collection processes designed to train artificial intelligence models for remote diagnostic methods. The empirical part of the study was conducted at biomedical signal collection points over a six-month period; the total sample size was 574 patients. A simulation model was developed to optimize the data collection process. According to the simulation, the average data collection intensity was 7.28 patients per day with significant variability in workload. During optimization, the data collection process was parallelized, which increased productivity by reducing the time spent on questionnaires and temperature measurements and increasing patient throughput; the optimization raised the workload from 4.67 to 12.12 patients per day. The proposed approach makes it possible to validate the architecture of the organizational and technological data collection process before scaling and minimizes the risk of missing schedule deadlines for generating medical datasets.
Keywords: medical dataset, simulation modeling, queueing theory, digital twin, throughput, artificial intelligence
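A minimal SimPy sketch of a data collection point as a queueing system, with the questionnaire and temperature measurement as service stages and staff as a shared resource; the arrival rate, service times, and staffing level are illustrative assumptions, not the measured values reported in the paper.

```python
import random
import simpy

random.seed(0)
SHIFT_MIN = 8 * 60          # one working day, minutes

def patient(env, staff, served):
    with staff.request() as req:
        yield req
        # Sequential variant: questionnaire, then temperature measurement.
        yield env.timeout(random.expovariate(1 / 15.0))  # questionnaire, ~15 min
        yield env.timeout(random.expovariate(1 / 5.0))   # temperature, ~5 min
        served.append(env.now)

def arrivals(env, staff, served, rate_per_day=8.0):
    while True:
        yield env.timeout(random.expovariate(rate_per_day / SHIFT_MIN))
        env.process(patient(env, staff, served))

env = simpy.Environment()
staff = simpy.Resource(env, capacity=2)   # two parallel workstations
served = []
env.process(arrivals(env, staff, served))
env.run(until=SHIFT_MIN * 125)            # roughly six months of shifts
print("patients/day:", len(served) / 125)
```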
DOI: 10.26102/2310-6018/2026.54.3.010
The relevance of this study stems from the rapid development of electronic commerce and the growing need for effective tools to predict user behavior in online retail environments. The main problem lies in the fact that existing solutions in this domain are often limited to specific datasets, lack sufficient scalability, and rarely support real-time automation of the forecasting process. The purpose of this study is to develop a decision support system that enables the estimation of the probability of future purchase completion based on the analysis of user behavioral data and provides decision-makers with actionable recommendations for subsequent marketing activities. The methodological framework of the study is based on the use of a web analytics system as a source of information on user activities, data preprocessing and structuring procedures, and the application of gradient boosting as a machine learning algorithm for predicting the probability of purchase. To identify internal and external factors that could have a positive or negative impact on achieving the goal, a SWOT analysis was conducted. Experimental validation of the system was conducted using data from four online stores representing different business domains. The results demonstrate that the overall F-score exceeds 80 % across all experiments. The materials presented in this article have practical relevance for e-commerce professionals, data analysts, and marketing specialists, as well as for decision-makers, since the proposed system enables automated prediction of purchasing behavior, the formation of interpretable user segments, and the application of the obtained results to marketing personalization and optimization of managerial decision-making.
Keywords: machine learning, decision support system, user behavior analysis, e-commerce, consumer behavior prediction, online stores
DOI: 10.26102/2310-6018/2026.55.4.021
In modern conditions, due to the unstable economic and political situation around the world, emergencies of various natures are becoming more frequent and large-scale. This is caused both by natural factors and man-made reasons, as well as by deliberate actions resulting from conflicts and sabotage, which necessitates the improvement of rapid response methods. Consequently, the relevance of developing automated decision support systems for effectively countering contemporary challenges and threats in the field of emergency consequence management is increasing. This paper describes a methodology for the effective management of a set of works and measures for emergency response, based on multi-criteria optimization methods. The following optimization criteria were chosen: efficiency (the ability to complete assigned tasks in the shortest possible time), availability (the ability to provide all ongoing work with resources in the required volume), and informativeness (the implementation of measures ensuring up-to-date and objective information about the current situation). Three models for performing the optimization and obtaining a Pareto-optimal solution are considered: the generalized objective function method, the criterion constraints method, and the method of successive concessions. The article provides the mathematical formulation and description of the models and presents an algorithm for selecting a model under different conditions.
Keywords: emergencies, decision making, threat response, multi-criteria optimization, mathematical modeling
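A minimal sketch of the method of successive concessions named in this abstract: criteria are minimized in priority order, and each already-optimized criterion is allowed to worsen by at most its concession δ. The toy two-variable criteria and the concession value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy criteria (to minimize): completion time, resource shortfall.
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2          # "time"
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2          # "resources"

def successive_concessions(criteria, deltas, x0):
    """Optimize criteria in priority order with concessions on earlier ones."""
    constraints, x = [], np.asarray(x0, float)
    for f, delta in zip(criteria, deltas + [0.0]):
        res = minimize(f, x, constraints=constraints)
        x = res.x
        bound = res.fun + delta                      # allowed worsening of f
        constraints.append({"type": "ineq",
                            "fun": (lambda z, f=f, b=bound: b - f(z))})
    return x

x_star = successive_concessions([f1, f2], deltas=[0.1], x0=[0.0, 0.0])
print(x_star, f1(x_star), f2(x_star))
```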
DOI: 10.26102/2310-6018/2026.55.4.015
Traffic jams often occur due to inefficient control of traffic lights at intersections, that is, because their settings are not sufficiently adapted to specific conditions. Research is currently active worldwide on applying reinforcement learning methods to optimize traffic flows at intersections, which underscores the urgency of the problem. The promise of reinforcement learning lies in its ability to control the dynamics of complex processes without human intervention. To maintain the efficiency and safety of vehicle movement in urban environments, traffic flows are controlled by traffic-light systems. The paper reviews the existing types of traffic flow management systems; the analysis revealed their strengths and weaknesses. The article proposes an intelligent control system based on the principles of reinforcement learning, supplemented by a neural network approximator. The network is a multilayer perceptron with two hidden layers and ReLU activation functions. The agent training process and the results of modeling the control system in the SUMO microscopic simulation environment are presented. The results are shown as a graph of agent training dynamics and as heat maps of intersections when simulating rush-hour traffic and an accident, before and after applying the control. The proposed system increases traffic throughput in the intersection network by 40 % during rush hour and by 25 % during traffic accidents. In addition, future prospects for its development are outlined.
Keywords: traffic flow, traffic management, reinforcement learning, neural networks, machine learning, adaptive management
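A minimal PyTorch sketch of the approximator this abstract specifies: a multilayer perceptron with two ReLU hidden layers mapping an intersection state vector to Q-values over traffic-light phases. The state dimension, hidden widths, and number of phases are illustrative assumptions.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """State -> Q-value per traffic-light phase."""
    def __init__(self, state_dim=16, n_phases=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phases),
        )

    def forward(self, state):
        return self.net(state)

q = QNetwork()
state = torch.rand(1, 16)            # e.g. queue lengths per approach lane
action = q(state).argmax(dim=1)      # greedy phase choice
# Training target (Q-learning): r + gamma * max_a' Q(s', a'), with the
# SUMO simulation supplying the (s, a, r, s') transitions.
print(action.item())
```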
DOI: 10.26102/2310-6018/2026.54.3.013
The relevance of the study is determined by the continuous growth of textual information in library information systems and the need to ensure fast and meaningful navigation across electronic collections under constrained computational resources. Existing automatic summarization solutions are primarily oriented toward large-scale language models, which limits their practical deployment within local library infrastructures. In this context, the paper aims to develop a resource-efficient method of semantic text reduction that balances the quality of semantic representation with computational feasibility. The proposed approach is based on a hybrid architecture that sequentially combines lexical reduction using word clouds with neural summarization performed by compact models. In addition, a context-oriented evaluation metric is introduced to assess relevance with regard to semantic coherence, structural characteristics, and domain-specific terms significant for the library environment. An experimental study conducted on a corpus of 1178 documents demonstrates that the hybrid approach improves relevance indicators while simultaneously reducing inference time compared to direct neural summarization of the full text. The obtained results confirm the practical applicability of the proposed method for library information systems operating under limited computational infrastructure and its usefulness for navigation and cataloging tasks.
Keywords: semantic text reduction, automatic summarization, word cloud, library information systems, hybrid text processing methods, neural models, relevance evaluation, Library Relevance Score
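A minimal sketch of the first, lexical stage of the hybrid architecture this abstract describes: word-cloud (frequency) scoring used to keep only the most content-bearing sentences before a compact neural summarizer runs. The tokenization, stop-word list, and retention ratio are illustrative assumptions.

```python
import re
from collections import Counter

STOP = {"the", "a", "of", "and", "in", "to", "is", "for"}  # tiny stand-in list

def lexical_reduce(text: str, keep_ratio: float = 0.4) -> str:
    """Score sentences by word-cloud (frequency) weight, keep the top share."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"\w+", text.lower()) if w not in STOP]
    freq = Counter(words)

    def score(s):
        toks = [w for w in re.findall(r"\w+", s.lower()) if w not in STOP]
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    n_keep = max(1, int(len(sentences) * keep_ratio))
    top = sorted(sorted(sentences, key=score, reverse=True)[:n_keep],
                 key=sentences.index)            # restore original order
    return " ".join(top)

# The reduced text is then passed to a compact neural summarizer.
print(lexical_reduce("Libraries index documents. Cats sleep. "
                     "Document indexing helps libraries serve readers."))
```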
DOI: 10.26102/2310-6018/2026.55.4.019
The relevance of this study is driven by the growing number and complexity of cyberattacks, the consequent need to continually improve organizations' security levels, and the ongoing planning and modeling of security strategies under limited resources. This work aims to develop a model for constructing an information security strategy for a given organization, taking economic indicators into account. The primary research methods are modeling, comparative analysis, and synthesis. The paper characterizes the simulated organizations, presents the formulas and algorithms used in the prototype, gives numerical values of criteria and parameters, and describes the relationships between the model parameters. As a result, the model's performance on the simulated organizations was demonstrated: optimal strategies were obtained for each of them, consistent with generally accepted approaches to strategy development in real companies. The resulting graphs of system states are shown. For all organizations, integrated strategies proved the most effective. In the short term, the use of a Markov decision process allows for the successful optimization of management decisions regardless of the company's maturity level. Allocating a large information security budget has a significant impact on efficiency only for companies with a low maturity level. The results are of practical value to information security specialists and managers, providing a tool for developing an optimal information security strategy within a given budget.
Keywords: Markov decision process, information security strategy, security strategy modeling, economic costs, strategy optimization
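A minimal sketch of a Markov decision process solved by value iteration, with states as security maturity levels and actions as strategy choices; the transition matrix, rewards, and discount factor are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

# States: 0 = low, 1 = medium, 2 = high security maturity.
# Actions: 0 = minimal spend, 1 = integrated strategy.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.1, 0.3, 0.6]],
    [[0.5, 0.4, 0.1], [0.1, 0.5, 0.4], [0.0, 0.2, 0.8]],
])
R = np.array([  # R[a, s]: expected reward (security benefit minus cost)
    [0.0, 1.0, 2.0],
    [-0.5, 1.5, 3.0],
])
gamma = 0.9

V = np.zeros(3)
for _ in range(500):                      # value iteration to a fixed point
    Q = R + gamma * (P @ V)               # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)                 # best action per maturity level
print(V, policy)
```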
DOI: 10.26102/2310-6018/2026.54.3.016
This project is dedicated to the development of an adaptive resource management system for containerised computer-aided design (CAD) applications using reinforcement learning. Modern CAD workloads are characterised by highly variable computing requirements, which makes traditional threshold-based auto-scaling mechanisms insufficient for maintaining performance and reliability in dynamic conditions. To address this issue, the proposed system compares classic Kubernetes pod scaling based on thresholds (HPA) with a Q-learning-based auto-scaling strategy applied to container clusters. The experimental setup is implemented as a simulation of a distributed containerised cluster and includes customisable workload models representing light, medium, heavy, and peak request patterns. System performance is evaluated using metrics such as response time, throughput, availability, cost-effectiveness, mean time to recovery, and false positive scaling events. A reinforcement learning agent monitors tracked system metrics and learns scaling policies that optimise long-term performance and stability through repeated interactions with the environment. The application interface allows users to control simulation parameters, including the number of policy runs, the number of episodes per run, and the number of steps per episode, as well as cluster configuration parameters such as the number of nodes and cores per node. The workload intensity can be adjusted to analyse system behaviour in different operating scenarios. This configuration allows for systematic evaluation of adaptive auto-scaling strategies and their impact on resource efficiency and fault tolerance in containerised CAD systems. The study represents a methodological innovation thanks to its interactive, experiment-based evaluation interface, which combines modelling and orchestration logic.
Keywords: adaptive resource management, experimental setup, containerized cluster, workloads, Kubernetes, classic threshold-based pod autoscaling (HPA), autoscaling strategy, Q-learning
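A minimal sketch of a tabular Q-learning autoscaler of the kind this abstract compares with HPA: states discretize cluster utilization, actions scale the pod count down, hold, or up. The discretization, reward shaping, and learning constants are illustrative assumptions.

```python
import random
import numpy as np

random.seed(0)
N_UTIL_BINS, ACTIONS = 10, (-1, 0, +1)       # scale down / hold / scale up
Q = np.zeros((N_UTIL_BINS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def choose(state):
    if random.random() < eps:                 # epsilon-greedy exploration
        return random.randrange(len(ACTIONS))
    return int(Q[state].argmax())

def update(s, a, reward, s_next):
    """One-step Q-learning update."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

def reward(util, pods):
    # Penalize SLO-threatening utilization and over-provisioning cost (assumed).
    return -abs(util - 0.6) - 0.05 * pods

# In the simulator: observe the utilization bin, act, step the workload model,
# then call update() with the resulting reward and next state.
s = 7                                          # e.g. 70-80 % utilization
a = choose(s)
update(s, a, reward(0.75, pods=5), s_next=6)
print(ACTIONS[a], Q[s])
```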
DOI: 10.26102/2310-6018/2026.54.3.018
This article is devoted to a relevant scientific field – interpretable machine learning. Previously, the author introduced the concept of "fully interpretable linear regression", which is constructed using ordinary least squares on the entire set of statistical data. In this article, this concept is generalized to segmented linear regression, in which the data is first divided into segments and a separate linear regression is then constructed on each of them. An algorithm for constructing fully interpretable segmented linear regressions has been developed. Its peculiarity is that, first, the division of the predictor space into segments is carried out using logical activation functions for the arguments of binary min operations; second, a pairwise regression is constructed in each segment, which completely eliminates the problem of multicollinearity. Using the developed algorithm, a segmented linear regression of concrete compressive strength was constructed from a sample of 1030 observations. In all eight of its segments, the coefficients of determination do not exceed 0.8, which indicates the presence of unaccounted-for factors, so the constructed model cannot be strictly classified as fully interpretable. However, all other interpretability conditions are met. In addition, the segmented model proved much better in approximation quality than simple linear regression.
Keywords: regression analysis, interpretability, segmented linear regression, ordinary least squares, multicollinearity, significance of estimates
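A minimal sketch of segmented linear regression in the spirit of this abstract: the predictor space is split into segments and an ordinary-least-squares pairwise regression is fitted in each. A single-threshold split rule stands in for the paper's min-based logical activation functions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.where(x < 5, 2 * x + 1, -x + 16) + rng.normal(0, 0.5, 200)

def fit_pairwise_ols(x_seg, y_seg):
    """OLS for y = b0 + b1 * x on one segment (a pairwise regression)."""
    A = np.column_stack([np.ones_like(x_seg), x_seg])
    beta, *_ = np.linalg.lstsq(A, y_seg, rcond=None)
    return beta

def r_squared(x_seg, y_seg, beta):
    resid = y_seg - (beta[0] + beta[1] * x_seg)
    tss = (y_seg - y_seg.mean()) @ (y_seg - y_seg.mean())
    return 1 - resid @ resid / tss

threshold = 5.0                       # stand-in for the logical split rule
for name, mask in [("x < 5", x < threshold), ("x >= 5", x >= threshold)]:
    beta = fit_pairwise_ols(x[mask], y[mask])
    print(name, beta.round(2), round(r_squared(x[mask], y[mask], beta), 3))
```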
DOI: 10.26102/2310-6018/2026.54.3.012
The article provides a comprehensive systematic analysis of modern deep learning architectures for automatic segmentation of multiphase CT images. The specific features of multiphase data are considered in detail, chief among them spatial mismatches (offsets) between phases caused by patient movements and the differing patterns of contrast agent accumulation in pathological tissues across phases. These features make direct adaptation of classical segmentation methods ineffective and require the development of specialized architectures. The article traces the evolution of approaches: from basic convolutional networks (U-Net, 3D U-Net, nnU-Net) and hybrid models (TransUNet, UNETR) combining convolutions and transformers to specialized solutions. Special attention is paid to models with cross-attention mechanisms between phases, such as PA-ResSeg, M3Net and MULLET, which allow for implicit alignment of features and adaptive merging of information from different phases without explicit image registration (alignment). The paper also analyzes the comparative advantages of various strategies for fusing data from different phases (early, late, cross-interaction) and discusses issues of computational efficiency and the availability of open datasets. Key trends and promising directions for the field have been identified, including the use of foundation models (MedSAM, VoxTell) and modality-agnostic learning. It is concluded that further progress in multiphase CT image segmentation depends on the creation of computationally efficient architectures capable of integration into real clinical workflows to support diagnostic decision-making.
Keywords: hybrid architectures, image segmentation, attention mechanisms, multiphase CT, feature fusion, medical imaging, deep learning, computer vision, PA-ResSeg, M3Net
DOI: 10.26102/2310-6018/2026.53.2.009
The relevance of this study is determined by the fact that, in road-infrastructure monitoring platforms, errors at the stage of detection and interpretation of object conditions can propagate into normative and managerial decision errors, especially under real-world acquisition conditions (shadows, glare, wet/snow-covered pavement, contamination, and ambiguous defect boundaries), where the risk of misclassification and inaccurate localization increases. This is critical for threshold-based normative assessment, since even small inaccuracies may change the condition category and, consequently, lead either to unjustified maintenance assignments or to missing hazardous defects. Therefore, this paper investigates the use of detection uncertainty for road-surface defect monitoring within a multi-agent pipeline, where observation results are transferred between components together with the processing context via the Model Context Protocol as a unified mechanism for exchanging events, metadata, and interpretation parameters. The main approach is to build a computational pipeline that includes video-data preprocessing, defect detection, computation of the uncertainty indicator H(p) from the class-probability distribution, assignment of the status "automatic/validation/refinement", subsequent normative interpretation, and aggregation over road-network segments. To ensure reproducibility, each run is recorded as a unified "experiment context" (scene/frame identifier, model version, threshold parameters, decision status), enabling comparable mode-to-mode evaluation and auditing of discrepancy causes. Verification is based on comparing normative decisions with expert assessment and analyzing how the share of erroneous normative decisions depends on the automatic-decision threshold for H(p), while the risk-oriented logic routes high-uncertainty detections to validation and reduces the probability of errors in borderline cases. The results show that context logging via Model Context Protocol and accounting for H(p) improve experimental reproducibility and the soundness of normative interpretation, decreasing the risk of incorrect maintenance prioritization by separating ambiguous observations and preserving the decision rationale.
Keywords: multi-agent system, road surface monitoring, road surface defects, computer vision, detection uncertainty, normative interpretation, context logging
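A minimal sketch of the uncertainty gate this abstract describes: compute H(p) from a detection's class-probability vector and route it to automatic processing, validation, or refinement. The two routing thresholds are illustrative assumptions.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) of a class-probability vector, in nats."""
    p = np.asarray(p, float)
    p = p / p.sum()
    return float(-(p * np.log(p + eps)).sum())

def route(p, t_auto=0.3, t_valid=0.8):
    """Status by uncertainty: low H -> automatic, mid -> validation, high -> refinement."""
    h = entropy(p)
    if h < t_auto:
        return "automatic", h
    return ("validation", h) if h < t_valid else ("refinement", h)

print(route([0.95, 0.03, 0.02]))   # confident detection -> automatic
print(route([0.5, 0.3, 0.2]))      # ambiguous detection -> manual handling
```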
DOI: 10.26102/2310-6018/2026.54.3.008
In the context of accelerated growth of heterogeneous textual data volumes, universal approaches to information extraction that are independent of the specific structure and domain of source texts have become particularly important. Despite the widespread adoption of large generative language models, the problem of accurate and resource-efficient information extraction from textual data remains relevant. While possessing broad capabilities, generative models are often excessive for specialized information retrieval tasks and may demonstrate low interpretability of results. This study is part of research work aimed at developing an alternative method for information extraction from unstructured texts to form a structural model of a text document. The proposed approach focuses on identifying semantically rich text fragments through relevance analysis relative to given thematic aspects of the text. This research presents an information extraction method using an extractive question answering model, based on multi-level answer aggregation combining strategies for assessing text fragment relevance, semantic clustering, and final answer selection for a given question. The proposed approach enables identification of words in the text that are most relevant to the target thematic aspects, which can subsequently be used to extract reliable information from the document. The article presents experimental results confirming the effectiveness of the proposed method in identifying semantically relevant elements of a text document. The obtained results have practical value for developing automated systems of text semantic structure construction and can be applied in document analysis, information retrieval, and intelligent text processing tasks.
Keywords: natural language processing, information extraction, unstructured text, question-answering model, self-attention mechanism
DOI: 10.26102/2310-6018/2026.55.4.013
In most modern unmanned aerial vehicles (UAVs), global navigation satellite systems (GNSS) are used as the main means of determining spatial position. However, civilian navigation signals are received at low power, leaving little margin against interference, and are vulnerable to deliberate radio frequency effects at the physical level, such as signal suppression and substitution, which can lead to loss of the navigation solution or the formation of false coordinates. The purpose of this work is an experimental analysis of the resilience of UAV navigation receivers to deliberate radio frequency effects and an assessment of the influence of interfering signal parameters on the reliability of receiving GNSS navigation information. As part of the study, the frequency and signal characteristics of the GPS, GLONASS, Galileo and BeiDou systems were analyzed, and experimental measurements of the signal-to-noise ratio C/N₀ were performed under barrage interference of various power levels and source placement geometries. Additionally, the effect of shielding the navigation receiver was investigated, and an asynchronous attack using software-defined radio devices was implemented. As a result, it was found that a decrease in C/N₀ below 25–28 dB·Hz leads to a loss of stable navigation reception, regardless of the navigation system used. It is shown that low-power interference sources can disrupt UAV navigation at distances of up to several hundred meters, and that shielding the receiver reduces the effectiveness of interference but does not provide complete protection.
Keywords: unmanned aerial vehicles, global navigation satellite systems, navigation receivers, radio frequency interference, navigation stability
DOI: 10.26102/2310-6018/2026.54.3.009
A computational method for semantic image segmentation with distributional uncertainty estimation is proposed based on representing the prediction as a Dirichlet distribution field. Unlike approaches that require multiple stochastic inference runs (MC dropout) or averaging over an ensemble of independent models, the method computes uncertainty maps in closed form from the Dirichlet field parameters predicted in a single forward pass of the neural network. The method is formulated as the minimization of a composite functional including the expected logarithmic loss (expected log-loss), KL regularization for controlling the distribution concentration, and spatial smoothing that takes into account local image intensity variations (edge-aware). For fixed smooth fields, the asymptotic discretization accuracy of the spatial regularizers used is established: the discrete Dirichlet energy approximates the corresponding continuous integral with a first-order error in the grid step. Additionally, a formal decomposition of the overall uncertainty into epistemic and data-supported components is introduced, which can be used in further analysis of the method's behavior and in the development of extensions. Computational experiments were performed on three medical image datasets (ACDC, Synapse, CHAOS) with 10 independent initializations. In the main comparison with the baseline model trained using cross-entropy, the differences are statistically significant across initializations on all datasets; for ACDC, significance at the patient level was further confirmed. The method improves segmentation quality and the calibration of probability estimates with a computational overhead of approximately 17 %. In the task of detecting pixel-level segmentation errors, the uncertainty map achieves an AUROC of 0.891.
Keywords: image segmentation, neural network methods, Dirichlet distribution, uncertainty estimation, calibration, Dirichlet energy, edge-aware regularization, asymptotic sampling accuracy
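A minimal sketch of closed-form uncertainty from a predicted Dirichlet field, in the spirit of this abstract: per-pixel total (predictive) entropy and its epistemic part (mutual information) computed directly from the concentration parameters α in a single pass, using standard Dirichlet identities. The α values and shapes are illustrative assumptions.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainty(alpha):
    """alpha: (..., K) concentration field. Returns (total, aleatoric, epistemic)."""
    a0 = alpha.sum(axis=-1, keepdims=True)
    p = alpha / a0                                   # predictive class probabilities
    total = -(p * np.log(p)).sum(axis=-1)            # entropy of the mean
    # Expected entropy of Cat(p) with p ~ Dir(alpha)  (data-supported part):
    aleatoric = -(p * (digamma(alpha + 1) - digamma(a0 + 1))).sum(axis=-1)
    epistemic = total - aleatoric                    # mutual information
    return total, aleatoric, epistemic

# One "pixel" with strong evidence vs one with weak evidence, K = 3 classes.
alpha = np.array([[30.0, 2.0, 2.0],
                  [1.1, 1.0, 1.0]])
for name, u in zip(("total", "aleatoric", "epistemic"),
                   dirichlet_uncertainty(alpha)):
    print(name, u.round(3))
```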
DOI: 10.26102/2310-6018/2026.55.4.003
The sharp increase in the burden on healthcare systems during the COVID-19 pandemic exposed the inefficiency of traditional, formula-based methods of calculating labor productivity. They do not take into account the dynamics of work processes or problems in planning labor resources, equipment, and floor space. This leads to inefficient load distribution, especially when, as in clinical laboratories, thousands of samples had to be processed for PCR testing every day. The aim of the research is to develop and analyze a method for workload planning using simulation modeling in AnyLogic, which allows laboratory processes to be visualized and optimized. The tasks include an analysis of existing approaches, a description of the methodology, its application to a PCR laboratory, and an assessment of the benefits during a pandemic. The proposed approach includes time-and-motion study of technological processes, data collection in tabular form, and creation of a digital laboratory model to identify bottlenecks and equipment and personnel downtime. Using the example of a PCR laboratory, the possibility of optimizing resources, calculating maximum productivity, and justifying purchases is demonstrated. The method makes it possible to increase the efficiency of laboratory production under unpredictable demand, minimizing the risks of disruptions and financial losses.
Keywords: simulation modeling, AnyLogic, workload planning, laboratory production, COVID-19 pandemic