DOI: 10.26102/2310-6018/2025.51.4.034
The article presents the development of a model for generating and adapting an individual educational plan for distance learning based on the requirements of the Federal State Educational Standard (FSES) and the student model. A systematic approach with a multi-agent structure is used, including subsystems for initializing the student model, planning the educational process, and evaluating the assimilation of material. The student model takes into account preferences, goals, and progress, which makes it possible to adapt the educational plan based on the competencies being formed and the requirements of the standard. Quantitative characteristics of the educational process are formalized, such as the time required to study, the form of final control, and the coefficients of similarity and discrepancy between student preferences and FSES requirements. The author applied an optimization model to minimize time and maximize the effectiveness of the plan, taking into account the connectivity of courses represented by a directed graph. Special attention is paid to conflicts of interest between the individual preferences of the student and the mandatory requirements of the FSES. The results of the research show that it is possible to effectively create an adaptive educational plan that meets both regulatory requirements and the individual characteristics of students, which contributes to improving the quality of distance education. In conclusion, criteria for effective modeling of the educational plan are formulated.
Keywords: competencies, subsystem, curriculum, educational plan, adaptation, educational standards, student preferences, FSES requirements
DOI: 10.26102/2310-6018/2025.51.4.043
This study focuses on analyzing the content of organizational websites with textual documents in order to support decision-making in the management of educational programs of a higher education organization. The presence of textual documents on an organization's website is one of the key criteria for assessing website effectiveness. These effectiveness criteria, in turn, are determined by the type of website and the type of organization that created and maintains it. The paper examines websites of higher education institutions and their specific characteristics. One such characteristic is the necessity of having curricula (working programs) available in the form of textual documents. Besides being a mandatory requirement, these curricula serve as informational materials for prospective students, thereby increasing the value of such information. Analyzing the availability and content of curricula can help address various management tasks; however, this requires designing and developing a tool to verify the presence of curricula. To solve the problem of verifying the availability and analyzing the content of curricula, an information system was designed and developed. The design phase involved creating an IDEF0 context diagram, a decomposed IDEF0 diagram, and an action (use case) diagram. The context diagram defined the system, inputs, outputs, controls, and mechanisms of the information system. The decomposed diagram includes the following modules: web parsing, document processing, curriculum analysis, data integration, and data export. The action diagram identifies the following actors: administrator, external website, database, visualization system, and includes the following use cases: website parsing, document processing, curriculum analysis, data integration, data export, and data visualization. 
The implementation of the information system enabled the creation of comprehensive dashboards for educational organizations, faculty-level dashboards, and department-level dashboards. The results of the system’s operation support managerial decision-making based on information about the availability of curricula on educational institution websites.
Keywords: information system, website analysis, work programs of disciplines, higher education institutions, website parsing, document processing, IDEF0 diagrams, data visualization
DOI: 10.26102/2310-6018/2025.51.4.022
The paper proposes an innovative approach to managing factoring applications based on multi-agent ontological clustering with a feedback mechanism. Unlike traditional clustering methods, the proposed approach takes into account not only the numerical parameters of applications but also their semantic proximity, defined using ontologies. The system is implemented through the interaction of autonomous application agents and cluster agents, between which a two-way message exchange with an extended negotiation protocol is carried out. This allows agents to adaptively join existing clusters, create new ones, or reorganize existing ones to maintain internal semantic homogeneity. A distinctive feature of the proposed method is the built-in mechanism for automatic adjustment of rejected applications by selecting the closest approved analogues within semantically homogeneous clusters. This significantly increases the adaptability and efficiency of decision-making in factoring systems. The comparison with classical clustering algorithms showed that the proposed approach surpasses them in terms of flexibility, noise resistance, and the ability to take into account semantic relationships between data. The proposed methodology opens up wide prospects for practical application in banking, insurance, and government systems, where not only the accuracy of data analysis is important, but also the possibility of justified recommendations for adjusting and improving applications.
Keywords: multi-agent systems, factoring, ontology, clustering, feedback, semantic analysis
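The assignment rule described in this abstract (an application agent joins the semantically closest cluster or founds a new one) can be illustrated with a minimal sketch. The ontology, distance weights, and threshold below are hypothetical stand-ins for the paper's negotiation protocol, and cluster centroids are not updated after joining, a deliberate simplification:

```python
# Toy ontology of factoring-application categories: child -> parent.
# Purely illustrative; the real system would use a domain ontology.
PARENT = {"invoice_sme": "invoice", "invoice_corp": "invoice",
          "invoice": "factoring", "supply_chain": "factoring",
          "factoring": None}

def depth(c):
    d = 0
    while PARENT[c] is not None:
        c, d = PARENT[c], d + 1
    return d

def semantic_distance(a, b):
    """Number of ontology edges to the lowest common ancestor."""
    ancestors = set()
    x = a
    while x is not None:
        ancestors.add(x)
        x = PARENT[x]
    x, up = b, 0
    while x not in ancestors:
        x, up = PARENT[x], up + 1
    return (depth(a) - depth(x)) + up

def combined_distance(app, centroid, w_sem=0.3):
    """Mix of numeric proximity (scaled amounts) and semantic proximity."""
    numeric = abs(app["amount"] - centroid["amount"]) / 1e6
    return numeric + w_sem * semantic_distance(app["category"], centroid["category"])

def assign(app, clusters, threshold=1.0):
    """Join the closest cluster within the threshold, else found a new one
    (a simplified stand-in for the two-way negotiation protocol)."""
    if clusters:
        best = min(clusters, key=lambda c: combined_distance(app, c))
        if combined_distance(app, best) <= threshold:
            return best
    clusters.append(dict(app))
    return clusters[-1]

clusters = []
a1 = assign({"amount": 200_000, "category": "invoice_sme"}, clusters)
a2 = assign({"amount": 250_000, "category": "invoice_corp"}, clusters)  # joins a1's cluster
a3 = assign({"amount": 900_000, "category": "supply_chain"}, clusters)  # too far: new cluster
```

Because the distance blends numeric and semantic terms, two invoice applications with similar amounts land in one cluster while a supply-chain application founds its own.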
DOI: 10.26102/2310-6018/2025.51.4.020
The article presents a methodology for assessing the alignment between the content of educational programs and labor market requirements using intelligent text analysis tools. It addresses the issue of mismatch between university-acquired competencies and the actual needs of employers, especially in the context of rapid digitalization and economic transformation. The study substantiates the need to move from manual expert procedures to automated monitoring based on natural language processing models and ontological modeling. The proposed decision support system integrates the RuBERT model, the ESCO ontology, and the RCA metric, enabling the identification of gaps between curricula and job postings, data visualization, and the formulation of recommendations for curriculum adjustments. A practical case is presented, applying the methodology to a training program in the field of information security. The results demonstrate high accuracy in detecting mismatches and confirm the potential of using the system in the design and adaptation of educational programs. The scientific novelty lies in the comprehensive approach to competency analysis, combining linguistic and ontological methods with economic metrics. The methodology can be scaled to other industries and levels of education.
Keywords: graduate competencies, labor market, educational program, intelligent decision support system, ruBERT, ESCO ontology, vacancy analysis, RCA metric, monitoring automation, ontology gap
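The gap-detection step described in this abstract (matching job-posting skills against curriculum content) can be sketched as follows. Bag-of-words cosine similarity here is a toy stand-in for the RuBERT sentence embeddings used in the paper, and the topic and skill strings, as well as the threshold, are illustrative assumptions:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a stand-in for transformer embeddings."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def coverage_gaps(curriculum_topics, vacancy_skills, threshold=0.3):
    """For each demanded skill, find its best-matching course topic;
    skills whose best match falls below the threshold are reported
    as gaps in the educational program."""
    gaps = []
    for skill in vacancy_skills:
        best = max(cosine(embed(skill), embed(t)) for t in curriculum_topics)
        if best < threshold:
            gaps.append(skill)
    return gaps

topics = ["network security fundamentals", "cryptographic protocols",
          "secure software development"]
skills = ["network security monitoring", "cloud incident response"]
gaps = coverage_gaps(topics, skills)  # only the uncovered skill remains
```

In the full system the same comparison would run over ESCO-normalized skill labels, with the RCA metric aggregating the per-skill results.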
DOI: 10.26102/2310-6018/2025.51.4.019
The rapid evolution of cyber threats and their increasing sophistication necessitate the critical integration of machine learning methods into web application protection systems. This study presents a comprehensive analysis of modern approaches to applying machine learning algorithms within Web Application Firewall (WAF) architectures, with a focus on enhancing zero-day attack detection efficacy. The methodological framework of the research involves a comparative performance analysis of ensemble methods, deep learning, and transformer architectures on the standardized CSIC 2010 and CIC-IDS2017 datasets. The empirical basis of the study comprised 2,847,372 HTTP requests analyzed using 14 different machine learning algorithms between June and December 2024. The results demonstrate the superiority of hybrid LSTM-Transformer architectures, achieving an accuracy of 98.73% for SQL injection detection and 97.84% for XSS attacks, which exceeds the performance of traditional signature-based methods by 23.7%. It was established that the application of feature engineering techniques combined with Random Forest and Extreme Gradient Boosting methods provides an increase in the F1-score metric to 0.989 while reducing request processing time by a factor of 18 compared to rule-based engines. The practical significance of the research lies in the development of an adaptive WAF architecture capable of automatic real-time adjustment of detection parameters in response to the evolving threat landscape. The theoretical contribution of the work consists of the formalization of principles for integrating self-attention mechanisms into HTTP traffic analysis tasks and the justification of optimal multi-head attention configurations for different types of web attacks.
Keywords: machine learning, web application firewall, deep learning, transformer architectures, anomaly detection, cybersecurity, ensemble methods
DOI: 10.26102/2310-6018/2025.51.4.031
The relevance of the study is due to the growing need for high-resolution images in fields such as agriculture, architecture, transportation, environmental monitoring, etc. A promising method for generating high-resolution images is based on keypoint matching and contour analysis using a low-resolution reference image, which reduces hardware requirements. At the same time, the use of a group of mobile objects makes it possible to reduce the time required to obtain images of dynamic scenes, which significantly expands the possibilities of using this method. In this approach, each object receives one or more parts of the final image, which are then "stitched" together. However, mobile objects often have limited computational resources, which significantly reduces the applicability of this approach. Therefore, this article focuses on developing algorithms for the joint processing of information by a group of mobile objects using the aforementioned method. The paper presents the results of a study of the effectiveness of these algorithms, both in a sequential mode on a single mobile object and in a distributed mode with the cooperation of a group of objects. The experimental studies also included a test of the stability of the parallel implementation of the method to various types of distortion: noise, blur, and geometric deformations. The results showed that the parallel implementation of the method of forming a high-resolution image based on the alignment of fragments by key points and the analysis of contours using a reference low-resolution image provides high-quality high-resolution images, resistance to distortion, and a significant reduction in processing time in a group mode. The article's materials are of practical value for developers of real-time collaborative mapping systems, for the inspection of long or complex objects using groups of robots, and for photogrammetry and 3D terrain modeling tasks.
Keywords: information processing, image generation, high resolution, mobile objects, distributed processing, image stitching, low resolution template, key points, contour analysis
DOI: 10.26102/2310-6018/2025.51.4.037
Mathematical remodeling is a modern approach in the field of mathematical modeling, the essence of which is the transformation of an existing model of one class into a new model belonging to a different, often simpler or computationally more efficient class. Unlike the classical modeling process, where a model is created "from scratch" based on primary data, remodeling starts from the premise that there already exists some adequate initial model f1 that describes an object or process accurately enough. However, this model may be too complex for practical application, require significant computational resources, or be presented in a form that is inconvenient for further use, for example, in real-time systems or on devices with limited performance. The key task in the remodeling process is the generation of a representative training dataset on which the new model f2 will be built. The accuracy and adequacy of the newly obtained model directly depend on the quality and structure of this synthesized dataset. Traditional generation methods, such as uniform random distribution of points in a given domain or using design of experiments methods, often prove to be ineffective: they either do not account for the behavioral features of the original function or become computationally infeasible in high-dimensional problems. Consequently, there is a need to develop intelligent algorithms for adaptive data generation that could purposefully place points in those regions of the input variable space where the original function f1 demonstrates the greatest variability and nonlinearity. This work is devoted to the development and research of precisely such an approach, based on the principles of interval analysis and sequential bisection of the domain. This allows for the optimal distribution of a limited volume of generated data and significantly improves the accuracy of mathematical remodeling.
Keywords: mathematical modeling, remodeling, data generation, interval analysis, numerical methods
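The adaptive generation strategy described in this abstract (bisecting the domain where the original model f1 varies most) can be sketched in one dimension. The spread over a few probe points stands in for a true interval extension of f1, and the budget, probe count, and test function are illustrative assumptions:

```python
import math

def adaptive_bisection_samples(f, lo, hi, budget=64, probes=5):
    """Adaptively place sample points where f varies most.

    'Variability' of f on an interval is estimated as the spread
    (max - min) over a few probe points - a crude surrogate for an
    interval-analysis bound on f.
    """
    def spread(a, b):
        vals = [f(a + (b - a) * i / (probes - 1)) for i in range(probes)]
        return max(vals) - min(vals)

    # Start with the whole domain as a single interval.
    intervals = [(spread(lo, hi), lo, hi)]
    while len(intervals) < budget:
        # Bisect the interval with the largest estimated variability.
        intervals.sort(reverse=True)
        _, a, b = intervals.pop(0)
        m = (a + b) / 2
        intervals.append((spread(a, m), a, m))
        intervals.append((spread(m, b), m, b))
    # Interval midpoints become the training points for the new model f2.
    return sorted((a + b) / 2 for _, a, b in intervals)

# f1 oscillates rapidly near x = 0, so samples should cluster there.
pts = adaptive_bisection_samples(lambda x: math.sin(1.0 / (x + 0.1)), 0.0, 1.0)
dense = sum(1 for p in pts if p < 0.5)  # most points fall in the volatile half
```

A uniform grid of the same budget would spend half its points on the smooth right half of the domain; the bisection loop instead concentrates them where f1 is most nonlinear.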
DOI: 10.26102/2310-6018/2025.51.4.014
The article is devoted to the study of the possibility of using machine learning methods to solve the problem of classifying buildings and structures by their functional purpose based on geospatial data. The problem of determining building and structure types in real conditions with limited initial data is outlined. Existing approaches to solving the problem of classifying objects are considered. A new dataset was created, which includes about 66 thousand objects of various functional purposes in the territory of the Russian Federation. The stages of data preparation, feature extraction, and the process of normalizing the objects' geometries on the map are considered. Experiments were conducted using machine learning methods, including artificial intelligence methods. The research results show that the maximum classification accuracy using a graph neural network is 83%, which makes the proposed approach promising for practical applications in geographic information systems. A number of factors have been identified that reduce the classification accuracy, associated with the insufficiency of geometric information and the shape details common for buildings of certain categories in real development conditions. Recommendations are given for improving the classification accuracy by optimizing the neural network architecture and expanding the feature set. Thus, the article proposes an effective approach to the automated classification of buildings and structures based on the analysis of geometric properties and the environment, which can significantly facilitate the processes of design and infrastructure management.
Keywords: classification, machine learning, coordinate transformation, geometric characteristics, geographic information system, random forest method, graph neural network, GIS
DOI: 10.26102/2310-6018/2025.51.4.018
A comparative analysis of existing methods for snow avalanche modeling – physical, simulation, and numerical approaches based on continuum mechanics – is presented. Their assumptions, limitations, and application features, which hinder accurate prediction of snow mass dynamics and its interaction with obstacles under natural conditions, have been identified. It has been shown that the further development of avalanche hazard forecasting and emergency response methods is associated with the use of intelligent decision-support information systems that should possess high scalability, the ability to process large data volumes, and a flexible architecture that allows integration of new modules for modeling, analysis, and data visualization. To address the problem of three-dimensional avalanche flow modeling, a hybrid approach is proposed that combines the advantages of physical and simulation models, ensuring computational efficiency and adaptability of the method to various avalanche formation conditions. A model of snow mass movement has been developed, based on a modified numerical method of smoothed particle hydrodynamics (SPH). A distinctive feature of the method is the use of dimensionless adjustable coefficients instead of constant physical parameters of snow and the application of a hyperbolic smoothing function, which increases the stability and accuracy of numerical calculations while preventing nonphysical particle clustering during compression. The performed computational experiments confirmed that the proposed model adequately describes the motion of snow masses, makes it possible to assess the intensity of their interaction with infrastructure objects, and allows prediction of potential destructive effects in avalanche-prone areas.
Keywords: snow avalanches, mathematical modeling, smoothed particle hydrodynamics, information system, simulation
DOI: 10.26102/2310-6018/2025.51.4.012
This article presents an innovative mathematical model for generating 12-lead electrocardiograms (ECG), based on a fundamentally novel approach to accounting for spatial dependencies between leads. The primary scientific contribution of this research lies in the development of a method utilizing linear transformation of a set of physiologically grounded basis signals representing projections of the heart's electric field, supplemented with correlated noise that accurately simulates real clinical interference. Unlike traditional generative models (VAE, GAN, Diffecg), which operate as "black boxes", the proposed model enables explicit control over the morphology of key waveforms (P, QRS, T) and strict adherence to physiological constraints, including Kirchhoff's laws for limb leads. This ensures anatomical consistency of signals across all 12 leads, an achievement not previously attained in similar studies. The model demonstrated high performance on the PhysioNet PTB-XL dataset: MSE = 0.015, cosine similarity = 0.94, F1-score = 0.88 for normal rhythms and 0.82 for arrhythmias. A significant advantage of the model is its computational efficiency (generation time 50 ms) and relatively low memory requirements (2.5 GB). Comparative analysis with contemporary generative models (VAE, GAN, CardioDiff) revealed the superiority of the proposed approach in interpretability, parameter control, and physiological authenticity of synthesized signals. The developed model opens new possibilities for creating high-quality synthetic ECG data essential for training AI-based medical diagnostic systems, as well as for applications in telemedicine and medical education. The integration of physical modeling with machine learning presents particular value for researchers and clinicians requiring interpretable and clinically reliable ECG generation tools.
Keywords: electrocardiogram, spatial dependencies, generative models, interpretability, physiological modeling, synthetic ECG data, machine learning in cardiology
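The core idea of this abstract (limb leads obtained by linear transformation of basis signals so that Einthoven's relation III = II - I, a consequence of Kirchhoff's laws, holds by construction) can be sketched as follows. The wave timings, widths, and projection coefficients are illustrative assumptions, not the paper's calibrated parameters, and the correlated-noise term is omitted:

```python
import math

def gaussian(t, center, width, amp):
    """One physiologically shaped wave (P, QRS, or T) as a Gaussian bump."""
    return amp * math.exp(-((t - center) ** 2) / (2 * width ** 2))

def lead_I_II(t):
    """Hypothetical basis signal: projections of the cardiac electric
    vector onto leads I and II (amplitudes and timings are illustrative)."""
    waves = [(0.20, 0.025, 0.15),   # P wave
             (0.40, 0.010, 1.00),   # QRS complex
             (0.65, 0.040, 0.30)]   # T wave
    s = sum(gaussian(t, c, w, a) for c, w, a in waves)
    return 0.8 * s, 1.1 * s        # two linear projections of one source

def limb_leads(t):
    """Derive the remaining limb leads linearly, so Einthoven's relation
    III = II - I (Kirchhoff's voltage law) holds exactly."""
    I, II = lead_I_II(t)
    III = II - I
    aVR = -(I + II) / 2
    aVL = I - II / 2
    aVF = II - I / 2
    return {"I": I, "II": II, "III": III, "aVR": aVR, "aVL": aVL, "aVF": aVF}

leads = limb_leads(0.40)                                  # sample at the QRS peak
check = abs(leads["III"] - (leads["II"] - leads["I"]))    # zero by construction
```

Because every lead is a fixed linear combination of the same basis signal, anatomical consistency across leads is guaranteed rather than learned, which is the contrast the abstract draws with black-box generators.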
DOI: 10.26102/2310-6018/2025.51.4.015
A comprehensive comparative study of several machine learning algorithms for predicting customer churn in an insurance company was conducted using data from an open dataset. Both predictive quality metrics and computational efficiency were examined. The topic is relevant due to intense competition in the insurance market and the substantial costs of losing customers; early detection of a customer's intention to leave enables targeted retention actions. The aim of the study is to assess the accuracy and performance of different machine-learning models capable of predicting churn. The experiments used open data on insurance customers (life-insurance industry) containing features that describe claim events, historical records, and the churn outcome. The study also included factor analysis: correlations between features and the target variable were investigated, and the importance of features related to churn was evaluated. The results show that most models achieved similarly high predictive quality due to the presence of a dominant churn-risk factor, but differed in performance: logistic regression and gradient boosting trained an order of magnitude faster than support vector machines and random forests while using substantially less memory. These findings confirm that modern ensemble algorithms can provide high-accuracy churn prediction at reasonable resource costs. Their use is advisable for insurers to promptly identify high-risk clients, such as those with large claims, and to take proactive measures to retain them.
Keywords: customer churn, insurance, machine learning, prediction, model accuracy, model performance, factor analysis, feature importance
DOI: 10.26102/2310-6018/2025.51.4.050
The paper studies the creation of models for testing coastal objects of navigation safety systems in the medium and very high frequency ranges when it is not possible to directly ensure the worst-case test conditions and ship equipment parameters, while still ensuring that such tests comply with the definition of the term "verification test in site". The test object, the test equipment, and the test conditions, together with their distribution between the ship side and the shore station side, are determined. The criteria for confirming the boundaries of working zones are defined in the form of field strength on the ship side and electromotive force on the shore station side. Mathematical models are proposed and analyzed that take into account the worst-case values of test conditions caused by external factors and the worst-case values of permissible parameters of ship equipment for the main technical means whose use ensures that the tests comply with the definition of the term "verification test in site". The need is noted to introduce requirements into the relevant regulatory documents regarding the percentage of required availability of technical means in the very high frequency range and the admissibility of tests that do not fully correspond to the definition of the term "verification test in site" when there is no organizational or technical capability to send a test vessel to the boundaries of the working zones.
Keywords: full-scale tests, coast object, marine radio communication, boundary range, coverage area, signal-to-noise ratio, electromagnetic field strength, actual sensitivity of a radio receiver, medium frequencies, very high frequencies
DOI: 10.26102/2310-6018/2025.51.4.033
The paper presents the architecture and implementation of a client-server solution that provides mobile access to university educational data. The aim of the work was to improve the efficiency of a number of educational activity processes that involve working with educational data such as class schedules, current and midterm assessment results, information on teachers and the study group, and a news portal. The project not only works around a number of shortcomings inherent in the university's current IT infrastructure, but also creates a basis for further digital transformation of the educational process. The paper describes the architecture of the solution and its implementation features, including the choice of technology stack. The main components of the application architecture are considered in detail, such as the gRPC bridge for integration with legacy systems, the REST API for interaction with external services, and a monitoring system based on Prometheus and Grafana. The use of modern technologies such as Golang, PostgreSQL, gRPC, and Yandex Cloud services ensured high performance, scalability, and security. Integration with existing university systems via the gRPC bridge ensured compatibility and efficient data exchange. The results of the application implementation in the real IT infrastructure of the university are presented, demonstrating increased accessibility and convenience of working with educational data and a reduction in system load. The developed approach can be successfully adapted and applied to other higher education institutions. In the future, it is planned to expand the functionality of the application through integration with AI algorithms for predicting academic risks and forming individual educational trajectories.
Keywords: digitalization of education, microservice architecture, gRPC, REST API, PostgreSQL
DOI: 10.26102/2310-6018/2025.51.4.010
The article presents a study and comparative analysis of modern access control models used in telecommunication systems. Three main models are considered: role-based access control (RBAC), attribute-based access control (ABAC), and privilege-based access control (PBAC). The bank's telecommunications infrastructure, including 800 workstations, 200 servers, 800 employees in the office area, and a data center with 50 servers processing critical applications, is used as an example. The bandwidth between the offices and the data center is 10 Gbit/s, and in the public area it is 1 Gbit/s. Active Directory with Kerberos support and a SIEM monitoring system are used to ensure security. The study assessed performance metrics such as response time, throughput, and resilience to peak loads. A security experiment was conducted that tested attack resilience, response flexibility, and protection levels under various system operating scenarios: under daily loads reflecting typical employee work; under peak loads occurring during periods of high resource usage (e.g., at the end of a reporting period); and under emergency loads associated with security incidents or equipment failures. This approach allowed us to identify differences in the effectiveness of access models in real operational situations.
Keywords: access control models, telecommunication systems, role-based access control model, attribute-based access control model, authority-based access control model
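The difference between the role-based and attribute-based models compared in this abstract can be made concrete with a minimal sketch. The roles, attributes, and the sample policy (office segment, business hours, clearance levels) are hypothetical illustrations, not the bank scenario from the study:

```python
# RBAC: permissions attach to roles, and roles are assigned to users.
ROLE_PERMS = {
    "teller":  {"read_account"},
    "auditor": {"read_account", "read_logs"},
}

def rbac_allows(user_roles, permission):
    """Access is granted if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMS.get(r, set()) for r in user_roles)

# ABAC: a policy is a predicate over subject, resource, and environment
# attributes, so decisions can depend on context (time, network segment).
def abac_allows(subject, resource, env):
    # Example policy: critical applications may be opened only from the
    # office segment, during business hours, with sufficient clearance.
    if resource["criticality"] == "high":
        return (subject["segment"] == "office"
                and 9 <= env["hour"] < 18
                and subject["clearance"] >= resource["min_clearance"])
    return subject["clearance"] >= resource["min_clearance"]

ok_rbac = rbac_allows({"teller"}, "read_account")
ok_abac = abac_allows({"segment": "office", "clearance": 3},
                      {"criticality": "high", "min_clearance": 2},
                      {"hour": 10})
denied = abac_allows({"segment": "public", "clearance": 3},
                     {"criticality": "high", "min_clearance": 2},
                     {"hour": 10})  # same user attributes, wrong segment
```

The sketch shows why ABAC offers the finer-grained response flexibility measured in the experiment: the same subject is allowed from the office segment and denied from the public one, something a static role assignment cannot express.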
DOI: 10.26102/2310-6018/2025.51.4.008
Standards and approaches in the field of ensuring the security of critical information infrastructure objects are considered as applied to banking system organizations. The aspects under study include the organizational structure and management, which affect the level of security in terms of the degree of personnel training, the distribution of roles and powers, and the organization's readiness to recover from security incidents. Based on the internal audit methodology used in banking system organizations to maintain the security of information infrastructure objects at a sufficient level, a model is proposed that takes into account expert assessments of the indicators of the organizational structure and management. Directions for improving the method are shown. It is proposed to take into account the hierarchy of security requirements and to use logical rules in expert assessment, on the basis of which an improved model is built. As a result, a hierarchy of private indicators is built based on their verbal formulations, data are modeled, and an assessment of the level of information security is performed using the proposed approaches. The practical value of the work consists in the possibility of using it to improve the internal audit activities of banking system entities in order to ensure a sufficient level of security of critical information infrastructure objects.
Keywords: ensuring information security, security requirements indicators, objects protection level, banking system organization, conformity assessment methodology, critical information infrastructure
DOI: 10.26102/2310-6018/2025.51.4.011
The article presents a system for estimating the duration of the software development life cycle based on artificial intelligence technologies. An analysis of existing approaches to estimating labor costs and development time is presented, on the basis of which neural network technologies are substantiated as the most promising direction for solving forecasting problems under uncertainty. The main groups of factors influencing the duration of the development process are identified and classified: technical, organizational, team, historical, resource, and external. Based on these factor classes, the composition of the input parameters used for training the neural networks, as well as their hyperparameters, was determined. The architectural characteristics of the neural networks studied in the experiments are given: the number of layers, the types of activation functions, the optimization methods, and the control parameters. An algorithm for estimating development time has been developed and implemented as a software system that provides operational forecasting of project development duration based on the analysis of historical data and current project analytics. An example of estimating development time using the developed system is given, and the results are compared with an expert assessment. The proposed system reduces the time needed for duration analysis and increases the accuracy of the estimate in comparison with traditional methods.
Keywords: neural network, software development life cycle, time estimation, software system, software engineering
DOI: 10.26102/2310-6018/2026.52.1.002
This paper presents the results of an experimental study of a shape descriptor based on a Rotation Profile for tasks of leaf classification. The descriptor is a sequence of values obtained by rotating the shape around itself with a fixed angular step within the range of 0 to 180 degrees. For each rotation angle, the Jaccard measure, reflecting the similarity between the original and rotated shapes, is calculated. The proposed descriptor is invariant to similarity transformations, ensuring its effectiveness in analyzing objects with varying shapes. Experiments were conducted on four classification tasks using three types of classifiers: Support Vector Machine (SVM), Gradient Boosting (XGBoost), and a simple neural network (NN Simple). The descriptor’s performance was compared with traditional approaches, including Zernike moments, geometric moments, and Hu moments. Additionally, recognition was performed directly on raster images using convolutional neural networks (ResNet50, VGG16, CNN Simple). The results demonstrated high accuracy and stability of the proposed shape descriptor across different classification contexts and confirmed its strong potential for shape analysis tasks in computer vision.
Keywords: computer vision, binary raster image, shape analysis, Jaccard measure, rotation profile
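The Rotation Profile descriptor described in this abstract can be sketched directly: rotate the binary shape about its centroid with a fixed angular step and record the Jaccard similarity at each angle. The pixel-set representation, nearest-neighbor rounding, and 30-degree step below are illustrative choices, not the paper's exact implementation:

```python
import math

def rotate_points(points, angle_deg, center):
    """Rotate a set of pixel coordinates about a center, rounding each
    rotated coordinate to the nearest pixel."""
    a = math.radians(angle_deg)
    cx, cy = center
    out = set()
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.add((round(cx + dx * math.cos(a) - dy * math.sin(a)),
                 round(cy + dx * math.sin(a) + dy * math.cos(a))))
    return out

def jaccard(a, b):
    """Similarity of two pixel sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def rotation_profile(shape, step=30):
    """Rotation Profile: Jaccard similarity between the shape and its
    rotated copy for angles in (0, 180] with a fixed step."""
    cx = sum(x for x, _ in shape) / len(shape)
    cy = sum(y for _, y in shape) / len(shape)
    return [jaccard(shape, rotate_points(shape, ang, (cx, cy)))
            for ang in range(step, 181, step)]

# A filled square maps onto itself under 90- and 180-degree rotation,
# so its profile peaks at exactly 1.0 at those angles.
square = {(x, y) for x in range(10) for y in range(10)}
profile = rotation_profile(square)
```

Because both the shape and its rotated copy move with the object, the profile is unchanged by translation and rotation of the input, which is the similarity-invariance property the abstract highlights.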
DOI: 10.26102/2310-6018/2025.50.3.048
In many applied fields, the challenge of making optimal decisions is frequently transformed into discrete optimization problems. A common approach to solving such problems involves the use of evolutionary algorithms. While these methods have proven to be effective, they demand careful adjustment of parameters for each particular task and are usually examined separately, without exploring possibilities for their cooperative use or dynamic interchange. Moreover, existing studies have been limited to relatively low-dimensional problems, which has hindered the evaluation of algorithm scalability in real-world large-scale tasks (involving up to thousands of variables). This article aims to refine the set of effective configurations for evolutionary algorithms to optimize the performance of a developed intelligent algorithm-switching system. A comparative analysis of configurations for four classes of evolutionary algorithms – genetic, ant colony, bee colony, and simulated annealing – was conducted. Experiments were performed on high-dimensional test problems (up to 20000 points). The primary research methods included comparison and grouping of results, as well as analysis of computational experiment series to assess algorithm scalability and robustness against the "curse of dimensionality". In prior experiments with low-dimensional problems, differences in algorithm configurations were barely noticeable, whereas significant performance disparities emerged in high-dimensional tasks. As a result, optimal configurations for each algorithm class were identified. The findings hold practical value for developing automated decision-support systems in logistics, manufacturing, and other engineering applications requiring reliable and scalable optimization tools.
Keywords: discrete optimization, evolutionary algorithms, supply chain modeling, production scheduling, ant colony algorithm, genetic algorithm
DOI: 10.26102/2310-6018/2025.50.3.046
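As one example of the algorithm classes compared above, here is a minimal simulated annealing sketch for a toy discrete problem (maximizing the number of ones in a bit string). The geometric cooling schedule and all parameter values are illustrative assumptions, not the configurations identified in the study.

```python
import math
import random

def simulated_annealing(n=50, t0=2.0, cooling=0.995, iters=5000, seed=1):
    """Maximize the number of ones in a bit string via single-bit flips."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    value, t = sum(x), t0
    for _ in range(iters):
        i = rng.randrange(n)
        delta = 1 - 2 * x[i]          # objective change if bit i is flipped
        # always accept improvements; accept worsening moves with
        # probability exp(delta / t), which shrinks as t cools
        if delta > 0 or rng.random() < math.exp(delta / t):
            x[i] ^= 1
            value += delta
        t *= cooling                  # geometric cooling schedule
    return value, x

best, solution = simulated_annealing()
```

The configuration sensitivity the article studies shows up here directly: the cooling rate and initial temperature decide how long the algorithm explores before it turns greedy, and poor choices matter far more as `n` grows.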
The paper presents a study on forecasting customer satisfaction in an insurance company using machine learning methods. The relevance of the topic stems from high competition in the insurance market and the need to retain customers by increasing their satisfaction with the service. The purpose of the study is to evaluate the accuracy and performance of models that predict the level of customer satisfaction with an insurance service based on data about the customer's interaction with the company. Classification algorithms were used as methods. The accuracy and performance of the models were assessed using real data from surveys of insurance company customers. The best results were achieved by ensemble methods, random forest and gradient boosting, which predicted satisfaction with up to 85% accuracy, significantly outperforming simpler models. It is shown that gradient boosting captures nonlinear dependencies among factors, such as whether a request was escalated, and thereby identifies "dissatisfied" customers more accurately. Currently, such forecasting in insurance companies is either not performed at all or relies largely on chance. This leads either to overly frequent complaints or to low customer satisfaction and subsequent churn. The materials of the article are of practical value for insurance organizations: implementing the developed models will make it possible to promptly identify customers at risk of dissatisfaction and to apply justified preventive measures, such as additional service or compensation, to increase their satisfaction.
Keywords: customer satisfaction, insurance company, machine learning, prediction, gradient boosting, model accuracy
DOI: 10.26102/2310-6018/2025.50.3.045
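A minimal sketch of the kind of comparison described, assuming scikit-learn and a synthetic dataset in place of the real (non-public) survey data; the feature semantics named in the comment are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for interaction features (hypothetical examples:
# waiting time, number of contacts, whether the request was escalated).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Compare a simple linear baseline against a boosted ensemble
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
boosted = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_base = accuracy_score(y_te, baseline.predict(X_te))
acc_boost = accuracy_score(y_te, boosted.predict(X_te))
```

The advantage the article reports for boosting comes precisely from interactions such as "escalated AND long wait", which a linear model cannot represent without hand-crafted feature crosses.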
The article is devoted to the development of a resource-oriented technology for organizing the information process of computational resource distribution under conditions of integrating the Internet of Things (IoT) and edge computing concepts. The research analyzed existing models and methods and identified their shortcomings, namely: they do not account for the resource cost of data transit for computing nodes involved in data transmission and computation, nor for the resource costs required to perform the distribution of computing resources itself. Given the limited resources of devices at the network edge, these drawbacks are particularly significant. The goal of this study is to minimize resource consumption both during resource distribution and while solving computational tasks in systems constrained by device limitations. The foundation of the proposed technology includes: a general mathematical model of the resource allocation process, formulated as an optimization problem; methods for solving this problem based on heuristic rules and meta-heuristics; algorithms for calculating the resource cost of data transit and of migrating computational tasks, which serve auxiliary purposes within the developed methods; and a repository of meta-heuristic algorithms used to select the optimal method for solving the resource distribution problem. The technology distributes computational resources while minimizing the resource expenses associated with data transit, taking into account both the computational task itself and decision-making regarding resource allocation. It considers the resource constraints of devices and dynamic changes in load and network topology. Experimental modeling confirmed the effectiveness of the proposed technology.
Significant reductions in resource expenditure for computational resource distribution have been demonstrated, leading to improved results in terms of distributed computing efficiency metrics. The results of the study demonstrate the potential of the proposed technology for organizing distributed computing in systems with limited resources, such as IoT systems and edge computing.
Keywords: computing resource allocation, distributed computing, technology, resource costs optimization, distributed computing modelling
DOI: 10.26102/2310-6018/2025.51.4.017
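The heuristic-rule component could, in a highly simplified form, look like the following greedy placement sketch: each task goes to the feasible node with the cheapest data-transit cost. The data layout and the largest-first ordering are assumptions for illustration, not the article's actual method.

```python
def greedy_allocate(tasks, capacity, transit_cost):
    """Greedily place each task on the feasible node with the cheapest
    data-transit cost, handling the largest tasks first."""
    alloc = {}
    cap = dict(capacity)  # remaining capacity per node
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        feasible = [n for n, c in cap.items() if c >= demand]
        if not feasible:
            raise ValueError(f"no node can host task {task}")
        best = min(feasible, key=lambda n: transit_cost[(task, n)])
        alloc[task] = best
        cap[best] -= demand
    return alloc
```

A meta-heuristic layer, as the article proposes, would then refine such a greedy starting point while also charging the cost of the allocation computation itself against the budget.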
The article is devoted to the development of a prototype of a computer-aided diagnostics system for recognizing cerebral aneurysms using the 3D Slicer platform. The relevance of the work is due to the growing workload of specialists involved in the interpretation of medical images, which requires automation of diagnostic processes to improve the quality of medical care. The importance of prototyping computer-aided diagnostic systems at the initial stages of work on the system is determined by the need to test the concept of the system and the algorithms used, identify potential problems and improve interaction between technical specialists and experts in the field of medicine. The article describes key aspects of the development, including the use of open libraries and plugins, as well as the application of design patterns to increase the flexibility and modularity of the code. The main focus is on the design of the system, including the software architecture, the choice of technologies used and the implementation of key components. The prototype of the system allows the user to select images and recognition models, as well as build 3D visualizations of the highlighted areas. The results of the work demonstrate the effectiveness of the proposed approach, as well as the possibilities of subsequent integration of the developed prototype with medical information systems and picture archiving and communication systems (PACS).
Keywords: computer-aided diagnostics system, software prototyping, medical imaging, 3D Slicer, artificial intelligence in medicine
DOI: 10.26102/2310-6018/2025.51.4.024
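A registry/strategy pattern of the kind mentioned for keeping model selection flexible can be sketched as follows. The class names and the toy thresholding "model" are hypothetical, and the real prototype works with 3D Slicer volumes rather than plain lists.

```python
from abc import ABC, abstractmethod

class RecognitionModel(ABC):
    """Common interface: every model strategy maps image data to a mask."""
    @abstractmethod
    def predict(self, volume):
        ...

class ThresholdModel(RecognitionModel):
    """Toy stand-in for a real recognition model."""
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, volume):
        return [[v > self.threshold for v in row] for row in volume]

class ModelRegistry:
    """Registry so the UI can list available models and build one by name."""
    def __init__(self):
        self._factories = {}
    def register(self, name, factory):
        self._factories[name] = factory
    def create(self, name):
        return self._factories[name]()
    def names(self):
        return sorted(self._factories)
```

Decoupling the UI from concrete models this way is what lets new recognition models be plugged into the prototype without touching the selection code.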
A criterion for mastering a cluster window of terms in an adaptive learning system is proposed. The study builds on the adaptive learning technique of L.A. Rastrigin, applied in combination with a frequency dictionary of terms. The criterion for mastering a cluster window of terms is calculated as the weighted sum of the probabilities of not knowing the terms, normalized by the sum of their weights. This criterion regulates the issuance of cluster terms for training, ensuring their priority display during training with the adaptive learning technique. The main criterion of training quality has also been modified: a threshold value has been introduced, and changing it alters the system's behavior during student testing. Before the threshold is reached, terms are issued from the cluster window; afterwards, they are issued in accordance with the classical criterion of training quality. Student testing is simulated on a sample of 210 terms from a frequency dictionary on systems analysis, over 100 sessions. The operation of the modified adaptive learning system is analyzed, and the proposed criterion of learning quality is compared with the previously used one. For cluster (target) terms, the developed algorithm reduced the probability of not knowing them and increased the frequency of their occurrence during testing, which indicates that the goals of the study were achieved.
Keywords: adaptive learning system, frequency dictionary, cluster window mastery criterion, learning quality criterion, student testing
DOI: 10.26102/2310-6018/2025.50.3.044
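The mastery criterion, as described, can be expressed directly in code. The direction of the threshold comparison in `next_source` is an assumption based on the abstract's wording.

```python
def cluster_mastery(p_ignorance, weights):
    """Weighted sum of term-ignorance probabilities, normalized by the
    sum of the weights: C = sum(w_i * p_i) / sum(w_i)."""
    if len(p_ignorance) != len(weights) or not weights:
        raise ValueError("need one weight per term")
    return sum(w * p for w, p in zip(weights, p_ignorance)) / sum(weights)

def next_source(criterion, threshold):
    """While the criterion is above the threshold, serve terms from the
    cluster window; afterwards fall back to the classical quality
    criterion (direction of comparison assumed)."""
    return "cluster_window" if criterion > threshold else "classical"
```

Because the criterion is normalized, it stays in [0, 1] whenever the ignorance probabilities do, so a single threshold works regardless of cluster size.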
In the context of the digitalization of education, adaptive feedback mechanisms that personalize the interaction of participants in the educational process under multithreading conditions are becoming a factor in increasing the effectiveness of the educational process. An analysis of existing approaches and tools for personalizing learning routes under multithreading conditions, using university disciplines as an example, allowed us to formulate the research problem: insufficient automation of the educational process under multithreading. The purpose of the article is to describe the development of a method for intelligent analysis of information with semantic text processing in the implementation of adaptive feedback among participants in a digital educational environment. The scientific novelty of the study consists in an approach to the intelligent processing of free-form answers, which increases the efficiency of the educational process in a digital educational environment. The implementation of the stages of the intelligent information processing method in feedback with multi-format digital assessment is considered. The main stages of the method are: data preparation, linguistic preprocessing, semantic comparison, model training, feedback generation, and analysis of the results of interaction between participants in the educational process. In conclusion, the application of the method in the educational process is analyzed using the example of streaming university disciplines.
Keywords: digital educational environment, adaptive feedback, natural language processing, distance learning system, tokenization, assessment metrics
DOI: 10.26102/2310-6018/2025.51.4.025
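The semantic comparison stage could be sketched, in its simplest bag-of-words form, as follows. The article's actual method involves model training and richer linguistic preprocessing, so this is only a minimal stand-in comparing a free-form answer against a reference.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens (Latin and Cyrillic letters)."""
    return re.findall(r"[a-zа-яё]+", text.lower())

def cosine_sim(answer, reference):
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(tokenize(answer)), Counter(tokenize(reference))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

In a real pipeline the counts would be replaced by lemmatized tokens or embedding vectors, but the comparison step keeps the same shape: vectorize both texts, then score their similarity.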
The article investigates the problem of multiclass classification of metal surface defects using deep learning methods. The primary approach employs a "one-vs-all" strategy, which effectively separates different defect classes. Initial analysis utilized the NEU dataset, comprising six defect categories. The resulting metrics were compared against existing solutions, after which the dataset was extended with an additional class from the "Severstal: Steel Defect Detection" dataset. Two convolutional neural network architectures were proposed, each tailored to the respective set of classes. The first architecture consists of five convolutional layers, five max-pooling layers, and two fully connected layers. The second architecture includes two additional layers: an extra convolutional layer and an additional max-pooling layer. Evaluation on the NEU dataset demonstrated high performance: the final model achieved an accuracy of 98.33 %, precision of 98.39 %, recall of 98.33 %, and an F1-score of 98.33 %. Analysis of the results showed that the proposed approach achieves performance comparable to other research results, and the proposed architectures are on par with state-of-the-art solutions. The model also exhibits good processing speed – up to 103 frames per second on the CPU – making it suitable for industrial deployment and enabling real-time defect detection. After extending the solution with the additional class, the model maintained strong performance, achieving an accuracy of 97.14 %, precision of 97.24 %, recall of 97.14 %, and an F1-score of 97.12 %, which suggests robustness and scalability of the proposed solution based on the "one-vs-all" approach.
Keywords: neural networks, convolutional neural networks, dataset, classification, defects of metal surfaces, deep learning
DOI: 10.26102/2310-6018/2025.51.4.009
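A sketch of the first architecture (five convolutional layers, five max-pooling layers, two fully connected layers) in PyTorch, assuming 200x200 grayscale inputs as in the NEU dataset. The channel widths and hidden-layer size are assumptions not given in the abstract.

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    """Five conv + five max-pool layers, then two fully connected layers."""
    def __init__(self, n_classes=6):
        super().__init__()
        channels = [1, 16, 32, 64, 64, 128]   # widths are assumptions
        blocks = []
        for c_in, c_out in zip(channels, channels[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        # spatial size: 200 -> 100 -> 50 -> 25 -> 12 -> 6 after five poolings
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

The second architecture described in the abstract would add one more conv/max-pool pair to `channels` and halve the spatial size once more before the classifier.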
The rapid development of automation tools for programming is a key factor in the digital transformation of society. The purpose of this work is a comprehensive analysis of the evolution of automation tools, including high-level programming languages, structured and object-oriented programming, integrated development environments, low-code/no-code platforms and large language models. The study examines the principles of operation of generative artificial intelligence, its capabilities and limitations, as well as the specifics of Russian solutions in this area. Particular attention is paid to the challenges associated with the widespread introduction of automation: problems of intellectual property, security of generated code, transformation of the programmer's role and adaptation of educational programs. A conclusion is made about the formation of a new paradigm of joint work of humans and artificial intelligence in software development. The practical significance of the work is to provide developers and managers with structured information for making decisions on the implementation of automation tools, the choice of technologies and the assessment of associated risks.
Keywords: programming automation, generative artificial intelligence, large language models, history of programming, integrated development environments, low-code/no-code, devOps, machine learning
DOI: 10.26102/2310-6018/2025.50.3.034
The article discusses a method for detecting DDoS attacks in digital ecosystems using tensor analysis and entropy metrics. Network traffic is formalized as a 4D tensor with the following dimensions: IP addresses, timestamps, request types, and countries of origin. The CP decomposition with rank 3 is used to analyze the data, which allows revealing hidden patterns in traffic. An algorithm for calculating the anomaly score (AS) is developed, which takes into account the factor loadings of the tensor decomposition and the entropy of time distributions. Experiments on real data have shown that the proposed method provides 92 % attack detection accuracy with a false positive rate of 1.2 %. Compared to traditional signature-based methods, the accuracy increased by 35 %, and the number of false positives decreased by 86 %. The method has proven effective in detecting complex low-rate attacks that are difficult to detect by standard methods. The results of the study can be useful for protecting various digital ecosystems, including financial services, telecommunication networks, and government platforms. The proposed approach expands the capabilities of network traffic analysis and can be integrated into modern cybersecurity systems. Further research could be aimed at optimizing the computational complexity of the algorithm and adapting the method to different types of network infrastructures.
Keywords: tensor analysis, DDoS attacks, cybersecurity, digital ecosystems, CP decomposition, entropy analysis, anomaly detection
DOI: 10.26102/2310-6018/2025.51.4.007
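The entropy component of the anomaly score can be sketched as follows; the CP-decomposition factor loadings that the full algorithm also uses are omitted here, and the way entropy is folded into a [0, 1] score is an assumption. The intuition: legitimate traffic spreads activity across time bins (high entropy), while a burst attack concentrates it (low entropy).

```python
import numpy as np

def shannon_entropy(counts):
    """Entropy (bits) of an empirical distribution given by counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def anomaly_score(time_counts):
    """Normalized score: 1 for a single-bin burst, 0 for uniform activity."""
    n = len(time_counts)
    h_max = np.log2(n)
    if h_max == 0:
        return 0.0
    return 1.0 - shannon_entropy(time_counts) / h_max
```

Per-source scores like this would be combined with the tensor factors in the full method, which is what lets it flag low-rate attacks whose per-bin volume never looks extreme on its own.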
Digitalization of education necessitates a formalized representation and systematic organization of the information flows that ensure effective interaction of participants in the educational process in the digital educational environment (DEE). The aim of the study is to model information flows based on an ontological representation of the interaction of a decision maker (DM) and feedback. An ontological model has been developed that reflects key classes and instances, identifies the relationships between them, and captures the semantics of the information flows circulating between DEE components. The article presents a decomposition of an instance of the "adaptive feedback algorithm" class of the ontological model of information flows. Digital tools operate in a single circuit of the educational environment, implementing a continuous cycle of assessment, analysis, feedback, and correction. An instance of the "unified test question bank" class, which incorporates artificial intelligence technologies for automated verification of free-form answers under streaming learning conditions, enables variable, level-based assessment. Feedback implementation tools include an LMS, social networks, and a virtual information and communication assistant. The ontological model shows the relationships among the tools added to the DEE when describing the information flows of the "DM – feedback" connection. Applying the model will make it possible to structure and unify the description of educational processes while automating digital footprint analysis. The conclusion summarizes the findings, including a decomposition of the ontological model using the example of the knowledge assessment process under digitalization and multithreading, with relations identified in the form of prerequisites between instances of the model's classes.
Keywords: ontology, digital educational environment, distance learning system, information flows, educational technologies, class instances
DOI: 10.26102/2310-6018/2025.51.4.002
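A minimal triple-based sketch of how classes, instances, and prerequisite relations from such a model might be represented; the structure is an illustration, not the article's ontology formalism, though the instance names are taken from the abstract.

```python
class Ontology:
    """Minimal triple store: (subject, relation, object) facts linking
    classes and instances, enough to query prerequisite chains."""
    def __init__(self):
        self.triples = set()
    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))
    def objects(self, subj, rel):
        """All objects o such that (subj, rel, o) is a known fact."""
        return {o for s, r, o in self.triples if s == subj and r == rel}

onto = Ontology()
onto.add("adaptive_feedback_algorithm", "instance_of", "Algorithm")
onto.add("unified_test_question_bank", "instance_of", "QuestionBank")
# the continuous cycle as prerequisite relations
onto.add("assessment", "prerequisite", "analysis")
onto.add("analysis", "prerequisite", "feedback")
onto.add("feedback", "prerequisite", "correction")
```

A production system would use an ontology language such as OWL/RDF instead, but the queryable-triples idea is the same.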
Unmanned trains are a key component of the next level of railway automation. Launching locomotives in unmanned mode requires the development of reliable computer vision systems based on artificial intelligence technologies. The paper presents a method for improving the quality of training convolutional neural networks to detect railway infrastructure objects. The reliability of visual object detection by a computer vision system can be improved through algorithmic expansion of training datasets. The proposed method takes into account the variability of weather conditions in which identical objects must be detected, and generates image modifications with added effects of rain, snow, or fog. The original dataset included 21700 annotated images covering 7 object classes. From it, an extended set of 65100 images was formed using the developed method. To evaluate the effectiveness of the proposed approach, the state-of-the-art YOLOv11 model was trained on both the original and extended datasets. The F1-measure and mean average precision (mAP) metrics were used to compare the training results. The computational experiments confirm that using the extended dataset improves training quality. In particular, the F1-measure for the YOLO model trained on the original dataset was 0.72, while on the extended dataset it reached 0.90. The mAP (50–95) metric increased from 0.67 on the original dataset to 0.83 on the extended dataset. Comparative metric values were obtained at the same confidence threshold of 0.8. The developed method has been implemented in a hardware and software system that is ready for testing as part of an integrated control and safety system for freight trains.
Keywords: machine vision, machine learning, convolutional neural networks, YOLOv11, rail transport automation, unmanned transport
DOI: 10.26102/2310-6018/2025.50.3.031
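The weather-effect augmentation could be sketched, in a much simplified form, as follows. The fog blending and rain-streak parameters are illustrative assumptions, not the developed method; annotations stay valid because the object geometry is unchanged.

```python
import numpy as np

def add_fog(img, density=0.5):
    """Blend an image toward a white haze; img is HxWx3 uint8."""
    fog = np.full_like(img, 255)
    out = (1.0 - density) * img.astype(float) + density * fog
    return out.astype(np.uint8)

def add_rain(img, n_drops=200, length=8, rng=None):
    """Draw short light vertical streaks as a crude rain effect."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_drops):
        x = rng.integers(0, w)
        y = rng.integers(0, max(1, h - length))
        out[y:y + length, x] = 200  # light grey streak across all channels
    return out
```

Applying two such effects to every source image is what turns 21700 originals into roughly three times as many training samples, as in the paper's 65100-image extended set.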
In conditions of high competition, large modern companies producing mass products or providing mass services typically increase advertising costs, which does not always bring the expected effect. There is a growing need for tools for precise audience segmentation that can increase the effectiveness of marketing communications. Traditional response-prediction models cannot determine whether a client's behavior changed under the influence of a marketing impact, which limits constructive analysis of marketing campaigns. This article studies uplift modeling as a tool for assessing the incremental gain in positive responses from communication and for optimizing targeting. The results of the study demonstrate significant advantages of uplift modeling for identifying client segments with maximum sensitivity to impact. A comparative analysis of various approaches to building uplift models (SoloModel, TwoModel, Class Transformation, and Class Transformation with Regression), based on specialized uplift metrics (uplift@k, Qini AUC, Uplift AUC, weighted average uplift, Average Squared Deviation), demonstrates the strengths and weaknesses of each approach. The study is based on the open X5 RetailHero Uplift Modeling Dataset, provided by X5 Retail Group for studying uplift modeling methods in retail.
Keywords: uplift modeling, machine learning, marketing communications, targeting, response evaluation, uplift model quality metrics
DOI: 10.26102/2310-6018/2026.52.1.012
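The TwoModel approach named in the comparison can be sketched with scikit-learn on synthetic data. Logistic regression as the base learner and the data-generating process below are illustrative choices, not those of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_model_uplift(X, treatment, y):
    """TwoModel approach: fit separate response models for the treated and
    control groups; predicted uplift is the difference of the two
    response probabilities."""
    m_t = LogisticRegression().fit(X[treatment == 1], y[treatment == 1])
    m_c = LogisticRegression().fit(X[treatment == 0], y[treatment == 0])
    return lambda X_new: (m_t.predict_proba(X_new)[:, 1]
                          - m_c.predict_proba(X_new)[:, 1])

# Synthetic data: treated clients respond when the feature is positive,
# control clients respond at a low base rate regardless.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1))
treatment = rng.integers(0, 2, size=400)
base = (rng.random(400) < 0.1).astype(int)
y = np.where(treatment == 1, (X[:, 0] > 0).astype(int), base)

uplift = two_model_uplift(X, treatment, y)
```

Clients with high predicted uplift are the "persuadables" worth targeting; a plain response model would also rank clients who would have responded anyway, which is exactly the weakness uplift metrics such as Qini AUC expose.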
The study presents approaches to multicriteria optimization of information processes using the example of omnichannel marketing. The purpose of the article is to create and formalize a multicriteria optimization model of information processes for managing marketing campaign resources in the context of omnichannel promotion. Methods of integrating various promotion channels to ensure a consistent customer experience and improve the effectiveness of marketing campaigns are considered. A conceptual model has been developed that accounts for the variety of campaigns, channels, stages of the customer journey, and key performance indicators (KPIs). The influence of synergetic effects and resource constraints on strategic planning is analyzed. The results of constructing a mathematical model that increases the marketing effect while reducing financial costs are presented. The structure of the mathematical model of multicriteria optimization is described, along with the representation of a marketing campaign within it, and a diagram of the interaction of the considered subsets is presented. The results obtained when assessing the effectiveness of the model in real conditions demonstrate the potential for increasing the profitability of marketing strategies under current constraints. The application context, key metrics, and evaluation methods are described. Recommendations are proposed for implementing the model in enterprises to optimize the information management processes of omnichannel campaigns. Prospects for applying the results in further research are outlined: the described mathematical model of multicriteria optimization, together with a method for processing and annotating marketing information, will serve as the basis for an automated decision-support information system in omnichannel marketing.
Keywords: optimization model of information processes, digital marketing, omnichannel approach, information system, MCDM, big data, artificial intelligence, KPI
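One simple way to fold several KPIs into a single score for comparing campaign options is weighted-sum scalarization, sketched below. It ignores the synergy effects and resource constraints the article's model includes, assumes all KPIs are to be maximized, and uses hypothetical option and KPI names.

```python
def scalarize(options, weights):
    """Weighted-sum scalarization: min-max normalize each KPI across the
    options, then rank options by the weighted sum of normalized KPIs."""
    kpis = list(weights)
    lo = {k: min(o[k] for o in options.values()) for k in kpis}
    hi = {k: max(o[k] for o in options.values()) for k in kpis}

    def norm(k, v):
        # map each KPI onto [0, 1] so weights are comparable across units
        return (v - lo[k]) / (hi[k] - lo[k]) if hi[k] > lo[k] else 1.0

    scores = {name: sum(weights[k] * norm(k, o[k]) for k in kpis)
              for name, o in options.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

Cost-type KPIs would need to be negated before normalization; a full multicriteria treatment would instead explore the Pareto front rather than commit to one weight vector.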