DOI: 10.26102/2310-6018/2023.41.2.018
The modern automated control system (ACS) of production processes is characterized by a fast pace of updating, a growing volume of incoming information, and the development of integrated management processes. This is achieved through the introduction of corporate information systems, information and telecommunication technologies, and professional training of specialists. At the same time, experience shows that the capabilities of currently operating automated control systems are not fully utilized, and the available information technologies and resources are not used effectively enough. There are also problems with the coordination of information resources and with the availability of specialists who are able and ready to use them. In evaluating the subsystems of an industrial enterprise (organization) automated control system, the principle of uniformity is highlighted: it ensures the continuity of the managerial business processes of industrial enterprise management when the technical, system-logical, applied and organizational-methodological subsystems of an automated control system are improved, providing the integration and coordination of interaction between linear and functional links in the rational allocation of resources for the production of marketable products. The paper considers business procedures for the analysis, evaluation, and design of an automated control system according to various criteria; the following criteria are proposed: availability, accessibility, and demand. The current performance indicators of the enterprise (organization) are evaluated and compared with the specified ones. A method of automated control system improvement has been developed, which makes it possible to identify "bottlenecks" using forecasting tools, make the necessary management decisions, and evaluate the results of their impact.
Based on the conducted research, a model for evaluating the improvement of the automated management system of an industrial enterprise has been developed using an approach that evaluates the current performance indicators of the enterprise (organization). The materials of the article are of practical value for improving the economic activities of enterprises and organizations of various kinds using the proposed mathematical, software and methodological tools.
Keywords: automated control system (ACS), efficiency management of the organizational and economic system, expert assessment, availability, affordability, demand, controlling influence
DOI: 10.26102/2310-6018/2023.41.2.008
Air pollution is one of the biggest threats to the environment and to human health. Owing to meteorological and transport factors, industrial activity and power plant emissions are the main agents of air pollution. Therefore, environmental authorities focus on the effects of air pollution and on developing guidelines to minimize it. The main objective of this study is to design a system that uses a machine learning approach to predict urban air pollution by analyzing a set of data on air pollutants, PM2.5 particulate matter in particular. A supervised linear regression algorithm with an RMSE of 31.29 and a Decision Forest Regression algorithm with an RMSE of 29.26 are used for prediction. The system is developed on a web-based platform and is accessible from mobile phones; it is user-friendly and presents PM2.5 concentration values along with air quality index values. PM2.5 concentration values depend on other sources and background levels, which indicates the importance of localized factors for understanding the spatio-temporal pattern of air pollution at intersections and for supporting individuals making decisions in the field of regulating and controlling pollution in cities.
Keywords: air quality, PM2.5 microparticles, machine learning, regression models, SDS011 sensor, forecasting
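The two regression models above are compared by RMSE. As a minimal illustration of the metric itself, the sketch below computes RMSE for two hypothetical sets of PM2.5 predictions; all values are invented for illustration and are not taken from the study:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between two equal-length sequences."""
    assert len(actual) == len(predicted)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical PM2.5 readings (ug/m3) and two models' predictions
observed = [35.0, 48.0, 22.0, 61.0, 40.0]
linear_pred = [30.0, 55.0, 28.0, 50.0, 47.0]
forest_pred = [33.0, 50.0, 24.0, 58.0, 42.0]

print(round(rmse(observed, linear_pred), 2))  # larger error
print(round(rmse(observed, forest_pred), 2))  # smaller error
```

As in the study, the model with the lower RMSE would be preferred for deployment.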
DOI: 10.26102/2310-6018/2023.41.2.024
Forecasting of epidemic processes makes it possible to develop and substantiate measures to prevent the spread of infectious diseases among the population as well as to eliminate the negative consequences caused by epidemics. The paper deals with modeling the development of the epidemic process by means of an individual-based model. In such models, modeling is carried out not at an averaged group level but at the individual level, with consideration of the heterogeneity of the population by characteristics. Each individual can be in one of three states: Susceptible (S), Infected (I), or Recovered (R). Transmission in a population occurs from individuals in state I to individuals in state S. After recovering, individuals in state I move to state R and become immune. Immunity wanes over time, and individuals in state R revert to the susceptible state S. This paper is devoted to the development and software implementation of an algorithm for solving an individual-based model, which helps to study the population dynamics of these groups. The results obtained for various model parameter values are presented. The results obtained using the individual-based simulation are compared with the results obtained by numerically solving the well-known SIRS model, which is a system of ordinary differential equations. As further work, it is planned to modify the model by introducing additional groups of individuals while taking into account additional individual parameters (age, spatial coordinates, social contacts, etc.). To reduce computation time when studying the spread of epidemics in large populations, parallelizing the algorithm appears to be a promising option.
Keywords: modeling of the epidemic process, epidemic models, individual-based model, computer modeling
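A minimal individual-based SIRS simulation along the lines described above can be sketched as follows; the transition probabilities, time discretization, and population size are illustrative assumptions, not the paper's parameter values:

```python
import random

S, I, R = "S", "I", "R"

def step(pop, beta, gamma, xi, rng):
    """One discrete time step: infection, recovery, waning immunity."""
    n = len(pop)
    infected = pop.count(I)
    p_inf = beta * infected / n          # per-susceptible infection probability
    new = []
    for state in pop:
        if state == S:
            new.append(I if rng.random() < p_inf else S)
        elif state == I:
            new.append(R if rng.random() < gamma else I)
        else:                            # R: immunity wanes back to S
            new.append(S if rng.random() < xi else R)
    return new

rng = random.Random(1)
pop = [I] * 5 + [S] * 95                 # 5 initially infected out of 100
for _ in range(200):
    pop = step(pop, beta=0.3, gamma=0.1, xi=0.02, rng=rng)
print(pop.count(S), pop.count(I), pop.count(R))
```

Averaging many such runs would give the trajectories that the paper compares against the numerical solution of the SIRS system of ordinary differential equations.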
DOI: 10.26102/2310-6018/2023.41.2.017
Nowadays, intelligent systems are widely used in the field of medicine. Especially relevant is the problem of developing intelligent computer-aided diagnostics (CAD) systems which can be used as an auxiliary tool to improve specialists' efficiency in the context of the growing volume of medical data requiring analysis and processing. One of the important components of modern CAD systems is the module for recognizing pathological changes in medical images. The paper considers the problem of training a convolutional neural network to recognize cerebral vascular aneurysms. The architecture of a fully convolutional neural network based on the UNet architecture, a data preprocessing technique, and a technique for constructing a seamless prediction based on splitting the original image into a set of overlapping fragments are proposed. The influence of the size of the image fragments used for training on the effectiveness of neural network training was investigated. Drawing on the statistical analysis of the results of the conducted computational experiments, it was concluded that the fragment size is not a determining parameter, since no increase in recognition accuracy is observed as it grows. At the same time, experiments have shown that increasing the batch size while fixing the remaining parameters at the same level can significantly improve the recognition accuracy.
Keywords: convolutional neural network, pattern recognition, medical images, cerebral aneurysm, computer-aided diagnostics system
DOI: 10.26102/2310-6018/2023.41.2.020
Modern electronic devices are complex technical systems, the functioning of which is accompanied by various physical processes occurring in their nodes and blocks. The combination of circuitry, structural and technological complexity of radio-electronic devices is the cause of various defects in them, including hidden ones with a long latency period. This, in turn, imposes higher requirements on the diagnosis and control of the technical condition of electronic devices. The relevance of the research presented in this article is due to the need to increase the reliability and accuracy of defect identification in nodes and blocks of electronic devices, and to develop new methods and means of technical diagnostics combining traditional approaches with the actively developing technologies of artificial neural networks, big data processing, and computational experiments. The article presents a study on ultrasound diagnostics of internal delamination defects in printed circuit boards. The method of modeling various defects using the specialized software ABAQUS is described. The features of the subsequent processing of experimental data – amplitude-time and amplitude-frequency characteristics, the formation of numerical arrays of the parameters under study – are defined. The structure of an artificial neural network for diagnosing and identifying defects of printed circuit boards is given, and the technology of its training and testing is defined. The materials of the article are of practical value for design engineers, circuit and system engineers of electronic systems as well as developers of complex technical complexes.
Keywords: printed circuit board, non-destructive testing, ultrasound diagnostics, delamination, hidden defects, ultrasonic wave, piezoelectric transducer, artificial neural network, training, identification
DOI: 10.26102/2310-6018/2023.41.2.011
Cluster analysis has become a widely used tool for analyzing medical data to identify groups of patients. Despite its widespread use, publications in which the identified groups of patients, and the attributes by which the division into groups occurred, are mathematically justified remain rare. To solve this problem, a method called supervised clustering can be applied; its essence is to apply multiclass classification methods using cluster labels as the target variable. In this paper, this method is employed to identify the indicators by which groups of patients are divided in the databases of the autonomous public health care institutions SOCP Hospital for War Veterans and Institute of Medical Cell Technologies for the years 1995-2022, comprising 6440 records. The HDBSCAN method was used for clustering, and the CatBoost method in multiclass classification mode was used to verify the obtained clusters of patients. As a result, 4 clusters were obtained, divided by gender and the patient's condition. To identify statistical differences between the obtained clusters, an AB analysis of these clusters was carried out by means of the Kruskal-Wallis test. The results of the AB analysis showed that the clusters have statistically significant differences in all functional parameters included in the analysis. Further, an AB analysis of the differences in the functional indicators of patients in outpatient and inpatient status was carried out for the female and male clusters. For the AB analysis, a permutation test and bootstrap were used, with construction of confidence intervals for the means of samples generated in the bootstrap.
Keywords: supervised clustering, AB analysis, geroprophylactic treatment, prediction of treatment effectiveness, bio-growth
DOI: 10.26102/2310-6018/2023.41.2.009
The article is devoted to the problem of 3D reconstruction of objects in dynamic scenes from stereo images. When autonomous robots perform complex tasks (repair work, inspection of the seabed), there is a need to simultaneously restore the trajectory of the autonomous robot and build a 3D model of the environment using video information. Data on the trajectories of robots and information about the environment are necessary for specialists to further correct drone operation and track the progress of work performed. Currently existing object identification solutions help to restore the geometry of dynamic objects only under imposed restrictions that prevent reconstruction of the entire scene with the necessary accuracy. Also, the existing methods do not involve detailed visualization of the entire 3D scene using previously unknown point data and do not include the restoration of invisible parts of object surfaces. An approach to solving the problem of identification and 3D reconstruction of objects based on video information in relation to dynamic scenes is proposed. The basis of the software system implementing the proposed algorithmic and architectural solutions is described. Data on model scenes and features of scene objects are given. The results of computational experiments with virtual scenes are discussed. The regularities discovered as a result of the tests, which affect the accuracy of model reconstruction, are considered.
Keywords: dynamic scene, object identification, OpenGL, 3D reconstruction, visualization, epipolar constraints, Delaunay method
DOI: 10.26102/2310-6018/2023.41.2.012
To improve the rehabilitation effectiveness for people with disabilities, an individual approach is required that takes into account the constitutional peculiarities of each patient with a view to optimizing the choice of means for rehabilitation measures or treatment. For the rehabilitation of people with disabilities, a method for classifying the adaptive potential is proposed to control and manage their functional state during therapy or a session of a rehabilitation procedure. A method for localizing clusters in the space of surrogate markers has been developed, which includes four stages: the first stage reveals relevant markers that characterize the change in the adaptive potential of a living system under the influence of an exogenous factor; at the second stage, the reliability of adaptive potential level clustering is proved; at the third stage, the classification results are analyzed on dynamic training samples; and at the fourth stage, the statistical homogeneity and/or heterogeneity of the identified clusters is analyzed. A hybrid adaptive potential classifier has been developed, which includes five "weak" classifiers built on the basis of fuzzy decision-making logic and a fully connected feedforward neural network as an aggregator. The hybrid classifier was tested on an experimental group of post-infarction patients. Efficiency was evaluated using ROC analysis. The classification quality indicators of the synthesized hybrid classifier make it possible to recommend it for biotechnical systems of the rehabilitation type which carry out medical and restorative procedures for post-infarction patients.
Keywords: adaptive potential, hybrid classifier, virtual model, algorithm, recurrent myocardial infarction, cumulative survival
DOI: 10.26102/2310-6018/2023.42.3.002
The paper touches upon the problem of incident management as part of the bank IT service performance. The relevance of the issue is due to the variety of types of incidents and the consequences of their impact on the performance and quality of business processes in the context of continuous improvement of information technology. The aim of the research is to study the process of managing incidents using IDEF modeling tools. The objectives of the study come down to the construction and analysis of an appropriate business model based on the activities of the bank IT support service, as well as the development of proposals for improving the information system. The study used theoretical and empirical general scientific methods: systematic data collection, review of electronic sources, generalization and analysis, and the IDEF modeling method, which was employed to design context diagrams that reflect the essence, features and changes in the analyzed business process. The analysis demonstrated shortcomings in the implementation of the incident management process related to the registration and further transmission of information in the support service system. The means to eliminate them are outlined with a view to minimizing the process execution time and saving human and information resources, after which the requirements for a modified information system are defined, which involve maintaining a database with examples of technical malfunctions. The result of the study was the construction of a conceptual model of the information process for registering incidents. The process has been decomposed, and changes have been introduced to quickly update information about potential incidents and notify the support staff about malfunctions.
The subsequent transformation of the bank information system with due regard for the proposed changes contributes to the optimization of incident management, thus reducing the response time and improving bank performance.
Keywords: incident management, information system, business process, automation, IDEF model, support service
DOI: 10.26102/2310-6018/2023.41.2.015
The paper is concerned with maritime safety. The problem of planning a route for a vessel crossing water areas with heavy traffic is considered. When sailing under such conditions, navigators follow a trajectory that is established in a specific water area. It can be defined officially or be accepted on an informal basis while representing collective navigation experience. In the latter case, it seems productive to plan a route using the data on the traffic of other ships that crossed the water area earlier (the same idea underlies "big data" methods). In papers published earlier, such route planning was based on a cluster analysis of retrospective data on ship traffic, which involved dividing the water area into sections and highlighting characteristic values of speeds and courses in them. The problem with this approach was the choice of partitioning parameters, which had to be set for each specific water area separately. In this paper, another approach is proposed, in which the graph of possible routes is built from a selection of the trajectories of individual ships previously implemented in the specific water area. This article further develops the methods for solving the problem of ship route planning in areas with heavy traffic. The proposed method is based on the formation of a possible route graph from a set of intersecting broken lines, each of which represents a route implemented earlier. Each edge of the graph is assigned a measure of its "popularity", which characterizes the proximity of other edges to it. The shortest path on the weighted graph is constructed considering not only the geometric length of the edges but also the measure of their "popularity". The paper describes the formation of the possible route graph, estimates the number of its nodes and edges, and provides recommendations on selecting a method for finding the shortest path on the graph.
Examples of route planning for the Tsugaru Strait and the Seaport of Vladivostok are provided.
Keywords: ship traffic management, unmanned navigation, e-navigation, route transit planning, high-density traffic, automatic identification system, big data, graph algorithm
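One possible way to combine geometric length with edge "popularity" is to shrink the effective cost of well-travelled edges before running a standard shortest-path search. The weighting formula and the toy graph below are illustrative assumptions, not the paper's exact measure:

```python
import heapq

# Hypothetical combination of edge length and "popularity": a more popular
# edge gets a lower effective cost. The paper's actual weighting may differ.
def effective_cost(length, popularity, alpha=1.0):
    return length / (1.0 + alpha * popularity)

def shortest_path(graph, src, dst):
    """Dijkstra over adjacency lists of (neighbor, length, popularity)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, length, pop in graph.get(u, []):
            nd = d + effective_cost(length, pop)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

# Toy water area: the longer route via B is far more "popular" than the shortcut via C.
graph = {
    "A": [("B", 10.0, 9.0), ("C", 6.0, 0.0)],
    "B": [("D", 10.0, 9.0)],
    "C": [("D", 6.0, 0.0)],
}
path, cost = shortest_path(graph, "A", "D")
print(path, round(cost, 2))
```

With this weighting, the geometrically longer but heavily used route wins, mirroring the paper's idea that collective navigation experience should bias the planned route.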
DOI: 10.26102/2310-6018/2023.41.2.001
This article is devoted to the development of a methodology for assessing customer satisfaction of an enterprise that provides medical services to the population, taking into consideration emotionally charged information coming from customers. The authors analyzed several related publications on this issue and, based on that, identified the existing shortcomings of the methods under review. To eliminate them, it is proposed to improve one of these methods by adding an additional parameter to the mathematical model which characterizes the emotional response of a client of a medical organization as feedback. A model for assessing patient satisfaction with due regard for customers' emotions using fuzzy classifiers was chosen. A general scheme for calculating the integral indicator is given. The proposed methodology is described step by step, and each stage is studied in greater detail. At one of the stages, experts determined a set of indicators for further research, which includes a parameter describing a patient's emotional reaction. A numerical experiment implementing the proposed method was carried out, and its results are described. Conclusions were drawn from the results of the computational experiment.
Keywords: affective computing, fuzzy sets, Fishburn scores, customer satisfaction, medical services
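The keywords mention Fishburn scores, a standard way to assign weights to indicators ranked by importance. A minimal sketch of computing such weights and a weighted-sum integral indicator follows; the indicator values are invented, and the paper's fuzzy-classifier machinery is omitted:

```python
# Fishburn weights for n indicators ranked by decreasing importance,
# combined into a simple weighted-sum integral indicator.
def fishburn_weights(n):
    return [2 * (n - i + 1) / ((n + 1) * n) for i in range(1, n + 1)]

def integral_indicator(values, weights):
    """values are assumed normalized to [0, 1], most important first."""
    return sum(v * w for v, w in zip(values, weights))

w = fishburn_weights(4)
print([round(x, 3) for x in w])          # weights decrease and sum to 1
# Hypothetical normalized scores, incl. an "emotional response" indicator
scores = [0.8, 0.6, 0.9, 0.5]
print(round(integral_indicator(scores, w), 3))
```

The decreasing weights encode the expert ranking of the indicators, with the emotional-response parameter entering the integral indicator like any other ranked indicator.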
DOI: 10.26102/2310-6018/2023.41.2.007
The article develops a logical compromise approach to decision-making in organizational systems under conditions of conflict of interests among their constituent entities. In relation to systems of this type, interpretations of the criteria for guaranteeing decisions, their Pareto optimality and Nash equilibrium are given. A conceptual gradation of risk levels in decision-making is introduced, and empirical formulas are provided to assess the corresponding functions of subject efficiency. Taking into account the correlation of individual and systemic interests, the problem of choosing the most suitable solution from among Pareto-optimal and simultaneously Nash-equilibrium solutions is solved. Recommendations on practical ways to ensure the balance of compromise solutions are given, including imposing sanctions on violators of agreements, creating coalitions from among subjects with similar interests, and increasing the level of mutual awareness of participants in negotiations. An algorithm is proposed that implements a logical compromise approach to decision-making in the context of a conflict of interests of the subjects in the organizational system. This algorithm, along with the methods of multi-criteria mathematical optimization, the provisions of coalition game theory and neural network programming, can be used to create software systems to support the adoption of compromise decisions made in organizational systems of various functional purposes.
Keywords: organizational system, decision-making, conflict of interest, risk of decision-making, guaranteeing decision, Pareto-optimal decision, Nash equilibrium, individual interests, systemic interests, decision-making algorithm
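The tension between Nash equilibrium and Pareto optimality that motivates the compromise mechanisms above can be seen in a toy 2x2 game; the payoffs below are illustrative (a prisoner's-dilemma shape), not taken from the article:

```python
from itertools import product

A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs

def is_nash(i, j):
    """No unilateral deviation improves either player's payoff."""
    return (all(A[i][j] >= A[k][j] for k in range(2))
            and all(B[i][j] >= B[i][l] for l in range(2)))

def pareto_dominated(i, j):
    """Some other profile is at least as good for both and better for one."""
    return any((A[k][l] >= A[i][j] and B[k][l] >= B[i][j]
                and (A[k][l], B[k][l]) != (A[i][j], B[i][j]))
               for k, l in product(range(2), range(2)))

nash = [(i, j) for i, j in product(range(2), range(2)) if is_nash(i, j)]
undominated = [(i, j) for i, j in nash if not pareto_dominated(i, j)]
print(nash, undominated)
```

Here the unique Nash equilibrium is Pareto dominated, so the intersection of equilibrium and Pareto-optimal solutions is empty. This is precisely the situation in which the compromise mechanisms discussed in the article (sanctions, coalitions, increased mutual awareness) become necessary.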
DOI: 10.26102/2310-6018/2023.41.2.006
This paper presents methods of mathematical analysis used to solve applied problems of the theory of transport of continuous media – thermal flows and viscous liquids – in network-like objects. The initial-boundary value problem for the Navier-Stokes system, which underlies the mathematical description of the so-called turbulent transport processes of Newtonian liquids with a given viscosity, is formulated and studied. It is assumed that the liquid has a complex internal rheology and is a multi-phase continuous medium. The distinctive feature of the process under consideration is the absence of a classical differential equation at the node points of the network-like domain (the surfaces of mutual adhesion of subdomains). Sufficient conditions for the unique weak solvability of the initial-boundary value problem are presented, which are obtained by the classical analysis of approximations of the exact solution by means of a priori estimates derived from the energy inequality for norms of solutions of the Navier-Stokes equation. An optimization problem, which arises naturally in the analysis of transport processes of continuous media on a network-like carrier, is considered. The state spaces of the Navier-Stokes system and the spaces of controls and observations, for which the uniqueness of the solution of the optimization problem is proved, are indicated. The suggested approach and corresponding methods are equipped with the necessary algorithm and illustrated by examples of numerical analysis of test problems. The analysis rests on the classical approach to studying mathematical models of transport processes of continuous media. The paper is aimed at developing qualitative and approximate methods for investigating mathematical models of various types of continuous media transport.
Keywords: transfer of hydroflows, network carrier, optimization problem, algorithms, numerical analysis
DOI: 10.26102/2310-6018/2023.41.2.003
The ability to predict student academic performance is valuable to any institution seeking to improve student achievement and motivation. Based on the predictions generated, students identified as being at risk of expulsion or failure can be supported in a more timely manner. This article discusses various classification models for predicting student performance using data collected from universities in Penza. The data include student enrollment data as well as activity data from the university electronic information and education environment (EIE). An important contribution of this study is the consideration of student heterogeneity in the construction of predictive models. This is based on the observation that students with different socio-demographic characteristics or modes of learning may exhibit different motivation to learn. Experiments confirmed the hypothesis that models trained on instances from student subgroups outperform models built on all data instances. In addition, the experiments showed that accounting for both enrollment and learning activity patterns helped to identify vulnerable students more accurately. Experimental results demonstrated that no single method has superior performance in all respects. The domestic analytics platform Loginom was employed as a tool to create the predictive model.
Keywords: data mining, intellectual analysis of educational data, forecasting of student progress, heterogeneity of students, electronic information and educational environment
DOI: 10.26102/2310-6018/2023.41.2.004
Scheduling makes it possible to solve complex and large-scale modern problems that require combining many projects distributed among various organizational systems in the context of growing globalization and an increasing share of project work in organizations compared to non-project work. The article describes the problem of scheduling the work of project teams, which consists in minimizing the total duration of all work while taking into account a limited number of specialists of different types. Such a problem is NP-hard. The optimization problem was solved using the example of determining the sequence of operations in two projects with consideration of a limited number of specialists. At the same time, the maximal sets of teams for simultaneous execution of work were presented in the form of possible combinations of teams formed on the basis of the restrictions on the number of specialists of each type. The proposed approach to optimizing the work schedule of project teams includes the use of a heuristic algorithm, according to which the operation with the longest duration is performed first. To obtain a linear programming problem, sets of teams representing feasible combinations, formed taking into account the restrictions on the number of specialists of each type operating simultaneously, are considered. An example of calculating the minimum duration of project work as a whole by sets of teams is given. The use of the heuristic algorithm helped to determine the best operation sequence taking into account the composition of teams of specialists and the duration of the work performed.
Keywords: project, project team, work, specialist, set of teams, duration of work
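The "longest duration first" heuristic with limited specialists can be sketched as a greedy event-driven scheduler; the job set, specialist types, and capacities below are illustrative, not the article's example:

```python
# Greedy sketch of the longest-duration-first heuristic: at every event
# time, start the longest waiting jobs whose specialist demand fits the
# remaining capacity. Job format and capacities are hypothetical.
def schedule(jobs, capacity):
    """jobs: list of (name, duration, {specialist_type: count}) tuples.
    Returns the makespan (completion time of the last job)."""
    waiting = sorted(jobs, key=lambda j: -j[1])   # longest duration first
    running = []                                  # (finish_time, demand)
    t = 0
    finish = {}
    while waiting or running:
        free = dict(capacity)
        for _, demand in running:
            for k, c in demand.items():
                free[k] -= c
        started = []
        for job in waiting:
            name, dur, demand = job
            if all(free.get(k, 0) >= c for k, c in demand.items()):
                for k, c in demand.items():
                    free[k] -= c
                running.append((t + dur, demand))
                finish[name] = t + dur
                started.append(job)
        for job in started:
            waiting.remove(job)
        t = min(ft for ft, _ in running)          # jump to next completion
        running = [(ft, d) for ft, d in running if ft > t]
    return max(finish.values())

jobs = [("a", 5, {"dev": 2}), ("b", 3, {"dev": 1}), ("c", 4, {"dev": 1})]
print(schedule(jobs, {"dev": 3}))
```

The sketch assumes every job individually fits within total capacity; precedence constraints between operations, which a full project schedule would need, are omitted.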
DOI: 10.26102/2310-6018/2023.40.1.023
This article explores the process of information dissemination in which each agent is represented by a continuous-time Markov chain with two states: L and M. The L-state refers to the "home", while the M-state refers to the "meeting place". When two agents stay in the meeting place together, they "meet" and form a connection. This means that they can exchange information, conduct commercial transactions, etc. The aim of the research is to develop an effective way to calculate the propagation time and to study the dependence of the propagation process on parameters such as the number of agents, the number of uninformed agents at the end of the process, and the intensity of contact. It is assumed that all agents are initially in the L-state and that one of them necessarily has some information. A dissemination model with mobile agents in a star-shaped network has been created, which can be reduced to a network with two nodes. An increase in population size has two contradictory effects that cause the propagation time to increase at first, then decrease, and, eventually, increase with asymptotic behavior similar to a harmonic sum. Accordingly, the expected time required to inform an additional agent is small at first and then increases, and the probability of informing all agents within a given period has an S-shape. Additionally, it is shown how changes in the modeling parameters, such as the initial and final numbers of informed agents and the intensity of contacts, affect the process.
Keywords: distribution process, multi-agent system, propagation time, distribution model, star-shaped network
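A discretized sketch of the two-state agent model described above might look as follows; the continuous-time chain is approximated by discrete steps, and the switching probability and population size are illustrative:

```python
import random

# Each agent hops between "home" (L) and the "meeting place" (M); everyone
# simultaneously present in M becomes informed if at least one informed
# agent is there. All agents start in L, agent 0 starts informed.
def propagation_time(n_agents, switch_prob, rng, max_steps=100000):
    at_meeting = [False] * n_agents
    informed = [False] * n_agents
    informed[0] = True
    for t in range(1, max_steps + 1):
        for i in range(n_agents):
            if rng.random() < switch_prob:
                at_meeting[i] = not at_meeting[i]
        if any(at_meeting[i] and informed[i] for i in range(n_agents)):
            for i in range(n_agents):
                if at_meeting[i]:
                    informed[i] = True
        if all(informed):
            return t
    return max_steps

# Average propagation time over 30 independent seeded runs
times = [propagation_time(20, 0.2, random.Random(s)) for s in range(30)]
print(sum(times) / len(times))
```

Varying `n_agents` in such a simulation is one way to observe the non-monotonic dependence of propagation time on population size that the article analyzes.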
DOI: 10.26102/2310-6018/2023.41.2.002
In database management systems, the problem of data backup has not lost its relevance since the advent of user applications. With the development of technologies in the field of Internet programming, cloud data backup methods have appeared, and cloud-based backups are gaining ground in the information technology space. Situation-oriented databases (SODBs) at the current stage need their own backup tools. Within a microservice architecture, since heterogeneous sources and results of data processing in the SODB are taken out of the local infrastructure, modern backup capabilities are required. First of all, it is necessary to back up virtual data arrays collected from virtual multi-documents as well as dynamic data processing objects. In SODBs, multi-documents and dynamic data processing objects are the main elements involved in data manipulation; their content comprises heterogeneous data sources, intermediate processing results, and the final processing result before uploading to the data receiver. It is proposed to solve this problem using a situation-oriented approach by adding a backup model as well as developed algorithms for backup and for operating cloud disks and cloud storages. Previously, the issues of backup in SODBs were not given due attention because the model assumed the use of the current-state memory mechanism, which guaranteed the protection of data from possible damage, while a return to previous processing steps was provided by editing it. In addition, each state of the model provided for handling the errors that occur during processing. With the growing need for backing up external heterogeneous sources, new tooling is required to eliminate the gaps in the backup implementation of SODBs. Such tooling has not been suggested before; this paper discusses its implementation, using a prototype of the SODB software that supports the course design process in the "Databases" course.
Keywords: situation-oriented database, built-in dynamic model, heterogeneous data sources, backup, virtual multi-documents, dynamic data processing objects, RESTful-services
DOI: 10.26102/2310-6018/2023.41.2.028
The purpose of this paper is to assess the possibility of determining the size of a perfectly conducting object – a two-dimensional corner structure with a maximum average scattering characteristic. Based on the solution of a Fredholm integral equation of the 1st kind, the scattering characteristics of electromagnetic waves of a two-dimensional corner structure were determined. These characteristics were multiextremal functions depending on the size of the cylinder and the length of its contour. The function was investigated using the grid method and a local optimization method – the golden-section method. The following results were obtained: the dependencies of the corner structure size on the contour length for different sectors of observation angles that give the maximum value of the average scattering characteristic; the coefficients of polynomials that give a reasonable approximation of the relative error of the obtained relationships were also determined. The algorithm presented in this paper and the results can be used to create objects that contain corner structures satisfying specified requirements for the average scattering characteristic. The results of the study can be generalized to the case when several corners are included in the electrodynamic object. In that instance, it is necessary to determine the total scattered field with consideration of the phase difference in the arrival of an electromagnetic wave from different reflectors.
Keywords: characteristics of the scattering, electromagnetic waves, optimization, control of electromagnetic environment, approximation of characteristics
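The two-step search described above (a coarse grid to bracket a peak of the multiextremal function, then golden-section refinement) can be sketched as follows; the objective below is a hypothetical stand-in, not the paper's scattering characteristic.

```python
import math

def golden_section_max(f, a, b, tol=1e-6):
    """Maximize f on [a, b], assuming f is unimodal there."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) >= f(d):      # maximum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                 # maximum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def grid_then_refine(f, lo, hi, n=200, tol=1e-6):
    """Coarse grid over [lo, hi] brackets the global peak of a
    multiextremal f; golden-section search then refines it locally."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    k = max(range(n + 1), key=lambda i: f(xs[i]))
    return golden_section_max(f, xs[max(k - 1, 0)], xs[min(k + 1, n)], tol)

# Example: locate the peak of a multiextremal function
f = lambda x: math.sin(3 * x) + 0.5 * math.sin(0.7 * x)
x_star = grid_then_refine(f, 0.0, 10.0)
```

The grid step guards against the local search converging to a minor extremum, which is the reason the paper pairs the two methods.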
DOI: 10.26102/2310-6018/2023.41.2.005
Nowadays, knowledge graphs are used as a model of telecommunication networks and for storing data on their state. Knowledge graphs make it possible to combine, within one model, the many particular models of information systems used by operators, which allows joint analysis of data from various sources and, as a result, increases the efficiency of solving network management tasks. Filling the knowledge graph requires processing large amounts of raw data with machine learning algorithms, which is difficult because the configurations of modern networks change over time and therefore require frequent reconfiguration of those algorithms. In addition, automated machine learning algorithms have a high computational complexity. The purpose of the research is to develop an approach that makes it possible to employ automated machine learning (AutoML) to analyze live data coming from the network, using meta-mining capabilities to control the choice of machine learning algorithms and the selection of hyperparameters. The method determines the state of a telecommunications network using both managed machine learning and meta-mining, followed by building a network model in the form of a knowledge graph. The developed approach provides controlled machine learning when building models of telecommunication networks in the form of a knowledge graph and has a reduced computational complexity owing to a decrease in the number of candidate algorithms supplied to the AutoML input. The statement and solution of the problem of classifying the state of the telecommunications network according to the data coming from it are given, and a description of a monitoring system based on the proposed approach is presented.
The application of the approach is illustrated by the example of determining the state of a cable TV operator's network.
Keywords: knowledge graph, autoML, telecommunication network, meta-learning, meta-mining
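The candidate-pruning idea (fewer algorithms at the AutoML input, chosen by meta-mining) can be illustrated with a minimal sketch; the meta-features, the toy knowledge base and the algorithm names are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def meta_features(X, y):
    """Simple dataset meta-features: log sample count, log feature
    count, and class-label entropy."""
    n, m = len(X), len(X[0])
    counts = Counter(y)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return [math.log(n), math.log(m), entropy]

def prune_candidates(X, y, knowledge_base, top_k=2):
    """Match the new task to the nearest previously solved task and pass
    only its best-ranked algorithms on to AutoML."""
    q = meta_features(X, y)
    nearest = min(knowledge_base, key=lambda e: math.dist(q, e["meta"]))
    return nearest["ranked_algorithms"][:top_k]

# Hypothetical knowledge base of prior network-state classification tasks
kb = [
    {"meta": [9.2, 3.0, 1.5], "ranked_algorithms": ["random_forest", "svm", "knn"]},
    {"meta": [5.0, 1.6, 0.9], "ranked_algorithms": ["knn", "naive_bayes", "svm"]},
]
X = [[0.1, 0.2, 0.3, 0.1, 0.5]] * 150   # 150 samples, 5 features
y = ["normal"] * 100 + ["degraded"] * 50
candidates = prune_candidates(X, y, kb)
```

Shrinking the candidate set this way is what reduces the computational cost of the AutoML stage.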
DOI: 10.26102/2310-6018/2023.40.1.016
The paper proposes an approach for detecting traffic violations in video streams based on the vehicle's illegal trajectory. As an example of such violations, an illegal left turn is considered. The approach was implemented in a decision support system. Within the approach, a YOLO neural network was employed as the object detector, the LPRNet network for license plate recognition, and the Ramer-Douglas-Peucker algorithm for trajectory thinning. Using the example of the illegal left turn, a number of classifiers was studied: SVM, GaussianNB, KNeighbors, Decision Tree, and Random Forest classifiers. These classifiers can be utilized to identify trajectories that violate road traffic regulations. Numerical experiments demonstrate that the SVM achieves about 95 % classification accuracy, the best result among the algorithms compared. The computational cost was also decreased through the use of the trajectory thinning algorithm and lightweight neural network models. The capabilities of integrating the decision support system into the Centre for Automated Recording of Traffic Offences were illustrated by the example of left turn detection.
Keywords: intelligent transport system, decision support system, video image processing, machine learning, neural networks, trajectory classification
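The trajectory-thinning step can be sketched with a self-contained Ramer-Douglas-Peucker implementation; the track coordinates and tolerance below are illustrative.

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def rdp(points, eps):
    """Thin a polyline, keeping only points deviating more than eps
    from the chord between the endpoints (applied recursively)."""
    if len(points) < 3:
        return points[:]
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = rdp(points[:i + 1], eps)
        right = rdp(points[i:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]

# A noisy straight run followed by a sharp turn collapses to 3 points
track = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.02), (4, 2.0), (5, 4.0)]
thinned = rdp(track, eps=0.1)
```

Feeding the thinned polyline, rather than every raw detection, to the classifier is what lowers the computational cost while preserving the turn geometry.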
DOI: 10.26102/2310-6018/2023.41.2.026
The article studies the key aspects of corporate social responsibility in fuel and energy complex companies in order to model the indicators that are essential for the development of oil and gas companies under modern economic challenges. The object of the study is oil and gas corporations pursuing an active social policy. The subject of the study is the criteria indicators characterizing the results of a company's corporate social activity. The research methodology is the analysis of ESG reports of the leading companies in the fuel and energy complex of Russia for compliance with the 17 United Nations sustainable development goals. The result of the study is an optimal multicomponent structure of corporate social responsibility covering a full range of indicators characterizing both the internal and the external environment, as well as performance indicators for determining the corporate social responsibility development strategy and high performance of oil and gas corporations in the modern economy by means of a comprehensive stakeholder evaluation system. The practical significance of the study consists in the application of the developed corporate social responsibility evaluation system for subsequent implementation at the corporate and country levels. The obtained system of indicators can be used by the management team of an individual company as well as by government agencies or rating agencies.
Keywords: corporate social responsibility, fuel and energy complex, project management, sustainable development, green economy
DOI: 10.26102/2310-6018/2023.41.2.013
The relevance of the research stems from the need to counteract hidden data transmission channels, in the form of file steganography, in institutional and corporate computer networks. The article is devoted to the formation of a feature vector based on the brightness histogram for identifying steganography that distorts several bit planes of the spatial domain of an image. It is assumed that this type of steganography is most likely to be used by an insider because it does not require deep knowledge in the field of information technology. Additionally, it is implemented in freeware software products and supports payloads of up to 50 % of the container size. A numerical experiment was performed to verify the results. A description of the initial data and the experimental methodology is given. The datasets were generated in MATLAB; to ensure reproducibility of the experiments, the datasets and MATLAB scripts are published on Kaggle. A machine learning procedure based on SVM regression is applied. Based on the experimental data, the basic metrics of machine learning effectiveness of feature vectors for BPCS- and LSB-steganalysis are calculated. The dependence of the regression error on feature vectors built from combinations of different bit planes is shown. With the help of the obtained estimates, an analyst can include one feature or another in the complex vector.
Keywords: steganalysis, feature vector, reliability, BPCS-steganography, LSB-steganography, steganography channel, machine learning, support vector machine, regression
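A hedged sketch of the feature-vector pipeline described above: a 256-bin brightness histogram serves as the feature vector and an SVM regressor estimates the embedded payload share. The data here are synthetic images with a toy LSB embedder, not the paper's Kaggle datasets.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def histogram_features(img):
    """Normalized 256-bin brightness histogram of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def embed_lsb(img, payload):
    """Toy LSB embedding: randomize the least significant bit of a
    `payload` share of the pixels."""
    out = img.copy().ravel()
    k = int(payload * out.size)
    idx = rng.choice(out.size, size=k, replace=False)
    out[idx] = (out[idx] & 0xFE) | rng.integers(0, 2, size=k, dtype=np.uint8)
    return out.reshape(img.shape)

# Synthetic covers with a non-uniform histogram, varying payload shares
payloads = rng.uniform(0.0, 0.5, size=80)
X = []
for p in payloads:
    cover = rng.normal(128, 30, size=(64, 64)).clip(0, 255).astype(np.uint8)
    X.append(histogram_features(embed_lsb(cover, p)))

model = SVR(kernel="rbf", C=10.0).fit(np.array(X), payloads)
pred = model.predict(np.array(X))
```

LSB randomization tends to equalize adjacent histogram bins in proportion to the payload, which is the statistical signal such a regressor can pick up.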
DOI: 10.26102/2310-6018/2023.40.1.007
An analysis of existing digital twins of the cardiovascular system and of medical decision support systems in cardiology is carried out. It reveals a low degree of elaboration of, or a lack of consideration for, the mechanisms of blood circulation regulation in them. The structure of a new biotechnical system is proposed, which makes it possible to form recommendations for the physician deciding on therapeutic interventions in order to optimize the functions (indices) of the patient's cardiovascular system. The problem of optimizing the patient's condition for the medical decision support system is formulated and its solution is provided. The structure of a biotechnical system for optimizing the patient's condition is described; it uses a digital twin of the cardiovascular system as a virtual personalized model of the circulatory system connected to the patient by two-way information communication. A diagram of the elements of the biotechnical system detailing the ways of transmitting diagnostic information from the patient to the digital twin of the cardiovascular system is presented. The hardware for checking the adequacy (validation), verification and identification of the digital twin of the cardiovascular system is given. An example of the search for the optimal properties necessary to optimize the indices of the cardiovascular system functions of an average patient is considered. The current and the found optimal values of the indices of the patient's condition are obtained. To achieve indices that ensure the normalization of the patient's condition, optimal values of the properties of the cardiovascular system were found.
Keywords: decision support system, regulation, mathematical modeling, cardiovascular system, neurocontrol, optimization problem
DOI: 10.26102/2310-6018/2023.40.1.026
The paper examines some approaches to processing big spatiotemporal uncertain data in GLONASS+112. This system is used for managing interaction between operational services in the Republic of Tatarstan and for collecting and processing data characterizing various incidents, based on calls received by the common emergency number "112". The performance and scalability of several basic operations for managing big data (threshold queries, JOIN, the k-nearest neighbors algorithm) were studied; the operations were adapted for data under spatial and temporal uncertainty. New approaches to clustering and association rule mining for uncertain data are suggested. A modernization of the ST-DBSCAN algorithm for clustering spatiotemporal data is proposed; this algorithm is integrated into the association rule mining process. A software complex for mining association rules over spatiotemporal data under uncertainty has been developed. The complex is applied to analyze GLONASS+112 data as well as information about weather conditions obtained from external sources. The association rules obtained can be used by various units in operational services for decision-making and resource planning. This would help to increase the efficiency of managing emergencies and undesired incidents.
Keywords: data mining, spatiotemporal data, uncertainty, clustering, association rules, emergency management
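The spatiotemporal density clustering at the core of ST-DBSCAN (a neighbor must be close both spatially and temporally) can be sketched as follows; the incident records and thresholds are illustrative, not GLONASS+112 data.

```python
import math

def st_dbscan(points, eps1, eps2, min_pts):
    """points: list of (x, y, t). Returns a cluster label per point
    (-1 = noise). eps1 bounds spatial distance, eps2 temporal distance."""
    def neighbors(i):
        xi, yi, ti = points[i]
        return [j for j, (x, y, t) in enumerate(points)
                if math.hypot(x - xi, y - yi) <= eps1 and abs(t - ti) <= eps2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:
                queue.extend(k for k in nb_j if labels[k] is None)
    return labels

# Two incident bursts at the same place but different hours, plus one stray
pts = [(0, 0, 0.0), (0.1, 0, 0.2), (0, 0.1, 0.1),
       (0, 0, 5.0), (0.1, 0.1, 5.1), (0.05, 0, 5.2),
       (3.0, 3.0, 2.0)]
labels = st_dbscan(pts, eps1=0.5, eps2=1.0, min_pts=3)
```

Because of the temporal threshold, the two bursts at the same location end up in different clusters, which is exactly the behavior plain DBSCAN on coordinates alone would miss.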
DOI: 10.26102/2310-6018/2023.41.2.010
Passenger public transport is very important for the socio-economic development of any territory, which is why the issues of sustainable functioning and optimization of transportation management, underlying the authors' research, are relevant. Accordingly, the article addresses the problem of optimizing passenger transportation management under conditions of unstable seasonal passenger traffic in cities. The principal method for studying this transport problem is dynamic programming, based on a set of recurrence relations. The article presents an optimization criterion for the situational task of managing passenger traffic, demonstrates the objective function that allows the optimal additional distribution of passenger vehicles along each city route depending on the time of year, identifies the optimal number of passenger vehicles, and substantiates the use of dynamic programming in solving the transportation problem. As a result of the study, an algorithm that determines the required amount of rolling stock on a public transport route by dynamic programming has been developed, and the results of calculations for different periods of instability of seasonal passenger traffic have been provided. The materials of the article are of practical value for applied researchers in the road transport complex.
Keywords: dynamic programming, task, criterion, modeling, optimization, transportation, seasonality, transport
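The recurrence scheme for additionally distributing vehicles over routes can be sketched as a classical resource-allocation dynamic program; the gain tables below are hypothetical, whereas the paper ties the objective to seasonal passenger-flow data.

```python
def allocate_vehicles(gains, total):
    """gains[i][x] = effect of assigning x extra vehicles to route i
    (x = 0..total). Returns (best total effect, per-route allocation)."""
    n = len(gains)
    # best[i][k]: maximum effect over routes i..n-1 with k vehicles left
    best = [[0.0] * (total + 1) for _ in range(n + 1)]
    choice = [[0] * (total + 1) for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for k in range(total + 1):
            for x in range(k + 1):
                v = gains[i][x] + best[i + 1][k - x]
                if v > best[i][k]:
                    best[i][k] = v
                    choice[i][k] = x
    # Recover the allocation from the stored decisions
    alloc, k = [], total
    for i in range(n):
        x = choice[i][k]
        alloc.append(x)
        k -= x
    return best[0][total], alloc

# Three routes, up to 4 spare buses, diminishing returns per route
gains = [
    [0, 5, 8, 10, 11],
    [0, 4, 7, 9, 10],
    [0, 6, 9, 11, 12],
]
effect, alloc = allocate_vehicles(gains, 4)
```

Each outer iteration applies one recurrence relation of the kind the article describes: the best use of the remaining fleet on the remaining routes.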
DOI: 10.26102/2310-6018/2023.40.1.018
The article deals with regression mathematical models that describe the influence of mechanical and automatic microclimate control systems on the growth and development of Arbor Acres cross broiler chickens under floor housing at the Sayansky Broiler agro-industrial complex. The paper considers the influence of such microclimate parameters as temperature, humidity and illumination. To test the statistical hypothesis of homogeneity of the two samples considered, the Cramer-Welch and Wilcoxon tests are employed. Chow's test is applied to assess the possibility of constructing two different mathematical models of the same type that illustrate the patterns of development of the modeled indicators. Statistical estimates of the significance of the constructed models and of the factors included in them are calculated. An interpretation of the regression analysis results in relation to the subject area under study is given. In addition, a graphical visualization of the analysis of the input and output data of the constructed models was performed. The factors are ranked according to the degree of their impact on the resulting indicator using elasticity coefficients and the shares of their influence. The main production indicators are calculated based on the results of livestock rearing: average daily gain, absolute gain, relative growth rate, and survival rate. The article also calculates the economic effect for one full cycle of raising broiler chickens.
Keywords: mathematical modeling, regression model, determination coefficient, statistical significance of the model, Arbor Acres cross broilers, microclimate
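Chow's test mentioned above can be sketched as follows; the growth samples are synthetic, standing in for houses with automatic versus mechanical microclimate control.

```python
import numpy as np

def rss(x, y):
    """Residual sum of squares of a simple linear fit y ~ a + b*x."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sum((y - (intercept + slope * x)) ** 2))

def chow_f(x1, y1, x2, y2, k=2):
    """Chow F-statistic: a large value means the two samples are better
    described by separate regression models than by one pooled model."""
    rss_pooled = rss(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
    rss_1, rss_2 = rss(x1, y1), rss(x2, y2)
    n = len(x1) + len(x2)
    return ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2 * k))

rng = np.random.default_rng(1)
days = np.arange(30, dtype=float)
gain_auto = 50 + 55 * days + rng.normal(0, 5, 30)  # automatic microclimate
gain_mech = 50 + 45 * days + rng.normal(0, 5, 30)  # mechanical microclimate
f_stat = chow_f(days, gain_auto, days, gain_mech)
```

Comparing f_stat with the F-distribution critical value (here with k and n - 2k degrees of freedom) yields the decision on whether two models of the same type are warranted.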
DOI: 10.26102/2310-6018/2023.40.1.019
The article considers an approach aimed at enhancing the accuracy of human identification from facial images in video surveillance systems, using a reconstruction method based on generative adversarial networks. During the investigation of offenses, one often encounters video recordings of persons of interest that have low resolution or contain visual disturbances of various origins, which limits the applicability of person identification techniques based on deep learning neural networks. This causes two problems: one pertaining to detecting the face of a certain person in the video data, and another regarding the search for the selected person in a face database. Face reconstruction using generative adversarial networks is known to significantly improve low-quality face images, but this method is demanding of the content of the original image, since any occlusions and disturbances are greatly amplified. The paper presents an approach based on image preprocessing that exploits a known property of video recordings – the presence of multiple versions of an object's image across frames. The proposed algorithm corrects much of the visual noise and subsequently reconstructs the face image with high quality. During the experiments, we also found a method of facial element restoration that increases the recognizability of an unknown face by a person, which can be important during identification by witnesses.
Keywords: face recognition, face restoration, video analytics, superresolution image, generative and adversarial networks, computer vision
DOI: 10.26102/2310-6018/2023.40.1.021
Classification of ultrasound images is the prevailing tool in the diagnosis of many pancreatic diseases. It takes years of experience and training for a doctor to interpret an ultrasound image. Therefore, the development of models, methods and algorithms that improve the reliability and quality of ultrasound image interpretation through specialized software tools, reducing the risk of diagnostic errors, is a relevant issue. The proposed method involves the segmentation of ultrasound images into rectangular segments of prescribed size and their assignment to one of three classes: oncology, pancreatitis, or an indifferent class. Classification is carried out by means of "strong" and "weak" classifiers. For the "weak" classifiers, the Walsh-Hadamard transform is employed in the formation of descriptors. Descriptors are calculated for three "weak" classifiers. For the first "weak" classifier, the spectral coefficients of the Walsh-Hadamard transform are used, calculated for the window of the entire segment. The descriptors for the other "weak" classifiers are then calculated over windows two and four times smaller than the original window. The classifier consists of three independently trained neural networks – the "weak" classifiers. To combine the output data of the neural networks, an ensemble averaging block is used. Software has been developed for classifying ultrasound images which helps to create a database for the "oncology" and "pancreatitis" class segments, determine the two-dimensional Walsh-Hadamard spectrum of ultrasound image segments, train fully connected neural networks and conduct exploratory analysis of the relevance of the two-dimensional spectral coefficients. Experimental studies on the classification of ultrasound images containing oncology and pancreatitis showed an average detection accuracy of 88.4 % for oncology and 85.7 % for pancreatitis.
Type II errors averaged 10.2 % for pancreatitis detection and 5.2 % for oncology detection. Real pancreatic ultrasound data were used to train and test the classifiers.
Keywords: ultrasound, pancreas, oncology, pancreatitis, disease detection, segmentation of ultrasound images, neural network, classification of ultrasound images
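Descriptor extraction with the two-dimensional Walsh-Hadamard transform can be sketched as follows; the segment contents, window size and normalization are illustrative assumptions (the transform assumes a square segment whose side is a power of two).

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a 1-D array (natural ordering)."""
    a = a.astype(float)
    n = a.shape[0]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y          # butterfly: sums
            a[i + h:i + 2 * h] = x - y  # butterfly: differences
        h *= 2
    return a

def wht2(segment):
    """2-D WHT: transform every row, then every column."""
    rows = np.apply_along_axis(fwht, 1, segment)
    return np.apply_along_axis(fwht, 0, rows)

def descriptor(segment, keep=8):
    """Feature vector: the low-order keep x keep block of the magnitude
    spectrum, normalized by the DC coefficient."""
    spec = wht2(segment)
    dc = abs(spec[0, 0]) or 1.0
    return (np.abs(spec[:keep, :keep]) / dc).ravel()

seg = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a segment
d = descriptor(seg, keep=4)
```

Such vectors, computed over the full window and over its halved and quartered sub-windows, would feed the three "weak" neural-network classifiers.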
DOI: 10.26102/2310-6018/2023.40.1.028
The article presents a methodology for solving the adjective ordering problem in English sentences by determining the adjectives' hypernyms. Determining a hypernym can be represented as a classification task; therefore, the most popular machine-learning classification methods were compared: the k-nearest neighbors method, logistic regression, the decision tree classifier, the support vector machine, and the naive Bayes method. The models were trained on a sample that contained adjectives and their hypernyms. For each adjective, similar adjectives from the training sample were selected, and the most semantically appropriate hypernym was determined based on them. The use of word-similarity information from GloVe embeddings is proposed. The optimal hyperparameter values for the k-nearest neighbors method were selected by means of grid search. The quality of data classification was evaluated using the precision, recall, and F1-measure metrics for each of the methods. Since there were no ready-made datasets of classified adjectives, 300 adjectives were classified manually to create the necessary samples.
Keywords: adjective ordering, natural language processing, word vector representation, GloVe, classification methods, hypernyms
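The nearest-neighbor hypernym step can be sketched as follows; the tiny embedding table stands in for GloVe vectors and, like the class labels, is purely illustrative.

```python
from sklearn.neighbors import KNeighborsClassifier

emb = {  # toy 3-d "GloVe" vectors (illustrative, not real embeddings)
    "red":   [0.9, 0.1, 0.0], "blue":  [0.8, 0.2, 0.1],
    "green": [0.85, 0.15, 0.05],
    "big":   [0.1, 0.9, 0.1], "huge":  [0.05, 0.95, 0.15],
    "tiny":  [0.15, 0.85, 0.0],
    "old":   [0.1, 0.1, 0.9], "young": [0.0, 0.2, 0.85],
}
train_words = ["red", "blue", "big", "huge", "old", "young"]
train_labels = ["color", "color", "size", "size", "age", "age"]

# Cosine similarity matches the word-similarity notion used with GloVe
knn = KNeighborsClassifier(n_neighbors=3, metric="cosine")
knn.fit([emb[w] for w in train_words], train_labels)

# Unseen adjectives are assigned the hypernym voted by similar ones
predicted = knn.predict([emb["green"], emb["tiny"]])
```

In the article's setting, `n_neighbors` and the similarity metric are exactly the kind of hyperparameters tuned by grid search.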
DOI: 10.26102/2310-6018/2023.40.1.017
Coiled tubing technologies are actively used in well drilling and well intervention. During the operation of a coiled tubing unit, it is necessary to obtain a real-time assessment of the residual life of the unit's equipment, in particular the residual life of the coiled tubing itself. The main damaging factors for a flexible pipe include bending loads, internal pressure, axial loads, and exposure to aggressive media. The most important task in predicting the state of coiled tubing is the construction of a mathematical model that describes as accurately as possible the process of fatigue damage accumulation under low-cycle loads. An analysis of the available literature showed that it is essential to develop methods and algorithms for assessing the bending fatigue of the flexible tubing material along a complex trajectory of movement where the pipe is subjected to bending loads of different intensities. The solution of this problem motivates the development of a mathematical model that relates the calculation of damage in the region of low-cycle deformation to previously accumulated bending damage. The purpose of this research is to develop methods and algorithms for constructing a predictive model of the current state of the coiled tubing material, considering the accumulated damage, based on semi-empirical models within the kinetic theory of fatigue. Using methods for constructing algorithms for processing low-cycle test data within the kinetic theory of fatigue, together with mathematical models for estimating the residual life of a test sample, the article proposes a solution that calculates the damage parameter of the sample as damage accumulates in various sections of the coiled tubing trajectory.
The materials of the article are of practical value for researchers dealing with the problems of calculating the residual life of flexible pipes under cyclic deformation.
Keywords: coiled tubing, low-cycle fatigue, damage accumulation, cyclic stresses, kinetic theory of mechanical fatigue, equivalent stresses
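Assuming a linear (Palmgren-Miner) accumulation rule and a Coffin-Manson-type low-cycle curve (a common semi-empirical combination, not necessarily the authors' model), the per-section damage summation can be sketched as follows; all material constants are illustrative, not coiled-tubing certification data.

```python
def cycles_to_failure(strain_amplitude, c=0.5, beta=-0.5):
    """Coffin-Manson form eps_a = c * (2N)^beta, solved for N.
    c and beta are illustrative material constants."""
    return 0.5 * (strain_amplitude / c) ** (1.0 / beta)

def accumulated_damage(sections):
    """sections: list of (strain_amplitude, n_cycles) per trajectory
    section. Linear accumulation: D = sum n_i / N_i; D >= 1 predicts
    failure of the pipe material."""
    return sum(n / cycles_to_failure(eps) for eps, n in sections)

# Pipe path: gooseneck, reel and straight sections with different bending
history = [(0.02, 40), (0.015, 60), (0.005, 10)]
D = accumulated_damage(history)
residual_share = max(0.0, 1.0 - D)  # remaining life fraction
```

Updating D section by section as the pipe moves is what gives the real-time residual-life estimate the abstract calls for.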