The development of design documentation is a multi-stage process that requires significant time and labor and involves many participants. It includes preparation of the documentation, its agreement, and its refinement to the required level of quality. The move towards electronic design and the use of automated information systems for document agreement affects the process in two ways. On the one hand, certain operations are performed faster. On the other hand, new operations appear within the process, and the need to prepare a response in the form of a list of comments for developers becomes increasingly pressing, which requires additional time. As a rule, the list of comments is compiled by the approving persons without automation tools, which is related to the specificity of design documentation and the absence of a single correct design solution. The paper reviews existing approaches to managing the development of electronic design documentation and describes a proposed approach to information support of documentation development management based on intelligent analysis of approvers' comments and knowledge engineering technology. An information support system implementing the proposed approach was successfully piloted at a metal-working equipment company.
Keywords: decision support, electronic design documentation, documentation development management, document agreement, intelligent technologies, comment, intelligent comment analysis
The use of modern technologies and methods of data analysis makes it possible to create more advanced information systems for organizing a high-quality business. Therefore, when designing a corporate data architecture, many factors must be taken into account and an appropriate maturity level selected. This article provides an overview of technologies and methods in the field of data analysis. The tools in question are in demand when building a company's analytical architecture and can be used to search for, access, and process data. Their advantages and disadvantages are considered. The technologies are compared by a number of characteristics, namely: the organization of data access, the method of building a data warehouse, the process of extracting and converting data, and the process of building a business report. These aspects are the main ones when choosing tools for building a corporate architecture in the field of data analysis, since they are key to analytical data processing.
Keywords: analytical data processing, maturity levels of the corporate architecture in the field of data analysis, data warehouses, systematization of data, designing a corporate data architecture
The article is devoted to developing a method for managing safe evacuation conditions within the framework of a fire risk calculation procedure, based on improving the approach to determining the required evacuation time. The existing coefficient of 0.8, used to determine the maximum permissible estimated evacuation time, was assessed. It was found that the linear dependence introduced by this coefficient does not reflect the specific influence of the blocking time at either low or high values of that time. In addition, the existing method for determining safe evacuation conditions requires improvement, since it does not take the blocking time of escape routes into account. On the one hand, this increases the danger to evacuees when the blocking time is short; on the other hand, it increases the cost of ensuring fire safety when evacuation times are long. Two ways of improving the coefficient are proposed, each representing a more rational approach to determining it when assessing safe evacuation conditions. A parametric assessment of the proposed methods was carried out in comparison with the existing one. The results showed that the proposed methods make it possible to determine the required evacuation time more rationally owing to the absence of a linear dependence on the blocking time, whereas the existing method significantly enlarges the region of unacceptable values of the required evacuation time as the blocking time increases. Computer simulation of evacuation and fire spread was carried out, and the effectiveness of the proposed methods was confirmed. Safe evacuation conditions were calculated using both the existing approach and the newly proposed ones.
The results showed that one of the proposed approaches makes it possible to assess safe evacuation conditions more rationally and to analyze them at a higher quality level, while the other proved inadequate and was rejected. Based on the accepted method, an algorithm for managing the conditions of safe evacuation of people from a building in case of fire has been developed.
Keywords: fire, evacuation, fire risk assessment, safety margin, safe evacuation, evacuation management, algorithm
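The existing linear criterion described in the abstract can be sketched as follows; the coefficient 0.8 is from the standard methodology, while the blocking-time values are hypothetical and chosen only to show how the absolute safety margin scales with the blocking time.

```python
# Sketch of the existing safe-evacuation criterion (illustrative values).
SAFETY_COEFF = 0.8  # the existing linear coefficient

def permissible_time(t_block: float) -> float:
    """Maximum permissible estimated evacuation time under the existing rule."""
    return SAFETY_COEFF * t_block

def is_safe(t_evac: float, t_block: float) -> bool:
    """Safe-evacuation condition: estimated time must not exceed the limit."""
    return t_evac <= permissible_time(t_block)

# With a short blocking time the absolute margin shrinks, while a long
# blocking time leaves a large, arguably excessive, margin.
margin_short = 120.0 - permissible_time(120.0)
margin_long = 1200.0 - permissible_time(1200.0)
```

This makes the criticized behavior visible: the margin is always a fixed fraction (20%) of the blocking time, regardless of how short or long that time is.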
The article solves the problem of quantitatively assessing the risk of fire spreading to objects adjacent to construction sites, considering the likely dynamics of this process. The risk is identified with the probabilities of adverse events associated with the occurrence of initial and intermediate sources of fire and the creation of conditions for fire spread to adjacent territories. The mathematical apparatus of discrete Markov chains with twelve states is used to solve the problem; two of the states are absorbing: ignition and non-ignition of objects adjacent to construction sites. The Markov chain is formalized as a system of algebraic equations whose solution gives the desired result. The peculiarity of the algorithm is that the calculation procedure depends on the possible scenario of the fire propagation process: both direct spread, when fire passes directly from the primary source to the ignition object, and chain spread, when fire propagates sequentially from one place to another, are taken into account. The algorithm can find application in municipal fire services.
Keywords: fire safety, fire propagation risk, discrete Markov chain, risk calculation algorithm, fire development scenario
The problem of cluster identification of emergency situations in managing the processes of ensuring technogenic and fire safety is considered. A situation is assigned to a class by comparing it with typical elements of different classes and selecting the nearest one. To do this, a measure of proximity is introduced, which depends on the form in which the features of situations are given. When the features are expressed as deterministic quantities, the squared Euclidean distance between the vectors of feature values is used as the measure of proximity of situations (the smaller the distance, the closer the situations). The corresponding definition of the features of a typical situation is the arithmetic mean of the features over the sample representing the class. When the features are given as probabilistic values, the measure of proximity is the generalized probability of identifying threatening, critical, and catastrophic situations. When the features of a situation are defined on conceptual scales, it is proposed to use the apparatus of semantic networks, and the identification of situations is understood as a multi-step process that includes: a) conceptualization of the problem; b) generation of solution options; c) evaluation and ranking of solutions; d) selection of the preferred solution. This understanding of the identification process most fully reflects the structure of human intellectual activity and allows these operations to be formalized using the apparatus of semantic networks.
Keywords: technogenic and fire safety, emergency, cluster-identification, feature, algorithm
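The deterministic case described above reduces to a nearest-centroid rule: the typical element of each class is the arithmetic mean of its sample, and a new situation is assigned to the class minimizing the squared Euclidean distance. A minimal sketch with hypothetical two-feature samples:

```python
import numpy as np

# Hypothetical class samples (feature vectors of known situations).
samples = {
    "threatening":  np.array([[1.0, 2.0], [1.2, 1.8]]),
    "critical":     np.array([[4.0, 5.0], [4.4, 4.6]]),
    "catastrophic": np.array([[8.0, 9.0], [7.6, 9.4]]),
}

# Typical element of each class: arithmetic mean of the features in its sample.
centroids = {cls: pts.mean(axis=0) for cls, pts in samples.items()}

def identify(situation: np.ndarray) -> str:
    """Assign the class whose typical element minimizes the squared distance."""
    return min(centroids, key=lambda c: float(np.sum((situation - centroids[c]) ** 2)))

label = identify(np.array([4.1, 4.9]))
```

The same skeleton generalizes to the probabilistic case by swapping the distance for the generalized identification probability mentioned in the abstract.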
The paper considers the Roach Infestation Optimization (RIO) algorithm, which belongs to the class of population-based algorithms inspired by wildlife. The RIO algorithm was proposed in 2008 and can be regarded as a deep modification of the well-known particle swarm optimization (PSO) algorithm, one of the most effective algorithms of its kind. Given the high efficiency of the PSO algorithm on a wide range of global optimization problems, the study of its modification represented by the RIO algorithm is of particular interest. The purpose of the work is to implement the RIO algorithm in software and study its efficiency on the well-known complex multimodal test functions of Rastrigin and Ackley. A feature of the study is the search for the global extremum (minimum) of these functions over a wide region of the search space in which the number of local minima is extremely large. We present the formulation of the considered global optimization problem as well as a description of the RIO algorithm; a distinctive feature of the description is the use not of the original notation of the algorithm's authors but of the unified notation we use when considering other population-based algorithms. We describe the software implementing the algorithm and the organization of computational experiments to study its effectiveness. Finally, the article presents research results showing the high promise of the RIO algorithm for solving global optimization problems.
Keywords: global unconstrained optimization, population-based algorithm, particle swarm optimization algorithm, Rastrigin function, Ackley function
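The two benchmark functions named in the abstract have standard closed forms (this sketch shows the functions themselves, not the RIO algorithm). Both attain their global minimum f(0, ..., 0) = 0 while possessing a huge number of local minima, which is what makes them hard for global optimizers:

```python
import math

def rastrigin(x):
    """Rastrigin function: f(x) = 10 n + sum(x_i^2 - 10 cos(2 pi x_i))."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    """Ackley function with the standard parameters a=20, b=0.2, c=2 pi."""
    n = len(x)
    s1 = sum(xi**2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

f_star_rastrigin = rastrigin([0.0, 0.0])  # global minimum value
f_star_ackley = ackley([0.0, 0.0])        # global minimum value
```

Any population-based optimizer (PSO, RIO, and others) can be benchmarked by how reliably it recovers these zero-valued minima over a wide search region.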
Domestic and foreign publications contain no biomechanical studies of the correction of musculoskeletal structures in early-childhood patients. In addition to the frequency of congenital spinal deformities, the relevance of the work is due to the progressive course of the disease, the severity and rigidity of deformities, and the formation of compensatory curvature arcs. The most frequent complications associated with metal structures are their destabilization due to fatigue fractures of the endofixator, bone resorption around the screws, and destruction of the cortical tissue of the vertebral arch roots. Computer three-dimensional models are constructed from computed tomography data of patients with congenital deformity of the lumbar spine. The geometric models include the bodies of four vertebrae. The models are built in Mimics Medical and 3-matic Medical and then exported to the SolidWorks software package, where all individual elements are assembled and the ligamentous apparatus, intervertebral discs, and metal structures are added. Bone structures are schematized as two homogeneous isotropic layers: cortical and spongy. A method for studying the stress-strain state of the lumbar spine-endofixator system has been developed for different motor modes and different degrees of bone resorption around the screws. Biomechanical studies of the state of the lumbar spine of a three-year-old patient after surgery with an installed endofixator were carried out.
Keywords: spine, congenital scoliosis, transpedicular fixation, stress, deformity, 3d modeling
This work concerns the navigational safety of marine traffic. The paper considers the problem of planning a route for a vessel crossing water areas with heavy traffic. It should be borne in mind that the vessel's trajectory should be consistent with established navigational practice and collective navigation experience. A promising way to identify such experience is to extract the established movement patterns of a specific sea area from retrospective information about its traffic by clustering the parameters of vessel movement. The task is relevant due to the prospective development of unmanned marine vehicles. Ship route planning should be carried out considering the given restrictions when moving through water areas with established routes, and extracting the movement patterns of a specific marine area from retrospective traffic information is a possible way of identifying these restrictions. Model representations of such a problem can be formulated based on the idea of clustering ship traffic parameters. The route planning model is based on finding the shortest path on a weighted graph. There are several ways to construct such a graph: a regular mesh of vertices and edges, a layered mesh, a random mesh, or vertices and edges based on historical data. The weight of an edge is proposed to be set as a function of the "desirability" of a particular course of the vessel at each point of the water area, considering the identified movement patterns. The water area is divided into sections, and for each of them the courses and speeds are clustered. Possible clustering methods are discussed in the paper, and the choice is made in favor of subtractive clustering, which does not require the number of clusters to be specified in advance. The Automatic Identification System can serve as a source of data on water area traffic.
The paper shows the possibility of using AIS data available on specialized Internet resources. Despite their "sparseness", these data reflect the summary features of water area traffic well. Historical AIS data on sea traffic in Tokyo Bay and the Tsugaru Strait are used to identify the traffic schema and plan ship routes with the model designed in the presented research.
Keywords: marine safety, traffic intensity, ship trajectory, ship traffic, clustering, traffic area, automatic identification system
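Subtractive clustering, chosen above because it does not need the number of clusters in advance, can be sketched as follows: each point's "potential" is a sum of Gaussian contributions from all points, the point of maximum potential becomes a centre, its influence is subtracted, and the process repeats until the remaining peak is small. The (course, speed) data points, radii, and stopping ratio below are hypothetical.

```python
import numpy as np

def subtractive_clustering(X, ra=1.0, rb=1.5, stop_ratio=0.15):
    """Peel off cluster centres until the residual potential is small."""
    alpha = 4.0 / ra**2
    beta = 4.0 / rb**2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    potential = np.exp(-alpha * d2).sum(axis=1)
    centres = []
    first_peak = potential.max()
    while True:
        k = int(np.argmax(potential))
        if potential[k] < stop_ratio * first_peak:
            break
        centres.append(X[k].copy())
        # Subtract the chosen centre's influence to suppress nearby points.
        potential = potential - potential[k] * np.exp(
            -beta * np.sum((X - X[k]) ** 2, axis=1))
    return np.array(centres)

# Two well-separated hypothetical traffic patterns (scaled course, speed).
X = np.array([[0.9, 1.2], [1.0, 1.1], [1.1, 1.2],
              [2.7, 0.5], [2.8, 0.6], [2.9, 0.5]])
centres = subtractive_clustering(X)
```

On this toy sample the procedure stops by itself after recovering one centre per traffic pattern.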
One of the pressing problems in existing behavior analysis systems is the extraction of signs of anomalous user activity from large arrays of input data. The problem solved in this study stems from the impossibility of searching for anomalous user activity based directly on movements, due to the high variability of the input data. The aim of the study is to develop a modified density clustering method for use in a mobile behavioral analysis system that applies machine learning methods and algorithms to find deviations in user behavior based on their movements. The article provides a comparative analysis of the density clustering methods used in the developed software package for finding anomalies in the behavioral biometric characteristics of system users. Smoothing interpolation of the input data is performed. The results of searching for anomalies with the modified spatial clustering method under different input parameters are described and compared with the basic method. The developed spatial clustering method improves the quality of analysis of anomalous activity in users' movements. Finding deviations in the collected data will ensure a timely response by the system administrator to deviations from a user's behavioral profile.
Keywords: machine learning, big data, data science, software, information system, unstructured data, behavioral analysis, behavioral biometrics, biometric characteristics, artificial intelligence
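The baseline density-clustering step can be illustrated with the standard DBSCAN algorithm (a stand-in for the paper's modified method): movement points that fall in no dense region receive the noise label -1 and can be flagged as anomalous activity. The coordinates, eps, and min_samples below are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic movement data: a habitual area plus one unusual trip.
rng = np.random.default_rng(0)
routine = rng.normal(loc=[55.75, 37.62], scale=0.01, size=(50, 2))
outlier = np.array([[55.95, 37.30]])
points = np.vstack([routine, outlier])

# Points outside every dense region are labelled -1 (noise).
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(points)
anomalies = points[labels == -1]
```

A modified density method would plug into the same place as `DBSCAN` here, with the noise set feeding the administrator alert described in the abstract.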
The paper considers the problem of forming a relatively complete set of factors that determine the relationship and mutual influence between housing construction and the social sphere. Seven aspects are highlighted, reflecting different facets of this relationship: 1) provision of the country's population with housing; 2) availability of work in construction and employment of the population in this area; 3) housing construction as a socio-economic indicator of the state of the economy as a whole and the dynamics of its development, with a projection onto the social sphere; 4) the significance of the housing problem for solving other social problems; 5) the state of small business involved in providing repair and construction services; 6) innovations in housing construction; 7) environmental safety during renovation of the housing stock and construction of new housing. For each of the selected aspects, a set of at least four factors is formed, revealing and detailing the features of its implementation. To demonstrate the direction of further research in accordance with the chosen scheme of system analysis, for one of the factors the parameters that determine its value have been identified and a mathematical model has been formed for numerically assessing the factor's significance. In the future, it is planned to build similar models to assess the significance of the other factors and, on their basis, to construct an integral criterion characterizing the significance of the housing construction sector for the social sphere as a whole.
Keywords: system analysis, housing construction, social sphere, aspects of inter, socially significant factors, classification, model for assessing the significance of a factor
The article discusses the need for intelligent decision support in managing the process of forming a securities portfolio. Modern investor decision-making systems are based on classical portfolio management theory and assume that the market efficiency requirement is fulfilled, but the modern stock market, both domestic and global, cannot satisfy this condition. Effective decisions require new methods and models of portfolio management. To find the optimal portfolio, a modified particle swarm method is used; its advantages are investigated, among them a reduction in the number of objective function evaluations by 34% or more. The proposed algorithm for intelligent decision support enables a choice in three respects: the method of determining model parameters, the financial risk assessment model, and the optimal portfolio structure. The knowledge base contains a database and a base of precedents; the rule base includes indicators of the effectiveness of the financial risk assessment model aggregated over all precedents. Growth of the precedent base increases the reliability of the risk-measure efficiency assessment and makes it possible to form (or adapt) production rules.
Keywords: intelligent decision-making support, particle swarm method, nonlinear optimization, portfolio optimization, risk measure
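A plain (unmodified) particle swarm applied to portfolio selection can be sketched as below: minimize portfolio variance w'Cw over long-only weights summing to one, with a simple clip-and-normalize projection keeping particles feasible. The covariance matrix, swarm size, and coefficients are hypothetical, and this is the textbook PSO, not the paper's modified variant.

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[0.10, 0.02, 0.01],   # hypothetical asset covariance matrix
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.12]])

def project(w):
    """Project onto the long-only budget constraint (clip, then normalize)."""
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def risk(w):
    return float(w @ C @ w)

n_particles, dim, iters = 20, 3, 200
pos = np.apply_along_axis(project, 1, rng.random((n_particles, dim)))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([risk(w) for w in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.apply_along_axis(project, 1, pos + vel)
    f = np.array([risk(w) for w in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

min_risk = risk(gbest)
```

The modification studied in the paper targets exactly the inner loop above, cutting the number of `risk` evaluations needed to reach the optimum.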
To improve the management of large technical systems, the paper considers the integration of methodological solutions for the development and combined application of a model for ensuring sustainability and methods for assessing the sustainable functioning of such systems. The model is designed to simulate the dynamics of the stability states of a large technical system by calculating losses, such as the time and resources spent on restoring the system under adverse conditions. For systematization and the most rational use of the simulation results in computer programs, unified tabular forms suitable for modeling the stability states of various systems have been developed within the model. The simulation results are proposed to be used in the methodology for assessing the stability of functioning of large technical systems. The methodology calculates a complex indicator: the coefficient of stability of functioning of a large technical system. The formula for this coefficient is derived from the criterion of the effectiveness of a complex system under resource conservation and timely restoration of system elements. A computer program implements this technique and uses the unified tabular forms from the model for ensuring system stability. The combined application of the model and the methodology is aimed at making timely, rational management decisions to ensure the stable functioning of large technical systems under unstable dynamics of semi-structured data on the state of the system, the availability of resources, and the effects of the external environment. In the future, the presented model and methodology can be used to develop methods for registering complex events (within visibility on the event horizon) and identifying scenarios of their development in order to improve the efficiency of proactive management of large technical systems.
Keywords: sustainability criteria, modeling the dynamics of system states, unified tabular forms, coefficient of sustainability of the functioning of the system, computer program
The paper considers the technological process of accounting for gas consumption in a gas transmission system. One of the problems of the metering system is the gas imbalance arising from the influence of many varying quantities, including the nonlinearly dependent characteristics of the working medium (natural gas), the equipment, the pipelines, and the environment. One of the components of the imbalance is the amount of gas in the main pipeline, which is influenced, among other factors, by the soil temperature at the depth of the gas pipeline, updated monthly according to statistical data. The paper proposes an approach to calculating the gas reserve based on soil temperature updated daily, as well as forecasting the gas reserve in the pipeline using regression analysis. Various machine learning methods were applied in the Matlab environment, the regression results obtained with these methods were compared, the most significant parameters for calculating the gas reserve were identified, and clustering was applied to determine the sign of the gas reserve in the pipeline. The modern mathematical apparatus and computing facilities can be used to develop software for subsequent integration into complex computing systems.
Keywords: gas, balance, reserve, soil, temperature, transmission, system, regression, model, algorithm
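The regression step can be illustrated with a minimal ordinary-least-squares fit of the gas reserve on daily soil temperature. The linear model and the synthetic data below are hypothetical stand-ins for the paper's Matlab experiments, which compared several machine learning methods.

```python
import numpy as np

# Synthetic data: reserve falls roughly linearly with soil temperature.
rng = np.random.default_rng(42)
t_soil = rng.uniform(-5.0, 20.0, size=200)                          # deg C
reserve = 1000.0 - 4.0 * t_soil + rng.normal(0.0, 2.0, size=200)    # th. m^3

# Ordinary least squares via lstsq: reserve = b0 + b1 * t_soil.
A = np.column_stack([np.ones_like(t_soil), t_soil])
b0, b1 = np.linalg.lstsq(A, reserve, rcond=None)[0]

pred = b0 + b1 * 10.0   # predicted reserve at +10 deg C
```

In practice the fitted coefficients would be re-estimated as the daily soil temperature series is updated, replacing the monthly statistical correction.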
The increasing structural complexity of corporate telecommunication networks makes the issues of ensuring the availability of the network services they provide ever more pressing. At present, the main way to increase network availability is to increase bandwidth by introducing new network devices and segments into the existing network, which is an ineffective measure, since it does not take into account the structural and topological characteristics of the network. The purpose of this work is to formulate the problem of optimizing the availability of corporate software-defined telecommunication networks in order to develop an algorithm for rearranging the SDN topology that adapts to the characteristics of traffic within the network and ensures the optimal level of network availability under existing constraints. As a result of the algorithm's operation, it is proposed to find an optimal virtual topology (a set of nodes and edges) for which, with fixed values of the availability criteria of communication channels, the network availability calculated according to the described assessment method would be maximal. The calculation performed according to the described method showed that increasing the number of edges of the network graph increases the availability of the entire network at a fixed value of the channel availability criterion; however, uncontrolled addition of links may introduce significant nonlinearity into the availability of existing communication channels, which requires further research. To enable development of an effective optimization algorithm under uncertainty and nonlinearity, the optimization problem was formulated with certain assumptions.
Keywords: availability, software-defined networking, SDN, OpenFlow, network topology
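The effect noted above (more edges at a fixed link availability raise overall network availability) can be checked exactly on a toy graph by enumerating all link up/down states and summing the probability that the surviving graph stays connected. The topologies and the 0.9 per-link availability are hypothetical.

```python
from itertools import product

def connected(nodes, edges):
    """Connectivity check by depth-first search over the surviving edges."""
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def availability(nodes, edges, p=0.9):
    """Probability the network stays connected with i.i.d. link availability p."""
    total = 0.0
    for state in product([True, False], repeat=len(edges)):
        up = [e for e, ok in zip(edges, state) if ok]
        prob = 1.0
        for ok in state:
            prob *= p if ok else 1.0 - p
        if connected(nodes, up):
            total += prob
    return total

nodes = {0, 1, 2, 3}
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
ring_plus_chord = ring + [(0, 2)]

a_ring = availability(nodes, ring)
a_chord = availability(nodes, ring_plus_chord)
```

For the 4-node ring the exact value is p^4 + 4p^3(1-p) = 0.9477 at p = 0.9, and adding the chord strictly increases it; exhaustive enumeration only works for small graphs, which is one reason the optimization problem in the abstract is nontrivial.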
The task of detecting and observing targets has always been relevant. One of the most important objectives of radar development is to improve target recognition. There are two ways to achieve this: first, installing more powerful radar systems, which is expensive and hard to implement where space is limited, for example, on aircraft; second, enhancing the quality of the received signal with mathematical methods, which saves considerably on additional equipment. One of the main problems of recognition is that the number and angular location of targets can be difficult to determine from the signal received by the radar system. This problem can be addressed by employing a wavelet transform. The method makes it possible to overcome the Rayleigh criterion and thus to obtain angular super-resolution, i.e., to surmount the classical diffraction limit on the spatial resolution of an image focused by a lens, which is less than half the radiation wavelength. Using a mathematical model of a radar station, the article presents the results of numerical experiments on achieving super-resolution by algebraic methods at a significant noise level. We examine the suitability of different types of wavelets, namely the Haar wavelet, the symmetric Haar wavelet, and the Wave wavelet.
Keywords: wavelet transform, computer modeling, super-resolution, target search, simulation model
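One level of the orthonormal Haar wavelet transform, the simplest of the wavelets examined, is sketched below: pairwise sums give a smoothed approximation and pairwise differences give detail coefficients in which sharp signal features stand out. The test signal is synthetic.

```python
import numpy as np

def haar_level(signal):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction of the signal from one transform level."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 2.0, 2.0, 6.0, 2.0, 2.0, 2.0, 2.0])  # one sharp feature
approx, detail = haar_level(x)
restored = haar_inverse(approx, detail)
```

Because the transform is orthonormal, energy is preserved between the signal and its coefficients, and the largest detail coefficient marks the location of the sharp feature.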
The aging process is a complex multifactorial phenomenon influenced both by external factors (climatic, economic, and political conditions) and by individual characteristics of the body. Modeling this process is therefore a non-trivial task requiring a versatile approach. An analysis of the literature shows that in modeling the rate of aging, both conceptual models [1-4], which give an idea of how to assess the aging process in principle, and more specific computational models [5-9], which make it possible to predict the rate of aging, are used. In constructing computational models, there is a contradiction between the completeness of a model and its usability for forecasting. Thus, models that represent all the relationships in the aging process well, usually constructed on graphs, are difficult to apply to numerical estimation of the aging rate, although some of them make it possible to construct individual aging trajectories [8-9]. At the same time, models with a strong numerical apparatus for estimating the rate of aging [5-6] are, as a rule, tailored to a narrow problem and do not cover the full complexity of the aging process. In this situation, the use of machine learning methods in computational models for estimating the rate of aging is a very promising direction [10-15], since it allows the whole variety of factors of the aging process to be taken into account without delving into the essence of the process itself. In this paper, machine learning methods are used to analyze the correlation of patients' functional indicators with their calendar age and to build models for predicting patients' biological age. The data analysis was carried out with the author's own code in Python in the Anaconda environment.
For the analysis, we used 10 functional indicators of 1185 patients from the database of the regional clinical psycho-neurological hospital for war veterans. The analysis showed a statistically significant correlation of the indicators used with the patients' calendar age. Five regression models were constructed using various tools of the Python sklearn library (batch gradient descent, stochastic gradient descent, ridge regression, ridge regression with Bayesian selection, and the support vector machine method), and compositions of decision-tree algorithms (random forest and boosting) were also used. To improve model quality, we used feature selection (add-del) as well as outlier detection and removal using the support vector machine method, the isolation forest method, and the nearest neighbor method. All the models obtained are adequate (verified by the Fisher criterion), but the most accurate (R2 = 0.75) was the random forest composition on the full feature set after removal of anomalies by the support vector machine. Modeling with linear models showed that the highest weights in the model belong to three functional indicators: accommodation, vital capacity of the lungs, and hearing acuity.
Keywords: regression problem, feature selection, finding and removing anomalies, machine learning, biological age
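The winning pipeline described above (anomaly removal by a support-vector method, then a random forest on the full feature set) can be sketched with scikit-learn. The synthetic "functional indicators" and the age-generating rule below are hypothetical stand-ins for the clinical data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.svm import OneClassSVM

# Synthetic stand-in: 10 functional indicators, age driven by three of them.
rng = np.random.default_rng(7)
n, n_features = 400, 10
X = rng.normal(size=(n, n_features))
age = 60 + 8 * X[:, 0] - 5 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 2, size=n)

# Step 1: flag anomalies (label -1) with a one-class SVM and drop them.
mask = OneClassSVM(nu=0.05).fit_predict(X) == 1
X_clean, age_clean = X[mask], age[mask]

# Step 2: random forest regression on the cleaned sample.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_clean, age_clean)
r2 = r2_score(age_clean, model.predict(X_clean))
```

On real data the R2 would of course be evaluated on held-out patients rather than on the training sample used here for brevity.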
This article addresses the effectiveness of using manufacturers' cutting data calculators. The methods they employ are analyzed. An experiment is presented that demonstrates the opposite effect: a cutting process with unsatisfactory surface quality and increased cutting tool wear when an instrumental calculator is used. The processes affecting cutting tool wear are considered, and their main types are distinguished for the purpose of constructing zones in which one or another type will prevail. A method of searching for the optimal area of cutting conditions in the form of a wear map is proposed. Its flexible application with adjustable parameters is described, such as limiting the capabilities of the equipment and introducing correction zones, which are necessary to account for errors in the properties of the material being processed. An algorithm for its operation using abrasive, adhesive, and diffusion mathematical models of cutting tool wear is also given. A comparative analysis was carried out, comparing the tool manufacturer's cutting calculator with the proposed method of finding the optimal area of cutting conditions in the form of a map. Sources for populating the database on which the tool wear map relies are given.
Keywords: area of optimal cutting conditions, wear map, abrasive wear, adhesive wear, diffusion wear
The article describes the features of constructing and implementing a model and algorithm for the optimal allocation of objects on the territory of oil and gas areas (OGA). The model and algorithm are based on a formalized representation of geodynamic risks in the form of fuzzy relations and on the method of block risk classification of objects. This approach makes it possible to assess the risk stability of OGA objects, to rank natural, technogenic, and anthropogenic processes according to the degree of their impact on these objects, and to assess the integral risks to OGA objects from these processes. It is shown that the numerical method for solving geodynamic risk management support tasks is effective in developing planning solutions and evaluating their effectiveness in managing the development of territories containing OGA. The model and algorithm for allocating objects on OGA territory so as to minimize risks address an urgent scientific and technical problem: ensuring security against possible manifestations of geodynamic threats. The task is solved in four stages: assessment of the risk values for each OGA object from a certain source of impact; ranking of natural, technogenic, and anthropogenic processes by the degree of their impact on specific OGA objects; ranking of the zones of the OGA territory by safety level and assessment of their risk status; and optimization of the placement of oil and gas complex objects taking geodynamic risks into account.
Keywords: modeling, algorithm, optimization, allocation, construction object, oil and gas field, fuzzy relations, clustering, geodynamic threat, risk management
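The fuzzy-relation formalization mentioned above can be sketched with a standard max-min composition: a "process threatens zone" relation composed with a "zone hosts object" relation yields an integral "process threatens object" risk, from which objects can be ranked. All membership values and dimensions here are illustrative, not taken from the paper.

```python
import numpy as np

def max_min_composition(R, S):
    """Max-min composition of fuzzy relations R (m x k) and S (k x n)."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# processes x zones: membership of "process i threatens zone j"
R = np.array([[0.8, 0.2, 0.5],
              [0.3, 0.9, 0.4]])
# zones x objects: membership of "zone j hosts object k"
S = np.array([[0.6, 0.1],
              [0.7, 0.9],
              [0.2, 0.5]])

risk = max_min_composition(R, S)            # processes x objects
integral = risk.max(axis=0)                 # integral risk per object
ranking = integral.argsort()[::-1]          # objects ranked by risk
```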
The possibility of developing a power amplifier for a pulsed radar system with specified input and output power parameters (Pin = 1 W, Pout = 1 kW) is considered. Reducing the number of amplification stages, and consequently the cost of the device, is achieved by combining metal-oxide-semiconductor and bipolar transistors in the power amplifier design. When developing the basic electrical circuit, the cost, weight, and size of the structural elements were taken into account in addition to the power requirements. For the calculated power amplifier, a schematic diagram was developed, the element base and printed circuit board substrate were selected, the matching circuits were calculated, and the power amplifier design was developed. The proposed design fully provides the required output signal parameters.
Keywords: radar system, power amplifier, secondary radar location, radioelectronic equipment, air traffic control, printed circuit board, microstrip line
The article presents approaches and technologies for analyzing trends in technology development based on the network semantic structure "Subject-Action-Object" (SAO). From the standpoint of information about an invention, the most important source is the patent description of the invention. In electronic patent databases, every patent begins with this description, which in turn has its own title page. The form of the patent description is unified, so all patents are structured identically. It is precisely this block of the patent, the information about the invention, that is investigated using the SAO network semantic structure. To solve this problem, the structure of the patent was studied, and Hadoop technologies, Spark MLlib, and clustering methods were applied. Grid computing technologies were chosen as an efficient means of processing large volumes of text data in the form of patents. The following were developed: an algorithm for parsing a patent document; an algorithm for preprocessing the text documents of a patent selection; a Subject-Action-Object (SAO) extraction algorithm; and an algorithm for forming a patent landscape for a given time period. The concept and architecture of the automated system were formed, and the proposed algorithms were implemented in software.
Keywords: SAO-structures, technology development trends, semantics, automated system
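A minimal illustration of SAO triple extraction, assuming tokens are already part-of-speech tagged: scan for a NOUN-VERB-NOUN pattern. Production extractors rely on dependency parsing rather than adjacency, so this toy rule only approximates the idea; the example sentence and tags are made up.

```python
# Toy Subject-Action-Object (SAO) extractor over pre-tagged tokens.
# A real pipeline would use a dependency parser to find subject and
# object arcs; adjacency is a crude stand-in for illustration.

def extract_sao(tagged):
    """tagged: list of (word, pos) pairs; returns list of (S, A, O) triples."""
    triples = []
    for i in range(len(tagged) - 2):
        (w1, p1), (w2, p2), (w3, p3) = tagged[i:i + 3]
        if p1 == "NOUN" and p2 == "VERB" and p3 == "NOUN":
            triples.append((w1, w2, w3))
    return triples

sentence = [("laser", "NOUN"), ("cuts", "VERB"), ("metal", "NOUN"),
            ("with", "ADP"), ("precision", "NOUN")]
print(extract_sao(sentence))  # [('laser', 'cuts', 'metal')]
```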
Natural emergencies have a significant impact on surrounding areas and real-world objects. Owing to the vast scale of the territory, its climatic conditions, and its landscape and geographical characteristics, natural emergencies are the most dangerous ones in Russia. For the northern regions of the country, one such situation is flooding. Severe climatic conditions prevail in the northern territories, and the average rate of temperature growth there is twice as high as in other regions of the country, which can lead to the retreat of permafrost and, in turn, to dangerous hydrometeorological phenomena. The goal of this work is to determine the flooded zones of the Amga River during the spring flood using geoinformation technology. The object of the study is the middle course of the Amga River, which was flooded in 2018. The subject of the study is the prediction of flooded zones of the Amga River from satellite observations. Satellite images with dense cloud cover were used to determine the flooded zones. A mathematical method (based on a vegetation index), a geoinformation method (raster), and a geometric approach (based on a digital elevation model, DEM) were applied. Methodologies were developed for determining flooded zones using multispectral images, radar images, and a digital terrain model. Comparing the results obtained makes it possible to determine which zones are affected and which are at risk in the future.
Keywords: images with a thick layer of clouds, WDVI, flood, flooded zone, geoinformation systems, digital terrain model, radar image, multispectral image, Amga
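The vegetation-index step above can be sketched with the weighted difference vegetation index, WDVI = NIR − a·RED, where a is the soil-line slope. The slope, the threshold, and the reflectance values below are illustrative placeholders, not calibrated parameters from the study.

```python
import numpy as np

def wdvi(nir, red, slope=1.1):
    """Weighted difference vegetation index, WDVI = NIR - slope * RED."""
    return nir - slope * red

def flooded_mask(nir, red, threshold=-0.05):
    """Water strongly suppresses NIR reflectance, so a strongly negative
    WDVI is treated here as a candidate flooded pixel (toy threshold)."""
    return wdvi(nir, red) < threshold

# 2x2 toy reflectance rasters: left column vegetated land, right column water
nir = np.array([[0.45, 0.05],
                [0.40, 0.03]])
red = np.array([[0.10, 0.12],
                [0.12, 0.10]])
mask = flooded_mask(nir, red)
```

In practice the mask would be intersected with a DEM-derived low-lying-terrain mask, as in the geometric approach the abstract mentions.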
This paper reviews the main domestic and international approaches to the choice of information security measures for automated process control systems. The purpose of the study was to develop a method for selecting security measures at each level of the automated process control system using set theory, as part of the analysis of basic sets of security measures. Within the study, current attacks on industrial infrastructure are considered, an algorithm for selecting protection measures for the automated process control system is constructed, and assumptions are made about the need to apply protection measures at each level of the system in accordance with an individual assessment of the security class of the corresponding level. The authors propose mathematical expressions for the minimum, basic, adapted, and refined basic sets of protection measures for automated process control systems. It is concluded that the "refinement of the adapted basic set" stage should be excluded from the algorithm for selecting security measures when the adapted basic set of information security measures already blocks all security threats at the system level under consideration. The research results are recommended for use in modeling information security threats and in developing requirements for information security tools in automated process control systems.
Keywords: automated control system, security measure, basic set, information security, information security system, set theory
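The set-theoretic selection can be sketched with Python sets: start from a basic set for the level's security class, adapt it by removing inapplicable measures, and refine it only if some threats remain unblocked. The measure names, threat names, and class label below are invented for illustration.

```python
# Toy sketch of basic -> adapted -> (conditionally) refined measure sets.
# All identifiers are hypothetical, not drawn from any regulatory document.

BASIC = {"K1": {"auth", "logging", "backup", "net_segmentation"}}

def adapt(basic, inapplicable):
    """Adapted set: drop measures that cannot be applied at this level."""
    return basic - inapplicable

def refine(adapted, threats, blocks, extra_measures):
    """Skip the refinement stage when the adapted set already blocks
    every threat at the level under consideration."""
    unblocked = {t for t in threats if not (blocks.get(t, set()) & adapted)}
    if not unblocked:
        return adapted                 # refinement stage excluded
    return adapted | extra_measures    # otherwise add refining measures

adapted = adapt(BASIC["K1"], {"net_segmentation"})
blocks = {"malware": {"logging"}, "spoofing": {"auth"}}
final = refine(adapted, {"malware", "spoofing"}, blocks, {"ids"})
```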
Segmentation of cartilage tissue in 3D magnetic resonance (MR) images is used to determine the stage of degenerative and inflammatory joint diseases. For the wrist joint, manual segmentation is an extremely laborious task because of its complex structure, which makes the development of fully automatic segmentation methods relevant. The only automated method previously proposed is based on deep learning; it provided non-uniform segmentation accuracy depending on the slice position within the 3D image. This work aims to improve the accuracy of automatic segmentation of cartilage tissue in lateral slices of wrist joint MR images using deep convolutional neural networks (CNN). Two CNN architectures were considered: the classical U-Net architecture and a truncated version of U-Net in which the deepest block of convolutions was removed. Segmentation accuracy was assessed using 3D and 2D Sørensen–Dice coefficients (DSC) and the area under the precision-recall curve (AUC-PR). The results were compared with previously published data for an automated method of wrist joint cartilage segmentation using a patch-based CNN, as well as with published results for a manual segmentation procedure. The use of U-Net-based architectures significantly improved automatic segmentation accuracy. The truncated U-Net architecture showed the best runtime (0.05 s per slice) and the highest segmentation accuracy (2D DSC = 0.77, AUC-PR = 0.844) among the reviewed CNNs on the test dataset. For slices without cartilage, the DSC increased from 0.21 to 0.75 with this architecture. Thus, the U-Net architecture provided more uniform segmentation of 3D images than the method using the patch-based convolutional neural network.
Keywords: deep learning, magnetic resonance imaging, wrist joint, cartilage, osteoarthritis, rheumatoid arthritis, segmentation
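The Sørensen–Dice coefficient used above to score the segmentation masks has the standard binary form DSC = 2|A ∩ B| / (|A| + |B|); the toy masks below are for illustration only.

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Binary Sørensen-Dice coefficient between two segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(round(dice(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

The 2D variant scores each slice independently (which is what exposes the low DSC on cartilage-free slices), while the 3D variant pools all voxels of the volume before applying the same formula.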
The article analyzes methods of evaluating the effectiveness of universities' educational portals. Among the methods considered are: assessment of the compliance of educational materials with regulatory documents; the method of expert assessments; a web-analytical approach using an SEO audit; a combined approach; the method of information and semantic systems (ISS); and the graphical method of Euler-Venn diagrams. The article proposes representing the structure of a university educational portal as a directed graph. As a criterion of the effectiveness of the portal's organization, it is proposed to use the total time spent by a student on each page of the educational portal during one session, where the total time is a function of the sequence of page views and the viewing time of each page. The article puts forward an approach to determining the quality of educational information presentation and the effectiveness of training by evaluating the time students spend on each page of the educational portal. It suggests applying an artificial neural network to process data on the time students spend on the educational portal; a feed-forward artificial neural network with two hidden layers was chosen. The proposed approach can be used in organizing both interactive learning with information technology tools and distance learning.
Keywords: mathematical model, neural network, educational discipline, educational organization, graph, sigmoidal function, algorithm
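The session-time criterion described above can be sketched directly: the portal is a directed graph of pages, and the total time of one session is a function of the page-view sequence and the per-page viewing times. The graph, page names, and times below are invented for illustration.

```python
# Toy portal graph: page -> pages reachable by a link (directed edges).
portal = {"home": ["course", "schedule"],
          "course": ["lecture", "test"],
          "lecture": [], "test": [], "schedule": []}

def session_time(sequence, view_times):
    """Total time of one session: sum of per-page viewing times along a
    path that must follow the portal's directed edges."""
    for prev, cur in zip(sequence, sequence[1:]):
        if cur not in portal[prev]:
            raise ValueError(f"no link {prev} -> {cur}")
    return sum(view_times[page] for page in sequence)

# one student session: home -> course -> lecture, times in seconds
t = session_time(["home", "course", "lecture"],
                 {"home": 12.0, "course": 40.0, "lecture": 300.0})
```

Vectors of such per-page times, collected over many sessions, would form the input to the feed-forward network the article proposes.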
The relevance of this work stems from the fact that the social and economic development of territories is one of the primary national tasks. The purpose of this work is therefore to describe an approach that makes it possible to increase the efficiency of management decisions in planning, forecasting, and programming the social and economic development of municipalities. To achieve this goal and to model the subject area, several tools are used, in particular a semantic network and cognitive maps. The authors present a semantic network that describes the basic concepts and relationships in managing the social and economic development of a municipality, a distinctive feature of which is the factors that determine the social and economic development of a settlement. These factors make it possible to identify typical municipalities within a region (country), to which certain territorial development strategies (templates) can be applied. As a supplement to the semantic model, a cognitive map is used, which makes it possible to take into account the interdependence of the indicators for assessing the social and economic development of municipalities. The proposed approach was tested on the indicators of the social and economic development of a municipality (Shegarsky district of the Tomsk region) and can be used in making managerial decisions in planning, forecasting, and programming the social and economic development of municipalities.
Keywords: social and economic development, municipality, semantic network, cognitive modeling, strategy, decision making, semi-structured system
The purpose of this work is to investigate the problem of creating a multi-agent system that monitors a context-dependent smart home and performs appropriate actions based on the home's current state. Two main intelligent features are introduced in the architecture of this project to support a multi-agent system in a smart home environment. The first learns and adapts to the movements and actions of the residents; the second predicts events that happen to them. The integration of the system is demonstrated in a simulation application of the technology in a smart home working environment. The proposed architecture extends passive sensors into smart devices by adding processing and analytical capabilities through the integration software. The study is tested in a simulated environment with a visualized house plan. It focuses mainly on the software that provides the infrastructure for intelligent monitoring of movement inside the home, including protocols for agent interaction and communication. The context awareness for the smart home proposed in this article uses a multi-agent system to represent context and knowledge shared between agents.
Keywords: intelligent agent, multi-agent technologies, smart home, context-sensitive systems, forecasting agent
This paper addresses the task of finding anomalies in data when implementing predictive analytics systems. Predictive analytics has become very popular over the past few years: it helps banks approve loans and identify suspicious account activity, email providers filter spam, and retailers predict purchase likelihood to attract customers. But predictive analytics is quite complex, and its implementation is therefore fraught with difficulties. When companies take the traditional approach to predictive analytics (that is, treat it like any other type of analytics), they often face obstacles. This is why the field needs tools for detecting anomalies in data. Such tools should help identify outlying values in order to relate them to the factors behind their occurrence and to recognize them in the future. This article describes a package in the R language that detects anomalies in multidimensional time series. The package supports three different detection methods: the n-sigma method, the CUSUM method, and the 4th-order central moment method. It also searches for complex anomalies, which are a direct indicator of errors in the system, since such anomalies are found in multidimensional data.
Keywords: anomaly, outlier, time series, three-sigma rule, R language
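The n-sigma rule mentioned above can be sketched in a few lines: flag points lying more than n standard deviations from the series mean. This is a simplified stand-in for the R package's detector (the CUSUM and 4th-moment methods are omitted), and the sample data are invented.

```python
import numpy as np

def n_sigma_anomalies(series, n=3):
    """Return the indices of points farther than n standard deviations
    from the series mean (the classic three-sigma rule for n=3)."""
    x = np.asarray(series, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.flatnonzero(np.abs(x - mu) > n * sigma)

data = [10, 11, 9, 10, 10, 11, 9, 10, 95, 10, 11, 10]
print(n_sigma_anomalies(data))  # index of the outlier
```

A "complex" anomaly in the package's sense would correspond to the same timestamp being flagged in several dimensions of a multidimensional series at once.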
The most difficult and dangerous element of the flight of a helicopter-type aircraft is landing, and performing it on an unprepared site in conditions of insufficient visibility is one of the key problems. Landing on an unprepared (unsurveyed, unequipped) site may be necessitated by cargo delivery, ammunition supply in combat conditions, search and rescue operations, evacuation of the wounded, etc. The spatial position of the aircraft and its location are determined by the crew visually from the natural horizon and ground landmarks, as well as relative to other material objects and structures. When landing on snow-covered or dry ground, the air jet from the helicopter's main rotor raises a dense cloud of suspended particles, which critically reduces horizontal and vertical visibility and can lead the crew to misjudge the helicopter's spatial position relative to the ground; in addition, obstacles in the landing zone (large stones, moving and stationary objects) may remain unnoticed. At the same time, in low-light or difficult weather conditions, buildings, structures, power line masts, trees, shrubs, etc. may be located in the landing zone. A helicopter landing on a dusty, sandy, or snow-covered area may be accompanied by the creation of a dust or snow vortex around it, which impairs visibility and reduces or eliminates the possibility of a visual approach. The proposed method for assessing the spatial position of a helicopter-type aircraft in a snow vortex in the Arctic zone relates to the field of aviation, in particular to systems for ensuring safe landing of helicopter-type aircraft, and can be used in the development of control systems for landing a helicopter-type aircraft on an unprepared (unequipped, unsurveyed) site, for both manual and automatic control, under insufficient information about the spatial position in the space outside the cockpit.
Keywords: snow vortex, snow and ice cover, snow-covered pad, spatial position, insufficient information content of the space outside the cockpit, helicopter-type aircraft
Empirical studies have shown a clear dependence of a region's accident rate on the quality and efficiency of software for processing data from administrative materials on traffic violations. Choosing optimal software for computing systems that process administrative materials on traffic violations is an urgent problem today. This work investigates fifteen qualitative characteristics of such software based on component analysis, together with their quantitative indicators obtained from experts in the field of road safety management. A component analysis of the subject of study is given, and the results of analyzing three groups of qualitative characteristics are presented, allowing software to be selected both on the basis of a particular group of characteristics and on the basis of a complex quality indicator. A further development of the ideas set out in this article will be an application package for the optimal choice of software for computer systems processing data from administrative materials on traffic violations.
Keywords: component analysis, software of computing complexes, administrative materials, traffic offenses, principal component, photo and video recording complexes, integrated quality indicator
The purpose of this article is to bring partial indicators of the effectiveness of solving information exchange tasks between the elements of the system of distributed situational centers to a single scale and to establish the order of their importance. The scientific and methodological approach is based on constructing, from a selected set of performance indicators, a scalar function of a vector argument in additive or multiplicative form. The approach makes it possible to form complex indicators of the effectiveness of managing information exchange between situational centers by bringing the set of partial indicators to a single scale and establishing order relations on their importance. An essential feature of the proposed approach is the probabilistic interpretation of the vagueness of ideas about the preference of some performance indicators over others. The relative importance of the partial indicators is determined from the principle of maximum entropy, which lends a certain objectivity to the assessment results given the uncertainty of information at the early stages of planning: objectivity is achieved by using the distribution law with the maximum value of the entropy of uncertainty, a form of the probability distribution that rests on a minimum of speculation. The approach is implemented in a patent of the Russian Federation for an invention and can be used in decision support systems for managing information exchange between situational centers.
Keywords: modeling, uncertainty, efficiency, management, information exchange, situational center, government departments
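The additive convolution of partial indicators described above can be sketched as follows: map each indicator to a common [0, 1] scale, weight by importance, and sum. The weights here use the rank-order centroid scheme, a common surrogate when only an importance ordering is known; the paper's entropy-based weighting may differ in detail, and all values and bounds below are invented.

```python
def normalize(value, worst, best):
    """Map an indicator to the [0, 1] scale, higher = better.
    Works for both directions: pass bounds as (worst, best)."""
    return (value - worst) / (best - worst)

def roc_weights(n):
    """Rank-order centroid weights for indicators ranked 1..n by
    importance; w_i = (1/n) * sum_{k=i..n} 1/k, summing to 1."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

def additive_score(values, bounds):
    """Additive convolution of partial indicators into one complex index."""
    xs = [normalize(v, lo, hi) for v, (lo, hi) in zip(values, bounds)]
    ws = roc_weights(len(xs))
    return sum(w * x for w, x in zip(ws, xs))

# three hypothetical indicators, ranked by importance, with (worst, best)
# bounds: delivery probability, latency in ms, loss percentage
score = additive_score([0.9, 120.0, 4.0], [(0, 1), (300, 60), (10, 1)])
```

A multiplicative form would replace the weighted sum with a product of the normalized indicators raised to their weights.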