The article discusses an approach to the implementation of a system for steganographic analysis of digital images based on a neural network classifier. The system is used as part of an integrated system for monitoring information security events in corporate infocommunication systems. As the basic structure of the neural network classifier, a modified version of a convolutional neural network is proposed. Its preprocessing module implements the histogram method for analyzing the color and brightness characteristics of digital images. To automate the training of the neural network classifier, a module for the mass generation of stegocontainers with predefined values of the digital image type and size, as well as of the payload size, is introduced into the structure of the system being developed. Based on the developed structure of the steganalysis system for digital images, a factorial experiment was planned and conducted to evaluate the quality of the described neural network classifier in comparison with known binary statistical classifiers. The main feature of the experiment is the choice of the area under the ROC curve (AUC ROC) as the metric of classification quality. The results show that neural network classifiers can be used to solve steganalysis problems, including their implementation in advanced information security tools.
Keywords: digital steganography, digital images, convolutional neural network, binary classification, steganographic container, classification accuracy
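As an illustration of the metric chosen in the experiment above, AUC ROC can be computed directly from classifier scores: it equals the probability that a randomly chosen stego image receives a higher score than a randomly chosen clean cover. A minimal sketch with hypothetical scores (not the authors' data):

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (stego, cover) pairs that the classifier ranks correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for stego (positive) and cover (negative) images
stego = [0.9, 0.8, 0.7, 0.55]
cover = [0.6, 0.4, 0.3, 0.2]
print(roc_auc(stego, cover))  # 0.9375
```

This pairwise formulation is equivalent to integrating the ROC curve and is threshold-free, which is why it suits comparing classifiers whose score scales differ.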
In the modern world, one of the most effective ways to maintain the functioning of an organization or business and to facilitate its development is to design a website and then employ it to communicate with users and customers. The website helps to systematize all information about the organization, provides a means of e-commerce and gives representatives of the organization and users the opportunity to communicate with each other to exchange ideas or feedback on products or services. Thus, analyzing the effectiveness of the website and making appropriate decisions regarding its optimization and design changes, which will subsequently allow the company to achieve its goals, becomes increasingly relevant. In this article, a decision support system was implemented to analyze the effectiveness of a website using Web Usage Mining methods. Statistical methods, which enable performance improvement of the website based on the information received, were chosen, as well as data mining methods, in particular clustering and association rules, which are utilized to personalize content and, in the case of selling websites, purchasing offers, which will significantly increase the loyalty of users and customers.
Keywords: decision support system, Web Usage Mining, website, log file, machine learning, clustering, association rules
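The association-rule step of such a system can be sketched as follows. The sessions, page names and thresholds below are hypothetical; a production system would use a full Apriori or FP-Growth implementation rather than this pair-only toy:

```python
from itertools import combinations
from collections import Counter

def pair_rules(sessions, min_support=0.4, min_conf=0.6):
    """Mine simple one-to-one association rules A -> B over page visits.
    support(A, B) = fraction of sessions containing both pages;
    confidence(A -> B) = support(A, B) / support(A)."""
    n = len(sessions)
    item_count = Counter()
    pair_count = Counter()
    for s in sessions:
        items = set(s)                 # each page counted once per session
        item_count.update(items)
        pair_count.update(combinations(sorted(items), 2))
    rules = []
    for (a, b), c in pair_count.items():
        sup = c / n
        if sup < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]
            if conf >= min_conf:
                rules.append((lhs, rhs, sup, conf))
    return rules

# Hypothetical click sessions reconstructed from a web-server log
sessions = [["home", "catalog", "cart"],
            ["home", "catalog"],
            ["home", "blog"],
            ["catalog", "cart"],
            ["home", "catalog", "blog"]]
for lhs, rhs, sup, conf in pair_rules(sessions):
    print(f"{lhs} -> {rhs}: support={sup:.2f}, confidence={conf:.2f}")
```

A rule such as "cart -> catalog" with high confidence is exactly the kind of pattern used to personalize content and purchasing offers.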
The technologies for transferring continuous media (gas, oil, petroleum products, etc.) use carriers (main pipelines) with a topological structure similar to that of a geometric graph. A large body of literature is devoted to the mathematical modeling of transfer processes along such carriers as well as to the analysis of various optimization problems related to them, but the mathematical justification of the findings is not sufficient from the standpoint of the general mathematical theory of heat and mass transfer. The paper considers the problem of optimizing a differential-difference system, which determines the discrete-time equivalent of a differential system for the transport equation on a graph (in applications, on a network). E. Rothe's method, based on semi-discretization of the initial-boundary value problem with respect to the time variable, is employed; it helps not only to establish the conditions for the solvability of the specified problem, but also to obtain an optimization problem for the differential-difference system. Moreover, the coercivity of the bilinear form of the elliptic operator and the continuity of the quadratic functional being minimized are necessary and sufficient conditions for the unique solvability of the optimization problem. The findings are applicable in modeling network-like processes of continuum transport by formalisms of differential-difference systems with a spatial variable varying on a network-like multidimensional domain. The conditions that determine the solution of the optimization problem, or the set of such solutions, are presented. Concurrently, approaches to the analysis of the optimization problem for a system defined on a multidimensional network-like domain are outlined.
The findings underlie the analysis of optimal control problems for differential systems with distributed parameters on a graph, which have interesting analogies with multiphase problems of multidimensional hydrodynamics.
Keywords: differential-difference system, spatial variable on a graph, optimization problem, initial-boundary value problem, network (directed graph)
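As a generic illustration (not the paper's exact system), Rothe's semi-discretization replaces the time derivative in a transport-type evolution equation by a backward difference with step $\tau$, turning the evolution problem into a chain of stationary elliptic problems:

```latex
\frac{\partial u}{\partial t} + A u = f
\quad\longrightarrow\quad
\frac{u_k - u_{k-1}}{\tau} + A u_k = f_k, \qquad k = 1, \dots, N,
```

where $A$ denotes the elliptic spatial operator on the graph. Coercivity of the bilinear form of $A$ then yields unique solvability of each time step, and continuity of the minimized quadratic functional carries this over to the optimization problem for the resulting differential-difference system.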
The article deals with the issues of human resources management and the extraction of information about research activity in this field using the functionality of the eLibrary scientific electronic library. The article reflects the results of analyzing modern ideas about human resources and their management; defines the problems of extracting information about research activity and formulates the problem statement; analyzes known approaches to extracting information about research activity; offers a data processing methodology for information extraction; provides the quantitative characteristics obtained from the research and their interpretation; and reviews the results of extracting information about the main trends in human resources and the interpretation of these results. The proposed problem statement involves selecting from a set of scientific articles D a set of documents relevant to the query: the ranking of authors by research activity in the field of human resources; the ranking of journals with publications in the field; the ranking of organizations whose authors do research in the field; the ranking of authors of the most cited publications in the field; and a set of major current trends in human resources. The analysis of the content of the selected articles showed that the greatest interest in the field of human resource management is associated both with the requirements imposed on personnel by the digitalization of the economy and with the implementation of digitalization in companies.
Keywords: human resources, human resource management, information extraction, data processing methodology, digital platform, research activity, digital economy
The object of the research is the computing clusters of cloud data centers, which contain many servers, data storage systems, and an input-output system interconnected by a communication network. The goal of this research is to develop methods and models for improving the performance of a data center cluster by reducing the processing time of service requests as well as reducing equipment costs through the efficient allocation of its resources. Therefore, it is necessary to implement optimization algorithms for placing virtual machines (VMs) on physical servers in real time based on load balancing. The proposed method of resource allocation is based on an iterative greedy algorithm and a limited search procedure. The reduction in computation time is achieved by introducing restrictions on the permissible search depth. The paper puts forward a mathematical model of resource allocation, built using the Erlang model in the form of a multi-channel m-node queuing system (QS) of the M|M|m|n type with an n-place buffer, which makes it possible to determine the main indicators of service request quality in the form of QS parameters. The efficiency of this approach was tested on a simulation model built on the basis of a statistical analysis of the system's operation. An experimental study of the approach was also carried out.
Keywords: computing clusters, virtual machines, physical servers, resource allocation model, heuristic algorithms, model experiment
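The quality indicators of an M|M|m|n queuing system follow directly from its birth-death stationary distribution. The sketch below computes them for hypothetical cluster parameters (the arrival rate, service rate, server count and buffer size are illustrative, not taken from the paper):

```python
from math import factorial

def mmmn_metrics(lam, mu, m, n):
    """Stationary distribution of an M|M|m|n queue: m servers,
    an n-place buffer, Poisson arrivals (lam), exponential service (mu)."""
    a = lam / mu  # offered load in Erlangs
    # Unnormalized birth-death probabilities: k busy servers, then queueing states
    probs = [a**k / factorial(k) for k in range(m + 1)]
    probs += [probs[m] * (a / m)**j for j in range(1, n + 1)]
    norm = sum(probs)
    probs = [p / norm for p in probs]
    p_loss = probs[-1]                  # an arriving request finds the buffer full
    throughput = lam * (1 - p_loss)     # carried traffic
    return probs, p_loss, throughput

# Hypothetical cluster: 4 servers, a 3-place buffer, 6 req/s arrivals, 2 req/s service
probs, p_loss, thr = mmmn_metrics(6.0, 2.0, 4, 3)
print(f"p_loss = {p_loss:.4f}, throughput = {thr:.3f} req/s")
```

With n = 0 the same formula reduces to the classical Erlang loss model, which is a convenient sanity check when calibrating such a model against measurements.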
This article discusses the application of a situation-oriented approach to the problem of extracting semantic information from office documents. Office documents created by vector graphics editors and word processors are reviewed. The ability to extract semantic information stems from the fact that such documents are based on open XML formats that can be processed by external programs. The processing of documents based on a situation-oriented database, where word-processor documents are loaded programmatically as XML files extracted from ZIP archives, is considered. In the situation-oriented database, an office document can be presented as a virtual document that is mapped both onto XML files and onto the ZIP archive with XML files. This applies not only to text documents, but also to graphic documents that have an internal XML representation. This enables the processing of documents in Office Open XML and OpenDocument Format. The article discusses various aspects of identifying and finding the necessary information during document processing by means of standard anchors such as bookmarks, key phrases and text labels. Models and algorithms for extracting the required information are examined. Examples of the practical use of this approach in the distance learning of university students are given. In addition, an example of extracting the metadata of scientific publications in the Open Journal Systems publishing system is considered.
Keywords: situation-oriented database, built-in dynamic model, Office Open XML, OpenDocument Format
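The core idea that an Office Open XML document is a ZIP archive of XML parts, with bookmarks serving as anchors for locating required fragments, can be sketched with the Python standard library. The bookmark name and the minimal in-memory document below are hypothetical:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace, in ElementTree's {uri}name notation
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def bookmark_names(docx_bytes):
    """A text document in Office Open XML is a ZIP archive; its main part
    word/document.xml can be parsed with a standard XML parser to list
    the bookmark anchors used to locate required fragments."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        root = ET.fromstring(z.read("word/document.xml"))
    return [el.get(W + "name") for el in root.iter(W + "bookmarkStart")]

# A minimal in-memory stand-in for a .docx container (hypothetical content)
doc_xml = ('<w:document xmlns:w='
           '"http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
           '<w:body><w:bookmarkStart w:id="0" w:name="student_name"/>'
           '<w:bookmarkEnd w:id="0"/></w:body></w:document>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", doc_xml)
print(bookmark_names(buf.getvalue()))  # ['student_name']
```

The same two-layer view — ZIP container plus XML parts — is what allows a situation-oriented database to map a virtual document onto both levels at once.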
One of the most important steps to increase profits in oil production is not only investment in equipment, exploration and the discovery of new fields, but also analytics. The efficiency of oil and gas production in existing fields can be improved through a comprehensive analysis of the existing data stream. The monitoring of oil and gas production and the preventive maintenance of wells involve collecting and processing data on the functioning of wells. Such data are not always sufficient for making accurate decisions on well repair management. A number of problems cannot be identified due to the scarcity of information, and therefore the efficiency of the decisions is reduced. Well repair monitoring using data mining performs a number of functions. Firstly, it determines the status of critical well repair conditions for which an action plan will be developed. Secondly, it provides management with feedback by identifying the causes of past positive and negative results. The article proposes an oil and gas well repair analysis technique based on data mining, with sequential pattern mining of repair operations used to support management. The technique was tested in the oil and gas company Gazpromneft on oil and gas well repair data from the Gazpromneft-Noyabrskneftegaz community field.
Keywords: sequential pattern mining, oil and gas well repair, data mining, oil and gas field, well repair analysis
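The simplest form of sequential pattern mining over repair histories counts ordered pairs of consecutive operations and keeps the frequent ones. The repair histories and support threshold below are hypothetical, and a real miner (e.g. GSP or PrefixSpan) would also handle gaps and longer patterns:

```python
from collections import Counter

def frequent_bigrams(sequences, min_support=0.5):
    """Count ordered pairs of consecutive repair operations and keep
    those appearing in at least min_support of all repair histories."""
    n = len(sequences)
    counts = Counter()
    for seq in sequences:
        seen = set(zip(seq, seq[1:]))  # each pattern counted once per history
        counts.update(seen)
    return {pat: c / n for pat, c in counts.items() if c / n >= min_support}

# Hypothetical repair histories of four wells
histories = [["diagnose", "pull_tubing", "replace_pump", "test"],
             ["diagnose", "replace_pump", "test"],
             ["diagnose", "pull_tubing", "clean", "test"],
             ["diagnose", "pull_tubing", "replace_pump"]]
print(frequent_bigrams(histories))
```

Frequent pairs of this kind are what lets monitoring link a current repair state to the action plans and outcomes of past repairs.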
The paper considers the problem of optimizing cognitive model parameters in the analysis of information security risks of industrial control systems (ICS), reflecting the optimal distribution of costs for the realization, implementation, and maintenance of countermeasures, taking into account their functional limitations. A genetic algorithm for optimizing the weight coefficients of cognitive models is used, which makes it possible to determine the optimal configurations of protection measures when assessing ICS information security risks under the conditions of complex multi-step attacks. Using the example of the ICS of an oil delivery and receipt point, the countermeasure configuration is optimized to select the most effective options for allocating the resources of information security means and systems so as to minimize information security risks. The proposed approach enabled an 85% reduction in the information security risk assessment, an increase in the assessed operating efficiency of the countermeasures, and a reduction in their assessed operating cost. Analysis of the correlation between the obtained information security risk assessments within the allocated ICS zones and the costs of measures to reduce them helps to determine the mechanisms for managing the security of the system's target resources and maintaining the required security level as well as to assess the costs required for the integration and maintenance of countermeasures. The result testifies to the effectiveness of the proposed approach to optimizing the configuration of the selected countermeasures with due regard for multicriteria risk optimization and the economic aspects of ensuring the information security of the object.
Keywords: cybersecurity, risk management, fuzzy gray cognitive maps, genetic algorithm, countermeasures
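A toy genetic algorithm for the countermeasure-selection part of such an optimization can look as follows. It is a simplified stand-in for optimizing cognitive-map weight coefficients: here an individual is just a binary countermeasure configuration, and all risk-reduction values, costs and the budget are hypothetical:

```python
import random

def ga_select(risk_red, cost, budget, pop=30, gens=60, seed=1):
    """Toy GA: choose a subset of countermeasures maximizing total risk
    reduction under a cost budget. Individuals are bit vectors."""
    rng = random.Random(seed)
    n = len(cost)

    def fitness(ind):
        c = sum(ci for ci, bit in zip(cost, ind) if bit)
        if c > budget:
            return -c  # penalize configurations that exceed the budget
        return sum(ri for ri, bit in zip(risk_red, ind) if bit)

    popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:          # point mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        popn = parents + children
    best = max(popn, key=fitness)
    return best, fitness(best)

# Hypothetical countermeasures: risk reduction and cost per measure
risk_red = [40, 30, 20, 10, 25]
cost = [50, 35, 20, 10, 30]
best, value = ga_select(risk_red, cost, budget=80)
print(best, value)
```

In the paper's setting the chromosome would instead encode the weight coefficients of the fuzzy gray cognitive map, with fitness derived from the resulting risk assessment.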
Nowadays, the problems of assessing the development of oil and gas fields are becoming increasingly important. When monitoring the implementation of the strategy for the development of the oil industry, several indicators are used across the industry's areas of development. Since different indicators of objects relate to different problems of the oil industry, it is impossible to summarize these data without special tools. In the development of oil and gas fields, individual efficiency assessments by experts will not help decision-making, because the specialists who evaluate performance have different expectations. Therefore, such a situation causes a conflict of interest when considering the development of deposits, and an integral estimate is needed. The article describes the proposed model for evaluating the efficiency of field development based on the calculation of an integral indicator by means of expert methods. The developed model enables an increase in decision-making efficiency in the management of oil and gas fields. The model was tested on the example of the Severo-Ingolsky, Zimny, Orekhovo-Ermakovskoye and Alexander Zhagrin fields at Gazprom Neft; further implementation is expected as a module in the integration system for long-term development.
Keywords: field development, efficiency assessment, integrated performance indicator, expert method, oil and gas field
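One common way to build such an integral indicator is an additive convolution of normalized indicators with expert weights; the exact convolution used in the paper may differ, and the indicator values and weights below are hypothetical:

```python
def integral_score(indicators, weights):
    """Weighted additive convolution of normalized indicators (values in
    [0, 1], expert weights summing to 1) into a single integral estimate."""
    assert abs(sum(weights) - 1.0) < 1e-9, "expert weights must sum to 1"
    return sum(v * w for v, w in zip(indicators, weights))

# Hypothetical normalized field-development indicators and expert weights
scores = integral_score([0.8, 0.6, 0.9, 0.5], [0.4, 0.2, 0.3, 0.1])
print(round(scores, 2))  # 0.76
```

Ranking fields by such a score gives decision-makers a single comparable number even when the underlying indicators belong to different problem areas.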
This paper describes an approach focused on the construction of mathematical models that illustrate, from different angles, typical situations arising in the implementation of software projects. The basis of the approach is the analysis of projects for creating hardware and software complexes as a kind of subject-centric system. This lays the groundwork for the scientific adaptation of well-known approaches, used for researching complex systems of a different nature, to the field of functional safety of hardware and software complexes. In the literature, typical problem situations that occur in managing complex systems of different nature are described at the declarative level and called system archetypes. From a practical point of view, the limitation of system archetypes is that they represent situations only at a qualitative level. They do not depict the structure of the control system or the parametric dependencies of the direct and cross-links that take place in it. In this paper, several examples of constructing structural models corresponding to different system archetypes are considered. For the generation and analysis of alternatives for resolving situations, methods for converting archetypes into structural and mathematical models are proposed. The range of applicability of the proposed approach includes projects of medium scale, i.e. mass-produced projects.
Keywords: project management, problem situation, functional safety, hardware and software complex, system archetype
Equipping absorption devices for cleaning gas emissions with automatic control systems is the most effective and promising way to improve the quality of their operation and increase energy efficiency. However, the automatic control systems for mass transfer apparatuses known today are not able to maintain the extremely unstable hydrodynamic emulsification mode, even though it has the highest efficiency. The object of control in an industrial gas emission sorption purification system is a mass transfer apparatus in which the gas phase flow being purified contacts a liquid absorbent. The purpose of the control is to intensify the processes of mass transfer during the absorption refining of gas emissions under disturbing influences and to programmatically recognize the desired hydrodynamic modes of the mass transfer apparatus operation according to the actual values of technological characteristics measured during the process. The constructed mathematical model is based on the approximation of the points of adjacent filtration curves, on which the range of the hydrodynamic emulsification mode is isolated. An indicator of the emergence of the desired emulsification mode is the appearance of "bursts" in the value of the turbulence index as the flow rate of the gas phase increases. When the proposed mathematical model is used in a real automatic control system, the coefficients determined during experimental studies can be identified automatically and used subsequently in the calculation of control actions. It is advised that the identification of the mathematical control model on a real mass transfer apparatus be carried out automatically during the auto-calibration of technological parameters.
Keywords: mathematical model, control, automation, mass transfer, absorption, gas purification, hydrodynamics, turbulence index, emulsification mode
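Detecting "bursts" in the turbulence index can be sketched as flagging samples that exceed the running statistics of the preceding trace. This is a hypothetical toy detector, not the paper's approximation of filtration curves, and the trace values are illustrative:

```python
def detect_bursts(turbulence, k=2.0):
    """Flag samples of the turbulence index that exceed the running mean
    of all preceding samples by k standard deviations — a toy indicator
    of the onset of the emulsification mode."""
    bursts = []
    for i in range(3, len(turbulence)):   # need a few points for statistics
        prev = turbulence[:i]
        mean = sum(prev) / i
        var = sum((x - mean) ** 2 for x in prev) / i
        if turbulence[i] > mean + k * var ** 0.5:
            bursts.append(i)
    return bursts

# Hypothetical turbulence-index trace as the gas flow rate increases
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 3.5, 1.0, 3.8]
print(detect_bursts(trace))  # [5, 7]
```

In a real automatic control system the threshold coefficient would itself be one of the parameters identified during auto-calibration.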
Equipping gas absorption apparatuses with automated control systems for the hydrodynamic mode of their operation is by far the most effective means of improving the quality and efficiency of their operation. At the same time, the most time-consuming task in commissioning such devices is configuring the parameters of the automated control system. The purpose of the study considered in this paper is to enhance the quality of operation and increase the energy efficiency of systems for gas emission sorption purification by maintaining the most intensive hydrodynamic modes of their operation. The main goal is to create an automated control system and an algorithm for the program that identifies the mathematical control model. The automated control system and algorithm regarded in this article make it possible to identify the mathematical control model (also called auto-calibration) by testing the apparatus in an automated mode. The paper describes the mechanism for recognizing hydrodynamic modes and searching for an emulsification mode in order to identify a mathematical model for the automatic control of a packed absorption apparatus. A diagram of the system for identifying and controlling the operating modes of a packed absorption apparatus is suggested. An algorithm for the identification program for the mathematical control model (auto-calibration) of a mass-exchange absorption system is presented. The proposed automated control system and auto-calibration algorithm enable a reduction of the commissioning time by up to 8 times and help to improve the quality and energy efficiency of the gas absorption purification process.
Keywords: automated control system, identification of process parameters, mass exchange, gas absorption, sorption mass exchange apparatus, hydrodynamics, turbulent mode, emulsification
The rate of aging is a complex indicator of human health which depends on many factors, including external and internal effects on the body (disease and its correction processes), which are reflected in the biomedical indicators of the body (functional, biochemical, hematological and others). To determine the rate of aging, the concept of bio-age is widely used: a complex parameter based on ascertaining the degree of human body aging (wear, damage) from its biomedical parameters. This article presents the development of a client-server web application for determining the bio-age of a user by evaluating their functional indicators: systolic blood pressure, diastolic blood pressure, breath-holding time on inhalation, breath-holding time on exhalation, vital lung capacity, hearing acuity, the state of eye lens accommodation, static balancing time, body weight, and height. The web application allows doctors and administrators to determine the patient's bio-age, drawing on the user's functional data entered in the application and taking into account the influence of geroprophylactic therapy. The web application displays data in the form of a list and a graph and enables one to send reports to the patient's email and to upload them. The server part of the application is written in the C# programming language with the ASP.NET framework. The TypeScript programming language and the React framework with the Antd user interface component library were employed to design the client part of the application. PostgreSQL is utilized as the database. As a module for predicting biological age, a previously developed mathematical model, trained on a data sample of 650 records and having an accuracy of 5.87 years, is applied. The ability to predict the patient's bio-age with consideration of the duration and type of geroprophylactic exposure makes the developed application a suitable tool to identify the leading mechanism of a patient's aging.
Keywords: bio-age, biological age, aging mechanisms, web-application for determining bio-age, machine learning in medicine
The aim of this article is the research and development of algorithms and software for the automation and support of the technical creativity process through the automated generation of musical compositions of different genres based on the emotional state of a person. It relies on a method of generating musical material with the aid of artificial neural networks. To generate music, a recurrent neural network with long short-term memory is chosen, because this type of neural network helps to take into account the hierarchy and codependency of musical data. The paper contains a detailed description of the training data collection process, the process of neural network training, its use for generating musical compositions, as well as an illustration of the network architecture. In addition, it outlines a generalized method for obtaining the emotional state of a person by analyzing an image utilizing the principles of the Luscher test. For the synthesis of sounds from the prefabricated musical material, the sampling method is applied; this method makes it possible to emulate the realistic sound of musical instruments and is also relatively easy to implement. Furthermore, the article includes a description of the design and development of software intended to validate the algorithms and methods under review, namely a website for generating musical compositions by analyzing an image.
Keywords: automated musical generation, Spotify API, sampling, recurrent neural network, correlation schemes between color and pitches
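The image-to-emotion front end of such a pipeline can be sketched as a color analysis: average the image's pixels, snap the result to a reference palette, and map the color to an assumed emotional state and generation parameters. The palette, mood and tempo/scale assignments below are hypothetical stand-ins inspired by the Luscher-test idea, not the paper's actual correlation scheme:

```python
# Hypothetical mappings: reference colors -> assumed mood -> music parameters
PALETTE = {"blue": (0, 0, 255), "red": (255, 0, 0),
           "yellow": (255, 255, 0), "green": (0, 128, 0)}
MOOD = {"blue": "calm", "red": "energetic",
        "yellow": "joyful", "green": "balanced"}
MUSIC = {"calm": (70, "minor"), "energetic": (140, "major"),
         "joyful": (120, "major"), "balanced": (100, "major")}

def analyze(pixels):
    """Return (mood, tempo_bpm, scale) for a list of (r, g, b) pixels."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    # Nearest reference color by squared Euclidean distance in RGB space
    name = min(PALETTE,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(avg, PALETTE[c])))
    mood = MOOD[name]
    tempo, scale = MUSIC[mood]
    return mood, tempo, scale

# A predominantly blue hypothetical image
print(analyze([(10, 20, 230), (5, 10, 250), (0, 0, 200)]))
```

The resulting tempo and scale would then condition the LSTM-based generator and the sampler that renders the final audio.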
Today, some of the most common systems are those based on the results of measurement experiments. The processing of experimental data is widely used in information-measuring systems, technical control and diagnostic systems, as well as in automated control systems. Spectral methods are a powerful and widely used tool for data processing and analysis. Spectral characteristics are employed extensively in engineering due to their high informative value and reversibility, which makes it possible to perform signal compression and restoration with high calculation accuracy. Questions of spectral signal analysis and descriptions of the main methods for spectrum extraction are examined. An express method for determining the spectral composition of a signal through extreme filtering is considered. The results of processing experimentally registered signals from a paper machine scanner are presented. The described method for quick spectrum extraction through extreme filtering provides the means to analyze the spectral composition of a signal with available software tools and to obtain visual representations of a wide range of characteristics that help to compile a complete description of the signal under study. The results show convergence while minimizing computational effort and simplifying the algorithm. These factors enable the application of this method for quick analysis in technical systems.
Keywords: spectral analysis, digital signal processing, discrete spectrum, Prony filtering, extremal filtering, Fourier transform
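A conventional baseline against which such an express method is compared is the plain discrete Fourier transform. The sketch below (a direct O(n²) DFT on a hypothetical test signal, not the extreme-filtering algorithm itself) extracts the dominant frequency of a sampled signal:

```python
import cmath
import math

def dominant_frequency(x, fs):
    """Magnitude spectrum via a direct DFT; returns the dominant
    frequency in Hz (DC bin excluded)."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    k_max = max(range(1, len(mags)), key=lambda k: mags[k])  # skip DC
    return k_max * fs / n

# Hypothetical test signal: a 5 Hz sine sampled at 64 Hz for one second
fs = 64
sig = [math.sin(2 * math.pi * 5 * t / fs) for t in range(64)]
print(dominant_frequency(sig, fs))  # 5.0
```

The appeal of extreme filtering is precisely that it approximates such spectral estimates with far less computation, which matters for quick analysis in embedded technical systems.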
The paper discusses an approach to estimating the synchronization parameters of distributed computing systems based on the application of queueing theory algorithms. The proposed approach is built upon statistical techniques, namely the maximum likelihood method, as well as a number of numerical algorithms for finding the optimal parameters of synchronization systems. The application of queueing theory methods and the Ricart-Agrawala model helps to efficiently adapt a distributed system in terms of an optimal solution to the synchronization problem. The employment of statistical approaches relying on the calculation of the likelihood function allows one to obtain statistical estimates of the input and output flow intensities of resource synchronization requests, which enables the optimization of a synchronization system with a heterogeneous hardware configuration and makes it possible to determine the maximum allowable flow of requests for this system. A computational experiment was conducted utilizing Spark as the basic distributed computing system. In the experiment, the algorithm analyzed in the article is used instead of the standard synchronization algorithm included in the Spark assembly. Relations between the synchronization time and the volume of data transmitted between the units of the analyzed system are obtained, which provides a means of calculating the parameters of the synchronization system as well as selecting optimal values for the given system. The practical results presented in the study prove the correctness of the theoretical approaches used in creating effective systems for synchronizing distributed resources for the Spark platform in question.
Keywords: distributed computing system, synchronization, queueing system, conditional likelihood function, Ricart-Agrawala model, maximum posterior method, intensity of demand flows, accident punishment algorithm
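The maximum-likelihood estimate of a Poisson flow intensity, which underlies such statistical parameter estimation, has a closed form: for n events observed over a horizon T, the likelihood is maximized at λ = n/T. A sketch with hypothetical measurements (the request counts and rates below are illustrative, not from the experiment):

```python
def mle_intensity(n_events, horizon):
    """MLE of a Poisson flow intensity lambda observed over [0, T]:
    L(lam) = (lam*T)^n * exp(-lam*T) / n!  is maximized at lam = n / T."""
    return n_events / horizon

def utilization(lam_in, mu_service, m_nodes):
    """Offered load per node, lam / (m * mu); it must stay below 1 for
    the synchronization queue to remain stable."""
    return lam_in / (m_nodes * mu_service)

# Hypothetical measurements from a cluster's lock-request log
lam = mle_intensity(n_events=1200, horizon=60.0)   # 20 requests per second
rho = utilization(lam, mu_service=6.0, m_nodes=4)
print(lam, round(rho, 3))
```

Plugging such estimates into the queueing model is what allows the maximum allowable request flow for a heterogeneous configuration to be computed rather than guessed.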
The article describes the logic of an intelligent clinical decision support system (CDSS) based on a set of machine learning models that predict the outcome of an assisted reproductive technologies (ART) protocol at various stages of its implementation. To create all the prognostic models, data from the register of ART protocols were used, which makes it possible to trace the influence of the woman's history and the course of the protocol on the health of the child from birth to three years of age. The outcome of the ART protocol is expressed in the likelihood of pregnancy; in the most common complications of its course, such as isthmic-cervical insufficiency, arterial hypertension, placenta previa, gestational diabetes mellitus, disturbances in the amount of amniotic fluid and premature rupture of the membranes; in the term and method of delivery; as well as in the state of health of the born child for three years. The impact of predicted pregnancy complications on the outcome of childbirth, as well as the impact of predicted pregnancy complications and of the date and method of delivery on the health of the born child, described by the health group and the predicted group of ICD-10 diagnoses, are taken into consideration. The CDSS is provided for in vitro fertilization protocols, including those using intracytoplasmic sperm injection into the oocyte (IVF/ICSI) and cryotransfer. The CDSS contains 77 predictive models, of which 72 are binary classifiers and 5 are regression models. The Random Forest algorithm was employed to create all the machine learning models. The ROC-AUC value of the system's binary classifiers is 0.936 (95% CI [0.914; 0.958]), the accuracy of the binary classifiers is 0.897 (95% CI [0.880; 0.915]), and the F-test for the regression models does not refute the model adequacy hypothesis.
The application of such a system will make it possible to obtain an objective assessment drawing on a large amount of data, which is of particular interest for specialists in the field of ART, and to visually demonstrate to the clients of ART centers the main stages of the upcoming process.
Keywords: machine learning, clinical decision support system, assisted reproductive technologies, predictive models, software application, child health prediction
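Confidence intervals of the kind reported for the classifiers can be obtained from the normal approximation for a proportion. The test-set size below is hypothetical (the abstract does not state it), so the sketch only illustrates the mechanics:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion,
    e.g. the accuracy of a binary classifier evaluated on n cases."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

# Hypothetical evaluation size n; the reported point accuracy is 0.897
lo, hi = wald_ci(0.897, 1100)
print(round(lo, 3), round(hi, 3))  # 0.879 0.915
```

For AUC values, intervals are typically obtained instead by bootstrap resampling or the DeLong method, since AUC is not a simple proportion.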
The relevance of the study is due to modern requirements for the operational reliability of software systems for critical applications. The authors develop an approach based on modern information technology for the multiversion formation of highly reliable software systems. The paper analyzes test tasks of fault-tolerant software system multiversion formation with the aid of ant colony algorithms, including standard and modified algorithms. In this article, a software system is defined by a predefined set of software modules connected in a particular way and forming a transition graph with transition probabilities. Moreover, the execution of each module is multiversion; in other words, the module comprises several versions, each characterized by its reliability and cost of execution. As a result, the set of versions selected for execution in a module determines the module's reliability and cost, and, owing to the presence of the program graph, we are able to calculate the reliability and cost of the entire software system. The conditions of the problem feature restrictions on the reliability and cost of the final solution. A predefined scheme of the software system was used in the analysis, taking into account the long-term mode of program function implementation and the capacity to change the program structure in the course of its execution. It is shown that the modified algorithm provides an advantage not only in the quality of the objective function value, but also in the speed of improving the solution, which is especially important for practical purposes when implementing software systems in real time.
Keywords: software system, fault tolerance, ant colony algorithm, multiversion method, test task
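The reliability-and-cost bookkeeping behind such a formulation can be sketched as follows: a module fails only if all of its selected versions fail (assuming independent failures), and a simplified serial composition over the program's modules yields the system figures. The version reliabilities and costs are hypothetical, and the serial product is a simplification of the transition-probability graph used in the paper:

```python
def module_reliability(versions):
    """A module runs several redundant versions; it fails only if every
    selected version fails (independent failures assumed)."""
    fail = 1.0
    for r, _cost in versions:
        fail *= 1.0 - r
    return 1.0 - fail

def system_cost(modules):
    """Total cost: the sum over all selected versions of all modules."""
    return sum(c for versions in modules for _r, c in versions)

def system_reliability(modules):
    """Serial composition over the modules — a simplified stand-in for
    the weighted walk over the program's transition graph."""
    rel = 1.0
    for versions in modules:
        rel *= module_reliability(versions)
    return rel

# Hypothetical (reliability, cost) of the versions selected per module
modules = [[(0.9, 3), (0.8, 2)],   # module 1 runs two versions
           [(0.95, 5)],            # module 2 runs one version
           [(0.85, 2), (0.7, 1)]]
print(round(system_reliability(modules), 4), system_cost(modules))
```

The ant colony algorithm searches exactly this space of version selections, trading the reliability gain of redundant versions against the cost restrictions of the final solution.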
The intensification of research related to solving the problems of optimal fire service management is aimed at the systematic consideration of territorial differences in fire risks and the targeted use of the service's diverse resources, ensuring an economical mode of their expenditure, based on the multidimensional classification of the fire situation and the principles and models of active systems theory. The article continues the development of these promising ideas with reference to the personnel, logistical, financial and other resources of the fire service. Insufficient coverage of these areas in systemic research hinders the development of effective methodological and technological solutions to purposefully reduce fire risks and damage from fires. The aim of the study is to identify promising areas in the field of fire risk modeling, with due account for the impact of fire service resource provision on them, and to construct a system of practical models for the optimal management of its resources in order to reduce the tension of the fire situation. The main methods employed are the optimal approximation of empirical data, methods of active systems theory, and the multidimensional classification of objects, such as territorial clusters that characterize different fire conditions in the provinces of Vietnam. Drawing on theoretical and experimental studies, the dependences of resource provision of various types (fire trucks, logistics, etc.) on the number of firefighters in various districts of Vietnam have been determined. Based on the findings, a methodology for the territorial-dynamic allocation of fire service resources was developed. A conclusion is drawn about its scientific and practical novelty in comparison with existing approaches.
The results of the research show that by contrasting the existing administrative structure of Vietnam and the results of classifying its territories into homogeneous groups (clusters) according to the fire situation, it is possible to apply the calculations given in the article to a more efficient and focused management of fire service resources. Fire trucks in the administrative districts of Vietnam can be distributed optimally by projected values of firefighters in the country and projected number of fires in the districts. It is advisable to connect other resources of the fire service with the forecast calculations outlined above, utilizing either traditional normative estimates or new algorithms based on the development of model studies.
Keywords: Vietnam, fire service, resources, modeling, optimal territorial allocation
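The article's own allocation methodology is not reproduced here. As a minimal baseline for the idea of distributing fire trucks across districts by the projected number of fires, the following sketch (with invented district data) uses simple proportional allocation with largest-remainder rounding:

```python
def allocate(total_trucks, forecast_fires):
    """Distribute a fixed fleet across districts in proportion to the
    forecast number of fires, rounding by the largest-remainder rule.
    `forecast_fires` maps district name -> projected fire count."""
    total = sum(forecast_fires.values())
    quotas = {d: total_trucks * f / total for d, f in forecast_fires.items()}
    alloc = {d: int(q) for d, q in quotas.items()}
    # Hand out the trucks lost to rounding, largest fractional part first.
    leftover = total_trucks - sum(alloc.values())
    for d in sorted(quotas, key=lambda d: quotas[d] - int(quotas[d]),
                    reverse=True)[:leftover]:
        alloc[d] += 1
    return alloc
```

A territorial-dynamic methodology as described in the article would replace the raw fire forecast with cluster-specific estimates, but the rounding mechanics stay the same.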
In modern conditions, a well-established secure information and telecommunications infrastructure as a symbiosis of informatization, automation, communications and information security tools can play a significant role in the development of a digital society. A relevant task in the current situation is to formulate the requirements for a secure information and telecommunications infrastructure of a special-purpose communication network. The paper identifies the features of ensuring information security in the context of managing the protected information and telecommunications infrastructure of a special-purpose communication network. Based on the analysis of research papers devoted to this area of study, an architecture for managing the information security of a special-purpose communication network is suggested. The system for ensuring information security, protection and control is complex: it implements centralized management of information security tools as well as monitoring and analysis of possible threats to the information security of a special-purpose communication network. An algorithm for managing the information security means of a secure information and telecommunications infrastructure of a special-purpose communication network is proposed. With the aid of functionally oriented information processes, the possibility of managing a secure information and telecommunications infrastructure of a special-purpose communication network is considered. The article demonstrates the results of applying the main provisions of the optimal control of functionally oriented information processes to securing the territorial segments of a special-purpose communication network; they are presented as a dependence of the effectiveness of a special-purpose communication network on the implementation of information security threat detection. The relevance of the study is due to providing security at all stages of the life cycle of a special-purpose communication network.
The materials of the article are of practical value for specialists in the field of information security of special-purpose communication networks.
Keywords: information security, secure information and telecommunications infrastructure, management of information security tools, special-purpose communication network security, adaptive subsystem of security and information protection, counteraction to information security threats
The paper deals with the problem of increasing the efficiency of decision-making in the management of resource support for the development of an organizational system. It is shown that the focus on management based on the results of monitoring the effectiveness of the organizational system necessitates the use of big data. In this case, supporting expert management decisions with the aid of the optimization approach alone does not exclude the choice of an unreasonable option for the distribution of resource support aimed at developing the system by improving the values of performance indicators that affect the achievement of established goals. To increase the validity of management decisions, it is proposed to utilize the mechanisms of an expert's visual-figurative intuition, relying on the visualization of big data. At the same time, it is advisable to supplement the process of optimization modeling with visual-expert modeling. A structuring of the integrated process stages is given, which provides resource support for the achievement of goals in terms of the most significant performance indicators for the development of the organizational system. The suggested stages of the integrated process of visual-expert and optimization modeling are detailed in the form of a sequence of algorithmic operations that enable the expert to reasonably set the dimension and parameters of reductional optimization models as well as to choose a resource allocation option assisted by balance optimization.
Keywords: management, organizational systems, resource provision, optimization, data visualization, expert evaluation
The study regards the combined satellite network model, which is a network using different altitude orbits. The relevance of employing different altitudes is due to the necessity to provide different types of service, taking into account the expansion of the service area to higher latitudes. To implement this approach, satellites are applied both in geostationary and highly elliptical orbits. Owing to the considerable complexity of satellite network estimation and analysis at the stage of design and construction, various modeling methods are involved. At the same time, analytical modeling of these networks is associated with significant difficulties. In this article, simulation in GPSS World is utilized and the main objective is to develop the user request serving algorithm for combined satellite network simulation model and to evaluate probabilistic and temporal characteristics of the network, exploiting the designed algorithm. Software implementation of the algorithm has demonstrated GPSS World capabilities and made it possible to obtain results for the evaluation characteristics such as average delay time and loss probability. The findings can be used both in the analysis of existing technologies for satellite networks under review and in the design and development of new ones.
Keywords: combined satellite network, simulation modeling, service algorithm, loss probability, average delay time
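The GPSS World model itself is not reproduced in the abstract. A minimal discrete-event sketch in Python of a single-channel service with a finite buffer (illustrative parameters, not the combined satellite network's actual traffic figures) shows how the two evaluated characteristics, loss probability and average delay, can be estimated by simulation:

```python
import random

def simulate(lam, mu, capacity, n=50000, seed=7):
    """Single-server FIFO queue with finite capacity: requests arriving
    when the system is full are lost. Inter-arrival and service times are
    exponential with rates `lam` and `mu`. Returns (loss probability,
    average delay of served requests)."""
    rng = random.Random(seed)
    t = 0.0
    in_system = []        # departure times of requests currently in the system
    lost, served, total_delay = 0, 0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)
        in_system = [d for d in in_system if d > t]   # purge departed requests
        if len(in_system) >= capacity:
            lost += 1
            continue
        start = in_system[-1] if in_system else t     # server frees at last departure
        departure = max(start, t) + rng.expovariate(mu)
        in_system.append(departure)
        total_delay += departure - t
        served += 1
    return lost / n, total_delay / served
```

For lam=1.0, mu=2.0 and capacity 5 the estimates land near the analytical M/M/1/5 values, which is a convenient sanity check before moving to the combined-orbit serving algorithm.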
Modern applications are focused on cloud services to achieve better performance, geographic replication and lower cost of ownership. Following the modern concepts of cloud services, this study draws on rich telemetry data and examines the workload performed using the Azure SQL Database service. The main purpose of this research is the potential improvement both of the service and of customer assistance employing a controlled platform. The automatic database troubleshooting system is designed to detect problems in a relational cloud database and to analyze the sources of these problems appropriately in order to reduce the time and cost of their manual search and resolution. This system was implemented on top of the Microsoft Azure platform. It is based on statistical models of general and categorical data, which were developed and constructed after a thorough examination of the collected telemetry data. The final root cause of each current issue in the Azure service is obtained after analyzing the results of the models by means of an expert system. The evaluation results show that the continuous enhancement of the infrastructure has reduced the processing time approximately twofold while the number of processed intervals has doubled, which can be considered an overall improvement of approximately four times.
Keywords: controlled platform, cloud databases, telemetry data, expert system, search automation
The continuous improvement of a potential enemy's means of attack from air or space leads to a sharp reduction in the time available for their destruction and makes relevant the increase in the efficiency of automated troop control, which, in turn, imposes higher requirements on the speed of control body combat crews. Known approaches to lessening the working time of control body combat crews are either ineffective or incur significant financial costs. The aim of the research is to diminish the working time of the control body combat crew by reducing the time for solving polyadic automated control tasks in military sets of automation tools. To achieve the purpose of the study, a modification of the method for generating operational information by the control body combat crew is proposed by presenting the data necessary for solving automated control tasks as a production-frame model. We also employed a set of subject, language and graphic models as an information resource, which allows us to take into account the predicate structure of the control body combat crew request as well as the conceptual and graphical representation of display objects when solving polyadic automated control tasks in a dialogue mode in a natural-like language. The experimental studies on the software and hardware complex, conducted earlier, showed that the average working time of the control body combat crew fell by 19.3% for all categories of participants in the experiment. It is suggested to implement this solution in the form of a software module in the high-level programming language C/C++ using the Qt library, which will enable it to be integrated into special software for a set of automation tools.
Keywords: set of automation tools, automated control tasks, dialog mode, operational information, air situation, natural-like language
The relevance of the article is due to the information and communication support of navigation by monitoring river vessels using video surveillance cameras. The main goal is to recognize ships in images, for which the application of neural networks has potential. The aim of the paper is to study the performance indicators of vessel recognition by means of available pre-trained networks after their additional training for the assigned tasks and to select the most efficient network. The research considers various pre-trained neural networks. The input data for the networks are ship images. The training sample was collected manually and includes two independent datasets with images of river vessels and many other objects apart from ships. The networks were built and further trained with the aid of the Keras and TensorFlow machine learning libraries. The employment of pre-trained convolutional artificial neural networks for pattern recognition problems and the advantages of utilizing such networks over synthesizing a neural network from scratch are presented. The architecture of the efficient pre-trained VGG16 neural network is described in detail. An experiment was conducted in additional training of available pre-trained convolutional neural networks for the assigned task. The efficiency of various pre-trained neural networks was evaluated in terms of the percentage of correct pattern recognition cases on the test set. The most efficient neural network for ship pattern recognition tasks has been selected. The NASNetMobile and NASNetLarge networks have shown the maximum accuracy. However, the minimum image size that these networks can work with is larger than for other available networks, and the great number of parameters in the convolutional layers of these networks causes a significant increase in retraining and operation time compared with other available networks.
Concurrently, the VGG16 neural network, with a small number of parameters and a short time for additional training, has proven to be highly efficient, which is why it is recommended for the purposes of ship pattern recognition.
Keywords: artificial neural networks, pre-trained networks, convolutional neural networks, Keras, TensorFlow, Google Colaboratory, VGG16, NASNetMobile, NASNetLarge
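The trade-off the abstract describes, maximum accuracy for the NASNet networks versus a much cheaper retraining of VGG16, amounts to a selection rule. The sketch below encodes such a rule with invented placeholder numbers (the accuracy, parameter-count and time values are illustrative only, not the measured results of the experiment):

```python
# Hypothetical summaries per network: (name, test accuracy,
# conv-layer parameters, relative retraining time, minimum input size).
CANDIDATES = [
    ("VGG16",        0.96, 14.7e6,  1.0,  32),
    ("NASNetMobile", 0.97,  4.3e6,  2.5, 224),
    ("NASNetLarge",  0.98, 85.0e6,  6.0, 331),
]

def pick_network(candidates, acc_tolerance=0.03, max_input=128):
    """Prefer the cheapest-to-retrain network whose accuracy is within
    `acc_tolerance` of the best one and whose minimum input size fits
    the available images."""
    best_acc = max(c[1] for c in candidates)
    feasible = [c for c in candidates
                if best_acc - c[1] <= acc_tolerance and c[4] <= max_input]
    return min(feasible, key=lambda c: c[3])[0] if feasible else None
```

With these placeholder figures the rule reproduces the paper's conclusion: the NASNet networks are filtered out by their input-size and retraining-cost penalties, leaving VGG16.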
In cloud environments, hardware configuration, data usage, and workload distribution are constantly changing. These changes make it difficult for the query optimizer of a cloud database management system to choose the optimal query execution plan (QEP). In the scientific literature, it was proposed to re-optimize a query during its execution in order to optimize it with a more accurate cost estimate. However, some of these re-optimizations cannot provide performance gains in terms of query response time or monetary costs, which are the two optimization goals for cloud databases, and may have a negative impact on performance due to overhead. This raises the question of how to determine when re-optimization is efficient. The aim of the study is to develop a method of query re-optimization that uses machine learning. The key idea of the algorithm is to employ past query executions to learn how to predict the effectiveness of query re-optimization, which helps the query optimizer avoid unnecessary re-optimization for future queries. The method runs the query step-by-step, utilizing a machine learning model to predict whether re-optimization of the query will be useful after a stage is completed, and calls the query optimizer to automatically perform re-optimization. An experimental evaluation of the effectiveness is to be carried out.
Keywords: query re-optimization, cloud databases, machine learning, multi-stage query, automation of execution
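The predictor trained on past executions can be sketched as a plain binary classifier. The features and labels below are invented placeholders (a cardinality-estimation error and the share of plan cost remaining, with "re-optimization paid off" as the synthetic label), not the paper's actual feature set:

```python
import math
import random

def make_history(n=200, seed=3):
    """Synthetic history of query executions. Hypothetical rule: the more
    the optimizer misestimated cardinality (err) and the more of the plan
    is still ahead (rem), the likelier re-optimization is to pay off."""
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n):
        err, rem = rng.random(), rng.random()
        X.append([err, rem])
        y.append(1 if err + rem > 1.0 else 0)
    return X, y

def train_logreg(X, y, lr=0.5, epochs=1000):
    """Logistic regression fitted by per-sample gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            g = 1.0 / (1.0 + math.exp(-z)) - yi   # prediction minus label
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def should_reoptimize(w, b, features):
    """Called after each completed stage: re-optimize only when predicted useful."""
    return sum(wj * xj for wj, xj in zip(w, features)) + b > 0.0
```

In the method described by the abstract, this prediction gates the call to the query optimizer after each stage, so the re-optimization overhead is only paid when a gain is expected.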
When solving reconnaissance and tactical tasks, methods of group application of mobile robots show high efficiency. Robotic groups based on a network-centric control system are characterized by the superiority of their intelligence systems over enemy intelligence systems, including the reliability, timeliness and accuracy of the extracted information. At the same time, planning the group trajectory so as to maintain communications for transmitting control signals becomes a priority in the implementation of such secure systems. The paper proposes a scientific approach to the task of mobile robot motion planning under the conditions of providing mechanisms for secure interaction between the agents of a robotic system, using steganographic methods to hide control signals. Previously, the authors developed and tested methods, algorithms and software solutions for concealing control signals and the facts of their transmission within the process of intellectual interaction of a group of robotic systems addressing a common problem, as well as for verifying agents by a dynamic tuple of identification attributes. In the ongoing study, we put forward software for the trajectory planning of a heterogeneous multi-agent robotic system under the condition of maintaining communications for the transfer of control signals.
Keywords: information security, control systems in robotic devices, communications security, multi-agent robotic system, network-centric control
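The authors' own steganographic methods are not specified in the abstract. As a minimal illustration of hiding a control signal inside innocuous carrier data, the classic least-significant-bit scheme can be sketched as a byte-level round trip:

```python
def embed(cover, payload):
    """Hide `payload` bytes in the least significant bits of `cover` bytes
    (LSB steganography). Requires len(cover) >= 8 * len(payload)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite only the lowest bit
    return bytes(stego)

def extract(stego, n_bytes):
    """Recover `n_bytes` of payload from the carrier's least significant bits."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[8 * j + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Each carrier byte changes by at most one, which is what makes the transmission of the control signal, and the fact of transmission itself, hard to detect in sensor or image traffic between agents.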
The paper considers a wide range of issues related to the solution of an initial-boundary value problem for a parabolic partial differential equation with a multidimensional space variable belonging to the Euclidean space and changing on a network-like domain. The mathematical model describing the process of transferring a continuous medium over a network carrier is determined by the formalism of the initial-boundary value problem. An idea that has become classical is further developed for the case when a network-like region is a directed bounded graph, i.e., a collection of a finite number of segments connected to each other by means of end points. The study employs classical approximations of second-order evolutionary differential equations as well as non-classical approximations of differential relations illustrated by generalized Kirchhoff conditions at the branching points of a network-like region (nodal points of the region). When using difference approximations of the initial-boundary value problem operator, the approximation error and stability conditions for the difference scheme are established. The characteristic properties of the locally one-dimensional method and the sweep method utilized to solve the stated problem are studied. An algorithm for the numerical solution of the stated problem is proposed, a computer program is designed, and a computational experiment is carried out on a series of applied problems. The findings are of interest in the analysis of applied problems of multiphase continuum media transfer along network-like 3D carriers.
Keywords: initial-boundary value transfer problem, network (directed graph), continuous medium transfer, difference scheme, locally one-dimensional method
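The sweep method mentioned above is, in essence, the Thomas algorithm for the tridiagonal systems produced on each edge of the graph by the locally one-dimensional difference scheme. A minimal sketch (the nodal Kirchhoff coupling is omitted here):

```python
def sweep(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    by forward elimination and back substitution (the sweep / Thomas algorithm).
    a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For an implicit scheme on a single segment, `a`, `b`, `c` come from the difference operator (e.g. the familiar -1, 2, -1 stencil) and `d` from the previous time layer; the sweep runs in O(n), which is what makes the locally one-dimensional method cheap per time step.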
The article deals with the formation of optimization models for component optimization of a digital management environment in organizational systems based on probabilistic assessments of component functional requirements fulfilment and estimation of the components implementation influence parameters on the achievement of established requirements. Methods and algorithms for calculating the parameters of digital environment components implementation influence on the achievement of established requirements are considered. As a principal method, it is proposed to use multi-alternative optimization and choosing the option of component integration into a single digital environment that provides the specified requirements for the functioning of a digitalized organizational system. Special attention is given to evaluating the functionality of digital environment components to determine the suitability of a component or a need to replace it in case of non-compliance with the specified requirements at the stage of its development. Boundary conditions for the transition from the stage of functioning to the stage of digital environment development are regarded in terms of fulfilling the established requirements for organizational system parameters and developing control actions: further operation of the component or its replacement; introduction of a new component to meet new system requirements; adjustment of component functionality under the conditions of the system unchanged structure.
Keywords: digital environment, probabilistic estimates, life cycle, resource allocation, component optimization
The importance of air communication between various points of the globe in the modern world is difficult to overestimate. Yet the employment of this means of transport is associated with high risks for passengers, crew, cargo and the aircraft itself due to the possibility of serious accidents at all phases of the flight, but especially during takeoff and landing. This article presents a physical and mathematical model of an aircraft takeoff-run. Its analysis helps to avoid accidents in the event of emergency situations. This model enables the creation of an electronic device for monitoring takeoff dynamic characteristics and warning the aircraft crew about arising inconsistencies. The article presents differential equations describing the dynamic characteristics of the aircraft during the takeoff-run. Additionally, solutions of these equations are obtained, which explicitly determine the functional dependencies of the distance necessary for a safe takeoff on the time elapsed since the start of the takeoff-run. The influence of external factors, such as ambient air temperature, wind speed during takeoff and runway slope, on the calculated characteristics is considered. As an example, the article also offers the results of modeling an emergency takeoff with the aid of modern software (the FlightGear 2020.3 flight simulator and the GeoGebra mathematical program). From the authors' point of view, the materials of the article may be of practical value for developers of non-embedded on-board control devices, as well as for users of these devices.
Keywords: takeoff, takeoff-run, runway, gravity, friction force, lifting force, normal reaction force of the support, thrust force, drag force, satellite receiver
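The abstract does not reproduce the differential equations themselves. A generic takeoff-run equation of motion, balancing thrust against aerodynamic drag and the rolling friction acting on the wheels (with invented aircraft parameters, not the article's data), can be integrated numerically as follows:

```python
def takeoff_run(m=70000.0, thrust=220000.0, mu=0.02, rho=1.225,
                S=120.0, cl=0.8, cd=0.08, v_lof=75.0, g=9.81, dt=0.05):
    """Integrate m*dv/dt = T - D - mu*(m*g - L) and ds/dt = v with Euler's
    method until lift-off speed `v_lof` is reached. Lift L and drag D grow
    with dynamic pressure, so friction falls as the wheels unload.
    Returns (takeoff-run distance in m, elapsed time in s).
    All parameter values are illustrative placeholders."""
    v = s = t = 0.0
    while v < v_lof:
        qS = 0.5 * rho * v * v * S          # dynamic pressure times wing area
        lift, drag = qS * cl, qS * cd
        normal = max(m * g - lift, 0.0)     # normal reaction of the runway
        a = (thrust - drag - mu * normal) / m
        v += a * dt
        s += v * dt
        t += dt
    return s, t
```

Ambient temperature, headwind and runway slope, the external factors the article considers, would enter through the air density `rho`, the airspeed used in `qS`, and an extra `m*g*sin(slope)` term respectively.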