DOI: 10.26102/2310-6018/2022.39.4.012
Semi-Markov processes are widely used to model queuing systems. The relevance of the study is due to the expanding capabilities for the analysis and operation of queuing systems for which semi-Markov models are constructed. The application of hidden Markov model theory to them also underscores the importance of this research. In this regard, this article discusses the application of the apparatus of hidden Markov model theory to a lossy queuing system described by a semi-Markov process with a general phase state space. This makes it possible not only to move beyond the exponential law of distribution of service times and the arrival flow of requests when describing the system, but also to solve the problems of forecasting and evaluating states and signals and of correcting the model while the system is in operation. For the transition to a discrete set of states of the semi-Markov model, the algorithm of stationary phase enlargement is employed. As an illustrative example, a merged semi-Markov model of the GI/G/2/0 queuing system with losses is constructed. Based on it, a hidden Markov model is developed for which the problems of analyzing dynamics and predicting states are solved. The parameters of the hidden Markov model are refined by means of the Baum-Welch algorithm; the most probable sequence of system states is determined from the received signal vector.
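For illustration only (a minimal sketch, not the authors' implementation): the snippet below runs Viterbi decoding of the most probable hidden-state sequence for a discrete hidden Markov model, with hypothetical transition, emission and initial-state matrices standing in for parameters that would be refined by the Baum-Welch algorithm.

```python
import numpy as np

# Hypothetical HMM parameters (e.g., previously refined by Baum-Welch):
# 3 hidden states of the merged model, 2 observable signal values.
A = np.array([[0.7, 0.2, 0.1],      # state transition probabilities
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.9, 0.1],           # emission (signal) probabilities
              [0.4, 0.6],
              [0.1, 0.9]])
pi = np.array([0.6, 0.3, 0.1])      # initial state distribution

def viterbi(obs, A, B, pi):
    """Most probable hidden-state sequence for an observed signal vector."""
    n_states, T = A.shape[0], len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n_states))           # best log-probability so far
    psi = np.zeros((T, n_states), dtype=int)  # back-pointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA          # entry (i, j): from state i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1, 1, 0], A, B, pi))  # decoded state indices
```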
Keywords: hidden Markov model, queuing system, GI/G/2/0 with losses, merged semi-Markov model, state forecasting
DOI: 10.26102/2310-6018/2022.39.4.015
The article explores the problem of defining and further interpreting a key indicator in the management of information services: the availability of IT services. The relevance of the study is due to the ongoing digital transformation of all sectors of the economy and human life in general, which has resulted in a direct dependence of the performance of enterprises and organizations on the quality of IT services. At the same time, industry standards for managing information services are often of a framework nature because of constant technological changes. In this regard, this article aims to refine the definition of IT service availability: the paper formulates the principles for compiling availability metrics that avoid the negative ‘watermelon’ effect and also proposes a method developed on their basis for calculating and analyzing the availability of IT services, adapted for use in event-driven umbrella monitoring systems. It is concluded that the suggested method can be employed not only to assess availability, but also to search for the root causes of incidents, as well as in the development and implementation of decision support systems for managing information services. The materials of the article are of practical value both for engineers servicing the modern complex IT infrastructure of large enterprises and for the management of such organizations. The described method is utilized in the reporting module of a commercial software product, the MONQ software platform for collecting, analyzing and managing processes, implemented by a number of large Russian enterprises and government organizations.
Keywords: ITSM, availability, monitoring system, IT service, resource-service model, service management system, ITIL
DOI: 10.26102/2310-6018/2023.40.1.001
The article presents the results of research on improving the management system of the Virtual Computer Lab educational data center based on a conceptual model of operational risks. The Virtual Computer Lab is an integrated assembly of virtualization and containerization hardware and software tools in the form of cloud services with an integrated knowledge management system. It is used to master multicomponent information systems and to develop students' complex knowledge, skills and abilities in the field of IT, both in a classroom context and as part of independent learning. The knowledge management system and the principles of self-organization, which are integral parts of the Virtual Computer Lab, make it possible to create a homogeneous educational environment. To ensure the reliability and durability of the Virtual Computer Lab with minimal downtime, an analysis of the impact of risks on its operation was carried out, and an approach to designing a risk management system was proposed. The approach considered in the article makes it possible to assess performance bottlenecks of the Virtual Computer Lab and its key problems in a comprehensive manner, increase service uptime, enhance performance quality, reduce the number of failures by means of preventive mechanisms, successfully identify the risks that significantly affect its output, and develop an effective system of measures to minimize them. The article discusses in detail such elements of risk management as risk identification and assessment, strategic risk management, risk monitoring and analysis, security and change management.
Keywords: education, information technology, Virtual Computer Lab, risk management, operational risks, management of sociotechnical systems based on a risk model, digital transformation, cloud data center, modern management methods, digital learning
DOI: 10.26102/2310-6018/2023.40.1.025
The paper regards the field-programmable gate array (FPGA) implementation of a beamforming algorithm for adaptive antenna arrays. The relevance of the research is due to the need to improve the noise robustness of signal reception in radio engineering systems. The gradient algorithm based on the normalized least mean square error (NLMS) criterion was chosen as the beamforming algorithm: it has the lowest computational complexity, and its variable adaptation step helps to ensure the convergence of the algorithm under a priori unknown input signal power. The paper gives a mathematical description of the adaptive signal processing procedures and formulas for calculating the optimal weight vector that provide the best approximation of the array output to the reference signal. Approximate methods that provide a practical realization of optimal signal processing based on iterative algorithms, in the form of the normalized least mean square error algorithm, are considered. Examples of antenna array radiation pattern synthesis with adaptive signal processing implemented on FPGA under different signal-interference conditions are presented. An acceptable agreement between theoretical and experimental data was obtained for all implementation cases.
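A minimal sketch of the NLMS weight update referred to above, in Python rather than on FPGA; the array size, adaptation step and signal model are assumptions, and the paper's gradient algorithm is not reproduced in detail.

```python
import numpy as np

def nlms_beamformer(X, d, mu=0.5, eps=1e-6):
    """NLMS adaptive weights for an N-element array.

    X : complex snapshots, shape (T, N) -- element signals
    d : complex reference signal, shape (T,)
    The step is normalized by the instantaneous input power, which keeps
    convergence stable when the input power is not known a priori.
    """
    T, N = X.shape
    w = np.zeros(N, dtype=complex)
    for t in range(T):
        x = X[t]
        y = np.vdot(w, x)                                  # array output w^H x
        e = d[t] - y                                       # error w.r.t. reference
        w = w + (mu / (eps + np.vdot(x, x).real)) * x * np.conj(e)
    return w

# Toy usage with an assumed 8-element array and synthetic signals.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8)) + 1j * rng.standard_normal((1000, 8))
d = X[:, 0] + 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
w = nlms_beamformer(X, d)
```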
Keywords: adaptive antenna array, radiation pattern, MSE adaptive algorithm, FPGA
DOI: 10.26102/2310-6018/2023.40.1.012
A systematic interdisciplinary comprehensive study is presented. The object is the process of preparation and decision-making related to the implementation of a project to introduce an innovative product, a vanadium redox flow battery stack. The subject is a combination of methods of system analysis with economic and mathematical methods to support the adoption of such a decision. The goal is the financial and economic substantiation of the conditions of the project to introduce new products. For these purposes, a number of objectives were accomplished. A comprehensive model based on the product life cycle model has been developed, which enables modeling of the pre-project and project stages of implementation with consideration to the product life cycle, distributed cash flow, and calculation of the break-even point regardless of the time factor. The simulation results were compared, and the model was verified and validated. An algorithm has been developed, and calculations have been carried out for each stage of the model. In terms of practical significance, the obtained integrated result is an operational, adaptive, low-cost human-machine decision support system for choosing optimal options through simulation modeling. This system can be used as a template for both educational and production purposes, including formulating the requirements for technical and economic calculations as well as the substantiation of objectives for the development of project materials. The scientific (academic) significance of the presented research lies in the development of the author's original direction, an instrumental and methodological approach to adapting the triple helix model to the Russian context at the meso- and micro-levels (in the sectoral/regional cross-section).
Keywords: flow battery stack, system approach, system analysis, algorithm, human-machine system, simulation, life cycle, distributed cash flow, inflation, break-even point
DOI: 10.26102/2310-6018/2023.40.1.015
The relevance of the study is due to the growing prominence of the sustainability issue over the past few decades. As a result of increasing consumption and production, the issue of environmental protection becomes particularly acute. Another consequence of this growth is the deepening stratification between the super-rich and the poor, which leads to social problems. Limited resources pose new challenges to economic science. To solve the problems of sustainable development, it is necessary to design a system of integrated assessments of the state of sustainability or, in other words, methods of integral assessment of the state of sustainable development. In this regard, the article is devoted to the analysis of existing methods of system analysis and assessment of the territorial sustainability of development. The evaluation criteria and their relevance to the task of assessing the state of sustainability are analyzed in detail, and the criteria for an effective integrated assessment of the sustainability of a region's development are formed on their basis. The methods of system analysis used in integral assessments are considered, along with a methodological review of them. As a result of the conducted research, methodological requirements for the design of an integrated assessment of the sustainability of development, as well as the necessary methods for conducting such an assessment, are formulated. The results of this study can provide a grounding for the design of integrated assessments of sustainability, sustainability of development and resource potential, as well as for work in the field of interdisciplinary research combining methods of system analysis, ecology, economics and sociology.
Keywords: integral criteria, complex models, development, sustainability, integral assessment, indicators of sustainable development
DOI: 10.26102/2310-6018/2022.39.4.018
The article presents a model for creating an adaptive individual learning path based on dynamic control of course complexity using fuzzy logic methods. The model makes it possible to manage the complexity of the training course individually for each student and to formalize the process of solving practical tasks and collecting feedback from students, motivating them to study productively with account for personal characteristics and preferences. Implementation of such models and the corresponding systems in the training process enables teachers to choose the most appropriate tasks for each student with due regard for individual characteristics and personality. The approach allows teachers to allocate more time for scientific, methodological and creative work, especially with the option to distribute educational materials in the form of microlearning, where a large number of students, usually studying online, are invited to perform many small practical tasks. Adaptive learning paths are also designed to promote the development of adaptive thinking and adaptive strategies of student behavior. The individual learning path is an important element of the online learning management system in the cloud environment of the "Virtual Computer Lab" educational data center created by M.A. Belov (https://belov.global) in 2007 at the Institute of System Analysis and Control of Dubna State University, the hallmark of which is its principles of self-organization.
Keywords: IT education methodology, distance learning, individual learning path, digital transformation, microlearning, fuzzy logic, virtual computer lab
DOI: 10.26102/2310-6018/2022.39.4.013
A necessary task for metrological synthesis is to develop mathematical and algorithmic software that allows creating models of measuring channels and automating the calculation of the total error, thereby accelerating the development of measuring devices. The article proposes a model of a measuring unit that takes into account the change in the conversion characteristic due to the influence of environmental parameters, the type of functional dependence of the conversion characteristic on the influencing argument, the number of influencing parameters and the pattern of their influence. Each measuring unit is described according to this model, while the measuring channel consists of sequentially connected units. The mathematical model is based on the method of nonlinear transformation and the inverse distribution function, as well as on methods for describing signals, mathematical statistics and probability theory. Following on from the analytical calculations, the authors conducted a metrological analysis and compared the parameters of the random value at the output of the measuring channel, accounting for the influence of external factors on the conversion characteristics of the measuring units, with the results obtained without regard for the influence of external factors. Examples of calculations confirm the need to minimize the total error of the measuring channel as a whole, and not separately for each measuring unit.
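As an illustration of the kind of calculation such a model supports (a sketch under assumed unit characteristics, not the authors' software), the snippet below propagates random errors through two sequentially connected measuring units whose conversion characteristics drift with an environmental parameter, and compares the total output error with the error obtained when that influence is ignored; the normal draws stand in for inverse-distribution-function sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                     # Monte Carlo trials
x_true = 5.0                    # true value of the measurand
T = rng.normal(25.0, 5.0, N)    # influencing parameter (e.g., temperature, deg C)

def unit1(x, T):
    """First unit: gain and offset drift linearly with temperature (assumed law)."""
    k = 2.00 * (1 + 2e-4 * (T - 20.0))
    b = 0.01 * (T - 20.0)
    return k * x + b + rng.normal(0.0, 0.02, len(x))

def unit2(y):
    """Second unit: fixed scale factor plus its own random error."""
    return 0.5 * y + rng.normal(0.0, 0.01, len(y))

out_with_env = unit2(unit1(np.full(N, x_true), T))
out_no_env = unit2(unit1(np.full(N, x_true), np.full(N, 20.0)))

ideal = 0.5 * 2.0 * x_true      # nominal output of the whole channel
print("std of total error, env. influence included:", np.std(out_with_env - ideal))
print("std of total error, env. influence ignored: ", np.std(out_no_env - ideal))
```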
Keywords: measurement channel, conversion characteristic, measurement result error, metrological synthesis, metrological analysis, measuring unit model
DOI: 10.26102/2310-6018/2022.39.4.003
The research is focused on a situation-oriented approach to the processing of heterogeneous data obtained from microservices, which are widespread due to the adoption of the microservice architecture underlying many information systems. Such information systems are sources of heterogeneous data provided to the user upon request via the Internet. Data in the form of documents is provided by services included in the information system. The volume of such data can be large, and its processing requires specialized technologies available in document-oriented big data storages. As part of a situation-oriented database (SODB), a microservice is implemented that provides data in JSON format through its programming interface. There is a problem of loading and processing large amounts of data in the storage where specialized statistical Map-Reduce functions are implemented. The manual method of loading data and obtaining results for an SODB is laborious because it requires routine operations for loading data, applying functions to the loaded data, creating functions inside the storage and obtaining results. This task was not considered within the scope of the project on creating situation-oriented databases, and the possibilities for developing specialized elements and methods for processing large-scale heterogeneous data in a hierarchical situational model with the required tooling were not studied. The developed document-processing models make the processing of heterogeneous data less laborious and help to create data-driven applications by means of situation-oriented databases: the introduced data processing model is embedded into a hierarchical situational model and involves the big data processing technologies of specialized document-oriented storages. The proposed tools are examined using the example of an SODB application for solving the problems of course design in the educational process, with a developed microservice saturated with heterogeneous data collected while designing a course remotely.
Keywords: situation-oriented database, built-in dynamic model, heterogeneous data sources, JSON, document storage, microservices, RESTful-services
DOI: 10.26102/2310-6018/2022.39.4.006
The article presents a methodology for extracting morphological features of technical systems in the form of device components and the connections between them. The main section of Russian patent claims is chosen as the subject of the study for data extraction. Information about device components is its most fundamental and important part. It can be used in many tasks of computer-aided patent analysis, while the search for effective approaches to extracting such information is still in progress. In the present inquiry, computer-aided development of inventions is considered as one application area for such data. The aim of the study was to explore the quality of data extraction using dependency tree analysis for the Russian language. The dependency tree is the result of markup by natural language processing tools. Several parsers were chosen for the comparison: UDPipe, Stanza, DeepPavlov and spaCy. The output data are presented in the form of semantic SAO (Subject-Action-Object) structures. The quality of data extraction has been evaluated using the precision, recall and F1 metrics. For this purpose, 20 patent claims containing 252 SAO structures were manually marked up. Under the current methodological constraints, we were able to extract at best 79 % of the SAO structures from the dataset according to the recall metric with non-strict evaluation, i.e. without accounting for the completeness of noun groups. The value of the F1-measure is lower and ranges from 48 % to 66 % depending on the evaluation type. Conclusions are drawn about the current level of syntactic analyzer performance within the field of application under review. The results can be useful for developing efficient approaches to extracting structured data from Russian patent arrays.
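A minimal sketch of SAO extraction from a dependency tree (illustrative only; the paper's extraction rules and parser comparison are richer). It assumes spaCy with a Russian pipeline such as ru_core_news_sm and Universal Dependencies labels; the completeness of noun groups is ignored, matching the non-strict evaluation mentioned above.

```python
import spacy

# Assumes the Russian pipeline has been installed:
#   python -m spacy download ru_core_news_sm
nlp = spacy.load("ru_core_news_sm")

def extract_sao(text):
    """Return (subject, action, object) triples from verb nodes of the dependency tree."""
    triples = []
    for token in nlp(text):
        if token.pos_ != "VERB":
            continue
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubj:pass")]
        objects = [c for c in token.children if c.dep_ in ("obj", "iobj", "obl")]
        for s in subjects:
            for o in objects:
                triples.append((s.text, token.text, o.text))
    return triples

print(extract_sao("Корпус соединён с крышкой болтами."))
```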
Keywords: patent, data extraction, device components, dependency trees, SAO
DOI: 10.26102/2310-6018/2022.38.3.029
One of the urgent problems of modern management theory for organizational systems is the development of effective algorithmic procedures for managing geographically connected organizational systems, in which the operational effectiveness of the geographically distributed objects of the main organizational system depends on the extent to which the results of their activities influence the objects of the organizational system associated with the main one. As part of this study, the authors propose problem-oriented procedures for the analysis and management of geographically connected systems of an industry cluster based on spatial and temporal information. These include cartographic visualization of the results of a GIS-based monitoring assessment of the operation of the objects within the main and associated systems, and an expert analysis of the intensity of effective interaction between objects based on GIS-oriented spatial and temporal information with consideration to long-term planning of volumetric indicators. Selection of the option that best ensures the interaction of the main and related systems within the industry cluster, as well as distribution of the volume indicators of that option between objects within the cluster, are also suggested. The article presents the results of employing the proposed methodology in the practice of managing the effective interaction of geographically related systems included in the civil aviation cluster, where the branch system of higher and vocational education is the main system and the system of airports of the Russian Federation is the geographically connected one. The findings confirmed the effectiveness of the proposed approach, which makes it possible to recommend it for use in the practice of managing geographically connected organizational systems of various industry clusters.
Keywords: organizational systems, management of geographically connected systems, optimization, decision making support, industry cluster, education system, civil aviation, spatial and temporal information
DOI: 10.26102/2310-6018/2022.39.4.002
The paper deals with an optimization approach to the team adaptation of personnel in the implementation of digital management in a multicomponent organizational system. It is shown that the effectiveness of the adaptation process is influenced by content and resource components. The first element is a set of content components that determine the development of innovative competencies in personnel and the fulfillment of new work obligations; the second is the time and financial resources provided to arrange the process of personnel adaptation. The structuring of content components is carried out with a focus on the sets of competencies and work obligations most characteristic of digital management in multicomponent organizational systems. The primary structuring is expert-based and partially performs a reduction function. The final reduction of the expert sets of components is undertaken by optimizing their significance and mutual influence, taking into consideration the planned duration of the adaptation process. Optimization aimed at improving the efficiency of personnel team adaptation is based on expert assessments of the impact of time cycles on the degree to which content components are mastered in three modes: intensive, inertial and accelerated. This accounts for the planned duration of the adaptation process and the precedence order of the components as they are mastered by the team.
Keywords: multi-component organizational system, personnel adaptation, digital management, expert assessment, optimization
DOI: 10.26102/2310-6018/2022.38.3.024
The global informatization of modern society and continuous scientific and technological progress contribute to a rapid increase in the volume of video content in the global computer network. In some cases, the tasks of unambiguous identification of the source and content authentication arise when distributing unique author's multimedia information. One of the main approaches to solving this problem is to mark a digital graphic image with a digital watermark. In order to minimize the distortion of the original graphic data, as well as to hide the very presence of protection of the multimedia information, an invisible digital watermark is used. Digital steganography is one of the solutions that provide the means for embedding invisible robust graphic labels in digital images. In this context of application, the purpose of steganography changes: the hidden information becomes a "watermark" whereby it is possible to identify the author or owner of the labeled content. A widespread method of introducing a digital watermark is a procedure of successive transformations into the spectral region of the image followed by the introduction of the digital watermark into the Fourier spectrum. At the same time, it is obvious that any modification of the data in the frequency spectrum leads to distortion of the original image and the appearance of unmasking features in the form of artifacts. The article discusses algorithms and software tools for human-machine processing of digital watermarks in a video sequence, which is characterized by continuous change in the coordinates and rotation angle of the digital watermark being embedded.
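A generic sketch of the frequency-domain embedding idea described above, not the authors' algorithm: a small binary mark is added to the magnitudes of pseudo-randomly selected mid-frequency Fourier coefficients of a grayscale frame, and the frame is restored by the inverse transform. The positions, embedding strength alpha and frame size are assumptions.

```python
import numpy as np

def embed_watermark(frame, bits, alpha=5.0, seed=42):
    """Embed a binary watermark into mid-frequency DFT magnitudes of a grayscale frame."""
    F = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
    h, w = frame.shape
    rng = np.random.default_rng(seed)
    # hypothetical choice of mid-frequency positions (acts as a secret key)
    rows = rng.integers(h // 4, h // 2, size=len(bits))
    cols = rng.integers(w // 4, w // 2, size=len(bits))
    for r, c, b in zip(rows, cols, bits):
        mag, phase = np.abs(F[r, c]), np.angle(F[r, c])
        F[r, c] = (mag + alpha * (1 if b else -1)) * np.exp(1j * phase)
    # taking the real part discards the small residue from breaking conjugate symmetry
    marked = np.fft.ifft2(np.fft.ifftshift(F)).real
    return np.clip(marked, 0, 255).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
marked = embed_watermark(frame, bits=[1, 0, 1, 1, 0, 1, 0, 0])
```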
Keywords: digital watermark, video data, robustness, video stream, multimedia container, digital graphic image
DOI: 10.26102/2310-6018/2022.38.3.027
The article considers an algorithm for approximate query processing in relational database management systems. The described algorithm makes it possible to obtain approximate results of queries with aggregation and grouping, which makes it applicable to analytical query processing in order to reduce response time. The presented algorithms implement the method of random cluster sampling and employ software that provides means for obtaining an optimized distribution of the sample space using a sample quality metric; the coefficient of variation is chosen as this metric. The article also proposes a model of the analytical query pipeline in the form of a directed acyclic graph. The approximate query processing algorithm is extended to the conditions of its application in a query flow, which enables estimation of the confidence interval along with the result of processing the query pipeline. This algorithm can be utilized in the development of special database processor software that implements the architecture of approximate query processing in relational databases. The approach fits within research on synthesizing the structure of hybrid data warehouses that implement transactional-analytical data processing. Further research is expected to provide an experimental evaluation of the presented approach.
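A minimal sketch of cluster sampling with the coefficient of variation as the sample quality metric, under assumed data and a SUM aggregate; it illustrates the idea rather than the paper's algorithm or its confidence-interval derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical table split into 200 clusters (blocks) of 1000 rows each.
clusters = [rng.normal(loc=100 + 5 * (i % 7), scale=20, size=1000) for i in range(200)]

def approximate_sum(clusters, sample_fraction=0.1):
    """Estimate SUM over all rows from a random sample of whole clusters."""
    n = len(clusters)
    k = max(1, int(n * sample_fraction))
    idx = rng.choice(n, size=k, replace=False)
    cluster_sums = np.array([clusters[i].sum() for i in idx])
    estimate = cluster_sums.mean() * n                   # scale up to all clusters
    cv = cluster_sums.std(ddof=1) / cluster_sums.mean()  # sample quality metric
    # rough 95 % confidence half-width for the total
    half_width = 1.96 * cluster_sums.std(ddof=1) / np.sqrt(k) * n
    return estimate, cv, half_width

est, cv, hw = approximate_sum(clusters)
exact = sum(c.sum() for c in clusters)
print(f"exact={exact:.0f}  approx={est:.0f} +/- {hw:.0f}  CV={cv:.3f}")
```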
Keywords: approximate query processing, query processing algorithms, query pipeline, cluster sampling, data warehouse, hybrid transactional-analytical data processing
DOI: 10.26102/2310-6018/2022.39.4.001
The article considers the issue of detecting network attacks on Industrial Internet of Things (IIoT) systems. The widespread use of such systems increases the vulnerability of corporate networks due to the low security of smart devices, the distributed architecture of IIoT networks, and the heterogeneous nature of IIoT devices. The article proposes to employ an advanced artificial immune system aimed at intrusion detection in the IIoT network. The main concepts and mechanisms of artificial immunity currently utilized to solve various kinds of information security and data mining problems are analyzed. Such algorithms as negative selection, clonal selection, automatic updating of detectors, danger theory, dendritic cells and idiotypic immune network theory are examined. The features of each approach are regarded, and the advantages of their joint application in an integrated intrusion detection system are demonstrated. For the purposes of training and evaluating the efficiency of the given system, a test dataset on the network interaction of Internet of Things devices (Bot-IoT) was used. The results of the computational experiments verify the high efficiency of the suggested approach.
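To illustrate one of the mechanisms listed above, the following toy negative selection sketch (an assumption-laden illustration, not the combined detector proposed in the article) discards random detectors that match normal "self" traffic features and uses the survivors to flag anomalous samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(self_samples, n_detectors=500, radius=0.1):
    """Negative selection: keep only candidate detectors far from all 'self' samples."""
    dim = self_samples.shape[1]
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.random(dim)
        if np.min(np.linalg.norm(self_samples - cand, axis=1)) > radius:
            detectors.append(cand)
    return np.array(detectors)

def is_anomalous(x, detectors, radius=0.1):
    """A sample is flagged if it falls within the radius of any detector."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

# Toy data: normal traffic features concentrated in one corner of the unit square.
self_samples = rng.random((1000, 2)) * 0.4
detectors = train_detectors(self_samples)
print(is_anomalous(np.array([0.2, 0.2]), detectors))  # typically False (normal)
print(is_anomalous(np.array([0.9, 0.9]), detectors))  # typically True (anomalous)
```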
Keywords: information security, network attack, Bot-IoT dataset, Internet of Things, industrial Internet of Things, artificial immune system, negative selection, clonal selection, dendritic cells, idiotypic immune network
DOI: 10.26102/2310-6018/2022.39.4.004
The article defines the task of forming a project team for countering criminal threats, which can be solved using the methodology of operations research as an optimization assignment problem. The main drawback of using the classical assignment problem for this purpose is considered: the possibility of optimization by one criterion only. The problem of multi-criteria selection is regarded, major methods of multi-criteria optimization are listed, and two main groups of these methods are specified. One of them, based on the linear additive convolution of a set of criteria into a supercriterion, is examined, and some disadvantages of this approach are indicated. Based on this approach, the author formulates a variation of the model and the method for solving the multi-criteria assignment problem that partially mitigates the identified shortcomings. The proposed approach employs the convolution of criteria by deviation from the ideal point, with distance measured in Euclidean space. Possible limitations to the practical application of the author's version of the method for solving the multi-criteria assignment problem are indicated, and a special heuristic method is suggested to overcome them. The algorithm of the method for forming a project team for countering criminal threats is given.
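A minimal sketch of the ideal-point convolution with Euclidean distance, under hypothetical candidate scores; the resulting single-criterion problem is handed to a standard assignment solver (scipy's linear_sum_assignment), which is a stand-in rather than the author's method or heuristic.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical scores of 5 candidates for 4 project roles on 3 criteria (higher is better),
# e.g. competence, experience, psychological compatibility.
scores = rng.uniform(0.0, 1.0, size=(3, 5, 4))   # (criterion, candidate, role)

# Ideal point: the best attainable value of each criterion over all (candidate, role) pairs.
ideal = scores.max(axis=(1, 2), keepdims=True)

# Convolution of criteria: Euclidean distance to the ideal point (smaller is better).
cost = np.sqrt(((ideal - scores) ** 2).sum(axis=0))  # shape (5, 4)

# Classical assignment problem on the convolved cost matrix.
cand_idx, role_idx = linear_sum_assignment(cost)
for c, r in zip(cand_idx, role_idx):
    print(f"candidate {c} -> role {r}, distance to ideal {cost[c, r]:.3f}")
```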
Keywords: project for countering criminal threats, assignment problem, multi-criteria assignment problem, project team
DOI: 10.26102/2310-6018/2022.38.3.020
The paper is devoted to solving the problem of algorithmic support for security management processes of cyber-physical systems by detecting malicious requests from other associated systems, internal services or human actions. The relevance of the research is due to the high criticality of protection against possible degradation of services under attacks on compound complex systems responsible for integrating computing resources into physical entities. The authors focus on denial-of-service attacks on cyber-physical systems carried out by sending an HTTP flood to web management interfaces. The proposed algorithm for detecting malicious requests analyzes the activity of all investigated components of the cyber-physical system's web services. The research employs a method of visual analysis and data processing based on representation as a single normalized set. Raw data of the analyzed queries is grouped in a specific way to detect a particular deviation as a suspected threat. Examples of data changes and security system responses are given. Experimental results confirm that the suggested algorithmic software reduces type I and type II errors compared to the regression models commonly used in modern application-level firewalls.
Keywords: information security, malicious requests, sources of malicious requests, cyber security, data analysis, threats, denial of service, DDoS, URI, HTTP
DOI: 10.26102/2310-6018/2023.40.1.020
The analysis shows that cyber-physical systems, together with cyber-biological and cyber-social systems, are now considered key elements of modern infotelecommunication systems. The concept of a cyber-physical system is based on the dualism of the physical and cybernetic environments. Because the physical, biological and social environments are combined and cyberspace is further introduced, there are significant opportunities for implementing a wide variety of functions. At the same time, new problems appear, for example, those associated with modeling the processes within such systems. There is also the issue of monitoring within the cyber-physical system, since data may be missing. Therefore, this article aims to develop a structure of a cyber-physical system such that its efficiency would be as high as possible. The paper proposes a procedure for the formation of a rational structure of such a system. The components are chosen according to the principle of ranking components relative to their value from the system's point of view. Two types of constraints are used: the first are related to the area of the system in question, and the second are related to the technologies employed. At the initial stage, experts implement the choice of components of the cyber-physical system by means of an information system. Next, two selection procedures are applied. As the key result, a structural scheme related to the optimal choice of components of a cyber-physical system is suggested.
Keywords: cyber-physical system, structure, optimization, expert, information system
DOI: 10.26102/2310-6018/2022.38.3.023
The article proposes a methodology for assessing the current state of the engineering and telecommunications infrastructure of a special-purpose communication network segment and tests it using the example of the regional segment of the integrated multiservice telecommunications system of the Ministry of Internal Affairs of Russia. A regional segment of a special-purpose communication network is defined as a physical or logical zone in which granting access to resources or the denial of this access is regulated by access rules and control mechanisms; such a zone has a clear boundary with other segments. Given the need to maintain the operability of the regional segment of a special-purpose communication network, the task of assessing the current state of its engineering and telecommunications infrastructure is relevant. The paper proposes a sequence of actions aimed at auditing communication nodes at all levels of the regional segment, including the engineering infrastructure, telecommunications equipment and data transmission channels. As a mathematical apparatus, methods for processing expert assessments are used, associated with determining the significance of individual components of the engineering and telecommunications infrastructure. The analytic hierarchy process with the involvement of expert groups is applied to define the significance coefficients of the factors accounted for when calculating the integral evaluation functions of the regional segment of a special-purpose communication network.
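A minimal sketch of the analytic hierarchy process step mentioned above: significance coefficients are taken from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio is checked. The comparison matrix is hypothetical, not taken from the article.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three infrastructure factors
# (e.g., engineering infrastructure, telecommunications equipment, data channels),
# filled in on Saaty's 1-9 scale by an expert group.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # significance coefficients of the factors

n = A.shape[0]
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```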
Keywords: audit of the regional segment of a communication network, transmission channels, monitoring of telecommunications equipment, assessment methodology, expert assessments, telecommunications and engineering infrastructure
DOI: 10.26102/2310-6018/2022.38.3.015
The paper deals with the issues of resource support management in the development of an organizational system over a given planning horizon. To solve them, integrating visual expert and optimization modeling within a single algorithmic scheme is proposed. The first task is aimed at determining the importance of the monitored indicators of the organizational system's functioning in the implementation of the development process. To address it, visualization of the initial data is suggested, which helps to accelerate and improve the accuracy of expert assessments when choosing the structure of the forecast model; it is targeted at effective processing of monitoring assessment data by means of visualization techniques. The second task is a multi-alternative optimization problem that uses the solutions of the first task and ensures the distribution of the integral amount of resource support to increase the level of the indicators most crucial for development. The third task characterizes management decisions on the distribution of resource support between time ranges within the given planning horizon. It is shown that the combination of visual expert and optimization modeling makes it possible to find an optimal distribution that is more consistent with the real functioning of the organizational system than the traditional multi-step process of making optimal decisions. The solutions of these problems are combined within an algorithmic scheme, and each procedure includes certain actions for data visualization, forecasting, examination and making optimal decisions.
Keywords: organizational system, management, resource support, development, visualization, forecasting, expert assessment, optimization
DOI: 10.26102/2310-6018/2022.38.3.030
The article describes a method of strategic planning, SWOT analysis, which helps to study in detail a set of parameters of both the internal environment of the institution in question and the external environment that has the strongest impact on its functioning, on the basis of which management decisions aimed at qualitative and quantitative improvement of performance indicators are made. The features of the management system and operation of a medical organization are analyzed using a key quality indicator: annual statistical reporting that reflects the results of the institution's activities in the main areas for higher state bodies with authority in the field of public health. The composition of the forms of accounting and analytical registers and statistical indicators was analyzed, and the method of system dynamics was applied in the context of selected management processes to identify the optimal set of parameters of the internal and external environment, providing the means for developing a SWOT matrix suitable for the qualitative assessment of outpatient clinic operation. Criteria, as well as the factors and actors that have the greatest impact on the control mechanism, are identified by building a tree of goals. The developed SWOT matrix, based on a set of key parameters and criteria, is a universal tool for strategic planning and can be put into practice when conducting a SWOT analysis of medical organizations in various regions.
Keywords: medical organization, SWOT analysis, accounting and analytical registers, goal tree, system dynamics, parameters, internal environment, external environment, statistical reporting
DOI: 10.26102/2310-6018/2022.38.3.019
The issues of organizing distributed computation in fog environments are currently relevant due to the increasing amount of data circulating over global networks. Research in the field of new models, methods and technical means for the fog computing concept covers a wide range of topics, including resource sharing, computation planning, user authentication and data security. Papers on resource consumption are also presented, specifically those that explore extending the expedient service life of fog devices, which has a significant impact on system operating cost. In this article, the solution to the problem of resource saving in this respect is associated with a reasonable distribution of the computational load over fog nodes, which affects device indicators such as the probability of failure-free operation, the gamma-percentile time between failures and the average residual resource of a computing device. A method for evaluating the feasibility of placing a computational load on nodes as part of a "greedy" strategy is proposed, as well as a method for selecting the nodes on which to place the load. Combining these methods constitutes a method for distributed computing planning in the fog layer of a network with optimization according to the resource-saving criterion. The conducted experiment demonstrates the applicability of the developed method and helps to choose the direction for further research.
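A toy sketch of a "greedy" placement step of the kind described above, with an assumed node model and wear scoring rather than the reliability indicators used in the paper: each task goes to the feasible fog node whose residual resource degrades the least.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    free_cpu: float            # available capacity, arbitrary units
    residual_resource: float   # remaining useful life indicator, hours
    wear_per_unit: float       # assumed wear caused by one unit of load

def place_tasks(tasks, nodes):
    """Greedy placement: each task goes to the feasible node losing the least residual resource."""
    plan = {}
    for task_name, demand in tasks:
        feasible = [n for n in nodes if n.free_cpu >= demand]
        if not feasible:
            plan[task_name] = None   # cannot be placed in the fog layer
            continue
        best = min(feasible, key=lambda n: demand * n.wear_per_unit / n.residual_resource)
        best.free_cpu -= demand
        best.residual_resource -= demand * best.wear_per_unit
        plan[task_name] = best.name
    return plan

nodes = [FogNode("n1", 4.0, 8000, 0.5), FogNode("n2", 2.0, 3000, 0.2), FogNode("n3", 6.0, 12000, 0.8)]
print(place_tasks([("t1", 1.5), ("t2", 3.0), ("t3", 2.0)], nodes))
```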
Keywords: resource-saving, computational planning, fog computing
DOI: 10.26102/2310-6018/2022.38.3.016
The article is devoted to the issue of identifying the author of a program's source code from heterogeneous data by means of a hybrid neural network. Solutions to this problem are especially relevant to information security, the educational process, and copyright protection. The article analyzes modern methods of addressing this problem. The authors propose their own methodology, based on a hybrid neural network proven in earlier studies, and evaluate the effectiveness of this approach in simple and difficult cases. The research incorporates experiments on previously unconsidered cases of source code author identification based on heterogeneous data. Cases relevant to corporate development are examined, including the analysis of source code presented as commits and model training on datasets with more than two programming languages. Additionally, the growing trend of determining the authorship of artificially generated source code is regarded. For each case, a dataset was generated and an appropriate experiment was performed. The effectiveness of the authors' methodology for all three difficult cases was evaluated using 10-fold cross-validation. The average accuracy for mixed datasets was 87 % for two programming languages and 76 % for three or more languages, respectively. The average accuracy of the methodology for authorship identification of artificially generated source code was 81.5 %. Identification of the author of program source code based on commits was carried out with an accuracy of 84 %. Experiments have shown that the effectiveness of the methodology can be improved in all three cases by using larger amounts of training data.
Keywords: authorship, source code, commits, generation, neural network
DOI: 10.26102/2310-6018/2022.38.3.025
The article discusses the need for an algorithm for forming an information system vulnerability base and for selecting a neural network architecture. A description of existing systems and criteria for assessing vulnerabilities, as well as a group of metrics, is given. The vulnerability databases were analyzed, and discrepancies in the assessment of vulnerabilities, along with their advantages and disadvantages, were identified. The following architectures were identified and studied: feedforward neural network, generative adversarial network, autoencoder, recurrent neural network without long short-term memory, recurrent neural network with long short-term memory, Rumelhart multilayer perceptron, liquid state machine, and Boltzmann machine. A preliminary analysis of neural network architectures is presented, taking into account parameters significant for further use in the field of information security and vulnerability classification. Based on the results obtained during the study of neural network parameters, the feedforward neural network, the recurrent neural network with long short-term memory and the generative adversarial network were selected. An alternative method of forming a vulnerability database by means of neural networks is proposed. As a result, an algorithm for forming a vulnerability base and a method for automating it using a neural network are suggested. The solution will allow the neural network to constantly receive up-to-date data for training so that the vulnerability database is updated as quickly as possible, which will make it the most complete, reliable and up-to-date of the existing vulnerability databases.
Keywords: vulnerabilities, neural networks, neural network architecture, algorithm, threat
DOI: 10.26102/2310-6018/2022.39.4.007
Agent-based modeling is actively used for modeling human health. The main advantages of an agent-based approach in this field are the capability to implement a modular approach to health and to account for individual patient indicators. The article presents the concept of a flexible and expandable agent model of the patient, which performs a long-term prediction of the patient's condition based on short-term test treatments administered to them, including geroprophylactic ones, and on predicting the patient's reaction to the exposure in order to prevent possible future diseases with regard to both calendar and biological age. All interactions of the model agents are reduced to assessing the effectiveness of anti-aging measures in the form of a calculated bio-age, which characterizes the degree of decrease in the functional capacity of the organism. As part of the concept, the central agents “Patient”, “Aging Process” and “Impact” are highlighted in the model, as well as a number of lower-level agents associated with the agent “Patient”. Lower-level agents are responsible for modeling the physiological processes of body systems or diseases; for example, a chronic disease is allocated its own agent, which affects the patient's condition during the modeling. The types of model agents are extensible, which makes it possible to develop this concept of the model further. The paper presents the testing of the agent model concept to identify the effectiveness of the impact on the patient based on the assessment of changes in biological age before and after geroprophylactic therapy.
Keywords: agent modeling, patient's health, geroprophylactic treatment, predicting the efficiency of treatment, bioage
DOI: 10.26102/2310-6018/2022.38.3.008
The paper is devoted to modeling an organizational culture by means of an experimental study using a questionnaire and expert a priori ranking. The influence of organizational culture on the competitiveness of an enterprise is considered. The a priori ranking method makes it possible to objectively assess the subjective opinions of experts and develop a model of factor rank ordering that captures the specifics of regional machine-building enterprises. For clarity, the results of the study are laid out in the form of a rank histogram. Experts are divided into two groups: managers on the one hand and workers and office employees on the other. Relying on their opinions, the ranks of social and economic factors are determined. Following on from the study, the factors that most fully capture the specifics of each enterprise are ascertained, and an analysis of the organizational culture is carried out. In addition to modeling the system of an enterprise's organizational culture, a study was conducted with a view to identifying the qualities of managers. As a result, the state of the organizational culture of competitive regional machine-building enterprises was assessed and ways of improving it were determined. A comparative characterization of the influence of socio-economic factors on the organizational culture of two enterprises in the region has been obtained. Based on the findings, the organizational culture of the two competing machine-building enterprises was described in terms of its main parameters from the perspective of both management and employees. A conclusion is drawn from the analysis of the diagnostics of the state of the organizational culture, and a number of measures are proposed to improve it.
Keywords: organizational culture, expert evaluation, ranking, modeling, analysis, competitiveness
DOI: 10.26102/2310-6018/2022.38.3.022
Currently, the issue of choosing the optimal solution is one of the most important and urgent in industry, the economy, agriculture, and the military sector. Methods and approaches of linear programming theory are used to solve many applied optimization tasks. The simplex method, which is the principal method of linear programming, is characterized by a large amount of computational actions and procedures. Owing to this, modifications of the main method with higher algorithmic efficiency are employed to address this problem. In this article, a new method for solving linear programming problems has been developed. Its algorithmic complexity, which is lower than that of the simplex method, is achieved by considering a class of problems with completely bounded feasible regions. The new method is justified by the results established in the proven statements. The implementation of the method is described by two algorithms: 1) search for a quasi-optimal solution by analyzing the coordinates of projections on hyperplanes (projection algorithm); 2) search for an optimal solution by setting increments to the constraints (increment algorithm). To explain the functioning of the algorithms, specific numerical examples are analyzed. Algorithmic complexity estimates of the developed method are obtained by counting the number of arithmetic operations performed. Formula expressions for estimating the complexity of the calculations are obtained.
Keywords: algorithm, variable, hyperplane, projection, inequality, iteration, number of operations, computational complexity
DOI: 10.26102/2310-6018/2022.38.3.017
The article considers the problem of identifying the pre-selected class of an observed signal. This is a relevant issue in the theory of pattern recognition, clustering, statistical decisions, technical diagnostics, and a number of other areas of science and technology. As a signal model, a doubly connected Markov model (complex Markov chain) is used, based on three-dimensional probability densities of the simulated random processes. The technique for forming class models according to known probabilistic characteristics or from a labeled training sample is regarded. As part of the Bayesian approach, the posterior probabilities that determine the membership of the observed sample of signal readings in each class are defined. An optimal signal classification algorithm is proposed, a decision-making algorithm is developed, and decision statistics are formed that depend on the observed sample of readings and the transition probability matrices of the analyzed classes, providing the means for decision-making with a given reliability based on the Wald procedure; their properties are also examined. Statistical simulation of the classification algorithm has been carried out, confirming its effectiveness. The research results can be used in various systems and devices for detecting objects from the random signals they generate, for example, in technical diagnostics equipment.
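A minimal sketch of a Wald sequential test between two Markov signal models, with hypothetical first-order transition matrices; the paper's doubly connected (second-order) model and its multi-class decision statistics are not reproduced.

```python
import numpy as np

# Hypothetical first-order transition matrices of two signal classes
# (the paper uses a doubly connected, i.e. second-order, Markov model).
P1 = np.array([[0.8, 0.2],
               [0.3, 0.7]])
P2 = np.array([[0.5, 0.5],
               [0.5, 0.5]])

def wald_classify(samples, P1, P2, alpha=0.01, beta=0.01):
    """Sequential (Wald) test between two Markov chain models of a quantized signal."""
    a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)  # decision thresholds
    llr = 0.0
    for prev, cur in zip(samples[:-1], samples[1:]):
        llr += np.log(P1[prev, cur]) - np.log(P2[prev, cur])
        if llr >= b:
            return "class 1"
        if llr <= a:
            return "class 2"
    return "continue observation"

rng = np.random.default_rng(0)
# Simulate a sample path from class 1 and classify it.
x = [0]
for _ in range(200):
    x.append(rng.choice(2, p=P1[x[-1]]))
print(wald_classify(x, P1, P2))
```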
Keywords: signal, classification, Markov model, Wald procedure, decision statistics
DOI: 10.26102/2310-6018/2022.38.3.014
The paper considers the problem of planning a route for sea vessel shifting. Under conditions of heavy traffic, navigators should follow the traffic scheme accepted in the given water area. Such a pattern may not be officially established while still representing collective navigation experience. In this case, route planning based on data about the movement of other ships that have previously been in this water area (the same idea underlies "big data" methods) appears to be productive. In papers published earlier, such route planning employed a cluster analysis of retrospective data on the movement of ships, which involved dividing the water area into sections and isolating their characteristic values of speed and course. The problem with this approach was the choice of partitioning parameters, which had to be set for each specific water area separately. This paper proposes another approach, in which the graph of potential routes is built from a selection of the trajectories of individual ships previously observed in the selected water area. The article regards a method for constructing such a graph of possible routes, estimates the number of its vertices and edges, and gives recommendations on the choice of a method for finding the shortest path on this graph. A possible method, premised on the notion of combining straight and maneuvering sections of vessel traffic, that can be applied to interpolate the missing data required to build the graph is discussed. Examples of route planning in a number of real water areas are given: Vladivostok, Tokyo Bay, and the Tsugaru Strait.
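A minimal sketch of the shortest-path step on a graph assembled from previously observed trajectories; the waypoints, the distance approximation and the use of networkx's shortest_path are illustrative assumptions, not the authors' construction.

```python
import math
import networkx as nx

# Toy set of previously observed tracks ((lat, lon) waypoints); real AIS tracks would be used.
tracks = [
    [(43.10, 131.88), (43.08, 131.92), (43.05, 131.97)],
    [(43.10, 131.88), (43.07, 131.90), (43.05, 131.97)],
]

def dist(p, q):
    """Approximate distance in nautical miles between two (lat, lon) points."""
    dlat = (q[0] - p[0]) * 60.0
    dlon = (q[1] - p[1]) * 60.0 * math.cos(math.radians((p[0] + q[0]) / 2))
    return math.hypot(dlat, dlon)

G = nx.DiGraph()
for track in tracks:
    for p, q in zip(track[:-1], track[1:]):
        G.add_edge(p, q, weight=dist(p, q))   # edge along an observed trajectory segment

route = nx.shortest_path(G, source=(43.10, 131.88), target=(43.05, 131.97), weight="weight")
print(route)
```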
Keywords: maritime safety, route planning, big data, automatic identification system, graph algorithms, shortest path
DOI: 10.26102/2310-6018/2022.38.3.009
The equilibrium at interphase boundaries largely determines transfer processes, and therefore studying it is an important task. The paper proposes a mathematical model of the problem of stationary salt ion transfer at the onset of equilibrium, namely at zero current, in the cross section of the desalination channel formed by an anion-exchange and a cation-exchange membrane, in the form of a boundary value problem for the system of Nernst-Planck and Poisson equations in the potentiostatic mode. A numerical and an asymptotic solution of this boundary value problem are obtained. The numerical and asymptotic solutions are compared and shown to coincide with good accuracy. The acquired asymptotic solution allows for an exhaustive analysis of the equilibrium state depending on the initial concentration, the potential jump and the properties of the ion-exchange membranes, and helps to establish the basic transfer patterns. It is shown that the stationary state of the salt ion transfer process through the channel section coincides with the equilibrium state. The location and dimensions of the space charge and electroneutrality regions are established. The dependence of the electric field strength and concentration on the potential jump and the boundary values of the cation and anion concentrations is obtained. The results of the research can be used to determine the optimal operating modes of electrodialysis water purification devices.
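For reference, the standard one-dimensional stationary form of the Nernst-Planck and Poisson system named above is written out below for a binary salt; the notation is assumed, and the boundary conditions and membrane properties used in the article are not reproduced.

```latex
% C_i - ion concentrations, j_i - fluxes, \varphi - electric potential,
% z_i - charge numbers, D_i - diffusion coefficients, \varepsilon - permittivity,
% F - Faraday constant, R - gas constant, T - temperature.
\begin{aligned}
  j_i &= -D_i \left( \frac{dC_i}{dx} + \frac{z_i F}{RT}\, C_i \frac{d\varphi}{dx} \right),
  \qquad \frac{d j_i}{dx} = 0, \quad i = 1, 2, \\
  \frac{d^{2}\varphi}{dx^{2}} &= -\frac{F}{\varepsilon}\,\bigl(z_1 C_1 + z_2 C_2\bigr), \\
  I &= F\,\bigl(z_1 j_1 + z_2 j_2\bigr) = 0 \quad \text{(zero-current condition at equilibrium)}.
\end{aligned}
```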
Keywords: small parameter, asymptotic solution, cross section of desalination channel, electromembrane systems, numerical solution, singularly perturbed problems