The relevance of the work stems from the fact that the social and economic development of territories is one of the primary national tasks. In this regard, the purpose of this work is to describe an approach that makes it possible to increase the efficiency of management decisions in planning, forecasting and programming the social and economic development of municipalities. To achieve this goal and to model the subject area, a number of tools are used, in particular a semantic network and cognitive maps. The authors present a semantic network that describes the basic concepts and relationships in the process of managing the social and economic development of a municipality; its distinctive feature is the set of factors that determine the social and economic development of a settlement. These factors make it possible to identify typical municipalities within a region (country), for which certain territorial development strategies (templates) can be used. As a supplement to the semantic model, a cognitive map is used, which makes it possible to take into account the interdependence of the indicators for assessing the social and economic development of municipalities. The proposed approach was tested on the indicators of the social and economic development of a municipality (the Shegarsky district of the Tomsk region) and can be used in making managerial decisions in planning, forecasting and programming the social and economic development of municipalities.
Keywords: social and economic development, municipality, semantic network, cognitive modeling, strategy, decision making, semi-structured system
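As a hedged illustration of how a cognitive map can capture the interdependence of development indicators, the following minimal sketch iterates a small fuzzy cognitive map; the concepts, influence weights and initial activations are hypothetical and are not taken from the authors' model.

```python
import math

concepts = ["employment", "investment", "budget_revenue", "migration_outflow"]
# W[i][j]: assumed influence of concept i on concept j, in [-1, 1]
W = [
    [0.0, 0.3, 0.5, -0.6],
    [0.6, 0.0, 0.4, -0.2],
    [0.2, 0.4, 0.0, -0.1],
    [-0.4, -0.2, -0.3, 0.0],
]

def step(state):
    """One synchronous update: sigmoid of own activation plus weighted influences."""
    out = []
    for j in range(len(state)):
        s = state[j] + sum(state[i] * W[i][j] for i in range(len(state)))
        out.append(1.0 / (1.0 + math.exp(-s)))
    return out

state = [0.5, 0.7, 0.5, 0.3]   # hypothetical initial activation levels
for _ in range(20):            # iterate until the map stabilizes
    state = step(state)
print([round(v, 3) for v in state])
```

The stabilized activations indicate the relative state of each concept under the assumed mutual influences.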
The purpose of this work is to investigate the problem of creating a multi-agent system that monitors a context-dependent smart home and performs appropriate actions based on the current state of the home. Two main intelligent features are introduced in the architecture of this project to facilitate a multi-agent system in a smart home environment. The first learns and adapts to the movements and actions of the residents. The second predicts events that happen to the residents. The integration of the system is presented using a simulation application of the technology in the working environment of a smart home. The proposed architecture goes beyond converting passive sensors into smart devices by adding processing and analytical capabilities through the integration software. The study is tested in a simulated environment with a visualized house plan. It focuses mainly on the software that provides the infrastructure for intelligent monitoring of movement inside the home, including protocols for agent interaction and communication. The context awareness for the smart home proposed in this article uses a multi-agent system to represent context and knowledge shared between agents.
Keywords: intelligent agent, multi-agent technologies, smart home, context-sensitive systems, forecasting agent
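The event-prediction function described above can be illustrated by a minimal sketch: an agent that learns room-to-room transition frequencies from observed movements and predicts the most likely next room. The room names and the simple frequency-based predictor are illustrative assumptions, not the authors' architecture.

```python
from collections import Counter, defaultdict

class PredictionAgent:
    """Learns room-to-room transition frequencies and predicts the next room."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, path):
        # count each consecutive pair of visited rooms
        for a, b in zip(path, path[1:]):
            self.transitions[a][b] += 1

    def predict(self, room):
        counts = self.transitions[room]
        return counts.most_common(1)[0][0] if counts else None

agent = PredictionAgent()
agent.observe(["hall", "kitchen", "hall", "bedroom", "hall", "kitchen"])
print(agent.predict("hall"))   # most frequent successor of "hall"
```

In a full multi-agent system, such a predictor would be one agent among others sharing context through the common knowledge representation.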
The article addresses the task of finding anomalies in data when implementing predictive analytics systems. Predictive analytics has become very popular over the past few years: it helps banks approve loans and identify suspicious account activity, email providers filter spam, and retailers predict the likelihood of purchase to attract customers. But predictive analytics is quite complex, and therefore its implementation is also fraught with difficulties. When companies take the traditional approach to predictive analytics (that is, treat it like any other type of analytics), they often face obstacles. This is why this area needs tools to detect anomalies in data. Such tools should help identify outlying values in order to relate them to the factors of their occurrence and to recognize them in the future. This article describes a package in the R language that detects anomalies in multidimensional time series. The package is capable of detecting anomalies using three different methods: the n-sigma method, the CUSUM method, and the fourth-order central moment method. The package also searches for complex anomalies, which are a direct indicator of errors in the system, since such anomalies are found in multidimensional data.
Keywords: anomaly, outlier, time series, three-sigma rule, R language
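Although the described package is written in R, two of its methods, the n-sigma rule and CUSUM, can be sketched in a few lines of Python; the series, threshold and drift parameter below are illustrative.

```python
import statistics

def three_sigma_anomalies(xs, n=3.0):
    """Indices of points farther than n standard deviations from the mean."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [i for i, x in enumerate(xs) if abs(x - mu) > n * sd]

def cusum_anomalies(xs, threshold, drift=0.5):
    """CUSUM detector: alarms when the accumulated deviation from the mean
    (minus a drift allowance) crosses the threshold, then resets."""
    mu = statistics.mean(xs)
    pos = neg = 0.0
    alarms = []
    for i, x in enumerate(xs):
        pos = max(0.0, pos + (x - mu) - drift)
        neg = max(0.0, neg - (x - mu) - drift)
        if pos > threshold or neg > threshold:
            alarms.append(i)
            pos = neg = 0.0
    return alarms

series = [10, 11, 9, 10, 10, 11, 10, 30, 10, 9, 11, 10]
print(three_sigma_anomalies(series), cusum_anomalies(series, threshold=8.0))
```

Both methods flag the spike at index 7; in the multidimensional case the package would apply such detectors across components to find complex anomalies.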
The most difficult and dangerous element of the flight of a helicopter-type aircraft is landing, and performing it on an unprepared site in conditions of insufficient visibility is one of the key problems. Landing on an unprepared (unsurveyed, unequipped) site may be necessitated by the delivery of cargo or ammunition in combat conditions, search and rescue operations, evacuation of the wounded, etc. The spatial position of the aircraft and its location are determined by the crew visually from the natural horizon and ground landmarks, as well as relative to other material objects and structures. When landing on snow-covered or dry ground, the air jet from the helicopter's main rotor raises a dense cloud of suspended particles, which critically reduces horizontal and vertical visibility and can lead to an incorrect assessment by the crew of the helicopter's spatial position relative to the ground; in addition, obstacles in the landing zone (large stones, moving and stationary objects) may remain unnoticed. At the same time, in low light or difficult weather conditions, buildings, structures, power line masts, trees, shrubs, etc. may be located in the landing zone. A helicopter landing on a dusty, sandy or snow-covered area may be accompanied by the formation of a dust or snow vortex around it, which impairs visibility and reduces or eliminates the possibility of a visual approach. The proposed method for assessing the spatial position of a helicopter-type aircraft in a snow vortex in the Arctic zone relates to the field of aviation, in particular to systems for ensuring the safety of landing a helicopter-type aircraft, and can be used in the development of control systems for landing a helicopter-type aircraft on an unprepared (unequipped, unsurveyed) site in conditions of insufficient information content of the space behind the cab about the spatial position, for both manual and automatic control.
Keywords: snow swirl, snow and ice cover, snow-covered pad, spatial position, insufficient information content of the space behind the cab, helicopter-type aircraft.
Empirical studies have shown a clear dependence of the accident rate in a region on the quality and efficiency of software for processing data of administrative materials on traffic violations. Choosing the optimal software for computing systems that process administrative materials on traffic violations is an urgent problem nowadays. Fifteen qualitative characteristics of computer software for processing data of administrative materials on traffic violations, based on component analysis, together with their quantitative indicators obtained from experts in the field of road safety management, are investigated in this work. A component analysis of the subject of the study is given, and the results of the analysis of three groups of qualitative characteristics are presented, which allow selecting software both on the basis of a certain group of characteristics and on the basis of a complex quality indicator. A further development of the ideas set out in this article will be an application package for the optimal choice of software for computer systems for processing data of administrative materials on traffic violations.
Keywords: componential analysis, software of computing complexes, administrative materials, traffic offenses, main component, photo and video recording complexes, integrated quality indicator
The purpose of this article is to bring partial indicators of the effectiveness of solving information exchange problems between the elements of the system of distributed situational centers to a single scale and to establish the order of their importance. The scientific and methodological approach is based on representing a selected set of performance indicators as a certain scalar function of a vector argument in an additive or multiplicative form. The approach makes it possible to form complex indicators of the effectiveness of solving problems of managing information exchange between situational centers, based on bringing the set of partial indicators to a single scale and establishing order relations for their importance. An essential feature of the proposed approach is the probabilistic interpretation of the vagueness of ideas about the preference of some performance indicators over others. The basis for determining the relative importance of particular indicators is the principle of maximum entropy, which lends a certain objectivity to the assessment results, taking into account the uncertainty of information at the early stages of planning. Objectivity is achieved by using the distribution law characterized by the maximum value of the entropy measure of uncertainty. This form of the probability distribution law relies on a minimum of assumptions. The approach is implemented in the form of a patent of the Russian Federation for an invention and can be used in decision support systems for information exchange management problems between situational centers.
Keywords: modeling, uncertainty, efficiency, management, information exchange, situational center, government departments
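A minimal sketch of the core idea, bringing partial indicators to a single scale and combining them in additive or multiplicative form, is given below. The indicator values are hypothetical; the uniform weights correspond to the maximum-entropy choice under complete uncertainty about indicator importance (the article's order relations would shift these weights).

```python
import math

def normalize(x, worst, best):
    """Map a raw indicator onto the common [0, 1] scale (best = 1)."""
    return (x - worst) / (best - worst)

def additive(vals, weights):
    return sum(w * v for w, v in zip(weights, vals))

def multiplicative(vals, weights):
    return math.prod(v ** w for w, v in zip(weights, vals))

# three partial indicators on different raw scales (hypothetical values);
# for the second indicator lower raw values are better (e.g. delay, ms)
raw = [(0.8, 0.0, 1.0), (120.0, 200.0, 50.0), (4.0, 1.0, 5.0)]
vals = [normalize(x, worst, best) for x, worst, best in raw]
weights = [1 / 3] * 3   # maximum-entropy weights under total uncertainty
print(round(additive(vals, weights), 3), round(multiplicative(vals, weights), 3))
```

The additive and multiplicative forms give close but not identical complex indicators; the multiplicative form penalizes weak partial indicators more strongly.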
Currently, methods related to the study of text arrays are actively developing. They aim either to measure spatial characteristics, such as line lengths, font sizes, etc., or to address general linguistic problems, in which meaning-bearing units, such as sentences and phrases, are studied. In the second class of problems, the use of frequency analysis can be considered promising. The paper analyzes the approaches that can be used in this case. The authors develop an algorithm for processing text in a natural language. The algorithm is programmatically implemented using Python, Jupyter Notebook, wordcloud and NLTK. During processing, the text array is split into words, after which a list of tokens is formed. Recommendations are given for removing conjunctions, prepositions and other parts of speech in order to carry out a full analysis of the topic. The main stages of the text frequency analysis algorithm are shown: the data are loaded, the primary processing of text arrays is carried out, words are replaced, statistical data are evaluated, unnecessary words are removed, and a visual presentation is produced. The main stages of the algorithm are also demonstrated with fragments of the program code.
Keywords: text information, model, frequency analysis, program, word, language
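The main stages of the frequency analysis (tokenization, removal of function words, frequency counting) can be sketched with the standard library alone; the stop-word list is illustrative, and `collections.Counter` stands in for the NLTK tooling mentioned in the abstract.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "and", "of", "in", "to", "is"}  # illustrative list

def frequency_analysis(text, top=5):
    tokens = re.findall(r"[a-zа-яё]+", text.lower())      # split into word tokens
    tokens = [t for t in tokens if t not in STOPWORDS]    # drop function words
    return Counter(tokens).most_common(top)               # ranked frequencies

sample = "The model of the text is a model of the language."
print(frequency_analysis(sample))
```

The resulting ranked frequency list is the input for the visual presentation stage (e.g. a word cloud).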
The article examines a formalized description of the choice of a variant of the structure of digital management of the logistics process in an organizational system. Three types of interaction structures between the digital platforms of the control center and the components of the logistics process are considered: centralized, decentralized and cluster. Each variant of the structure corresponds to a certain degree of decentralization of digital management and is characterized by the achieved values of economic, time and reliability indicators. It is shown that the selection problem in this case is a multicriteria problem. Deciding on the optimal variant of the digital control structure requires a transition to an integral assessment over a set of indicators. Instead of an a priori expert assessment of the weighting coefficients of the integral assessment, it is proposed to introduce an adaptive scheme of dialogue with experts, which allows constructing an iterative algorithm with a three-level question-answer process. Formalizing opinions using sign characteristics and linguistic variables allows adaptive adjustment of the integral assessment weights using a randomized scheme for organizing the dialogue with an expert. The final choice of a variant of the digital control structure is made upon completion of the iterative process and transition to a smoothed integral function.
Keywords: logistics process, organizational system, digital management, digital platform, multi-criteria modeling
The article discusses the reliability of a power generation system from the point of view of cyber-physical control. Companies that generate electricity must supply this resource without interruption and monitor the generation process to identify and correct all causes of possible malfunctions. The authors present a hybrid method for detecting change-points in the operation of cyber-physical power generation systems based on data from the process of power generation by gas turbine plants, provided that they are in the «generation» operating mode. The hybrid approach is a sequence (or pipeline) of steps that improve the results of the basic approach, which uses the n-sigma rule to compare real generation data with a performance standard. The proposed hybrid method is based on the following steps: search for optimal parameters (the precision, recall and F1-measure of the developed method for selecting the optimal parameters were 0.7, 0.7778 and 0.7369, respectively); identification of outliers; detection of change-points using heuristic rules. As methods for detecting outliers, the authors use the DBSCAN algorithm and the n-sigma rule. The hybrid method using the DBSCAN algorithm identified outliers without false positives compared to the baseline approach. Advanced heuristics for change-point detection allow cyber-physical system experts to quickly identify the cause of a change-point using information about the time of the failure and the sensors on which the failure occurs. Prompt identification of a change-point allows more accurate and timely monitoring of the performance of individual units and of the system as a whole, and makes it possible to develop a strategy for repairing equipment in the shortest possible time and with minimal intervention in the process (before the system reaches a critical state), which can significantly reduce maintenance costs.
Application examples demonstrate the advantages of the proposed method for both synthetic and real data.
Keywords: cyber-physical systems, statistical methods, outlier, power generating equipment, change-point
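One of the heuristic rules described above, flagging a change-point when several consecutive points deviate from the performance standard by more than n standard deviations, can be sketched as follows. The data, baseline window and run length are illustrative assumptions, and the DBSCAN step of the pipeline is not shown.

```python
import statistics

def detect_change_point(actual, standard, baseline=5, n=3.0, run=3):
    """Return the start index of the first run of `run` consecutive points
    whose deviation from the performance standard exceeds n standard
    deviations of the residuals over an assumed fault-free baseline window."""
    residuals = [a - s for a, s in zip(actual, standard)]
    sd = statistics.pstdev(residuals[:baseline])
    streak = 0
    for i, r in enumerate(residuals):
        if abs(r) > n * sd:
            streak += 1
            if streak >= run:
                return i - run + 1   # start of the deviating run
        else:
            streak = 0
    return None

standard = [100.0] * 10                                   # nominal generation level
actual = [100, 101, 99, 100, 100, 100, 100, 120, 121, 120]
print(detect_change_point(actual, standard))
```

Requiring a sustained run of deviations, rather than a single one, separates a change-point from an isolated outlier.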
The article discusses a formalized approach to managing investment processes for the development of a sectoral organizational system using modeling and optimization methods. The problem orientation of these methods is determined by the peculiarities of the processes under study in the civil aviation sectoral organizational system. An agent-game model is proposed as a basic model of interaction between the control center and the objects of the organizational system in investing in the areas of the industry development program. The necessity of sequentially solving two optimization problems under centralized management of the investment process is discussed. A Boolean programming optimization model is formalized for the game problem of forming the set of objects included in the resource support of certain development program areas. The second task is then considered, namely the agent-game coordination of interests between the control center and the objects of the organizational system in the implementation of investment resources, in order to find the optimal set of control center strategies with respect to a set of basic industry development indicators. Management decision making based on the listed optimization problems is algorithmized through the integration of randomized search algorithms, a genetic algorithm with adaptive mutation, and a particle swarm algorithm.
Keywords: sectoral organizational system, investment process, centralized management, agent-game modeling, optimization
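The first optimization problem, a Boolean programming model for forming the set of objects included in the resource support of program areas, is structurally a 0-1 knapsack problem. A dynamic programming sketch with hypothetical costs and effects (the article's actual model and algorithms, including the game coordination, are richer than this):

```python
def select_objects(costs, effects, budget):
    """0-1 Boolean-programming sketch: choose objects maximizing total effect
    within the investment budget (dynamic programming over the budget)."""
    n = len(costs)
    best = [0.0] * (budget + 1)
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):   # iterate budget downward
            cand = best[b - costs[i]] + effects[i]
            if cand > best[b]:
                best[b] = cand
                choice[i][b] = True
    chosen, b = [], budget                          # backtrack the selection
    for i in range(n - 1, -1, -1):
        if choice[i][b]:
            chosen.append(i)
            b -= costs[i]
    return best[budget], sorted(chosen)

costs = [4, 3, 2, 5]             # resource demands of program objects (hypothetical)
effects = [7.0, 5.0, 4.0, 9.0]   # contributions to development indicators
print(select_objects(costs, effects, budget=9))
```

For large instances the article's randomized search, genetic and particle swarm algorithms replace exact enumeration.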
The paper suggests an index of social tension in an enterprise, which makes it possible to assess the intensity of one indicator, namely the distribution of employees by income. Most researchers believe that this is one of the main causes of conflict. It is proposed to estimate social tension in the enterprise by the sum of the deviations of the actual income distribution from the lognormal law. This approach is based on the Boltzmann principle, according to which the most probable state of a system is the most stable one. The degree of deviation from the lognormal distribution, taken as the most probable, characterizes the level of social tension. The paper presents an example of calculating the tension index at a university department. The level of social tension slightly exceeded the permissible value. As an assessment of a possible manifestation of an increased level of social tension, staff turnover was calculated, since under market competition protest activity does not bring the expected benefits, and changing jobs is the more likely way to end a conflict. Staff turnover was 2.9%. Thus, in the observed team, social tension is slightly elevated and there is no protest activity. The proposed index can be used as an independent indicator or included in complex indicators of social tension. The advantage of the developed index is the ability to keep a continuous record of the level of social tension with respect to income differentiation. If already installed software products are used, the index can be calculated automatically. The management of the enterprise can use it to analyze the dynamics of the level of social tension in the team in order to prevent conflict situations.
Keywords: the index of social tension in the enterprise, the distribution of the number of employees, the lognormal distribution law, the Boltzmann principle, the analysis of the dynamics of social tension, the prevention of conflict
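A hedged sketch of the proposed index, the sum of absolute deviations of the empirical income distribution from a fitted lognormal law, with hypothetical salaries and income bins (the article's exact binning and normalization may differ):

```python
import math

def lognorm_cdf(x, mu, sigma):
    """CDF of the lognormal distribution with log-mean mu and log-sd sigma."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def tension_index(incomes, bins):
    """Sum of absolute deviations of empirical income shares from the shares
    implied by a lognormal law fitted by the method of moments on logs."""
    logs = [math.log(x) for x in incomes]
    mu = sum(logs) / len(logs)
    sigma = (sum((v - mu) ** 2 for v in logs) / len(logs)) ** 0.5
    index, lo = 0.0, bins[0]
    for hi in bins[1:]:
        actual = sum(lo <= x < hi for x in incomes) / len(incomes)
        expected = lognorm_cdf(hi, mu, sigma) - lognorm_cdf(lo, mu, sigma)
        index += abs(actual - expected)
        lo = hi
    return index

incomes = [30, 32, 35, 38, 40, 45, 50, 60, 75, 120]   # hypothetical salaries
idx = tension_index(incomes, bins=[20, 40, 60, 100, 300])
print(round(idx, 3))
```

A value near zero means the actual distribution is close to the most probable (lognormal) state; growth of the index over time would signal rising tension.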
The relevance of the study is due to the high popularity of Internet services for publishing reviews of goods/services, their impact on consumer behavior, and the need to automate the processing of data from such services because of the large amount of information provided. A model for assessing customer satisfaction based on feedback has been developed, taking into account the product rating given by the consumer, which determines the nature of the response (negative, positive, neutral), and the assessment of the feedback by other participants. The integral satisfaction indicator is formed from normalized values of the average consumer rating and the total scores of positive, neutral and negative reviews. The weighting coefficients used in calculating the integral indicator were determined by the method of principal components. The article presents the results of calculating the integral indicator for six models of video cards. Based on the developed model, a program has been implemented that automates the collection of data from the Internet site and the calculation of the integral indicator. The program is implemented using the Java programming language and the IntelliJ IDEA development environment. The developed model and program can be used both by potential buyers deciding to purchase goods and by enterprises selling goods that seek to obtain feedback, identify strengths and weaknesses, and improve their range and quality of service.
Keywords: consumer reviews, principal component method, rating, data collection, linear model
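Determining weighting coefficients by the method of principal components can be sketched as follows: the weights are taken from the first principal component of the standardized review metrics (power iteration on the covariance matrix, absolute loadings normalized to sum to one). The six-product data set is hypothetical, and the article's exact weighting scheme may differ.

```python
import math

def standardize(col):
    m = sum(col) / len(col)
    sd = (sum((x - m) ** 2 for x in col) / len(col)) ** 0.5
    return [(x - m) / sd for x in col]

def pca_weights(cols, iters=500):
    """Weights from the first principal component (power iteration)."""
    z = [standardize(c) for c in cols]
    k, n = len(z), len(z[0])
    C = [[sum(z[i][t] * z[j][t] for t in range(n)) / n for j in range(k)]
         for i in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    loadings = [abs(x) for x in v]          # magnitudes of the PC1 loadings
    s = sum(loadings)
    return [x / s for x in loadings]        # normalize to sum to one

# hypothetical metrics for six video card models
avg_rating = [4.5, 3.8, 4.9, 2.7, 4.1, 3.3]
pos_share = [0.80, 0.55, 0.90, 0.30, 0.70, 0.45]
neg_share = [0.10, 0.30, 0.05, 0.60, 0.15, 0.40]
weights = pca_weights([avg_rating, pos_share, neg_share])
print([round(w, 3) for w in weights])
```

The integral indicator for each product would then be the weighted sum of its normalized metrics.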
It is almost impossible to imagine a person without electronic digital gadgets in modern conditions of ubiquitous informatization. This tendency can be used to fulfill socially useful tasks of monitoring the infrastructure of urbanized areas, which requires solutions at the intersection of sociology, psychology, urban studies and information technology. Involving citizens in routine tasks on a voluntary basis necessitates non-trivial approaches, for example those based on gamification. In this regard, the article analyzes existing tools for game interaction with objects of the urban environment. A new approach to monitoring infrastructure objects based on game interaction with them is proposed. The development of the game logic of interaction with urban objects for the simultaneous collection of data on their state is considered. The article describes a software package, created using Python, Django, TypeScript, React Native, Google Maps, PostgreSQL, PostGIS and NativeBase technologies, that implements the proposed approach to gamified data collection. The concept of the developed game consists in the user «servicing» their own objects and «capturing» other players' objects to move up the rating. The main way to increase the user's level in the game is to take ownership of more objects and complete missions for which experience and coins are awarded. The mobile gaming application allows users to organize their leisure time in open urban spaces, interacting contactlessly with infrastructure objects in a competitive gaming process.
Keywords: city, infrastructure object, monitoring, data collection, game process, mobile app, user, gamification
The employment of machine learning systems is an effective way to achieve goals when operating with large amounts of data, which contributes to their widespread implementation in various fields of activity. At the same time, such systems are currently vulnerable to malicious manipulations that can lead to a violation of integrity and confidentiality, which is confirmed by the fact that these threats were included in the Information Security Threats Databank of the Federal Service for Technical and Export Control (FSTEC) in December 2020. Under these conditions, ensuring the safe use of machine learning systems at all stages of the life cycle is an important task, which explains the relevance of the study. The paper discusses the existing security methods proposed by various researchers and described in the scientific literature, their shortcomings, and the prospects for their further application. This review article aims to identify research issues relating to machine learning system security, with a view to the subsequent development of technical and scientific solutions in this area. The materials of the article are of practical value for information security specialists and developers of machine learning systems.
Keywords: machine learning, malicious impact, integrity, confidentiality, security
The development of methods for quantitative interpretation of the results of monitoring the electrophysical and geometric parameters of a multilayer medium is one of the most important problems in assessing its state, and has both practical and theoretical significance. The paper presents the results of a study of the potential informativeness of a method for remote identification of the state of snow-ice cover by the ratio of Fresnel reflection coefficients, using an ultra-wideband linear-frequency-modulated signal in the reconstruction of the electrophysical and geometric parameters of a multilayer dielectric medium. An estimation of the accuracy of reconstruction of the electrophysical and geometric parameters of a multilayer dielectric medium is presented, taking into account the values of the electrophysical and geometric parameters of the medium layers, the noise level in the measurement data and the measurement bandwidth. The results of simulation modeling of the reconstruction of the relative permittivity and thickness of a multilayer medium in the form of snow-ice cover are presented for different values of the mean square deviation of the noise level in the polarization relations of the measured reflection coefficients of the electromagnetic wave. It is established that the accuracy of reconstruction of the electrophysical parameters of the layers of snow-ice cover decreases with increasing noise level, as well as with decreasing permittivity and layer thickness. The results of experimental studies confirm the adequacy of the developed simulation model. The presented model allows quantifying the potential accuracy of reconstruction of the electrophysical parameters of a multilayer dielectric medium for a specific measuring complex that implements the multi-frequency method of electromagnetic waves.
Experimental studies and simulation results for a multilayer dielectric medium in the form of snow-ice cover have demonstrated the theoretical possibility of obtaining the relative permittivity and thickness of individual layers with a relative error of no more than 10 %, with a measurement band of 6 GHz and an RMS of the noise level of 3.8–4.8.
Keywords: snow and ice cover, subsurface sounding, numerical modeling, permittivity, multilayer medium.
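The ratio of Fresnel reflection coefficients on which the identification method relies can be computed directly. Below is a sketch for a single air-snow interface with an assumed complex permittivity of dry snow; the article deals with a multilayer medium, which requires a layered-medium recursion not shown here.

```python
import cmath
import math

def fresnel_h(theta, eps):
    """Fresnel reflection coefficient, horizontal (TE) polarization,
    for incidence from air onto a medium of relative permittivity eps."""
    c, s = math.cos(theta), math.sin(theta)
    root = cmath.sqrt(eps - s * s)
    return (c - root) / (c + root)

def fresnel_v(theta, eps):
    """Fresnel reflection coefficient, vertical (TM) polarization."""
    c, s = math.cos(theta), math.sin(theta)
    root = cmath.sqrt(eps - s * s)
    return (eps * c - root) / (eps * c + root)

eps_snow = 1.6 - 0.001j        # assumed complex permittivity of dry snow
theta = math.radians(40.0)     # incidence angle
ratio = fresnel_v(theta, eps_snow) / fresnel_h(theta, eps_snow)
print(abs(fresnel_h(theta, eps_snow)), abs(fresnel_v(theta, eps_snow)), abs(ratio))
```

The polarization ratio depends on the permittivity but not on the transmitted power, which is what makes it attractive for remote identification.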
This article presents a miniature aircraft (MLA) that operates autonomously in its environment. The main advances of these studies are new trajectory tracking and attitude control schemes in real flight mode. The MLA is based on a traditional quadcopter, whose position is stabilized by a PID controller. The proposed regulator is designed to attenuate external wind disturbances and to guarantee stability under them. Autonomous trajectory tracking requires a fixed flight altitude. Data processing is performed by an ARM Cortex-M4 microcontroller. The trajectory is determined using GPS in the Mission Planner software for the external environment. The HMTR module is used for real-time communication between the robot and the ground station. Flight data is saved to an SD card and converted to MATLAB code for real-time playback. The experimental results of using the proposed regulator on an autonomous quadcopter in real conditions show the effectiveness of our approach.
Keywords: altitude control, quadcopter, autonomous, PID controller, trajectory, stability
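A minimal sketch of PID altitude stabilization on a point-mass model; the gains, mass and dynamics are hypothetical and are not the parameters of the described quadcopter.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# point-mass altitude simulation (hypothetical gains and dynamics)
dt, g, m = 0.02, 9.81, 1.0
pid = PID(kp=6.0, ki=1.0, kd=4.0, dt=dt)
z, vz = 0.0, 0.0
for _ in range(2500):                        # 50 s of simulated flight
    thrust = m * g + pid.update(10.0, z)     # gravity feed-forward + PID correction
    vz += (thrust / m - g) * dt              # vertical acceleration
    z += vz * dt
print(round(z, 2))                           # settles near the 10 m setpoint
```

In a real controller the same loop runs on the flight microcontroller, with the measured altitude replacing the simulated state and with actuator saturation handled explicitly.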
The purpose of this work is to optimize the distribution of work in an IT company using machine learning algorithms, which makes it possible to reduce the load on a key team member: the manager acting as scrum master. A scrum team is considered according to the Tuckman model, with five stages of development: forming, storming, norming, performing and adjourning. To increase the time available for working with the team, it is proposed to automate routine processes using an optimization approach. It is shown that the most time-consuming part is determining the type of tasks: the terms of reference must be defined, their components (tasks) assigned to a specific type, and the implementation entrusted to the developer who will achieve the required result in the shortest possible time. The structure of the optimization model for the distribution of tasks between developers is considered. Tasks are preliminarily classified by category, taking into account the capabilities of the team. The choice of a convolutional neural network and the use of deep machine learning for solving the classification problem is substantiated. As initial data for training the network at the early stages of team development, it is proposed to use the texts of test tasks and their distribution over the categories of project tasks.
Keywords: agile-methods, optimization, convolutional neural network, classification task, word encoding, recurrent neural networks
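The distribution of classified tasks among developers can be illustrated by a brute-force assignment sketch that minimizes total completion time. The time estimates are hypothetical; in the described system they would come from the task-type classifier and the developers' profiles, and the convolutional network itself is not reproduced here.

```python
from itertools import permutations

# time[d][t]: hours developer d needs for task t (hypothetical estimates
# derived from the task-type classification)
time = [
    [4, 8, 6],   # developer 0
    [7, 3, 5],   # developer 1
    [6, 9, 4],   # developer 2
]

def best_assignment(time):
    """Exhaustively check all one-task-per-developer assignments and
    return the permutation with the minimum total time."""
    n = len(time)
    best = min(permutations(range(n)),
               key=lambda p: sum(time[d][p[d]] for d in range(n)))
    return best, sum(time[d][best[d]] for d in range(n))

print(best_assignment(time))
```

Exhaustive search is only feasible for small teams; for larger instances the assignment step of the optimization model would use the Hungarian algorithm or a heuristic.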
In this work, formulas for the variance of the recovery cost are obtained for the simple and general recovery processes; they depend on the recovery functions (the average number of failures) of the models under consideration. The availability of formulas for the average number of failures, the average recovery cost and the corresponding variances makes it possible to consider new optimization problems in terms of price, quality and risk when organizing recovery processes. Variance is interpreted as risk. The resulting problem statements are reminiscent of Markowitz's well-known portfolio selection problems, where the mean is interpreted as return and the variance as risk. The task of minimizing the variance of the recovery cost under given limits on the average number of failures, the average recovery cost and the duration of the recovery process is considered for a simple process with exponential distribution of the operating time of the replacement elements. It is noted that optimization problems in terms of price, quality and risk can be extended by including the choice of recovery strategies, when, along with emergency recovery, preventive recoveries are carried out to minimize the cost intensity or to maximize the availability factor, which is so important in the operation of information systems. For the exponential distribution of a simple recovery process, Chebyshev inequalities and coefficients of variation for the number of failures and the recovery cost are written out. The developed mathematical apparatus is intended for setting and solving various optimization problems of information and computer security, as well as for the operation of technical and information systems, software and software-hardware information protection tools under failures, threats of attacks and security threats of a random nature.
Keywords: distribution function, recovery process, recovery function, variance of the recovery cost, Chebyshev's inequality
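As a numerical illustration (not the authors' analytical derivation), a Monte Carlo sketch estimates the mean and variance of the total recovery cost for a simple renewal process with exponential operating times. For this Poisson case the theoretical mean and variance of the cost over the horizon are rate·horizon·cost and rate·horizon·cost², respectively.

```python
import random

def simulate_recovery_cost(rate, cost, horizon, runs=20000, seed=1):
    """Monte Carlo estimate of the mean and variance of the total recovery
    cost over [0, horizon] for a simple renewal process with exponentially
    distributed operating times between failures."""
    random.seed(seed)
    totals = []
    for _ in range(runs):
        t, failures = 0.0, 0
        while True:
            t += random.expovariate(rate)   # operating time until next failure
            if t > horizon:
                break
            failures += 1
        totals.append(failures * cost)
    mean = sum(totals) / runs
    var = sum((x - mean) ** 2 for x in totals) / runs
    return mean, var

# rate 0.5 failures per unit time, recovery cost 2, horizon 10:
# theory gives mean = 10 and variance = 20 for this Poisson case
m, v = simulate_recovery_cost(rate=0.5, cost=2.0, horizon=10.0)
print(round(m, 2), round(v, 2))
```

Such simulations can be used to sanity-check the closed-form variance formulas before they are embedded in price-quality-risk optimization.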
This paper presents the use of various neural network models to solve the problem of recognizing human emotions from the motor activity of the body in frames of a video stream without complex preprocessing of these frames. The paper presents three-dimensional convolutional neural networks, Inception 3D (I3D) and Residual 3D (R3D), as well as convolutional-recurrent architectures combining a convolutional neural network of the ResNet architecture with recurrent neural networks of the LSTM and GRU architectures (ResNet + LSTM, ResNet + GRU), which do not require preliminary processing of images or the video stream and at the same time potentially allow achieving high accuracy of emotion recognition. Based on the considered architectures, a method for recognizing human emotions from the motor activity of the body in a video stream is proposed. The architectural features of the models, the ways they process video stream frames, and the results of emotion recognition according to the following quality metrics are discussed: the proportion of correctly recognized instances (accuracy), precision and recall. Approbation of the proposed neural network models I3D, R3D, ResNet + LSTM and ResNet + GRU on the FABO data set showed a high quality of emotion recognition based on the motor activity of the human body. The R3D model showed the best proportion of correctly recognized instances, equal to 91%; the other proposed models, I3D, ResNet + LSTM and ResNet + GRU, showed 88%, 80% and 80% recognition accuracy, respectively. Therefore, according to the experimental evaluation, the most preferable models for recognizing a person's emotional state from motor activity, in terms of the set of emotion classification accuracy indicators, are the three-dimensional convolutional models I3D and R3D.
At the same time, the proposed models, in contrast to most existing solutions, make it possible to implement emotion recognition based on the analysis of RGB frames of a video stream without performing their preliminary resource-consuming processing, as well as to perform emotion recognition in real-time with high accuracy.
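The abstract compares models by accuracy, precision and recall. A minimal sketch of how these metrics are computed for a multi-class classifier; the labels and predictions below are illustrative and are not taken from the FABO experiments.

```python
def accuracy(y_true, y_pred):
    # proportion of correctly recognized instances
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall(y_true, y_pred, label):
    # per-class precision and recall from true/false positives and false negatives
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# toy predictions for two emotion classes
y_true = ["anger", "joy", "anger", "joy", "anger"]
y_pred = ["anger", "joy", "joy", "joy", "anger"]
print(accuracy(y_true, y_pred))                   # 0.8
print(precision_recall(y_true, y_pred, "anger"))  # (1.0, 0.666...)
```

In a multi-class setting such as emotion recognition, precision and recall are computed per class in this way and then averaged.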
Keywords: neural network model, emotion recognition, convolutional neural networks, machine learning, image processing, video stream
In the modern world, digital evidence plays an increasing role in crime investigations, and video recordings from CCTV are one of the most common types of such evidence. For investigative authorities, the information contained in video recordings is of significant, and in some cases key, importance. This paper focuses on the development of a detector of moving and motionless objects in video recordings of CCTV systems. Based on an overview of scientific publications on object detection in video data, a wide range of video materials from the subject area is analyzed, and the main constraints and assumptions are formulated with the use of a mathematical model. The existing solutions are compared under the set constraints and assumptions. Based on the results of the study, an object detection model is proposed as the most preferable solution to the detection problem with the required accuracy and performance. Used as one of the processing stages, the detector helps identify criminally significant information in video data of surveillance systems. It can also be used in other computer vision systems for detecting both moving and stationary objects in video recordings.
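The keywords mention background subtraction and an averaged frame. A minimal sketch of mean-frame background subtraction on one-dimensional "frames" of grayscale pixel values; the threshold and frame data are illustrative, not the paper's actual detector.

```python
def mean_frame(frames):
    # pixel-wise average over all frames: an estimate of the static background
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

def foreground_mask(frame, background, threshold=30):
    # pixels deviating strongly from the background are candidate objects
    return [abs(p - b) > threshold for p, b in zip(frame, background)]

frames = [
    [10, 10, 10, 200],
    [10, 10, 10, 200],
    [10, 90, 10, 200],   # an object appears at pixel 1 in the last frame
]
bg = mean_frame(frames)
mask = foreground_mask(frames[2], bg)
print(mask)  # only pixel 1 is flagged as foreground
```

A real detector works on 2-D frames and adds morphological filtering and background updating, but the core comparison against an averaged frame has this shape.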
Keywords: object detector, background subtraction, video analytics, object segmentation, mean frame, computer vision
The problem of increasing the efficiency of disseminating information about new threats is considered. Traditional methods of exchanging information about information security incidents scale poorly and, as the number of incidents grows, no longer cope with their task. The workload on the specialists monitoring the state of the information system increases significantly, and the efficiency of their work decreases. The aim of the study is to increase the efficiency of the center for monitoring and responding to information security incidents by deploying a software platform for managing cyber intelligence data. The object of research is a center for monitoring and responding to information security incidents; the subject of research is a cyber intelligence data management system. The approaches to implementing cyber intelligence as part of the center for monitoring and responding to information security incidents have been analyzed, an overview of the functionality of existing solutions has been made, and a plan for deploying a cyber intelligence platform as part of the center has been developed. The main stages of deployment include preparatory work, installation, configuration and testing of the platform. After the implementation of the platform, the efficiency of the center for monitoring and responding to information security incidents increased by 41.7%, and the maturity level rose from “initial” to “basic”.
Keywords: cyber intelligence, information security incident monitoring and response center, cyber intelligence platform, cyber intelligence data management system
The developed design of a bubble cap tray is described, in which the cap is mounted on a conical spring, which allows the ascending gas flow and the descending liquid flow to bring the cap into an oscillatory dynamic mode. This intensifies the process of mass transfer and therefore increases productivity. Mathematical modeling of the elasticity and vibration frequency of the conical spring is presented. It is shown that resonant frequencies cause the cap to vibrate with a high amplitude, which intensifies the heat and mass transfer processes between the liquid and the gas (vapor) and increases productivity. In addition, the rocking of the caps on the conical springs under the action of the jets of the vapor (gas) phase emerging from the slots also contributes to the destruction of the boundary layer and increases the rate of heat and mass transfer, which additionally increases productivity. An example of the calculation of the above parameters is given, with modeling according to standard and combined models: the cell model, the one-parameter diffusion model, and a model with a series connection of displacement and mixing zones. The developed design can be used in the chemical, petrochemical, gas, food, pharmaceutical, energy and other industries, as well as in ecological processes of separation of solutions and gases in rectification, absorption, extraction and gas washing.
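The abstract refers to calculating the elasticity and vibration frequency of the conical spring. As a point of reference, the textbook natural frequency of a mass on a linear spring is f = (1/2π)√(k/m); the stiffness and cap mass below are purely illustrative, and the paper's model of the conical spring itself is more detailed.

```python
import math

def natural_frequency(k, m):
    # f = (1 / (2*pi)) * sqrt(k / m): natural frequency of a spring-mass system
    return math.sqrt(k / m) / (2 * math.pi)

# illustrative values: spring stiffness 500 N/m, cap mass 0.05 kg
f = natural_frequency(500.0, 0.05)
print(round(f, 1))  # 15.9 (Hz)
```

Driving the cap near such a resonant frequency is what produces the high-amplitude vibration the abstract describes.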
Keywords: resonance, mathematical modeling, vibration frequencies, intensification, bubble cap tray, rectification, mass transfer processes, spring
The paper proposes a new iterative method for determining the parameters of an induction motor. The calculation is based on the measured values of the no-load current and the active resistance of the stator winding. In accordance with the proposed algorithm, the parameters were calculated and the mechanical characteristic of the AIR71A4 motor was constructed. To assess the accuracy of the calculations, the resulting curve was compared with a similar dependence based on the circle diagram method. The circle diagram is built in accordance with the current interstate standard GOST 7217-87 from the results of open-circuit and short-circuit tests. In the practical part of the work, using an experimental stand, the electromagnetic torque of the induction motor under study was measured at various values of slip on the stable part of the mechanical characteristic. A comparative analysis of the calculated and experimental data gave a coefficient of determination of R2 = 0.9944. This indicates a high degree of reliability of the results obtained with the proposed method and provides a basis for its practical use in engineering practice.
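The reported R2 = 0.9944 is the coefficient of determination between calculated and measured torque. A minimal sketch of how this statistic is computed; the torque values below are illustrative, not the paper's measurements.

```python
def r_squared(measured, calculated):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(measured) / len(measured)
    ss_res = sum((m - c) ** 2 for m, c in zip(measured, calculated))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

# hypothetical torque values (N*m) at several slip points
measured   = [1.0, 2.1, 2.9, 4.2]
calculated = [1.1, 2.0, 3.0, 4.0]
print(round(r_squared(measured, calculated), 4))
```

Values close to 1 mean the calculated mechanical characteristic explains almost all of the variation in the experimental data, which is the sense in which R2 = 0.9944 validates the method.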
Keywords: induction motor, motor parameter calculation method, modeling, circle diagram, mechanical characteristic
To authenticate Layer 2 switches, an authentication code transmitted by the sender of the information to the recipient using an authentication module built into the switch can be used. To generate the authentication code, a pulse signal whose energy is equal to the energy of a photon is used. The path for transmitting and receiving the authentication code contains an optical radiation control device based on an integrated optical interferometer. In one of the interferometer arms, a spiral delay line is introduced, whose design allows the BB84 protocol to be used to generate the authentication code. A method for modeling an interferometer with a spiral delay line based on integrated optics has been developed. The method is based on a three-dimensional analysis of channel waveguides and has no restrictions on the bending radius or waveguide type. It divides the waveguide cross-section region into finite elements and replaces the wave equation in cylindrical coordinates with a variational problem. Solving the resulting matrix problem yields the mode composition of the waveguide and the values of the electric field strength at the mesh nodes. Knowing the distribution of the field strength over the cross-section of the waveguide, one can calculate the power of a mode and its losses. The developed method was used to calculate the permissible bending radius of the interferometer delay line for typical integrated-optical channel waveguides at a fixed value of energy losses.
Keywords: optical communication, authentication, switches, integrated optics, interferometer, delay line, numerical methods, energy losses
Today, intrusion detection systems based on signatures of known attacks are an important security tool, but this method is ineffective against zero-day vulnerabilities. Anomaly-based intrusion detection systems are a relevant approach to neutralizing previously unknown computer attacks and new malicious software. Machine learning algorithms can be used to build a system that classifies input data. At the moment, using such an anomaly detection system in real conditions is not effective enough, because there is a high probability of classification errors due to the non-uniform distribution of data between classes. It is also necessary to take into account the possibility of adversarial attacks used by an attacker to defeat classification algorithms, as a result of which a real attack can be missed by the detector. Therefore, this article describes the problem of imbalance in the training dataset and of instability to adversarial attacks by intruders when an anomaly detection system based on neural networks is used. As a solution, it is proposed to apply a generative adversarial network algorithm to supplement the small attack class with generated examples, which also makes the classifier more resistant to adversarial attacks. An algorithm for training the generator and discriminator is considered, and a description of the NSL-KDD dataset is given, which is proposed to be used as the training and test set.
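A minimal sketch of the class-imbalance problem the article addresses. Here random duplication is used as a stand-in for the paper's approach, which generates new minority-class examples with a generative adversarial network; the class sizes are illustrative.

```python
import random

def oversample(records, labels, minority, target_count, seed=0):
    # placeholder for GAN generation: duplicate minority-class records
    # until the class reaches target_count (the paper instead synthesizes
    # new attack examples with a trained generator)
    rng = random.Random(seed)
    minority_recs = [r for r, l in zip(records, labels) if l == minority]
    out_r, out_l = list(records), list(labels)
    while out_l.count(minority) < target_count:
        out_r.append(rng.choice(minority_recs))
        out_l.append(minority)
    return out_r, out_l

labels = ["normal"] * 95 + ["attack"] * 5   # heavily imbalanced training set
records = list(range(100))                  # stand-ins for feature vectors
_, new_labels = oversample(records, labels, "attack", 50)
print(new_labels.count("attack"))  # 50
```

Unlike duplication, GAN-generated examples are new points near the minority-class distribution, which is also what improves robustness to adversarial inputs.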
Keywords: malware, anomaly detection systems, data imbalance, generative adversarial networks, machine learning
This article outlines an approach to accounting for material consumption during scheduled repairs at the design stage of an object using an automated application. An analysis of the use of information models of capital construction objects leads to the conclusion that the costs of operation as well as of construction need to be calculated at the design stage. When building a residential capital construction object, it is important to immediately calculate not only the construction costs but also the costs during operation. At the moment, such a calculation is performed outside the information model and does not take into account a number of factors that affect the final cost, so this approach does not provide accurate results. When calculating operating costs, the costs of scheduled maintenance should be taken into account, among other things. To calculate the costs of scheduled repairs more accurately, it is necessary to predict the consumption of materials during these repairs. For this purpose, an algorithm was developed, on the basis of which an automated application was created for accounting for the consumption of materials during scheduled repairs, using data from the information model of the object to perform the calculations. The use of computer-aided design systems for such calculations, based on the data contained in the information model of the object, avoids time-consuming analytical calculations and the errors they inevitably introduce. The results of the automated system will make it possible to assess more accurately the cost of operating the facility after its commissioning.
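A minimal sketch of the kind of calculation the application automates: total material cost of scheduled repairs over a planning horizon. The repair intervals, materials and prices are hypothetical; in the described system these quantities come from the object's information model.

```python
def repair_material_cost(repairs, horizon_years):
    # repairs: list of (interval_years, materials), where materials maps a
    # material name to (quantity per repair, unit price)
    total = 0.0
    for interval, materials in repairs:
        count = horizon_years // interval       # repairs within the horizon
        for qty, price in materials.values():
            total += count * qty * price
    return total

# hypothetical maintenance schedule for a residential building
repairs = [
    (5,  {"paint, kg": (120, 4.0), "plaster, kg": (300, 1.5)}),  # every 5 years
    (10, {"roofing, m2": (400, 12.0)}),                          # every 10 years
]
print(repair_material_cost(repairs, 20))  # cost over a 20-year horizon
```

Pulling the quantities from the information model rather than entering them by hand is what removes the manual analytical calculations the abstract mentions.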
Keywords: information model, automation, computer-aided design systems, operation calculation, modeling, repair
This article proposes a recommendation algorithm for educational resources in e-learning systems. The new approach uses a Markov model of how ordinary users rate the system's content to form the parameters of the initial state, which characterizes a new user of the system through ratings of the first resources (system content), in order to recommend interesting system elements to an active user. Thus, the "cold-start" problem for new users at the first phase of interaction with the system is solved. This problem is inherent in the system under development because the e-learning system includes a module for making recommendations, which places it in the class of recommendation-based automated systems. The new approach combines the use of a Markov process with the time factor as a single data source for making recommendations. It is based on the principle of analyzing the accesses of similar system users (similarity is determined by comparing their profiles) over the same periods. An integral part of the created system is also its usability; therefore, at the design phase, it is necessary to consider the ergonomics of the recommendations in the educational system.
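A minimal sketch of a first-order Markov chain over resource accesses, the kind of model the approach builds on: transition probabilities are estimated from past user sessions and the most probable next resource is recommended. The session data and resource names are illustrative.

```python
from collections import defaultdict

def transition_probs(sequences):
    # count resource-to-resource transitions over all user sessions,
    # then normalize each row into probabilities
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def recommend(probs, current):
    # most probable next resource given the current one
    nxt = probs.get(current)
    return max(nxt, key=nxt.get) if nxt else None

sessions = [["intro", "lab1", "quiz1"],
            ["intro", "lab1", "lab2"],
            ["intro", "quiz1"]]
probs = transition_probs(sessions)
print(recommend(probs, "intro"))  # lab1
```

For a brand-new user with no history, the chain's initial-state distribution plays the role of `current`, which is how the cold-start recommendation is bootstrapped.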
Keywords: mathematical modeling, learning systems, e-learning, remote learning, Markov chains, Markov process, cloud learning
The work is focused on increasing the reliability of nuclear power plants by monitoring the reloading of fuel cartridges carried out by the refueling machine. The necessity of developing a control system that prevents failures during reloading operations is shown. It is argued that the development of the system must be preceded by a stage of observing and systematizing the current characteristics of the object during repeated operations with various reloaded products. The monitoring procedure is simplified by using the current signals consumed by the machine drives during operation. The scheme for registering the diagnostic signals is presented, and approaches to signal processing and analysis are described. The proposed approach was tested during the refueling campaign at the Rostov NPP. Based on the current signals registered during the reloading process, the possibility of creating passports of reloading operations has been demonstrated. For the purpose of clustering, principal component analysis is applied to the current parameters. The developed passports can be used as reference patterns for subsequent reloading monitoring. It is shown that the signals corresponding to the main hoist drive can be used to control the forces exerted by the refueling machine when removing fuel. Thus, the paper substantiates the prospects of a control system for the reloading process based on the registration and analysis of the current consumed by the drives of the refueling machine.
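Once passports (reference patterns) of reloading operations exist, a newly observed operation can be matched against them. A minimal sketch using Euclidean distance to the nearest reference; the two-component feature vectors and operation names are hypothetical stand-ins for the principal components of the drive current signals.

```python
import math

def nearest_passport(features, passports):
    # find the reference operation pattern closest to the observed one
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(passports, key=lambda name: dist(features, passports[name]))

# hypothetical passports: e.g. first two principal components of current signals
passports = {
    "grip":    [0.9, 0.1],
    "hoist":   [0.2, 0.8],
    "release": [0.5, 0.5],
}
observed = [0.25, 0.75]
print(nearest_passport(observed, passports))  # hoist
```

A large distance to every passport would flag an anomalous operation, which is the monitoring use the abstract envisions.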
Keywords: nuclear fuel reloading, monitoring system, signal registration, reference patterns, data clustering
The paper analyses the possibility of locally optimal control in an electromechanical system based on a contactless (brushless) DC motor, which allows the required voltage pulse parameters to be determined not only in the regulated signal sector but also in the next sector during the switching of the basic vectors of the control process. The output coordinate (system state) is controlled by switching the reference vectors and by pulse-width modulation within each pulse period. The pulse-width modulation processes of the base vectors are similar for each vector; the vectors are only shifted in space by a certain angle, which is inversely proportional to the product of the number of pole pairs and the number of phases. The modulation processes become periodic, with the base vector and the zero vector alternating. Depending on the rotor speed and the modulation period, the number of pulses of one base vector (without switching it in space) can reach several dozen. If we assume that the processes occurring between switchings of the base vectors are identical in all parameters except their location in space, we can move the origin of coordinates to a new point in space (the location of the base vector) and obtain periodic processes for the creation of electromagnetic torque during the calculation. To synthesize an electromechanical control system with a contactless DC motor, the Model Predictive Control (MPC) method can be used. The purpose of this study is to assess the feasibility of locally optimal control at every switching of the base vectors, taking into account the design features of a contactless DC motor. It is aimed at forming the controlled parameters of one base vector in combination with a zero vector, which is defined by both the spatial and the initial conditions of the original base vector.
It is shown that the state of the system also depends on the rate at which the electromagnetic energy accumulated during the existence of the base vector is dissipated during the zero vector.
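The abstract refers to Model Predictive Control. A minimal one-step MPC sketch for a generic scalar linear plant x_{k+1} = a·x_k + b·u_k, choosing among a discrete set of admissible pulse duty ratios; the model and numbers are illustrative, not the paper's motor model.

```python
def mpc_step(x, reference, a, b, candidates):
    # pick the control that minimizes predicted tracking error one step ahead
    def cost(u):
        x_next = a * x + b * u          # model prediction of the next state
        return (x_next - reference) ** 2
    return min(candidates, key=cost)

# generic scalar plant: a = state decay per period, b = control gain
a, b = 0.9, 0.5
candidates = [0.0, 0.25, 0.5, 0.75, 1.0]   # admissible pulse duty ratios
x, reference = 0.0, 0.6
for _ in range(5):
    u = mpc_step(x, reference, a, b, candidates)
    x = a * x + b * u                       # apply the chosen control
print(round(x, 3))                          # state has moved toward 0.6
```

Real MPC for a motor predicts over a longer horizon with a multi-dimensional model, but the pattern of evaluating candidate controls against a model prediction each switching period is the same.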
Keywords: contactless DC motor, electromechanical system, control system, commutation process, local-optimal control, state observers, base and zero vector
Today in information technology there is a tendency toward a multiple increase in the volume of stored data. The growth in data volumes is due to the global digitalization of various spheres of human life and the spread of sensors for monitoring, diagnosing and controlling various objects. Despite the growing volumes, the data still needs to be processed. Processing includes an important retrieval step, whose speed affects the efficiency of the entire processing. Therefore, developments in the field of accelerated retrieval of the data needed for mining in various databases are relevant. This article proposes an algorithm developed by the authors based on the CW-tree data structure, which indexes data while making maximum use of the capabilities of a computing system under multithreaded query processing. The CW-tree data structure, also proposed by the authors, contains two levels: the branch level, which is designed to search for a vertex matching a user-specified query, and the leaf level, which is used to store data. This paper describes a method for traversing the CW-tree leaf level when executing a retrieval query to the database. The results of testing the proposed method on a test database are given, together with a comparative analysis of the execution of retrieval queries to a database based on the CW-tree structure and to a database managed by the MySQL DBMS.
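A simplified sketch in the spirit of the described two-level layout: a branch level mapping key ranges to leaves and a leaf level storing the records. The actual CW-tree's node structure, traversal method and multithreaded concurrency handling are more involved; this only illustrates the branch/leaf division.

```python
import bisect

class TwoLevelIndex:
    # branch level: sorted lower bounds of key ranges; leaf level: record lists
    def __init__(self, leaf_size=4):
        self.leaf_size = leaf_size
        self.bounds = []   # branch level: first key held by each leaf
        self.leaves = []   # leaf level: sorted lists of (key, value)

    def insert(self, key, value):
        if not self.leaves:
            self.bounds, self.leaves = [key], [[(key, value)]]
            return
        i = max(0, bisect.bisect_right(self.bounds, key) - 1)
        leaf = self.leaves[i]
        bisect.insort(leaf, (key, value))
        self.bounds[i] = leaf[0][0]
        if len(leaf) > self.leaf_size:          # split an overfull leaf in two
            mid = len(leaf) // 2
            self.leaves[i:i + 1] = [leaf[:mid], leaf[mid:]]
            self.bounds[i:i + 1] = [leaf[0][0], leaf[mid][0]]

    def search(self, key):
        # descend the branch level, then scan a single leaf
        if not self.leaves:
            return None
        i = max(0, bisect.bisect_right(self.bounds, key) - 1)
        for k, v in self.leaves[i]:
            if k == key:
                return v
        return None

idx = TwoLevelIndex()
for k in [5, 1, 9, 3, 7, 2]:
    idx.insert(k, f"rec{k}")
print(idx.search(7), idx.search(4))  # rec7 None
```

The point of the shallow branch level is that a query touches one branch lookup and one leaf, which is also what makes independent leaves amenable to multithreaded query processing.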
Keywords: database management systems, tree algorithms, data indexing, multithreading, database query optimization, CW-tree