The aim of the article is to find optimal values of the time constant of the integrating circuit of a microwave photonic pulse amplitude detector according to the criterion of minimum distance between two signals: the video pulse at the detector output and an ideal video pulse. Optima were found for three radio-frequency pulse shapes: ideal square, non-ideal square, and Gaussian. The topic is addressed because the literature does not cover the calculation of the time constant of the integrating circuit of an ultra-wideband radio-photonic path operating in detection mode. The simulation was carried out in MATLAB R2017b using the Simulink block library, and the optimization used the golden-section method. According to the criterion considered in this article, the following optimal time constants of the integrating circuit were obtained: about 1.5 carrier periods for an ideal square radio-frequency pulse; about 2 carrier periods for a non-ideal square pulse; and about 0.3 of the pulse width for a Gaussian pulse. The values obtained under this criterion do not agree with the well-known rule that the detector time constant should be much greater than the carrier period and much less than the radio-frequency pulse width.
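As an illustration of the optimization step, a minimal golden-section search over a one-dimensional criterion can be sketched as follows; the objective here is a placeholder quadratic, not the actual signal-distance criterion from the article:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search."""
    phi = (math.sqrt(5) - 1) / 2  # ~0.618, inverse golden ratio
    c, d = b - phi * (b - a), a + phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c           # keep [a, old d]; old c becomes new d
            c = b - phi * (b - a)
        else:
            a, c = c, d           # keep [old c, b]; old d becomes new c
            d = a + phi * (b - a)
    return (a + b) / 2

# Toy stand-in for the distance criterion: minimum of (x - 2)^2 + 1
x_star = golden_section_min(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0)
```

In the article's setting, the argument would be the time constant and the objective the distance between the detector output and the ideal video pulse.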
Keywords: microwave photonics, amplitude pulse detector, time constant, video pulse shape, optimization
The study yields fundamentally new results that make it possible to create intelligent decision support systems for the diagnosis of infectious diseases. A bioimpedance analysis model based on multifrequency bioimpedance measurement has been created, which allows the impedance of a biomaterial to be decomposed into structural elements. On the basis of the proposed model, descriptors were formed for classifiers implemented as trained neural networks. To obtain the descriptors, multifrequency sounding of the biomaterial was carried out and Cole plots were constructed from the measurements. Using iterative algorithms and these plots, Voigt models of the biomaterial impedance were obtained; the parameters of these models serve as descriptors for the trained classifiers. On the basis of multifrequency sensing, algorithms for differential monitoring of tissue impedance and fluid impedance have been obtained, which will make it possible to derive new decision rules for diagnosing pathological conditions of the body (cardiovascular, infectious, and oncological diseases). In modern Russian healthcare, long-term monitoring of a person's condition almost always entails either hospitalization, which is unacceptable both for the working-age population and, in some cases, for sick people, or the rental of expensive monitoring systems for a period that, as a rule, does not exceed 24 hours, which is not always enough for diagnostic tasks.
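A minimal sketch of the Voigt impedance model mentioned above (a series resistance plus a chain of parallel RC elements) might look like this; the element values are assumed for illustration and are not taken from the article:

```python
import math

def voigt_impedance(freq_hz, r0, rc_pairs):
    """Complex impedance of a Voigt model: a series resistance R0 plus a
    chain of parallel RC elements, Z = R0 + sum(R_k / (1 + j*w*R_k*C_k))."""
    w = 2 * math.pi * freq_hz
    z = complex(r0, 0.0)
    for r, c in rc_pairs:
        z += r / complex(1.0, w * r * c)
    return z

# Hypothetical two-element model probed at several frequencies;
# (Re(Z), -Im(Z)) pairs are the points of a Cole plot
model = [(100.0, 1e-6), (50.0, 1e-8)]   # (R in ohms, C in farads), assumed
points = [(voigt_impedance(f, 10.0, model).real,
           -voigt_impedance(f, 10.0, model).imag)
          for f in (1e2, 1e3, 1e4, 1e5)]
```

Fitting such a model to measured Cole plot points (the iterative step described in the abstract) yields the R and C parameters used as classifier descriptors.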
Keywords: infectious diseases, bioimpedance model, multifrequency sensing, trainable classifier, iterative algorithm, training set
This work is devoted to the formalized description of mathematical models and optimization problems for complex-structured objects. The objects modeled are production systems with a complex structure comprising interacting subsystems of two main types: "processing" and "assembly". An analysis of the specifics of the functioning of complex-structured production systems is presented. A method is proposed for the formalized description of mathematical models of processing and assembly technological processes based on queuing theory combined with simulation. The fundamental difference between a subsystem of the "assembly" type and one of the "processing" type is that to complete the assembly process, a given number of requests from different streams must arrive at the input of the device, which means that all the necessary components are available. This feature limits the use of standard queuing-theory methods, since there are no functional dependencies for the analytical description of "assembly"-type processes; for this reason, it becomes necessary to develop simulation models of complex-structured objects. The mathematical model of the production system takes into account the random distribution of object parameters and the probability of refusing to service a request due to overflow of the waiting queue. The problem of finding the optimal structure of the system is considered under constraints on a given number of parts produced within a certain period of time, as well as on the maximum capacity of the input storage and the volume of production resources.
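The "assembly" behavior described above (service fires only when every component stream has a part queued, and arrivals to a full queue are refused) can be sketched as a toy discrete simulation; the arrival counts and queue capacity are assumed for illustration:

```python
import random

def simulate_assembly(arrivals, queue_cap):
    """Toy model of an 'assembly' subsystem: a unit is assembled only when
    every component stream has at least one part queued; a part arriving
    to a full queue is refused (lost)."""
    queues = [0] * len(arrivals[0])
    assembled = refused = 0
    for step in arrivals:                  # step[k] = parts arriving on stream k
        for k, n in enumerate(step):
            free = queue_cap - queues[k]
            queues[k] += min(n, free)
            refused += max(0, n - free)
        while all(q > 0 for q in queues):  # assemble while a full kit exists
            queues = [q - 1 for q in queues]
            assembled += 1
    return assembled, refused

random.seed(1)
# Two component streams with random arrivals (assumed), queue capacity 3
trace = [[random.randint(0, 2), random.randint(0, 2)] for _ in range(1000)]
done, lost = simulate_assembly(trace, queue_cap=3)
```

The refusal count corresponds to the queue-overflow losses that the article's model accounts for; varying `queue_cap` is the kind of structural parameter the optimization problem searches over.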
Keywords: simulation, complexly structured object, manufacturing system, optimization, queuing theory
For video streaming to a wide audience, peer-to-peer networks at the application level of the OSI model are increasingly used; they reduce the load on the source server because subscriber hosts not only receive the video stream but also relay it to other hosts. Accidental disconnection of hosts from the network causes temporary disruptions of transmission routes, which can lead to significant losses of transmitted data fragments and hence to the loss of video frames. Known methods for measuring the objective quality of video transmission assess the loss of video quality due to compression and do not take into account random frame loss during transmission over communication channels. Frame loss means that the reference and transmitted videos may be shifted relative to each other by some number of frames; in this case, the measured values of the objective quality metrics can significantly exceed the true values, which leads to measurement errors. The study proposes a method for measuring objective video transmission quality that takes frame loss in a peer-to-peer network into account by matching the original and transmitted video frames: to obtain the true metric values in the presence of frame loss, the reference video is shifted so that its frame coincides with the frame of the evaluated video. To study the effectiveness of the proposed method, an algorithm for measuring objective video transmission quality and the corresponding software have been developed. Experimental studies have shown that the algorithm based on the proposed method selects the correct video frames for comparison and thus, unlike existing software for measuring objective video quality, introduces no errors. This makes it possible to reliably assess the objective quality of video transmission in peer-to-peer networks under conditions of intense video frame loss.
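The core realignment idea, shifting the reference so its frame coincides with the evaluated frame, might be sketched as follows. Each frame is reduced to a single hypothetical luminance value and the search uses a bounded window; both are simplifications not taken from the article:

```python
def aligned_mse(reference, received, window=5):
    """Mean squared error between received frames and the reference,
    realigning after frame loss: for each received frame, search a small
    window ahead in the reference for the closest-matching frame instead
    of comparing frames index-by-index."""
    ref_i, errors = 0, []
    for frame in received:
        # pick the reference frame in [ref_i, ref_i + window) closest to this one
        best = min(range(ref_i, min(ref_i + window, len(reference))),
                   key=lambda j: abs(reference[j] - frame))
        errors.append((reference[best] - frame) ** 2)
        ref_i = best + 1
    return sum(errors) / len(errors)

# Frames summarized by one (hypothetical) luminance value each
reference = [10, 20, 30, 40, 50, 60]
received = [10, 20, 40, 50, 60]          # the '30' frame was lost in transit
naive_mse = sum((r, t) and (r - t) ** 2 for r, t in zip(reference, received)) / len(received)
```

Here the naive index-by-index comparison reports a large error after the lost frame, while the realigned comparison correctly reports zero distortion.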
Keywords: peer-to-peer network, quality of video transmission, metric of quality, data loss ratio, data fragment, video frame
The article considers the development of a numerical method for clustering homogeneous alternatives based on the sum of differences of weighted attributes, with a cluster-hierarchical approach as the methodological basis, together with the results of verifying the proposed method on the example of a study of the anti-terrorist security of facilities of internal affairs bodies. In forming homogeneous groups, a limited set of objects is considered that are practically indistinguishable by their distinctive features (and therefore can and should be compared with each other). Their further clustering makes it possible to determine the list of requirements for such objects and to study their properties in more detail. The method contributes to rational budget planning: for departmental organizations under insufficient targeted funding, funds are allocated for a specific type of activity (for example, strengthening anti-terrorist security measures), and their volume is, as a rule, significantly smaller than what the full range of tasks requires. With a large number of identical protected objects, limited targeted funding does not allow sufficient budget funds to be distributed to all of them, so funds must be concentrated on those protected objects that need targeted funding first, based on an assessment of their level of anti-terrorist protection.
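A minimal sketch of the distance measure (sum of differences of weighted attributes) and a greedy grouping step; the attribute values, weights, and threshold are invented for illustration:

```python
def weighted_distance(a, b, weights):
    """Distance between two objects: the sum of absolute differences
    of their weighted attribute values."""
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

def cluster(objects, weights, threshold):
    """Greedy single-pass grouping: an object joins the first cluster
    whose representative (first member) is within the threshold."""
    clusters = []
    for obj in objects:
        for group in clusters:
            if weighted_distance(obj, group[0], weights) <= threshold:
                group.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters

# Hypothetical protected objects scored on three security attributes
objs = [(0.9, 0.8, 0.7), (0.85, 0.82, 0.68), (0.2, 0.3, 0.1)]
groups = cluster(objs, weights=(0.5, 0.3, 0.2), threshold=0.1)
```

The first two objects fall within the threshold of each other and form one homogeneous group; the third, much less protected object ends up in its own cluster, flagging it as a priority for targeted funding.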
Keywords: method of analysis of hierarchies, cluster analysis, objects of protection, signs, anti-terrorist security, homogeneous alternatives
The article deals with the development of a software self-adaptation method based on tracing of the computational process. The urgency of creating methods for synthesizing self-adaptive software is substantiated, and the main advantages of self-adaptive software systems are considered. A description of existing tracing tools is given, and the choice of Intel Processor Trace for the method is justified. The program execution graph is defined as the mathematical apparatus underlying the new method. A mathematical model of the behavior of a self-adaptive program is proposed, based on this definition of the call graph and formalizing the traces obtained with Intel Processor Trace. An algorithm for searching for patterns in execution graphs is considered. On the basis of the execution-graph definition and this algorithm, a new method of software self-adaptation is proposed, based on analyzing the progress of program execution: the most frequently executed sections of the program's source code (behavioral patterns of the system) are identified and then optimized. The resulting method improves program performance by reducing the number of conditions evaluated during execution.
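Searching for behavioral patterns in traces can be approximated by counting frequent fixed-length call subsequences, a linear-chain simplification of the article's subgraph search; the function names in the traces are hypothetical:

```python
from collections import Counter

def frequent_patterns(traces, length, min_count):
    """Count fixed-length call subsequences across execution traces and
    keep those occurring at least min_count times - a linear-chain
    simplification of searching for common subgraphs in execution graphs."""
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - length + 1):
            counts[tuple(trace[i:i + length])] += 1
    return {p: c for p, c in counts.items() if c >= min_count}

# Hypothetical call traces recovered from a trace log
traces = [
    ["main", "parse", "eval", "emit", "parse", "eval", "emit"],
    ["main", "parse", "eval", "emit"],
]
hot = frequent_patterns(traces, length=3, min_count=3)
```

The surviving patterns mark the "hot" sections of code that the self-adaptation method would target for optimization.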
Keywords: self-adaptive software systems, graph theory, computational process tracing, search for common subgraphs, execution graph, software performance optimization
The article discusses a distributed computing system represented by a set of mobile terminals that serves user requests to run programs whose demand for computing resources exceeds the local resources available on those terminals. This capability is provided by the cooperative computing paradigm, which supports a procedure for dynamically forming a cooperative computing resource from a plurality of mobile terminals, taking into account that terminals may disconnect from and reconnect to the cooperative computing procedure. Using a set-theoretic representation, such operating parameters of the system are defined as the response time of a node to a request for computing resources and the queueing delay for requests belonging to different mobile terminals. On the basis of these parameters, an optimization problem of the cooperative use of computing resources is posed in a generalized form for the specified conditions. The formulation of particular optimization problems is considered in detail for a system of two mobile terminals, taking into account various conditions of their demand for computing resources and the current availability of computing resources at the nodes of the system. The approaches obtained from these particular problems are then extended to a system consisting of many mobile terminals.
Keywords: distributed computing system, fog computing, cooperative computing, resource allocation, parallelization of computing tasks, computing resource, node response time, cooperative computing network
The purpose of this work is to develop mathematical support for designing the data transmission process based on packet switching, taking into account control operations determined by the functional dependence of packet transmission time in telecommunication networks. Under packet switching, the transmitted message is split at the source node into packets that travel along several routes to the destination node. On each route, control operations on the transmitted packets and return messages, or receipts, are added (verification of the correctness of packets transmitted along the routes). The time of the control operations is determined by a functional dependence on the random packet transmission times of the corresponding routes. To obtain the distribution of the transmission time of control messages and the distribution of time on each route, it is proposed to use residue theory. On the basis of the resulting mathematical expressions, a method is proposed for finding the distribution of the start time of packet assembly at the recipient node as the maximum of several random data transmission times. The obtained density of the data transmission time allows the distribution function of the packet assembly time at the receiver node to be calculated.
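The "maximum of several random variables" step has a simple numeric sketch: for independent route times, the CDF of the assembly start time is the product of the per-route CDFs. Exponential route times and the rate values below are assumptions for illustration:

```python
import math

def assembly_start_cdf(t, rates):
    """CDF of the packet-assembly start time at the receiving node, i.e.
    the maximum of independent route transmission times: for independent
    routes, F_max(t) is the product of the per-route CDFs. Each route time
    is taken exponential with the given rate (an assumption)."""
    if t < 0:
        return 0.0
    p = 1.0
    for lam in rates:
        p *= 1.0 - math.exp(-lam * t)
    return p

def assembly_start_pdf(t, rates, h=1e-6):
    """Density via central-difference differentiation of the CDF."""
    return (assembly_start_cdf(t + h, rates) - assembly_start_cdf(t - h, rates)) / (2 * h)

rates = [1.0, 0.5, 2.0]        # assumed route intensities (1 / time unit)
p_done_by_3 = assembly_start_cdf(3.0, rates)
```

The article obtains such distributions analytically via residue theory; the sketch only illustrates the product-of-CDFs structure for the maximum.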
Keywords: GERT networks, random variable, probability distribution density, check operation, exponential distribution, linear functional dependence, residue theory, packet switching
The article deals with the actual problem of assessing the degree of resonance of an event by studying relevant videos and the comments on them posted on youtube.com. The research was conducted with the aim of timely and well-founded identification of events showing signs of public resonance. To this end, the concept of public resonance and the mechanisms of development of this phenomenon are considered. Based on an analysis of methods and approaches to assessing public opinion, a model for assessing the resonance of an event has been developed. It is based on measuring the angle between the abscissa axis and the tangent to the curve of the number of comments over time; cubic spline interpolation is used to measure the slope of the tangent. Based on the proposed approach, software has been developed that collects data on the distribution of the number of comments over time, builds an approximating curve, and measures the slope of the tangent at any point of the curve to find its maximum value. Using the presented approach, the most resonant events whose content is published in video format on video hosting sites can be identified automatically, provided that comments on them can be posted freely.
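A simplified numeric sketch of the slope criterion, using finite differences on raw hourly counts instead of the article's cubic spline interpolation; the comment counts are invented:

```python
def max_slope(times, counts):
    """Largest slope of the comments-over-time curve, estimated with
    finite differences between neighbouring samples (the article fits a
    cubic spline first; this sketch differentiates the raw samples)."""
    best, best_t = float("-inf"), None
    for i in range(len(times) - 1):
        slope = (counts[i + 1] - counts[i]) / (times[i + 1] - times[i])
        if slope > best:
            best, best_t = slope, times[i]
    return best, best_t

# Hypothetical cumulative comment counts sampled hourly
hours = [0, 1, 2, 3, 4, 5]
comments = [0, 3, 10, 48, 60, 63]     # sharp surge between hours 2 and 3
slope, at = max_slope(hours, comments)
```

A large maximum slope marks the moment the event "went resonant"; ranking events by this value is the automation the article describes.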
Keywords: video hosting, event resonance, information and analytical activities, public opinion, video hosting monitoring, event resonance assessment
The article presents variants of software architectural solutions that support a special video analytics function, multi-camera tracking in video surveillance systems, based on decentralized control of information exchange. The main capabilities of existing hardware platforms for intelligent video surveillance cameras are considered; based on an analysis and generalization of existing architectures of distributed computing systems, approaches are proposed for the functional design and subsequent implementation of software modules that provide a message exchange protocol during multi-camera tracking of an object. Multi-camera tracking functions oriented toward the CAN (Content Addressable Network) P2P architecture are highlighted, and a hardware and software implementation of such a network based on the CAN (Controller Area Network) protocols, the C2C (CAN2CAN) architecture, is proposed. The features of the implementation of the software modules are determined by the type of control of the functions of the distributed computing system and the hardware features of the intelligent video cameras. Using a number of practical implementations of open-source software and controllers as examples, the article presents both a generalized multi-level architecture of video analytics software for the multi-camera tracking function and architectural templates of modules and software implementing the decentralized interaction of a set of intelligent video cameras during multi-camera tracking, implemented using the C2C network.
Keywords: distributed computing system, video surveillance system, decentralized control, multicamera tracking, video analytics, software, peer-to-peer network, content addressable network, packet switching
This article contains recommendations for customs bodies on combating the use of RFID duplicates by importers to understate customs payments. The aim of the research is to develop recommendations for improving the effectiveness of measures of protection against the use of RFID duplicates in foreign trade. Results. Analysis of existing approaches to protection against RFID duplicates shows that the most expedient measure is duplicate detection based on checking information about tag readings. In this regard, recommendations are proposed for developing a model for applying RFID technology in foreign economic activity. However, to establish what false information was recorded on an RFID duplicate, the customs authorities need to take control measures. Using game theory, it is shown that control measures are not advisable in all cases: it can be more profitable for the customs authorities to refuse customs clearance under the norms of Chapter 18 of the Customs Code of the Eurasian Economic Union without carrying them out. To substantiate decisions on conducting control measures, the work identifies the factors that should be taken into account when making such decisions. Practical significance: the results can be used by customs authorities during customs control of imported goods marked with RFID tags, as well as by government authorities when designing a system for protecting information stored on tags issued by Goznak that are used in foreign trade.
Keywords: RFID-technology, RFID-mark, RFID-duplicate, goods marking, RFID-marking, customs bodies, customs control, RFID
To increase energy performance, an asynchronous electric drive with frequency-current control operating at the critical slip under a fan load is proposed, with a constant-power drive structure. Theoretically, the operating point of the electric drive at which the slip equals the critical one is a point of unstable equilibrium, and in an open-loop coordinate control system stable operation at the critical slip cannot be ensured because of uncontrolled disturbances. It was therefore proposed to use a closed-loop control system with subordinate regulation of the main coordinates, for which the stator current and the rotation frequency were chosen. The peculiarity of this structure is the control algorithm of the frequency converter: the voltage is calculated by the subordinate coordinate control system from the condition of maintaining constant power. A model of such an electric drive was developed in Simulink and its operation was simulated. The proposed solution ensures stable operation at the critical slip while minimizing the consumed stator current. Since the parameters of an asynchronous motor are not constant and can change during operation, it is advisable to use extremum-seeking regulation of the required speed correction, driving it to the value at which the minimum current consumption is ensured.
Keywords: electric traction, induction motor, critical slip, stability, stator current
In this work, one of the most pressing problems of the synthesis of new technical solutions is solved: the automated generation of information support based on the analysis of USPTO patents. The concepts of the ontology of the subject area "Patent representation of technical systems" are the structural elements of a technical object (TO) and the relations between them, as well as descriptions of the problems solved by the invention. The first claim of the patent document served as the main source of information, and the unit of extraction was the SAO (Subject-Action-Object) semantic structure. The main linguistic features of patent documents were identified. Methods were developed for preprocessing the patent array, extracting SAO structures from the patent claims, and exporting the extracted SAOs to the domain ontology. The developed methods were tested on US patent documents. The average time for parsing one patent by the automated system is 1.72316 seconds, and the accuracy of extracting information from the text of a patent is over 70%.
Keywords: technical systems, patents, ontology, fact extraction
The article presents software for forming a matrix of technical functions performed by physical effects, based on the analysis of a patent database. For the synthesis of the physical operation principle of new technical systems, physical effects from the knowledge base developed at the CAD Department of VSTU can be used. Physical effects implement technical functions, which in turn constitute the constructive functional structure of a technical system. The automated system is implemented in Python 3.7.2, with TreeTagger used for morphological analysis and UDPipe for syntactic analysis. The correctness of the algorithms was evaluated on a manually prepared test sample of 60 patent documents describing 480 technical functions and 20 physical effects. Results: on the test sample, the method for extracting technical functions (TF) achieved a precision of 0.87, a recall of 0.77, and an F-measure of 0.82; descriptions of the functions of physical effects (PE) were found with a precision of 0.92.
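The reported F-measure can be checked directly from the reported precision and recall as their harmonic mean:

```python
def f_measure(precision, recall):
    """F-measure: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported for the TF-extraction method on the test sample
f1 = f_measure(0.87, 0.77)   # ~0.82, matching the reported F-measure
```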
Keywords: technical functions, physical effects, patents, fact extraction, SAO, CRUD
As technogenic production facilities proliferate, fires and explosions occur more and more often. Accidents at man-made facilities cause significant harm to the environment, adjacent facilities, and the population. One of the causes is the improper functioning of technological processes operated at critical values of the controlled parameters; the environment can also exert disaster-causing impacts on a malfunctioning technological process. It is therefore necessary to study the mutual influence of the environment and the technological process comprehensively. Thus, the article examines the mutual influence of a technogenic object, using the example of a compressor station of a main gas pipeline, and the adjacent territories, in order to analyze and form recommendations on the safe operation of the compressor station's technological process. The a priori knowledge necessary for the correct functioning of the technological process is analyzed, the operating modes and the data providing energy-efficient operation are highlighted, and emergency situations at the technogenic facility are considered. A set of possible informative parameters providing an adequate assessment of the state of the technological process of a compressor station is analyzed, and the results of the analysis required for the correct functioning of the information-measuring system for monitoring the characteristics of the technological process are presented.
Keywords: a priori knowledge, information-measuring system, technological process, geotaxon, environmental damage
The article is focused on identifying patterns in the formation of demand for parking space in district N of Volgograd, which will improve the organization of parking space in this area. A sociological survey of city residents was conducted to identify public opinion on the use and operation of parking space in certain areas of the district. Initial processing of the survey results yielded the main conclusions on the most important questions of the questionnaire. The respondents' answers were then analyzed using mathematical and statistical methods. First, all data obtained during the survey were normalized. Clustering of the respondents' answers to all questions was performed, dividing the survey participants into two clusters. To confirm a linear relationship between the various questionnaire questions, a correlation analysis of the survey data was performed, and the relationships between different pairs of questions were checked by regression analysis. Correlation and regression analyses were performed for each cluster separately to improve the accuracy of estimating the relationships between the regression variables. The mathematical and statistical analysis revealed dependencies between the respondents' answers to various questionnaire questions.
Keywords: urban parking space, transport, transport system, statistical methods, cluster, correlation, regression analysis
This work relates to the automation of medical diagnostics using computer microscopy. The effect of microscope focusing on the textural characteristics of chromatin images of bone marrow cell nuclei in a computer microscopy system is investigated for diagnostic problems in oncomorphology, namely the recognition of malignant tumors. These questions are of particular importance in analyzing images of low-contrast objects, the chromatin of bone marrow cell nuclei, in the diagnosis of a dangerous oncological disease of the blood system, acute leukemia. In the experiment, bone marrow preparations from patients with acute lymphoblastic leukemia were used as test samples; the preparations were provided by the hematopoiesis immunology laboratory of the N.N. Blokhin National Medical Research Center of Oncology. Among the characteristics of images of the chromatin structure of bone marrow cell nuclei, the experiment revealed the high sensitivity of the «moment of inertia» texture characteristic of the red (R) component of the RGB color model to the focusing of the microscope's optical system. Practical recommendations are given to developers of automated systems on the use of texture analysis in the design of cancer diagnostics systems based on microscopic examination of biological material samples.
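The «moment of inertia» feature is commonly computed from a gray-level co-occurrence matrix as the sum of (i - j)^2 * p(i, j) over gray-level pairs. A minimal sketch on toy patches (assumed pixel values, horizontal offset) shows how defocus-like smoothing lowers the value:

```python
def moment_of_inertia(image, dx=1, dy=0):
    """«Moment of inertia» (contrast) texture feature: build a gray-level
    co-occurrence histogram for the given pixel offset and compute
    sum of (i - j)^2 * p(i, j) over all gray-level pairs."""
    pairs, total = {}, 0
    h, w = len(image), len(image[0])
    for y in range(h - dy):
        for x in range(w - dx):
            pair = (image[y][x], image[y + dy][x + dx])
            pairs[pair] = pairs.get(pair, 0) + 1
            total += 1
    return sum((i - j) ** 2 * n / total for (i, j), n in pairs.items())

# A sharp (in-focus) and a smoothed (defocus-like) toy R-channel patch
sharp   = [[0, 255, 0, 255],
           [255, 0, 255, 0]]
blurred = [[100, 140, 100, 140],
           [140, 100, 140, 100]]
```

The in-focus patch yields a much larger moment of inertia than the smoothed one, which is the sensitivity to focusing that the experiment exploits.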
Keywords: digital image processing, computer microscopy, texture analysis, automatic focusing, acute leukemia diagnosis
Existing control systems for the paper web formation process, with mass and pressure level regulators in the inlet device, do not provide the desired quality indicators, while high-precision and noise-proof control algorithms based on extremum-seeking and predictive controllers reduce system performance. Further improvement of the efficiency of paper web formation control is therefore possible within the framework of APC control and fuzzy logic. Introducing cross-correlation optical speed calculators into the system makes it possible to generate data on the main parameters of the pulp flow at the moments when the web is formed on the machine wire; these data are necessary and sufficient for implementing a speed-ratio controller that increases the speed of extremum control systems. To replace the cascade connection of regulators in the web formation control system, it is proposed to use a coordinating fuzzy controller and fuzzy control methods that "decouple" the output signals of the extremum controller and the speed-ratio and total-pressure regulators, all of which act on a single actuator, the compressor. The use of the additional speed-ratio stabilization loop and coordinating fuzzy control increased the speed of the paper web formation control systems by 59.4% compared with extremum regulators and reduced the dispersion of paper web weight by 1.4%, which is confirmed by the implementation certificates at the MAYAK enterprise.
Keywords: paper machine, headbox, the dispersion of the weight of the paper, the speed of filling up paper pulp, coordinating fuzzy control
The continuous growth in the number of malicious programs makes their detection, that is, classifying programs as malicious or safe, an urgent task. This study is therefore devoted to the development of a malware detection system based on machine learning, namely supervised training of an artificial neural network. In the course of the study, we analyzed the structure of Portable Executable (PE) files of the Windows operating system, selected characteristics of PE files to form a training set, and selected and substantiated the topology (a four-layer perceptron) and parameters of the antivirus neural network. The Keras library was used to create and train the model, and the EMBER dataset of safe and malicious software was used to form the training set. We trained the malicious-code recognition model and verified the adequacy of its training. The training results of the proposed anti-virus neural network showed high malware detection accuracy and the absence of overfitting, which indicates good prospects for using the model. Although the experimental neural network model cannot fully replace anti-virus scanners, the materials of the article are of practical value for the task of classifying programs as malicious or safe.
Keywords: malware, machine learning, anti-virus neural network, neural network training, Keras, EMBER, dropout
The paper considers the prevention and detection of crimes committed in, or with the use of, the information and communication environment. Given the increasing importance of the Internet as a social component in the state's development strategy, the development and implementation of tools, preventive measures, and methods for solving crimes committed in the virtual environment within the law enforcement system cannot be overestimated. Although the algorithms for committing crimes of this type are widely known and well studied by domestic and foreign authors, methods for solving such crimes and questions of their practical application remain a topical subject of scientific research. This article discusses a possible mechanism for law enforcement agencies based on a preliminary study and identification of patterns in how the Internet is used. Based on data mining methods, we consider ways to improve the effectiveness of internal affairs agencies in applying measures to prevent and solve crimes in the information and communication environment. The method proposed in this paper makes it possible to forecast demand and supply for commercial offers posted on the global network that are associated with criminal activity. The use of these scenarios in law enforcement makes it possible not only to organize preventive measures that forestall criminal consequences, but also to solve previously committed criminal acts.
Keywords: data mining, internet, crime, forecasting, electronic commerce, a posteriori probability
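The keyword "a posteriori probability" points at the Bayesian update that typically sits at the core of such forecasting: given how often a criminal-linked pattern appears in commercial offers of each kind, the posterior probability that a particular offer showing the pattern is criminal follows directly from Bayes' rule. All numbers below are illustrative assumptions, not figures from the study.

```python
# Illustrative rates (assumptions): share of criminal offers among all offers,
# and how often the mined pattern appears in criminal vs. legitimate offers.
prior = 0.01            # P(criminal)
p_pattern_crime = 0.90  # P(pattern | criminal)
p_pattern_legit = 0.05  # P(pattern | legitimate)

# Bayes' rule: posterior probability that an offer showing the pattern
# is linked to criminal activity.
evidence = p_pattern_crime * prior + p_pattern_legit * (1.0 - prior)
posterior = p_pattern_crime * prior / evidence
```

Even with a strongly indicative pattern, a low prior keeps the posterior modest here (about 0.15), which is why such scores are used to prioritize preventive attention rather than as proof.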
The article presents a hierarchical structure of settings for information security tools, introduces criteria for evaluating the effectiveness of security systems, and formalizes the concept of a "security system configuration" in terms of evolutionary modeling objects such as the population, the chromosome (solution vector), and the fitness function. A mathematical model for constructing a security system using artificial intelligence methods has been developed. The proposed system is distinguished by its ability to take into account the influence of random factors (staff, equipment failures, the timing of attacks on the security system) when choosing a protection option, and to adapt the protection system to changing environmental conditions. The model can be used not only in the professional activities of information security specialists, but also in the training process as a kind of simulator. The development of an effective information security system using a genetic algorithm is possible on the basis of system monitoring event data, data received from experts, and data obtained during simulation of the protection system. Thus, the research results are of an applied nature and can be used in developments related to the design of information systems and decision support systems in the field of information security.
Keywords: evolutionary modeling, simulation, genetic algorithm, threats to information security, information security tools, security system configuration, data protection
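The evolutionary modeling objects named in the abstract (population, chromosome as a solution vector, fitness function) can be sketched as a minimal genetic algorithm: the chromosome encodes which protection tools are enabled, and the fitness trades threat coverage against deployment cost. The coverage and cost vectors, the cost weight, and the GA parameters below are invented for illustration and do not reproduce the authors' model.

```python
import random

random.seed(42)

N_TOOLS = 10
# Hypothetical per-tool threat coverage and deployment cost.
coverage = [9, 7, 6, 8, 5, 4, 7, 3, 6, 5]
cost     = [5, 4, 2, 6, 2, 1, 5, 1, 3, 2]

def fitness(chrom):
    """Reward covered threats, penalize the cost of enabled tools."""
    cov = sum(c for bit, c in zip(chrom, coverage) if bit)
    cst = sum(c for bit, c in zip(chrom, cost) if bit)
    return cov - 1.5 * cst

def evolve(pop_size=20, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_TOOLS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TOOLS)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_TOOLS):                # bit-flip mutation
                if random.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()   # best protection configuration found
```

In the article's setting the fitness would additionally incorporate the random factors mentioned above (staff, failures, attack timing), for example by averaging the score over simulated scenarios.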
The relevance of the study is due to the need to improve the efficiency of intrusion detection systems based on immune detectors. The rational placement of immune detectors on individual network nodes is of great importance for the effectiveness of such systems. It is proposed to use the security risk level of individual network nodes as a criterion for selecting the nodes on which to install immune detectors. In this article, we propose a method for estimating this value, which makes it possible to single out the least protected nodes. Assessing the security risk of network nodes is complicated by the fact that a node often has more than one vulnerability. The main idea underlying the method is the use of a statistical formal model based on Markov chains in combination with a graph of possible attack trajectories and vulnerability-analysis metrics. Vulnerability scoring of three types (base, temporal, and contextual metrics) is used for the analysis. A worked example is given. The resulting model can be used to identify critical nodes along the path to the target node, at which intruders can be most dangerous. Based on the information obtained with the model, the network administrator can install immune detectors on these nodes, which will significantly improve the protection system.
Keywords: information security, intrusion detection systems, immune detectors, markov chains
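The combination of a Markov chain with an attack graph and scoring metrics can be sketched as follows: edge transition probabilities are derived from base-score-like vulnerability ratings, the target node is made absorbing, and iterating the chain yields the probability that the target is compromised. The four-node graph and the score-to-probability mapping (score/10) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Hypothetical 4-node attack graph: node 0 is the entry point, node 3 the
# target. Edge success probabilities come from CVSS-like base scores / 10.
scores = {(0, 1): 7.5, (0, 2): 2.0, (1, 3): 9.8, (2, 3): 6.1}
n = 4
P = np.zeros((n, n))
for (i, j), s in scores.items():
    P[i, j] = s / 10.0
for i in range(n):
    P[i, i] += 1.0 - P[i].sum()   # residual mass: the attack stalls at node i
P[3] = 0.0
P[3, 3] = 1.0                     # the target node is absorbing

# Probability distribution over nodes after k attack steps from the entry.
state = np.zeros(n)
state[0] = 1.0
for _ in range(10):
    state = state @ P
risk_target = state[3]            # probability the target is compromised
```

Ranking nodes by such reach probabilities identifies the critical intermediate nodes where placing immune detectors cuts the most likely attack trajectories.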
At present, unmanned vehicles (UVs) in most cases depend on GPS for accurate navigation while in motion, which makes network access important for their correct operation in the smart city environment. To implement the smart city concept, the search for alternative UV localization techniques is vital, since in real conditions the GPS signal may be absent, or its accuracy may be insufficient for following a route or performing maneuvers. There are also obstacles to putting UV technologies into operation: ethical ones (confidentiality and trust) and cybersecurity. Since in the smart city environment all UVs are to be connected to the network, cybersecurity issues require additional attention: cyber threats can cause failures both in individual UVs and in the transportation system as a whole. The paper distinguishes three main categories of UV software systems, providing, respectively, data collection and processing, planning, and control. An approach to the UV operation architecture is presented, based on data collection and processing, decision making, and multi-level network and computational analytics. To increase UV security in a smart city, the paper proposes a safety management system based on factor analysis and risk calculation techniques. To secure unobstructed UV motion, local positioning network models are proposed that make it possible to work out motion schemes.
Keywords: unmanned vehicle, smart city, functioning architecture, safety management system, local positioning, network models
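As an illustration of the local-positioning idea, a UV position can be recovered from measured distances to fixed network anchors by linearized trilateration. This is a generic sketch under the assumption of known 2D anchor coordinates and noiseless ranges; it is not the specific network model proposed in the paper.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Least-squares 2D position fix. Subtracting the first range equation
    |x - a_i|^2 = d_i^2 from the others linearizes the system into A x = b."""
    x1, y1 = anchors[0]
    d1 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 + yi**2 - x1**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

# Three hypothetical roadside anchors and a UV at (3, 4).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # measured ranges
est = trilaterate(anchors, dists)
```

With noisy ranges the same least-squares formulation averages the error over redundant anchors, which is why dense local positioning networks improve the fix where GPS is degraded.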
The relevance of the study is due to the need to develop a methodology for choosing the optimal sync sequence length during majority processing of a pseudo-random sequence (PRS) segment, which will reduce the synchronization time as errors increase. This article is therefore aimed at studying the probabilistic characteristics of the compared PRS synchronization methods and at developing a methodology for choosing the optimal sync sequence length. The leading method for studying this problem is Ward's sequential estimation method, which, at a small signal-to-noise ratio in the band of the received signal (h² < 1), allows entering synchronism within one period. The article presents simulation results for the method based on majority checks and for the Ward method. The dependences of the majority decoding bit error P_m on the length N of the processed segment, of the symbol decoding error P_sym on N, and of the average PRS search time on N are constructed. A comparative analysis of the simulation results for the Ward method and the majority decoding method is performed. Based on these studies, a methodology was developed for choosing the optimal length of the synchronization sequence during majority processing of the PRS segment. The materials of the article are of practical value for scientists, doctoral students, graduate students, teachers and practitioners working and studying in the field of information security.
Keywords: probability of destructive error, decoding bit error, average memory bandwidth search time, length of the processed segment, majority information processing method, ward method
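A minimal sketch of the majority-processing principle behind the dependence of P_m on segment length: the probability that a majority vote over an odd number of independent checks decodes a bit wrongly falls rapidly as the number of checks grows, which is what makes longer processed segments attractive until the search time begins to dominate. This shows the generic mechanism only, not the authors' exact decoder or their simulated channel.

```python
from math import comb

def majority_bit_error(p: float, n_checks: int) -> float:
    """Probability that a majority vote over n_checks (odd) independent
    checks, each wrong with probability p, yields the wrong bit value:
    the binomial tail P(more than half the checks are wrong)."""
    need = n_checks // 2 + 1
    return sum(comb(n_checks, k) * p**k * (1 - p)**(n_checks - k)
               for k in range(need, n_checks + 1))

# Channel bit error 0.2: the decoding error shrinks as checks are added.
curve = {n: majority_bit_error(0.2, n) for n in (1, 3, 5, 7, 9)}
```

Plotting such a curve against the per-segment processing time is exactly the trade-off the methodology for the optimal sync sequence length has to resolve.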
According to the World Health Organization, 3.2% of the world's adult population has cerebral aneurysms. A ruptured aneurysm is often fatal, which makes the cerebral aneurysm one of the most dangerous pathological conditions. Methods widely used in real clinical practice for assessing the probability of cerebral aneurysm rupture, based on the analysis of risk factors, aneurysm geometry, and individualized mathematical modeling of cerebral hemodynamics, lead to contradictory results. The risk of cerebral aneurysm rupture can instead be estimated with instrumental research methods that assess the biomechanical properties of the vessel walls. A method for evaluating the shear modulus of large blood vessel walls is described. Structural images of the investigated part of the blood vessel wall with an aneurysm are sequentially obtained using an intravascular optical coherence tomography (IOCT) system over at least several cardiac cycles. B-scans corresponding to diastole and to the shear deformation stages between systole and diastole are taken for the evaluation from the sequence of structural images. The pulse wave is considered the only deforming stimulus, and the surface area of the deforming force is taken equal to the scanning area of the IOCT system. B-scan profiles are processed and plotted according to the average truncated level of the interference signal intensity. These profiles are divided into overlapping blocks, and the shear deformation of each block is estimated from the abscissa projection of the average displacement vector. The dimensions of the deformed region are taken equal to the corresponding coherence probing depth. The shear modulus at the point of interest of the blood vessel wall is calculated using the classical formula and verified using known values of the Young's modulus and Poisson's ratio.
The proposed method can be used in real clinical practice, in particular, in neurosurgical tasks of choosing optimal approaches to the treatment of cerebral aneurysms and technical means for their implementation.
Keywords: compression elastography, intravascular applications, optical coherence tomography, forward-view probe, high-precision positioning, coherence probing depth, shear modulus, displacement, pulse wave, cerebral aneurysm
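The "classical formula" referenced in the abstract and the verification relation can be written explicitly in standard elasticity notation. Here $F$ is the deforming force of the pulse wave, $A$ the loaded surface area (taken equal to the IOCT scanning area), $\Delta x$ the measured shear displacement, and $l$ the thickness of the deformed region, taken equal to the coherence probing depth:

```latex
G \;=\; \frac{\tau}{\gamma} \;=\; \frac{F/A}{\Delta x / l} \;=\; \frac{F\,l}{A\,\Delta x},
\qquad
G \;=\; \frac{E}{2\,(1+\nu)}
```

The second identity, which holds for an isotropic linearly elastic material, is what allows the measured $G$ to be checked against known values of the Young's modulus $E$ and Poisson's ratio $\nu$ of vessel wall tissue.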
A promising direction in ensuring the functional safety of subject-centric systems, which include information and computing systems implemented as hardware and software complexes, is so-called "barrier thinking". This scientific trend emerged in the late 1980s and is associated with the name of J. Reason. Its starting point is the recognition that latent defects in the control systems of a complex system are inevitable. The focus of the philosophy is the development of multilayer, layered systems of protection against external aggressive influences, as well as against manifestations of latent defects in control systems. Practical implementation of "barrier thinking" reduces to eliminating the possibility of such a combination of latent defects at the various levels of the controlled object (organizational, tactical, operational) at which hazards are transformed into unwanted effects. One of the promising approaches to a systematic procedure for creating barriers is known in the foreign literature as Anticipatory Failure Determination (AFD), and in the domestic literature as "diversion analysis". Diversion analysis combines reactive and proactive approaches to ensuring the functional safety of subject-centric systems. This article analyzes the conceptual framework of AFD and concludes that its methodological basis is system analysis. This justifies adapting models and methods of system analysis to the qualitative and quantitative study of systems within the framework of AFD. A description of a typical event analysis framework for AFD-1 is provided, together with an example of its use in the failure analysis of a software product.
In conclusion, restrictions on the scope of applicability of AFD as a methodological basis for ensuring the functional safety of hardware and software systems under uncertainty in the operating environment are determined.
Keywords: digital environment, functional safety, hardware-software complex, “barrier thinking”, diversion analysis
The purpose of this work is to identify the factors that ensure optimal planning of the cargo transportation process, with the subsequent formation of the logistics structure of this process. To solve this problem, the main factors are identified that determine technical equipment, the use of various types of transport, transport capacity, and the accounting of production processes. Based on queueing theory, an algorithm is developed for forming the structure of optimal cargo delivery planning, including the determination of throughput and of the costs of the process; in the event of a time delay, it allows an effective management decision to be made to keep the process on schedule. When analyzing cargo transportation planning, the decision maker must take into account the time spent on all processes at the preparatory stage, followed by the steps aimed at equipping the points of acceptance and dispatch of goods. The structure of the optimal cargo delivery planning process considered in this paper allows us to determine the main factors involved in the formation of a cargo transportation plan.
Keywords: cargo transportation, optimal planning, schedule of vehicles, throughput, cargo transportation volumes, supply chain management system
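The queueing-theoretic core of such a planning algorithm can be sketched with the basic M/M/1 relations: given the arrival rate of consignments and the service (handling) rate of a terminal, the utilization and the mean waiting time fall out directly, and a delay cost can be attached to the latter. The rates and the cost factor below are illustrative assumptions, not data from the paper.

```python
def mm1_metrics(lam: float, mu: float):
    """Steady-state M/M/1 measures for a single handling point.

    lam: arrival rate of consignments per hour (must satisfy lam < mu).
    mu:  service rate of the handling point, consignments per hour.
    """
    assert lam < mu, "queue is unstable: demand exceeds throughput"
    rho = lam / mu                  # utilization of the handling point
    l_q = rho**2 / (1 - rho)        # mean number of consignments waiting
    w_q = l_q / lam                 # mean waiting time (Little's law)
    return rho, l_q, w_q

COST_PER_HOUR = 40.0                # hypothetical delay cost per consignment
rho, l_q, w_q = mm1_metrics(lam=4.0, mu=5.0)
delay_cost = COST_PER_HOUR * w_q    # expected delay cost per consignment
```

Comparing this delay cost against the cost of raising the service rate (extra equipment or staff at the acceptance and dispatch points) is exactly the trade-off the planning decision must resolve.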
The paper discusses the possibilities of modeling the management of industrial organizations on the basis of rating approaches. A control center manages the production facilities, and it is proposed to organize the interaction of the control center with these facilities on the basis of a rating score and then to proceed to a rating control process. The rating is used for the analysis, control, accounting, forecasting and regulation of the activities of the objects included in the analyzed production system. A model of the interaction between the control center of an industrial enterprise and the objects of the production system is formed on the basis of classification criteria, and the structure of this interaction under rating control is given. Rating control mechanisms are considered that are based on controlling the distribution of resource support across all areas of the production system's main activity, coordinating the interests of the control center and the objects of the production system, and controlling the distribution of additional resource support for development. A block diagram of the implementation of the rating control mechanisms is given, and the characteristics of modeling the interaction of the control center and the objects of industrial systems are indicated. The results of the rating assessment of industrial production objects are illustrated by the example of growth in sales of enterprise products.
Keywords: production organization, model, rating approach, resource, management, control
This article deals with the problem of landing a helicopter-type aircraft on an unprepared site; in particular, a model of landing control on a body of water with snow and ice cover is proposed. Analysis of the standard landing aids currently installed on helicopter-type aircraft has shown that in Arctic conditions they cannot provide the crew with information about such properties of the underlying surface (landing site) as the depth of the snow and the thickness of the ice cover. Simulation of the landing control process for a helicopter-type aircraft on an unprepared site on a body of water with snow and ice cover, using the proposed radar landing system, showed that the task can be solved successfully. To do this, the underlying surface (landing site) is probed, and the crew is informed whether or not landing is possible by comparing the measured values with those specified for the particular aircraft type. The paper presents a logical information model that reflects the automation of the landing control process: the possibility of a safe landing is assessed by radar determination of the parameters and characteristics of flat-layered media, the snow depth and the ice thickness. The model can be used in the development of radar systems that ensure the safe landing of a helicopter-type aircraft on an unprepared site with snow or snow-ice cover under conditions of insufficient information about the underlying surface.
Keywords: snow and ice cover, subsurface sensing, helicopter landing, landing site, unprepared site
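The radar estimate of layer parameters reduces to a time-of-flight relation: the thickness of a flat layer follows from the delay between the echoes of its upper and lower boundaries and from the layer's relative permittivity. The permittivity values and echo delays below are illustrative assumptions (real snow and ice permittivities vary with density and temperature), not the paper's measurements.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness(delay_s: float, eps_r: float) -> float:
    """Thickness of a flat layer from the two-way echo delay:
    d = c * dt / (2 * sqrt(eps_r)), since the wave crosses the layer
    twice at the reduced speed c / sqrt(eps_r)."""
    return C * delay_s / (2.0 * eps_r ** 0.5)

# Assumed relative permittivities: dry snow ~1.6, freshwater ice ~3.2.
snow_depth = layer_thickness(4e-9, 1.6)    # 4 ns inter-echo delay
ice_thick  = layer_thickness(10e-9, 3.2)   # 10 ns inter-echo delay
```

Comparing such estimates with the minimum snow depth and ice thickness specified for the particular aircraft type is the go/no-go decision the logical information model automates.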
Although the cluster approach is quite common in scientific research, the issues of the formation, development and evaluation of the effectiveness of cluster-network interactions remain unresolved. The relevance of the research stems from the fact that, with an optimal cluster-network mechanism, it is possible to maximize the profit of the participants in cluster-network relations, thereby increasing tax revenues to the budget and ensuring the growth of gross regional product (GRP). In this regard, this paper considers one of the elements of cluster-network approaches as a tool for managing the development of regions focused on the extractive industry. This approach makes it possible to develop and implement effective tools for stimulating the development of the socio-economic system of the region and its organizations. Management here refers to the variability of structural shifts in the sector's economy through the redistribution of key subsectors. The paper uses graph theory to determine the critical mass of the cluster core, focusing on the cluster core and its critical mass as one of the indicators of the cluster policy mechanism; by critical mass we understand the degree of development of the cluster-network connections of the cluster participants. The hypothesis is that the critical mass of the core influences the ability and desirability of developments in the mining sector in such a way that changing the final graph elements, which are set in accordance with the subsectors, leads to substantial changes in the industry. The materials of the article are of practical value for participants in cluster-network interactions among oil and gas sector entities, who can maximize the output of goods and services and increase profitability indicators by optimizing the cluster-network mechanism.
Keywords: cluster-network connections, oil industry, cluster core, graph theory, optimal path
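One standard graph-theoretic way to make the notions of cluster core and critical mass concrete is the k-core construction: participants (vertices) stay in the core only while they retain at least k connections to other core members, so the core collapses once links fall below a critical level. Whether this is the exact construction used in the paper is not stated in the abstract; the toy graph below is an invented illustration, not data on the oil and gas sector.

```python
def k_core(adj: dict, k: int) -> set:
    """Return the k-core of an undirected graph: iteratively strip
    vertices whose degree within the remaining subgraph is below k."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if v in alive and sum(1 for u in adj[v] if u in alive) < k:
                alive.discard(v)
                changed = True
    return alive

# Toy cluster: three tightly linked core firms (a, b, c) and a peripheral
# supplier d attached only to a.
links = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
core = k_core(links, 2)   # the 2-core excludes the peripheral supplier
```

Removing a single core link in such a model can cascade and shrink the core, which is the kind of structural sensitivity the hypothesis about the core's critical mass appeals to.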