DOI: 10.26102/2310-6018/2025.48.1.022
The article discusses the implementation of a database management system that is notable for enabling fast searches over static, immutable data, including very large volumes of it. To obtain the results, programs for processing and unifying files, merging and indexing them, and searching the indexed data were developed from scratch. The methods used include parallelization, binary search, interpolation search, mmap memory mapping, clustering, caching, forward and inverted indexing, merging, LZ compression, and B-trees. The resulting search engine performs thousands of search queries per second and works with databases several terabytes in size. The relevance of the study stems from the need to perform a large number of search operations on large data arrays; accordingly, the article aims to identify and implement the most effective mechanisms for such searches. The leading approach is the practical implementation of various search algorithms and their subsequent optimization to obtain the fastest search methods. Ready-made data-processing algorithms and the corresponding search methods are presented. The materials of the article are of practical value for specialists solving problems related to big data and running search queries over it. Such work on improving databases is currently necessary because of the constantly growing flow of digital information that must be correctly collected, processed, analyzed, and stored.
Keywords: database, software package, indexing, search trees, api
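The search methods named above include binary and interpolation search over sorted, immutable data. A minimal sketch of interpolation search follows; the article's actual index layout is not described here, so a plain sorted Python list stands in for it:

```python
def interpolation_search(arr, key):
    """Search a sorted list for key; returns index or -1.

    Interpolation search guesses the probe position from the key's
    value relative to the endpoints, which beats binary search on
    uniformly distributed keys (O(log log n) expected probes).
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:                 # avoid division by zero
            pos = lo
        else:
            # linear interpolation of the probe index
            pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On indexed data with a roughly uniform key distribution, this probe rule is what makes interpolation search competitive with binary search at terabyte scale.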
DOI: 10.26102/2310-6018/2024.45.2.046
The work is devoted to forming principles for constructing the components of a monitoring environment for managing multifunctional intelligent systems. The relevance of the topic is substantiated, and the goal and objectives of the work are set. Forming a system of indicators describing the operation of the system is highlighted as the key task in building the monitoring environment. Three stages are described that take the system of indicators from system-wide performance indicators down to performance indicators of individual elements. A system of indicators for the monitoring environment is proposed as a three-level hierarchical structure: the level of performance criteria, the level of performance indicators, and the level of combinations of resource types and activity types. Algorithms for collecting and generating data sets are proposed. The algorithm for generating a data set for the monitoring environment involves obtaining data from different sources. The task of the data collection algorithm is to prepare data sets for subsequent processing and to obtain the values required by the monitoring environment. When collecting data, various approaches to generating target data sets may be considered. To determine the correspondence between functional areas, resources, activity types, divisions, and performers, an algorithm for generating correspondence directories is included. A web application architecture is proposed as one form of implementation of the monitoring environment. Using the Next.js framework as an example, the components of the application architecture are described and an architecture diagram is presented.
Keywords: management, data, monitoring, architecture, algorithm, intelligent systems
DOI: 10.26102/2310-6018/2024.46.3.002
The presented article examines an innovative algorithm for assessing the attractiveness of potential partners in the context of online dating. The algorithm employs two neural networks: a generative network and a convolutional network. The generative neural network creates visual profiles based on various attractiveness parameters, while the convolutional neural network analyzes and extracts these parameters from images of real users. This approach allows for the dynamic adaptation of user preferences, ensuring high relevance of recommendations even with a limited pool of candidates in a given region. The method described in the article aims to significantly enhance the user experience and increase the success rate of online dating. By utilizing neural networks, the algorithm can account for individual user preferences and adapt to them in real-time. This makes the recommendations more accurate and personalized, which in turn facilitates the creation of deeper and higher-quality interpersonal connections. The research also emphasizes the importance of forming stable and happy long-term relationships. The presented approach contributes to this by providing users with a more satisfactory and effective experience in online dating. Thus, the use of algorithms and neural networks in the field of online dating has the potential to greatly improve the quality of interactions and interpersonal connections, which is a crucial aspect in the modern digital age.
Keywords: neural networks, attractiveness, online dating, generative neural network, convolutional neural network, matchmaking, recommendations, user preferences, relevance
DOI: 10.26102/2310-6018/2024.46.3.008
The relevance of the study is due to the need to systematize the key skills and knowledge required for effective scientific activity in research organizations. This article therefore aims to develop a competency model for scientific workers. The leading approach is the integration of the competency model into a grading system, which allows assessment and stimulation of employees' professional growth to be considered comprehensively. The article presents: a competency model for scientific employees of research organizations, comprising four main categories of competencies (professional, personal, interpersonal, and managerial) detailed by skill level; a methodology for integrating the competency model into an automated grading system, providing objective assessment and stimulation of professional growth; and the stages of creating the model, including identifying job groups, calculating a grading scale, differentiating skill levels, and adapting the model based on feedback. The competency model is proposed as a set of competencies that, according to the heads of research organizations, serve as behavioral indicators by which scientists can perform their job duties effectively and efficiently. The materials of the article are of practical value for research organizations, contributing to effective talent management, career planning, and the development of training programs.
Keywords: competency model, grading, scientific employees, professional development, HR processes, interdisciplinary interaction, talent management
DOI: 10.26102/2310-6018/2024.45.2.020
This article discusses the use of neural networks of the ART family to optimize the decision-making process in risk management systems. The advantages of this approach, such as the ability to respond quickly to new information and flexibility in learning, are weighed against its disadvantages, including the difficulty of tuning parameters and interpreting results. The article then explores various ways to train ART networks, including unsupervised and supervised learning methods, as well as key points in configuring network parameters. Possible problems related to the quality of input data and the difficulty of interpreting output data are raised. The article also presents a concrete example of the use of ART-type neural networks in the construction industry to assess risks and make informed decisions. In conclusion, the article focuses on the prospects of ART-family neural networks for cluster analysis of risks, identifying related factors and grouping them for more effective management. The possibilities for further development of decision-making methods in risk management using neural networks such as ART, and their potential to provide more accurate and predictive practices, are discussed.
Keywords: ART-type neural networks, risks, decision-making processes, monitoring data, neural network training
DOI: 10.26102/2310-6018/2024.45.2.045
The relevance of this research stems from the fact that controlling a drone using hand gestures is more natural and intuitive than using traditional joysticks. This allows users to easily learn control and focus on task execution rather than technical aspects of operation. In turn, developing a gesture recognition system requires advancements in machine learning-based image processing algorithms. This paper aims to investigate the feasibility of implementing drone motion control using hand gestures in conjunction with modern neural network technologies. The main approach in addressing this problem involves the application of convolutional artificial neural networks for image processing and computer vision tasks. The work also explores methods for hyperparameter optimization using the Optuna tool, the use of TensorFlow Lite for implementing machine learning models on resource-constrained devices, and the application of the MediaPipe library for gesture analysis. Technologies such as Dropout and L2-regularization are used to enhance model efficiency. The materials presented in this paper hold practical value for researchers in the fields of artificial intelligence and robotics, software developers, and companies involved in the development of unmanned aerial vehicles.
Keywords: quadcopter, hand gestures, computer vision, convolutional neural networks, artificial neural networks, hyperparameter optimization, control
DOI: 10.26102/2310-6018/2024.46.3.003
The paper considers a method in which SWOT analysis is combined with a hybrid assessment method. SWOT analysis identifies the internal strengths and weaknesses of an organization and its external opportunities and threats, which makes it possible to choose strategies that maximize benefits and minimize risks. The hybrid assessment method, in turn, combines the advantages of several well-known methods to increase the efficiency and convenience of the decision-making process. The main idea of the method is the combined use of the analytic hierarchy process and the statistical weighted-average method, which makes it possible to combine their strengths while minimizing their disadvantages. The analytic hierarchy process structures complex tasks as a hierarchy, which is then decomposed into separate levels. Pairwise comparisons of hierarchy elements make it possible to assess the relative importance of each element, which provides a systematic approach to the decision-making process. The purpose of the integration was to combine the positive features of the two methods. Within the framework of this article, one of the main disadvantages of the combined use of SWOT analysis and the analytic hierarchy process is identified and described in detail. A comparative analysis of the number of required pairwise comparison operations is also carried out between the combined use of SWOT analysis with the analytic hierarchy process and the use of SWOT analysis with the hybrid assessment method.
Keywords: decision making method, analytic hierarchy process, hybrid assessment method, non-functional requirements, functional requirements, SWOT analysis
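The comparison of pairwise comparison counts described above can be sketched numerically. The factor counts per SWOT category and the exact structure of the hybrid method's scoring are illustrative assumptions, since the article's own figures are not reproduced in the abstract:

```python
def pairwise(n):
    # an n-element pairwise comparison matrix needs n*(n-1)/2 judgments
    return n * (n - 1) // 2

def swot_ahp_comparisons(factors_per_category):
    # AHP over a SWOT hierarchy: compare the 4 categories with each
    # other, then compare the factors within each category pairwise
    return pairwise(4) + sum(pairwise(k) for k in factors_per_category)

def swot_hybrid_comparisons(factors_per_category):
    # hybrid sketch: pairwise comparisons only at the category level,
    # factor weights obtained as weighted averages of direct scores
    return pairwise(4) + sum(factors_per_category)
```

With six factors per category, AHP needs 6 + 4·15 = 66 judgments, while the hybrid variant needs only 6 + 24 = 30, which illustrates why reducing the number of pairwise comparisons is the point of the combination.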
DOI: 10.26102/2310-6018/2024.45.2.019
The relevance of this work is associated with the expanding use of information systems and models that make it possible to monitor the dynamics of key indicators of enterprise operation and make appropriate organizational and managerial decisions. Working with enterprise information models requires access to data arrays, which can lead to excessive time spent on data analysis and query processing. When considering this task, it is important to take into account the size and structure of the basic information arrays storing the enterprise's core data. In this regard, this paper examines the feasibility of combining arrays that reflect the state of objects in particular workshops of a machine-building enterprise. It is shown that such an operation can yield a gain by reducing the time of operations on the array. A problem is formulated for finding the optimal composition of the resulting base arrays, characterized by the optimal updating time. To solve this problem, an algorithm for combining the main arrays is proposed. An analysis of the feasibility of the merging process is carried out, determining the conditions under which such merging is advisable. The algorithm uses the branch-and-bound method. The proposed algorithm makes it possible to choose the optimal composition of the base arrays and to combine the base data arrays of the enterprise information model, reducing the total time of accessing the data.
Keywords: information model of an enterprise, information array, data integration, data analysis, optimization criteria, efficiency of combining information arrays, enterprise management, production organization, automation
DOI: 10.26102/2310-6018/2024.46.3.028
The article discusses the development of a mobile gaming application for forming and developing leadership qualities in high school, college, and technical school students. The educational gaming application corresponds to the concept of "innovative educational technology"; that is, it includes a set of three interrelated components: modern content, modern teaching methods, and a modern digital learning infrastructure. The developed mobile application makes it possible to systematically develop such leadership qualities as self-confidence, responsibility, time management skills, creativity, the ability to act under uncertainty, and determination. The application logic is based on the principle of forming an individual educational trajectory. To build individual learning trajectories for each user, neural network clustering of questionnaire data is used: when generating individual trajectories for the development of leadership qualities, not only questionnaire methods are applied but also the results of clustering a set of questionnaires. Self-organizing Kohonen maps are used for clustering. Expert analysis of the resulting partition identified seven clearly defined clusters; for each of them, a model of individual adjustment of the leadership-development trajectory was compiled, and a description of each cluster was prepared jointly with the experts.
Keywords: clustering, kohonen network, leadership qualities, individualization of learning trajectory, mobile application
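A minimal self-organizing (Kohonen) map of the kind used above for clustering questionnaire data can be sketched in pure Python. The grid size, learning-rate schedule, and feature encoding are illustrative assumptions, not the application's actual configuration:

```python
import math
import random

def train_som(data, grid_w=2, grid_h=1, epochs=50, lr0=0.5, seed=0):
    """Minimal Kohonen self-organizing map (pure-Python sketch).

    data: list of equal-length feature vectors (e.g. questionnaire
    scores). Returns the trained codebook vectors, one per map node.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)]
             for _ in range(grid_w * grid_h)]
    pos = [(i % grid_w, i // grid_w) for i in range(grid_w * grid_h)]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = max(grid_w, grid_h) / 2 * (1 - t / epochs) + 0.5
        for v in data:
            bmu = min(range(len(nodes)), key=lambda i: dist2(nodes[i], v))
            for i, w in enumerate(nodes):         # pull neighbours toward v
                g = (pos[i][0] - pos[bmu][0]) ** 2 + (pos[i][1] - pos[bmu][1]) ** 2
                h = math.exp(-g / (2 * sigma ** 2))
                for d in range(dim):
                    w[d] += lr * h * (v[d] - w[d])
    return nodes

def assign_cluster(nodes, v):
    """Map a questionnaire vector to its nearest codebook node."""
    return min(range(len(nodes)),
               key=lambda i: sum((x - y) ** 2 for x, y in zip(nodes[i], v)))
```

After training, `assign_cluster` gives the cluster index that an individual trajectory model would then be selected from.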
DOI: 10.26102/2310-6018/2024.45.2.021
The article shows the possibilities of using machine learning methods to build and analyze an authentication system based on keystroke dynamics. The paper substantiates the need to improve multifactor authentication systems. A classification scheme for work on behavioral biometrics is proposed to enable comparison and reuse of research results. The basic options for processing and generating dynamic and static features of keystroke dynamics are considered. Various combinations of feature sets and training samples were tested, and the best combination, with an Equal Error Rate (EER) of 4.7%, is described. An iterative analysis of the quality of the system establishes the importance of the first characters of the input sequence, as well as a nonlinear relationship between the model's ranking degree and the EER. The high performance achieved by the boosting model indicates the significant potential of behavioral authentication for further improvement, development, and application. The significance of this method and its practical usefulness beyond the authentication task are presented, along with development prospects, including the use of neural network methods and analysis of data dynamics. Despite the results achieved, further work on the model is needed, including developing additional clustering and classification models, changing the feature set, and building a cascade. The importance of this research area, which can make a significant contribution to the development of information security and technology, is emphasized.
Keywords: authentication, behavioral biometrics, keystroke dynamics, classification, machine learning
DOI: 10.26102/2310-6018/2024.45.2.013
The article presents algorithms for reconstruction, calculation of stone parameters, and visualization of three-dimensional kidney and stone objects based on data obtained after a neural network detects 2D objects on medical images produced by computed tomography of human internal organs. The algorithms make it possible to reconstruct (assemble) kidney and stone objects, calculate the physical parameters of stones, and perform flat and three-dimensional visualization of stones. Their software implementation yields the dimensions of the calculi found in the kidneys, visualizes the density distribution inside a stone, and shows the location of the found stones within the kidney, which simplifies medical decision support during diagnosis and subsequent planning of surgical intervention for the stone-crushing procedure using a laser installation. The proposed algorithms and models were implemented as software modules in a prototype of a medical decision support system for surgery and urology using computer vision technologies. Using the developed algorithms for layered assembly of stones and kidneys in this prototype reduces the time for diagnosis and planning of stone-crushing surgery, and also helps to avoid errors in determining the location of stones inside the kidney, thereby reducing the likelihood of injury to the patient.
Keywords: detection, visualization, 3D voxel reconstruction, DICOM images, YOLO network
DOI: 10.26102/2310-6018/2024.45.2.016
The article explores the possibilities of applying semantic analysis of user posts on the social network VKontakte for monitoring and predicting depression. It emphasizes the seriousness of the depression issue, its negative impact on health and society, and the relevance of early diagnosis and assistance. The study also justifies the necessity and prospects of analyzing data from Russian-language social networks to prevent the development of depression among users. The article examines the analysis of textual data and the use of logistic regression to classify users based on the presence of depression. The study's results show high model accuracy using logistic regression, demonstrating the potential for automating the processes of identifying and supporting users suffering from depression in the online environment based on user information from social networks. The significance of this method is also highlighted, along with its practical usefulness for personalized interventions, its advantages, and its development prospects, including the use of neural network methods and the analysis of data dynamics. Despite the results achieved, there is a need for further work on the model, including the study of other machine learning methods and taking into account changes in the user’s mental state over time. The development of depression prediction methods based on social network data, as proposed in the article, is an important direction that can make a significant contribution to psychology, healthcare, and information technology.
Keywords: forecasting, depression, psychological disorder, logistic regression, classification, social network, machine learning
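The logistic-regression classifier mentioned above can be sketched without any libraries. The toy count features standing in for text features of user posts, and the training schedule, are illustrative assumptions:

```python
import math

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain logistic regression fitted by stochastic gradient descent.

    X: list of feature vectors (e.g. counts of depression-related terms
    in a post), y: 0/1 labels. Returns (weights, bias).
    """
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))    # sigmoid probability
            err = p - yi                       # gradient of the log-loss
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, x):
    """Classify a feature vector: 1 = at-risk, 0 = not."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 if z > 0 else 0
```

In practice the feature vectors would come from a text vectorizer over VKontakte posts rather than hand-made counts; the decision rule is the same.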
DOI: 10.26102/2310-6018/2024.45.2.044
The work is devoted to the problem of planning ship routes in water areas with heavy traffic. In conditions of heavy traffic, navigational safety can be ensured only if ships adhere to a certain traffic pattern. The paper examines the problem of planning a route so that it corresponds to the shipping practices that have developed in a particular area. The route planning method proposed in this work is based on clustering vessel traffic data. The selected clusters represent areas in three- or four-dimensional phase space with similar vessel speeds and courses, on the basis of which a graph of possible routes is formed. A feature of the graph construction approach is the reduction in the number of vertices and edges achieved by identifying the location of the selected clusters with covering polygons. The work shows that in many cases not only concave but also convex polygons can be used, which can further reduce the size of the graph. The paper provides a metric for the distance between points in phase space, which is used to cluster the data, and discusses the choice of metric parameters and the clustering algorithm. The promise of the DBSCAN algorithm is noted. The work is accompanied by calculations of planned vessel routes based on data from real water areas (the Tsugaru Strait). The results of clustering traffic data, identifying cluster locations by constructing enclosing polygons, and calculating the vessel's route are presented. It is noted that the problem under consideration may be promising in the context of the future development of autonomous vessel navigation: the calculated route of a vessel will then correspond to the movement of other vessels previously in the water area, reducing the likelihood of dangerous situations when an autonomous vessel moves in the general traffic flow.
Keywords: navigation safety, vessel traffic control, traffic route establishment system, heavy traffic, route planning, clustering, graph algorithms
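A textbook DBSCAN over a phase-space distance of the kind described above can be sketched as follows. The metric weights and the (x, y, speed, course) encoding are illustrative assumptions, not the paper's actual parameters:

```python
import math

def phase_dist(a, b, w_pos=1.0, w_spd=0.5, w_crs=0.5):
    """Distance in (x, y, speed, course) phase space.

    The weights are assumed tuning parameters; the course difference
    is wrapped so that 350° and 10° are 20° apart.
    """
    dxy = math.hypot(a[0] - b[0], a[1] - b[1])
    dspd = abs(a[2] - b[2])
    dcrs = abs(a[3] - b[3]) % 360
    dcrs = min(dcrs, 360 - dcrs)
    return w_pos * dxy + w_spd * dspd + w_crs * dcrs / 180.0

def dbscan(points, eps, min_pts, dist):
    """Textbook DBSCAN; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(nbrs) < min_pts:
            labels[i] = -1                     # provisionally noise
            continue
        labels[i] = cid
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid                # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbrs_j = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(nbrs_j) >= min_pts:         # j is a core point: expand
                queue.extend(nbrs_j)
        cid += 1
    return labels
```

Two traffic lanes with opposite courses end up in different clusters even where they are geometrically close, which is exactly why the speed and course dimensions are included in the metric.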
DOI: 10.26102/2310-6018/2024.45.2.043
The article considers the task of constructing a tourist route with predetermined start and end points. The objects to be visited are divided into two types: mandatory objects, which must certainly be included in the resulting route, and additional ones, which need not be visited. The route is formed taking into account the priorities that the tourist assigns to the objects based on his interests and preferences, while the total time spent visiting the objects must not exceed the specified deadline for arriving at the end point of the route. To solve this problem, the article proposes an approach based on constructing a route through the mandatory objects by known methods and then extending it using ant strategies. To this end, the concepts of the ant's "satiety" and the probability of returning to the main route are introduced, making it possible to control the time reserve. The article concludes with the results of a computational experiment aimed at assessing the influence of the ant algorithm's parameters on the resulting route and at developing recommendations for adjusting these parameters depending on the size of the problem. In addition, a comparative analysis of the routes obtained by the proposed algorithm and by the exact branch-and-bound method for a given set of objects is carried out, from which a conclusion is drawn about the effectiveness of the proposed algorithm.
Keywords: tourist route, ant algorithm, priority, traveling salesman problem, probabilistic choice
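One step of the probabilistic ant choice described above might look like the sketch below. Reading "satiety" as a probability that grows as the time budget is spent is an assumption; the article's exact formula is not given in the abstract:

```python
import random

def choose_next(candidates, pheromone, priority, satiety,
                alpha=1.0, beta=2.0, rng=random):
    """One probabilistic ant step over optional objects.

    With probability `satiety` the ant returns to the main route
    (returns None); otherwise it picks an optional object with
    probability proportional to pheromone**alpha * priority**beta.
    """
    if not candidates or rng.random() < satiety:
        return None                            # back to the main route
    weights = [pheromone[c] ** alpha * priority[c] ** beta for c in candidates]
    total = sum(weights)
    r, acc = rng.uniform(0, total), 0.0
    for c, w in zip(candidates, weights):      # roulette-wheel selection
        acc += w
        if r <= acc:
            return c
    return candidates[-1]

def update_satiety(time_used, time_limit):
    # satiety rises from 0 toward 1 as the arrival deadline nears,
    # steering the ant back to the mandatory route in time
    return min(1.0, time_used / time_limit)
```

Higher-priority objects are chosen more often, while the growing satiety keeps the detours within the tourist's time reserve.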
DOI: 10.26102/2310-6018/2024.45.2.042
The relevance of the study is due to the low level of use of natural-language dialogue in distance learning. Creating such tools based on artificial intelligence will make distance learning more accessible and attractive. The article proposes building the dialogue on standard questions about the content of the distance learning course. The answer is selected based on the similarity of the user's question to a standard question. It is recommended to use the structural units of the distance learning course as the set of answers, and the corresponding headings as the standard questions. The training dialogue data is stored and used to expand the list of standard questions and train the system. To assess learning, a measure of the similarity between the student's answers to test questions and the correct answer options is used. Course glossaries and test tasks can be used to generate test questions. It is proposed to determine the similarity of two texts using the cosine similarity of the embeddings of their closest terms. Data from comparing texts using the proposed methodology confirm its ability to correctly assess text similarity and justify its use for organizing natural-language dialogue in distance learning.
Keywords: distance learning, ranking chatbot, natural language dialogue, embedding, soft testing, sentence similarity measure
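The similarity measure described above (cosine of the embeddings of closest terms) might be sketched as below. The symmetric averaging over both texts and the toy embeddings are assumptions, since the abstract does not fix the aggregation rule:

```python
import math

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(x * y for x, y in zip(u, v)) / (nu * nv) if nu and nv else 0.0

def text_similarity(terms_a, terms_b, emb):
    """Soft similarity of two term lists.

    Each term is matched to its closest term in the other text by
    embedding cosine, and the matches are averaged symmetrically.
    emb maps a term to its embedding vector.
    """
    def one_side(src, dst):
        return sum(max(cosine(emb[s], emb[d]) for d in dst) for s in src) / len(src)
    return (one_side(terms_a, terms_b) + one_side(terms_b, terms_a)) / 2
```

This is what lets a "soft test" accept a paraphrased student answer: near-synonymous terms score close to 1 even when the wording differs.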
DOI: 10.26102/2310-6018/2024.45.2.041
Simultaneous multithreading is considered to be of little use in programs that perform intensive computations, in particular matrix multiplication, one of the main operations of machine learning. The purpose of this work is to determine the limits of applicability of this type of multithreading to high-performance numerical code, using block matrix multiplication as an example. The paper highlights a number of characteristics of matrix multiplication code and processor architecture that affect the efficiency of simultaneous multithreading. A method is proposed for detecting structural limitations of the processor when executing more than one thread, along with their quantitative estimation. The influence of the synchronization primitive used, and its behavior under simultaneous multithreading, is considered. The existing algorithm for dividing matrices into blocks is examined, and changes to the block sizes and loop parameters are proposed for better utilization of the core's computing units by two threads. A model has been created to evaluate the performance of two threads executing identical code on one physical core. A criterion has been formulated for determining whether computationally intensive code can be optimized using this type of multithreading. It is shown that dividing calculations between logical threads sharing a common L1 cache is beneficial in at least one common processor architecture.
Keywords: simultaneous multithreading, matrix multiplication, computation intensive, microkernel, BLAS, BLIS, synchronization, cache hierarchy, spinlock
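The block division that the paper tunes can be illustrated with a cache-blocked multiplication sketch. This is a pure-Python rendering of the loop structure only; the actual work uses compiled BLAS-style microkernels, and the block size here is an arbitrary illustration:

```python
def blocked_matmul(A, B, bs=2):
    """Cache-blocked matrix multiplication (loop-structure sketch).

    The three outer loops walk row/column/inner blocks; block size
    `bs` is the knob that real implementations tune so that two SMT
    threads sharing one core's L1 cache each work on a resident block.
    """
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, bs):
        for jj in range(0, m, bs):
            for kk in range(0, k, bs):
                # multiply one pair of blocks into the C block;
                # min() handles dimensions not divisible by bs
                for i in range(ii, min(ii + bs, n)):
                    for p in range(kk, min(kk + bs, k)):
                        a = A[i][p]
                        for j in range(jj, min(jj + bs, m)):
                            C[i][j] += a * B[p][j]
    return C
```

Shrinking or splitting these blocks between two logical threads is the kind of adjustment the paper evaluates for shared-L1 execution.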
DOI: 10.26102/2310-6018/2024.45.2.015
The problem of allocating and operating parking spaces is an important part of research in the field of intelligent transportation. In recent years, due to the sharp increase in the number of cars, the problem of limited parking space resources has become acute. Effective parking management requires analysis of huge amounts of data and modeling to optimize the use of parking spaces. The implementation and operation of smart paid parking in Vladivostok creates an interesting application area for data mining and machine learning. The study uses a large-scale data set of historical parking transactions in Vladivostok, including vehicle type, time, location, session duration, and more, to create a data model that reflects the relationship between parking prices, demand, and revenue. The article describes the mechanism for creating a data model that covers all important aspects of the functioning of paid parking lots and the factors affecting occupancy. Using this model will make it possible to apply machine learning models and evaluate their effectiveness. The study also identifies key factors influencing parking demand, such as time of day, day of week, and location. The data model and insights gained from this research can be used by governments and property owners to optimize the use of paid parking and improve traffic management in smart cities. The approach presented in this article can be applied to other cities to create data-driven pricing systems that meet the specific needs and characteristics of each city.
Keywords: modeling, paid parking lots, data analysis, gaussian distribution, optimization
DOI: 10.26102/2310-6018/2024.45.2.040
The article discusses the choice of a technological approach for porting a Windows desktop application that uses a non-cross-platform user interface component library and implements a plugin architecture to Linux. The approach described can be used in cases where flexibility and low overhead are preferred over a ready-made solution. The work is based on systems analysis; a collection of existing options and their elements is examined. The resulting solution uses model-driven software development to separate platform-specific components from cross-platform ones by means of well-defined programming interfaces. The suggested technology, in which source code is generated from a declarative description of an object-oriented interface model, provides interoperability between objects residing in different modules and separated by a compiler or runtime library boundary. The XML technology stack is used to implement validation, code completion, and transformation of model descriptions into C++ source code. Interfaces are represented by virtual method tables; each method is a C-style function. A reference to an interface is a structure containing a pointer to a virtual method table and a pointer to an object instance. For each interface, a number of declarations and definitions are generated: a set of function declarations, a virtual method table declaration, an interface reference structure declaration, and wrappers for interface references and implementation base classes in C++. The technology is successfully applied in the development of the INTEGRO geographic information system.
Keywords: plug-in architecture, object-oriented programming, application binary interface, c++, INTEGRO
DOI: 10.26102/2310-6018/2024.45.2.012
Machine learning methods are widely used to build medical predictive models. Along with methods based on classical statistics, Bayesian methods are used, which are most effective for small sample sizes. In this paper, a number of models for predicting a patient's bio-age from functional data are constructed using both classical machine learning methods and the Bayesian approach. The data used were the results of clustering carried out earlier on material from the medical organizations "Sverdlovsk Regional Clinical Psychoneurological Hospital for War Veterans" and "Institute of Medical Cell Technologies" for 1995–2022, comprising 6440 records, where 4 clusters were obtained, divided by gender and patient status (inpatient and outpatient). Based on the assumption that outpatients have the smallest difference between biological and calendar age, and therefore introduce less error into the model than inpatients, it was decided to build models only for outpatients. A set of models was constructed for 2 clusters: men in outpatient status (sample size 344 records) and women in outpatient status (sample size 991 records). Analysis of the age distribution in each group showed a bimodal distribution with a boundary at 40 years, so each group was divided by age into two parts: under 40 and over 40. The lazypredict platform was used to select classical machine learning models. For each group, the 4 methods that gave the highest accuracy were selected, and models were built on them, as well as ensembles of models: stacking and voting. The accuracy of these models on the test data ranged from 4.1 to 6.3 years. In the Bayesian approach, a linear multifactorial regression model with a given a priori distribution of the regression coefficients was constructed. The accuracy of the Bayesian models ranged from 4.9 to 6.6 years.
Keywords: bayesian approach, random forest, ensembles of models, voting, stacking, geroprophylactic effect, predicting the effectiveness of treatment, bio-age
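The stacking and voting ensembles mentioned above can be illustrated with a minimal sketch. This is not the authors' pipeline (which used the lazypredict library to shortlist models); it is a toy voting regressor under illustrative assumptions, averaging the predictions of two simple base models (ordinary least squares and a k-nearest-neighbours averager) on synthetic "functional indicator" data:

```python
import numpy as np

def fit_linear(X, y):
    # Ordinary least squares with an intercept column
    A = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ coef

def fit_knn(X, y, k=3):
    # Predict the mean target of the k nearest training points
    def predict(Xn):
        d = np.linalg.norm(Xn[:, None, :] - X[None, :, :], axis=2)
        idx = np.argsort(d, axis=1)[:, :k]
        return y[idx].mean(axis=1)
    return predict

def voting_predict(models, Xn):
    # Voting ensemble for regression: average the base-model predictions
    return np.mean([m(Xn) for m in models], axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(20, 80, size=(200, 3))                      # hypothetical indicators
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, 200)   # hypothetical "bio-age"
models = [fit_linear(X, y), fit_knn(X, y)]
pred = voting_predict(models, X[:5])
```

Stacking differs from voting in that a second-level model is trained on the base predictions instead of averaging them.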
DOI: 10.26102/2310-6018/2024.46.3.001
The article discusses the features of using social media as a tool for interaction between a user and the organizational system of higher education. Based on market analysis and the target objectives, a conclusion is drawn about the relevance and accessibility of the tool. An analysis of the Russian online platform market from the perspective of social media usage is presented. The basic requirements for implementing a social network as a tool for interacting with the organizational system are formulated, and a conclusion is drawn about the need to avoid duplicating information transmission channels within the organizational system. Requirements for the use of the tool, namely a social network, have been determined, on the basis of which the social network can serve as the main channel of communication with the user for decision support in the organizational system. These changes are demonstrated within a typical information architecture of an organization, using the example of a higher education institution. The integration of a social networking service for decision support into the information architecture of the organization is shown. Conclusions are drawn about new possibilities for using a social networking service as a decision-support tool, as well as about the positive impact of this study on the information architecture of the organization and the activities of its employees.
Keywords: smart assistant, organizational system, social network service, information architecture, application layer, chatbot
DOI: 10.26102/2310-6018/2024.45.2.039
The article is devoted to the development and application of a new mathematical form of relationship between the output variable and input factors in regression analysis. For this purpose, previously studied, simpler modular linear regression models were used, in which one or more input factors are transformed once using the modulus operation. A symbiosis of linear regression and modular regression with a multiary modulus operation is proposed. On its basis, a multilayer modular regression is formulated, built on the “module within a module” principle: each new layer applies the modulus to the value of the previous layer. The problem of estimating a multilayer modular regression with a given number of layers by the least absolute deviations method is reduced to a partial-Boolean linear programming problem. Using the proposed regressions, the problem of modeling timber reserves in the Irkutsk region was solved; single-layer, two-layer and three-layer modular regressions were constructed. The new models turned out to be significantly better in quality than linear regression, and as the number of layers increased, the sum of absolute residuals decreased. In the three-layer model, all residuals turned out to be zero. The developed mathematical apparatus can be successfully applied to many data analysis problems.
Keywords: regression analysis, multilayer modular regression, least absolute deviations method, partial-boolean linear programming problem, wood
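A hedged sketch of what a “module within a module” prediction might look like. The abstract does not give the exact functional form of the authors' multilayer modular regression, so the per-layer shape |a·z + c| and all coefficient names below are illustrative assumptions:

```python
import numpy as np

def modular_layer(z, a, c):
    # One "module" layer: the modulus applied to an affine transform
    # of the previous layer's output
    return np.abs(a * z + c)

def multilayer_modular_predict(x, inner, layers, b0, b1):
    # inner: coefficients of the innermost linear combination of input factors;
    # layers: (a, c) pairs applied outward, "module within a module"
    z = x @ inner
    for a, c in layers:
        z = modular_layer(z, a, c)
    return b0 + b1 * z

x = np.array([[1.0, 2.0], [3.0, 1.0]])
pred = multilayer_modular_predict(x,
                                  inner=np.array([0.5, -1.0]),
                                  layers=[(1.0, -0.2), (2.0, 0.1)],  # two layers
                                  b0=10.0, b1=1.5)
# → array([15.25, 11.05])
```

Estimating such a model by least absolute deviations requires handling the sign choices inside each modulus, which is what leads the authors to a partial-Boolean linear programming formulation.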
DOI: 10.26102/2310-6018/2024.45.2.038
Over the last decades, one of the key problems in operating trunk oil pipelines has been the formation of wax deposits on the inner wall of pipelines during the pumping of highly paraffinic oil. This process is characteristic both of new fields located in Western Siberia and of fields at the third stage of development, with a high degree of exploration and gradually decreasing oil production volume and quality. Paraffinization of pipes is a negative factor both for oil transportation and for subsequent diagnostics of main oil pipelines, reducing the reliability of the oil transportation system. In current practice, wax deposits are combated in two ways: removal of formed deposits by in-pipe cleaning devices and prevention of their formation by applying appropriate inhibitors. Research conducted by a number of scientists, including the authors of this article, has considered and confirmed the economic feasibility of using the formed wax deposit layer on the inner wall of the pipeline as an additional internal thermal insulation layer. At present, however, no device is in industrial operation whose design allows the formation of a uniform layer of paraffin deposits.
Keywords: modeling, main oil pipeline, treatment device, asphalt resin-paraffin deposits, mechatronics, prototype, arduino, thermal insulation
DOI: 10.26102/2310-6018/2024.45.2.037
This article discusses existing methods of positioning the base stations of a local positioning system in the work area. The choice of the station placement method largely determines the final accuracy and economic feasibility of the entire designed system. A review of the scientific literature has shown that there is currently no universal method for placing base stations in the positioning work area. Existing solutions either implement one of the standard approaches of placing stations on a grid or rely on enumerating many placement combinations. The grid placement method is not adapted to designing a positioning system in a complex-shaped work area divided internally by partitions and massive objects, since it does not take into account the peculiarities of radio signal propagation. The enumeration of base station placement combinations in most software implementations reduces to minimizing the influence of the geometric factor (Geometric Dilution of Precision, GDOP) on the error of distance measurements to the stations, and likewise does not account for the distortion introduced into the navigation signal as it passes through obstacles. Therefore, the development of a methodology for placing the base stations of a local positioning system is an urgent problem, and the article is devoted to its solution. According to the proposed methodology, a work area containing massive obstacles is divided into convex free subdomains using a greedy algorithm, and the base stations are placed within these subdomains. As a result, the principles of the base station placement methodology are outlined and a universal algorithm for station placement in work areas with obstacles is proposed.
Keywords: local positioning system, dilution of precision, geometric factor, greedy algorithm, DOP, trilateration
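The geometric factor (GDOP) mentioned above quantifies how station geometry amplifies range-measurement errors. A minimal 2D sketch, assuming a GNSS-style geometry matrix with a range-bias column; the station layout is illustrative:

```python
import numpy as np

def gdop(point, stations):
    # Unit line-of-sight vectors from the evaluation point to each base station
    diff = stations - point
    dist = np.linalg.norm(diff, axis=1)
    H = np.c_[diff / dist[:, None], np.ones(len(stations))]  # geometry matrix
    Q = np.linalg.inv(H.T @ H)       # shape factor of the error covariance
    return np.sqrt(np.trace(Q))      # geometric dilution of precision

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
center = gdop(np.array([5.0, 5.0]), stations)  # well-surrounded point
edge = gdop(np.array([9.0, 1.0]), stations)    # poorer geometry near a corner
```

A placement search would evaluate `gdop` over candidate points in each convex subdomain and penalize placements whose worst-case value is large; the point surrounded symmetrically by stations yields the smaller (better) value.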
DOI: 10.26102/2310-6018/2024.45.2.036
The tubing hanger is a structural element of the subsea production system. The tubing hanger body is the basis of the structure and absorbs the downhole pressure and the weight of the screwed-on pipe string; its strength and performance play a decisive role in ensuring the safety of the production process. Loss of structural integrity of the tubing hanger body can cause irreversible catastrophic consequences. Insufficiently developed engineering solutions for the design of the flow passage of the tubing hanger body can increase local hydraulic resistance, which raises the energy costs of pumping the produced fluid by the gas-lift method and, as a consequence, reduces the efficiency of the entire production line. In this regard, this article aims to identify the degree of influence of the geometric parameters of the flow passage of the tubing hanger body on the strength and hydraulic characteristics of the structure. The paper presents the results of computer modeling of the tubing hanger body under operating conditions using the finite element method and the finite volume method in the Ansys package. In the finite element modeling of the stress-strain state of the tubing hanger body, the problem was considered within an elastic formulation. Using the finite volume method, a single-phase gas flow was simulated with a pressure difference Δp = 1 MPa between the inlet and outlet of the flow channel, using the k-ε turbulence model. Based on the modeling results, the strength and hydraulic parameters of the structure were determined. The results of calculations of equivalent stresses, as well as of the hydraulic resistance coefficient, are presented for various designs of the flow passage of the tubing hanger body. The materials of the article are of practical value for engineers involved in the design of elements of a subsea production system.
Keywords: subsea production system, pipe hanger body, stress-strain state, tubing hanger, subsea christmas tree, hydraulic resistance coefficient
DOI: 10.26102/2310-6018/2024.45.2.018
The article discusses an approach to intelligent management in organizational systems aimed at ensuring the efficiency of interaction between producers and consumers of activity results using digital technologies and optimization modeling. In the conditions of active digitization of business, a class of organizational systems with a digital hub of activity results is identified. It is shown that, in organizing the interaction between producers and consumers, management is aimed not only at coordinating the objects of trading operations but also at regulating the objects of information flows in order to reduce the costs of digital transfer. Two optimization tasks arise, related to different schemes of distributing the objects of information flows between producers and consumers. In the first case, the optimized variables are the coefficients distributing the planned volume of flow entering the digital hub among producing objects, taking into account promotion options. The objective requirement ensures cost minimization, while the boundary requirement relates to the planned maximum and minimum levels of income that objects receive from exchanging information with consumers. The decision-making algorithm combines random selection of coefficient values on a given interval with subsequent adjustment by gradient search. A stopping rule for the iterative process is chosen; once it is satisfied, the optimal distribution of information flows between objects is determined. In the second case, an optimization model is constructed in which the optimization variables are the coefficients distributing the planned volume of information flow between producers, taking into account the categories of activity results registered by the digital hub.
Keywords: organizational system, digital hub, intellectualization, management, optimization modeling
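The abstract describes random initialization of the distribution coefficients followed by gradient adjustment with a stopping rule. A minimal sketch of such a loop under stated assumptions: the quadratic transfer-cost function is hypothetical, and the exponentiated-gradient step used here to keep the coefficients a valid distribution is this sketch's choice, not the article's exact scheme:

```python
import numpy as np

def optimize_distribution(cost_grad, n, iters=5000, lr=0.1, tol=1e-9, seed=0):
    # Random starting coefficients on the simplex (they split the flow volume)
    rng = np.random.default_rng(seed)
    w = rng.random(n)
    w /= w.sum()
    for _ in range(iters):
        # Exponentiated-gradient step keeps w nonnegative and summing to 1
        w_new = w * np.exp(-lr * cost_grad(w))
        w_new /= w_new.sum()
        if np.linalg.norm(w_new - w) < tol:  # stopping rule
            return w_new
        w = w_new
    return w

# Hypothetical cost sum_i c_i * w_i**2, penalizing overloaded producers;
# its gradient is 2 * c * w
c = np.array([3.0, 1.0, 2.0])
w_opt = optimize_distribution(lambda w: 2 * c * w, 3)
# For this cost the constrained optimum has w_i proportional to 1 / c_i
```

The random restart plus local adjustment structure matches the abstract; in practice several seeds would be tried and the best resulting distribution kept.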
DOI: 10.26102/2310-6018/2024.45.2.035
This paper considers methods for recognizing on video a specific class of technological manual labor operations that consist of sequences of hand and finger movements. A technological operation is treated here as a sequence of new, specific sign language symbols. Various methods of gesture recognition on video are reviewed. A two-stage approach was investigated: at the first stage, the key points of the hands in each frame are recognized using the open-source MediaPipe library; at the second stage, the frame-by-frame sequence of key points is transformed into text using a trained neural network of the Transformer architecture. The main attention is paid to training a Transformer model on the open American Sign Language (ASL) dataset for recognizing sign language sentences in video. The paper considers the applicability of this approach and of the trained ASL model for recognizing fine-motor technological operations of manual labor as a text sequence. The results can be useful for studying labor processes with fast movements and short time intervals in algorithms for recognizing technological operations of manual labor in video data.
Keywords: video analysis of hand movements, gesture recognition, action recognition, deep neural networks, transformer, technological operations
DOI: 10.26102/2310-6018/2024.45.2.034
The power demand on the electric grid varies depending on the time of day and the needs of consumers. Demand response is a change in the consumer load curve accompanied by a change in price, used primarily by suppliers to limit consumption peaks. Reducing the short-term mismatch between production and consumption helps to integrate renewable energy sources, various low-carbon technologies, battery storage and electric vehicles into the electric grid. One of the tools used to maintain the balance between electricity production and consumption is smart meters operating in a smart grid. Such devices are widespread in the United States and the European Union, including in the residential sector. In the Russian Federation, the introduction of smart grids in the residential sector is only beginning. The article considers a stochastic model of electricity consumption by household appliances based on convolution theory. The power consumption of the most common household appliances was measured, and several examples of consumer profiling based on the obtained data are given. The barriers that arise during the implementation of smart grids in the Russian Federation are identified, as well as the reasons why the interest of electricity suppliers in smart grids is growing.
Keywords: stochastic models, demand forecasting, multi-stage load, smart grid, energy consumption profile
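The convolution idea behind the stochastic consumption model can be sketched in a few lines: if appliances switch on and off independently, the probability distribution of the total load is the convolution of the individual appliances' load distributions. The appliance figures below are hypothetical, not measurements from the article:

```python
import numpy as np

# Discrete power levels in 100 W bins; pmf[i] = P(appliance draws i * 100 W)
fridge = np.array([0.7, 0.3])          # off 70% of the time, 100 W 30%
kettle = np.array([0.95, 0.0, 0.05])   # off 95% of the time, 200 W 5%

# Convolution of independent appliance pmfs gives the pmf of the total load
total = np.convolve(fridge, kettle)
mean_w = 100 * np.dot(np.arange(len(total)), total)  # expected total draw, W
```

The expected total (40 W here) equals the sum of the appliances' expected draws, and repeating the convolution over many appliances builds up the household load distribution for each time-of-day interval.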
DOI: 10.26102/2310-6018/2024.45.2.033
The relevance of modeling forced oscillations of microdroplet aggregates in magnetic fluids is associated with the problem of predicting the parameters of the working bodies of new devices and with the creation of new magnetosensitive media with controllable properties. The scientific interest stems from the unique sensitivity of microdroplet aggregates to the magnetic field, their high magnetic permeability (for liquid media) and the low interfacial tension at the aggregate-carrier liquid interface, which makes it possible to obtain forced oscillations of large amplitude. The nature of the oscillations depends on the frequency and strength of the external field, as well as on the parameters of the aggregates. The peculiarities of large-amplitude forced oscillations of microdroplet aggregates are poorly understood; in particular, it is of interest to develop a universal modeling method suitable for computational experiments over a wide range of interfacial tension and to investigate the possibility of oscillation suppression with increasing frequency, which is carried out in this work. The modeling of forced oscillations is based on the energy approach and on the assumptions that the shape of the aggregate elongated along the field can be represented by an ellipsoid of revolution and its magnetization by a linear dependence on the external magnetic field strength. This allowed a computational experiment in which the interfacial tension was varied by an order of magnitude, from 2·10⁻⁶ N/m to 2·10⁻⁵ N/m, with satisfactory agreement with full-scale experimental data. The computational experiment showed that an increase in interfacial tension leads to a decrease in oscillation amplitude and a reduction in elongation, i.e., it suppresses the oscillations. Of practical value is the prediction of the deformation of aggregates under a magnetic field for the development of new materials with controllable properties.
Keywords: numerical modeling, forced oscillations, microdroplet aggregates, interfacial tension, magnetic fluid
DOI: 10.26102/2310-6018/2024.45.2.032
Any living organism has its own biological field, which depends both on the characteristics and state of the organism and on environmental factors. Under the informational influence of external factors, the fractal structure of this field changes and special chaotic signals form, whose parameters can serve as a basis for solving various scientific and practical problems. The article presents a technology for studying the electromagnetic fields of biological objects based on analyzing changes in the chaos structure of the broadband chaotic signals of their own electromagnetic radiation, generated under the influence of an external informative electromagnetic field with a given strength and modulation-time parameters. To estimate the structure of chaotic signals, it is proposed to use such methods of the fractal approach as Poincaré mapping and calculation of the corresponding Hausdorff dimension and chaos-rhythm parameters. The conducted experiments established a characteristic dependence of the chaos-rhythm parameters of a bioobject's own electromagnetic emissions on the characteristics and state of the living organism itself, as well as on the parameters, sequence and rate of change of the external informative electromagnetic field. The degree of informative influence of the external electromagnetic field on a human being is determined; by some indicators it can exceed the energetic influence by almost 4 times. The possibility of using the proposed technology to solve various scientific and practical problems is demonstrated: medical studies of the functional state of the organism, assessment and control of the impact of electromagnetic fields on human health, development of means to protect the environment and humans from radio-emitting systems, and detection and recognition of bioobjects of a given class.
Keywords: fractal approach, chaos-rhythm, hausdorff dimension, wideband chaotic signal, bioradioinformative technology, biological object, electromagnetic radiation, integral field, information interaction
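The Hausdorff dimension mentioned above is, in numerical practice, usually approximated by the closely related box-counting dimension. A minimal sketch of that estimator; the test set (a straight segment, whose dimension is 1) is purely illustrative, not data from the article:

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    # Count the occupied grid boxes at each scale eps; the dimension estimate
    # is the slope of log N(eps) versus log(1 / eps)
    counts = [len(np.unique(np.floor(points / eps), axis=0)) for eps in eps_list]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

t = np.linspace(0.0, 1.0, 20000)
segment = np.c_[t, t]                                    # a line: dimension 1
d = box_counting_dimension(segment, [0.1, 0.05, 0.02, 0.01])
```

Applied to a Poincaré section of a measured chaotic signal, the same estimator would track how the signal's fractal structure changes under an external field.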
DOI: 10.26102/2310-6018/2024.45.2.031
The relevance of the study is due to the problem of untimely analysis of the composition of liquid mixtures during their production by enterprises of the food, chemical and oil refining industries. Traditionally, such analysis is carried out after a batch of products has been formed, which is why enterprises incur costs associated with disposing of defective batches. This article is devoted to the development of an acoustic measuring system for analyzing the composition of liquid substances that can analyze various liquid products continuously as they are transported through the internal industrial pipeline system, which makes it possible to identify defects before a batch is formed, thereby reducing disposal costs. The described system, built into the pipeline, contains two measuring channels comprising two piezoelectric receivers and one piezoelectric emitter common to both channels. As part of this work, equipment is selected based on an analysis of the repeatability of measurement results; in particular, the suitability of various generators producing excitation signals for the piezoelectric emitter is considered. The possibility of using excitation signals of various shapes and/or durations is investigated, and repeatability is assessed using the linear correlation coefficient between several repetitions of experiments with the same type of excitation signal. The need for two measuring channels is analyzed. The materials are of practical value for enterprises producing liquid products, as well as for manufacturers of analytical equipment.
Keywords: acoustic measurement method, piezoelectric transducer, analytical studies, repeatability of results, linear correlation coefficient
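Repeatability assessment via the linear correlation coefficient, as described above, can be sketched as follows. The synthetic decaying ultrasonic burst and the noise level are illustrative assumptions, not the article's measured waveforms:

```python
import numpy as np

def repeatability(signals):
    # Pairwise Pearson correlation between repeated measurements;
    # the minimum over pairs indicates the worst-case repeatability
    r = np.corrcoef(signals)
    off_diag = r[np.triu_indices(len(signals), k=1)]
    return off_diag.min()

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1e-3, 500)
base = np.sin(2 * np.pi * 40e3 * t) * np.exp(-t / 3e-4)  # decaying 40 kHz burst
trials = np.stack([base + rng.normal(0, 0.01, t.size) for _ in range(5)])
score = repeatability(trials)
```

Comparing this score across candidate generators and excitation waveforms is one way to rank equipment by the repeatability criterion the article uses.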