DOI: 10.26102/2310-6018/2024.44.1.008
The paper considers relevant issues of calculation and forecasting in the production of solar electricity as a renewable energy source. To identify the problems, the initial data for modeling and their sources are determined. Renewable energy sources are systematized, and an example is given for each. An analysis of the state of the global energy market and of government energy policy in Russia underscores the need to address solar energy issues and to solve the problems of forecasting electricity generation; this is important not only because of the availability of the resource, but also because of its environmental friendliness. The classification of existing models and methods for forecasting the energy generation of solar power plants (SES) is examined. Existing methods make it possible to predict generation capacity, but they yield only average figures for the year; new technological and innovative methods are required to solve this problem. The key factors and aspects of the introduction and operation of a solar power plant are presented. The main difficulty in forecasting is accounting for a variety of nonlinear characteristics, and an attempt to solve this problem is proposed. An overview of the state of the problem and of trends in the development of solar energy is given, within which the main problems are identified and solutions are outlined.
Keywords: solar energy, renewable energy sources, aspects and operation of a solar power plant implementation, forecasting solar energy generation, forecasting methods
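As a point of reference for the kind of annual average estimate the abstract finds insufficient, below is a minimal sketch of the standard rule-of-thumb yield formula E = A · r · H · PR; all numbers in it are assumptions chosen purely for illustration, not data from the paper.

```python
# Rule-of-thumb annual PV yield E = A * r * H * PR: the "average figures
# for the year" that coarse forecasting methods produce.
def annual_pv_yield_kwh(area_m2: float,
                        panel_efficiency: float,
                        annual_irradiation_kwh_m2: float,
                        performance_ratio: float = 0.75) -> float:
    """Estimate annual energy output of a PV array in kWh."""
    return (area_m2 * panel_efficiency
            * annual_irradiation_kwh_m2 * performance_ratio)

# Assumed example: 100 m^2 of 20 %-efficient panels, 1200 kWh/m^2 of
# annual irradiation, performance ratio 0.75.
print(f"{annual_pv_yield_kwh(100, 0.20, 1200):.0f} kWh/year")  # ~18000
```

Such a figure says nothing about hourly or seasonal variability, which is exactly why the abstract calls for methods that capture the nonlinear characteristics of generation.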
DOI: 10.26102/2310-6018/2024.44.1.019
Predictive management, with all its errors and difficulties, remains an effective means of giving an organizational-technical system time to increase its readiness for changes in the situation. To formulate and solve the problem of optimal control of this process, the Fokker-Planck-Kolmogorov equation was used, which is the first approximation in the probabilistic description of random processes. To state the optimal control problem, the Letov criterion was modified, a coordinate-parametric approach was applied, and the fact that management costs grow as the time allotted to improve the readiness of the organizational-technical system shrinks was taken into account in the form of the square of the rate of change of the probability density. The Euler-Ostrogradsky-Poisson equations are applied to the resulting Lagrangian, and the resulting nonlinear equations were solved using the small parameter method. The study of the obtained solution shows that even under optimal control the magnitude of the control actions grows in proportion to the target value and the duration of control (an increasing planning horizon); the growth follows the cube of an exponential, that is, very slowly at the beginning of control and very sharply at the end. The dependence of the control actions on the demand for management results demonstrates a similar pattern of growth, but it is expressed through hyperbolic functions.
Keywords: optimal control, Fokker-Planck-Kolmogorov equation, probabilistic quality criteria, intensity of application of control actions, small parameter method
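For orientation, a schematic form of the objects named in the abstract is given below: the one-dimensional Fokker-Planck-Kolmogorov equation for the probability density and a Letov-type quadratic functional extended with the squared rate of change of the density. The weights q, r, s and the exact structure are illustrative assumptions, not the paper's final statement.

\[
\frac{\partial \rho(x,t)}{\partial t}
= -\frac{\partial}{\partial x}\bigl[a(x,t)\,\rho(x,t)\bigr]
+ \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,\rho(x,t)\bigr],
\]

\[
J[u] = \int_{0}^{T}\!\!\int_{X}
\Bigl[\, q\bigl(\rho(x,t)-\rho^{*}(x)\bigr)^{2}
+ r\,u^{2}(x,t)
+ s\Bigl(\frac{\partial \rho}{\partial t}\Bigr)^{2} \Bigr]\,dx\,dt \;\to\; \min_{u},
\]

where the term weighted by \(s\) models the growth of management costs as the time allotted for raising readiness shrinks.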
DOI: 10.26102/2310-6018/2024.44.1.007
The article examines the development of a new approach to storing and organizing the results of laboratory experiments, taking into account the features of their subsequent processing. To solve this problem, laboratory experiments are treated as structured data with unstructured parts. During the development of the system, the features of storing and processing laboratory test data were analyzed, after which the basic requirements for the system were formulated. The main data models were defined, as were the database entities. A standard relational data model was chosen for storing structured data, while unstructured information, such as experiment results or experiment parameters, is stored in a binary JSON (JSONB) field. To provide secure access and create an API for the system, the asynchronous FastAPI framework was chosen. The storage of additional experiment files, which reside in object storage and are associated with an experiment in the relational model through an additional entity, is also considered. The presented approach is notable for its flexibility with respect to the structure of the stored laboratory experiments, takes into account the specifics of geological laboratory experiments, and provides opportunities for complex meta-analysis of large volumes of data. The system was tested and introduced into the technological process of the geotechnical laboratory at JSC MOSTDORGEOTREST.
Keywords: storage of geological laboratory experiment data, unstructured data, experiment results storage system, geoinformation system, database, geological environment, information resource, engineering geology
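A minimal sketch of the described storage pattern, with assumed entity and field names: relational columns hold the structured metadata, while a binary JSON column stores the experiment-specific unstructured part, exposed through a FastAPI endpoint.

```python
# Illustrative sketch only: schema, DSN and endpoint names are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Experiment(Base):
    __tablename__ = "experiments"
    id = Column(Integer, primary_key=True)
    sample_code = Column(String, nullable=False)  # structured part
    results = Column(JSONB, nullable=False)       # unstructured part

engine = create_engine("postgresql+psycopg2://user:pass@localhost/lab")
Base.metadata.create_all(engine)
app = FastAPI()

class ExperimentIn(BaseModel):
    sample_code: str
    results: dict  # arbitrary, experiment-type-specific structure

@app.post("/experiments")
def create_experiment(payload: ExperimentIn) -> dict:
    with Session(engine) as session:
        exp = Experiment(sample_code=payload.sample_code,
                         results=payload.results)
        session.add(exp)
        session.commit()
        return {"id": exp.id}
```

JSONB keeps the unstructured results queryable from SQL, which is what enables the meta-analysis over heterogeneous experiments mentioned in the abstract.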
DOI: 10.26102/2310-6018/2024.44.1.004
The article presents the results of applying the acoustic emission (AE) method and machine learning algorithms to the problem of diagnosing delamination defects in the structure of a multilayer printed circuit board (MPB). A combination of physical and computational experiments is used to solve the problem. For the full-scale tests, the study uses a vibration stand to load the test object and to receive acoustic emission signals. The computational experiment is carried out using mathematical modeling in the specialized ABAQUS environment. To obtain the best solution, an optimization problem is solved during the experiment to determine the frequency of the harmonic signal generated by the vibration stand that yields the maximum response of the MPB under study and unambiguous identification of the delamination defect. In the numerical experiments, the effects and reactions (AE signals) of the MPB were modeled at input vibration frequencies ranging from 100 to 2000 Hz. Full-scale experiments were conducted in the laboratory for control and testing of radio-electronic devices at the KPRES Department of RTU MIREA. The results show that the vibration frequency most effective for detecting the delamination defect (a nearly rectangular defect measuring 30×37 mm) is 1500 Hz. This was subsequently confirmed by correlation analysis, which revealed the maximum differences between the acoustic emission signals of a defect-free MPB sample and a sample with a delamination defect for input vibration of the given frequency. The second part of the study deals with processing the results of the physical and computational experiments, establishing the degree of adequacy of the obtained mathematical models to real MPB samples and the processes occurring in them, and applying machine learning algorithms for more reliable diagnosis of MPB defects. The random forest and support vector machine (SVM) methods were employed as the machine learning algorithms, and the accuracy of the two algorithms was evaluated on the results of their execution.
Keywords: acoustic emission, multilayer printed circuit board, hidden defects, structure stratification, modeling, physical experiment, machine learning algorithm, support vector machine method, random forest method, non-destructive testing
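A sketch of the final classification step named in the abstract: training the two algorithms on acoustic-emission feature vectors and comparing their accuracy. The data here is synthetic and the feature set is assumed; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: 200 AE signals x 16 spectral/statistical features (assumed).
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = delamination defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```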
DOI: 10.26102/2310-6018/2024.44.1.001
This paper considers methods for the authorship attribution of natural-language and artificially generated texts, which is important in the context of cybersecurity and intellectual property protection for preventing misinformation and fraud. The use of these attribution methods is justified by findings on the effectiveness of fastText and the support vector machine (SVM) discussed in previous studies. The feature selection algorithm is chosen by comparing five different methods: a genetic algorithm, forward and backward sequential methods, regularization-based selection, and Shapley's method. The considered selection algorithms include heuristic methods, elements of game theory, and iterative algorithms. The regularization-based algorithm is found to be the most efficient, while methods based on exhaustive search prove inefficient for any set of authors. The accuracy of regularization-based selection with SVM averaged 77 %, outperforming the other methods by 3 to 10 % for an identical number of features. For the same tasks, the average accuracy of fastText is 84 %. The robustness of the developed approach to generative samples was also examined: SVM proved more robust to attempts to confuse the model. The maximum loss of accuracy was 16 % for fastText and 12 % for SVM.
Keywords: feature selection, authorship attribution, machine learning, neural networks, text analysis, information security
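A sketch of the combination the abstract reports as most efficient: L1-regularized feature selection feeding an SVM classifier. The features and authors here are synthetic stand-ins for the stylometric features of the study.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 500))   # e.g. n-gram frequencies (assumed)
y = rng.integers(0, 5, size=300)  # five candidate authors

# The L1 penalty drives the weights of uninformative features to zero,
# so SelectFromModel keeps only the surviving features.
selector = SelectFromModel(
    LinearSVC(penalty="l1", dual=False, C=0.5, max_iter=5000))
pipeline = make_pipeline(selector, LinearSVC(dual=False, max_iter=5000))
print(cross_val_score(pipeline, X, y, cv=5).mean())
```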
DOI: 10.26102/2310-6018/2024.44.1.011
Today, the X-ray analysis procedure makes it possible to detect osteoarthritis (OA) at early stages of the disorder; however, its presence or absence is established only once the disease has already manifested itself and X-ray diagnostics have been carried out. Automated procedures for analyzing X-ray images, together with the available archives of such data with a long history, can improve the prediction of complications in patients. The article describes the experience of developing an application for computer analysis of radiographs that, based on deep learning methods, makes it possible to identify the risks of developing osteoarthritis of the hip joint. The archive of a specialized medical institute is used as the training sample. To increase the size of the training set of radiographs, a data augmentation method is used, which increases the variability of the original data and, in some cases, improves recognition performance. The research uses a convolutional neural network (U-Net) designed for image segmentation, trained on X-ray images from a specific medical institution. As part of a project to segment and analyze the geometric characteristics of X-ray images of the hip joints, software was developed to automate the recognition of the joint space size, which helps to refine the patient's diagnosis and the prognosis for the development of the pathology.
Keywords: convolutional neural network, image segmentation, machine learning, osteoarthritis, hip joint
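An illustrative miniature of a U-Net-style segmentation network for grayscale radiographs; this is a sketch of the architecture family, not the authors' exact model, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel logit for the joint-space mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = MiniUNet()(torch.randn(1, 1, 256, 256))  # one grayscale radiograph
print(logits.shape)  # torch.Size([1, 1, 256, 256])
```

The skip connections are what let the decoder recover the fine joint-space boundaries that pooling would otherwise blur away.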
DOI: 10.26102/2310-6018/2024.44.1.021
The current stage of unmanned aircraft system development is characterized by the widespread introduction of automated and intelligent electronic systems. One of the most difficult and critical stages in the development of unmanned aerial vehicles is determining the optimal locations for on-board equipment in the fuselage space. To solve this problem, an approach for determining the optimal installation locations for on-board equipment in the fuselage space of an unmanned aerial vehicle is proposed, based on a genetic algorithm. A substantive and mathematical formulation of the problem is given, and optimization criteria and constraints are developed. The electromagnetic compatibility criteria are considered first of all: minimizing the excess of the electromagnetic field strength at the installation sites over the susceptibility level of the on-board equipment, and limiting the excess of the electromagnetic environment, produced by electromagnetic influences and interactions, over the equipment's susceptibility threshold. In addition, a criterion for minimizing the total weighted length of cable connections is considered, and the maximum load-carrying capacity of the fuselage compartments of the unmanned aerial vehicle acts as a constraint. A plan for installing the on-board equipment in the fuselage space has been developed using a program that implements the genetic algorithm.
Keywords: placement, optimization, on-board equipment, genetic algorithm, unmanned aerial vehicle
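A toy sketch of such a genetic algorithm: a chromosome assigns each on-board unit to a distinct fuselage slot, and the fitness combines weighted cable length with a penalty for exceeding assumed electromagnetic-field susceptibility limits. All data is invented for illustration.

```python
import random

random.seed(0)
N_UNITS, N_SLOTS = 6, 10
slot_xyz = [(random.uniform(0, 5), random.uniform(-0.4, 0.4),
             random.uniform(-0.4, 0.4)) for _ in range(N_SLOTS)]
field_at_slot = [random.uniform(0.0, 2.0) for _ in range(N_SLOTS)]  # V/m, assumed
susceptibility = [1.5] * N_UNITS                                    # assumed thresholds
cable_w = [[random.uniform(0, 1) for _ in range(N_UNITS)] for _ in range(N_UNITS)]

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def fitness(assign):  # assign[i] = slot of unit i; lower is better
    cable = sum(cable_w[i][j] * dist(slot_xyz[assign[i]], slot_xyz[assign[j]])
                for i in range(N_UNITS) for j in range(i + 1, N_UNITS))
    emc = sum(max(0.0, field_at_slot[assign[i]] - susceptibility[i])
              for i in range(N_UNITS))
    return cable + 10.0 * emc  # EMC violations dominate via the penalty weight

def crossover(a, b):  # one-point crossover with repair of duplicate slots
    child = a[:N_UNITS // 2] + b[N_UNITS // 2:]
    free, seen = [s for s in range(N_SLOTS) if s not in child], set()
    for k, s in enumerate(child):
        if s in seen:
            child[k] = free.pop()
        seen.add(child[k])
    return child

pop = [random.sample(range(N_SLOTS), N_UNITS) for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:20]
    children = [crossover(random.choice(parents), random.choice(parents))
                for _ in range(20)]
    for c in children:  # mutation: move one unit to a random free slot
        if random.random() < 0.3:
            i = random.randrange(N_UNITS)
            c[i] = random.choice([s for s in range(N_SLOTS) if s not in c])
    pop = parents + children
print(min(map(fitness, pop)))
```

A real formulation would add the compartment load-capacity constraint and a proper field model; the penalty structure, however, carries over directly.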
DOI: 10.26102/2310-6018/2024.44.1.017
An analytical study was carried out on the problem of preventing emergency situations and of predictive diagnostics of equipment during hydrocarbon production at oil and gas fields, as well as on ways to solve this problem by means of artificial intelligence based on deep neural networks. One of the key factors hindering the development of predictive equipment diagnostics is the lack of data describing pre-emergency situations, which is necessary for high-quality training of neural network models. An analysis of recent publications and research on telemetry data analysis and emergency recognition is provided. Neural network models are considered that can be used to predict the failure of pumping and compressor equipment and other units. Both neural network models specially trained for this problem and models developed for other tasks but analyzing similar data structures were studied. The issue of transfer learning is raised as a way to adapt neural network models originally developed and trained for other domains to the area under consideration, in order to reduce the sample size required for training industrial artificial intelligence. The achieved results were compared, and the advantages and disadvantages of existing technical solutions were identified.
Keywords: artificial neural networks, predictive diagnostics, machine learning, time series, telemetry, maintenance, data sets
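A generic sketch of the transfer-learning recipe the abstract raises: freeze a feature extractor assumed to be pre-trained on abundant time-series data and retrain only a small head on the scarce pre-emergency telemetry. The architecture and data are placeholders, not a model from the reviewed papers.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for a pre-trained extractor
    nn.Conv1d(4, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten())
for p in backbone.parameters():
    p.requires_grad = False          # freeze: only the head is trained

head = nn.Linear(32, 2)              # normal vs pre-emergency state
model = nn.Sequential(backbone, head)

X = torch.randn(64, 4, 256)          # 64 windows, 4 sensors, 256 samples each
y = torch.randint(0, 2, (64,))
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```

With only the head's weights trainable, the number of parameters to fit drops by orders of magnitude, which is what makes the small pre-emergency sample workable.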
DOI: 10.26102/2310-6018/2024.44.1.002
The relevance of the paper stems from the difficulties of oral interaction between people with speech disorders and neurotypical interlocutors, from the low quality of abnormal speech recognition by standard speech recognition systems, and from the impossibility of creating a single system capable of processing all speech disorders. The article is therefore aimed at developing a method for automatic recognition of dysarthric speech that uses a pre-trained neural network for phoneme recognition and hidden Markov models for converting phonemes into text, with subsequent correction of the recognition results by searching the space of admissible words for the nearest word in the Levenshtein sense and by a dynamic algorithm for splitting the model output into separate words. The main advantage of hidden Markov models over neural networks is the small size of the training data set, collected individually for each user, as well as the ease of further training the model in the case of progressive speech disorders. The data set for model training is described, and recommendations for collecting and labeling training data are given. The effectiveness of the proposed method is tested on an individual data set recorded by a person with dysarthria, and the recognition quality is compared with that of neural network models trained on the same data set. The materials of the article are of practical value for creating an augmentative communication system for people with speech disorders.
Keywords: hidden Markov models, dysarthria, automatic speech recognition, phoneme recognition, phoneme correction
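A self-contained sketch of the correction step: map each recognized token to the nearest word, by Levenshtein distance, in a user-specific vocabulary of admissible words (the vocabulary below is an assumption for illustration).

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(token: str, vocabulary: list[str]) -> str:
    return min(vocabulary, key=lambda w: levenshtein(token, w))

vocab = ["water", "window", "walk", "warm"]  # assumed per-user vocabulary
print(correct("wader", vocab))               # -> "water"
```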
DOI: 10.26102/2310-6018/2024.44.1.032
Underwater optical wireless communications are promising and future-oriented wireless carriers for supporting underwater activities in 5G and beyond (5GB) wireless systems. The main challenges for the deployment of underwater applications are the physicochemical properties of, and strong turbulence in, the transmission channel. This paper therefore analyzes the end-to-end performance of a hybrid free-space optics (FSO) and underwater wireless visible light communication (UVLC) system under intensity modulation with direct detection (IM/DD) using a pulse amplitude modulation (PAM) scheme. A fading model with the Gamma-Gamma (GG) distribution is used to handle channel conditions with moderate and strong turbulence, and the links are designed using plane-wave modeling in the corresponding links. The proposed methods achieve higher data rates with minimal delay and improve network connectivity in real-time monitoring scenarios compared to conventional underwater wireless communication techniques. The simulation results provide reliable estimates of system performance metrics such as the average bit error rate (ABER) and the outage probability (Pout) in the presence of pointing errors. Finally, the paper uses a Monte Carlo approach for best curve fitting and validates the numerical expressions against the simulation results.
Keywords: 5G and 5GB networks, cooperative communication, optical communication, underwater communication, underwater sensor networks (USNs), visible light communication (VLC)
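A minimal Monte Carlo sketch of the ABER estimate: Gamma-Gamma turbulence is sampled as the product of two independent unit-mean Gamma variates, and the conditional 2-PAM (OOK) error rate Q(h·sqrt(SNR)) is averaged. Pointing errors are omitted, and the alpha/beta values for moderate turbulence are assumptions.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(0)
alpha, beta = 4.0, 1.9            # assumed moderate-turbulence parameters
snr = 10 ** (20.0 / 10)           # 20 dB electrical SNR

n = 1_000_000
h = (gamma.rvs(alpha, scale=1 / alpha, size=n, random_state=rng)
     * gamma.rvs(beta, scale=1 / beta, size=n, random_state=rng))  # E[h] = 1
aber = norm.sf(h * np.sqrt(snr)).mean()   # Q(x) = norm.sf(x)
print(f"ABER ~ {aber:.3e}")
```

Averaging the closed-form conditional error rate over sampled fading states is also how simulated curves are matched against analytical ABER expressions.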
DOI: 10.26102/2310-6018/2024.44.1.005
Sensor devices and biomedical imaging technologies used in clinical scenarios are essential for providing a comprehensive portrait of a patient's state, but these technologies, despite their outstanding advantages, have inherent disadvantages. Starting from the principle of complementarity among medical imaging techniques, this review examines the functional near-infrared spectroscopy (fNIRS) technique and its use in hybrid systems. The fNIRS technology delivers impressive results in terms of biological signal classification accuracy, but its use in hybrid systems with electroencephalography (EEG) and electromyography (EMG) achieves better results, since fNIRS acts as a complementary tool that compensates for the deficiencies of the technique it is paired with; this is highlighted in the review. The results show that the superior biological signal classification accuracy provided by hybrid systems combining fNIRS with EEG and EMG would enable a comprehensive and objective assessment of the patient's state from illness to healing. In conclusion, the scientific studies of the past four years (2020–2023) give no indication of which hybrid system is better than the others in clinical practice, and this encourages further in-depth studies to validate the combinations of methods and to establish their success and preference.
Keywords: HBCIs, fNIRS, fMRI, EEG, EMG, MEG
DOI: 10.26102/2310-6018/2024.44.1.006
The paper considers the best-known models of a porous body used to simplify thermohydraulic calculations by the finite element method. The main approaches and dependencies for using the porous body model in calculations are shown. The results of thermohydraulic calculations using the Darcy porous body model are presented: a heat exchanger with spirally wound tubes was calculated, as was a complex technological system consisting of mechanical filters of different configurations. The discrepancies between the calculated and actual parameters of the equipment are determined. Using the porous body model as a hydraulic analogue of equipment, illustrated by the mechanical filters and the heat exchanger, showed acceptable results (deviations from the design values range from 0.1 % to 10 %). These discrepancies are related to the accuracy and correctness of the selected porous-body resistance laws (dependencies). The porous-body approach to modeling the operating modes of technological systems that include equipment of complex design is justified, first of all, when computational modeling must predict the operating modes of the system as a whole rather than the local processes occurring inside the equipment, and, secondly, when calculation time must be reduced on computers of limited power. However, the proposed approach has disadvantages; in particular, the procedure for determining the degree of porosity of the simulated object and for selecting the laws of hydraulic resistance from empirical dependencies is quite complex.
Keywords: porous body model, complex technological systems, heat exchanger, finite element method, hydraulic resistance, mechanical filters
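For reference, a schematic form of the resistance laws involved: the Darcy model relates the pressure gradient linearly to velocity, and CFD codes commonly add an inertial (Darcy-Forchheimer) term. The coefficients below are exactly the case-specific empirical quantities whose selection the abstract identifies as the difficult step.

\[
\nabla p = -\frac{\mu}{K}\,\mathbf{u},
\qquad
-\frac{\partial p}{\partial x_i} = \frac{\mu}{K}\,u_i + C_2\,\frac{\rho}{2}\,\lvert\mathbf{u}\rvert\,u_i,
\]

where \(\mu\) is the dynamic viscosity, \(K\) the permeability, \(\rho\) the density, and \(C_2\) the inertial resistance factor.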
DOI: 10.26102/2310-6018/2024.44.1.003
The article examines the problem of developing an integration platform to facilitate end-to-end business processes supporting the life cycle of heterogeneous information objects. The platform topology is chosen according to the functionality of the integrated systems and the structure of the information object. To create a unified enterprise information environment, various topologies are discussed in detail, including peer-to-peer, message-broker, centralized, and hybrid topologies; each has its own characteristics and advantages, and the choice of the optimal one depends on the requirements and characteristics of the particular enterprise. The basis for describing an object is a complete data model that includes the defining attributes and the transformation rules corresponding to each of the integrated systems. Solving this problem requires the development of a specialized integration platform capable of processing data from production facilities on a centralized basis and facilitating their interaction in a unified information environment. Such a platform must take into account the characteristics of each system component, ensure the security of information exchange, and be able to scale and adapt to the changing needs of the enterprise. To implement the integration and create a unified digital environment of the enterprise, it is proposed to use the object model of an information support system for digital products; this model helps to structure information and to determine the relationships between the various components of the system. On this basis, and using special templates, the article proposes a methodology for forming policies, methods and documents (PMD), which is the foundation for organizing a unified digital environment of the enterprise. The methodology takes into account the requirements for security, consistency, and efficiency of the system and ensures standardization and consistency of processes within the enterprise.
Keywords: information production facilities, integration, digital environment, full data model, process automation
DOI: 10.26102/2310-6018/2024.44.1.015
The article examines the optimization of investment management in the formation and implementation of a development program for a multi-object organizational system. The stage considered is the transition from a development program executed over a certain time period to a new development program with a given planning horizon. It is shown that the investments are balanced at the moment of transition, and the need to rebalance them arises in the course of implementation. For the first problem, a multilevel system of balance conditions is formed, which serves as the basis for constructing optimization models of the balancing process. Since the lower level of the balance conditions is associated with the requirement to raise the development indicators of the system's objects to a certain value set by the managing center, the optimization problems rely on predictive estimates. These estimates are calculated either from the results of neural network modeling or by expert evaluation. When forming optimization models of the investment rebalancing process, two ways of detecting a deviation of the development indicator values from the planned growth trajectory are considered: at a given point in time, and when a threshold value is exceeded. In both cases, the point in time is determined at which the optimal strategy of investment allocation between time stages is adjusted so as to reach the given level of development indicators at the end point. The proposed transition thus makes it possible to optimize the distribution of investments within the development program both during balancing and during rebalancing.
Keywords: multi-object organization system, development program, investment, optimization, neural-network modeling, expert assessment
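A schematic reading of the rebalancing problem, with illustrative notation not taken from the paper: at the detection moment \(\tau\), the remaining investments are redistributed so that the predicted indicators still reach the targets set by the managing center:

\[
\min_{c_i(t)} \sum_{t=\tau}^{T}\sum_{i=1}^{n} c_i(t)
\quad \text{s.t.} \quad
\hat{y}_i(T) = f_i\bigl(c_i(\tau),\dots,c_i(T)\bigr) \ge y_i^{*},
\qquad
\sum_{i=1}^{n} c_i(t) \le C(t),
\]

where \(c_i(t)\) is the investment in object \(i\) at stage \(t\), \(C(t)\) the budget of the stage, \(y_i^{*}\) the target indicator level, and \(\hat{y}_i\) the predictive estimate obtained from neural network modeling or expert evaluation.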
DOI: 10.26102/2310-6018/2024.44.1.030
The increasing use of mobile technologies and devices as elements of distributed systems, intended to enhance the efficiency and convenience of access to various information systems and digital services, has made it necessary to improve methods and mechanisms of information protection and information security. One of the main security mechanisms is access control. The features of applying traditional (discretionary and mandatory) access control models in distributed information systems (IS) that include mobile systems (MS) as elements are analyzed. A thematic hierarchical model is proposed as the most effective model meeting the required security policy. For this access control model, an ontological method for forming trust-based rights to access objects is proposed, based on semantic proximity metrics. In traditional thematic hierarchical access control models, the logical information architecture of IS resources forms a thematic hierarchical classifier (categorizer). A Hasse diagram introduces order relations on the thematic classifier over the security lattice in order to form the trust-thematic privileges of IS users. Constructing Hasse diagrams on a security lattice that includes several security levels is a rather complex algorithmic task. To avoid the uncertainty caused by an incomplete Hasse diagram, and the overestimation of granted privileges when forming access rights, it is proposed to use the semantic proximity between the user's access request and the thematic heading of the hierarchical classifier. An analysis of existing approaches to constructing semantic proximity metrics has shown that proximity measures based on a hierarchy of concepts are the best metric for setting a user's trust-based privileges.
Keywords: mobile station, access control, hierarchical thematic classification, semantic proximity, semantic distance
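A small sketch of a hierarchy-based proximity measure of the kind the abstract favors: Wu-Palmer similarity between two headings of a thematic classifier tree, computed from their depths and the depth of their lowest common ancestor. The toy classifier below is an assumption for illustration.

```python
parent = {                                   # toy thematic classifier (assumed)
    "geology": "root", "geophysics": "root",
    "hydrogeology": "geology", "engineering-geology": "geology",
    "soil-mechanics": "engineering-geology",
}

def path_to_root(node: str) -> list[str]:
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path                              # node, ..., "root"

def wu_palmer(a: str, b: str) -> float:
    pa, pb = path_to_root(a), path_to_root(b)
    lca = next(n for n in pa if n in pb)     # lowest common ancestor
    depth = {n: len(path_to_root(n)) - 1 for n in (a, b, lca)}
    return 2 * depth[lca] / (depth[a] + depth[b])

# The closer a request topic is to a heading, the higher the trust-based
# access rights that may be granted for that heading's objects.
print(wu_palmer("soil-mechanics", "hydrogeology"))  # 0.4
```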