The scientific journal Modeling, Optimization and Information Technology
Online media
ISSN 2310-6018

Hybrid neuroevolution as a way to train neural networks by the example of solving the maze problem

Berezina V.A.,  Mezentseva O.S.,  Ganshin K.Y. 

UDC 004.021
DOI: 10.26102/2310-6018/2021.34.3.014

  • Abstract
  • List of references
  • About authors

In this article, a neural network trained by hybrid neuroevolution solves the maze navigation problem. Hybrid neuroevolution combines differential evolution with novelty search. Algorithms that preserve the best solutions face the problem that the novelty estimates of these archived solutions do not change from generation to generation. This article addresses that problem by proposing two methods for adjusting the novelty estimates of solutions: novelty destruction and actualization of novelty rates. Novelty destruction allows novelty to diminish over time, so the search can keep evolving, while actualization of novelty rates recomputes the novelty of archived solutions in each generation. In tests on the maze navigation problem, both novelty destruction and actualization of novelty rates converged faster than standard search by objective function and plain novelty search.
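The two archive adjustments described above can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function names, the decay factor, and the use of an average distance to the k nearest neighbours as the novelty metric (the standard choice in novelty search) are all assumptions.

```python
import math

def novelty(behavior, others, k=3):
    """Average Euclidean distance from a behavior descriptor to its k
    nearest neighbours -- the standard novelty metric."""
    dists = sorted(math.dist(behavior, o) for o in others)
    return sum(dists[:k]) / min(k, len(dists))

def update_archive(archive, population, decay=0.9, k=3):
    """One generation's archive bookkeeping (illustrative names):
    - actualization: recompute each archived solution's novelty against
      the current population and the rest of the archive, instead of
      keeping the score frozen at the value it had when archived;
    - novelty destruction: multiply the recomputed score by a decay
      factor so that stored novelty diminishes over generations."""
    updated = []
    for i, (behavior, _) in enumerate(archive):
        others = population + [b for j, (b, _) in enumerate(archive) if j != i]
        updated.append((behavior, decay * novelty(behavior, others, k)))
    return updated
```

Here each behavior descriptor would be, for the maze task, something like the agent's final position; the evolutionary loop (differential evolution over network weights) would call `update_archive` once per generation.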

1. Lehman J., Stanley K.O. Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation. 2011;19(2):189-223.

2. Ahn C.W., Ramakrishna R.S. Elitism-based compact genetic algorithms. IEEE Transactions on Evolutionary Computation. 2003;7(4):367-385.

3. Bratton D., Kennedy J. Defining a standard for particle swarm optimization. IEEE Swarm Intelligence Symposium. 2007. SIS 2007. IEEE.

4. Storn R.M., Price K. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization. 1997;11(4):341-359.

5. Corucci F., Calisti M., Hauser H., Laschi C. Evolutionary discovery of self-stabilized dynamic gaits for a soft underwater legged robot. Advanced Robotics (ICAR), 2015 IEEE International Conference.

6. Reehuis E., Olhofer M., Sendhoff B., Back T. Novelty-guided restarts for diverse solutions in optimization of airfoils. A Bridge between Probability, Set-Oriented Numerics, and Evolutionary Computation, EVOLVE 2013.

7. Naredo E., Trujillo L., Martínez Y. Searching for novel classifiers. European Conference on Genetic Programming. EUROGP 2013.

8. Liapis A., Yannakakis G., Togelius J. Enhancements to constrained novelty search: Two-population novelty search for generating game content. Proceedings of the Genetic and Evolutionary Computation Conference. 2013.

9. Cuccu G., Gomez F. When novelty is not enough. Applications of Evolutionary Computation. EvoApplications 2011. Lecture Notes in Computer Science, vol 6624. Springer, Berlin, Heidelberg.

10. Yao X. Evolving artificial neural networks. Proceedings of the IEEE. 1999;87(9):1423-1447.

11. Stanley K.O., Miikkulainen R. Evolving neural networks through augmenting topologies. Evolutionary computation. 2002;10(2):99-127.

Berezina Victoria Andreevna

North-Caucasus Federal University

Stavropol, Russian Federation

Mezentseva Oksana Stanislavovna
Candidate of Physical and Mathematical Sciences, Professor of the Department of Information Systems and Technologies

North-Caucasus Federal University

Stavropol, Russian Federation

Ganshin Konstantin Yuryevich

North-Caucasus Federal University

Stavropol, Russian Federation

Keywords: neuroevolution, neural networks, maze, novelty search, differential evolution

For citation: Berezina V.A., Mezentseva O.S., Ganshin K.Y. Hybrid neuroevolution as a way to train neural networks by the example of solving the maze problem. Modeling, Optimization and Information Technology. 2021;9(3). URL: https://moitvivt.ru/ru/journal/pdf?id=1012 DOI: 10.26102/2310-6018/2021.34.3.014 (In Russ.)

Full text in PDF

Received 13.07.2021

Revised 25.09.2021

Accepted 23.09.2021

Published 30.09.2021