I read the original paper on NEAT, learned how a basic neural network works, and went from there. No libraries or frameworks related to neural nets were used.
Unlike most other neural-net systems, NEAT doesn't start from a predefined number of nodes or a fixed depth; it starts from the bare minimum and works its way up. I implemented this in Python, and I've added much more functionality than was described in the original paper.
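To make the "start from the bare minimum" idea concrete, here is a minimal sketch of a genome that begins with only inputs wired directly to outputs and grows by splitting connections. The names (`Genome`, `add_node_mutation`) and details are my own illustration, not the project's actual code or the paper's exact pseudocode.

```python
import random

class Genome:
    """A bare-minimum NEAT-style genome: inputs wired straight to outputs."""

    def __init__(self, n_inputs, n_outputs):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.next_node = n_inputs + n_outputs
        # connections: {(src, dst): weight} -- the minimal starting topology
        self.connections = {
            (i, n_inputs + o): random.uniform(-1, 1)
            for i in range(n_inputs)
            for o in range(n_outputs)
        }

    def add_node_mutation(self):
        # Split an existing connection and insert a new node in the middle;
        # this is how NEAT grows depth incrementally instead of starting deep.
        (src, dst), w = random.choice(list(self.connections.items()))
        del self.connections[(src, dst)]
        new = self.next_node
        self.next_node += 1
        self.connections[(src, new)] = 1.0  # weight 1 roughly preserves behaviour
        self.connections[(new, dst)] = w

g = Genome(3, 1)      # 3 inputs, 1 output: just 3 connections to start
g.add_node_mutation() # one hidden node appears, nothing more
```

Each mutation adds only one node, so complexity is introduced gradually rather than assumed up front.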
The program creates a set of groups, and within each group a set of neural nets. Each net is then tested, and the strongest groups are allotted more nets while the weaker ones are given fewer. However, every group always keeps a minimum number of nets, and the only way a group can die is by failing to make progress for a given number of generations. New nets are created based on how successful the group's own nets were, and to rank them a quicksort is performed. A new net can come either from a single mutated net or as the offspring of two different nets. Successful groups can branch off into their own groups, with their own sets of nets, if they become different enough. After enough generations the top performers get good at what they do, though this can take a long time depending on the processor.
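The generation loop described above can be sketched roughly as follows: sort each group's nets by fitness, retire groups that have stalled too long, then hand out offspring in proportion to each group's success, never dropping below the minimum. All names, thresholds, and the dict layout here are illustrative assumptions, not the project's actual implementation.

```python
import random

MIN_NETS = 2           # assumed floor of nets per surviving group
STAGNATION_LIMIT = 15  # assumed generations without progress before a group dies

def step_generation(groups, pop_size):
    """groups: list of {"nets": [(id, fitness), ...], "best": float, "stalled": int}."""
    survivors = []
    for g in groups:
        # rank the group's own nets (the quicksort step in the writeup)
        g["nets"].sort(key=lambda nf: nf[1], reverse=True)
        best = g["nets"][0][1]
        if best > g["best"]:
            g["best"], g["stalled"] = best, 0
        else:
            g["stalled"] += 1
        if g["stalled"] < STAGNATION_LIMIT:  # stagnation is the only way to die
            survivors.append(g)

    total = sum(g["best"] for g in survivors) or 1.0
    for g in survivors:
        # allotment proportional to the group's success, but never below the floor
        n = max(MIN_NETS, round(pop_size * g["best"] / total))
        parents = [nid for nid, _ in g["nets"][: max(1, len(g["nets"]) // 2)]]
        children = []
        for _ in range(n):
            if len(parents) > 1 and random.random() < 0.5:
                a, b = random.sample(parents, 2)     # offspring of two different nets
                children.append((f"{a}x{b}", 0.0))
            else:
                p = random.choice(parents)           # a single mutated net
                children.append((f"{p}'", 0.0))
        g["nets"] = children
    return survivors
```

The branching of a successful group into a new species of its own is omitted here for brevity; in NEAT that split is decided by a genome-distance measure.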
I used Godot to let these nets compete in games, since making games is far easier in Godot than in pure Python. The games took very little time compared to the NEAT system itself, which was quite the project. I am proud to have fulfilled my dream of creating my own artificial intelligence, even if it was built on the foundation of a classic neural network paper.