An AI agent that plays Slither.io using the NeuroEvolution of Augmenting Topologies (NEAT) algorithm, with OpenAI Gym/Universe integration.
This project implements a neural network-based AI that learns to play Slither.io through evolutionary algorithms. Using NEAT, the agent evolves increasingly sophisticated strategies over generations, demonstrating machine learning concepts in a real-time game environment.
- Python - Core implementation
- NEAT-Python - Neuroevolution algorithm
- OpenAI Gym/Universe - Game environment integration
- Neural Networks - Decision-making architecture
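Slither.io is steered by pointing the mouse, so the network's decision layer has to be turned into a screen coordinate. As a minimal sketch of one way to do that (the one-output-per-direction discretization, screen center, and radius here are illustrative assumptions, not the project's actual scheme):

```python
import math

def output_to_action(outputs, center=(270, 235), radius=50):
    """Map a network's output vector to a mouse target on a circle
    around the snake's head. Assumes one output unit per candidate
    direction; the unit with the highest activation wins."""
    i = max(range(len(outputs)), key=outputs.__getitem__)
    angle = 2 * math.pi * i / len(outputs)
    x = center[0] + radius * math.cos(angle)
    y = center[1] + radius * math.sin(angle)
    return int(round(x)), int(round(y))
```

The returned point can then be sent to the environment as a mouse-move action each frame.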
- Autonomous gameplay with visual input processing
- Evolutionary learning with population-based training
- Real-time decision making for movement and survival
- Fitness optimization for score maximization
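A fitness function for this kind of agent typically rewards the in-game score while giving some credit for simply staying alive, so early generations that only avoid dying still make progress. A hedged sketch (the weights and the exact form are assumptions):

```python
def episode_fitness(score, frames_survived, *,
                    score_weight=1.0, survival_weight=0.01):
    """Hypothetical fitness: final game score plus a small bonus
    per frame survived. Weights are illustrative, not tuned values
    from this project."""
    return score_weight * score + survival_weight * frames_survived
```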
- The neural network receives the game state (snake position, food, obstacles) as input
- NEAT evolves both the network topology and the connection weights over generations
- The fittest agents survive and reproduce, with mutation introducing variation
- Over generations, the agent gradually develops survival and scoring strategies
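The survive-and-reproduce step above can be sketched in a few lines. This is a deliberately simplified generational loop over flat weight vectors; NEAT's defining features (topology mutation, speciation, crossover) are omitted for brevity, and the elite fraction and mutation scale are illustrative assumptions:

```python
import random

def evolve(population, fitness_fn, elite_frac=0.2, sigma=0.5, rng=None):
    """One generation of a simplified evolutionary step: rank genomes
    by fitness, keep the top fraction, and refill the population with
    Gaussian-mutated copies of the survivors."""
    rng = rng or random.Random()
    ranked = sorted(population, key=fitness_fn, reverse=True)
    n_elite = max(1, int(len(population) * elite_frac))
    elites = ranked[:n_elite]
    children = []
    while len(elites) + len(children) < len(population):
        parent = rng.choice(elites)
        children.append([w + rng.gauss(0, sigma) for w in parent])
    return elites + children
```

In the real project this loop would be driven by NEAT-Python's `Population.run`, with fitness computed by playing episodes in the Gym/Universe environment.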
- Reinforcement learning fundamentals
- Evolutionary algorithm implementation
- Neural network architecture design
- Game AI development