Automated Feature Selection in Neuroevolution

Maxine Yen Ling TAN, Michael Hartley, Michel Bister, Rudi Deklerck

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Feature selection is a task of great importance. Many feature selection methods have been proposed; they can be divided broadly into two groups according to their dependence on the learning algorithm/classifier. Recently, Whiteson et al. proposed Feature Selective NeuroEvolution of Augmenting Topologies (FS-NEAT), a method that selects features at the same time as it evolves the neural networks that use those features as inputs. In this paper, a novel feature selection method called Feature Deselective NeuroEvolution of Augmenting Topologies (FD-NEAT) is presented. FD-NEAT begins with fully connected inputs in its networks and drops irrelevant or redundant inputs as evolution progresses. Herein, the performances of FD-NEAT, FS-NEAT and traditional NEAT are compared on several mathematical problems and in a challenging race-car simulator domain (RARS). On the whole, the results show that FD-NEAT significantly outperforms FS-NEAT in terms of network performance and feature selection, and evolves networks that offer the best compromise between network size and performance.
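The core idea described in the abstract — start with every input connected and let a mutation operator disable input links as evolution proceeds — can be illustrated with a minimal sketch. All names and parameters below (e.g. `drop_input_mutation`, `p_drop`) are illustrative assumptions, not taken from the paper; a real FD-NEAT genome would also carry topology and weight genes and interleave this mutation with fitness-based selection:

```python
import random

def make_genome(n_inputs):
    """Hypothetical FD-NEAT-style genome: one enabled link flag per input.

    Unlike FS-NEAT, which starts sparse and adds input links,
    this starts with all inputs connected (fully connected inputs).
    """
    return {i: True for i in range(n_inputs)}

def drop_input_mutation(genome, rng, p_drop=0.3):
    """With probability p_drop, disable one randomly chosen enabled
    input link -- a sketch of 'deselecting' an irrelevant feature.
    Keeps at least one input enabled so the network stays usable."""
    enabled = [i for i, on in genome.items() if on]
    if len(enabled) > 1 and rng.random() < p_drop:
        genome[rng.choice(enabled)] = False
    return genome

# Toy evolution loop: in the actual algorithm, deselection survives only
# when fitness does not suffer, which is what prunes redundant features.
rng = random.Random(0)
genome = make_genome(5)
for _ in range(20):
    drop_input_mutation(genome, rng)
selected = [i for i, on in genome.items() if on]
```

After the loop, `selected` holds the surviving (still-connected) input features; under selection pressure these would be the relevant ones.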
Original language: English
Pages (from-to): 271-292
Number of pages: 22
Journal: Evolutionary Intelligence
Volume: 1
Publication status: Published - Feb 2009

Keywords

  • Neural networks
  • Genetic algorithms
  • Evolution
  • Learning
