Running the Experiments

The experiments will be carried out on a network of 50 Sun SPARCstations. I will use the Stuttgart Neural Network Simulator (SNNS) package to simulate the neural networks. The genetic algorithm will be run with Genitor, a package developed by Dr. Darrell Whitley at Colorado State University, which I have modified to support several additional features: the ability to mate all members of the population in every generation (the original package performs only one mating per generation); the ability to choose whether both offspring of a mating, or only one of them, compete for a position in the next generation (in the original package, only one offspring, chosen randomly, is given the chance to compete); and the ability to perform all matings of a generation before computing the fitness of any offspring. A sketch of the resulting generation loop appears below.
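The following C sketch illustrates how these three options fit together in the modified generation loop. It is not Genitor's actual source; the types and helpers shown (Individual, select_parent, crossover, evaluate, insert_into_population, GENOME_LEN) are hypothetical placeholders chosen for clarity.

    #include <stdlib.h>

    #define POP_SIZE   50
    #define GENOME_LEN 64

    typedef struct { int genes[GENOME_LEN]; double fitness; } Individual;

    extern Individual *select_parent(void);
    extern void crossover(const Individual *p1, const Individual *p2,
                          Individual *c1, Individual *c2);
    extern double evaluate(Individual *ind);             /* trains/tests via SNNS */
    extern void insert_into_population(Individual *ind); /* rank-based insertion */

    void run_generation(int both_offspring_compete)
    {
        Individual kids[2 * POP_SIZE], pair[2];
        int n = 0, i;

        /* 1. Perform ALL matings first; fitness evaluation is deferred. */
        for (i = 0; i < POP_SIZE; i++) {
            crossover(select_parent(), select_parent(), &pair[0], &pair[1]);
            if (both_offspring_compete) {
                kids[n++] = pair[0];
                kids[n++] = pair[1];
            } else {
                kids[n++] = pair[rand() % 2];  /* one offspring, chosen at random */
            }
        }

        /* 2. Only now compute fitness; in the parallel version described
              below, each call is farmed out to an idle workstation. */
        for (i = 0; i < n; i++)
            kids[i].fitness = evaluate(&kids[i]);

        /* 3. Each evaluated offspring competes for a population slot. */
        for (i = 0; i < n; i++)
            insert_into_population(&kids[i]);
    }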

Both the neural network and the genetic algorithm packages were downloaded from sites on the Internet. I have programmed an interface between them by providing a fitness function in the genetic algorithm package that creates all the files SNNS needs, calls SNNS in batch mode, waits until SNNS finishes, and then reads the performance of the network from a file that SNNS creates at the end of its execution. I have tested this interface with an experiment that uses Genitor to search for good topologies for the two-bit XOR problem.
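A minimal sketch of such a fitness function is shown below, assuming SNNS is driven through its batch front end (invoked here as snnsbat with a control file and a log file). The file names, the helper functions that write the SNNS input files, and the format of the result file are placeholders, not necessarily the ones used in the actual interface.

    #include <stdio.h>
    #include <stdlib.h>

    extern void write_network_file(const char *path);  /* topology from genome */
    extern void write_pattern_file(const char *path);  /* training patterns    */
    extern void write_batch_config(const char *path);  /* training parameters  */

    double fitness(void)
    {
        FILE *fp;
        double error;

        /* 1. Create every file SNNS needs for this candidate network. */
        write_network_file("candidate.net");
        write_pattern_file("task.pat");
        write_batch_config("run.cfg");

        /* 2. Run SNNS in batch mode; system() blocks until it finishes. */
        if (system("snnsbat run.cfg run.log") != 0)
            return 1e9;                     /* training failed: worst score */

        /* 3. Read the network's performance from the file SNNS wrote. */
        fp = fopen("result.txt", "r");
        if (fp == NULL || fscanf(fp, "%lf", &error) != 1)
            error = 1e9;                    /* unreadable result: worst score */
        if (fp != NULL)
            fclose(fp);

        /* Here a lower error means a fitter network; whether Genitor
           minimizes or maximizes depends on its configuration. */
        return error;
    }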

As might be expected, the experiment in which neural networks are trained on a language learning task is much more complicated than the XOR problem mentioned above. For this reason, I have made one more modification to the Genitor package. The original package evaluates the fitness of an offspring immediately after it is created; for a language learning task, a single evaluation can take several hours. To speed up the evaluation of a complete generation, I have modified Genitor so that it first creates all the offspring of a generation and then evaluates each of them. This alone would make no difference if SNNS were running on a single CPU. The advantage comes from code I have added to the fitness computation that looks for an idle CPU on the network of 50 SPARCstations and assigns it the task of training and evaluating one neural network. In this way, up to 50 neural networks can be evaluated at the same time, which greatly reduces the time needed to evaluate a complete generation. This parallel version of the fitness function has also been tested on the XOR problem.
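The sketch below shows one way such a dispatch scheme can be structured in C, assuming remote execution via rsh over a fixed host list, as was common on networked Sun workstations. The host list, the idle-CPU test, and the remote program evaluate_net (which would run SNNS on the remote machine and write its result to a shared filesystem) are all hypothetical placeholders, not the project's actual code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define N_HOSTS 50

    extern const char *hosts[N_HOSTS];          /* the 50 workstation names */
    extern int host_is_idle(const char *host);  /* e.g. parse rsh <host> uptime */

    /* Train and evaluate offspring `id` on the first idle workstation.
       Returns the pid of the local child that supervises the remote run. */
    pid_t dispatch_evaluation(int id)
    {
        int i;
        pid_t pid;
        char cmd[256];

        for (;;) {                              /* wait until some CPU is free */
            for (i = 0; i < N_HOSTS; i++)
                if (host_is_idle(hosts[i]))
                    goto found;
            sleep(10);
        }
    found:
        pid = fork();
        if (pid == 0) {                         /* child: block on the remote job */
            sprintf(cmd, "rsh %s evaluate_net %d", hosts[i], id);
            exit(system(cmd) != 0);
        }
        return pid;                             /* parent returns immediately */
    }

    /* Evaluate a whole generation: dispatch every offspring, then wait
       for all supervising children, i.e. for all remote jobs, to finish. */
    void evaluate_generation(int n_offspring)
    {
        int i;
        for (i = 0; i < n_offspring; i++)
            dispatch_evaluation(i);
        while (wait(NULL) > 0)
            ;
    }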
