
About this Project

This project deals with the use of genetic algorithms for the training of neural networks. Gradient descent methods, of which backpropagation is the most popular, can be very fast and, since they normally need no global data, are well suited to run on specialised hardware such as neural chips. They do, however, have certain drawbacks: they are ``greedy'' (i.e. they only search in the locally best direction), which can lead to convergence problems on highly nonlinear problems, and the strong interdependence of their weight updates makes them hard to parallelize on a distributed memory system, since it raises communication costs.
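
To illustrate the ``greedy'' character of such methods, the following minimal sketch in C (with hypothetical names; it is not taken from the project's code) shows a plain gradient-descent weight update. Every step moves strictly in the direction of steepest descent at the current point; there is no mechanism for escaping local minima.

    /* Minimal sketch of a plain gradient-descent weight update
     * (hypothetical names, for illustration only).
     * Each step moves in the locally steepest-descent direction,
     * which is what makes the method "greedy". */
    #include <stddef.h>

    void gradient_step(double *weights, const double *gradient,
                       size_t n, double learning_rate)
    {
        size_t i;
        for (i = 0; i < n; i++)
            weights[i] -= learning_rate * gradient[i];
    }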

Genetic algorithms, on the other hand, are very robust and explore the search space more uniformly. Moreover, since every individual is evaluated independently, they are perfectly suited to run on a distributed memory machine such as the 16-transputer network for which the parallel versions of the simulations have been written.
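
The following sketch (again in C, with assumed types and names rather than the project's actual code) shows why the evaluation phase of a genetic algorithm parallelizes so naturally: each individual's fitness depends only on its own genome, so the loop iterations can be handed out to separate processors, with communication needed only to distribute genomes and collect scores.

    /* Sketch of the evaluation phase of a genetic algorithm
     * (hypothetical types and names, for illustration only). */
    #include <stddef.h>

    typedef struct {
        double *genome;   /* encoded network weights */
        double  fitness;  /* score assigned by evaluation */
    } Individual;

    /* user-supplied fitness function,
     * e.g. the network's error on a training set */
    typedef double (*FitnessFn)(const double *genome, size_t len);

    void evaluate_population(Individual *pop, size_t pop_size,
                             size_t genome_len, FitnessFn f)
    {
        size_t i;
        /* each iteration is independent of all others */
        for (i = 0; i < pop_size; i++)
            pop[i].fitness = f(pop[i].genome, genome_len);
    }

On a transputer network, each iteration of this loop could run on a separate node.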

This report gives a brief description of the genetic, backpropagation and combined algorithms used, and presents their serial and parallel implementations, the usage of the corresponding programs, the statistical data gathered from test runs, and the overall conclusions that can be drawn from them.


