The given examples are only a small but representative selection of all test runs and measurements performed for this project; they illustrate the advantages and disadvantages of genetic methods for the training of neural networks.
The theoretical analysis and the obtained practical results suggest the following conclusions:
- Despite its inherently parallel nature, the backpropagation algorithm cannot be implemented efficiently on distributed memory systems for small and medium network sizes: its high interdependencies lead to a low computation/communication ratio (see the estimate after this list).
- An efficient parallel implementation of the genetic algorithm on distributed memory systems is possible if the number of training patterns is sufficiently high (a sketch of the pattern-parallel scheme follows this list).
- Due to its stochastic nature, the standard genetic
algorithm can handle highly nonlinear problems
and usually finds the global minimum of the error
function if the population is sufficiently large.
- Backpropagation is a gradient descent method and therefore very sensitive to nonlinearities in the error function.
Even on very small nonlinear problems like XOR, the algorithm may converge to a local minimum and fail (the XOR sketch after this list demonstrates this).
- The necessary population size and number of generations for the standard genetic algorithm increase dramatically with the problem size and render the method impractical for larger networks.
- Provided that the problem is sufficiently linear, the backpropagation algorithm normally converges reasonably fast.
However, the actual speed depends very much on the simulation parameters and on the initial weight values.
- Unlike standard backpropagation, the combined genetic backpropagation algorithm is relatively robust against nonlinearities, can dynamically adapt its learning and momentum rates, and can be efficiently parallelised on distributed memory systems (see the hybrid sketch after this list).
- Compared to the standard genetic algorithm, the combined algorithm requires smaller populations and fewer generations, which allows its use for bigger networks and larger training sets.
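
To make the first conclusion concrete, consider a rough order-of-magnitude estimate (an illustration under simplifying assumptions, not a measurement from this project). For a fully connected layer of $n$ neurons distributed over $p$ processors, the forward pass costs about $n^2/p$ multiply-accumulate operations per processor and pattern, while each processor must exchange on the order of $n$ activation values, so

\[
  \frac{\text{computation}}{\text{communication}}
  \sim \frac{n^2/p}{n} = \frac{n}{p},
\]

which becomes small exactly in the regime of small and medium networks, and shrinks further as processors are added.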
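
The pattern-parallel evaluation mentioned in the second conclusion can be sketched as follows. This is a minimal illustration using mpi4py and numpy, not the project's original implementation; the single-layer error model and the pattern-slicing scheme are assumptions made for brevity.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def net_error(weights, inputs, targets):
    # Sum-of-squares error of a single-layer tanh net
    # (placeholder for the real network evaluation).
    outputs = np.tanh(inputs @ weights)
    return float(np.sum((outputs - targets) ** 2))

def parallel_fitness(population, inputs, targets):
    # Every node holds the whole population, but only every
    # size-th training pattern; errors are summed globally.
    local_in, local_tg = inputs[rank::size], targets[rank::size]
    local_err = np.array([net_error(w, local_in, local_tg)
                          for w in population])
    total_err = np.empty_like(local_err)
    comm.Allreduce(local_err, total_err, op=MPI.SUM)
    return total_err   # identical on every node

The only communication per generation is one accumulated error value per individual, independent of the number of training patterns, which is why the speed-up grows with the training-set size.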
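
The sensitivity of backpropagation to the initial weights and to local minima can be reproduced with a few lines of code. The following is a minimal sketch training a 2-2-1 sigmoid network on XOR from different random initialisations; the network size, learning rate, epoch count and seeds are illustrative choices, not the parameters used in this project. Some runs typically reach the global minimum while others stall at a higher error.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seed, eta=0.5, epochs=20000):
    rng = np.random.default_rng(seed)
    W1, W2 = rng.normal(0, 1, (2, 2)), rng.normal(0, 1, (2, 1))
    b1, b2 = np.zeros(2), np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # hidden activations
        y = sigmoid(h @ W2 + b2)            # network output
        d2 = (y - T) * y * (1 - y)          # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)      # hidden-layer delta
        W2 -= eta * h.T @ d2
        b2 -= eta * d2.sum(axis=0)
        W1 -= eta * X.T @ d1
        b1 -= eta * d1.sum(axis=0)
    return float(np.mean((y - T) ** 2))

for seed in range(8):
    print(f"seed {seed}: final MSE = {train(seed):.4f}")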
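
Finally, a hedged sketch of the combined genetic backpropagation idea: each individual carries its own learning and momentum rates besides the weights, is refined by a few backpropagation steps per generation, and competes on the resulting error, so the rates themselves are adapted by selection. Population size, mutation width, selection scheme and the number of refinement steps are illustrative assumptions; crossover is omitted for brevity.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.default_rng(0)

def unpack(genome):
    # genome: 9 weights of a 2-2-1 net (incl. biases) + eta + alpha
    W1, b1 = genome[:4].reshape(2, 2), genome[4:6]
    W2, b2 = genome[6:8].reshape(2, 1), genome[8:9]
    eta = abs(genome[9])
    alpha = min(abs(genome[10]), 0.9)   # keep momentum stable
    return W1, b1, W2, b2, eta, alpha

def refine(genome, steps=50):
    # A few backpropagation steps with the individual's own rates
    # (Lamarckian refinement: the improved weights are inherited).
    W1, b1, W2, b2, eta, alpha = unpack(genome.copy())
    vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
    for _ in range(steps):
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        d2 = (y - T) * y * (1 - y)
        d1 = (d2 @ W2.T) * h * (1 - h)
        grads = [X.T @ d1, d1.sum(0), h.T @ d2, d2.sum(0)]
        for p, v, g in zip((W1, b1, W2, b2), vel, grads):
            v *= alpha
            v -= eta * g
            p += v                      # momentum update
    err = float(np.sum((y - T) ** 2))
    new = np.concatenate([W1.ravel(), b1, W2.ravel(), b2,
                          [eta, alpha]])
    return new, err

pop = [rng.normal(0, 1, 11) for _ in range(20)]
for gen in range(30):
    scored = sorted((refine(g) for g in pop), key=lambda s: s[1])
    parents = [g for g, _ in scored[:10]]    # truncation selection
    # Offspring: mutated copies; the rate genes mutate as well,
    # so eta and alpha adapt over the generations.
    pop = parents + [parents[rng.integers(10)] + rng.normal(0, 0.1, 11)
                     for _ in range(10)]
print("best error:", scored[0][1])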