
Principles of Neural Networks

In the most general case, neural networks consist of an (often very large) number of neurones, each of which has a number of inputs that are mapped via a relatively simple function to its output. Networks differ in the way their neurones are interconnected (topology), in the way the output of a neurone is determined from its inputs (propagation function) and in their temporal behaviour (synchronous, asynchronous or continuous).

While the temporal behaviour is normally determined by the simulation hardware and software used, and the topology very often remains unchanged, the propagation function is associated with a set of variable parameters which either reflect the relative importance of the different inputs (weights) or describe a threshold value of the output (bias).
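A minimal sketch of such a propagation function, assuming the common weighted-sum model with a sigmoid squashing function (the function name and numerical values are illustrative, not taken from the text):

```python
import math

def neurone_output(inputs, weights, bias):
    """Map a neurone's inputs to its output via a weighted sum.

    The weights give the relative importance of each input; the bias
    acts as a threshold value shifting the activation.
    """
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing function

# Example: two inputs with weights 2 and -1 and a bias (threshold) of -1
y = neurone_output([1.0, 0.0], [2.0, -1.0], -1.0)
```

With a zero activation the sigmoid yields exactly 0.5; larger weighted sums push the output towards 1, smaller ones towards 0.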

The most striking difference between neural networks and traditional programming is that neural networks are in fact not programmed at all, but are able to ``learn'' by example, extracting and generalising features of a presented training set and correlating them with the desired output. After a certain training period, the net should be able to produce the right output also for new input values which are not part of the training set.

This learning process is accomplished by a training algorithm which successively changes the parameters (i.e. the weights and the bias) of each neurone until the desired result, typically expressed as a maximum distance between the actual and the training output, is achieved. These algorithms can be subdivided into two major groups according to the data used to update the neurone parameters. Local algorithms (e.g. the Perceptron Learning Algorithm (PLA), backpropagation) restrict themselves to the local data at the in- and outputs of each neurone, while global methods (e.g. simulated annealing) also use overall data (e.g. statistical information).
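The local update rule of the Perceptron Learning Algorithm mentioned above can be sketched as follows; the function names and the AND-gate training set are illustrative assumptions, not part of the original text:

```python
def step(x):
    """Threshold propagation function of a single perceptron."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=100, rate=0.1):
    """Train one neurone on (inputs, target) pairs; return (weights, bias).

    The update uses only local data: each neurone's own inputs and the
    error between its actual and its training output.
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learning by example: the logical AND function as training set
and_set = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_set)
```

Since AND is linearly separable, the weights and bias converge to values that reproduce the training outputs exactly.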

The main application areas of neural networks are pattern recognition, picture and speech processing, artificial intelligence and robotics. As a scientific concept, they also play a role in other disciplines such as theoretical physics and chaos theory.



(c) Bernhard Ömer - oemer@tph.tuwien.ac.at - http://tph.tuwien.ac.at/~oemer/