All sample problems deal with two-layer networks of standard backpropagation neurones and use Boolean logic. Since the aim of this project lies mainly in quantitative comparisons, all problems are abstract, well defined and can be scaled to arbitrary problem sizes.
The variables N, M and O refer to the number of input, hidden and output nodes of an N-M-O layer network; P denotes the number of training patterns.
The problem parameters are set by the upper-case command-line options of the corresponding programs; if an option is omitted, its default value is assumed. Only the problem-specific options are listed here; for the general genetic, backpropagation, sequential and parallel options, please refer to the corresponding sections.
The program names of the sequential backpropagation versions end in -back; the sequential and parallel versions of the genetic and the combined algorithm end in -seq and -par.
| Source:   | enc.c   | problem definition                        |
|-----------|---------|-------------------------------------------|
| Programs: | encseq  | sequential genetic implementation         |
|           | encpar  | parallel genetic implementation           |
|           | encback | sequential backpropagation implementation |
| Options:  | -N      | number of input and output nodes          |
|           | -M      | number of hidden nodes                    |
An N-M-N encoder/decoder reproduces input unit vectors at the output by finding a compressed binary intermediate representation in the hidden layer.
The encoder/decoder problem allows large networks to be trained with a relatively small training set, since the number of training patterns equals the number of input nodes.
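The following is a minimal sketch of how such an encoder/decoder training set can be generated; it is not taken from enc.c, and the function name and flat pattern layout are assumptions for illustration. Since input and target patterns are identical for this problem, one buffer serves both roles.

```c
#include <stdlib.h>

/* Sketch: builds the N unit-vector patterns of an N-M-N encoder/decoder.
 * Pattern p occupies pat[p*n .. p*n+n-1] and has only bit p set. */
double *enc_patterns(int n)
{
    double *pat = calloc((size_t)n * n, sizeof *pat);
    if (!pat)
        return NULL;
    for (int p = 0; p < n; p++)
        pat[p * n + p] = 1.0;   /* unit vector: only input/output bit p is 1 */
    return pat;
}
```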
| Source:   | cnt.c   | problem definition                        |
|-----------|---------|-------------------------------------------|
| Programs: | cntseq  | sequential genetic implementation         |
|           | cntpar  | parallel genetic implementation           |
|           | cntback | sequential backpropagation implementation |
| Options:  | -N      | number of input nodes                     |
|           | -M      | number of hidden nodes                    |
|           | -O      | number of output nodes                    |
The network counts the input nodes set to 1 and produces the count, binary encoded, at the output. If the number of output nodes O is set to 1, the output is 1 if an odd number of inputs is set to 1, and 0 otherwise. The 1-norm can thus be seen as a generalisation of the n-parity problem. In the case of 2 input nodes and 1 output node, this results in the 2-parity or exclusive-or (XOR) problem. Both the 1-norm and the n-parity problem have highly nonlinear error functions and are therefore a good test for the robustness of the training algorithm.
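As an illustration, the sketch below computes the 1-norm target for one input pattern; it is not the actual cnt.c, and the function name and the least-significant-bit-first output ordering are assumptions.

```c
#include <stdio.h>

/* Sketch: writes the 1-norm target for one input pattern, i.e. the number
 * of inputs set to 1, binary encoded on o output nodes (LSB first). */
void cnt_target(const int *input, int n, int o, double *target)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        count += (input[i] != 0);
    for (int j = 0; j < o; j++)
        target[j] = (double)((count >> j) & 1);
}

int main(void)
{
    int x[2] = { 1, 0 };
    double t[1];
    cnt_target(x, 2, 1, t);            /* N = 2, O = 1: the XOR case */
    printf("parity = %g\n", t[0]);     /* prints 1 */
    return 0;
}
```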
| Source:   | cmp.c   | problem definition                        |
|-----------|---------|-------------------------------------------|
| Programs: | cmpseq  | sequential genetic implementation         |
|           | cmppar  | parallel genetic implementation           |
|           | cmpback | sequential backpropagation implementation |
| Options:  | -N      | length of one operand                     |
|           | -M      | number of hidden nodes                    |
|           | -Q      | comparison operator to test for           |
The network compares the two binary encoded operands and sets the output node according to the comparison operator determined by Q. The 2N comparator is a relatively simple problem with a very large (P = 2^(2N)) and highly redundant training set and is therefore a good test for online learning (Section 4.1.1) or error estimation (Section 3.4.3).
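The sketch below builds one training pair for the 2N comparator; it is not taken from cmp.c, and the function name, the most-significant-bit-first operand encoding and the mapping of Q to the operators <, == and > are assumptions for illustration.

```c
/* Sketch: one 2N-comparator training pair.  The two N-bit operands fill the
 * 2N input nodes; the single output node is 1 iff the selected relation holds. */
void cmp_pattern(unsigned a, unsigned b, int n, int q,
                 double *input, double *target)
{
    for (int i = 0; i < n; i++) {
        input[i]     = (double)((a >> (n - 1 - i)) & 1);  /* operand a, MSB first */
        input[n + i] = (double)((b >> (n - 1 - i)) & 1);  /* operand b, MSB first */
    }
    int holds = (q == 0) ? (a < b) : (q == 1) ? (a == b) : (a > b);
    target[0] = (double)holds;
}
```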