
Topology

From the most general point of view, artificial neural networks can be seen as function networks with a certain topology. A topology can be defined as a directed graph $G$, consisting of a set of nodes ${\bf N}$ (the neurones) and a set of transitions ${\bf T}$, which represent directed connections between the nodes. A graph is called cyclic if there exists a sequence of transitions which begins and ends at the same node.


\begin{displaymath}G=({\bf N},{\bf T}), \qquad {\bf N}=\{N_1,N_2, \ldots N_n\},
\qquad {\bf T} \subseteq {\bf N}^2 \end{displaymath}
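
The graph definition above can be made concrete with a short sketch. The following Python fragment is only illustrative and not part of the original text; the class name Topology and its methods are assumptions, and is_cyclic tests, via a depth-first search, whether some sequence of transitions begins and ends at the same node.

\begin{verbatim}
# Illustrative sketch only: a topology graph G = (N, T) with a cycle test.
# The class and method names are assumptions, not taken from the text.

class Topology:
    def __init__(self, nodes, transitions):
        self.nodes = set(nodes)              # N = {N_1, N_2, ... N_n}
        self.transitions = set(transitions)  # T subset of N x N, directed edges

    def predecessors(self, i):
        """The index set M_i = {l | (N_l, N_i) in T}."""
        return {l for (l, j) in self.transitions if j == i}

    def is_cyclic(self):
        """True iff some sequence of transitions begins and ends at the same node."""
        successors = {n: {j for (l, j) in self.transitions if l == n}
                      for n in self.nodes}
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {n: WHITE for n in self.nodes}

        def visit(n):
            colour[n] = GREY
            for m in successors[n]:
                if colour[m] == GREY:                    # back edge: cycle found
                    return True
                if colour[m] == WHITE and visit(m):
                    return True
            colour[n] = BLACK
            return False

        return any(colour[n] == WHITE and visit(n) for n in self.nodes)
\end{verbatim}

For instance, Topology({1,2,3}, {(1,2),(2,3),(3,1)}).is_cyclic() yields True, since the three transitions form a closed path.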

Each neurone $N_i$ in the topology graph is associated with a state $s(N_i)=s_i$, which represents its current activation, i.e. its output state $O_i$, while the vector $I_i$ of the states of all nodes from which a transition leads to $N_i$ is called the input state of $N_i$. The set of possible states ${\bf S}$ can be any real interval or any finite set; if ${\bf S}=\{0,1\}$, the network is called Boolean or logical. The propagation function $f_i$ of the neurone $N_i$ maps the input state $I_i$ to the output state $O_i$. If $I_i$ has dimension 0, then $f_i$ and $O_i$ are constant.


\begin{displaymath}I_i=(x_1,x_2, \ldots x_k), \qquad O_i=s_i,
\qquad f_i: {\bf S}^k \rightarrow {\bf S} \end{displaymath}


\begin{displaymath}\mbox{with} \quad x_j=s_{l_j}, \quad k=\vert{\bf M}_i\vert,
\quad l_1 < l_2 < \ldots < l_k, \quad l_j \in {\bf M}_i,
\quad {\bf M}_i=\{l\,\vert\, (N_l,N_i) \in {\bf T}\} \end{displaymath}
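
To illustrate how a single neurone's state is updated, here is a hedged sketch under the above definitions: it collects the input state $I_i$ from the predecessor indices ${\bf M}_i$ and applies the propagation function $f_i$. The function name propagate and the dictionaries s and f are assumptions introduced only for this example.

\begin{verbatim}
# Illustrative sketch only: one propagation step O_i = f_i(I_i) for neurone N_i.
# s maps node index -> current state s_l; f maps node index -> propagation
# function f_i, which takes the input tuple I_i = (s_{l_1}, ... s_{l_k}).

def propagate(transitions, s, f, i):
    M_i = sorted(l for (l, j) in transitions if j == i)  # M_i = {l | (N_l,N_i) in T}
    I_i = tuple(s[l] for l in M_i)                       # input state, k = |M_i|
    return f[i](I_i)                                     # constant output if k = 0

# Boolean example (S = {0,1}): neurone 3 computes the AND of neurones 1 and 2.
# propagate({(1,3), (2,3)}, {1: 1, 2: 0, 3: 0}, {3: lambda I: int(all(I))}, 3) == 0
\end{verbatim}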

A set ${\bf I}$ of $p$ nodes is defined as the input nodes; the vector $I$ of their states is the input state or input vector of the network. (Normally it is also demanded that ${\bf I}$ satisfies $\{B \,\vert\, (A,B)\in {\bf T}\} \cap {\bf I} = \emptyset$, i.e. that no transition leads into an input node.) A set ${\bf O}$ of $q$ nodes is defined as the output nodes; their state vector $O$ is the output vector of the network.


\begin{displaymath}{\bf I} = \{ X_1,X_2, \ldots X_p\}, \qquad X_i \in {\bf N},
\qquad I = (s(X_1), s(X_2), \ldots s(X_p)) \end{displaymath}


\begin{displaymath}{\bf O} = \{ Y_1,Y_2, \ldots Y_q\}, \qquad Y_i \in {\bf N},
\qquad O = (s(Y_1), s(Y_2), \ldots s(Y_q)) \end{displaymath}
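
As a final illustrative sketch (the names set_input and read_output are assumptions, not from the text), the input vector is written into the state map for the input nodes, and the output vector is read from the output nodes:

\begin{verbatim}
# Illustrative sketch only: clamping the input nodes and reading the outputs.

def set_input(s, input_nodes, I):
    """Write the input vector I = (s(X_1), ... s(X_p)) into the state map s."""
    for X, value in zip(input_nodes, I):
        s[X] = value

def read_output(s, output_nodes):
    """Return the output vector O = (s(Y_1), ... s(Y_q))."""
    return tuple(s[Y] for Y in output_nodes)
\end{verbatim}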


