In accordance with this model, the activation potential φ is the weighted sum of the neuron's inputs:

φ = Σ_i w_i u_i,

where u_i are the input signals and w_i the corresponding weights. The signal φ is then processed by an activation function, which can take different shapes. If the function is linear, the output signal can be described as

y = kφ,

where k is a constant. Neural networks described by the above formula are called linear neural networks.

Another type of activation function is the threshold function:

y = 1 for φ ≥ φ_h, y = 0 for φ < φ_h,

where φ_h is a given constant threshold value.

Functions that describe the non-linear characteristic of the biological neuron's activation more accurately are the sigmoid function

y = 1 / (1 + e^(−βφ)),

where β is a parameter, and the hyperbolic tangent function

y = tanh(αφ),

where α is a parameter. The next picture presents the graphs of the particular activation functions:
- linear function,
- threshold function,
- sigmoid function.
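The activation potential and the activation functions above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the original notes; the function names and default parameter values (k, β, α, φ_h) are assumptions.

```python
import math

def activation_potential(weights, inputs):
    # phi = sum_i w_i * u_i  (weighted sum of the neuron's inputs)
    return sum(w * u for w, u in zip(weights, inputs))

def linear(phi, k=1.0):
    # linear activation: y = k * phi
    return k * phi

def threshold(phi, phi_h=0.0):
    # threshold activation: y = 1 if phi >= phi_h, else 0
    return 1.0 if phi >= phi_h else 0.0

def sigmoid(phi, beta=1.0):
    # sigmoid activation: y = 1 / (1 + exp(-beta * phi))
    return 1.0 / (1.0 + math.exp(-beta * phi))

def tanh_act(phi, alpha=1.0):
    # hyperbolic tangent activation: y = tanh(alpha * phi)
    return math.tanh(alpha * phi)

# Example: two inputs with hypothetical weights.
phi = activation_potential([0.5, -0.2], [1.0, 2.0])  # ≈ 0.1
```

Note that the sigmoid and hyperbolic tangent differ in output range: (0, 1) versus (−1, 1).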
Neural networks can also be classified by their structure:
- feedforward networks
- one-layer networks
- multi-layer networks
- recurrent networks
- cellular networks
Feedforward neural networks, a typical example of which is the one-layer perceptron (see figure of
Single-layer perceptron), consist of neurons arranged in layers. The information flows in one direction:
neurons of a layer are connected only to the neurons of the preceding layer. Multi-layer
networks usually consist of an input layer, one or more hidden layers, and an output layer. Such a system may be treated
as a non-linear function approximation block: y = f(u).
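The forward pass of such a multi-layer network can be sketched as follows. This is a minimal illustration, assuming sigmoid neurons and hypothetical weight matrices; it is not tied to any particular network from the notes.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(weights, inputs):
    # Each row of `weights` feeds one neuron: y_j = sigmoid(sum_i w_ji * u_i).
    return [sigmoid(sum(w * u for w, u in zip(row, inputs)))
            for row in weights]

def feedforward(u, hidden_w, output_w):
    # y = f(u): information flows one way, input -> hidden -> output.
    h = layer_forward(hidden_w, u)
    return layer_forward(output_w, h)

# Example with made-up weights: 2 inputs, 2 hidden neurons, 1 output.
y = feedforward([1.0, 0.0],
                [[0.5, -0.5], [1.0, 1.0]],   # hidden layer weights
                [[1.0, -1.0]])               # output layer weights
```

Because every layer applies a non-linear activation, the composed mapping y = f(u) is a non-linear function of the input vector.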
Recurrent neural networks. Such networks have at least one feedback loop: output signals of
a layer are connected back to its inputs, which causes dynamic effects during network operation. The input signals of a layer
consist of the external inputs and the output states (from the previous step) of that layer. The figure below depicts
the structure of a recurrent network.
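The feedback described above can be sketched as a simple update loop. This is an illustrative sketch only (tanh neurons and the weight layout are assumptions): at each step the layer's input vector is the external input concatenated with the layer's own output from the previous step.

```python
import math

def tanh_layer(weights, inputs):
    # One layer of tanh neurons; each row of `weights` feeds one neuron.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def recurrent_steps(inputs_seq, weights, n_neurons):
    # Feedback loop: the layer sees [external input + its previous output].
    state = [0.0] * n_neurons          # output states start at zero
    outputs = []
    for u in inputs_seq:
        state = tanh_layer(weights, u + state)
        outputs.append(state)
    return outputs

# Example: one neuron, one external input per step, made-up weights
# (first weight for the input, second for the fed-back output).
outs = recurrent_steps([[1.0], [0.0]], [[0.5, 0.3]], 1)
```

The dynamic effect is visible in the second step: even though the external input is zero, the output is non-zero because the previous state is fed back.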
Cellular networks. In this type of neural network the neurons are arranged in a lattice, and connections
(usually non-linear) appear only between the closest neurons. A typical example of such a network is the Kohonen
Self-Organising Map.
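A single training step of a Kohonen Self-Organising Map illustrates the "closest neurons" idea: only the best-matching neuron and its lattice neighbours are updated. This is a minimal sketch under assumed simplifications (a 1-D lattice, a fixed learning rate and neighbourhood radius), not the full SOM algorithm.

```python
def som_step(lattice, x, lr=0.5, radius=1):
    # One Kohonen SOM update: find the best-matching unit (BMU), then
    # pull the BMU and its lattice neighbours toward the input vector x.
    dists = [sum((w - xi) ** 2 for w, xi in zip(node, x)) for node in lattice]
    bmu = dists.index(min(dists))
    for i, node in enumerate(lattice):
        if abs(i - bmu) <= radius:   # only the closest neurons are updated
            lattice[i] = [w + lr * (xi - w) for w, xi in zip(node, x)]
    return bmu

# Example: three neurons on a 1-D lattice, one-dimensional weights.
lattice = [[0.0], [1.0], [5.0]]
bmu = som_step(lattice, [0.9])       # neuron 1 is closest to 0.9
```

Repeating this step over many inputs (while shrinking lr and radius) makes neighbouring lattice neurons respond to similar inputs, which is the self-organising property.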

MSc. Adam Gołda, AGH-UST, 2005