C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Performance of the Perceptron

When you input the coordinates of the vertex G, which has 1 for each coordinate, the first hidden-layer neuron aggregates these inputs and gets a value of 2.2. Since 2.2 exceeds the threshold of the first neuron in the hidden layer (1.8), that neuron fires, and its output of 1 becomes an input to the output neuron on the connection with weight 0.6. But you need the activations of the other hidden-layer neurons as well. Table 5.7 summarizes the performance of the network with the coordinates of G as its inputs.

Table 5.7 Results with Coordinates of Vertex G as Input

Vertex/        Hidden Layer   Weighted   Comment   Activation   Contribution     Output
Coordinates    Neuron #       Sum                               to Output Sum    Sum

G: 1, 1, 1     1              2.2        >1.8      1            0.6
               2              -0.8       <0.05     0            0
               3              -1.4       <-0.2     0            0                0.6

The weighted sum at the output neuron is 0.6, which is greater than the threshold value of 0.5. Therefore, the output neuron fires, and at the vertex G the function evaluates to +1.
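The following C++ fragment is a minimal sketch of this computation. The thresholds and the output-layer weights are the ones used above; the hidden-layer weights are inferred from the weighted sums in Tables 5.7 and 5.8 and should be treated as illustrative rather than as the book’s exact values.

#include <iostream>

// A minimal sketch of the two-layer network just described. The
// hidden-layer weights are inferred from the weighted sums in
// Tables 5.7 and 5.8 and are illustrative; the thresholds and
// output weights are the ones used in the text.
const double hiddenWeights[3][3] = {
    {  1.0,  1.0, 0.2 },   // into hidden neuron 1
    {  0.1, -1.0, 0.3 },   // into hidden neuron 2
    { -1.0, -1.0, 0.6 }    // into hidden neuron 3
};
const double hiddenThresholds[3] = { 1.8, 0.05, -0.2 };
const double outputWeights[3]    = { 0.6, 0.3, 0.6 };
const double outputThreshold     = 0.5;

// Returns +1 if the output neuron fires, -1 otherwise.
int evaluate(const int x[3]) {
    double outputSum = 0.0;
    for (int j = 0; j < 3; ++j) {
        double sum = 0.0;                   // weighted sum at hidden neuron j
        for (int i = 0; i < 3; ++i)
            sum += hiddenWeights[j][i] * x[i];
        if (sum > hiddenThresholds[j])      // hidden neuron j fires
            outputSum += outputWeights[j];  // activation 1 times its weight
    }
    return (outputSum > outputThreshold) ? 1 : -1;
}

int main() {
    int g[3] = { 1, 1, 1 };   // vertex G
    std::cout << "Function value at G: " << evaluate(g) << std::endl;  // prints 1
    return 0;
}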

Table 5.8 shows the performance of the network with the rest of the vertices of the cube. You will notice that the network computes a value of +1 at the vertices O, A, F, and G, and a –1 at the rest.

Table 5.8 Results with Other Inputs

Vertex/        Hidden Layer   Weighted   Comment   Activation   Contribution     Output
Coordinates    Neuron #       Sum                               to Output Sum    Sum

O: 0, 0, 0     1              0          <1.8      0            0
               2              0          <0.05     0            0
               3              0          >-0.2     1            0.6              0.6*

A: 0, 0, 1     1              0.2        <1.8      0            0
               2              0.3        >0.05     1            0.3
               3              0.6        >-0.2     1            0.6              0.9*

B: 0, 1, 0     1              1          <1.8      0            0
               2              -1         <0.05     0            0
               3              -1         <-0.2     0            0                0

C: 0, 1, 1     1              1.2        <1.8      0            0
               2              0.2        >0.05     1            0.3
               3              -0.4       <-0.2     0            0                0.3

D: 1, 0, 0     1              1          <1.8      0            0
               2              0.1        >0.05     1            0.3
               3              -1         <-0.2     0            0                0.3

E: 1, 0, 1     1              1.2        <1.8      0            0
               2              0.4        >0.05     1            0.3
               3              -0.4       <-0.2     0            0                0.3

F: 1, 1, 0     1              2          >1.8      1            0.6
               2              -0.9       <0.05     0            0
               3              -2         <-0.2     0            0                0.6*


*The output neuron fires, as this value is greater than 0.5 (the threshold value); the function value is +1.
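Replacing main() in the sketch above with the following loop reproduces this classification over all eight vertices. (Because the hidden-layer weights are inferred, one or two intermediate weighted sums come out slightly different from the printed entries, but the final +1 or –1 value agrees at every vertex.)

int main() {
    // Enumerate the eight vertices of the unit cube and print the
    // function value at each, reproducing Table 5.8.
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int c = 0; c <= 1; ++c) {
                int v[3] = { a, b, c };
                std::cout << "(" << a << ", " << b << ", " << c << ") -> "
                          << evaluate(v) << "\n";
            }
    return 0;
}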

Other Two-layer Networks

Many important neural network models have two layers. The feedforward backpropagation network, in its simplest form, is one example. Grossberg and Carpenter’s ART1 paradigm uses a two-layer network. The Counterpropagation network has a Kohonen layer followed by a Grossberg layer. Bidirectional Associative Memory (BAM), the Boltzmann machine, Fuzzy Associative Memory, and Temporal Associative Memory are other two-layer networks. For autoassociation, a single-layer network can do the job, but for heteroassociation or other such mappings you need at least a two-layer network, as the sketch below illustrates. We will give more details on these models shortly.
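Here is a minimal sketch of the standard outer-product (BAM-style) construction for heteroassociation; the pattern pair and the dimensions are made up for the example. Because the input and output patterns have different lengths, the weight matrix necessarily connects two distinct layers.

#include <iostream>

const int N = 4;   // size of the input (first-layer) pattern
const int M = 3;   // size of the output (second-layer) pattern

int main() {
    // One bipolar (+1/-1) pattern pair to store: x -> y.
    int x[N] = { +1, -1, +1, -1 };
    int y[M] = { +1, +1, -1 };

    // Weight matrix from the outer product of x and y.
    int W[N][M];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < M; ++j)
            W[i][j] = x[i] * y[j];

    // Recall: present x to the first layer; the second layer
    // thresholds the weighted sums to recover y.
    for (int j = 0; j < M; ++j) {
        int sum = 0;
        for (int i = 0; i < N; ++i)
            sum += x[i] * W[i][j];
        std::cout << ((sum >= 0) ? +1 : -1) << " ";  // recovers y[j]
    }
    std::cout << std::endl;
    return 0;
}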

Many Layer Networks

Kunihiko Fukushima’s Neocognitron, noted for identifying handwritten characters, is an example of a network with several layers. Some of the previously mentioned networks can also be made multilayer by the addition of more hidden layers. It is also possible to combine two or more neural networks into one network by creating appropriate connections between the layers of one subnetwork and those of the others. This would certainly create a multilayer network.

Connections Between Layers

You have already seen some differences in the way connections are made between neurons in a neural network. In the Hopfield network, every neuron was connected to every other neuron in the single layer the network contained. In the Perceptron, neurons within the same layer were not connected with one another; the connections were between the neurons in one layer and those in the next layer. In the former case, the connections are described as lateral. In the latter case, the connections are forward, and the signals are fed forward within the network.

Two other possibilities also exist. All the neurons in any layer may have extra connections, with each neuron connected to itself. The second possibility is that there are connections from the neurons in one layer to the neurons in a previous layer, in which case signals are fed both forward and backward. This occurs if feedback is a feature of the network model. The type of layout for the network neurons and the type of connections between the neurons constitute the architecture of the particular model of the neural network. The short sketch below labels these connection types.
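The following small sketch, which is illustrative and not from the book, classifies a single connection using the terminology just introduced.

#include <iostream>

// Labels a connection by the layers and neurons it joins.
enum class ConnectionType { Lateral, Recurrent, Forward, Feedback };

ConnectionType classify(int fromLayer, int fromNeuron,
                        int toLayer, int toNeuron) {
    if (fromLayer == toLayer)
        return (fromNeuron == toNeuron) ? ConnectionType::Recurrent  // self-connection
                                        : ConnectionType::Lateral;   // within one layer
    return (toLayer > fromLayer) ? ConnectionType::Forward           // signal fed forward
                                 : ConnectionType::Feedback;         // signal fed backward
}

int main() {
    // A connection from neuron 0 of layer 1 back to neuron 2 of layer 0
    // is a feedback connection.
    bool isFeedback = classify(1, 0, 0, 2) == ConnectionType::Feedback;
    std::cout << std::boolalpha << isFeedback << std::endl;  // prints true
    return 0;
}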


