C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



Program Output

Four input vectors are used in the trial run of the program; they are specified in the main function. The output is largely self-explanatory, so we have added only a few comments on it in this text. These comments are enclosed within strings of asterisks; they are not part of the program output itself. Table 10.1 summarizes how the network categorized the inputs. Keep in mind that the neurons in any layer with n neurons are numbered 0 to n – 1, not 1 to n.

Table 10.1 Categorization of Inputs

input            winner in F2 layer
0 1 0 0 0 0      0, no reset
1 0 1 0 1 0      1, no reset
0 0 0 0 1 0      2, after reset
1 0 1 0 1 0      3, after reset

The input pattern 0 0 0 0 1 0 is considered a subset of the pattern 1 0 1 0 1 0 in the sense that wherever the first pattern has a 1, the second pattern also has a 1. Of course, the second pattern has 1’s in other positions as well. Conversely, the pattern 1 0 1 0 1 0 is considered a superset of the pattern 0 0 0 0 1 0. The pattern 1 0 1 0 1 0 is repeated as input after the pattern 0 0 0 0 1 0 is processed in order to see what happens with this superset. In both cases, the degree of match falls short of the vigilance parameter, and a reset is needed.

Here’s the output of the program:

THIS PROGRAM IS FOR AN ADAPTIVE RESONANCE THEORY 1-NETWORK. THE NETWORK
IS SET UP FOR ILLUSTRATION WITH SIX INPUT NEURONS AND SEVEN OUTPUT NEURONS.
*************************************************************
Initialization of connection weights and F1 layer activations. F1 layer
connection weights are all chosen to be equal to a random value subject
to the conditions given in the algorithm. Similarly, F2 layer connection
weights are all chosen to be equal to a random value subject to the
conditions given in the algorithm.
*************************************************************
weights for F1 layer neurons:
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1.964706  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706

weights for F2 layer neurons:
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444

activations of F1 layer neurons:
-0.357143 -0.357143 -0.357143 -0.357143 -0.357143 -0.357143
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
0 1 0 0 0 0

activations of F1 layer neurons:
0   0.071429   0   0   0   0

outputs of F1 layer neurons:
0   1   0   0   0   0

winner is 0
activations of F2 layer neurons:
0.344444   0.344444   0.344444   0.344444   0.344444   0.344444   0.344444

outputs of F2 layer neurons:
1   0   0   0   0   0   0

activations of F1 layer neurons:
-0.080271   0.013776   -0.080271   -0.080271   -0.080271   -0.080271

outputs of F1 layer neurons:
0   1   0   0   0   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95

weights for F1 layer neurons:
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
1  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706
0  1.964706  1.964706  1.964706  1.964706  1.964706  1.964706

winner is 0

weights for F2 layer neurons:
0  1  0  0  0  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444

learned vector # 1  :
0  1  0  0  0  0
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0

activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0

outputs of F1 layer neurons:
1   0   1   0   1   0

winner is 1
activations of F2 layer neurons:
0   1.033333   1.033333   1.033333   1.033333   1.033333   1.033333

outputs of F2 layer neurons:
0   1   0   0   0   0   0

activations of F1 layer neurons:
0.013776   -0.080271   0.013776   -0.080271   0.013776   -0.080271

outputs of F1 layer neurons:
1   0   1   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95

weights for F1 layer neurons:
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
1  0  1.964706  1.964706  1.964706  1.964706  1.964706
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
0  0  1.964706  1.964706  1.964706  1.964706  1.964706
0  1  1.964706  1.964706  1.964706  1.964706  1.964706
0  0  1.964706  1.964706  1.964706  1.964706  1.964706

winner is 1

weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444

learned vector # 2  :
1  0  1  0  1  0
*************************************************************
A new input vector and a new iteration
*************************************************************
Input vector is:
0 0 0 0 1 0

activations of F1 layer neurons:
0   0   0   0   0.071429   0

outputs of F1 layer neurons:
0   0   0   0   1   0

winner is 1
activations of F2 layer neurons:
0   0.666667   0.344444   0.344444   0.344444   0.344444   0.344444

outputs of F2 layer neurons:
0   1   0   0   0   0   0

activations of F1 layer neurons:
-0.189655   -0.357143   -0.189655   -0.357143   -0.060748   -0.357143

outputs of F1 layer neurons:
0   0   0   0   0   0

degree of match: 0 vigilance:  0.95
winner is 1 reset required
*************************************************************
Input vector repeated after reset, and a new iteration
*************************************************************
Input vector is:
0 0 0 0 1 0

activations of F1 layer neurons:
0   0   0   0   0.071429   0

outputs of F1 layer neurons:
0   0   0   0   1   0

winner is 2
activations of F2 layer neurons:
0   0.666667   0.344444   0.344444   0.344444   0.344444   0.344444
outputs of F2 layer neurons:
0   0   1   0   0   0   0

activations of F1 layer neurons:
-0.080271   -0.080271   -0.080271   -0.080271   0.013776   -0.080271

outputs of F1 layer neurons:
0   0   0   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95

weights for F1 layer neurons:
0  1  0  1.964706  1.964706  1.964706  1.964706
1  0  0  1.964706  1.964706  1.964706  1.964706
0  1  0  1.964706  1.964706  1.964706  1.964706
0  0  0  1.964706  1.964706  1.964706  1.964706
0  1  1  1.964706  1.964706  1.964706  1.964706
0  0  0  1.964706  1.964706  1.964706  1.964706

winner is 2

weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0  0  0  0  1  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444

learned vector # 3  :
0  0  0  0  1  0
*************************************************************
An earlier input vector (the second one above) is retried after trying a
subset vector, and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0

activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0

outputs of F1 layer neurons:
1   0   1   0   1   0

winner is 1
activations of F2 layer neurons:
0   2   1   1.033333   1.033333   1.033333   1.033333

outputs of F2 layer neurons:
0   1   0   0   0   0   0

activations of F1 layer neurons:
-0.060748   -0.357143   -0.060748   -0.357143   -0.060748   -0.357143

outputs of F1 layer neurons:
0   0   0   0   0   0

degree of match: 0 vigilance:  0.95
winner is 1 reset required
*************************************************************
Input vector repeated after reset, and a new iteration
*************************************************************
Input vector is:
1 0 1 0 1 0

activations of F1 layer neurons:
0.071429   0   0.071429   0   0.071429   0

outputs of F1 layer neurons:
1   0   1   0   1   0

winner is 3
activations of F2 layer neurons:
0   2   1   1.033333   1.033333   1.033333   1.033333

outputs of F2 layer neurons:
0   0   0   1   0   0   0

activations of F1 layer neurons:
0.013776   -0.080271   0.013776   -0.080271   0.013776   -0.080271

outputs of F1 layer neurons:
1   0   1   0   1   0
*************************************************************
Top-down and bottom-up outputs at F1 layer match, showing resonance.
*************************************************************
degree of match: 1 vigilance:  0.95

weights for F1 layer neurons:
0  1  0  1  1.964706  1.964706  1.964706
1  0  0  0  1.964706  1.964706  1.964706
0  1  0  1  1.964706  1.964706  1.964706
0  0  0  0  1.964706  1.964706  1.964706
0  1  1  1  1.964706  1.964706  1.964706
0  0  0  0  1.964706  1.964706  1.964706

winner is 3

weights for F2 layer neurons:
0  1  0  0  0  0
0.666667  0  0.666667  0  0.666667  0
0  0  0  0  1  0
0.666667  0  0.666667  0  0.666667  0
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444
0.344444  0.344444  0.344444  0.344444  0.344444  0.344444

learned vector # 4  :
1  0  1  0  1  0



Copyright © IDG Books Worldwide, Inc.