C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95



Example of Encoding

Suppose the two fuzzy sets we use to encode have the fit vectors (0.3, 0.7, 0.4, 0.2) and (0.4, 0.3, 0.9). Then the matrix W is obtained by using max–min composition as follows.

     0.3                     min(0.3,0.4) min(0.3,0.3) min(0.3,0.9)   0.3 0.3 0.3
 W = 0.7  [0.4 0.3 0.9]   =  min(0.7,0.4) min(0.7,0.3) min(0.7,0.9) = 0.4 0.3 0.7
     0.4                     min(0.4,0.4) min(0.4,0.3) min(0.4,0.9)   0.4 0.3 0.4
     0.2                     min(0.2,0.4) min(0.2,0.3) min(0.2,0.9)   0.2 0.2 0.2
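
As a quick check of the encoding, the following standalone sketch (ours, not part of the book's fuzzyam.cpp listing; the names a, b, and w are arbitrary) computes w[i][j] = min(a[i], b[j]) for the two fit vectors above and prints the matrix W just shown.

// Illustrative sketch: encode W with w[i][j] = min(a[i], b[j]).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> a = {0.3, 0.7, 0.4, 0.2};    // first fit vector
    std::vector<double> b = {0.4, 0.3, 0.9};         // second fit vector
    std::vector<std::vector<double>> w(a.size(), std::vector<double>(b.size()));
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < b.size(); ++j)
            w[i][j] = std::min(a[i], b[j]);          // encoding step
    for (size_t i = 0; i < a.size(); ++i) {          // prints the four rows of W shown above
        for (size_t j = 0; j < b.size(); ++j)
            std::printf("%.1f ", w[i][j]);
        std::printf("\n");
    }
    return 0;
}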

Recall for the Example

If we input the fit vector (0.3, 0.7, 0.4, 0.2), the output (b1, b2, b3) is determined as follows, using bj = max(min(a1, w1j), …, min(am, wmj)), where m is the dimension of the ‘a’ fit vector, and wij is the ith row, jth column element of the matrix W.

     b1 = max(min(0.3, 0.3), min(0.7, 0.4), min(0.4, 0.4),
          min(0.2, 0.2)) = max(0.3, 0.4, 0.4, 0.2) = 0.4
     b2 = max(min(0.3, 0.3), min(0.7, 0.3), min(0.4, 0.3),
          min(0.2, 0.2)) = max(0.3, 0.3, 0.3, 0.2) = 0.3
     b3 = max(min(0.3, 0.3), min(0.7, 0.7), min(0.4, 0.4),
          min(0.2, 0.2)) = max(0.3, 0.7, 0.4, 0.2) = 0.7
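
These values can also be obtained programmatically. The sketch below (again ours, for illustration only) applies the recall rule bj = max over i of min(ai, wij) to the W of the encoding example.

// Illustrative sketch of recall: b[j] = max over i of min(a[i], w[i][j]).
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> a = {0.3, 0.7, 0.4, 0.2};
    std::vector<std::vector<double>> w = {{0.3, 0.3, 0.3},    // W from the encoding example
                                          {0.4, 0.3, 0.7},
                                          {0.4, 0.3, 0.4},
                                          {0.2, 0.2, 0.2}};
    std::vector<double> b(3, 0.0);
    for (size_t j = 0; j < b.size(); ++j)
        for (size_t i = 0; i < a.size(); ++i)
            b[j] = std::max(b[j], std::min(a[i], w[i][j]));   // max-min composition
    std::printf("%.1f %.1f %.1f\n", b[0], b[1], b[2]);        // prints: 0.4 0.3 0.7
    return 0;
}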

The output vector (0.4, 0.3, 0.7) is not the same as the second fit vector used, namely (0.4, 0.3, 0.9), but it is a subset of it, so the recall is not perfect. If you input the vector (0.4, 0.3, 0.7) in the opposite direction, using the transpose of the matrix W, the output is (0.3, 0.7, 0.4, 0.2), showing resonance. If, on the other hand, you input (0.4, 0.3, 0.9) at that end, the output vector is (0.3, 0.7, 0.4, 0.2), which in turn produces (0.4, 0.3, 0.7) in the other direction, at which point there is resonance. Can we foresee these results? The following section explains this further.

Recall

Let us use the operator o to denote max–min composition, and recall that the height of a fit vector is its largest component. When the weight matrix W is encoded from the fit vectors U and V as above, perfect recall occurs as follows:

(i)  U o W = V if and only if height(U) ≥ height(V).
(ii)  V o W^T = U if and only if height(V) ≥ height(U).

Also note that if X and Y are arbitrary fit vectors with the same dimensions as U and V, then:

(iii)  X o W ⊆ V.
(iv)  Y o W^T ⊆ U.


A ⊆ B is the notation to say A is a subset of B.

In the previous example, the height of (0.3, 0.7, 0.4, 0.2) is 0.7, and the height of (0.4, 0.3, 0.9) is 0.9. Therefore (0.4, 0.3, 0.9) as input produced (0.3, 0.7, 0.4, 0.2) as output, but (0.3, 0.7, 0.4, 0.2) as input produced only a subset of (0.4, 0.3, 0.9). That both (0.4, 0.3, 0.7) and (0.4, 0.3, 0.9) gave the same output, (0.3, 0.7, 0.4, 0.2), is in accordance with the corollary to the above, which states that if (X, Y) is a fuzzy associative memory, and if X is a subset of X’, then (X’, Y) is also a fuzzy associative memory.
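
This corollary is easy to verify numerically. The sketch below (ours, for illustration; transposeRecall is a hypothetical helper name) runs both (0.4, 0.3, 0.7) and its superset (0.4, 0.3, 0.9) backward through W and confirms that each recalls (0.3, 0.7, 0.4, 0.2).

// Checking that both (0.4,0.3,0.7) and (0.4,0.3,0.9) recall (0.3,0.7,0.4,0.2)
// through the transpose of W.
#include <algorithm>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

// a[i] = max over j of min(b[j], w[i][j])  -- recall through W transpose
Vec transposeRecall(const Vec& b, const std::vector<Vec>& w) {
    Vec a(w.size(), 0.0);
    for (size_t i = 0; i < w.size(); ++i)
        for (size_t j = 0; j < b.size(); ++j)
            a[i] = std::max(a[i], std::min(b[j], w[i][j]));
    return a;
}

int main() {
    std::vector<Vec> w = {{0.3, 0.3, 0.3}, {0.4, 0.3, 0.7},
                          {0.4, 0.3, 0.4}, {0.2, 0.2, 0.2}};
    for (const Vec& b : {Vec{0.4, 0.3, 0.7}, Vec{0.4, 0.3, 0.9}}) {
        Vec a = transposeRecall(b, w);
        for (double v : a) std::printf("%.1f ", v);   // 0.3 0.7 0.4 0.2 both times
        std::printf("\n");
    }
    return 0;
}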

C++ Implementation

We use the classes we created for BAM implementation in C++, except that we call the neuron class fzneuron, and we do not need some of the methods or functions in the network class. The header file, the source file, and the output from an illustrative run of the program are given in the following. The header file is called fuzzyam.hpp, and the source file is called fuzzyam.cpp.

Program Details

The program details are analogous to the program details given in Chapter 8. The computations are done with fuzzy logic. Unlike in the nonfuzzy version, a single exemplar fuzzy vector pair is used here. There are no transformations to bipolar versions, since the vectors are fuzzy and not binary and crisp.

A neuron in the first layer is referred to as anrn, and the number of neurons in this layer is referred to as anmbr. bnrn is the name we give to the array of neurons in the second layer, and bnmbr denotes the size of that array. The sequence of operations in the program is as follows:

  We ask the user to input the exemplar vector pair.
  We give the network the X vector in the exemplar pair. We find the activations of the elements of the bnrn array and get the corresponding output vector as a fit vector. If this is the Y in the exemplar pair, the network has made the desired association in one direction, and we go on to the next step. Otherwise we have a potential associated pair, one of which is X and the other is what we just got as the output vector in the opposite layer. We say potential associated pair because we have the next step to confirm the association.
  We run the bnrn array through the transpose of the weight matrix and calculate the outputs of the anrn array elements. If, as a result, we get the vector X as the anrn array, we have found an associated pair, (X, Y). Otherwise, we repeat the two steps just described until we find an associated pair.
  We now work with the next pair of exemplar vectors in the same manner, to find an associated pair.
  We assign serial numbers, denoted by the variable idn, to the associated pairs so we can print them all together at the end of the program. The pair is called (X, Y), where X produces Y through the weight matrix W, and Y produces X through the transpose of W.
  A flag is used with value 0 until confirmation of association is obtained, at which point the value of the flag changes to 1.
  Functions compr1 and compr2 in the network class verify whether the potential pair is indeed an associated pair and set the proper value of the flag mentioned above.
  Functions comput1 and comput2 in the network class carry out the calculations to get the activations and then find the output vector, in the respective directions of the fuzzy associative memory network; a simplified sketch of this max–min propagation is given after this list.
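
The condensed sketch below is ours: the free functions forwardPass and backwardPass are hypothetical stand-ins for the roles of comput1 and comput2, and the confirmation test stands in for compr1 and compr2 with the flag described above. The book's fuzzyam.cpp organizes this logic in a network class built from fzneuron arrays.

// Condensed sketch of the encode / propagate / confirm flow described above.
#include <algorithm>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// b[j] = max over i of min(a[i], w[i][j])   (role of comput1)
Vec forwardPass(const Vec& a, const Mat& w) {
    Vec b(w[0].size(), 0.0);
    for (size_t j = 0; j < b.size(); ++j)
        for (size_t i = 0; i < a.size(); ++i)
            b[j] = std::max(b[j], std::min(a[i], w[i][j]));
    return b;
}

// a[i] = max over j of min(b[j], w[i][j])   (role of comput2, using W transpose)
Vec backwardPass(const Vec& b, const Mat& w) {
    Vec a(w.size(), 0.0);
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = 0; j < b.size(); ++j)
            a[i] = std::max(a[i], std::min(b[j], w[i][j]));
    return a;
}

int main() {
    Vec x = {0.3, 0.7, 0.4, 0.2}, y = {0.4, 0.3, 0.9};
    Mat w(x.size(), Vec(y.size()));
    for (size_t i = 0; i < x.size(); ++i)             // encoding, as in the example
        for (size_t j = 0; j < y.size(); ++j)
            w[i][j] = std::min(x[i], y[j]);

    // Bounce the pattern back and forth until it no longer changes (flag = 1).
    Vec b = forwardPass(x, w), a = backwardPass(b, w);
    int flag = 0;
    while (flag == 0) {
        Vec b2 = forwardPass(a, w), a2 = backwardPass(b2, w);
        if (a2 == a && b2 == b) flag = 1;             // association confirmed (role of compr1/compr2);
        a = a2;                                       // exact comparison is fine here, since every
        b = b2;                                       // value is a copy of an original fit value
    }
    return 0;                                         // (a, b) is the associated pair
}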

A lot of the code from the bidirectional associative memory (BAM) is used for the FAM. Here are the listings, with comments added where there are differences between this code and the code for the BAM of Chapter 8.

