C++ Neural Networks and Fuzzy Logic
by Valluru B. Rao
M&T Books, IDG Books Worldwide, Inc.
ISBN: 1558515526   Pub Date: 06/01/95
  



The stability and robustness of this system were checked by using over 1000 moving time windows of 3-month, 6-month, and 12-month duration over the 4.5-year interval and noting the standard deviations in profits and the maximum drawdown. The maximum drawdown varied from 30 to 48 basis points.

Neural Nets versus Box-Jenkins Time-Series Forecasting

Ramesh Sharda and Rajendra Patil used a standard 12-12-1 feedforward backpropagation network and compared the results with the Box-Jenkins methodology for time-series forecasting. Box-Jenkins forecasting is a traditional time-series forecasting technique. The authors used 75 different time series for evaluation. The results showed that neural networks achieved a better MAPE (mean absolute percentage error), with a mean MAPE over all 75 time series of 14.67, versus 15.94 for the Box-Jenkins approach.
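MAPE is simply the average of the absolute forecast errors expressed as percentages of the actual values. The following is a minimal C++ sketch (not code from the study) of how you could compute it for one series; averaging the result over 75 series gives figures like those quoted above.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// Mean absolute percentage error for one series: the average of
// |actual - forecast| / |actual|, expressed as a percentage.
// Assumes no actual value is zero.
double mape(const std::vector<double>& actual,
            const std::vector<double>& forecast) {
    double sum = 0.0;
    for (std::size_t i = 0; i < actual.size(); ++i)
        sum += std::fabs(actual[i] - forecast[i]) / std::fabs(actual[i]);
    return 100.0 * sum / actual.size();
}

int main() {
    std::vector<double> actual   = {100.0, 102.0, 101.0, 105.0};
    std::vector<double> forecast = { 98.0, 103.0, 100.0, 107.0};
    std::cout << "MAPE = " << mape(actual, forecast) << "%\n";
    return 0;
}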

Neural Nets versus Regression Analysis

Leorey Marquez et al. compared neural network modeling with standard regression analysis. The authors used a feedforward backpropagation network with a structure of 1-6-1. They used three functional forms found in regression analysis:

1.  Y = B0 + B1 X + e
2.  Y = B0 + B1 log(X) + e
3.  Y = B0 + B1/X + e

For each of these forms, 100 pairs of (x, y) data were generated from the “true” model.

Now the neural network was trained on these 100 pairs of data. An additional 100 data points were generated to test the forecasting ability of the network. The results showed that the neural network achieved a MAPE within 0.6% of the true model, which is a very good result. The neural network model approximated the linear model best. An experiment was also done with intentional mis-specification of some data points. The neural network model did well in these cases also, though comparatively worse in the reciprocal model case.
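As an illustration of how such test data could be produced (the parameter values, noise level, and range of X shown here are assumptions, not those of the study), the following sketch generates 100 (x, y) pairs from each of the three functional forms:

#include <cmath>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

int main() {
    std::mt19937 rng(42);
    // Hypothetical parameter values and noise level.
    const double b0 = 1.0, b1 = 2.0;
    std::uniform_real_distribution<double> xDist(1.0, 10.0);
    std::normal_distribution<double> noise(0.0, 0.1);

    std::vector<std::pair<double, double>> linear, logarithmic, reciprocal;
    for (int i = 0; i < 100; ++i) {
        double x = xDist(rng);
        linear.emplace_back(x, b0 + b1 * x + noise(rng));                // Y = B0 + B1 X + e
        logarithmic.emplace_back(x, b0 + b1 * std::log(x) + noise(rng)); // Y = B0 + B1 log(X) + e
        reciprocal.emplace_back(x, b0 + b1 / x + noise(rng));            // Y = B0 + B1/X + e
    }
    std::cout << "Generated " << linear.size() << " pairs per functional form\n";
    return 0;
}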

Hierarchical Neural Network

Mendelsohn developed a multilevel neural network as shown in Figure 14.9. Here, five neural networks are arranged so that the outputs of four networks feed a fifth, final network. The four networks are trained to produce the High, Low, short-term trend, and medium-term trend for a particular financial instrument. The final network takes these four outputs as input and produces a turning point indicator.


Figure 14.9  Hierarchical neural network system to predict turning points.
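To make the data flow of Figure 14.9 concrete, here is a minimal structural sketch, with hypothetical class and function names rather than Mendelsohn's code, of four first-level networks whose outputs form the input vector of the final turning-point network:

#include <iostream>
#include <vector>

// Hypothetical stand-in for a trained feedforward network: it maps an
// input vector to a single output value.  In the real system these would
// be the trained High, Low, short-term trend, medium-term trend, and
// final turning-point networks.
struct TrainedNetwork {
    double predict(const std::vector<double>& inputs) const {
        double sum = 0.0;                      // placeholder forward pass
        for (double v : inputs) sum += v;
        return inputs.empty() ? 0.0 : sum / inputs.size();
    }
};

// The four first-level outputs become the input vector of the final network.
double turningPointIndicator(const TrainedNetwork& high,
                             const TrainedNetwork& low,
                             const TrainedNetwork& shortTrend,
                             const TrainedNetwork& mediumTrend,
                             const TrainedNetwork& finalNet,
                             const std::vector<double>& marketInputs) {
    std::vector<double> level1 = {high.predict(marketInputs),
                                  low.predict(marketInputs),
                                  shortTrend.predict(marketInputs),
                                  mediumTrend.predict(marketInputs)};
    return finalNet.predict(level1);
}

int main() {
    TrainedNetwork high, low, shortTrend, mediumTrend, finalNet;
    std::vector<double> inputs = {0.2, -0.1, 0.05, 0.3};
    std::cout << "Turning point indicator: "
              << turningPointIndicator(high, low, shortTrend, mediumTrend,
                                       finalNet, inputs) << "\n";
    return 0;
}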

Each network was trained and tested with 1200 fact days spanning 1988 to 1992 (33% used for testing). Preprocessing was accomplished by taking differences of the inputs and by applying some technical analysis studies (a short code sketch of these steps follows the list):

  Moving averages
  Exponential moving averages
  Stochastic indicators
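
Here is a small sketch of the differencing and moving-average steps (the window length and smoothing constant are assumptions, not values reported by Mendelsohn; the stochastic indicator is omitted):

#include <cstddef>
#include <iostream>
#include <vector>

// First differences: x[t] - x[t-1].
std::vector<double> differences(const std::vector<double>& x) {
    std::vector<double> d;
    for (std::size_t t = 1; t < x.size(); ++t)
        d.push_back(x[t] - x[t - 1]);
    return d;
}

// Simple moving average over a window of n points.
std::vector<double> movingAverage(const std::vector<double>& x, std::size_t n) {
    std::vector<double> ma;
    for (std::size_t t = n - 1; t < x.size(); ++t) {
        double sum = 0.0;
        for (std::size_t k = t + 1 - n; k <= t; ++k) sum += x[k];
        ma.push_back(sum / n);
    }
    return ma;
}

// Exponential moving average with smoothing constant alpha in (0, 1].
std::vector<double> exponentialMovingAverage(const std::vector<double>& x, double alpha) {
    std::vector<double> ema;
    double prev = x.empty() ? 0.0 : x[0];
    for (double v : x) {
        prev = alpha * v + (1.0 - alpha) * prev;
        ema.push_back(prev);
    }
    return ema;
}

int main() {
    std::vector<double> prices = {100, 102, 101, 105, 107, 106, 108};
    std::vector<double> diff = differences(prices);
    std::vector<double> ma   = movingAverage(prices, 3);
    std::vector<double> ema  = exponentialMovingAverage(prices, 0.3);
    std::cout << "diff points: " << diff.size()
              << "  MA points: " << ma.size()
              << "  last EMA: "  << ema.back() << "\n";
    return 0;
}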

For the network that produces a predicted High value, the average error ranged between 7.04% and 7.65% for various financial markets over the test period, including Treasury Bonds, Eurodollar, Japanese Yen, and S&P 500 futures contracts.

The Walk-Forward Methodology of Market Prediction

A methodology that is sometimes used in neural network design is walk-forward training and testing. This means that you choose an interval of time (e.g., six months) over which you train a neural network, and then test the network over a subsequent interval. You then move the training window and the testing window forward one month, for example, and repeat the exercise. You do this over the whole time period of interest to see your forecasting results. The advantage of this approach is that you maximize the network’s ability to model the recent past in making a prediction. The disadvantage is that the network forgets characteristics of the market that appeared prior to the training window.
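The windowing itself is straightforward to express in code. The following sketch assumes a six-month training window, a one-month test window, a one-month step, and about 21 trading days per month; the commented-out trainNetwork and testNetwork calls are hypothetical placeholders:

#include <cstddef>
#include <iostream>
#include <vector>

// One month of daily observations, assumed to be about 21 trading days.
const std::size_t kMonth = 21;

int main() {
    std::vector<double> series(252 * 4, 0.0);   // e.g., four years of daily data

    const std::size_t trainLen = 6 * kMonth;    // six-month training window
    const std::size_t testLen  = 1 * kMonth;    // one-month test window
    const std::size_t step     = 1 * kMonth;    // slide both windows forward one month

    for (std::size_t start = 0; start + trainLen + testLen <= series.size(); start += step) {
        // Train on [start, start + trainLen), then test on the
        // subsequent interval [start + trainLen, start + trainLen + testLen).
        // trainNetwork(series, start, trainLen);           // hypothetical
        // testNetwork(series, start + trainLen, testLen);  // hypothetical
        std::cout << "train [" << start << ", " << start + trainLen << ")"
                  << "  test [" << start + trainLen << ", "
                  << start + trainLen + testLen << ")\n";
    }
    return 0;
}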

Takashi Kimoto et al. used the walk-forward methodology in designing a trading system for Fujitsu and Nikko Securities. Like Mendelsohn, they used a hierarchical neural network composed of individual feedforward neural networks. Prediction of the TOPIX, which is the Japanese equivalent of the Dow Jones Industrial Average, was performed for 33 months from January 1987 to September 1989. Four networks in the first level of the hierarchy were trained on price data and economic data. Their results were fed to a final network that generated buy and sell signals. The trading system achieved a result 20% better than a buy-and-hold strategy for the TOPIX.

Dual Confirmation Trading System

Jeremy Konstenius discusses a trading system for the S&P 400 index using a holographic neural network, which is unlike the feedforward backpropagation neural network. The holographic network uses complex numbers for data input and output from neurons, which are mathematically more complex than feedforward network neurons. The author uses two trained networks to forecast the next day’s direction based on data for the past 10 days. Each network uses input data that is detrended by subtracting a moving average from the data. Network 1 uses detrended closing values. Network 2 uses detrended High values. If both networks agree, or confirm each other, then a trade is made; otherwise there is no trade.
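The detrending step and the confirmation rule can be sketched as follows. This is a minimal illustration, not the holographic network itself; the moving-average window length and the +1/-1/0 signal encoding are assumptions:

#include <cstddef>
#include <iostream>
#include <vector>

// Detrend a series by subtracting an n-point moving average from each point
// for which the average is defined.
std::vector<double> detrend(const std::vector<double>& x, std::size_t n) {
    std::vector<double> out;
    for (std::size_t t = n - 1; t < x.size(); ++t) {
        double sum = 0.0;
        for (std::size_t k = t + 1 - n; k <= t; ++k) sum += x[k];
        out.push_back(x[t] - sum / n);
    }
    return out;
}

// Dual confirmation: trade only when both networks forecast the same
// direction (+1 up, -1 down); otherwise stand aside (0).
int confirmedSignal(int network1Direction, int network2Direction) {
    return (network1Direction == network2Direction) ? network1Direction : 0;
}

int main() {
    std::vector<double> closes = {100, 101, 103, 102, 104, 106, 105, 107, 109, 108};
    std::vector<double> detrended = detrend(closes, 5);
    std::cout << "Detrended points: " << detrended.size() << "\n";
    std::cout << "Signal when both say up: "   << confirmedSignal(+1, +1) << "\n";
    std::cout << "Signal when they disagree: " << confirmedSignal(+1, -1) << "\n";
    return 0;
}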

Network 1 showed an accuracy of 61.9% for the five-month test period (the training period spanned two years prior to the test period), while Network 2 also showed an accuracy of 61.9%. Using the two networks together, Konstenius achieved an accuracy of 65.82%.

A Turning Point Predictor

This neural network approach is discussed by Michitaka Kosaka et al. (1991).

They discuss applying the feedforward backpropagation network to develop buy/sell signals for securities. You would gather time-series data on stock prices and look for trends in the data; changes in the direction of a trend provide the turning points, which you interpret as signals to buy or sell.

You will need to list the factors that you think have an influence on the price of the security you are studying. You also need to determine how you will measure these factors. You then formulate a nonlinear function combining the factors on your list and as many past prices of your security as you choose to include (your time-series data).

The function has the form, as Michitaka Kosaka, et al. (1991) put it,

     p(t + h) = F(x(t), x(t -1), ... , f1, f2, ... )
     where
     f1, f2, ... represent factors on your list,
     x(t) is the price of your stock at time t,
     p(t + h) is the turning point of security price at time t + h, and
     p(t + h) = -1 for a turn from downward to upward,
     p(t + h) = +1 for a turn from upward to downward,
     p(t + h) = 0 for no change and therefore no turn

Here you vary h through the values 1, 2, etc. as you move into the future one day (time period) at a time. Note that the detailed form of the function F is not given. This is for you to set up as you see fit.
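For training, you also need the p(t) values themselves, which you can derive from the historical price series. One possible labeling, since the book leaves the exact definition to you, marks a point as a turn whenever the one-step price change reverses direction:

#include <cstddef>
#include <iostream>
#include <vector>

// Label each interior point of a price series:
//  -1 : turn from downward to upward (local trough)
//  +1 : turn from upward to downward (local peak)
//   0 : no change in direction, and therefore no turn
// This is one possible definition; the form of F and the labeling are yours to choose.
std::vector<int> turningPoints(const std::vector<double>& x) {
    std::vector<int> p(x.size(), 0);
    for (std::size_t t = 1; t + 1 < x.size(); ++t) {
        if (x[t] < x[t - 1] && x[t] < x[t + 1]) p[t] = -1;  // down then up
        if (x[t] > x[t - 1] && x[t] > x[t + 1]) p[t] = +1;  // up then down
    }
    return p;
}

int main() {
    std::vector<double> prices = {10, 9, 8, 9, 11, 12, 11, 10, 11};
    std::vector<int> p = turningPoints(prices);
    for (std::size_t t = 0; t < p.size(); ++t)
        std::cout << "t=" << t << "  p=" << p[t] << "\n";
    return 0;
}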



Copyright © IDG Books Worldwide, Inc.