Development of a novel stopping technique for optimization

Date

1997-12

Publisher

Texas Tech University

Abstract

Neural networks are widely used in areas such as process control and pattern recognition. The possibility of improving the efficiency of data utilization in neural network training, and of automating the decision to stop training using a novel Steady-State Identifier (SSID) algorithm, has been investigated. One conclusion is that complete automation of the criterion for stopping training is probably beyond reach, and human judgment seems unavoidable.

However, as a beneficial outcome of this study, a technique has been developed to determine the number of neural network training repetitions needed to guarantee, with a desired level of confidence, that the training algorithm converges within a specified vicinity of the global optimum of the objective function. The concept used is the weakest-link-in-the-chain analysis.
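The abstract does not spell out the details of the analysis; one standard way to formalize "best of n independent restarts" is the following sketch, in which each run is assumed to land within the desired vicinity of the global optimum independently with some probability. The function name and the parameters `p_single` and `confidence` are illustrative, not taken from the thesis.

```python
import math

def required_repetitions(p_single, confidence):
    """Smallest n such that P(at least one of n independent runs lands
    within the desired vicinity of the global optimum) >= confidence,
    assuming each run succeeds independently with probability p_single:
        1 - (1 - p_single)**n >= confidence
    """
    if not 0 < p_single < 1 or not 0 < confidence < 1:
        raise ValueError("p_single and confidence must lie in (0, 1)")
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_single))
```

For example, if a single run reaches the vicinity with probability 0.2, then 14 repetitions suffice for 95% confidence that at least one run does.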

As another outcome, a novel approach to stopping neural network training has been developed. In this technique, a random fraction of the training set is sampled at each epoch. The error on the random fraction is tested for attainment of steady state, either with the novel Steady-State Identifier or, equivalently, by visual observation by a human operator. Training is stopped when the error on the random fraction reaches steady state. This technique is, in general, more cost-effective than cross-validation.
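The internals of the Steady-State Identifier are not given in the abstract; the following minimal sketch substitutes a simple least-squares slope test on a sliding window of recent errors, and wires it into the epoch loop described above. All names, the window size, and the slope tolerance are illustrative assumptions.

```python
import random

def steady_state(errors, window=20, slope_tol=1e-4):
    """Crude stand-in for the SSID: fit a least-squares slope to the
    last `window` error values and declare steady state when the
    slope magnitude falls below `slope_tol`."""
    if len(errors) < window:
        return False
    ys = errors[-window:]
    xs = range(window)
    n = window
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return abs(slope) < slope_tol

def train_with_random_fraction_stopping(data, train_epoch, eval_error,
                                        fraction=0.1, max_epochs=1000):
    """Each epoch: train once, then evaluate the error on a freshly
    sampled random fraction of the training set; stop when that error
    series reaches steady state."""
    errors = []
    for _ in range(max_epochs):
        train_epoch(data)
        sample = random.sample(data, max(1, int(fraction * len(data))))
        errors.append(eval_error(sample))
        if steady_state(errors):
            break
    return errors
```

Because only a fraction of the training set is evaluated per epoch, no data need be withheld from training, which is the source of the cost advantage over cross-validation.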

The developments are fully general and can also be applied to optimization problems other than neural network training.
