Abstract
This paper describes the application of nonlinear optimization methods to the learning of neural networks (NN) and the speed efficiency of four proposed improvements to the backpropagation algorithm. The problems of the backpropagation learning method are pointed out first, and the effectiveness of implementing a nonlinear optimization method as a solution to these problems is described. Two nonlinear optimization methods are selected, after inspecting several nonlinear methods from the viewpoint of NN learning, to avoid the problems of the backpropagation algorithm: the linear search method of Davies, Swann, and Campey (DSC), and the conjugate gradient method of Fletcher and Reeves. NN learning algorithms incorporating these standard methods are formulated. Moreover, the following four improvements to the nonlinear optimization methods are proposed to shorten the NN learning time: (a) fast forward calculation in the linear search at the cost of a larger amount of memory; (b) avoiding being trapped at a local minimum point in an early stage of the linear search; (c) applying a linear search method suitable for parallel processing; and (d) switching the gradient direction in the conjugate gradient method. The evaluation results show that all of the methods described here are effective in shortening the learning time.
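As a rough illustration of the standard methods named in the abstract, the following Python sketch pairs a Fletcher-Reeves conjugate gradient update with a simple bracketing linear search. It is not the paper's algorithm: the DSC procedure and the four proposed improvements are not reproduced here, the linear search below is only a crude stand-in, and all names (`fletcher_reeves_step`, `line_search`, `alpha0`, and so on) are hypothetical.

```python
import numpy as np

def line_search(phi, alpha0=1e-3, grow=2.0, n_halvings=20):
    """Crude bracketing linear search (a stand-in for the DSC procedure):
    expand the step geometrically while the loss keeps decreasing, then
    refine the step length by repeated halving."""
    a, best = 0.0, phi(0.0)
    step = alpha0
    while phi(a + step) < best:      # expansion phase
        a += step
        best = phi(a)
        step *= grow
    for _ in range(n_halvings):      # refinement phase
        step /= 2.0
        if phi(a + step) < best:
            a += step
            best = phi(a)
    return a

def fletcher_reeves_step(w, grad_fn, loss_fn, d_prev=None, g_prev=None):
    """One Fletcher-Reeves conjugate gradient step on the weight vector w."""
    g = grad_fn(w)
    if d_prev is None:
        d = -g                                   # first step: steepest descent
    else:
        beta = (g @ g) / (g_prev @ g_prev)       # Fletcher-Reeves coefficient
        d = -g + beta * d_prev                   # conjugate search direction
    alpha = line_search(lambda a: loss_fn(w + a * d))
    return w + alpha * d, d, g

# Toy usage on a quadratic loss (minimum at w = [1, 2]); a real NN loss and
# its gradient would take the place of loss_fn and grad_fn.
target = np.array([1.0, 2.0])
loss_fn = lambda w: np.sum((w - target) ** 2)
grad_fn = lambda w: 2.0 * (w - target)

w, d, g = np.zeros(2), None, None
for _ in range(10):
    w, d, g = fletcher_reeves_step(w, grad_fn, loss_fn, d, g)
```

Because the linear search replaces a fixed learning rate, each search direction is followed until the loss stops decreasing, which is the property the paper's improvements (a) to (c) aim to make cheaper.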
Field | Value
---|---
Original language | English
Pages (from-to) | 101-111
Number of pages | 11
Journal | Systems and Computers in Japan
Volume | 23
Issue number | 1
DOIs |
Publication status | Published - 1992
Externally published | Yes
All Science Journal Classification (ASJC) codes
- Theoretical Computer Science
- Information Systems
- Hardware and Architecture
- Computational Theory and Mathematics