Kuwait Journal of Science
Volume: 35, Issue: 1, 2008

PTGMVS: Parallel tangent gradient with modified variable steps for a reliable and fast MLP neural networks learning

Authors: Payman Moallem*, S. Amirhassan Monadjemi**, and Behzad Mirzaeian*

* Electrical Engineering Department and ** Computer Engineering Department

Faculty of Engineering, University of Isfahan, Hezarjarib Avenue, Isfahan, 81746-73441, Iran. Email: {p_moallem, monadjemi, mirzaeian}@eng.ui.ac.ir

 

ABSTRACT

 

In this paper, a reliable and fast learning algorithm for MLP neural networks, based on the parallel tangent gradient and proposed variable learning rates, is introduced. In typical gradient-based learning algorithms, the momentum term usually improves the convergence rate and reduces the zigzagging phenomenon; however, it also sometimes causes the convergence rate to decrease. The parallel tangent, which can be used instead of the momentum to improve convergence, is as simple as the momentum from the implementation point of view. This method tries to overcome the inefficient zigzagging of conventional backpropagation by deflecting the gradient through an acceleration phase. In the proposed algorithm, we use two adaptive learning rates: η for the gradient search direction, and μ for the accelerating direction through the parallel tangent. In the proposed learning rate adaptation algorithm, each learning rate is adapted locally to the cost function landscape and to the previous learning rate. We tested the proposed algorithm on optimization of the Rosenbrock function. In addition, various artificial training sets, such as parity generators and encoders, and real training/testing data sets, such as IRIS and Wisconsin breast cancer, were trained on MLP neural networks using the proposed learning scheme. The results were compared with those of the dynamic self-adaptation of gradient learning rate and momentum, and of the parallel tangent with variable learning rates algorithms. Experimental results for optimizing the Rosenbrock function showed that the convergence speed of the proposed algorithm is faster. Furthermore, in the MLP tests, the experimental results suggested that the average number of epochs of the proposed method is decreased by around 40%. Meanwhile, our proposed algorithm also showed around 50% higher success rates, thus avoiding local minima more easily.
The generality of the proposed method is similar to that of the other compared gradient-based learning algorithms.
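The parallel tangent (partan) acceleration described above can be sketched on the Rosenbrock test function mentioned in the abstract. This is a minimal illustration only, not the authors' PTGMVS algorithm: it alternates a plain gradient step with a deflection along the line through the point two steps back, and it uses fixed learning rates η and μ (chosen here as assumptions), whereas the paper adapts both rates locally to the cost landscape.

```python
import numpy as np

def rosenbrock(x):
    """Classic 2-D Rosenbrock banana function."""
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def rosenbrock_grad(x):
    """Analytic gradient of the Rosenbrock function."""
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
        200.0 * (x[1] - x[0] ** 2),
    ])

def partan_descent(grad, x0, eta=1e-3, mu=0.2, iters=2000):
    """Gradient descent with a parallel-tangent (partan) acceleration phase.

    Each iteration takes a gradient step (rate eta), then deflects along
    the line through the point two steps back (rate mu), which damps the
    zigzagging of steepest descent. Fixed eta and mu are an assumption
    of this sketch; the paper adapts both rates during learning.
    """
    x_two_back = np.asarray(x0, dtype=float)
    x = x_two_back - eta * grad(x_two_back)   # initial gradient step
    for _ in range(iters):
        y = x - eta * grad(x)                 # gradient phase
        x_new = y + mu * (y - x_two_back)     # acceleration (parallel tangent)
        x_two_back, x = x, x_new
    return x

x0 = np.array([-1.2, 1.0])                    # standard Rosenbrock start point
x_final = partan_descent(rosenbrock_grad, x0)
print("f(x0) =", rosenbrock(x0), " f(x_final) =", rosenbrock(x_final))
```

Note that the acceleration direction `y - x_two_back` spans two consecutive gradient steps, so the deflection follows the valley floor rather than bouncing between its walls, which is the zigzag-suppressing effect the abstract refers to.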

 

Keywords: adaptive learning rates; backpropagation; parallel tangent; zigzagging.
