Please use this identifier to cite or link to this item:
http://ir.futminna.edu.ng:8080/jspui/handle/123456789/6768
Title: | A Comparative Analysis of Gradient Descent-Based Optimization Algorithms on Convolutional Neural Networks |
Authors: | Dogo, E. M.; Afolabi, O. J.; Nwulu, N. I.; Twala, B.; Aigbavboa, C. O. |
Keywords: | Optimization; Training; Deep learning; Neural networks; Stochastic processes; Convergence; Classification algorithms |
Issue Date: | 2018 |
Publisher: | IEEE |
Abstract: | In this paper, we perform a comparative evaluation of the seven most commonly used first-order stochastic gradient-based optimization techniques in a simple Convolutional Neural Network (ConvNet) architectural setup. The investigated techniques are Stochastic Gradient Descent (SGD) in its vanilla form (vSGD), with momentum (SGDm), and with momentum and Nesterov acceleration (SGDm+n); Root Mean Square Propagation (RMSProp); Adaptive Moment Estimation (Adam); Adaptive Gradient (AdaGrad); Adaptive Delta (AdaDelta); Adaptive Moment Estimation Extension based on the infinity norm (Adamax); and Nesterov-accelerated Adaptive Moment Estimation (Nadam). We trained the model and evaluated the optimization techniques in terms of convergence speed, accuracy, and loss function using three randomly selected, publicly available image classification datasets. The overall experimental results show that Nadam achieved better performance across the three datasets than the other optimization techniques, while AdaDelta performed the worst. |
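As a rough illustration of the kind of comparison described in the abstract, the sketch below trains the same simple ConvNet once per optimizer and records its accuracy and loss curves. It is a minimal Keras/TensorFlow example, not the authors' code: the architecture, MNIST stand-in dataset, epoch count, and default hyperparameters are all assumptions for illustration only.

```python
# Hypothetical sketch of an optimizer comparison on a small ConvNet.
# Dataset, architecture, and hyperparameters are illustrative assumptions,
# not the experimental setup used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_convnet(input_shape=(28, 28, 1), num_classes=10):
    """A deliberately simple ConvNet, standing in for the paper's architecture."""
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# MNIST is used here only as a convenient public stand-in dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Optimizer families named in the abstract (Keras defaults assumed).
optimizer_factories = {
    "vSGD": lambda: optimizers.SGD(),
    "SGDm": lambda: optimizers.SGD(momentum=0.9),
    "SGDm+n": lambda: optimizers.SGD(momentum=0.9, nesterov=True),
    "RMSProp": lambda: optimizers.RMSprop(),
    "Adam": lambda: optimizers.Adam(),
    "AdaGrad": lambda: optimizers.Adagrad(),
    "AdaDelta": lambda: optimizers.Adadelta(),
    "Adamax": lambda: optimizers.Adamax(),
    "Nadam": lambda: optimizers.Nadam(),
}

results = {}
for name, make_opt in optimizer_factories.items():
    model = build_convnet()
    model.compile(optimizer=make_opt(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=5, batch_size=128,
                        validation_data=(x_test, y_test), verbose=0)
    # The per-epoch curves indicate convergence speed; final values summarize accuracy/loss.
    results[name] = history.history

for name, hist in results.items():
    print(f"{name}: val_accuracy={hist['val_accuracy'][-1]:.4f}, "
          f"val_loss={hist['val_loss'][-1]:.4f}")
```

Repeating this loop over each of the three datasets and comparing the recorded curves mirrors the evaluation criteria stated in the abstract (convergence speed, accuracy, and loss).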
URI: | http://repository.futminna.edu.ng:8080/jspui/handle/123456789/6768 |
Appears in Collections: | Computer Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
A Comparative Analysis of Gradient Descent-Based Optimization Algorithms_CNN.pdf | | 58.48 kB | Adobe PDF