Abstract
The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and training it requires an optimization algorithm. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can also be used to avoid local minima. Other techniques such as the artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well at finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it too can become stuck in local minima. We propose to overcome the shortcomings of these approaches by hybridizing ABC/PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
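The hybrid scheme the abstract describes, a population-based global search followed by a local refinement, can be sketched as below. This is a minimal illustration, not the paper's implementation: the objective is a toy quadratic standing in for the RNN's training error, the global phase is a simplified ABC-style greedy search, and plain numerical-gradient descent stands in for the SQP step. All function names and parameters here are hypothetical.

```python
import random

def loss(w):
    # Toy quadratic "training error" standing in for the RNN's MSE;
    # its minimum is at w = (1, -2).
    return (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2

def abc_like_search(loss, dim, n_food=20, iters=100, rng=None):
    # Simplified bee-colony-style global phase: each "food source" is a
    # candidate weight vector; candidates are perturbed toward or away from
    # a random partner and improvements are kept (greedy selection), which
    # explores broadly instead of committing to one basin.
    rng = rng or random.Random(0)
    foods = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_food)]
    for _ in range(iters):
        for i, w in enumerate(foods):
            k = rng.randrange(dim)
            partner = foods[rng.randrange(n_food)]
            trial = list(w)
            trial[k] += rng.uniform(-1.0, 1.0) * (w[k] - partner[k])
            if loss(trial) < loss(w):
                foods[i] = trial
    return min(foods, key=loss)

def local_refine(loss, w, lr=0.1, steps=200, h=1e-6):
    # Stand-in for the SQP phase: forward-difference gradient descent that
    # polishes the global phase's best candidate to a nearby local minimum.
    w = list(w)
    for _ in range(steps):
        grad = []
        for j in range(len(w)):
            wp = list(w)
            wp[j] += h
            grad.append((loss(wp) - loss(w)) / h)
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

rng = random.Random(42)
coarse = abc_like_search(loss, dim=2, rng=rng)   # global exploration
best = local_refine(loss, coarse)                # local polishing
```

The division of labor mirrors the paper's motivation: the slow-but-global phase supplies a starting point near the right basin, and the fast local method finishes the convergence that the population search alone would reach only slowly.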
Original language | English |
---|---|
Pages (from-to) | 436-456 |
Number of pages | 21 |
Journal | Probability in the Engineering and Informational Sciences |
Volume | 31 |
Issue number | 4 |
Early online date | 22 May 2017 |
DOIs | |
Publication status | Published - Oct 2017 |
Keywords
- random neural network
- heuristics
- communication systems
- artificial bee colony
- learning algorithms
- particle swarm optimisation
- sequential quadratic programming
ASJC Scopus subject areas
- Industrial and Manufacturing Engineering
- Statistics and Probability
- Statistics, Probability and Uncertainty
- Management Science and Operations Research