Random neural network learning heuristics

Abbas Javed, Hadi Larijani, Ali Ahmadinia, Rohinton Emmanuel

    Research output: Contribution to journal › Article › peer-review

    5 Citations (Scopus)
    107 Downloads (Pure)


    The random neural network (RNN) is a probabilistic, queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution also perform well at finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it can also get stuck in local minima. We propose to overcome the shortcomings of these approaches by hybridizing ABC and PSO with SQP. The resulting algorithms are shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
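    The two-stage idea in the abstract — a population-based global search followed by local refinement — can be sketched as follows. This is an illustrative toy only, not the paper's method: PSO stands in for the ABC/PSO global phase, a finite-difference gradient descent stands in for the SQP local solver, and the multimodal Rastrigin function stands in for the RNN training loss.

    ```python
    import math
    import random

    random.seed(0)

    def loss(x):
        # Rastrigin function: a multimodal stand-in for the RNN training
        # loss, with many local minima and a global minimum of 0 at the origin.
        return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

    def pso(obj, dim=2, swarm=30, iters=200, lo=-5.12, hi=5.12):
        # Global phase (PSO): each particle blends inertia with attraction
        # toward its personal best and the swarm's global best.
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
        vel = [[0.0] * dim for _ in range(swarm)]
        pbest = [p[:] for p in pos]
        pbest_f = [obj(p) for p in pos]
        g = min(range(swarm), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):
            for i in range(swarm):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                f = obj(pos[i])
                if f < pbest_f[i]:
                    pbest[i], pbest_f[i] = pos[i][:], f
                    if f < gbest_f:
                        gbest, gbest_f = pos[i][:], f
        return gbest, gbest_f

    def local_refine(obj, x, steps=500, lr=0.002, eps=1e-6):
        # Local phase: forward-difference gradient descent, a lightweight
        # stand-in for SQP (a real SQP solver iterates quadratic subproblems).
        x = x[:]
        for _ in range(steps):
            grad = []
            for d in range(len(x)):
                xp = x[:]
                xp[d] += eps
                grad.append((obj(xp) - obj(x)) / eps)
            x = [xi - lr * gi for xi, gi in zip(x, grad)]
        return x, obj(x)

    x0, f0 = pso(loss)           # heuristic search escapes local minima
    x1, f1 = local_refine(loss, x0)  # local solver polishes the result
    print(f"global phase loss={f0:.4f}, after refinement loss={f1:.6f}")
    ```

    The split mirrors the paper's motivation: the swarm phase avoids the local minima that trap gradient-based training, while the local phase supplies the fast final convergence the heuristics lack.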
    Original language: English
    Pages (from-to): 436-456
    Number of pages: 21
    Journal: Probability in the Engineering and Informational Sciences
    Issue number: 4
    Early online date: 22 May 2017
    Publication status: Published - Oct 2017


    • random neural network
    • heuristics
    • communication systems
    • artificial bee colony
    • learning algorithms
    • particle swarm optimisation
    • sequential quadratic programming


