Assessing hyper parameter optimization and speedup for convolutional neural networks

Sajid Nazir, Shushma Patel, Dilip Patel

Research output: Contribution to journal › Article › peer-review



The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks comprising many layers. Convolutional neural networks (CNNs) are one such architecture, providing further opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyper parameters need to be specified, which is challenging and complex due to their large number. A substantial amount of computational power and processing time is required to determine the optimal hyper parameters that define a model yielding good results. This article provides a survey of hyper parameter search and optimization methods for CNN architectures.
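One widely used baseline among the search methods the article surveys is random search. The sketch below illustrates the idea with a purely illustrative search space and a mock objective (the hyperparameter names, value ranges, and scoring function are assumptions for demonstration, not taken from the article; in practice the objective would train a CNN and return its validation accuracy):

```python
import random

# Hypothetical CNN hyperparameter search space (illustrative only).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
    "num_filters": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
}

def sample_config(space, rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(values) for name, values in space.items()}

def mock_validation_accuracy(config):
    """Stand-in for the expensive step: training a CNN with `config`
    and measuring validation accuracy. Entirely synthetic scoring."""
    score = 0.5
    if config["learning_rate"] == 1e-3:
        score += 0.10
    if config["num_filters"] >= 32:
        score += 0.05
    score -= 0.05 * config["dropout"]
    return score

def random_search(space, n_trials=20, seed=0):
    """Evaluate n_trials random configurations; keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample_config(space, rng)
        score = mock_validation_accuracy(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search(SEARCH_SPACE)
    print("best config:", config)
    print("best score:", round(score, 3))
```

The computational cost the abstract highlights comes from the fact that each trial requires a full (or partial) training run, which is why speedup techniques and smarter search strategies than exhaustive grid search are of interest.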
Original language: English
Article number: 1
Pages (from-to): 1-17
Number of pages: 17
Journal: International Journal of Artificial Intelligence and Machine Learning (IJAIML)
Issue number: 2
Early online date: 7 Jul 2020
Publication status: Published - Dec 2020


  • Deep Learning
  • Artificial Intelligence
  • Hidden Layers
  • Machine Learning
  • Convolution
  • Semantics


