Abstract

There exist numerous scientific contributions to the design of deep learning networks. However, selecting an architecture that suits a given business problem under constraints such as memory and inference time requirements can be cumbersome. We reflect on the evolution of state-of-the-art convolutional neural network (CNN) architectures for the case of image classification. We compare architectures with respect to classification accuracy, model size, and inference time to discuss the design choices behind CNN architectures. To maintain scientific comprehensibility, the established ILSVRC benchmark is used as the basis for model selection and benchmark data. The quantitative comparison shows that while model size and required inference time correlate with accuracy across all architectures, there are major trade-offs between these factors. The qualitative analysis further shows that published models consistently build on previous research and adopt improved components in either evolutionary or revolutionary ways. Finally, we discuss improvements in design and results over the evolution of CNN architectures and derive practical implications for designing deep learning networks.
