
An Enhanced Hybrid MobileNet

EasyChair Preprint no. 460

5 pages · Date: August 26, 2018


Complicated and deep neural network models can achieve high accuracy for image recognition. However, they require a huge number of computations and model parameters, which makes them unsuitable for mobile and embedded devices. MobileNet was therefore proposed, which can reduce the number of parameters and the computational cost dramatically. The main idea of MobileNet is to use a depthwise separable convolution. Two hyper-parameters, a width multiplier and a resolution multiplier, are used to trade off accuracy against latency. In this paper, we propose a new architecture to improve MobileNet. Instead of using the resolution multiplier, we use a depth multiplier and combine it with either fractional max pooling or max pooling. Experimental results on the CIFAR dataset show that the proposed architecture can reduce the computational cost and increase the accuracy simultaneously.
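The computational saving claimed for the depthwise separable convolution can be illustrated with the standard cost formulas from the MobileNet paper: a regular convolution with a D_K×D_K kernel, M input channels, N output channels, and a D_F×D_F feature map costs D_K·D_K·M·N·D_F·D_F multiply-adds, while the depthwise separable version costs D_K·D_K·M·D_F·D_F + M·N·D_F·D_F, a reduction factor of 1/N + 1/D_K². A minimal sketch (the layer sizes below are illustrative, not taken from the preprint):

```python
def conv_cost(dk: int, m: int, n: int, df: int) -> int:
    """Multiply-adds for a standard dk x dk convolution:
    dk * dk * M * N * DF * DF."""
    return dk * dk * m * n * df * df

def separable_cost(dk: int, m: int, n: int, df: int) -> int:
    """Multiply-adds for a depthwise separable convolution:
    a dk x dk depthwise pass plus a 1x1 pointwise pass."""
    depthwise = dk * dk * m * df * df   # one dk x dk filter per channel
    pointwise = m * n * df * df         # 1x1 conv combining channels
    return depthwise + pointwise

# Example layer: 3x3 kernel, 64 -> 128 channels, 56x56 feature map.
standard = conv_cost(3, 64, 128, 56)
separable = separable_cost(3, 64, 128, 56)
ratio = separable / standard  # approx 1/N + 1/dk**2 = 1/128 + 1/9
```

For this layer the separable form needs roughly 8.4x fewer multiply-adds, which is the source of MobileNet's efficiency on mobile hardware.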

Keyphrases: deep learning, image classifier, image recognition, MobileNet, neural networks

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:460,
  author = {Hong-Yen Chen and Chung-Yen Su},
  title = {An Enhanced Hybrid MobileNet},
  howpublished = {EasyChair Preprint no. 460},
  doi = {10.29007/xg3f},
  year = {EasyChair, 2018}}