
Loss Functions: Evaluating Their Performance and the Need For An Adaptive Approach

EasyChair Preprint 6964

5 pages · Date: October 31, 2021

Abstract

Mainstream machine learning is dominated by semi-supervised learning. Developments in this field have permitted scholars to harness large amounts of unlabeled data alongside typically smaller sets of labeled data. This study focuses on the need for an adaptive loss function that automatically adjusts itself as the model is trained on various datasets. Once semantic segmentation is embedded in the architecture of a model, deeper layers are needed to extract features from images, causing the gradient to become too small to train the network during learning, particularly when the pixelwise cross-entropy loss function is used in high-dimensional settings. With a large number of classes, larger objects often overlap with smaller objects, causing inaccurate detection. The need is to overcome the impact of superimposed objects on classification accuracy, which arises from model confusion owing to the large number of classes. Our research addresses the imbalanced-dataset problem in neural networks by experimenting with various loss functions. Experiments conducted on two different datasets show that different loss functions produce varying results. We present results on the Indian Driving Dataset (IDD) and Cityscapes.
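Among the loss functions the abstract names, focal loss is the standard remedy for the class-imbalance problem it describes: it down-weights easy, well-classified pixels so the gradient is dominated by hard examples. As a minimal sketch (not the authors' implementation), a per-pixel binary focal loss in NumPy might look like this; the function name, `alpha`/`gamma` defaults, and binary setting are illustrative assumptions:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-8):
    """Per-pixel binary focal loss (illustrative sketch).

    probs   : predicted foreground probabilities, shape (H, W)
    targets : ground-truth labels in {0, 1}, shape (H, W)
    gamma   : focusing parameter; gamma = 0 recovers (alpha-weighted)
              cross-entropy
    alpha   : class-balancing weight for the foreground class
    """
    # p_t is the probability assigned to the true class of each pixel
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma down-weights easy, confidently classified pixels
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps))
```

Setting `gamma=0` reduces the expression to weighted cross-entropy, which is one way to see focal loss as a drop-in replacement in a pixelwise segmentation objective.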

Keyphrases: combo loss, cross-entropy, focal loss, image processing, loss function, neural networks, semantic segmentation

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:6964,
  author    = {Haroon Haider Khan and Majid Iqbal Khan},
  title     = {Loss Functions: Evaluating Their Performance and the Need For An Adaptive Approach},
  howpublished = {EasyChair Preprint 6964},
  year      = {EasyChair, 2021}}