
Study of Overfitting by Machine Learning Methods Using Generalization Equations

EasyChair Preprint 10528

8 pages · Date: July 10, 2023

Abstract

The training error of Machine Learning (ML) methods has been used extensively for performance assessment, and its low values are often cited as the main justification for complex methods such as estimator fusion, ensembles, and hyperparameter tuning. We present two practical cases where independent tests indicate that a low training error is more a reflection of overfitting than of generalization ability. We derive a generic form of the generalization equations that separates the training error terms of ML methods from their epistemic terms, which correspond to approximation and learnability properties. This form provides a framework for accounting for both terms separately to ensure high overall generalization performance. For regression estimation tasks, we derive conditions for the performance enhancements achieved by hyperparameter tuning, and by fusion and ensemble methods over their constituent methods. We present experimental measurements and ML estimates that illustrate the analytical results for throughput profile estimation of a data transport infrastructure.
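For context, the separation described in the abstract parallels a standard decomposition from statistical learning theory; the notation below is an illustrative assumption, not necessarily the paper's. Writing R(f) for the expected (generalization) error, \hat{R}_n(f) for the training error on n samples, \mathcal{F} for the hypothesis class searched by the ML method, and R^{*} for the best achievable error:

    R(\hat{f}) \le \underbrace{\hat{R}_n(\hat{f})}_{\text{training error term}}
                 + \underbrace{\sup_{f \in \mathcal{F}} \bigl\lvert R(f) - \hat{R}_n(f) \bigr\rvert}_{\text{learnability term}},
    \qquad
    \underbrace{\inf_{f \in \mathcal{F}} R(f) - R^{*}}_{\text{approximation term}}.

A small training error controls only the first term; the epistemic (learnability and approximation) terms can remain large, which is how a low training error can coexist with poor generalization.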

Keyphrases: hyperparameter tuning, regression, fusion and ensemble, generalization bounds, machine learning, overfitting, throughput profile
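As a self-contained illustration of the overfitting phenomenon described in the abstract (a minimal sketch on synthetic data, not the paper's throughput-profile experiment), the following Python snippet fits polynomial regressors of two capacities and compares the training error against the error on an independent test set:

import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # noisy samples of a smooth target; stands in for any regression task
    x = rng.uniform(-1.0, 1.0, n)
    y = np.sin(np.pi * x) + rng.normal(0.0, 0.3, n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)    # independent test set

for degree in (3, 12):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # the high-degree fit drives the training MSE down while the
    # independent-test MSE grows: low training error via overfitting
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

The training error alone would favor the high-capacity model; the independent test exposes it, mirroring the role of independent tests in the paper.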

BibTeX entry
BibTeX does not have an entry type for preprints, so the following is a workaround that produces the correct reference:
@booklet{EasyChair:10528,
  author       = {Nageswara Rao},
  title        = {Study of Overfitting by Machine Learning Methods Using Generalization Equations},
  howpublished = {EasyChair Preprint 10528},
  year         = {EasyChair, 2023}
}