
Developing Optimized Large Language Models on Limited Compute Resources

EasyChair Preprint 15800

2 pages · Date: February 4, 2025

Abstract

Large language models (LLMs) have demonstrated remarkable performance across a wide range of natural language tasks. However, the computational resources required to train these models at scale remain a significant challenge, particularly in resource-constrained environments. This paper proposes a holistic optimization framework that combines data-centric techniques, compute efficiency improvements, and architectural enhancements to enable the development of high-quality LLMs on limited hardware. We outline our methodology and a proposed experimental evaluation plan. Our preliminary analysis suggests that such an approach could yield up to a 30% reduction in training compute while maintaining competitive downstream task performance. This framework aims to democratize LLM development by lowering computational barriers and fostering more sustainable scaling strategies.

Keyphrases: data optimization, dynamic inference, mixture of experts, compute efficiency, large language models
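As an illustration of the kind of architectural enhancement the keyphrases point to, the sketch below shows a top-k gated Mixture-of-Experts feed-forward layer in PyTorch. This is not the authors' implementation; the class name, dimensions, number of experts, and top_k value are all assumptions chosen for the example, and it only demonstrates the general routing idea by which each token activates a small subset of experts rather than the full parameter set.

# Minimal sketch of a top-k gated Mixture-of-Experts layer (illustrative only,
# not the preprint's implementation; all sizes are assumed values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = self.gate(x)                            # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # route each token to its top_k experts
        weights = F.softmax(weights, dim=-1)             # normalize the selected experts' weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)                # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

Because each token passes through only top_k of the num_experts expert networks, per-token compute stays close to that of a dense feed-forward layer while total parameter count grows with the number of experts, which is the basic trade-off that makes such architectures attractive under limited compute budgets.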

BibTeX entry
@booklet{EasyChair:15800,
  author       = {S Kasinadhsarma},
  title        = {Developing Optimized Large Language Models on Limited Compute Resources},
  howpublished = {EasyChair Preprint 15800},
  year         = {2025}}